\section{Introduction} With the explosive growth of Internet of Things (IoT) devices, wireless communication networks (WCNs) are increasingly facing the challenge of allocating finite transmit power and bandwidth to maximize system utility~\cite{xu2021survey}. Accordingly, advanced radio resource management schemes need to be designed to serve numerous wireless access devices. Massive multiple-input multiple-output (MIMO) and multiuser transmission are two key enablers for supporting larger-scale connectivity in future WCNs~\cite{he2021survey}. Therefore, a number of works have investigated beamforming (BF) design~\cite{he2015energy}, power allocation (PA)~\cite{yu2020power}, and user scheduling (US)~\cite{ammar2021distributed}, among others. Generally speaking, US and BF (US-BF) design are two fundamental problems in multiuser WCNs, \textred{which are implemented at the media access control layer \cite{dimic2005on} and the physical layer \cite{zhang2009networked}, respectively. Unfortunately, these two issues are tightly coupled, which makes the joint problem difficult to solve.} Therefore, they are generally investigated separately in the existing literature, such as BF design with a given user set~\cite{shi2011iteratively} or US optimization combined with PA (US-PA)~\cite{dong2019energy}. \textred{For example, the authors of~\cite{yu2007transmitter} and~\cite{huh2012network} only consider the BF problem, where the uplink-downlink duality theory is adopted to tackle the non-convex transceiver design problem. The authors of \cite{huang2020hybrid} and \cite{huang2021multi-hop} also solve the BF problem for RIS-empowered Terahertz communications with deep reinforcement learning methods. To further improve the performance of WCNs, cross-layer design is becoming increasingly popular~\cite{fu2014a}. The authors of~\cite{yoo2006on} investigate the US-BF problem by sequentially performing the semi-orthogonal user selection (SUS) algorithm for US optimization and the zero-forcing BF (ZFBF) algorithm for BF design. The authors of \cite{chen2017low} propose a low-complexity US-BF scheme for 5G MIMO non-orthogonal multiple-access systems, but the non-convex problem is decomposed into two subproblems, namely a BF scheme and a greedy min-power US scheme, instead of being solved jointly. The authors of~\cite{zhang2017sum-rate} also discuss cross-layer optimization with statistical channel information for the massive MIMO scenario, tackling US and BF individually.} Meanwhile, existing research on coordinated multiuser communication is mainly based on the conventional Shannon theory~\cite{shannon1948mathematical}, which assumes that an extremely low decoding error probability can be achieved with sufficiently long blocklength transmission. However, in ultra-reliable low latency communication (URLLC) scenarios, such as factory automation and remote surgery, this long-blocklength condition may not be satisfied~\cite{nasir2020resource}. To take the impact of finite blocklength transmission into account, the achievable rate has been expressed as a complicated function of the received signal-to-noise ratio (SNR), the blocklength, and the decoding error probability, and it is smaller than the Shannon rate~\cite{polyanskiy2010channel}. Consequently, the optimization problem in scenarios with finite blocklength transmission is more challenging~\cite{he2020beamforming}.
In order to solve the problem of interest, the algorithms designed in the aforementioned references are mainly based on convex optimization theory~\cite{bertsekas2003convex}. However, such model-driven optimization algorithms usually suffer from a high computational complexity, which may restrict their practical applicability in WCNs. Recently, deep neural networks (DNNs) have emerged as an effective tool to solve such challenging radio resource management problems in WCNs~\cite{she2021tutorial}. Different from model-driven optimization algorithms running independently for each instance, DNNs are trained with large amounts of data to learn the mapping between radio resource optimization policies and WCN environments. Hence, the main computational cost of DNNs is shifted into the offline training stage, and only simple mathematical operations are needed in the online optimization stage. The work in~\cite{li2021multicell} shows that DNNs can achieve competitive performance with lower computational complexity than existing model-driven optimization algorithms. A similar conclusion has been demonstrated in~\cite{xia2020deep}, where DNNs are used for BF design of multiuser multiple-input single-output (MISO) downlink systems, but the size of the considered problem is rather small. \textred{The authors of \cite{kaushik2021} regard resource allocation problems in wireless communications as generalized assignment problems (GAP), and propose a novel deep unsupervised learning approach to solve the GAP in a time-efficient manner. The authors of \cite{liang2020towards} focus on solving the PA problem by ensembling several deep neural networks. This is also an unsupervised approach and achieves competitive results compared with conventional methods. However, the core network is specifically designed for the power control problem and cannot be extended to US.} In addition, these DNN-based architectures~\cite{liang2020towards,kaushik2021,li2021multicell,xia2020deep} are mainly inherited from image processing tasks and are not tailored to radio resource management problems; in particular, they fail to exploit the prior topology knowledge of WCNs. The numerical results obtained in~\cite{chen2021gnn} illustrate that the performance of DNNs degrades dramatically with increasing WCN size. To achieve better scalability of learning-based radio resource management, a potential approach is to incorporate the network topology into the learning of neural networks, namely graph neural networks (GNNs)~\cite{he2021overview}. For instance, the authors of~\cite{cui2019spatial} combine DNNs with the geographic locations of transceivers, and thereby propose a spatial convolution model for wireless link scheduling problems with hundreds of nodes. The authors of~\cite{eisen2020optimal} propose a random edge graph neural network (REGNN) for PA optimization on graphs formed by the interference links within WCNs. The work in~\cite{shen2019graph} demonstrates that GNNs are insensitive to the permutation of data, such as channel state information (CSI). Further, this work was extended in~\cite{shen2020graph} to solve both PA and BF problems via message passing graph neural networks (MPGNNs), which have the ability to generalize to large-scale problems while enjoying high computational efficiency. However, the designs proposed in~\cite{shen2019graph,shen2020graph} only investigate continuous optimization problems with simple constraints.
Discrete optimization problems with complicated constraints remain an open issue and need to be further considered. Fortunately, the application of primal-dual learning in~\cite{he2021gblinks} provides an effective way to solve such complicated constrained radio resource management problems. \textred{Based on the above considerations, this work studies the joint US-BF optimization problem in the multiuser MISO downlink system. Unlike conventional methods, the US-BF design is achieved simultaneously by solving a single optimization problem instead of separate ones. Moreover, to improve computational efficiency and exploit historical network data, we propose a GNN-based Joint US-BF (J-USBF) learning algorithm. The main contributions and advantages of this work are summarized as follows:} \begin{itemize} \item \textred{A joint US-BF optimization problem for multiuser MISO downlink systems is formulated with the goal of maximizing the number of scheduled users subject to user rate and base station (BS) power constraints. To solve this mixed discrete-continuous optimization problem, a successive convex approximation based US-BF (SCA-USBF) algorithm is first designed to pave the way for the J-USBF algorithm.} \item \textred{A J-USBF learning algorithm is developed by combining the joint user scheduling and power allocation network (JEEPON) model with the BF analytical solution. In particular, we first formulate the investigated problem as a graph optimization problem through wireless graph representation, then design a GNN-based JEEPON model to learn the US-PA strategy on graphs, and utilize the BF analytical solution to achieve the joint US-BF design. Meanwhile, a primal-dual learning framework is developed to train JEEPON in an unsupervised manner.} \item Finally, numerical experiments are conducted to validate the effectiveness of the proposed algorithms. Compared with the SCA-USBF algorithm, the J-USBF learning algorithm achieves comparable performance with higher computational efficiency, and enjoys generalizability in dynamic WCN scenarios. \end{itemize} The remainder of this paper is organized as follows. Section~\rmnum{2} introduces the investigated radio resource management problem in the multiuser MISO downlink system. Section~\rmnum{3} proposes SCA-USBF for solving the investigated problem. Section~\rmnum{4} designs JEEPON and provides a primal-dual learning framework to train it in an unsupervised manner. Numerical results are presented in Section~\rmnum{5}. Finally, conclusions are drawn in Section~\rmnum{6}. \textbf{\textcolor{black}{$\mathbf{\mathit{Notations}}$}}: Throughout this paper, lowercase and uppercase letters (such as $a$ and $A$) represent scalars, while the bold counterparts $\mathbf{a}$ and $\mathbf{A}$ represent vectors and matrices, respectively. $\left|\cdot\right|$ indicates the absolute value of a complex scalar or the cardinality of a set. $\left\|\cdot\right\|_{0}$, $\left\|\cdot\right\|_{1}$, and $\left\|\cdot\right\|_{2}$ denote the $\ell_{0}$-norm, $\ell_{1}$-norm, and $\ell_{2}$-norm, respectively. The superscripts $(\cdot)^{T}$, $(\cdot)^{H}$, and $(\cdot)^{-1}$ denote the transpose, conjugate transpose, and inverse of a matrix, respectively. $\mathbb{R}$, $\mathbb{R}^{+}$, and $\mathbb{C}$ are the sets of real, non-negative real, and complex numbers, respectively. Finally, $\mathbb{R}^{M\times1}$ and $\mathbb{C}^{M\times1}$ represent $M$-dimensional real and complex column vectors, respectively.
\section{System Model and Problem Formulation} In this work, we consider a multiuser MISO downlink system taking reliability and delivery latency into account, where a BS with $N$ antennas serves $K$ single-antenna users\footnote{\textred{Due to the complexity of the discussed problem, the single-cell scenario is considered in this paper. More complex multi-cell scenarios, where inter-cell interference must be considered, will be addressed in future work.}}. For simplicity, let $\mathcal{K}=\{1,2,\cdots,K\}$ and $\mathcal{S}=\{1,2,\cdots,K^{\ast}\}\subseteq\mathcal{K}$ be the sets of candidate users and scheduled users, respectively, where $K^{\ast}\leq{K}$. The channel between user $k$ and the BS is denoted as $\mathbf{h}_{k}\in\mathbb{C}^{N\times1}$. Let $p_{k}\geq{0}$ and $\mathbf{w}_{k}\in\mathbb{C}^{N\times1}$ represent the transmit power and unit-norm BF vector used by the BS for user $k$, respectively. Thus, the received signal at user $k$ is given by \begin{equation}\label{Eq.(01)} y_{k}=\sum\limits_{l\in\mathcal{S}}\sqrt{p_{l}}\mathbf{h}_{k}^{H}\mathbf{w}_{l}s_{l}+n_{k}, \end{equation} where $s_{l}$ is the normalized data symbol intended for the $l$-th user, and $n_{k}\sim\mathcal{CN}(0,\sigma_{k}^{2})$ denotes the additive white Gaussian noise at user $k$ with zero mean and variance $\sigma_{k}^{2}$. For notational convenience, we define $\overline{\mathbf{h}}_{k}=\frac{\mathbf{h}_{k}}{\sigma_{k}}$ and the downlink signal-to-interference-plus-noise ratio (SINR) of user $k$ as \begin{equation}\label{Eq.(02)} \overrightarrow{\gamma}_{k}=\frac{p_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}} {\sum\limits_{l\neq k,l\in\mathcal{S}}p_{l}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{l}\right|^{2}+1}. \end{equation} To satisfy stringent delay requirements, the finite blocklength transmission regime is adopted in this paper. The results in~\cite{polyanskiy2010channel} show that the achievable rate is not only a function of the received SNR (or SINR), but also of the decoding error probability $\epsilon$ and the finite transmission blocklength $n$. Accordingly, the achievable rate of user $k$ with finite blocklength transmission is given by\footnote{The proposed algorithms are also suitable for solving similar optimization problems where the user rate is based on the Shannon capacity formula.} \begin{equation}\label{Eq.(03)} R(\overrightarrow{\gamma}_{k})=C(\overrightarrow{\gamma}_{k})-\vartheta\sqrt{V(\overrightarrow{\gamma}_{k})}, \end{equation} where $C(\overrightarrow{\gamma}_{k})=\ln(1+\overrightarrow{\gamma}_{k})$ denotes the Shannon capacity, $\vartheta=\frac{Q^{-1}(\epsilon)}{\sqrt{n}}$, $Q^{-1}(\cdot)$ is the inverse of the Gaussian Q-function $Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}\mathrm{exp}(-\frac{t^{2}}{2})dt$, and $V(\overrightarrow{\gamma}_{k})$ denotes the channel dispersion, which is defined as \begin{equation}\label{Eq.(04)} V(\overrightarrow{\gamma}_{k})=1-\frac{1}{(1+\overrightarrow{\gamma}_{k})^{2}}. \end{equation}
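For concreteness, the following short numerical sketch (not part of the system model itself) evaluates the finite blocklength rate in~\eqref{Eq.(03)} for a given SINR, blocklength $n$, and decoding error probability $\epsilon$; the function and variable names are illustrative only.
\begin{verbatim}
# Minimal sketch of Eq. (3): finite-blocklength achievable rate in nats.
import numpy as np
from scipy.stats import norm

def achievable_rate(sinr, n=128, eps=1e-6):
    theta = norm.isf(eps) / np.sqrt(n)           # Q^{-1}(eps) / sqrt(n)
    capacity = np.log(1.0 + sinr)                # Shannon capacity C(gamma)
    dispersion = 1.0 - 1.0 / (1.0 + sinr) ** 2   # channel dispersion V(gamma)
    return capacity - theta * np.sqrt(dispersion)

# The finite-blocklength rate is strictly below the Shannon capacity:
print(achievable_rate(10.0), np.log(1.0 + 10.0))
\end{verbatim}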
The target of this work is to maximize the number of users belonging to the scheduled user set $\mathcal{S}\subseteq\mathcal{K}$ subject to the constraints of the per-user minimum rate requirement and the BS maximum power budget. Specifically, one needs to carefully select the scheduled user set $\mathcal{S}$, and design BF vectors with reasonable transmit power\footnote{For the ultra-dense or large-scale connectivity URLLC scenario, it may be preferable to schedule as many users as possible while satisfying the reliability and latency requirements. Accordingly, we aim to maximize the cardinality of the scheduled user set in this work.}. To this end, the joint US-BF optimization problem is formulated as follows\footnote{\textred{In our experiments, we obtain perfect CSI via link-level simulation. However, it is indeed hard to estimate CSI in real communication systems~\cite{du2021robust}. Although there are pilot-based and blind channel estimation methods, perfect CSI cannot be obtained due to estimation errors, which may lead to performance deterioration. Statistical CSI, including RSRP (Reference Signal Receiving Power), RSRQ (Reference Signal Receiving Quality), RSSI (Received Signal Strength Indicator), etc., might be helpful under this condition. We would like to further investigate the joint US-BF problem under imperfect CSI in future work.}} \begin{subequations}\label{Eq.(05)} \begin{align} &\max_{\{p_{k},\mathbf{w}_{k}\}}|\mathcal{S}|,\label{Eq.(05a)}\\ \mathrm{s.t.}~&r_{k}\leq R(\overrightarrow{\gamma}_{k}),~\left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{S},\label{Eq.(05b)}\\ &\sum\limits_{k\in\mathcal{S}}p_{k}\leq{P},\textred{~p_{k}\geq{0},\forall{k}\in\mathcal{S},}\label{Eq.(05c)} \end{align} \end{subequations} where $\left|\mathcal{S}\right|$ is the cardinality of set $\mathcal{S}$, $r_{k}$ is the per-user minimum rate requirement, and $P$ denotes the power budget of the BS. Problem~\eqref{Eq.(05)} is a mixed-integer continuous-variable programming problem that involves a discrete objective function and continuous-variable constraints on the power and unit-norm BF vectors. It is difficult to obtain the globally optimal solution of problem~\eqref{Eq.(05)}, or even a near-optimal one. Although the greedy heuristic search based US-BF (G-USBF) algorithm in Appendix A could be considered as a possible effective solution, it incurs extremely high computational complexity, especially for large-scale WCNs. In the sequel, the SCA-based US-BF optimization algorithm and the GNN-based learning algorithm are successively proposed to solve problem~\eqref{Eq.(05)}. \section{Design of The SCA-USBF Algorithm} In this section, we focus on designing an effective optimization algorithm for problem~\eqref{Eq.(05)} from the perspective of successive convex approximation (SCA) optimization theory. Since problem~\eqref{Eq.(05)} is non-convex, the first step is to transform it into a tractable form via some basic mathematical transformations. \textred{One idea is to apply the uplink-downlink duality theory~\cite{schubert2004solution} to equivalently transform the downlink problem~\eqref{Eq.(05)} into the virtual uplink dual problem~\eqref{Eq.(06)}~\footnote{\textred{Similar to formula~\eqref{Eq.(01)}, the virtual uplink input-output relationship can be expressed as $\mathbf{y}=\sum\limits_{k\in{\mathcal{S}}}\sqrt{q_{k}}\overline{\mathbf{h}}_{k}s_{k}+\mathbf{n}$, where $\mathbf{y}\in\mathbb{C}^{N\times1}$ is the virtual uplink received signal at the BS, $s_k$ is the virtual uplink normalized data symbol intended for the $k$-th user, and $\mathbf{n}\in\mathbb{C}^{N\times1}$ is the additive white Gaussian noise with distribution $\mathcal{CN}(0,\mathbf{I})$.
For the virtual uplink communication system, $\mathbf{w}_{k}$ is used as the receive vector for the $k$-th user. Thus, the virtual uplink received SINR of the $k$-th user can be calculated from the received signal $\mathbf{w}_{k}^{H}\mathbf{y}$.}}, i.e.,} \begin{subequations}\label{Eq.(06)} \begin{align} &\max_{\left\{q_{k},\mathbf{w}_{k}\right\}}|\mathcal{S}|,\label{Eq.(06a)}\\ \mathrm{s.t.}~&r_{k}\leq{R}(\overleftarrow{\gamma}_{k}), \left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{S},\label{Eq.(06b)}\\ &\sum\limits_{k\in\mathcal{S}}q_{k}\leq{P},\textred{~q_{k}\geq{0},\forall{k}\in\mathcal{S},}\label{Eq.(06c)} \end{align} \end{subequations} where $q_{k}$ is the virtual uplink transmit power of user $k$, and $\overleftarrow{\gamma}_{k}$ represents the corresponding virtual uplink received SINR, i.e., \begin{equation}\label{Eq.(07)} \overleftarrow{\gamma}_{k}=\frac{q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}}{\sum\limits_{l\neq{k},l\in\mathcal{S}}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}+1}. \end{equation} \textred{Note that definition (\ref{Eq.(07)}) calculates the SINRs only for the scheduled user set $\mathcal{S}$, which implicitly means that the SINRs of the unscheduled users are all zero in theory. For convenience, we further introduce a new SINR definition ${\overleftarrow{\gamma}}_{k}^{(\mathcal{K})}$, which is directly calculated based on the candidate user set $\mathcal{K}$, i.e.,} {\color{red}\begin{equation}\label{Eq.(08)} \overleftarrow{\gamma}_{k}^{(\mathcal{K})}=\frac{q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}}{\sum\limits_{l\neq{k},l\in\mathcal{K}}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}+1}. \end{equation}} \textred{To clearly indicate whether a user is scheduled or not, we introduce $\kappa_{k}$ as a binary indicator variable of the user state, with $\kappa_{k}=1$ if user $k$ is scheduled and $\kappa_{k}=0$ otherwise, $k\in\mathcal{K}$. Therefore, $\kappa_{k}=1$ also means that the minimum rate constraint is met for the $k$-th user, i.e., $r_{k}\le{R}(\overleftarrow{\gamma}_{k})$ and $q_{k}\geq{0}$ hold. However, $\kappa_{k}=0$ does not mean that $R\left(\overleftarrow{\gamma}_{k}\right) = 0$ and $q_{k}=0$ always hold. For instance, consider a candidate user set $\mathcal{K}$ and a scheduled user set $\mathcal{S}\subset\mathcal{K}$. The transmission power of the BS is not always precisely exhausted by the scheduled user set $\mathcal{S}$. For a user $k'$ from the remaining user set $\mathcal{K}\setminus\mathcal{S}$, if the residual power cannot meet its minimum transmission power requirement, then we have $\kappa_{k'}=0$, $0<R({\overleftarrow{\gamma}_{k'}})<r_{k'}$, and $||\mathbf{w}_{k'}||_{2} = 1$. In such a circumstance, $\kappa_{k'}r_{k'}\le{R}(\overleftarrow{\gamma}_{k'})$ holds, but $\kappa_{k'}=0$, i.e., $k'\notin\mathcal{S}$. Meanwhile, for user $k\in\mathcal{S}$, $\overleftarrow{\gamma}_{k}>\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}>0$ and $\kappa_{k}=1$ hold. For user $k\notin\mathcal{S}$, if $k\in\mathcal{K},k\ne{k'}$, then $\overleftarrow{\gamma}_{k}=\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}=0$ and $\kappa_{k}=0$ hold.
Letting $\bm{\kappa}=[\kappa_{1},\kappa_{2},\cdots,\kappa_{k},\cdots,\kappa_{K}]^{T}$, problem~\eqref{Eq.(06)} is approximately rewritten as} {\color{red}\begin{subequations}\label{Eq.(09)} \begin{align} &\max_{\left\{\kappa_{k},q_{k},\mathbf{w}_{k}\right\}} \left\|\bm{\kappa}\right\|_{0},\label{Eq.(09a)}\\ \mathrm{s.t.}~&\kappa_{k}\in\{0,1\},\forall{k}\in\mathcal{K},\label{Eq.(09b)}\\ &\kappa_{k}r_{k}\leq{R}(\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}), \left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{K},\label{Eq.(09c)}\\ &\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},~q_{k}\geq{0},\forall k\in\mathcal{K}.\label{Eq.(09d)} \end{align} \end{subequations}} \textred{As discussed above, for user $k\in\mathcal{S}$, $\overleftarrow{\gamma}_{k}>\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}>0$ holds, and for user $k\notin\mathcal{S}$, $\kappa_{k}=0$ holds. Therefore, (\ref{Eq.(09c)}) is a stricter constraint than (\ref{Eq.(06b)}), and the optimal value of problem (\ref{Eq.(06)}) is an upper bound on that of problem (\ref{Eq.(09)}).} The goal of problem~\eqref{Eq.(09)} is to maximize the number of scheduled users under the given constraints. Further, constraints~\eqref{Eq.(09b)} and~\eqref{Eq.(09c)} can be equivalently transformed into a continuous constraint and an SINR form~\cite{he2020beamforming}, respectively. Let $\widetilde{\gamma}_{k}>0$ be the minimum SINR associated with achieving the minimum achievable rate $r_k$ for the $k$-th user. \textred{Thus, problem~\eqref{Eq.(09)} can be equivalently transformed into} \begin{subequations}\label{Eq.(10)} \begin{align} &\max_{\{\kappa_{k},q_{k},\mathbf{w}_{k}\}} \left\|\bm{\kappa}\right\|_{0},\label{Eq.(10a)}\\ \mathrm{s.t.}~&0\leq\kappa_{k}\leq{1},\forall{k}\in\mathcal{K},\label{Eq.(10b)}\\ &\sum\limits_{k\in\mathcal{K}}\left(\kappa_{k}-\kappa_{k}^{2}\right)\leq{0},\label{Eq.(10c)}\\ &\kappa_{k}\widetilde{\gamma}_{k}\leq\overleftarrow{\gamma}_{k}^{(\mathcal{K})}, \left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{K},\label{Eq.(10d)}\\ &\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},~q_{k}\geq{0},\forall{k}\in\mathcal{K}.\label{Eq.(10e)} \end{align} \end{subequations} Constraints~\eqref{Eq.(10b)} and~\eqref{Eq.(10c)} ensure that the value of $\kappa_{k}$ equals either one or zero, i.e., $\kappa_{k}\in\{0,1\}$, $\forall k\in\mathcal{K}$. According to~\cite[Proposition 2]{che2014joint}, the strong Lagrangian duality holds for problem~\eqref{Eq.(10)}. Introducing similar mathematical techniques to handle constraint~\eqref{Eq.(10c)}, \textred{problem~\eqref{Eq.(10)} is reformulated as follows} \begin{subequations}\label{Eq.(11)} \begin{align} &\min_{\{\kappa_{k},q_{k},\mathbf{w}_{k}\}}-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+g\left(\bm{\kappa}\right)-h\left(\bm{\kappa}\right),\label{Eq.(11a)}\\ \mathrm{s.t.}~&~\eqref{Eq.(10b)},~\eqref{Eq.(10d)},~\eqref{Eq.(10e)},\label{Eq.(11b)} \end{align} \end{subequations} where $\lambda$ is a proper non-negative penalty constant, and $g\left(\bm{\kappa}\right)$ and $h\left(\bm{\kappa}\right)$ are defined respectively as \begin{subequations}\label{Eq.(12)} \begin{align} g\left(\bm{\kappa}\right)&\triangleq\lambda\sum\limits_{k\in\mathcal{K}}\kappa_{k}+\lambda\left(\sum\limits_{k\in\mathcal{K}}\kappa_{k}\right)^{2},\\ h\left(\bm{\kappa}\right)&\triangleq\lambda\sum\limits_{k\in\mathcal{K}}\kappa_{k}^{2}+\lambda\left(\sum\limits_{k\in\mathcal{K}}\kappa_{k}\right)^{2}.
\end{align} \end{subequations} Note that the optimal receive BF vector $\mathbf{w}_k^{(\ast)}$ for maximizing the uplink SINR $\overleftarrow{\gamma}_{k}^{(\mathcal{K})}$ of the $k$-th user is the minimum mean square error (MMSE) filter with fixed $\{q_{k}\}$, i.e., \begin{equation}\label{Eq.(13)} \mathbf{w}_{k}^{(\ast)}=\frac{\left(\mathbf{I}_{N}+\sum\limits_{l\in\mathcal{K}}q_{l}\overline{\mathbf{h}}_{l}\overline{\mathbf{h}}_{l}^{H}\right)^{-1}\overline{\mathbf{h}}_{k}} {\left\|\left(\mathbf{I}_{N}+\sum\limits_{l\in\mathcal{K}}q_{l}\overline{\mathbf{h}}_{l}\overline{\mathbf{h}}_{l}^{H}\right)^{-1}\overline{\mathbf{h}}_{k}\right\|_{2}}, \end{equation} where $\mathbf{I}_{N}$ denotes the $N$-by-$N$ identity matrix. For fixed $\{\mathbf{w}_{k}\}$, \textred{problem~\eqref{Eq.(11)} is rewritten as} \begin{subequations}\label{Eq.(14)} \begin{align} &\min_{\{\kappa_{k},q_{k}\}}-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+g\left(\bm{\kappa}\right)-h\left(\bm{\kappa}\right),\label{Eq.(14a)}\\ \mathrm{s.t.}~&\widetilde{\gamma}_{k}\kappa_{k}-q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}+\varphi_{k}(\bm{\kappa},\mathbf{q})-\phi_{k}(\bm{\kappa},\mathbf{q})\leq{0},\forall{k}\in\mathcal{K},\label{Eq.(14b)}\\ &~\eqref{Eq.(10b)},~\eqref{Eq.(10e)},\label{Eq.(14c)} \end{align} \end{subequations} where $\varphi_{k}\left(\bm{\kappa},\mathbf{q}\right)$ and $\phi_{k}\left(\bm{\kappa},\mathbf{q}\right)$ are defined as \begin{subequations}\label{Eq.(15)} \begin{align} \varphi_{k}(\bm{\kappa},\mathbf{q})&\triangleq\frac{1}{2}\left(\widetilde{\gamma}_{k}\kappa_{k}+\sum\limits_{l\in\mathcal{K},l\neq k}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}\right)^{2},\\ \phi_{k}(\bm{\kappa},\mathbf{q})&\triangleq\frac{1}{2}\widetilde{\gamma}_{k}^{2}\kappa_{k}^{2}+\frac{1}{2}\left(\sum\limits_{l\in\mathcal{K},l\neq k}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}\right)^{2}. \end{align} \end{subequations} Problem~\eqref{Eq.(14)} belongs to the class of difference-of-convex programming problems, since the objective function~\eqref{Eq.(14a)} and constraint~\eqref{Eq.(14b)} involve differences of two convex functions. In the sequel, we resort to the classic SCA-based methods~\cite{nguyen2015achieving}. Using the convexity of the functions $h\left(\bm{\kappa}\right)$ and $\phi_{k}\left(\bm{\kappa},\mathbf{q}\right)$, we have \begin{equation}\label{Eq.(16)} \begin{split} &h(\bm{\kappa})\geq\psi\left(\bm{\kappa}\right)\triangleq h(\bm{\kappa}^{(\tau)})+\sum\limits_{k\in\mathcal{K}}h'(\kappa_{k}^{(\tau)})(\kappa_{k}-\kappa_{k}^{(\tau)}),\\ &\phi_{k}(\bm{\kappa},\mathbf{q})\geq\varrho_{k}(\bm{\kappa},\mathbf{q})\triangleq\phi_{k}(\bm{\kappa}^{(\tau)},\mathbf{q}^{(\tau)})\\ &+\widetilde{\gamma}_{k}^{2}\kappa_{k}^{(\tau)}(\kappa_{k}-\kappa_{k}^{(\tau)})+\sum\limits_{l\in\mathcal{K},l\neq{k}}\rho_{k,l}(\mathbf{q}^{(\tau)})(q_{l}-q_{l}^{(\tau)}), \end{split} \end{equation} where $h'\left(\kappa_{k}\right)\triangleq2\lambda\left(\kappa_{k}+\sum\limits_{l\in\mathcal{K}}\kappa_{l}\right)$, $\rho_{k,l}\left(\mathbf{q}\right)\triangleq\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}\sum\limits_{n\in\mathcal{K},n\neq k}q_{n}\left|\overline{\mathbf{h}}_{n}^{H}\mathbf{w}_{k}\right|^{2}$, and the superscript $\tau$ denotes the $\tau$-th iteration of the SCA-USBF algorithm presented shortly.
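As a concrete illustration of the closed-form receive filter in~\eqref{Eq.(13)}, the following short sketch computes the MMSE beamformers for all users given fixed virtual uplink powers $\{q_{k}\}$; the function and variable names are illustrative assumptions rather than part of the proposed algorithm.
\begin{verbatim}
# Sketch of Eq. (13): MMSE receive filters for fixed uplink powers q.
import numpy as np

def mmse_beamformers(H, q):
    # H: (K, N) complex array whose k-th row is the noise-normalized channel h_bar_k
    # q: (K,) non-negative virtual uplink powers
    K, N = H.shape
    Lam = np.eye(N, dtype=complex) + (H.T * q) @ H.conj()  # I_N + sum_l q_l h_l h_l^H
    W = np.linalg.solve(Lam, H.T).T                        # k-th row: Lam^{-1} h_bar_k
    return W / np.linalg.norm(W, axis=1, keepdims=True)    # unit-norm rows, per Eq. (13)

# Toy usage with random channels (illustrative only).
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
W = mmse_beamformers(H, np.ones(4))
\end{verbatim}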
From the aforementioned discussions, \textred{the convex approximation problem solved at the $(\tau+1)$-th iteration of the proposed algorithm is given by} \begin{subequations}\label{Eq.(17)} \begin{align} &\min_{\{\kappa_{k},q_{k}\}}-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+g(\bm{\kappa})-\psi(\bm{\kappa}),\label{Eq.(17a)}\\ \mathrm{s.t.}~&\widetilde{\gamma}_{k}\kappa_{k}-q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}+\varphi_{k}(\bm{\kappa},\mathbf{q})-\varrho_{k}(\bm{\kappa},\mathbf{q})\leq{0},\forall{k}\in\mathcal{K},\label{Eq.(17b)}\\ &~\eqref{Eq.(10b)},~\eqref{Eq.(10e)}.\label{Eq.(17c)} \end{align} \end{subequations} Based on the above mathematical transformations, SCA-USBF is summarized in \textbf{Algorithm}~\ref{Alg.(1)}. In the description of \textbf{Algorithm}~\ref{Alg.(1)}, $\delta$ denotes the maximum permissible error, and $\upsilon^{(\tau)}$ and $\zeta^{(t)}$ denote the objective values of problem~\eqref{Eq.(11)} at the $\tau$-th iteration and of problem~\eqref{Eq.(17)} at the $t$-th iteration, respectively. \textred{Note that SCA-USBF is also suitable for problem scenarios based on the Shannon capacity formula: we just need to replace $R(\overleftarrow{\gamma}_{k})$ with $C(\overleftarrow{\gamma}_{k})$ in problems (\ref{Eq.(05)}), (\ref{Eq.(06)}), and (\ref{Eq.(09)}), and replace the minimum SINR $\widetilde{\gamma}_{k}$ in constraint~\eqref{Eq.(10d)} for achieving the minimum achievable rate $r_k$ with $\widetilde{\gamma}'_{k} = 2^{r_k}-1$. }The convergence of SCA-USBF is guaranteed since the generated objective value sequence is monotonic and bounded. \textred{To speed up the convergence of SCA-USBF, we can first screen the users that meet constraints~\eqref{Eq.(05b)} and~\eqref{Eq.(05c)} under single-user communication with maximum ratio transmission (MRT) and full power transmission; thus, at least one user can be scheduled in this circumstance.} \begin{algorithm}[!ht] \caption{The SCA-USBF Algorithm for Problem~\eqref{Eq.(10)}}\label{Alg.(1)} \begin{algorithmic}[1] \STATE Let $t=0$, $\tau=0$, $\lambda=10^{-2}$ and $\delta=10^{-5}$. Initialize the BF vectors $\{\mathbf{w}_{k}^{(0)}\}$ and downlink power vector $\{p_{k}^{(0)}\}$ such that constraints~\eqref{Eq.(05b)} and~\eqref{Eq.(05c)} are satisfied. \label{Alg.(1-1)} \STATE Initialize $\zeta^{(0)}$ and $\upsilon^{(0)}$, calculate the downlink SINR $\overrightarrow{\gamma}_{k}$ via $\{p_{k}^{(0)},\mathbf{w}_{k}^{(0)}\}$ and Eq.~\eqref{Eq.(02)}, and obtain the uplink power vector $\mathbf{q}=[q_{1},\cdots,q_{K^{\ast}}]^{T}$ with \begin{equation}\label{Eq.(18)} \mathbf{q}=\bm{\Psi}^{-1}\mathbf{1}_{K^{\ast}}, \end{equation} where $\mathbf{1}_{K^{\ast}}$ is the $K^{\ast}$-dimensional all-one vector, and the matrix $\bm{\Psi}$ is given by \begin{equation}\label{Eq.(19)} [\bm{\Psi}]_{k,l}=\left\{ \begin{aligned} \frac{|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}|^{2}}{\overrightarrow{\gamma}_{k}},k=l,\\ -|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}|^{2},k\neq{l}. \end{aligned} \right. \end{equation} \STATE Let $t\leftarrow{t+1}$. Solve problem~\eqref{Eq.(17)} to obtain $\{\kappa_{k}^{(t)},q_{k}^{(t)}\}$ and $\zeta^{(t)}$.\label{Alg.(1-3)} \STATE If $|\frac{\zeta^{(t)}-\zeta^{(t-1)}}{\zeta^{(t-1)}}|\leq\delta$, go to Step~\ref{Alg.(1-5)}. Otherwise, go to Step~\ref{Alg.(1-3)}.\label{Alg.(1-4)} \STATE Let $\tau\leftarrow\tau+1$, update $\{\mathbf{w}_{k}^{(\tau)}\}$ with $\{q_{k}^{(t)}\}$ and Eq.~\eqref{Eq.(13)}, and obtain the objective value $\upsilon^{(\tau)}$.
If $|\frac{\upsilon^{(\tau)}-\upsilon^{(\tau-1)}}{\upsilon^{(\tau-1)}}|\leq\delta$, stop the iteration and go to Step~\ref{Alg.(1-6)}. Otherwise, go to Step~\ref{Alg.(1-3)}.\label{Alg.(1-5)} \STATE Calculate the uplink SINR $\overleftarrow{\gamma}_{k}$ via $\{q_{k}^{(t)},\mathbf{w}_{k}^{(\tau)}\}$ and Eq.~\eqref{Eq.(07)}, and obtain the downlink power vector $\mathbf{p}=[p_{1},\cdots,p_{K^{\ast}}]^{T}$ with\label{Alg.(1-6)} \begin{equation}\label{Eq.(20)} \mathbf{p}=\mathbf{D}^{-1}\mathbf{1}_{K^{\ast}}, \end{equation} where the matrix $\mathbf{D}$ is given by \begin{equation}\label{Eq.(21)} [\mathbf{D}]_{k,l}=\left\{ \begin{aligned} \frac{|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}|^{2}}{\overleftarrow{\gamma}_{k}},k=l,\\ -|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{l}|^{2},k\neq{l}. \end{aligned} \right. \end{equation} \STATE Calculate the objective function value, then output the US, PA and BF vectors $\{\kappa_{k},p_{k},\mathbf{w}_{k}\}$. \end{algorithmic} \end{algorithm} \section{Design of The J-USBF Algorithm} \textred{In this section, the transformation of problem~\eqref{Eq.(05)} into problem~\eqref{Eq.(10)} is inherited, where the BF vector has an analytical solution. We focus on proposing the J-USBF learning algorithm to output the joint US-BF strategy. Specifically, we first introduce the graph representation method for single-cell WCNs, then design the JEEPON model to learn the US-PA strategy, and combine it with the BF analytical solution to obtain the J-USBF learning algorithm, which is summarized as \textbf{Algorithm}~\ref{Alg.(03)}.} In the sequel, we focus on the design of JEEPON and the corresponding training framework. \begin{algorithm}[!ht] \caption{The J-USBF Learning Algorithm}\label{Alg.(03)} \begin{algorithmic}[1] \REQUIRE $\mathcal{D}=\{\mathbf{h}_{k}\}$: Testing sample with $K$ users; \\ ~\quad $\bm{\Theta}$: The trainable parameters of JEEPON. \\ \ENSURE The optimization strategy $\{\kappa_{k}^{(\ast)},q_{k}^{(\ast)},\mathbf{w}_{k}^{(\ast)}\}$ of sample $\mathcal{D}$. \STATE Construct graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ for sample $\mathcal{D}$ via the graph representation module. \STATE Input graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ to JEEPON and obtain the US-PA strategy $\{\kappa_{k}^{(\ast)},q_{k}^{(\ast)}\}$. \STATE Calculate the BF vectors $\{\mathbf{w}_{k}^{(\ast)}\}$ via Eq.~\eqref{Eq.(13)}, and output the strategy $\{\kappa_{k}^{(\ast)},q_{k}^{(\ast)},\mathbf{w}_{k}^{(\ast)}\}$. \STATE Calculate the uplink SINR $\overleftarrow{\gamma}_{k}$ via $\{q_{k}^{(\ast)},\mathbf{w}_{k}^{(\ast)}\}$ and Eq.~\eqref{Eq.(07)}, and obtain the downlink power vector $\{p_{k}^{(\ast)}\}$ via Eqs.~\eqref{Eq.(20)} and~\eqref{Eq.(21)}. \end{algorithmic} \end{algorithm} \subsection{Problem Transformation and Loss Function Definition} Different from the proposed SCA-USBF, which alternately updates the BF vectors, in the sequel the BF vectors are regarded as intermediate variables determined by the virtual uplink power vector. Substituting~\eqref{Eq.(13)} into~\eqref{Eq.(08)}, the uplink received SINR of user $k$ is rewritten as \begin{equation}\label{Eq.(22)} \widehat{\gamma}_{k}=\frac{q_{k}|\overline{\mathbf{h}}_{k}^{H}\bm{\Lambda}^{-1}\overline{\mathbf{h}}_{k}|^{2}}{\sum\limits_{l\neq{k},l\in\mathcal{K}}q_{l}|\overline{\mathbf{h}}_{l}^{H}\bm{\Lambda}^{-1}\overline{\mathbf{h}}_{k}|^{2}+\left\|\bm{\Lambda}^{-1}\overline{\mathbf{h}}_{k}\right\|_{2}^{2}}, \end{equation} where $\bm{\Lambda}=\mathbf{I}_{N}+\sum\limits_{l\in\mathcal{K}}q_{l}\overline{\mathbf{h}}_{l}\overline{\mathbf{h}}_{l}^{H}$.
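Before reformulating the problem, the following brief sketch evaluates the SINR expression~\eqref{Eq.(22)} for all users given the virtual uplink power vector $\mathbf{q}$; the helper names are illustrative assumptions and not the JEEPON implementation itself.
\begin{verbatim}
# Sketch of Eq. (22): uplink SINRs with the MMSE filters absorbed into Lambda.
import numpy as np

def uplink_sinr(H, q):
    # H: (K, N) complex array whose k-th row is h_bar_k; q: (K,) uplink powers
    K, N = H.shape
    Lam = np.eye(N, dtype=complex) + (H.T * q) @ H.conj()  # Lambda in Eq. (22)
    X = np.linalg.solve(Lam, H.T)                          # columns: Lambda^{-1} h_bar_k
    G = np.abs(H.conj() @ X) ** 2                          # G[l, k] = |h_bar_l^H Lambda^{-1} h_bar_k|^2
    signal = q * np.diag(G)                                # numerator of Eq. (22)
    interference = q @ G - signal                          # sum over l != k
    noise = np.linalg.norm(X, axis=0) ** 2                 # ||Lambda^{-1} h_bar_k||_2^2
    return signal / (interference + noise)
\end{verbatim}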
With this SINR expression, problem~\eqref{Eq.(10)} is reformulated as follows \begin{subequations}\label{Eq.(23)} \begin{align} &\max_{\{\kappa_{k},q_{k}\}} \sum\limits_{k\in\mathcal{K}}\kappa_{k},\label{Eq.(23a)}\\ \mathrm{s.t.}~&{0}\leq\kappa_{k}\leq{1},\forall{k}\in\mathcal{K},\label{Eq.(23b)}\\ &\sum\limits_{k\in\mathcal{K}}(\kappa_{k}-\kappa_{k}^{2})\leq{0},\label{Eq.(23c)}\\ &\kappa_{k}\widetilde{\gamma}_{k}\leq\widehat{\gamma}_{k},\forall{k}\in\mathcal{K},\label{Eq.(23d)}\\ &\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},~q_{k}\geq{0},\forall{k}\in\mathcal{K}.\label{Eq.(23e)} \end{align} \end{subequations} \textred{To facilitate the design of JEEPON, the violation-based Lagrangian relaxation method~\cite{fioretto2020predicting} is adopted to partially incorporate the constraints into the objective function and formulate problem~\eqref{Eq.(23)} as an unconstrained optimization problem. Observe that constraints~\eqref{Eq.(23b)} and~\eqref{Eq.(23e)} only involve individual optimization variables and can be satisfied by the subsequent projection-based method. For constraints~\eqref{Eq.(23c)} and~\eqref{Eq.(23d)}, we introduce the non-negative Lagrangian multipliers $\{\mu,\nu\in\mathbb{R}^{+}\}$ to capture how much the constraints are violated.} Thus, the partial Lagrangian relaxation function of problem~\eqref{Eq.(23)} is given by \begin{equation}\label{Eq.(24)} \begin{aligned} \mathcal{L}(\bm{\kappa},\mathbf{q},\mu,\nu)=-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+\mu\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}(\kappa_{k}-\kappa_{k}^{2})+\nu\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}(\kappa_{k}\widetilde{\gamma}_{k}-\widehat{\gamma}_{k}), \end{aligned} \end{equation} where $\chi_{c}^{\geq}(x)=\max\{x,0\}$ is the violation degree function. Further, the Lagrangian dual problem of~\eqref{Eq.(23)} is formulated as \begin{equation}\label{Eq.(25)} \begin{aligned} \max_{\{\mu,\nu\}}\,\min_{\{\kappa_{k},q_{k}\}}\mathcal{L}(\bm{\kappa},\mathbf{q},\mu,\nu). \end{aligned} \end{equation} \textred{To update the trainable parameters of JEEPON, a primal-dual learning framework (PDLF) is proposed to train it in an unsupervised manner, and the loss function is defined as $\mathrm{Loss}=\mathcal{L}/K$. In the sequel, we focus on describing the architectures of JEEPON and the PDLF.} \subsection{Graph Representation and Model Design} \textred{WCNs can naturally be modeled as undirected or directed graphs depending on their topology, and as homogeneous or heterogeneous graphs depending on the types of communication links and user equipments (UEs)~\cite{he2021overview}. For notational convenience, a graph with node set $\mathcal{V}$ and edge set $\mathcal{E}$ is defined as $\mathcal{G}(\mathcal{V},\mathcal{E})$, where the feature vectors of node $v\in\mathcal{V}$ and edge $e_{u,v}\in\mathcal{E}$ (between node $u$ and node $v$) are denoted by $\mathbf{x}_{v}$ and $\mathbf{e}_{u,v}$, respectively. In the graph representation of single-cell cellular networks, we can consider the UEs as nodes and the interfering links between different UEs as edges, as shown in Fig.~\ref{GraphStructure}. In general, the node and edge features of graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ mainly include CSI and other environmental information, such as user weights and Gaussian noise. In order to reduce the dimensionality of the node and edge feature vectors, we use the magnitudes of the channel inner products to represent channel gains and interference levels.
Therefore, the features of node $v$ and edge $e_{u,v}$ are defined as $\mathbf{x}_{v}=|\overline{\mathbf{h}}_{v}^{H}\mathbf{h}_{v}|$ and $\mathbf{e}_{u,v}=|\overline{\mathbf{h}}_{u}^{H}\mathbf{h}_{v}|$, respectively.} \begin{figure}[!ht] \centering \includegraphics[width=0.5\columnwidth,keepaspectratio]{Fig-Graph.eps} \caption{A wireless channel graph with four UEs.} \label{GraphStructure} \end{figure} Following the completion of the WCN graph representation, we focus on the design of JEEPON to output the US-PA strategy, where the optimization variables are embedded in the node representation vectors. Specifically, JEEPON applies a message passing mechanism based graph convolutional network (GCN)~\cite{gilmer2017neural} to iteratively update the representation vector of node $v\in\mathcal{V}$ by aggregating features from its neighbor nodes and edges. Each GCN layer consists of two steps: it first generates and collects messages from the first-order neighbor nodes and edges of node $v$, and then updates the representation vector of node $v$ with the aggregated messages. After $\ell$ graph convolutions, the representation vector of node $v$ captures the messages within its $\ell$-hop neighborhood of nodes and edges. To be specific, the update rule of the $\ell$-th GCN layer at node $v$ is formulated as \begin{equation}\label{Eq.(26)} \begin{aligned} \mathbf{m}_{u,v}^{(\ell)}&=\mathbf{M}_{\theta}^{(\ell)}\left(\bm{\beta}_{u}^{(\ell-1)},\mathbf{x}_{u},\mathbf{e}_{u,v}\right),u\in\mathcal{N}_{v},\\ \mathbf{g}_{v}^{(\ell)}&=\mathbf{G}\left(F_{\mathrm{max}}\left(\{\mathbf{m}_{u,v}^{(\ell)}\}\right),F_{\mathrm{mean}}\left(\{\mathbf{m}_{u,v}^{(\ell)}\}\right)\right),v\in\mathcal{V},\\ \bm{\beta}_{v}^{(\ell)}&=\mathbf{U}_{\theta}^{(\ell)}\left(\bm{\beta}_{v}^{(\ell-1)},\mathbf{x}_{v},\mathbf{g}_{v}^{(\ell)}\right),v\in\mathcal{V}, \end{aligned} \end{equation} where $\mathcal{N}_{v}$ is the first-order neighborhood set of node $v$, $\bm{\beta}_{v}^{(\ell)}\triangleq[\kappa_{v},q_{v}]\in\mathbb{R}^{2}$ represents the pairwise optimization vector of node $v$ at the $\ell$-th GCN layer, and $\bm{\beta}_{v}^{(0)}$ is initialized with an all-zero vector. Therefore, when the $\ell$-th graph convolution operation is completed, the representation vector of node $v$ can be written as $[\bm{\beta}_{v}^{(\ell)},\mathbf{x}_{v}]$. Fig.~\ref{MessagePassingProcess} illustrates the state update process of node $v$ at the $\ell$-th GCN layer. Here, $\mathbf{M}_{\theta}^{(\ell)}(\cdot)$ is a message construction function defined on each edge to generate the edge message $\mathbf{m}_{u,v}^{(\ell)}\in\mathbb{R}^{m}$ by combining the incoming node and edge features, where $m$ is the message dimension. $\mathbf{G}(\cdot)$ is a message aggregation function that uses the concatenation of the max function $F_{\mathrm{max}}(\cdot)$ and the mean function $F_{\mathrm{mean}}(\cdot)$ to gather the relevant edge messages $\{\mathbf{m}_{u,v}^{(\ell)}|u\in\mathcal{N}_{v}\}$ and output the aggregated message $\mathbf{g}_{v}^{(\ell)}\in\mathbb{R}^{2m}$. $\mathbf{U}_{\theta}^{(\ell)}(\cdot)$ is a state update function defined on each node, which updates the node representation based on the aggregated message $\mathbf{g}_{v}^{(\ell)}$, the node feature $\mathbf{x}_{v}$, and the optimization vector $\bm{\beta}_{v}^{(\ell-1)}$. In JEEPON, the functions $\mathbf{M}_{\theta}^{(\ell)}(\cdot)$ and $\mathbf{U}_{\theta}^{(\ell)}(\cdot)$ are parameterized by different neural network modules.
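To make the update rule~\eqref{Eq.(26)} more concrete, the following condensed PyTorch sketch implements one GCN layer of this form on a fully connected graph: an edge MLP builds the messages $\mathbf{m}_{u,v}$, max- and mean-aggregation are concatenated, and a node MLP updates the pairwise optimization vector $\bm{\beta}_{v}$. The layer widths and class names are illustrative assumptions and do not reproduce the exact JEEPON configuration.
\begin{verbatim}
# Sketch of one message-passing GCN layer in the spirit of Eq. (26).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, beta_dim=2, node_dim=1, edge_dim=1, msg_dim=16):
        super().__init__()
        self.msg_mlp = nn.Sequential(          # M_theta: builds m_{u,v}
            nn.Linear(beta_dim + node_dim + edge_dim, 32),
            nn.ReLU(), nn.Linear(32, msg_dim))
        self.upd_mlp = nn.Sequential(          # U_theta: updates beta_v
            nn.Linear(beta_dim + node_dim + 2 * msg_dim, 32),
            nn.ReLU(), nn.Linear(32, beta_dim))

    def forward(self, beta, x, E):
        # beta: (K, beta_dim), x: (K, node_dim), E: (K, K, edge_dim), E[v, u] = e_{u,v}
        K = x.shape[0]
        src = torch.cat([beta, x], dim=1)                        # [beta_u, x_u]
        msg_in = torch.cat([src.unsqueeze(0).expand(K, K, -1), E], dim=2)
        m = self.msg_mlp(msg_in)                                 # m[v, u] = m_{u,v}
        mask = ~torch.eye(K, dtype=torch.bool)                   # exclude u = v
        g_max = torch.stack([m[v][mask[v]].max(dim=0).values for v in range(K)])
        g_mean = torch.stack([m[v][mask[v]].mean(dim=0) for v in range(K)])
        g = torch.cat([g_max, g_mean], dim=1)                    # aggregated message g_v
        return self.upd_mlp(torch.cat([beta, x, g], dim=1))      # updated beta_v
\end{verbatim}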
\begin{figure}[!ht] \centering \includegraphics[width=0.9\columnwidth,keepaspectratio]{Fig-MessagePassing.eps} \caption{The state update process of node $v$ at the $\ell$-th GCN layer.} \label{MessagePassingProcess} \end{figure} By stacking several GCN layers, JEEPON can gather multi-hop node and edge features. An illustration of JEEPON is given in Fig.~\ref{JEEPONModel}; it consists of $N_{\mathrm{L}}$ GCN layers and one projection activation (PAC) layer. Each GCN layer includes an input layer, an output layer, and two different MLPs composed of linear (LN) layers, batch normalization (BN) layers and activation (AC) layers. In the final part of JEEPON, we utilize the PAC layer to project $\{\kappa_{k}^{(N_{\mathrm{L}})},q_{k}^{(N_{\mathrm{L}})}\}$ onto the feasible region $\bm{\Omega}$, i.e., \begin{equation}\label{Eq.(27)} \begin{aligned} \bm{\Omega}\triangleq\{\bm{\kappa},\mathbf{q}:{0}\leq\kappa_{k}\leq{1},\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},q_{k}\geq{0},\forall{k}\in\mathcal{K}\}. \end{aligned} \end{equation} To this end, the projection functions of the PAC layer are defined as \begin{equation}\label{Eq.(28)} \begin{aligned} \kappa_{k}^{(\ast)}&=F_{\mathrm{relu}}(\kappa_{k},1),~q_{k}^{'}=F_{\mathrm{relu}}(q_{k},P),\forall{k}\in\mathcal{K}, \\ q_{k}^{(\ast)}&=\frac{P}{\mathrm{max}\{P,\sum\limits_{k\in\mathcal{K}}q_{k}^{'}\}}q_{k}^{'},\forall{k}\in\mathcal{K}, \end{aligned} \end{equation} where the function $F_{\mathrm{relu}}(\mathbf{z},\mathbf{b})=\min\{\max\{\mathbf{z},\mathbf{0}\},\mathbf{b}\}$ clips its input to $[\mathbf{0},\mathbf{b}]$, with $\mathbf{b}$ the upper bound of the input. Furthermore, the matrix inversion operation in the uplink SINR equation~\eqref{Eq.(22)} leads to a high computational overhead. To speed up the computation, the following \textit{Lemma~\ref{lemma01}} is applied to replace the direct matrix inversion by $K$ rank-one update iterations. Specifically, it reduces the computational complexity to $\mathcal{O}(KN^{2})$ instead of the matrix inversion complexity $\mathcal{O}(KN^{2}+N^{3})$, where $\mathcal{O}(\cdot)$ is the big-$\mathcal{O}$ notation for describing the computational complexity. \begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth,keepaspectratio]{Fig-JEEPON.eps} \caption{The architecture of JEEPON.} \label{JEEPONModel} \end{figure} \begin{lemma}\label{lemma01} According to the Sherman-Morrison formula~\cite{sherman1950adjustment}, for an invertible square matrix $\mathbf{A}\in\mathbb{C}^{N\times{N}}$, if there exist two column vectors $\mathbf{u},\mathbf{v}\in\mathbb{C}^{N\times1}$ such that $1+\mathbf{v}^{H}\mathbf{A}^{-1}\mathbf{u}\neq0$, then the inverse of $\mathbf{A}+\mathbf{u}\mathbf{v}^{H}$ is given by \begin{equation}\label{Eq.(29)} (\mathbf{A}+\mathbf{u}\mathbf{v}^{H})^{-1}=\mathbf{A}^{-1}-\frac{\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^{H}\mathbf{A}^{-1}}{1+\mathbf{v}^{H}\mathbf{A}^{-1}\mathbf{u}}. \end{equation} Based on this formula, let $\mathbf{T}_{n}=\left(\mathbf{I}_{N}+\sum\limits_{k=1}^{n}q_{k}\overline{\mathbf{h}}_{k}\overline{\mathbf{h}}_{k}^{H}\right)^{-1}$, $n\in\{0,1,\cdots,K\}$, so that $\mathbf{T}_{K}=\bm{\Lambda}^{-1}$; then $\mathbf{T}_{n}$ can be computed in an iterative matrix product form, which is formulated as follows \begin{equation}\label{Eq.(30)} \mathbf{T}_{n}= \left\{ \begin{aligned} \mathbf{I}_{N}&, n=0,\\ \mathbf{T}_{n-1}-\frac{\mathbf{T}_{n-1}q_{n}\overline{\mathbf{h}}_{n}\overline{\mathbf{h}}_{n}^{H}\mathbf{T}_{n-1}}{1+q_{n}\overline{\mathbf{h}}_{n}^{H}\mathbf{T}_{n-1}\overline{\mathbf{h}}_{n}}&, n>0. \end{aligned} \right.
\end{equation} \end{lemma} \subsection{Primal-Dual Learning Framework} In view of the aforementioned aspects, the PDLF is developed to train the JEEPON model for solving the Lagrangian dual problem~\eqref{Eq.(25)}; it is composed of two parts, the primal update part and the dual update part, as shown in Fig.~\ref{PDLF}. \textred{The primal update part takes the users' historical channel data sample $\mathcal{D}\triangleq\{\mathbf{h}_{k}\}$ as input, and outputs the related US-PA strategy $\bm{\Phi}(\mathcal{D},\bm{\Theta})\triangleq\{\kappa_{k},q_{k}\}$, where $\bm{\Theta}$ denotes the trainable parameters of JEEPON. Specifically, it includes a graph representation module for WCN topology construction, a JEEPON model for US-PA optimization, and a loss function module for updating $\bm{\Theta}$. In the dual update part, the Lagrangian multipliers $\{\mu,\nu\}$ are updated by the sub-gradient optimization method. The PDLF runs the two parts alternately: the former minimizes the function $\mathcal{L}$ with fixed $\{\mu,\nu\}$ by updating $\bm{\Theta}$ to obtain a suitable $\{\kappa_{k},q_{k},\mathbf{w}_{k}\}$, and the latter maximizes the function $\mathcal{L}$ with fixed $\{\kappa_{k},q_{k},\mathbf{w}_{k}\}$ by updating $\{\mu,\nu\}$. Therefore, the update rule of the Lagrangian multipliers $\{\mu,\nu\}$ at the $\tau$-th epoch is} \begin{equation}\label{Eq.(31)} \begin{aligned} \mu^{(\tau+1)}&=\mu^{(\tau)}+\varepsilon_{\mu}\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}\left(\kappa_{k}^{(\tau)}-(\kappa_{k}^{(\tau)})^{2}\right), \\ \nu^{(\tau+1)}&=\nu^{(\tau)}+\varepsilon_{\nu}\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}\left(\kappa_{k}^{(\tau)}\widetilde{\gamma}_{k}-\widehat{\gamma}_{k}\right), \end{aligned} \end{equation} where $\varepsilon_{\mu}$ and $\varepsilon_{\nu}$ are the update step-sizes of $\mu$ and $\nu$, respectively. In addition, the Lagrangian multipliers are updated once per epoch based on the violation degree over the training dataset. For the inner minimization of problem~\eqref{Eq.(25)}, JEEPON transforms it into a statistical learning problem, which aims to obtain appropriate optimization vectors $\{\kappa_{k},q_{k}\}$ by updating the trainable parameters of JEEPON. \begin{figure}[!ht] \centering \includegraphics[width=0.9\columnwidth,keepaspectratio]{Fig-PDLF.eps} \captionsetup{labelfont={footnotesize,color=red},font={footnotesize,color=red}} \caption{The architecture of the PDLF.} \label{PDLF} \end{figure} \begin{algorithm}[!ht] \caption{PDLF for Training JEEPON.}\label{Alg.(02)} \begin{algorithmic}[1] \REQUIRE $N_{\mathrm{e}}$: Number of epochs; \\ ~\quad $\bm{\Theta}$: The trainable parameters of JEEPON; \\ ~\quad $\varepsilon_{\mu},\varepsilon_{\nu}$: Step sizes of the Lagrangian multipliers; \\ ~\quad $\mathcal{D}\triangleq\{\mathcal{D}_{i}\}_{i=1}^{N_{\mathrm{ta}}}$: Training dataset with $N_{\mathrm{ta}}$ samples. \\ \ENSURE The trained JEEPON model. \STATE Initialize the trainable parameters $\bm{\Theta}$ and the Lagrangian multipliers $\{\mu^{(0)},\nu^{(0)}\}$. \FOR {epoch $\tau\leftarrow1,2,\cdots,N_{\mathrm{e}}$} \STATE Initialize dual gradient variables $\{\nabla_{\mu}^{(0)},\nabla_{\nu}^{(0)}\}$. \FOR {each sample $\mathcal{D}_{i}:i\leftarrow1,2,\cdots,N_{\mathrm{ta}}$} \STATE Construct the graph $\mathcal{G}_{i}(\mathcal{V},\mathcal{E})$ for sample $\mathcal{D}_{i}$. \STATE Obtain the US-PA strategy $\{\kappa_{k}^{(i)},q_{k}^{(i)}\}$ via JEEPON, and then update $\bm{\Theta}$ via the loss function module.
\STATE Update the dual gradient variables: \STATE\quad $\nabla_{\mu}^{(i)}\leftarrow\nabla_{\mu}^{(i-1)}+\sum\limits_{k\in\mathcal{D}_{i}}\chi_{c}^{\geq}(\kappa_{k}^{(i)}-(\kappa_{k}^{(i)})^{2})$, $\nabla_{\nu}^{(i)}\leftarrow\nabla_{\nu}^{(i-1)}+\sum\limits_{k\in\mathcal{D}_{i}}\chi_{c}^{\geq}(\kappa_{k}^{(i)}\widetilde{\gamma}_{k}-\widehat{\gamma}_{k})$. \ENDFOR \STATE Obtain the Lagrangian multipliers $\{\mu^{(\tau)},\nu^{(\tau)}\}$ via Eq.~\eqref{Eq.(31)}. \ENDFOR \end{algorithmic} \end{algorithm} The PDLF is designed for training JEEPON. Unlike the penalty-based supervised training method in~\cite{xia2020deep}, the proposed PDLF alternately updates $\bm{\Theta}$ and $\{\mu,\nu\}$ in an unsupervised manner, as summarized in \textbf{Algorithm}~\ref{Alg.(02)}. Specifically, we generate a training dataset $\mathcal{D}\triangleq\{\mathcal{D}_{i}\}_{i=1}^{N_{\mathrm{ta}}}$ with $N_{\mathrm{ta}}$ samples, each of the same size. The training stage of the PDLF lasts for $N_{\mathrm{e}}$ epochs in total. \textred{In the primal update part, the PDLF first constructs the graph representation for sample $\mathcal{D}_{i}$ (Step 5) and takes it as the input of JEEPON. Then, JEEPON outputs the US-PA strategy $\bm{\Phi}(\mathcal{D}_{i},\bm{\Theta})\triangleq\{\kappa_{k}^{(i)},q_{k}^{(i)}\}$ of sample $\mathcal{D}_{i}$, and the loss function module is adopted to update $\bm{\Theta}$ (Step 6). The sub-gradient values of $\{\mu,\nu\}$ are also accumulated to avoid repeated traversal of the training dataset (Steps 7 and 8).} Therefore, in the dual update part, $\{\mu,\nu\}$ are updated by the recorded dual gradient variables $\{\nabla_{\mu}^{(N_{\mathrm{ta}})},\nabla_{\nu}^{(N_{\mathrm{ta}})}\}$ and equation~\eqref{Eq.(31)} (Step 10). \section{Numerical Results} In this section, numerical results are presented for the joint US-BF optimization problem in the multiuser MISO downlink system. We first introduce the experimental environment and system parameters. Next, the convergence of SCA-USBF and J-USBF is evaluated. Then, the performance of G-USBF, SCA-USBF, and J-USBF is discussed in different system scenarios, as well as the generalizability of J-USBF. In addition, the performance of J-USBF and the convolutional neural network based US-BF (CNN-USBF) algorithm (see Appendix B) is also compared. Finally, the computational complexity of the algorithms is presented and discussed, which clearly validates the speed advantage of J-USBF. \subsection{Experimental Setup} In the experiment\footnote{\textred{Offline training of J-USBF is necessary and important, and requires a large amount of data. However, real data is quite difficult to obtain, although some researchers are committed to solving this problem \cite{huang2021true-data,coronado20195G-EmPOWER,munoz2016the}; therefore, we use simulated data instead.}}, the $K$ single-antenna users are randomly distributed in the range of $(d_{l},d_{r})$ from the BS, $d_{l},d_{r}\in(d_{\mathrm{min}},d_{\mathrm{max}})$, where $d_{\mathrm{min}}=50\mathrm{m}$ is the reference distance and $d_{\mathrm{max}}=200\mathrm{m}$ denotes the cell radius, as shown in Fig.~\ref{SystemModel}.
The channel of user $k$ is modeled as $\mathbf{h}_{k}=\sqrt{\rho_{k}}\widetilde{\mathbf{h}}_{k}\in\mathbb{C}^{N\times1}$, where $\widetilde{\mathbf{h}}_{k}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{N})$ is the small-scale fading, $\varrho=3$ is the path-loss exponent, and $\rho_{k}=1/(1+(d_{k}/d_{\mathrm{min}})^{\varrho})$ denotes the long-term path-loss between user $k$ and the BS, with $d_{k}$ representing the distance in meters (m). For simplicity, we assume that all users have the same additive noise variance, i.e., $\sigma_{k}^{2}=\sigma^{2},\forall{k}\in\mathcal{K}$; thus, the signal-to-noise ratio (SNR) is defined as $\mathrm{SNR}=10\log_{10}(P/\sigma^{2})$ in dB. \begin{figure}[ht] \centering \includegraphics[width=0.8\columnwidth,keepaspectratio]{Fig-SystemModel.eps} \caption{User distribution of the multiuser MISO downlink system.} \label{SystemModel} \end{figure} In the neural network module, J-USBF is implemented with $N_{\mathrm{L}}=2$ GCN layers via \emph{Pytorch}, and the functions $\mathbf{M}_{\theta}(\cdot)$ and $\mathbf{U}_{\theta}(\cdot)$ in each GCN layer are parameterized by MLPs with sizes $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$, respectively. The training and test stages of J-USBF are implemented sequentially. The learning rate of J-USBF and the step sizes of the Lagrangian multipliers are set to $\eta=5\times10^{-5}$ and $\varepsilon_{\mu}=\varepsilon_{\nu}=10^{-5}$, respectively. For each configuration, we prepare $N_{\mathrm{ta}}=5000$ training samples and $N_{\mathrm{te}}=500$ testing samples, where the validation split is set to $0.2$ and the training samples are randomly shuffled at each epoch. The entire training stage lasts for $N_{\mathrm{e}}=200$ epochs. According to the conclusion in~\cite[Corollary 1]{he2020beamforming}, the per-user minimum required SINR $\widetilde{\gamma}$ is determined by the system parameters $\{D,n,\epsilon\}$. Note that, unless mentioned otherwise, the experiments adopt the default system parameters in Table~\ref{Tab-01}. \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.1} \caption{Default system parameters.}\label{Tab-01} \begin{tabular}{|c|c|} \hline Parameters & Values \\\hline SNR & $10~\mathrm{dB}$ \\\hline Blocklength $n$ & $128$ \\\hline Decoding error probability $\epsilon$ & $10^{-6}$ \\\hline Transmission data bits $D$ & $256~\mathrm{bits}$ \\\hline BS antenna number $N$ & $32$ \\\hline Number of candidate users $K$ & $30$ \\\hline Maximum permissible error $\delta$ & $10^{-5}$ \\\hline Sizes of MLPs in $\mathbf{M}_{\theta}(\cdot)$ & $\mathcal{H}_{1}=\{4,256,128,64,32,16,m\}$ \\\hline Sizes of MLPs in $\mathbf{U}_{\theta}(\cdot)$ & $\mathcal{H}_{2}=\{3+2m,256,128,64,32,16,3\}$\\\hline \end{tabular} \end{table} \subsection{Convergence Analysis of SCA-USBF and J-USBF} \textred{The convergence of SCA-USBF and J-USBF is evaluated in this subsection, where part of the system parameters are set to $K=50$ and $(d_{l},d_{r})=(60\mathrm{m},100\mathrm{m})$. Fig.~\ref{Fig-Target1} illustrates the objective value curves of SCA-USBF for different random channel realizations, indicating that SCA-USBF reaches the convergence state within a moderate number of iterations. Fig.~\ref{Fig-Target2} illustrates the objective value curve of J-USBF during the training stage, where the objective values of individual training samples vary within a range (light blue line), and the average objective value curve (blue line) converges as the number of iterations increases to $3.5\times10^{5}$.
During the testing stage, the constraint violation ratios of J-USBF for different testing samples are shown in Table~\ref{Tab-02}. It is observed that the percentage of constraint-violating results is $2.268\%$, which is much lower than that of the results satisfying the constraints.} Note that the value of $\kappa_{k}$, $\forall{k}\in\mathcal{K}$, is set to $1$ if $0<\kappa_{k}<1$ is obtained, and all the scheduled users are then filtered again according to the per-user minimum SINR requirement. \begin{figure}[ht] \centering {\color{red} \begin{minipage}{0.45\linewidth} \centering \subfigure[SCA-USBF]{ \label{Fig-Target1} \includegraphics[width=1.0\linewidth]{Fig-Target1.eps}} \end{minipage} \begin{minipage}{0.45\linewidth} \centering \subfigure[J-USBF]{ \label{Fig-Target2} \includegraphics[width=1.0\linewidth]{Fig-Target2.eps}} \end{minipage} } \captionsetup{labelfont={footnotesize,color=red},font={footnotesize,color=red}} \caption{The objective value curves of SCA-USBF and J-USBF.} \label{EXP01_FIG} \end{figure} \begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.1} \captionsetup{labelfont={color=red},font={color=red}} \caption{Different constraint situations of J-USBF.}\label{Tab-02} {\color{red} \begin{tabular}{|c|c|} \hline Different constraint situations & Percentage of total samples \\\hline $\kappa_{k}=0,q_{k}\geq{0},\forall{k}\in\mathcal{K}$ & $75.436\%$ \\\hline $0<\kappa_{k}<1,q_{k}\geq{0},\forall{k}\in\mathcal{K}$ & $2.264\%$ \\\hline $\kappa_{k}=1,\widetilde{\gamma}_{k}>\widehat{\gamma}_{k},\forall{k}\in\mathcal{K}$ & $0.004\%$ \\\hline $\kappa_{k}=1,\widetilde{\gamma}_{k}\leq\widehat{\gamma}_{k},\forall{k}\in\mathcal{K}$ & $22.296\%$ \\\hline \end{tabular} } \end{table} \subsection{Performance and Generalizability Evaluation} In this subsection, the performance of J-USBF, SCA-USBF and G-USBF with different system parameters is evaluated and compared. For an intuitive comparison, the obtained results of SCA-USBF and J-USBF are normalized by the results of G-USBF, defined as $R_{1}=\frac{N_{\mathrm{S}}}{N_{\mathrm{G}}}\times100\%$ and $R_{2}=\frac{N_{\mathrm{J}}}{N_{\mathrm{G}}}\times100\%$, where $N_{\mathrm{S}}$, $N_{\mathrm{J}}$ and $N_{\mathrm{G}}$ are the average numbers of scheduled users obtained through SCA-USBF, J-USBF and G-USBF, respectively. In addition, we also define the result percentage of CNN-USBF relative to J-USBF as $R_{3}=\frac{N_{\mathrm{C}}}{N_{\mathrm{J}}}\times100\%$, where $N_{\mathrm{C}}$ is the number of scheduled users obtained through CNN-USBF. \subsubsection{Performance with Various $K$ and $(d_{l},d_{r})$} This experiment investigates the influences of $K$ and $(d_{l},d_{r})$ and compares the performance of J-USBF with G-USBF and SCA-USBF, as well as with CNN-USBF in large-scale user scenarios. Table~\ref{Tab-03} shows that when $K$ is small, the performance of J-USBF is closer to that of G-USBF, because sufficient system resources are conducive to model optimization. J-USBF remains stable when $K$ changes from 20 to 50, with at most a $2.56\%$ performance degradation. Besides, the performance of J-USBF improves as the distance interval changes from $20\mathrm{m}$ to $40\mathrm{m}$, since a smaller distance interval leads to less diversity among users, which makes the learning of J-USBF more difficult. In Fig.~\ref{Fig-Distance}, we show the average performance of these three algorithms with different $(d_{l},d_{r})$. It suggests that J-USBF achieves more stable performance, closer to that of G-USBF, as $(d_{l},d_{r})$ increases.
This is because the number of scheduled users decreases as $(d_{l},d_{r})$ increases, and the obtained results are more homogeneous, which is beneficial to the learning of J-USBF.
\begin{table*}[!ht] \centering \fontsize{8}{8}\selectfont \renewcommand{\arraystretch}{1.5} \newcolumntype{C}[1]{>{\centering}p{#1}} \caption{Performance normalized by G-USBF with various $K$.}\label{Tab-03} \begin{tabular}{|C{3em}|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{$K$} & \multicolumn{8}{c|}{$R_{1}$ and $R_{2}$ with varying $(d_{l},d_{r})$} \\\cline{2-9} & \multicolumn{2}{c|}{$(50\mathrm{m},70\mathrm{m})$} & \multicolumn{2}{c|}{$(60\mathrm{m},80\mathrm{m})$} & \multicolumn{2}{c|}{$(50\mathrm{m},90\mathrm{m})$} & \multicolumn{2}{c|}{$(60\mathrm{m},100\mathrm{m})$} \\\cline{2-9} & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ \\\hline 10 & $100\%$ & \blue{$95.68\%$} & $99.98\%$ & \blue{$94.70\%$} & $100\%$ & \blue{$93.94\%$} & $99.89\%$ & \blue{$93.41\%$} \\\cline{1-9} 20 & $99.67\%$ & \blue{$90.04\%$} & $99.64\%$ & \blue{$91.35\%$} & $99.52\%$ & \blue{$92.02\%$} & $99.22\%$ & \blue{$92.32\%$} \\\cline{1-9} 30 & $99.94\%$ & \blue{$89.68\%$} & $99.63\%$ & \blue{$90.33\%$} & $98.80\%$ & \blue{$90.57\%$} & $98.77\%$ & \blue{$91.25\%$} \\\cline{1-9} 40 & $99.86\%$ & \blue{$88.91\%$} & $99.54\%$ & \blue{$89.86\%$} & $98.52\%$ & \blue{$90.27\%$} & $98.24\%$ & \blue{$91.08\%$} \\\cline{1-9} 50 & $99.84\%$ & \blue{$88.15\%$} & $98.73\%$ & \blue{$88.79\%$} & $97.48\%$ & \blue{$89.46\%$} & $97.10\%$ & \blue{$90.15\%$} \\\hline \end{tabular} \end{table*}
\begin{figure}[!ht] \centering \includegraphics[width=0.6\columnwidth,keepaspectratio]{Fig-Distance.eps} \caption{Average number of scheduled users with various $(d_{l},d_{r})$.} \label{Fig-Distance} \end{figure}
\textred{Considering large-scale user scenarios, we focus on the performance comparison of J-USBF and CNN-USBF, while G-USBF and SCA-USBF are excluded due to their high computational overhead. Table~\ref{Tab-04} shows that the performance gap between CNN-USBF and J-USBF widens as $K$ increases; in particular, when $K=200$ and $(d_{l},d_{r})=(60\mathrm{m},100\mathrm{m})$, the performance of the former only reaches $87.36\%$ of that of the latter. This indicates that incorporating WCN topology information into model learning is helpful for performance improvement and stability maintenance.}
\begin{table}[!ht] \centering \fontsize{8}{8}\selectfont \renewcommand{\arraystretch}{1.5} \newcolumntype{C}[1]{>{\centering}p{#1}} \captionsetup{labelfont={color=red},font={color=red}} \caption{Performance normalized by J-USBF with various $K$.}\label{Tab-04} {\color{red}\begin{tabular}{|C{3em}|c|c|c|c|} \hline \multirow{2}{*}{$K$} & \multicolumn{4}{c|}{$R_{3}$ with varying $(d_{l},d_{r})$} \\\cline{2-5} & $(50\mathrm{m},70\mathrm{m})$ & $(60\mathrm{m},80\mathrm{m})$ & $(50\mathrm{m},90\mathrm{m})$ & $(60\mathrm{m},100\mathrm{m})$ \\\cline{1-5} 50 & $93.06\%$ & $96.26\%$ & $92.42\%$ & $93.95\%$ \\\cline{1-5} 100 & $92.89\%$ & $90.83\%$ & $91.42\%$ & $90.80\%$ \\\cline{1-5} 150 & $89.14\%$ & $90.53\%$ & $90.59\%$ & $89.46\%$ \\\cline{1-5} 200 & $88.75\%$ & $88.38\%$ & $87.51\%$ & $87.36\%$ \\\hline \end{tabular}} \end{table}
\subsubsection{Performance with Various SNR Settings} This experiment compares the performance of J-USBF, SCA-USBF and G-USBF with different SNR settings, and the obtained results are summarized in Table~\ref{Tab-05}.
It is observed that J-USBF achieves competitive performance (larger than $90.77\%$) with $\mathrm{SNR}=5$~dB, while SCA-USBF maintains over $95.73\%$ near-optimal performance compared with G-USBF. Although the performance gap of J-USBF is enlarged as $K$ increases, the degradation trend is rather slow. For the configuration $\mathrm{SNR}=15$~dB and $(d_{l},d_{r})=(50\mathrm{m},100\mathrm{m})$, J-USBF obtains only a $1.78\%$ relative performance gap with G-USBF when $K$ changes from 20 to 50. Even when $(d_{l},d_{r})=(100\mathrm{m},150\mathrm{m})$, J-USBF still maintains a stable performance. \textred{Moreover, Fig.~\ref{Fig-SNR} illustrates that the gap between SCA-USBF and J-USBF increases as the SNR changes from 0~dB to 20~dB. As the SNR increases, the channel conditions become better and more users may meet the QoS requirement. Therefore, the solution space for problem~\eqref{Eq.(05)} enlarges and SCA-USBF shows its advantages under this condition, because it obtains optimal/suboptimal results. On the other hand, it is difficult for J-USBF to capture the optimal value as the solution space increases in such circumstances. However, the gap between SCA-USBF and J-USBF decreases when the SNR changes from 20~dB to 30~dB, since the much better channel conditions are sufficient for serving all the users.}
\begin{table*}[!ht] \centering \fontsize{8}{8}\selectfont \renewcommand{\arraystretch}{1.5} \newcolumntype{C}[1]{>{\centering}p{#1}} \caption{Performance normalized by G-USBF with various SNR settings.}\label{Tab-05} \begin{tabular}{|C{4em}|C{3em}|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{$\mathrm{SNR}(\mathrm{dB})$} & \multirow{3}{*}{$K$} & \multicolumn{8}{c|}{$R_{1}$ and $R_{2}$ with varying $(d_{l},d_{r})$} \\\cline{3-10} && \multicolumn{2}{c|}{$(60\mathrm{m},90\mathrm{m})$} & \multicolumn{2}{c|}{$(90\mathrm{m},120\mathrm{m})$} & \multicolumn{2}{c|}{$(50\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c|}{$(100\mathrm{m},150\mathrm{m})$} \\\cline{3-10} && $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ \\\hline \multirow{3}{*}{5} & 10 & $98.96\%$ & \blue{$92.13\%$} & $98.43\%$ & \blue{$94.50\%$} & $99.37\%$ & \blue{$92.56\%$} & $97.74\%$ & \blue{$93.17\%$} \\\cline{2-10} & 20 & $98.22\%$ & \blue{$91.08\%$} & $97.82\%$ & \blue{$92.43\%$} & $98.82\%$ & \blue{$91.15\%$} & $96.59\%$ & \blue{$92.03\%$} \\\cline{2-10} & 30 & $97.17\%$ & \blue{$90.77\%$} & $96.58\%$ & \blue{$91.04\%$} & $97.75\%$ & \blue{$90.83\%$} & $95.73\%$ & \blue{$91.66\%$} \\\hline \multirow{3}{*}{15} & 20 & $99.99\%$ & \blue{$90.15\%$} & $99.60\%$ & \blue{$90.69\%$} & $100\%$ & \blue{$90.48\%$} & $99.17\%$ & \blue{$91.15\%$} \\\cline{2-10} & 30 & $99.87\%$ & \blue{$89.20\%$} & $99.61\%$ & \blue{$89.78\%$} & $99.97\%$ & \blue{$89.52\%$} & $98.46\%$ & \blue{$90.60\%$} \\\cline{2-10} & 50 & $99.64\%$ & \blue{$88.34\%$} & $99.55\%$ & \blue{$89.06\%$} & $99.80\%$ & \blue{$88.70\%$} & $97.36\%$ & \blue{$89.94\%$} \\\hline \end{tabular} \end{table*}
\begin{figure*}[ht] \centering \subfigure[$(d_{l},d_{r})=(60\mathrm{m},90\mathrm{m})$]{ \label{Fig-SNR1} \includegraphics[width=0.45\linewidth]{Fig-SNR1.eps}} \subfigure[$(d_{l},d_{r})=(100\mathrm{m},150\mathrm{m})$]{ \label{Fig-SNR2} \includegraphics[width=0.45\linewidth]{Fig-SNR2.eps}} \caption{Performance of the algorithms with various SNR settings.} \label{Fig-SNR} \end{figure*}
\subsubsection{Performance with Various SINR Requirements} The ultimate scheduling results of the investigated problem are significantly affected by the per-user minimum SINR requirement,
where the value $\widetilde{\gamma}=F_{\mathrm{\gamma}}(D,n,\epsilon)$ is obtained with different system parameters $D$, $n$, and $\epsilon$, and the results are summarized in Table~\ref{Tab-06}. From the table, it is observed that the average performance of J-USBF remains above $88.97\%$ compared with G-USBF under different SINR requirements and user distribution distances. However, one needs to point out that with the reduction of the SINR requirement, the performance improvement of J-USBF is lower than that of G-USBF, especially when $(d_{l},d_{r})=(60\mathrm{m},80\mathrm{m})$. Therefore, J-USBF shows a slight performance degradation compared with G-USBF when the SINR requirement is reduced, while the performance improvement of SCA-USBF increases at the same time.
\begin{table*}[!ht] \centering \fontsize{8}{8}\selectfont \renewcommand{\arraystretch}{1.5} \newcolumntype{C}[1]{>{\centering}p{#1}} \caption{Performance normalized by G-USBF with various SINR requirements.}\label{Tab-06} \begin{tabular}{|C{7em}|C{3em}|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{$F_{\mathrm{\gamma}}(D,n,\epsilon)$} & \multirow{3}{*}{$\widetilde{\gamma}$} & \multicolumn{8}{c|}{$R_{1}$ and $R_{2}$ with varying $(d_{l},d_{r})$} \\\cline{3-10} && \multicolumn{2}{c|}{$(60\mathrm{m},80\mathrm{m})$} & \multicolumn{2}{c|}{$(80\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c|}{$(60\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c|}{$(80\mathrm{m},120\mathrm{m})$} \\\cline{3-10} && $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ \\\hline $(256,256,10^{-6})$ & $1.633$ & $99.92\%$ & \blue{$88.97\%$} & $99.96\%$ & \blue{$90.36\%$} & $99.97\%$ & \blue{$89.82\%$} & $99.91\%$ & \blue{$91.03\%$} \\\hline $(256,128,10^{-6})$ & $5.054$ & $99.63\%$ & \blue{$90.33\%$} & $98.30\%$ & \blue{$91.87\%$} & $98.77\%$ & \blue{$91.25\%$} & $98.36\%$ & \blue{$92.78\%$} \\\hline $(256,96,10^{-6})$ & $9.291$ & $96.41\%$ & \blue{$90.79\%$} & $94.22\%$ & \blue{$92.94\%$} & $95.84\%$ & \blue{$91.84\%$} & $95.38\%$ & \blue{$93.62\%$} \\\hline $(256,64,10^{-6})$ & $27.97$ & $95.58\%$ & \blue{$91.05\%$} & $93.95\%$ & \blue{$93.05\%$} & $94.55\%$ & \blue{$92.76\%$} & $94.19\%$ & \blue{$94.08\%$} \\\hline \end{tabular} \end{table*}
\subsubsection{Generalizability with Various User Distributions} Generalizability is another critical evaluation property for J-USBF, and it focuses on investigating whether the trained network model has the ability to perform well in unknown WCN scenarios. To test the generalizability, J-USBF is trained on a certain scenario whose system parameters are different from the test ones. Specifically, J-USBF is trained with $(d_{l},d_{r})=(100\mathrm{m},120\mathrm{m})$, and the trained model is then applied to the test scenarios with different $(d_{l},d_{r})$, without any further training\footnote{\textred{For scenarios with different numbers of users $K$, numbers of antennas $N$ and SINR requirements $\widetilde{\gamma}$, the generalizability of J-USBF performs poorly and needs to be further optimized.}}. Table~\ref{Tab-07} shows the comparison results of G-USBF and J-USBF, where $R_{4}=\frac{N_{\mathrm{J},(100,120)}}{N_{\mathrm{G}}}\times100\%$ represents the average performance of J-USBF normalized by G-USBF and $N_{\mathrm{J},(100,120)}$ is the average number of scheduled users using J-USBF. From the table, it is observed that J-USBF performs well over the neighboring user distribution distances when the test distance interval is $40\mathrm{m}$.
Moreover, when $(d_{l},d_{r})=(80\mathrm{m},100\mathrm{m})$, which has no intersection with $(100\mathrm{m},120\mathrm{m})$, the performance of J-USBF is still acceptable at $K=10$. Based on the aforementioned analysis, our proposed J-USBF can be well generalized to scenarios with neighboring user distribution distances.
\begin{table}[!ht] \centering \fontsize{8}{8}\selectfont \renewcommand{\arraystretch}{1.5} \newcolumntype{C}[1]{>{\centering}p{#1}} \caption{Generalizability with various user distributions.}\label{Tab-07} \begin{tabular}{|C{3em}|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{$K$} & \multicolumn{8}{c|}{$N_{\mathrm{G}}$ and $R_{4}$ with varying $(d_{l},d_{r})$} \\\cline{2-9} & \multicolumn{2}{c|}{$(100\mathrm{m},120\mathrm{m})$} & \multicolumn{2}{c|}{$(80\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c|}{$(80\mathrm{m},120\mathrm{m})$} & \multicolumn{2}{c|}{$(100\mathrm{m},140\mathrm{m})$} \\\cline{2-9} & $N_{\mathrm{G}}$ & $R_{4}$ & $N_{\mathrm{G}}$ & $R_{4}$ & $N_{\mathrm{G}}$ & $R_{4}$ & $N_{\mathrm{G}}$ & $R_{4}$ \\\hline 10 & $5.068$ & \blue{$94.97\%$} & $7.504$ & \blue{$86.06\%$} & $6.36$ & \blue{$91.86\%$} & $4.394$ & \blue{$92.13\%$} \\\hline 20 & $5.624$ & \blue{$93.92\%$} & $8.548$ & \blue{$84.63\%$} & $7.496$ & \blue{$89.58\%$} & $5.068$ & \blue{$91.46\%$} \\\hline 30 & $5.924$ & \blue{$92.69\%$} & $9.054$ & \blue{$83.34\%$} & $8.16$ & \blue{$88.31\%$} & $5.372$ & \blue{$88.16\%$} \\\hline 40 & $6.038$ & \blue{$91.24\%$} & $9.304$ & \blue{$82.75\%$} & $8.508$ & \blue{$87.92\%$} & $5.608$ & \blue{$87.73\%$} \\\hline 50 & $6.15$ & \blue{$90.80\%$} & $9.538$ & \blue{$83.04\%$} & $8.818$ & \blue{$85.87\%$} & $5.782$ & \blue{$86.09\%$} \\\hline \end{tabular} \end{table}
\subsection{Computational Complexity Analysis} \textred{In this subsection, the computational complexity of G-USBF, SCA-USBF and J-USBF is analyzed and compared. Considering the differences in implementation platforms and algorithm design languages, we count the floating-point operations of the proposed algorithms. Firstly, G-USBF includes the US optimization and BF design, whose floating-point operation count is about $\sum\limits_{\hat{k}=2}^{K}4(K-\hat{k}+1)(I_{1}(\hat{k}^{3}N+5\hat{k}^{2}N)+\hat{k}^{2})$, where $\hat{k}$ and $I_{1}$ represent the number of scheduled users and the number of iterations, respectively. Secondly, SCA-USBF includes the inner and outer optimizations, whose floating-point operation count is about $4I_{3}(I_{2}(7K^{2}N+4KN+14K^{2})+K(N^{3}+2N^{2}+2N))$, where $I_{2}$ and $I_{3}$ represent the numbers of iterations of the two parts. For J-USBF, since the JEEPON model is trained offline, we mainly consider the computation of the testing stage, including the graph representation module, the GCN module and the SINR module. For simplicity, we assume that the GCN module is composed of MLPs with dimensions $\mathcal{H}\triangleq\{h_{i}\}$. Therefore, the floating-point operation count of J-USBF is about $2(2K^{2}N+2KN^{2}+K\sum\limits_{\ell=1}^{L}\sum\limits_{i=1}^{|\mathcal{H}|}(2+h_{\ell,i-1})h_{\ell,i})$. For intuitive comparison, Fig.~\ref{Fig_Complexity} compares the floating-point computational magnitudes of the algorithms for different numbers of users and iterations.
The computational magnitude of J-USBF is lower than that of G-USBF and SCA-USBF, which indicates its computational efficiency advantage.}
\begin{figure}[!ht] \centering \includegraphics[width=0.6\columnwidth,keepaspectratio]{Fig-Complexity.eps} \captionsetup{labelfont={footnotesize,color=red},font={footnotesize,color=red}} \color{red}\caption{The floating-point computational magnitude of the algorithms.} \label{Fig_Complexity} \end{figure}
\section{Conclusions} In this paper, the joint US-BF optimization problem is studied for the multiuser MISO downlink system. Specifically, with the help of the uplink-downlink duality theory and mathematical transformations, we transform the original problem into a convex optimization problem, and propose the G-USBF, SCA-USBF and J-USBF algorithms. Numerical results show that J-USBF achieves performance close to that of the baseline algorithms with higher computational efficiency. Additionally, the proposed J-USBF also enjoys generalizability in dynamic WCN scenarios. \textred{For future directions, it is interesting and meaningful to address the heavy CSI acquisition burden and signaling overhead caused by the instantaneous perfect CSI assumed in this work. The deep learning based resource allocation algorithm needs to be redesigned, and statistical CSI may be helpful to achieve this goal.}
\begin{appendices} \section{Design of The G-USBF Algorithm} \textred{In this appendix, the G-USBF algorithm is proposed to solve problem~\eqref{Eq.(05)}, which is inspired by the work in~\cite{zhang2011adaptive} and the near-far effect of WCNs. The feasibility problem in reference~\cite[problem (35)]{he2020beamforming} forms the basis of G-USBF, which is formulated as follows \begin{subequations}\label{Eq.(A01)} \begin{align} \min\limits_{\{\mathbf{w}_{k}\}}\sum\limits_{k\in\mathcal{S}}||\mathbf{w}_{k}||^{2},\\ \mathrm{s.t.}~{r_k}\leq{R}(\overrightarrow{\gamma}_{k}), \end{align} \end{subequations} where $\mathbf{w}_{k}\in\mathbb{C}^{N\times{1}}$ is the BF vector of user $k$, and its downlink power is denoted as $p_{k}=\|\mathbf{w}_{k}\|^{2}$. Solving problem~\eqref{Eq.(A01)} determines whether the scheduled user set $\mathcal{S}$ is feasible, i.e., whether the user rate constraint and the BS power budget can be satisfied. G-USBF is designed with two stages, namely, the conventional greedy search stage and the user set optimization stage, as summarized in \textbf{Algorithm}~\ref{Alg.(A01)}. Here, G-USBF expands the scheduled user set $\mathcal{S}$ from the candidate user set $\mathcal{K}$ in the first stage, and then optimizes $\mathcal{S}$ in the second stage to achieve the goal of scheduling more users. Since the G-USBF algorithm achieves performance close to that of the exhaustive search algorithm with lower computational complexity, it is used as the baseline.}
\begin{algorithm}[!ht] {\color{red} \caption{The G-USBF Algorithm for Problem~\eqref{Eq.(05)}}\label{Alg.(A01)} \begin{algorithmic}[1] \STATE Input the candidate user set $\mathcal{K}$ and user CSI $\{\mathbf{h}_{k}\}$, and initialize the scheduled user set $\mathcal{S}=\varnothing$. \STATE Sort the user channels of $\mathcal{K}$ from best to worst via the MRT method, and add the top-ranked user to $\mathcal{S}$. \STATE In the greedy search stage, move one user from $\mathcal{K}$ to $\mathcal{S}$ in sequence without repetition, and obtain temporary user sets with $|\mathcal{K}\backslash\mathcal{S}|$ groups.
\STATE For each temporary user set, solve problem~\eqref{Eq.(A01)} to obtain $\{p_{k},\mathbf{w}_{k}\}$, and preserve the user set $\mathcal{S}_{1}^{(\ast)}$ with the smallest required power. \STATE If $\mathcal{K}\neq\varnothing$ and $\sum\limits_{k\in\mathcal{S}}p_{k}\leq{P}$, let $\mathcal{K}\leftarrow\mathcal{K}\backslash\mathcal{S}_{1}^{(\ast)}$ and $\mathcal{S}\leftarrow\mathcal{S}_{1}^{(\ast)}$, then go to step 3. Otherwise, go to step 6. \STATE In the user set optimization stage, move the user with the largest power consumption from $\mathcal{S}$ to $\mathcal{K}$, and obtain the user set $\mathcal{S}_{2}$. \STATE Let $\mathcal{S}\leftarrow\mathcal{S}_{2}$ and run the greedy search again to obtain a new user set $\mathcal{S}_{2}^{(\ast)}$. If $|\mathcal{S}_{1}^{(\ast)}|=|\mathcal{S}_{2}^{(\ast)}|$, stop the iteration and output $\mathcal{S}_{2}^{(\ast)}$ and $\{p_{k},\mathbf{w}_{k}\}$. Otherwise, let $\mathcal{S}\leftarrow\mathcal{S}_{2}^{(\ast)}$ and go to step 6. \end{algorithmic}} \end{algorithm}
\section{Design of The CNN-USBF Algorithm} \textred{In this appendix, the CNN-USBF algorithm is proposed to solve problem~\eqref{Eq.(23)}, which is inspired by the work in~\cite{li2021survey}. In particular, CNN-USBF takes the WCN graph representation as input and outputs the US-PA optimization strategy and BF vectors. To be specific, the update rule of CNN-USBF for node $v$ in graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ is formulated as} {\color{red}\begin{equation}\label{Eq.(32)} \begin{aligned} \mathrm{Input:}&~\mathbf{D}_{v}^{(0)}=[\mathbf{x}_{v},F_{\mathrm{max}}(\{\mathbf{e}_{u,v}\}),F_{\mathrm{mean}}(\{\mathbf{e}_{u,v}\})],u\in\mathcal{N}_{v},\\ \mathrm{CNN\raisebox{0mm}{-}stage:}&~\mathbf{D}_{v}^{(i)}=F_{\mathrm{std}}(\mathrm{Cov1d}(\mathbf{D}_{v}^{(i-1)})),i=1,2,\cdots,N_{\mathrm{1}},\\ \mathrm{DNN\raisebox{0mm}{-}stage:}&~\mathbf{D}_{v}^{(i)}=F_{\mathrm{std}}(\mathrm{LNN}(\mathbf{D}_{v}^{(i-1)})),i=N_{\mathrm{1}}+1,\cdots,N_{\mathrm{1}}+N_{\mathrm{2}},\\ \mathrm{Output:}&~\mathbf{D}_{v}^{(N_{\mathrm{2}})}=[\kappa_{v}^{(\ast)},q_{v}^{(\ast)}], \mathrm{and~BF~vector}~\mathbf{w}_{v}^{(\ast)},v\in\mathcal{V}, \end{aligned} \end{equation}} \textred{where $N_{\mathrm{1}}$ and $N_{\mathrm{2}}$ denote the numbers of CNN and DNN layers, respectively, $\mathbf{D}_{v}^{(0)}$ denotes the features of node $v$ and its neighboring edges, $\mathbf{D}_{v}^{(N_{\mathrm{2}})}$ is the US-PA strategy of node $v$, and $F_{\mathrm{std}}(\mathbf{z})=F_{\mathrm{AC}}(F_{\mathrm{BN}}(\mathbf{z}))$ is the standardization function used to standardize the network input to accelerate the training process and reduce the generalization error, which is implemented by BN and AC layers. The neural network module of CNN-USBF is constructed with CNN and DNN parts, which are implemented and trained with \emph{PyTorch} and the PDLF, respectively. The algorithmic steps of CNN-USBF follow those of J-USBF.
Note that unless mentioned otherwise, the neural network structure of CNN-USBF follows Table~\ref{Tab-08}, and the network is trained separately for different WCN scenarios.}
\begin{table}[!ht] \centering \renewcommand{\arraystretch}{1.1} \captionsetup{labelfont={color=red},font={color=red}} \caption{The neural network structure of CNN-USBF.}\label{Tab-08} {\color{red}\begin{tabular}{|l|l|} \hline Layer & Parameters \\\hline Layer 1 (Input) & Input of size $3K$, batch of size $K$, $N_{\mathrm{e}}$ epochs \\\hline Layer 2 (Cov1d, BN and AC) & Input=$3$, output=$256$; Input=$256$; LReLU \\\hline Layer 3 (Cov1d, BN and AC) & Input=$256$, output=$128$; Input=$128$; LReLU \\\hline Layer 4 (Cov1d, BN and AC) & Input=$128$, output=$64$; Input=$64$; LReLU \\\hline Layer 5 (LNN, BN and AC) & Input=$64$, output=$32$; Input=$32$; LReLU \\\hline Layer 6 (LNN, BN and AC) & Input=$32$, output=$16$; Input=$16$; LReLU \\\hline Layer 7 (LNN, BN and AC) & Input=$16$, output=$2$; Input=$2$; LReLU \\\hline Layer 8 (Output and PAC) & Output of size $2K+KN$, Adam optimizer \\\hline \end{tabular}} \end{table} \end{appendices} \begin{small}
\section{Introduction} Vector Quantised Variational AutoEncoder (VQ-VAE)~\cite{van2017neural} is a popular method developed to compress images into discrete representations for generation. Typically, after compression into a discrete representation by a convolutional network, an autoregressive model is used to model and sample in the discrete latent space, including the PixelCNN family~\cite{oord2016conditional,van2016pixel,chen2018pixelsnail}, the transformer family~\cite{ramesh2021zero,chen2020generative}, etc. However, in addition to the disadvantage of a huge number of model parameters, these autoregressive models can only make predictions based on the observed pixels (the upper-left part of the target pixel) due to the inductive bias caused by the strict adherence to the progressive scan order~\cite{khan2021transformers,bengio2015scheduled}. If the conditional information is located at the end of the autoregressive sequence, it is difficult for the model to obtain the relevant information.
\begin{figure} \centering \includegraphics[scale=0.55]{pafidr1234.png} \caption{FID v.s. Operations and Parameters. The size of the blobs is proportional to the number of network parameters, the X-axis indicates FLOPs on a log scale and the Y-axis is the FID score.} \label{fig1} \end{figure}
A recent alternative generative model is the Denoising Diffusion Model, which can effectively mitigate the lack of global information~\cite{sohl2015deep,ho2020denoising}, while also achieving comparable or state-of-the-art performance in text~\cite{hoogeboom2021argmax,austin2021structured}, image~\cite{dhariwal2021diffusion} and speech generation~\cite{kong2020diffwave} tasks. Diffusion models are parameterized Markov chains trained to translate simple distributions to more sophisticated target data distributions in a finite set of steps. Typically the Markov chain begins with an isotropic Gaussian distribution in continuous state space, with the transitions of the chain trained to reverse a diffusion process that gradually adds Gaussian noise to the source images. In the reverse process, since each step is based on the global information of the previous step in the chain, the diffusion model is endowed with the ability to capture global information. However, the diffusion model has a non-negligible disadvantage in that the time and computational effort involved in generating images are enormous. The main reason is that the reverse process typically contains thousands of steps. Although we do not need to iterate through all these steps when training, all of them are still required when generating a sample, which is much slower compared to GANs and even autoregressive models. Some recent works~\cite{song2020denoising,nichol2021improved} have attempted to address these issues by decreasing the number of sampling steps, but the computational cost is still high as each step of the reverse process generates a full-resolution image. In this work, we propose the \textbf{V}ector \textbf{Q}uantized \textbf{D}iscrete \textbf{D}iffusion \textbf{M}odel (VQ-DDM), a versatile framework for image generation consisting of a discrete variational autoencoder and a discrete diffusion model. VQ-DDM consists of two stages: (1) learning an abundant and efficient discrete representation of images, and (2) fitting the prior distribution of such latent visual codes via a discrete diffusion model. VQ-DDM substantially reduces the computational resources and time required to generate high-resolution images by using a discrete scheme.
The common problems of autoregressive models, namely the lack of global context and the overly large number of parameters, are then addressed by fitting the latent prior with a discrete diffusion model. Finally, since a biased codebook will limit the generation quality, while the model size also depends on the number of categories, we propose a re-build and fine-tune (ReFiT) strategy to construct a codebook with higher utilization, which also reduces the number of parameters of our model. In summary, our key contributions include the following:
\begin{itemize} \item VQ-DDM fits the prior over discrete latent codes with a discrete diffusion model. The use of the diffusion model allows the generative model to consider global information instead of only focusing on partially seen context, avoiding sequential bias. \item We propose a ReFiT approach to improve the utilisation of latent representations in the visual codebook, which can increase the code usage of VQ-GAN from $31.85\%$ to $97.07\%$, while the FID between the reconstructed images and the original training images is reduced from $10.18$ to $5.64$ on CelebA-HQ $256\times256$. \item VQ-DDM is highly efficient in terms of both the number of parameters and the generation speed. As shown in Figure~\ref{fig1}, using only 120M parameters, it outperforms VQ-VAE-2 with around 10B parameters and is comparable with VQ-GAN with 1B parameters in image generation tasks in terms of image quality. It is also 10 $\sim$ 100 times faster than other diffusion models for image generation~\cite{song2020denoising,ho2020denoising}. \end{itemize}
\begin{figure*}[h] \centering \includegraphics[scale=0.15]{pipeline.png} \caption{The proposed VQ-DDM pipeline contains 2 stages: (1) Compress the image into discrete variables via a discrete VAE. (2) Fit a prior distribution over the discrete codes with a diffusion model. Black squares in the diffusion diagram illustrate states whose underlying distributions are uninformative, but which become progressively more specific during the reverse process. The bar chart at the bottom of the image represents the probability of a particular discrete variable being sampled. } \label{pipeline} \end{figure*}
\section{Preliminaries} \subsection{Diffusion Models in continuous state space} Given data $\mathbf{x}_0$ from a data distribution $q(\mathbf{x}_0)$, the diffusion model consists of two processes: the \textit{diffusion process} and the \textit{reverse process}~\cite{sohl2015deep,ho2020denoising}. The \textit{diffusion process} progressively destroys the data $\mathbf{x}_0$ into $\mathbf{x}_T$ over $T$ steps, via a fixed Markov chain that gradually introduces Gaussian noise to the data according to a variance schedule $\beta_{1:T} \in (0,1]^T$ as follows: \begin{equation} q(\mathbf{x}_{1:T}|\mathbf{x}_0) = \prod_{t=1}^T q(\mathbf{x}_t|\mathbf{x}_{t-1}) , \end{equation} \begin{equation} q(\mathbf{x}_t | \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t;\sqrt{1-\beta_t}\mathbf{x}_{t-1},\beta_t \mathbf{I}) . \end{equation} With an adequate number of steps $T$ and a suitable variance schedule $\beta$, $p(\mathbf{x}_T)$ becomes an isotropic Gaussian distribution.
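To illustrate this behaviour, the following is a minimal NumPy sketch (a toy example of our own, with an assumed linear variance schedule rather than the schedule used in this work) that repeatedly applies the single-step transition $q(\mathbf{x}_t|\mathbf{x}_{t-1})$ and shows the data drifting towards an isotropic Gaussian:
\begin{verbatim}
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # a simple linear schedule (assumption)

def diffuse(x0, t, rng=np.random.default_rng(0)):
    # Apply the single-step Gaussian transition q(x_t | x_{t-1}) t times.
    x = x0.copy()
    for beta in betas[:t]:
        noise = rng.standard_normal(x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    return x

x0 = np.ones((16, 16))               # a toy constant "image"
xT = diffuse(x0, T)
print(round(float(xT.mean()), 2), round(float(xT.std()), 2))
# roughly 0 and 1, i.e. close to N(0, I)
\end{verbatim}
In practice, $\mathbf{x}_t$ is sampled directly from the closed-form marginal $q(\mathbf{x}_t|\mathbf{x}_0)$ given below rather than by iterating over all previous steps.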
The \textit{reverse process} is defined as a Markov chain parameterized by $\theta$, which is used to restore the data from the noise: \begin{equation} p_{\theta}(\mathbf{x}_{0:T}) = p(\mathbf{x}_T) \prod_{t=1}^T p_{\theta} (\mathbf{x}_{t-1}|\mathbf{x}_t), \end{equation} \begin{equation} p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}) = \mathcal{N} (\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_t,t),\Sigma_{\theta}(\mathbf{x}_t,t)). \end{equation} The objective of training is to find the best $\theta$ to fit the data distribution $q(\mathbf{x}_0)$ by optimizing the variational lower bound (VLB)~\cite{kingma2013auto} \begin{equation} \begin{split} \mathbb{E}_{q(\mathbf{x}_0)}& [\log p_{\theta}(\mathbf{x}_0)]\\ = &\mathbb{E}_{q(\mathbf{x}_0)}\log\mathbb{E}_{q(\mathbf{x}_{1:T}|\mathbf{x}_0)} \left[ \frac{p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T}|\mathbf{x}_0)} \right] \\ \geq &\mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log \frac{p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T}|\mathbf{x}_0)} \right] =: L_\mathrm{vlb}. \end{split} \label{vlb} \end{equation} Ho \etal \cite{ho2020denoising} revealed that the variational lower bound in Eq.~\ref{vlb} can be calculated with closed-form expressions instead of Monte Carlo estimates, as the \textit{diffusion process} posteriors and marginals are Gaussian, which allows sampling $\mathbf{x}_t$ at an arbitrary step $t$ with $\alpha_t = 1-\beta_t$, $\bar{\alpha}_t=\prod_{s=0}^t \alpha_s$ and $\tilde{\beta}_t=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$: \begin{equation} q(\mathbf{x}_t|\mathbf{x}_0) = \mathcal{N}(\mathbf{x}_t | \sqrt{\bar{\alpha}_t} \mathbf{x}_0, (1-\bar{\alpha}_t)\mathbf{I} ), \end{equation} \begin{equation} \begin{split} L_\mathrm{vlb} = \mathbb{E}_{q(\mathbf{x}_0)} &[ D_{\mathrm{KL}}(q(\mathbf{x}_T|\mathbf{x}_0) || p(\mathbf{x}_T)) - \log p_{\theta} ( \mathbf{x}_0 | \mathbf{x}_1 ) \\ &+ \sum_{t=2}^T D_{\mathrm{KL}}(q(\mathbf{x}_{t-1}|\mathbf{x}_t,\mathbf{x}_0) || p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)) ]. \end{split} \label{kl} \end{equation} Thus the reverse process can be parameterized by neural networks $\epsilon_{\theta}$ and $\upsilon_{\theta}$, which can be defined as: \begin{equation} \mu_{\theta}(\mathbf{x}_t,t) = \frac{1}{\sqrt{\alpha_t}} \left(\mathbf{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}} \epsilon_{\theta} (\mathbf{x}_t,t) \right), \end{equation} \begin{equation} \begin{split} \Sigma_{\theta}(\mathbf{x}_t,t) = \exp(\upsilon_{\theta}&(\mathbf{x}_t,t)\log\beta_t \\ &+ (1-\upsilon_{\theta}(\mathbf{x}_t,t))\log\tilde{\beta}_t). \end{split} \end{equation} Using a modified variant of the VLB loss as a simple loss function offers better results in the case of a fixed $\Sigma_{\theta}$~\cite{ho2020denoising}: \begin{equation} L_{\mathrm{simple}} = \mathbb{E}_{t,\mathbf{x}_0,\epsilon} \left[ ||\epsilon - \epsilon_{\theta}(\mathbf{x}_t,t)||^2 \right], \end{equation} which is a reweighted version resembling denoising score matching over multiple noise scales indexed by $t$~\cite{song2019generative}. Nichol \etal \cite{nichol2021improved} added an additional $L_{\mathrm{vlb}}$ term to the simple loss to guide a learned $\Sigma_{\theta}(\mathbf{x}_t,t)$, while keeping $\mu_{\theta}(\mathbf{x}_t,t)$ the dominant component of the total loss: \begin{equation} L_{\mathrm{hybrid}} = L_{\mathrm{simple}} + \lambda L_{\mathrm{vlb}}.
\end{equation} \subsection{Discrete Representation of Images} van den Oord \etal \cite{van2017neural} presented a discrete variational autoencoder with a categorical distribution as the latent prior, which is able to map images into a sequence of discrete latent variables with an encoder and reconstruct the image according to those variables with a decoder. Formally, given a codebook $\mathbb{Z}\in\mathbb{R}^{K\times d}$, where $K$ represents the capacity of the codebook, i.e., the number of latent variables, and $d$ is the dimension of each latent variable, after compressing the high-dimensional input data $\textbf{x}\in \mathbb{R}^{c\times H\times W}$ into latent vectors $\textbf{h}\in \mathbb{R}^{h\times w\times d}$ by an encoder $E$, $\textbf{z}$ is the quantised version of $\textbf{h}$, which substitutes each vector $h_{i,j}\in\textbf{h}$ by its nearest neighbor $z_k \in \mathbb{Z}$. The decoder $D$ is trained to reconstruct the data from the quantised encoding $\textbf{z}_q$: \begin{equation} \textbf{z} = \mathrm{Quantize}(\textbf{h}) := \mathrm{arg\ min}_k ||h_{i,j}-z_k|| , \end{equation} \begin{equation} \hat{\textbf{x}} = D(\textbf{z}) = D(\mathrm{Quantize}(E(\textbf{x}))). \end{equation} As $\mathrm{Quantize}(\cdot)$ contains the non-differentiable operation $\mathrm{arg\ min}$, the straight-through gradient estimator is used for back-propagating the reconstruction error from the decoder to the encoder. The whole model can be trained in an end-to-end manner by minimizing the following function: \begin{equation} L = ||\textbf{x}-\hat{\textbf{x}}||^2 + ||sg[E(\textbf{x})] - \textbf{z}|| + \beta || sg[\textbf{z}] - E(\textbf{x}) || , \label{vqeq} \end{equation} where $sg[\cdot]$ denotes the stop-gradient operator and, broadly, the three terms are the reconstruction loss, the codebook loss and the commitment loss, respectively. VQ-GAN~\cite{esser2021taming} extends VQ-VAE~\cite{van2017neural} in multiple ways. It substitutes the L1 or L2 loss of the original VQ-VAE with a perceptual loss~\cite{zhang2018unreasonable}, and adds an additional discriminator to distinguish between real and generated patches~\cite{CycleGAN2017}. The codebook update of the discrete variational autoencoder is intrinsically a dictionary learning process. Its objective uses an L2 loss to narrow the gap between the codes $\mathbb{Z}_t \in \mathbb{R}^{K_t\times d}$ and the encoder output $\textbf{h}\in\mathbb{R}^{ h\times w \times d}$ \cite{van2017neural}. In other words, the codebook training is like $k$-means clustering, where the cluster centers are the discrete latent codes. However, since the volume of the codebook space is dimensionless and $\textbf{h}$ is updated at each iteration, the discrete codes $\mathbb{Z}$ typically do not follow the encoder training quickly enough. Only a few codes get updated during training, with most unused after initialization.
\section{Methods} Our goal is to leverage the powerful generative capability of the diffusion model to perform high fidelity image generation tasks with a low number of parameters. Our proposed method, VQ-DDM, is capable of generating high fidelity images with a relatively small number of parameters and FLOPs, as summarized in Figure~\ref{pipeline}. Our solution starts by compressing the image into discrete variables via the discrete VAE and then fits the joint distribution over the discrete codes with a diffusion model. During diffusion training, the darker coloured parts in Figure~\ref{pipeline} represent noise introduced by uniform resampling.
At the last timestep, the latent codes have been completely corrupted into noise. In the sampling phase, the latent codes are first drawn from a uniform categorical distribution, and then resampled by performing the reverse process for $T$ steps to get the target latent codes. Eventually, the target latent codes are fed into the decoder to generate the image. \subsection{Discrete Diffusion Model} \label{ddm} Assume the discretization is done with $K$ categories, i.e.\ $z_t \in \{1,\dots,K\}$, with the one-hot vector representation given by $\textbf{z}_t \in \{0,1\}^K$. The corresponding probability distribution is expressed by $\textbf{z}_t^{\mathrm{logits}}$ in logits. We formulate the discrete diffusion process as \begin{equation} q(\textbf{z}_t|\textbf{z}_{t-1}) = \mathrm{Cat} (\textbf{z}_t ; \textbf{z}_{t-1}^{\mathrm{logits}} \mathbf{Q}_t ), \end{equation} where $\mathrm{Cat}(\textbf{x}|\textbf{p})$ is the categorical distribution parameterized by $\textbf{p}$, while $\mathbf{Q}_t$ is the process transition matrix. In our method, $\mathbf{Q}_t = (1-\beta_t)\textbf{I} + \beta_t / K $, which means $\textbf{z}_t$ has probability $1-\beta_t$ of keeping the state from the last timestep and probability $\beta_t$ of being resampled from a uniform categorical distribution. Formally, it can be written as \begin{equation} q(\textbf{z}_t|\textbf{z}_{t-1}) = \mathrm{Cat} (\textbf{z}_t ; (1-\beta_t)\textbf{z}_{t-1}^{\mathrm{logits}} + \beta_t / K). \label{ddp} \end{equation} It is straightforward to get $\textbf{z}_t$ from $\textbf{z}_0$ under the schedule $\beta_t$ with $\alpha_t = 1-\beta_t$, $\bar{\alpha}_t=\prod_{s=0}^t \alpha_s$: \begin{equation} q(\textbf{z}_t|\textbf{z}_{0}) = \mathrm{Cat}(\textbf{z}_t ; \bar{\alpha}_t \textbf{z}_0 + (1-\bar{\alpha}_t)/K) \label{ddp0} \end{equation} \begin{equation} \mathrm{or} \quad q(\textbf{z}_t|\textbf{z}_{0}) = \mathrm{Cat}(\textbf{z}_t ; \textbf{z}_0 \bar{\mathbf{Q}}_t) ; \ \bar{\mathbf{Q}}_t = \prod_{s=0}^t \mathbf{Q}_s. \end{equation} We use the same cosine noise schedule as \cite{nichol2021improved,hoogeboom2021argmax} because our discrete model is also established on latent codes with a small $16\times16$ resolution. Mathematically, it can be expressed in terms of $\bar{\alpha}_t$ by \begin{equation} \bar{\alpha}_t = \frac{f(t)}{f(0)}, \quad f(t) =\mathrm{cos}\left(\frac{t/T+s}{1+s} \times \frac{\pi}{2}\right)^2 . \label{noises} \end{equation} By applying Bayes' rule, we can compute the posterior $q(\textbf{z}_{t-1}|\textbf{z}_{t},\textbf{z}_{0})$ as: \begin{equation} \begin{split} q(\textbf{z}_{t-1} | \textbf{z}_{t},\textbf{z}_{0})& = \mathrm{Cat} \left(\textbf{z}_{t-1} ; \frac{\textbf{z}_t^{\mathrm{logits}} \mathbf{Q}_t^{\top} \odot \textbf{z}_0 \bar{\mathbf{Q}}_{t-1} }{\textbf{z}_0 \bar{\mathbf{Q}}_{t} {\textbf{z}_t^{\mathrm{logits}}}^{\top}} \right) \\ = \mathrm{Cat} &(\textbf{z}_{t-1} ; \ \boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0) / \sum_{k=1}^K \theta_k (z_{t,k},z_{0,k}) ), \\ \end{split} \label{qpost} \end{equation} \begin{equation} \begin{split} \boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0) = [\alpha_t \textbf{z}_t^{\mathrm{logits}} + & (1-\alpha_t)/ K] \\ &\odot [\bar{\alpha}_{t-1} \textbf{z}_0 + (1-\bar{\alpha}_{t-1}) / K].
\end{split} \end{equation} It is worth noting that $ \boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0) / \sum_{k=1}^K \theta_k (z_{t,k},z_{0,k})$ is the normalized version of $\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)$, and we use $\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)]$ to denote $ \boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0) / \sum_{k=1}^K \theta_k (z_{t,k},z_{0,k})$ below. Hoogeboom \etal \cite{hoogeboom2021argmax} predicted $\hat{\textbf{z}}_0$ from $\textbf{z}_t$ with a neural network $\mu(\textbf{z}_t,t)$, instead of directly predicting $p_{\theta}(\textbf{z}_{t-1}|\textbf{z}_{t})$. Thus the reverse process can be parameterized by the probability vector from $q(\textbf{z}_{t-1} | \textbf{z}_{t},\hat{\textbf{z}}_{0})$. Generally, the reverse process $p_{\theta}(\textbf{z}_{t-1}|\textbf{z}_{t})$ can be expressed by \begin{equation} \begin{split} p_{\theta}(\textbf{z}_0|\textbf{z}_1) & = \mathrm{Cat} (\textbf{z}_0 |\hat{\textbf{z}}_0), \\ p_{\theta}(\textbf{z}_{t-1}|\textbf{z}_{t})& = \mathrm{Cat} (\textbf{z}_{t-1} | \ \mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\hat{\textbf{z}}_0)]) . \end{split} \end{equation} Inspired by~\cite{jang2016categorical,maddison2016concrete}, we use a neural network $\mu(\mathbf{Z}_t,t)$ to learn and predict the noise $n_t$ and obtain the logits of $\hat{\mathbf{z}}_0$ from \begin{equation} \hat{\mathbf{z}}_0 = \mu(\mathbf{Z}_t,t) + \mathbf{Z}_t. \label{pnois} \end{equation} It is worth noting that the neural network $\mu(\cdot)$ operates on $\mathbf{Z}_t \in \mathbb{N}^{h\times w}$, where all the discrete representations $\mathbf{z}_t$ of the image are combined. The final noise prior $\mathbf{Z}_T$ is uninformative, and it is possible to sample each axis separately during inference. However, the reverse process is jointly informed and evolves towards a highly coupled $\mathbf{Z}_0$. We do not define a specific joint prior for $\mathbf{z}_t$, but encode the joint relationship into the learned reverse process. This is implicitly done in continuous-domain diffusion as well. As $\mathbf{z}_{t-1}$ is based on the whole previous representation $\mathbf{z}_t$, the reverse process can sample the whole discrete code map directly while capturing the global information. The loss function used is the VLB from Eq.~\ref{kl}, where the summed KL divergence for $T>2$ is given by \begin{equation} \begin{split} \mathrm{KL}( q(\textbf{z}_{t-1} | \textbf{z}_{t},\textbf{z}_{0}) || p_{\theta}(\textbf{z}_{t-1}|\textbf{z}_{t})) &= \\ \sum_k \mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)] &\times \log \frac{\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)] }{\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\hat{\textbf{z}}_0)] }. \end{split} \end{equation}
\subsection{Re-build and Fine-tune Strategy} Our discrete diffusion model is based on the latent representation of the discrete VAE codebook $\mathbb{Z}$. However, codebooks with rich content are normally large, with some even reaching $K=16384$. This makes them highly unwieldy for our discrete diffusion model, as the transition matrices of discrete diffusion models grow quadratically with the number of classes $K$, \eg $O(K^2T)$~\cite{austin2021structured}. To reduce the number of categories used for our diffusion model, we propose a Re-build and Fine-tune (ReFiT) strategy to decrease the size $K$ of the codebook $\mathbb{Z}$ and boost the reconstruction performance, based on a well-trained discrete VAE trained with the straight-through method.
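Before detailing the re-build step, the categorical forward process and posterior of the previous subsection can be made concrete with the following minimal NumPy sketch. It is an illustrative example with our own variable names: it operates on probability vectors rather than the logits used above, and the cosine offset $s=0.008$ is assumed following the reference schedule, so it should not be read as the released implementation:
\begin{verbatim}
import numpy as np

K, T, s = 512, 4000, 0.008    # codebook size and steps as in the experiments; s assumed

def alpha_bar(t):
    f = lambda u: np.cos((u / T + s) / (1 + s) * np.pi / 2) ** 2
    return f(t) / f(0)

def q_zt_given_z0(z0_onehot, t):
    # q(z_t | z_0): mix the one-hot z_0 with the uniform distribution over K codes.
    ab = alpha_bar(t)
    return ab * z0_onehot + (1.0 - ab) / K

def posterior(zt_onehot, z0_onehot, t):
    # theta(z_t, z_0), then normalised, as in q(z_{t-1} | z_t, z_0).
    a_t = alpha_bar(t) / alpha_bar(t - 1)   # alpha_t, since abar_t is the running product
    theta = (a_t * zt_onehot + (1 - a_t) / K) * \
            (alpha_bar(t - 1) * z0_onehot + (1 - alpha_bar(t - 1)) / K)
    return theta / theta.sum()

rng = np.random.default_rng(0)
z0 = np.eye(K)[7]                           # one latent code as a one-hot vector
zt = np.eye(K)[rng.choice(K, p=q_zt_given_z0(z0, 2000))]
probs = posterior(zt, z0, 2000)
print(probs.argmax(), probs.max())
\end{verbatim}
The same posterior routine, evaluated with the predicted $\hat{\mathbf{z}}_0$ in place of $\mathbf{z}_0$, yields the reverse-step distribution used for sampling.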
From Eq.~\ref{vqeq}, we can see that the second and third terms are related to the codebook, but only the second term is involved in the update of the codebook. $||sg[E(\textbf{x})] - \textbf{z}||$ reveals that only a few selected codes, the same number as the features from $E(\textbf{x})$, are engaged in the update per iteration. Most of the codes are not updated or used after initialization, and the update of the codebook can lapse into a local optimum. We introduce a re-build and fine-tune strategy to avoid this waste of codebook capacity. With the trained encoder, we re-build the codebook so that all codes in the codebook have the opportunity to be selected. This greatly increases the usage of the codebook. Suppose we desire to obtain a discrete VAE with a codebook $\mathbb{Z}_t$ based on a trained discrete VAE with an encoder $E_s$ and a decoder $D_s$. We first encode each image $\textbf{x}\in \mathbb{R}^{c\times H\times W}$ into latent features $\textbf{h}$; loosely speaking, each image gives us $h\times w$ features of dimension $d$. Next we sample $P$ features uniformly from the entire set of features found in the training images, where $P$ is the sampling number and is far larger than the desired codebook capacity $K_t$. This ensures that the re-built codebook is composed of valid latent codes. Since the process of codebook training is basically the process of finding cluster centres, we directly employ k-means with AFK-MC$^2$~\cite{bachem2016fast} on the $P$ sampled features and utilize the centres to re-build the codebook $\mathbb{Z}_t$. We then replace the original codebook with the re-built $\mathbb{Z}_t$ and fine-tune it on top of the well-trained discrete VAE.
\section{Experiments and Analysis} \subsection{Datasets and Implementation Details} \label{desc} We show the effectiveness of the proposed VQ-DDM on the \textit{CelebA-HQ}~\cite{karras2017progressive} and \textit{LSUN-Church}~\cite{yu2015lsun} datasets and verify the proposed Re-build and Fine-tune strategy on the \textit{CelebA-HQ} and \textit{ImageNet} datasets. The details of the datasets are given in the Appendix. The discrete VAE follows the same training strategy as VQ-GAN~\cite{esser2021taming}. All training images are processed to $256\times256$, and the compression ratio is set to $16$, which means the latent vector $\textbf{z} \in \mathbb{R}^{1\times16\times16}$. When conducting Re-build and Fine-tune, the sampling number $P$ is set to $20k$ for \textit{LSUN} and \textit{CelebA}. For the more content-rich case, we tried a larger value $P=50k$ for \textit{ImageNet}. In practical experiments, we sample $P$ images with replacement uniformly from the whole training data and obtain the corresponding latent features. For each feature map, we make another uniform sampling over the $16\times16$ feature map positions to get the desired features. In the fine-tuning phase, we freeze the encoder and set the learning rate of the decoder to $1e$-$6$ and the learning rate of the discriminator to $2e$-$6$ with 8 instances per batch. With regard to the diffusion model, the network for estimating $n_t$ has the same structure as in~\cite{ho2020denoising}, which is a U-Net~\cite{ronneberger2015u} with self-attention~\cite{vaswani2017attention}. The detailed settings of the hyperparameters are provided in the Appendix. We set the timestep $T=4000$ in our experiments, and the noise schedule is the same as in~\cite{nichol2021improved}.
\subsection{Codebook Quality} \label{cbq} A large codebook dramatically increases the cost of DDM.
To reduce the cost to an acceptable scale, we propose a re-build and fine-tune strategy to compress the size of the codebook while maintaining its quality. To demonstrate the effectiveness of the proposed strategy, we compare the codebook usage and the FID of the reconstructed images of our method to those of VQ-GAN~\cite{esser2021taming}, VQ-VAE-2~\cite{razavi2019generating} and DALL-E~\cite{ramesh2021zero}. In this experiment, we compressed the images from $3\times256\times256$ to $1\times16\times16$ with two different codebook capacities $K=\{512,1024\}$. We also propose an indicator to measure the usage rate of the codebook, which is the number of discrete features that have appeared in the test set or training set divided by the codebook capacity. The quantitative comparison results are shown in Table~\ref{codebook_com}, while the reconstructed images are shown in Figs.~\ref{Recon-inr} \& \ref{Recon-celeba}. Reducing the codebook capacity from 1024 to 512 only increases the FID by $\sim 0.1$ on CelebA and $\sim 1$ on ImageNet. As seen in Figure~\ref{Recon-celeba}, the reconstructed images (c,d) after the ReFiT strategy are richer in colour and more realistic in expression than the reconstructions from VQ-GAN (b). The codebook usage of our method is improved significantly compared to the other methods, nearly 3$\times$ higher than the second best. Our method also achieves equivalent reconstruction quality at the same compression rate with a 32$\times$ lower capacity $K$ of the codebook $\mathbb{Z}$. For VQ-GAN with capacity $16384$, although it only has $976$ effective codes, which is smaller than the $1024$ of our ReFiT method when $P=20k$, it achieves a lower FID between the reconstructed images and the validation images. One possible reason is that the value of $P$ is not large enough to cover some infrequent combinations of features during the re-build phase. As shown in Table~\ref{codebook_com}, after we increase the sampling number $P$ from $20k$ to $100k$, we observe that the larger value of $P$ achieves higher performance.
\begin{table} \centering \resizebox{0.46\textwidth}{!} {% \begin{threeparttable} \begin{tabular}{ccccccc} \toprule Model &Latent Size & Capacity & \multicolumn{2}{c}{Usage of $\mathbb{Z}$} & \multicolumn{2}{c}{FID $\downarrow$} \\ & & & CelebA & ImageNet & CelebA & ImageNet \\\midrule VQ-VAE-2 & Cascade & 512 & $\sim$65\% &- & - & $\sim$10 \\ DALL-E & 32x32 & 8192 & - & - & - & 32.01 \\ VQ-GAN & 16x16 & 16384 & - & 5.96\% & - & 4.98 \\ VQ-GAN & 16x16 & 1024 & 31.85\% & 33.67\% & 10.18 & 7.94 \\ \textit{\textbf{ours}} ($P=100k$)& 16x16 & 1024 & - & 100\% & - &4.98 \\ \textit{\textbf{ours}} ($P=20k$)& 16x16 & 1024 & 97.07\% & 100\% & 5.59 & 5.99 \\ \textit{\textbf{ours}} ($P=20k$) & 16x16 & 512 & 93.06\% & 100\% & 5.64 &6.95 \\ \bottomrule \end{tabular}% \begin{tablenotes} \item[1] All methods are trained straight-through, except DALL-E with Gumbel-Softmax~\cite{ramesh2021zero}. \item[2] CelebA-HQ at $256$$\times$$256$. Reported FID is between 30$k$ reconstructed data vs training data. \item[3] Reported FID is between 50$k$ reconstructed data vs validation data \end{tablenotes} \end{threeparttable} } \caption{FID between reconstructed images and original images on CelebA-HQ and ImageNet } \label{codebook_com} \end{table}
\begin{figure*} \centering \includegraphics[scale=0.20]{inrs.png} \caption{Reconstructed images ($384\times384$) from ImageNet-based VQ-GAN and ReFiT} \label{Recon-inr} \end{figure*} \begin{figure}[t!]
\centering \resizebox{0.50\textwidth}{!}{ \begin{subfigure}{0.125\textwidth} \centering \includegraphics[scale=0.20]{ori.png} \caption{Source} \end{subfigure} \begin{subfigure}{0.125\textwidth} \centering \includegraphics[scale=0.20]{raw.png} \caption{VQ-GAN} \end{subfigure} \begin{subfigure}{0.125\textwidth} \centering \includegraphics[scale=0.20]{re_1024.png} \caption{ReFiT K=1024} \end{subfigure} \begin{subfigure}{0.125\textwidth} \centering \includegraphics[scale=0.20]{re.png} \caption{ReFiT K=512} \end{subfigure} } \caption{Reconstructed images of CelebA-HQ $256\times256$ from VQ-GAN and ReFiT.} \label{Recon-celeba} \end{figure}
\subsection{Generation Quality} \label{genq} We evaluate the performance of VQ-DDM for unconditional image generation on \textit{CelebA-HQ} $256\times256$. Specifically, we evaluate the performance of our approach in terms of FID and compare it with various likelihood-based methods including GLOW~\cite{kingma2018glow}, NVAE~\cite{vahdat2020nvae}, VAEBM~\cite{xiao2020vaebm}, DC-VAE~\cite{parmar2021dual}, VQ-GAN~\cite{esser2021taming} and a likelihood-free method, e.g., PGGAN~\cite{karras2017progressive}. We also conducted an experiment on \textit{LSUN-Church}. In the \textit{CelebA-HQ} experiments, the discrete diffusion model was trained with $K=512$ and $K=1024$ codebooks, respectively. We also report the FID from $T=2$ to $T=4000$ with the corresponding time consumption in Figure~\ref{cost}. Regarding the generation speed, it took about 1000 hours to generate $50k$ $256\times256$ images using DDPM with 1000 steps on an NVIDIA 2080Ti GPU, 100 hours for DDIM with 100 steps~\cite{song2020denoising}, and around 10 hours for our VQ-DDM with 1000 steps.
\begin{figure} \centering \includegraphics[scale=.5]{cost.png} \caption{Sampling steps and the corresponding FID. The text annotations are the hours needed to sample 50k latent feature maps on one NVIDIA 2080Ti GPU} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{time.png} \caption{Hours to sample 50k latent codes with VQ-DDM and to generate 50k images with VQ-DDM and DDPM} \label{cost} \end{figure} \begin{figure*}[t!] \centering \subcaptionbox{Samples $(256\times256)$ from a VQ-DDM model trained on CelebA HQ. FID=$13.2$ \label{celebs}}{ \includegraphics[scale=0.27]{nc1.png} } \subcaptionbox{Samples $(256\times256)$ from a VQ-DDM model trained on LSUN-Church. FID=$16.9$ \label{lsuns}}{ \includegraphics[scale=0.27]{lsun.png} } \caption{Samples from VQ-DDM models.} \end{figure*}
Table~\ref{celeba} shows the main results of VQ-DDM along with other established models. Although VQ-DDM is also a likelihood-based method, the training phase relies on the negative log-likelihood (NLL) of discrete hidden variables, so we do not compare the NLL between our method and the other methods. The training NLL is around $1.258$ and the test NLL is $1.286$, while the FID is $13.2$. Fig.~\ref{celebs} shows generated samples from VQ-DDM trained on \textit{CelebA-HQ}. For \textit{LSUN-Church}, the codebook capacity $K$ is set to $1024$, while the other parameters are kept exactly the same. The training NLL is $1.803$ and the test NLL is $1.756$, while the FID between the generated images and the training set is $16.9$. Some samples are shown in Fig.~\ref{lsuns}. After utilizing ReFiT, the generation quality of the model is significantly improved, which implies that a decent codebook can have a significant impact on the subsequent generative phase.
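For reference, the re-build step behind ReFiT amounts to clustering a large bank of sampled encoder outputs and taking the cluster centres as the new codebook. The sketch below is a simplified stand-alone illustration: it uses scikit-learn's default k-means++ seeding instead of AFK-MC$^2$, assumes a code dimension of $d=256$, and substitutes random placeholder features for real encoder outputs:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

P, K_t, d = 20_000, 1024, 256   # sampled features, target codebook size, code dim (d assumed)

def rebuild_codebook(feature_bank, n_codes):
    # Cluster the P sampled encoder features; the centres form the new codebook.
    kmeans = KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit(feature_bank)
    return kmeans.cluster_centers_          # shape (n_codes, d)

# In the actual pipeline the bank is built by encoding P images sampled with
# replacement and keeping one random spatial position of each 16x16 feature map;
# random placeholders are used here so the sketch runs on its own.
feature_bank = np.random.default_rng(0).standard_normal((P, d))
new_codebook = rebuild_codebook(feature_bank, K_t)
print(new_codebook.shape)                   # (1024, 256)
\end{verbatim}
The resulting centres replace the original codebook, after which only the decoder and discriminator are fine-tuned with the encoder frozen, as described in the implementation details above.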
Within a certain range, a larger codebook capacity leads to better performance. However, an excessive number of codebook entries will cause model collapse~\cite{hoogeboom2021argmax}.
\subsection{Image Inpainting} \label{gbq} Autoregressive models have recently demonstrated superior performance in image inpainting tasks~\cite{chen2020generative, esser2021taming}. However, one limitation of this approach is that if important context is found at the end of the autoregressive sequence, the models will not be able to correctly complete the images. As mentioned in Sec.~\ref{ddm}, the diffusion model directly samples the full latent code map, with sampling steps based on the \emph{full} discrete map of the previous step. Hence it can significantly improve inpainting, as it does not depend on the context ordering. We perform the mask diffusion and reverse process in the discrete latent space. After encoding the masked image $x_0 \sim q(\textbf{x}_0)$ into discrete representations $z_{0} \sim q(\textbf{z}_{0})$, we diffuse $\textbf{z}_{0}$ with $t$ steps to $\tilde{\textbf{z}}_{t} \sim q(\textbf{z}_{t}|\textbf{z}_{0})$. Thus the last step with mask $\tilde{\textbf{z}}_{T}^m$ can be written as $\tilde{\textbf{z}}_{T}^m = (1-m) \times \tilde{\textbf{z}}_{T} + m \times \mathbb{C}$, where $\mathbb{C}\sim \mathrm{Cat}(K,1/K)$ is a sample from a uniform categorical distribution and $m \in \{0,1\}^K $ is the mask; $m=0$ means the context there is masked and $m=1$ means the information there is given. In the reverse process, $\textbf{z}_{T-1}$ can be sampled from $p_{\theta}(\mathbf{z}_{T-1}|\tilde{\textbf{z}}_{T}^m)$ at $t=T$; otherwise, $\textbf{z}_{t-1} \sim p_{\theta}(\mathbf{z}_{t-1}|\textbf{z}_{t}^m)$, and the masked $\textbf{z}_{t-1}^m = (1-m) \times \textbf{z}_{t-1} + m \times \tilde{\textbf{z}}_{t-1}$. We compare our approach with another one that exploits a transformer with a sliding attention window as an autoregressive generative model~\cite{esser2021taming}. The completions are shown in Fig.~\ref{global_if}. In the first row, the upper 62.5\% (160 out of 256 in the latent space) of the input image is masked and the lower 37.5\% (96 out of 256) is retained, and in the second row, only a quarter of the image information in the lower right corner is retained as input. We also tried masking at arbitrary positions. In the third row, we masked the perimeter, leaving only the central quarter. Since the reverse diffusion process captures global relationships, the image completions of our model perform much better. Our method can make consistent completions based on arbitrary contexts, whereas the inpainted parts from the transformer lack consistency. It is also worth noting that our model requires no additional training for the image inpainting task. \begin{table}[t!]
\centering \resizebox{0.46\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{llll} \toprule Method & FID $\downarrow$ & Params & FLOPs \\ \midrule \multicolumn{2}{l}{\textbf{\textit{Likelihood-based}}} \\ \midrule GLOW~\cite{kingma2018glow} & 60.9 & 220 M & 540 G \\ NVAE~\cite{vahdat2020nvae} & 40.3 & 1.26 G & 185 G \\ \textbf{\textit{ours}} ($K=1024$ w/o ReFiT) & 22.6 & 117 M & 1.06 G \\ VAEBM~\cite{xiao2020vaebm} & 20.4 & 127 M & 8.22 G \\ \textbf{\textit{ours}} ($K=512$ w/ ReFiT) & 18.8 & 117 M & \textbf{1.04 G } \\ DC-VAE~\cite{parmar2021dual} & 15.8 & - & - \\ \textbf{\textit{ours}} ($K=1024$ w/ ReFiT) & 13.2 & 117 M & 1.06 G \\ DDIM(T=100)~\cite{song2020denoising} &10.9 &114 M &124 G \\ VQ-GAN + Transformer~\cite{esser2021taming} & 10.2 & 802 M & 102 G\tnote{a} \\ \midrule \multicolumn{2}{l}{\textbf{\textit{Likelihood-free}}} \\\midrule PG-GAN~\cite{karras2017progressive} & 8.0 & 46.1 M & 14.1 G \\ \bottomrule \end{tabular} \begin{tablenotes} \item[a] VQ-GAN is an autoregressive model, and the number in the table is the computation needed to generate the full size latent feature map. The FLOPs needed to generate one discrete index out of 256 is 0.399 G. \end{tablenotes} \end{threeparttable}} \caption{FID on the CelebA-HQ $256\times256$ dataset. All the FLOPs in the table only consider the generation stage or inference phase for one $256\times256$ image. } \label{celeba} \end{table}
\begin{figure*}[t!] \centering{ \includegraphics[scale=0.145]{gb.png} } \caption{Completions with arbitrary masks.} \label{global_if} \end{figure*}
\section{Related Work} \subsection{Vector Quantised Variational Autoencoders} VQ-VAE~\cite{van2017neural} has led a trend of discrete representation of images. The common practice is to model the discrete representations using an autoregressive model, e.g. PixelCNN~\cite{van2016pixel,chen2018pixelsnail} or transformers~\cite{esser2021taming,ramesh2021zero}, etc. Some works have attempted to fit the prior distribution of discrete latent variables using lightweight non-autoregressive approaches, such as an EM approach~\cite{roy2018theory} and a Markov chain with a self-organizing map~\cite{fortuin2018som}, but they struggle to fit large-scale data. Ho \etal \cite{ho2020denoising} have also shown that diffusion models can be regarded as autoregressive models along the time dimension, but in reality they are non-autoregressive along the pixel dimension. A concurrent work~\cite{esser2021imagebart} follows a similar pipeline which uses a diffusion model on discrete latent variables, but that work uses parallel modeling of multiple short Markov chains to achieve denoising. \subsection{Diffusion Models} Sohl-Dickstein \etal \cite{sohl2015deep} presented a simple discrete diffusion model, which diffused the target distribution into an independent binomial distribution. Recently, Hoogeboom \etal \cite{hoogeboom2021argmax} have extended the discrete model from binomial to multinomial. Further, Austin \etal \cite{austin2021structured} proposed a generalized discrete diffusion structure, which provides several choices for the diffusion transition process. In the continuous state space, some recent diffusion models have surpassed the state of the art in the image generation area. With guidance from classifiers, Dhariwal \etal \cite{dhariwal2021diffusion} enabled a diffusion model called ADM to generate images beyond BigGAN, which was previously one of the most powerful generative models.
In CDM~\cite{ho2021cascaded}, the authors applied a cascaded pipeline of diffusion models to generate images with ultra-high fidelity, reaching state-of-the-art results on conditional ImageNet generation. In addition, several recent works have attempted to use diffusion models to model the latent variables of VAEs~\cite{kingma2021variational,wehenkel2021diffusion}, while revealing the connections among several of the diffusion models mentioned above. \section{Conclusion} In this paper, we introduce VQ-DDM, a high-fidelity image generation model with a two-stage pipeline. In the first stage, we train a discrete VAE with a well-utilized, content-rich codebook. With the help of such an efficient codebook, it is possible to generate high-quality images with a discrete diffusion model of relatively few parameters in the second stage. Simultaneously, benefiting from the discrete diffusion model, the sampling process captures global information, and image inpainting is no longer affected by the location of the given context and mask. Meanwhile, in comparison with other diffusion models, our approach further reduces the gap in generation speed with respect to GANs. We believe that VQ-DDM can also be utilized for audio, video and multimodal generation. \subsection*{Limitations} For a complete diffusion, we need a large number of steps, which results in a fluctuating training process and limits the image generation quality. Hence, our model may underperform on large-scale and complex datasets. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Blazars are the most extreme subclass of active galactic nuclei (AGN) with a relativistic jet closely aligned to the observer's line of sight \citep{1984RvMP...56..255B, 1995PASP..107..803U}. Blazars are divided into flat-spectrum radio quasars (FSRQs) and BL~Lacertae objects (BL~Lacs) based on the presence or absence of strong broad emission lines in their optical-ultraviolet spectra \citep{1995PASP..107..803U}. The spectral energy distribution (SED) and multi-wavelength flux variability of blazars are important tools for studying the physics of extragalactic relativistic jets. The SED of a blazar is dominated by the non-thermal radiation of the jet, and exhibits a characteristic double-hump structure \citep{1998MNRAS.299..433F}. The low-energy component is believed to arise from the synchrotron radiation of relativistic electrons in the magnetic field, while the high-energy component is probably produced by inverse Compton (IC) scattering of relativistic electrons off low-energy target photons. These could be synchrotron photons produced by the same electron population in the jet (SSC model, \cite[e.g.][]{1996ApJ...461..657B,1996A&AS..120C.537M}), or could be external photons (EC model, \cite[e.g.][]{1992A&A...256L..27D,1993ApJ...416..458D,1994ApJ...421..153S}). A significant enhancement of a blazar's flux, usually considered to be a factor of $2-3$~above its average value, is referred to as a blazar flare. During flares the polarization and the spectral index of a blazar's emission may also experience dramatic variability \cite[e.g.][]{2020MNRAS.492.1295P}. Blazar flares have been commonly observed at different energy bands (e.g., from radio wavelengths to very high-energy $\gamma$-rays) with different duration timescales ranging from several years to a few minutes \cite[e.g.][]{1990ApJ...356..432G, 2007AJ....133.1947N,2008A&A...486..411S,2008ApJ...677..906F,2008MNRAS.384L..19B,2009ApJ...691L..13D,2010ApJ...722..520A,2011ApJ...738...25A,2012ApJ...756...13B,2015ApJ...807...79H}. Based on their duration, one can roughly define three types of blazar flares, namely year-long flares \cite[e.g.][]{2013A&A...553A.107C}, day-to-month-long flares \cite[e.g.][]{2011ApJS..192...12I} and intraday flares \cite[e.g.][]{1995ARA&A..33..163W}. Different duration timescales may relate to different aspects of blazar physics. For instance, year-long flares are thought to be related to the orbital motion of a binary or jet precession \citep{2004A&A...419..913O,2018MNRAS.478.3199B}, or instabilities in the accretion flow \citep{2010MNRAS.402.2087V, 2011MNRAS.418L..79T}. Day-to-month-long and intraday flares are thought to be related to processes taking place in the jet itself, and can be reasonably explained by models involving one or more compact dissipation zones in the jet \cite[e.g.][]{2012AJ....143...23G, 2013A&A...557A..71R}. Therefore, measurements of day-to-month-long and intraday flares, including their temporal variability and spectrum, could constrain the physical parameters of the dissipation zone, such as size, location, geometry, bulk Lorentz factor, particle acceleration, and cooling processes. One of the most peculiar aspects of blazar variability is the occurrence of so-called orphan flares.
These are flares that occur in a specific energy band without correlated variability in other bands, and have been discovered in many blazars (orphan X-ray flare \cite[e.g.][]{2013A&A...552A..11R}; orphan optical flare \cite[e.g.][]{2013ApJ...763L..11C,2019hepr.confE..27W,2019Galax...7...21W, 2019ApJ...880...32L}; orphan GeV flare \cite[e.g.][]{2019ApJ...880...32L, 2019ApJ...884..116L,2017ApJ...836..205A,2016MNRAS.463L..26B,2015ApJ...804..111M,2017ApJ...850...87M}; orphan TeV flare \cite[e.g.][]{2017ApJ...836..205A}). Interestingly, different types of flares, i.e., orphan flares and multi-wavelength flares, have been observed to occur in the same blazar from time to time. For instance, the FSRQ PKS~0208-512 exhibited three flares at optical and near-infrared wavelengths within 3 years, with the second one having no $\gamma$-ray counterpart. Ref.~\citep{2013ApJ...771L..25C} found that the Compton dominance ($q$), which is defined as the luminosity ratio between the IC component and synchrotron component, was different for the three flares. This was interpreted as evidence for a varying magnetic field and/or varying soft photon field during these optical outbursts. Various models have been proposed to explain the origins of orphan flares. For example, Ref.~\citep{2016MNRAS.463L..26B} proposed that the orphan $\gamma$-ray flare of PKS~1222+21 can be explained when relativistic blobs in the jet encounter luminous stars. Ref.~\citep{2015ApJ...804..111M} suggested that the ring-of-fire model (a blob propagates relativistically along the fast-moving spine of a blazar jet and passes through a synchrotron-emitting ring of electrons from the slow-moving sheath of the jet) can reproduce the orphan $\gamma$-ray flare of PKS~1510-089. Ref.~\citep{2014IJMPS..2860180C} suggested the orphan optical flare of PKS~0208-512 can be explained by different allocations of energy between the magnetization of the emitting region and particle acceleration. Ref.~\citep{2016Galax...4...45J} showed that the orientation of the magnetic field might be associated with orphan flares. Ref.~\citep{2021MNRAS.503..688S} argues that an orphan $\gamma$-ray flare from FSRQs is likely to arise if the particles are accelerated in magnetically dominated pair plasmas. The synchrotron emission is suppressed since the particles are accelerated nearly along the direction of the local magnetic field (small pitch angles), while the $\gamma$-ray flare is produced by inverse Compton scattering on an external radiation field. Ref.~\citep{2005ApJ...630..186R} explained an orphan TeV flare of 1ES 1959+650 with a hadronic model in which relativistic protons interact with the photon field supplied by electron synchrotron radiation reflected off a dilute reflector. While all these models are viable and can reproduce the spectral features of an orphan flare, one may wonder if there is a single scenario that can apply to all orphan flares. More importantly, if both orphan and multi-wavelength flares are observed from the same blazar, do we have to apply different models to explain different types of flares or can a single scenario account for both? In this work, we attempt to establish a connection between orphan and multi-wavelength flares occurring in a certain blazar, and search for a theoretical interpretation of the spectral variety of blazar flares in a unified physical picture. In general, the non-thermal blazar emission is produced when the jet's energy (magnetic or kinetic) is dissipated and transferred to relativistic particles. 
The properties of the resulting non-thermal emission may strongly depend on the distance of the dissipation site from the central supermassive black hole (SMBH), since the physical environment can experience a pronounced change along the jet \cite[e.g.][]{1979ApJ...232...34B, 1980ApJ...235..386M, 1989ApJ...340..181G, 2013ApJ...771L..25C}. However, the location of the dissipation zone in blazar jets remains uncertain \cite[e.g.][]{2011A&A...534A..86T,2016ARA&A..54..725M, 2019BAAS...51c..92R}. In some previous studies, it was suggested that dissipation may occur stochastically along the jet of a blazar \cite[e.g.][]{2019PhRvD..99f3008L, 2019ApJ...886...23X, 2020arXiv201210291P}. In this framework, the non-flaring emission of a blazar results from the superposition of radiation produced in numerous dissipation zones. If additional energy dissipation takes place in one (or a few) of them, so that its (their) emission outshines the rest of the jet, then the blazar is expected to flare. Therefore, there may be no essential difference between the non-flaring state and the flaring state of a blazar, except that the flaring state is related to a much stronger dissipation event. The distance of the flaring zone to the SMBH can then determine the spectral and temporal properties of the flare. In this paper, we explore in detail such a scenario. Hereafter, we refer to it as the stochastic dissipation model. The rest of the paper is organized as follows. The model setup is introduced in Section~\ref{sec:method}. In Section~\ref{sec:sec3} we apply the model to two well-known blazars, namely the FSRQ 3C~279 and the BL~Lac PKS~2155-304. In Section~\ref{sec:sec4}, we study the applicability of the model to the general blazar population. The discussion and conclusions are given in Section~\ref{sec:sec5} and Section~\ref{sec:sec6}, respectively. In this work, we use the following cosmological parameters, $H_{0}= 68$ km $\rm s^{-1} Mpc^{-1}$, $\Omega_{\Lambda} = 0.7$ and $\Omega_{\rm M} = 0.3$. \section{Model Setup} \label{sec:method} We assume that the emission of the blazar jet in a flaring state is composed of at least two emission components. One component arises from a flaring zone that dominates the flare emission. The appearance of such components may be related to MHD instabilities \citep{kink, current_driven} or magnetic reconnections in the jet \citep{1998ApJ...493..291B,2009MNRAS.394L.126M,2009MNRAS.395L..29G,2013MNRAS.431..355G,2016MNRAS.462.3325P}. The other component is the superposition of emission from numerous but comparatively weak dissipation zones along the entire jet. The latter can be regarded as a background emission to the flare \cite[e.g.][]{2019ApJ...877...39M} and might describe the non-flaring blazar emission. The two emission components are assumed to be decoupled from each other. Since we focus primarily on the former component in this work, the modelling of the background radiation spectrum is simply characterized with a polynomial function. \begin{figure*}[htbp] \centering \includegraphics[width=2\columnwidth]{fig1_new.eps} \caption{Schematic view (not to scale) of the stochastic dissipation model. DT and BLR represent dusty torus and broad-line region respectively. The background radiation comes from numerous but relatively weak dissipation zones (not shown here). The flaring zone (indexed blobs) occurs at a random distance from the base of the jet. We argue that the orphan $\gamma$-ray flares are more likely to arise, if the flaring zone occurs in location A. 
The orphan optical flares may arise if the flaring zone occurs in location C, while multi-wavelength correlated flares are expected in location B. \label{fig:sketch}} \end{figure*} In Fig.~\ref{fig:sketch} we show a sketch of the considered scenario. Since the ratio between the power of the synchrotron radiation and the power of the IC radiation is roughly proportional to the ratio between the energy density of the magnetic field $u_{\rm B}$ and that of the target radiation field $u_{\rm ph}$, the emission from the flaring zone will be dominated by the IC process if it occurs relatively close to the SMBH, given the presence of the broad line region (BLR) and/or the dusty torus (DT). As a result, the $\gamma$-ray emission from the flaring zone may exceed that of the jet's background emission, while the synchrotron radiation of the flaring zone at lower frequency could still be subdominant. In this case, the blazar's emission is enhanced specifically at the $\gamma$-ray band, and the blazar appears to be experiencing an orphan $\gamma$-ray flare. As the distance of the flaring zone to the SMBH increases, the synchrotron radiation becomes increasingly important with respect to either SSC or EC radiation. The synchrotron process could dominate if the dissipation takes place beyond a certain distance and an orphan optical flare may then be expected. For a quantitative description, we assume that electrons are continuously injected into the flaring zone (as long as this remains active) with a broken power-law energy distribution \cite[e.g.][]{2009MNRAS.397..985G, 2010MNRAS.402..497G} \begin{equation}\label{eq:1} n_{\rm e}^{\rm inj}(\gamma)\propto\left\{ \begin{array}{ll} \gamma^{-p_1}, & \gamma_{\rm min}\leq \gamma\leq \gamma_{\rm break}~\\ \gamma_{\rm break}^{p_2-p_1}\gamma^{-p_2}, & \gamma_{\rm break}<\gamma\leq\gamma_{\rm max}, \end{array} \right. \end{equation} where $\gamma_{\min}, \gamma_{\max}$ are the minimum and maximum Lorentz factors of the distribution, $\gamma_{\rm break}$ is the break Lorentz factor, which is not related to radiative cooling, $p_1$ and $p_2$ are respectively the low-energy slope and the high-energy slope of the broken power-law spectrum. Assuming a spherical dissipation zone with radius $R$ in the comoving frame and given the electron injection luminosity $L_{\rm e}^{\rm inj}$, the steady-state electron density distribution can then be written as \citep{2019ApJ...886...23X} \begin{equation} N_{\rm e}(\gamma)=\frac{3L_{\rm e}^{\rm inj}n_{\rm e}^{\rm inj}(\gamma)}{4\pi R^3m_{\rm e}c^2 \int{\gamma n_{\rm e}^{\rm inj}(\gamma){\rm d}{\gamma}}}{\rm min}(t_{\rm cool}(\gamma),t_{\rm esc}), \end{equation} where $t_{\rm esc}=R/c$ is the electron escape timescale, $t_{\rm cool}=3m_{\rm e}c/(4\sigma_{\rm T}\gamma(u_{\rm B}+f_{\rm KN}u_{\rm ph}))$\footnote{$u_{\rm ph}$ here does not count in the density of the synchrotron radiation. Taking it into account makes the calculation become non-linear \citep{2010A&A...524A..31Z}. Although it may be dealt with iteratively, it will lead to an excessively expensive computation when we apply the MCMC method to the spectral fitting later. To evaluate the influence, we have re-calculated the model flux for the best-fit parameters shown in Table~\ref{tab:parameters}, after including the synchrotron radiation in $u_{\rm ph}$ when calculating the cooling of electrons. 
The difference is 10\% at most (for flare 1 of PKS~2155-304 without external radiation field) for $E_\gamma<1\,$TeV.} is the electron radiative cooling timescale, $\sigma_{\rm T}$ is the Thomson scattering cross section, and $f_{\rm KN}$ is the factor accounting for Klein-Nishina (KN) effects \citep{2005MNRAS.363..954M}. The synchrotron and IC emission from the relativistic electrons can be then calculated given the magnetic field and the radiation field. High-energy photons from the IC process can be absorbed by soft photons via photon-photon pair production. We consider the absorption of $\gamma$-ray photons due to the synchrotron radiation of the flaring zone, the BLR radiation and the DT radiation, as well as the extragalactic background light (EBL, model C in Ref.~\citep{2010ApJ...712..238F}) during the propagation in the intergalactic space. The magnetic field strength in the dissipation zone may vary with the distance to the SMBH, $r$. Considering a truncated conical jet and assuming that the radius of the dissipation zone $R$ is comparable to the transverse radius of the jet at its location, we may write \begin{equation}\label{eq:2} R(r) = R_0\left(\frac{r}{0.1~\rm pc}\right), \end{equation} where $R_0$ is the transverse radius of the jet at 0.1 pc. If we assume a constant magnetic luminosity along the jet, which is consistent with some results of the VLBA survey \cite[e.g.][]{2009MNRAS.400...26O,2011A&A...532A..38S}, the magnetic field strength can also be parameterized as a function of $r$ \begin{equation}\label{eq:3} B(r)=B_0\frac{R_0}{R(r)}, \end{equation} where $B_0$ is the magnetic field strength of the dissipation zone for $r=0.1$\,pc. The target radiation field for the IC process consists of the synchrotron radiation of the electrons in the dissipation zone and the external photon field. Both radiation fields depend on the distance $r$ of the flaring zone to the SMBH. The energy density of synchrotron photons (as measured in the comoving frame of the dissipation zone) is given by $u_{\rm syn}(r) = L_{\rm syn}/(4\pi c R(r)^2\Gamma^4)$, where $L_{\rm syn}$ is the luminosity of synchrotron emission in the observer's frame and $\Gamma$ is the bulk Lorentz factor of the jet. The energy density of external photons, i.e., from the BLR and DT, in the jet comoving frame can be written as \cite{2012ApJ...754..114H} \begin{equation}\label{eq:5} u_{\rm BLR}(r)=\frac{\xi_{\rm BLR}\Gamma^2 L_{\rm disk}}{3\pi r^2_{\rm BLR}c[1+(r/r_{\rm BLR})^{\beta_{\rm BLR}}]} \end{equation} \begin{equation}\label{eq:6} u_{\rm DT}(r)=\frac{\xi_{\rm DT}\Gamma^2 L_{\rm disk}}{3\pi r^2_{\rm DT}c[1+(r/r_{\rm DT})^{\beta_{\rm DT}}]}, \end{equation} where $\xi_{\rm BLR}=0.1$ and $\xi_{\rm DT}=0.1$ are the fractions of the disk luminosity reprocessed into BLR and torus radiation, respectively, $r_{\rm BLR}=0.1(L_{\rm disk}/10^{46}\rm erg~s^{-1})^{1/2}$ pc and $r_{\rm DT}=2.5(L_{\rm disk}/10^{46}\rm erg~s^{-1})^{1/2}$ pc denote the characteristic distances of the BLR and torus from the SMBH in the AGN frame. We assume that the radiation energy density drops steeply with distance beyond the characteristic distance $r_{\rm BLR(DT)}$, adopting $\beta_{\rm BLR}=3$ \cite{2009ApJ...704...38S} and $\beta_{\rm DT}=4$ \cite{2012ApJ...754..114H}. 
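For concreteness, the distance dependence introduced in Eqs.~(\ref{eq:2}), (\ref{eq:3}), (\ref{eq:5}) and (\ref{eq:6}) can be collected into a few helper functions. The Python sketch below is purely illustrative: the function names are ours, and the default values of $R_0$, $B_0$, $\Gamma$ and $L_{\rm disk}$ are placeholders (in the applications of Section~\ref{sec:sec3} these quantities are fitted or fixed per source).
\begin{verbatim}
import numpy as np

PC = 3.086e18          # parsec in cm
C = 3.0e10             # speed of light in cm/s
XI_BLR, XI_DT = 0.1, 0.1       # fractions of disk luminosity reprocessed by BLR and torus
BETA_BLR, BETA_DT = 3.0, 4.0   # steepness of the drop beyond the characteristic radii

def R(r_pc, R0=1e16):
    """Transverse radius of the truncated conical jet: R(r) = R0 (r / 0.1 pc)."""
    return R0 * (r_pc / 0.1)

def B(r_pc, B0=0.5, R0=1e16):
    """Magnetic field for a constant magnetic luminosity: B(r) = B0 R0 / R(r)."""
    return B0 * R0 / R(r_pc, R0)

def u_blr(r_pc, L_disk=1e45, Gamma=30.0):
    """Comoving-frame energy density of the BLR radiation."""
    r_blr = 0.1 * np.sqrt(L_disk / 1e46)          # characteristic BLR radius in pc
    return XI_BLR * Gamma**2 * L_disk / (
        3.0 * np.pi * (r_blr * PC)**2 * C * (1.0 + (r_pc / r_blr)**BETA_BLR))

def u_dt(r_pc, L_disk=1e45, Gamma=30.0):
    """Comoving-frame energy density of the dusty-torus radiation."""
    r_dt = 2.5 * np.sqrt(L_disk / 1e46)           # characteristic torus radius in pc
    return XI_DT * Gamma**2 * L_disk / (
        3.0 * np.pi * (r_dt * PC)**2 * C * (1.0 + (r_pc / r_dt)**BETA_DT))

def u_mag(r_pc, B0=0.5, R0=1e16):
    """Comoving-frame magnetic energy density, u_B = B^2 / (8 pi)."""
    return B(r_pc, B0, R0)**2 / (8.0 * np.pi)

# Example: compare the external photon and magnetic energy densities at a few distances.
for r in (0.03, 0.3, 3.0, 30.0):
    print(f"r = {r:5.2f} pc: u_BLR = {u_blr(r):.2e}, u_DT = {u_dt(r):.2e}, "
          f"u_B = {u_mag(r):.2e} erg cm^-3")
\end{verbatim}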
The spectral shape of the radiation fields is assumed to be that of a grey body peaking at a frequency $4.5\times10^{14}\Gamma$ Hz for the BLR \cite{2019PhRvD..99f3008L} and at $3\times 10^{13}\Gamma$ Hz for the DT \cite{2009MNRAS.397..985G}, both measured in the jet's comoving frame. Note that the background emission of the jet (or the non-flaring emission) may also serve as a target photon field for the IC process. Its influence is, however, minor as will be shown in Section~\ref{sec:sec5}. For simplicity, we ignore it as a target photon field in the following calculations. There have been suggestions that the jet decelerates from highly relativistic speeds to mildly or sub-relativistic speeds on kiloparsec scales \citep{1997MNRAS.286..425W,1999MNRAS.304..135H,2008A&A...491..321M,2014MNRAS.441.1488P}. Continuous jet models involving decelerating flows have been used to fit the SEDs of AGN \citep{2003ApJ...594L..27G,2009ApJ...699...31A,2013MNRAS.429.1189P}. Following Ref.~\cite{2013MNRAS.429.1189P}, we assume the jet's bulk Lorentz factor remains constant up to 0.1\,pc as $\Gamma_0\gg 1$, and decelerates beyond this distance as a function of $\log(r)$, reaching $\Gamma_{\rm min}=2$ at 100\,pc. We approximate the Doppler factor by $\delta_{\rm D}\approx \Gamma$. Then the Doppler factor for $r>0.1~{\rm pc}$ can be given by \citep{2013MNRAS.429.1189P}: \begin{equation}\label{eq:4} \delta_{\rm D}(r)=\delta_{\rm D,0} - \frac{\delta_{\rm D,0}-2}{{\rm log}(\frac{100~{\rm pc}}{0.1~{\rm pc}})}{\rm log}\left(\frac{r}{0.1~\rm pc}\right), \end{equation} where $\delta_{\rm D,0}\approx \Gamma_{0}$ is the Doppler factor at 0.1 pc, noting that other forms of $\delta_D(r)$ may also be possible. In fact, some observations \cite[e.g.][]{2009ApJ...706.1253H,2012ApJ...758...84P,2013AJ....145...12B,2015ApJ...798..134H, 2019ApJ...887..147P}, numerical simulations \cite[e.g.][]{2007MNRAS.380...51K} and theoretical studies \citep[e.g.][]{2019MNRAS.484.1378G} show that the jet may still accelerate from sub-parsec up to tens of parsec scales. We will discuss a constant-speed jet case and an accelerating jet case in Sec.~\ref{sec:sec4}. In total, there are ten free parameters for one dissipation zone. Among them, six parameters are related to the spectrum of electron injection, i.e., three for the characteristic electron Lorentz factors ($\gamma_{\rm min}$, $\gamma_{\rm break}$, and $\gamma_{\rm max}$), two for the power-law slopes of the electron spectrum ($p_1$ and $p_2$), and one for the electron injection luminosity ($L_{\rm e}^{\rm inj}$). The remaining four parameters are related to the physical properties of the flaring zone, namely the distance of the flaring zone from the SMBH $r$, the radius of the flaring zone $R$, the magnetic field strength $B$, and the Doppler factor $\delta_{\rm D}$. For the modeling of multiple flares from the same source, the last three parameters are not independent of each other, but are related to their distance from the black hole $r$ and their values at 0.1 pc ($R_0$, $B_0$, $\delta_{\rm D,0}$). To reduce the number of free parameters, we fix the values of $p_1$, $\gamma_{\rm min}$ and $\gamma_{\rm max}$ in different flares of a given blazar. 
To summarize, for the modeling of multiple flares from one source, our model has six parameters ($R_0$, $B_0$, $p_1$, $\gamma_{\rm min}$, $\gamma_{\rm max}$, $\delta_{\rm D,0}$) that are common among different dissipation sites, and four parameters ($L_{\rm e}^{\rm inj}$, $p_2$, $\gamma_{\rm break}$, $r$) that are unique to each dissipation site. Thus, if we apply the stochastic dissipation model to explain, for instance, three flares from a blazar, we have to specify in total eighteen parameters. \section{Application to 3C~279 and PKS~2155-304} \label{sec:sec3} Blazars are historically divided into two classes, namely FSRQs and BL~Lacs, according to their optical spectra. The former display strong, broad emission lines, while the latter show at most weak emission lines, and in many cases are completely featureless \citep{1995PASP..107..803U}. These sources are thought to be powered by accretion disks with different mass accretion rates and radiative efficiencies (for a review, see Ref.~\citep{2019ARA&A..57..467B}). As a result, the strength of ambient photon fields in these blazar subclasses is expected to be very different. \begin{table*} \caption{\label{tab:parameters}Best-fit parameters for three flares of 3C~279 and PKS~2155-304 modeled with the stochastic dissipation scenario. Errors correspond to the $1\sigma$ uncertainties.} \begin{ruledtabular} \begin{tabular}{c|ccc|ccc} & \multicolumn{3}{c|}{3C~279} & \multicolumn{3}{c}{PKS~2155-304} \\ \hline $L_{\rm disk}$ ($10^{42}~\rm erg~s^{-1}$) & \multicolumn{3}{c|}{$2000$\footnote{The disk luminosity of 3C~279 is $2\times 10^{45}~\rm erg~s^{-1}$ \citep{1999ApJ...521..112P}.}} & \multicolumn{3}{c}{1\footnote{Ref.~\citep{2011ApJ...732..113S} found no emission lines in the spectrum of PKS~2155-304. According to their analysis, the upper limit on the BLR luminosity of PKS~2155-304 is $1.1\times10^{41}~\rm erg~s^{-1}$. We therefore set the BLR luminosity to $10^{41}~\rm erg~s^{-1}$ and calculate the corresponding disk luminosity to be $10^{42}~\rm erg~s^{-1}$.}} \\ $R_0$ ($10^{16}~$cm) & \multicolumn{3}{c|}{$0.91_{-0.08}^{+0.07}$\footnote{This corresponds to a half-opening angle of $\sim1.7^{\circ}$ for the truncated conical jet.}} & \multicolumn{3}{c}{$3.41_{-0.54}^{+0.88}$\footnote{This corresponds to a half-opening angle of $\sim6.3^{\circ}$ for the truncated conical jet.}} \\ $B_0$ (G) & \multicolumn{3}{c|}{$0.45_{-0.04}^{+0.03}$} & \multicolumn{3}{c}{$0.14_{-0.01}^{+0.02}$} \\ $\gamma_{\rm min}$ ($10^2$) & \multicolumn{3}{c|}{$5.07_{-0.06}^{+0.05}$} & \multicolumn{3}{c}{$1.01_{-0.34}^{+0.68}$} \\ $\gamma_{\rm max}$\footnote{The theoretical limit of $\gamma_{\rm max}$ can be obtained by equating the acceleration and the cooling or escape timescales, i.e., $t_{\rm acc}={\rm min}(t_{\rm cool},t_{\rm esc})$, where $t_{\rm acc}\sim E/ecB$ is taken in the limiting case following Ref.~\citep{2002PhRvD..66b3005A}. We obtain $\gamma_{\rm max}=9.9\times 10^{6}$ for 3C~279 and $\gamma_{\rm max}=7.9\times 10^{7}$ for PKS~2155-304 based on the best-fit parameters.
The best-fit values of $\gamma_{\rm max}$ given by the MCMC method are consistent with these theoretical limits.} ($10^6$) & \multicolumn{3}{c|}{$9.68_{-0.65}^{+45.65}$} & \multicolumn{3}{c}{$1.56_{-0.47}^{+0.78}$} \\ $p_{\rm 1}$ & \multicolumn{3}{c|}{$1.80_{-0.04}^{+0.06}$} & \multicolumn{3}{c}{$1.63_{-0.04}^{+0.03}$} \\ $\delta_{\rm D,0}$ & \multicolumn{3}{c|}{$70.6_{-3.4}^{+4.5}$} & \multicolumn{3}{c}{$53.6_{-7.0}^{+6.7}$} \\ \hline Flare state & Flare 1 & Flare 2 & Flare 3 & Flare 1 & Flare 2 & Flare 3 \\ \hline $L_{\rm e}^{\rm inj} (10^{44}~\rm erg~s^{-1})$ & $80.0_{-8.1}^{+10}$ & $943_{-55}^{+53}$ & $314_{-52}^{+55}$ & $0.32_{-0.05}^{+0.07}$ & $0.21_{-0.04}^{+0.08}$ & $8.74_{-2.27}^{+3.33}$ \\ $p_{\rm 2}$ & $3.75_{-0.16}^{+0.14}$ & $3.37_{-0.05}^{+0.05}$ & $3.29_{-0.15}^{+0.22}$ & $3.72_{-0.15}^{+0.11}$ & $3.24_{-0.10}^{+0.06}$ & $4.27_{-0.07}^{+0.06}$ \\ $\gamma_{\rm break} (10^4)$ & $0.34_{-0.04}^{+0.03}$ & $0.43_{-0.07}^{+0.07}$ & $4.15_{-1.27}^{+1.57}$ & $2.50_{-0.57}^{+0.41}$ & $3.88_{-0.75}^{+1.73}$ & $6.55_{-0.52}^{+0.68}$ \\ $r$ ($10^{-1}~$pc) & $2.01_{-0.17}^{+0.23}$ & $30.8_{-1.9}^{+1.7}$ & $125_{-18}^{+22}$ & $0.12_{-0.01}^{+0.01}$ & $0.51_{-0.07}^{+0.10}$ & $39.2_{-3.3}^{+2.3}$ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure*}[htbp] \centering \includegraphics[width=2\columnwidth]{SED_279_color_v3.eps} \caption{The fitting results for 3C~279. The black solid lines represent background radiation from many dissipation zones, and the orange solid lines are total radiation including background radiation and the emission from a flaring zone. The blue solid lines represent the total emission from the flaring zone. The dashed, dot-dashed and dotted lines are IC emission from the flaring zone for different seed photon fields (see inset legends). The grey points show archival data, and the colored symbols show the data points that correspond to the three different states of 3C~279. The references for the data can be found in Section~\ref{sec:sec3}. \label{fig:3C 279}} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=2\columnwidth]{SED_PKS_BLR_color_v3.eps} \caption{Same as Fig.~\ref{fig:3C 279} but for PKS~2155-304.\label{fig:PKS 2155-304}} \end{figure*} 3C~279 is a very bright and highly variable blazar at all wavelengths. It is classified as an FSRQ at redshift of 0.536. An orphan $\gamma$-ray flare was reported on 20 Dec 2013 \citep{2019ApJ...884..116L}. PKS~2155-304 is a well-known blazar in the southern hemisphere and also has bright and variable emissions, particularly in $\gamma$-ray energies. It is a relatively nearby high synchrotron-peaked (HSP) BL~Lac object at redshift of 0.116. An orphan optical flare lasting a few months was reported for PKS~2155-304 in 2016 \citep{2019hepr.confE..27W}. In addition to the orphan flares, many multi-wavelength correlated flares are observed in both sources. Thus, they are ideal test beds for our model. \begin{figure*}[htbp] \centering \includegraphics[width=2\columnwidth]{SED_PKS_color_v3.eps} \caption{Same as Fig.~\ref{fig:PKS 2155-304} for PKS~2155-304 but without external photon field. The common parameters for all flares are: $R_0=7.8^{+1.7}_{-0.9}\times10^{14}\,$cm, $B_0=0.38^{+0.07}_{-0.06}\,$G, $\gamma_{\rm min}=215^{+216}_{-178}$, $\gamma_{\rm max}=1.09^{+0.33}_{-0.61}\times10^6$, $p_1=1.6^{+0.1}_{-0.1}$, $\delta_{\rm D,0}=136^{+16}_{-23}$. 
The parameters for each flare read: $L_{\rm e}^{\rm inj}=1.15^{+0.40}_{-0.46}\times10^{45}\,{\rm erg~s^{-1}}$, $p_2=3.7^{+0.3}_{-0.2}$, $\gamma_{\rm break}=1.56^{+0.48}_{-0.43}\times10^{4}$ for Flare 1; $L_{\rm e}^{\rm inj}=3.33^{+1.33}_{-1.01}\times10^{44}\,{\rm erg~s^{-1}}$, $p_2=3.3^{+0.2}_{-0.3}$, $\gamma_{\rm break}=7.49^{+4.63}_{-4.63}\times10^{3}$ for Flare 2; $L_{\rm e}^{\rm inj}=5.77^{+2.81}_{-0.27}\times10^{44}\,{\rm erg~s^{-1}}$, $p_2=3.7^{+0.2}_{-0.1}$, $\gamma_{\rm break}=6.12^{+2.45}_{-6.12}\times10^{3}$ for Flare 3. \label{fig:PKS 2155-304_2}} \end{figure*} First, in order to define a low state for each blazar, we search the archival data of each blazar for a period with simultaneous multi-wavelength data at the lowest flux level. We fit the SED of this non-flaring period phenomenologically with a polynomial function and regard it as the background emission component. Then, we choose three flaring states for each blazar that are characterized by different multi-wavelength spectral properties. One of the three flaring states is an orphan optical or $\gamma$-ray flare, and the other two are multi-wavelength flares with different values of the Compton dominance ($q$). It is worth noting that the definition of an orphan flare strongly depends on how the reference non-flaring SED of the blazar is chosen. For example, Refs.~\citep{2019ApJ...884..116L, 2019hepr.confE..27W} define orphan flares by comparing the SED of the flaring state with that of the pre-flare state. We here choose the historically lowest-state SED of the blazar as the background emission. Therefore, the reported orphan flares may appear as multi-wavelength flares when compared with the SED of our chosen non-flaring emission. We search the parameter space, which is composed of eighteen parameters, to find the best-fit values of the model parameters for the three flaring states at once. Note that the multi-wavelength SED of a flare is the superposition of the background component and the flare component, as mentioned in the previous section. The synchrotron radiation and the IC radiation are calculated using the \texttt{naima} Python package \citep{naima}. The best-fit model parameters for each flaring state are obtained with a Markov chain Monte Carlo (MCMC) method ($1\sigma$ error bars are also obtained with the MCMC method). We use the \texttt{emcee} Python package (version 3.0.2) \citep{2013PASP..125..306F}, with a likelihood based on the chi-squared statistic. To save computation time and ensure faster convergence, we first perform an ``eye-ball fit'' to the SED and exclude some inappropriate regions of parameter space. We fit the data with 150 parallel walkers for 1000 steps each, with a burn-in phase of 300 steps. The best-fit parameter values, including their $1\sigma$ error bars, can be found in Table~\ref{tab:parameters}, and the fitting results for the two blazars are presented in the following two subsections. \subsection{3C 279}\label{sec:sec4.1} Fig.~\ref{fig:3C 279} shows the data and best-fit models of four different states of 3C~279. The grey points are historical data which come from the SSDC SED builder\footnote{https://tools.ssdc.asi.it/SED/}. The blue points in the first panel are low-state data collected from February to May of 2010 (period H in Ref.~\citep{2012ApJ...754..114H}). The violet points in the second panel show the SED of the orphan $\gamma$-ray flare (Flare 1) in 2013 as reported by Ref.~\citep{2015ApJ...807...79H}.
The cyan and pink points in third and fourth panels are multi-wavelength flaring state data collected on 16 June 2015 (Flare 2) and 1-8 June 2011 (Flare 3), respectively \citep{2019ApJS..245...18F}. The optical flux of the latter two flares is comparable but the $\gamma$-ray flux of Flare 3 is significantly lower than that of Flare 2. The black curve in each panel is the polynomial function characterizing the background emission (low state) component. The solid blue curve shows the flare emission (high state) component. The solid orange curve represents the sum of these two components. The best-fit parameters of 3C~279, which are listed in Table~\ref{tab:parameters}, are within a reasonable range, except for the large Doppler factor $\delta_{\rm D,0}=70.6_{-3.4}^{+4.5}$. Nonetheless, this value is consistent with other studies. For instance, Ref.~\citep{2004AJ....127.3115J} suggested that the Doppler factor of 3C~279 was at least 39 close to the SMBH by analysing the observation results from the Very Long Baseline Array (VLBA). Ref.~\citep{2017AIPC.1792e0015H} found that a very high bulk Lorentz factor ($>50$) at the jet base was required to explain the minute-scale variability of 3C~279 by considering a standard EC model with conical jet geometry. Recently, Ref.~\citep{2020MNRAS.492.3829L} found that the bulk Lorentz factors of some moving emission features of 3C~279 should exceed 37 by analyzing VLBA images at 43 GHz. These authors argue that turbulent motions at the relativistic sound speed could boost the Doppler factor up to $\sim70$ when such turbulent velocities are directed toward the line of sight relative to the systemic flow. As expected, the position of the flaring zone for the orphan $\gamma$-ray flare is the closest to the SMBH with $r=0.2\,$pc, while the ratio between the synchrotron flux and the EC ($\gamma$-ray) flux increases as the distance $r$ increases. A recent paper studied three orphan $\gamma$-ray flares from three FSRQ sources including 3C~279 \citep{2020arXiv201210291P}. Using a two-zone leptonic model, these authors showed that the orphan $\gamma$-ray flare of 3C~279 might have originated from the region close to its SMBH. Even though the flare reported in that paper is not the same as the one studied here, their conclusions about the production site of the orphan $\gamma$-ray flare are consistent with ours. For FSRQs, the $\gamma$-ray emission mainly arises from the EC process if the flaring zone is relatively close to the SMBH. While the energy density of the magnetic field ($u_{B}(r)\propto B^2(r)$) decreases along the jet as $r^{-2}$, the energy densities of external photons drop more quickly (i.e., $u_{\rm BLR}(r)\propto r^{-3}\delta_{\rm D}^2, u_{\rm DT}(r)\propto r^{-4}\delta_{\rm D}^2$) once the distance is beyond the characteristic radius of BLR or DT ($r>r_{\rm BLR (DT)}$). Therefore, the Compton dominance, considering the BLR and DT components separately, reads $q_{\rm BLR}(r)=u_{\rm BLR}(r)/u_{B}(r)\propto r^{-1}\delta_{\rm D}^2$, and $q_{\rm DT}(r)=u_{\rm DT}(r)/u_{B}(r)\propto r^{-2}\delta_{\rm D}^2$, with $\delta_{\rm D}$ being constant or decreasing with radius. The KN effect would slightly modify the expressions of the Compton ratio but it would not alter the radial dependence of the trend. 
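The radial trend quoted above is easy to verify numerically. The sketch below (Python, illustrative only) combines $u_{\rm B}(r)\propto B(r)^{2}$ with Eqs.~(\ref{eq:5}) and (\ref{eq:6}) and the decelerating Doppler-factor profile of Eq.~(\ref{eq:4}), using the 3C~279 best-fit values of Table~\ref{tab:parameters} purely for definiteness; the helper names are ours, and the substitution $\Gamma\approx\delta_{\rm D}(r)$ follows the approximation adopted in Section~\ref{sec:method}.
\begin{verbatim}
import numpy as np

PC, C = 3.086e18, 3.0e10            # cgs constants
L_DISK = 2e45                       # disk luminosity of 3C 279 (erg/s)
R0, B0, DD0 = 0.91e16, 0.45, 70.6   # best-fit values at 0.1 pc for 3C 279
R_BLR = 0.1 * np.sqrt(L_DISK / 1e46)   # characteristic BLR radius in pc
R_DT = 2.5 * np.sqrt(L_DISK / 1e46)    # characteristic torus radius in pc

def delta_d(r):
    """Decelerating jet: constant delta_D below 0.1 pc, decreasing with log(r) to 2 at 100 pc."""
    return np.where(r <= 0.1, DD0, DD0 - (DD0 - 2.0) / np.log10(1e3) * np.log10(r / 0.1))

def u_b(r):
    """Magnetic energy density with B(r) = B0 (0.1 pc / r)."""
    return (B0 * 0.1 / r) ** 2 / (8.0 * np.pi)

def u_ext(r, r_c, beta, xi=0.1):
    """External (BLR or DT) energy density in the comoving frame, with Gamma ~ delta_D."""
    return (xi * delta_d(r) ** 2 * L_DISK
            / (3.0 * np.pi * (r_c * PC) ** 2 * C * (1.0 + (r / r_c) ** beta)))

for r in (0.2, 3.1, 12.5):          # best-fit distances of the three 3C 279 flares (pc)
    q_blr = u_ext(r, R_BLR, 3.0) / u_b(r)
    q_dt = u_ext(r, R_DT, 4.0) / u_b(r)
    print(f"r = {r:5.1f} pc: q_BLR = {q_blr:.2f}, q_DT = {q_dt:.2f}")
# Beyond r_BLR (r_DT) the ratios fall roughly as r^-1 delta_D^2 (r^-2 delta_D^2), as stated above.
\end{verbatim}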
This scaling suggests that the $\gamma$-ray emission is more intense when the dissipation zone is located closer to the BLR \cite[e.g.][]{2009ApJ...704...38S}, and verifies our speculation that orphan $\gamma$-ray flares tend to appear when dissipation occurs comparatively close to the SMBH. Although no orphan optical flare has been discovered from 3C~279 yet, we may expect to observe orphan optical flares from the source if the flaring zone is located far from the SMBH. \subsection{PKS~2155-304}\label{sec:sec4.2} Fig.~\ref{fig:PKS 2155-304} shows the data and our best-fit models of four different states of PKS~2155-304. The grey points are a compilation of historical spectral data. The blue points in the first panel are non-flaring data collected in 2013. The violet and cyan points in the second and third panels show the spectra during multi-wavelength flares reported in 2014 (Flare 1) and 2015 (Flare 2), respectively. The pink points in the fourth panel give the spectrum measured during an orphan optical flare in 2016 (Flare 3)~\citep{2019hepr.confE..27W}. Note that the very high energy data of PKS~2155-304 are EBL corrected, so we only consider the $\gamma$-ray opacity due to the radiation of the blazar jet. As a BL~Lac object, PKS~2155-304 is not expected to have strong BLR and DT radiation. Indeed, no emission line is observed in its spectrum, posing an upper limit of $1.1\times 10^{41}~\rm erg~s^{-1}$ on its BLR luminosity. Therefore, we consider two cases for the external radiation field in the modeling of PKS~2155-304. In the first case, we assume the presence of a BLR with a luminosity equal to the measured upper limit, leading to a characteristic BLR radius $r_{\rm BLR}=10^{-3}\,$pc and a correspondingly small dusty torus. In the second case, we simply do not take into account any external radiation field. The results are shown in Figs.~\ref{fig:PKS 2155-304} and \ref{fig:PKS 2155-304_2}, respectively. All displayed curves are the best-fit models and have the same meaning as those in Fig.~\ref{fig:3C 279}. \begin{figure*}[htbp] \centering \resizebox{\hsize}{!}{\includegraphics{Figure_new10.eps}} \caption{Results for three toy scenarios for orphan and multi-wavelength flares from a fiducial BL~Lac and FSRQ source. In Scenarios A (decelerating jet case) and B (accelerating jet case), the magnetic field strength and Doppler factor vary along the jet, while in Scenario C both parameters are considered constant. The black solid lines describe the low-state SEDs of PKS~2155-304 and 3C~279 that are used as reference for the fiducial BL~Lac and FSRQ, respectively. All lines have the same meaning as in Fig.~\ref{fig:PKS 2155-304}. \label{fig:1}} \end{figure*} In both cases, the multi-wavelength SEDs in all three flaring states can be satisfactorily reproduced. In the case with external radiation fields, we find a similar trend of the ratio of the synchrotron flux to the IC flux as a function of $r$ as in 3C~279. The orphan optical flare (Flare 3) arises when the flaring zone occurs at a distance (i.e., $r=3.92\,$pc) far beyond the BLR, the characteristic radius of which is $r_{\rm BLR}=10^{-3}\,$pc. The obtained Doppler factor ($\delta_{\rm D,0}=53.6_{-7.0}^{+6.6}$) is somewhat large but still consistent with other studies. For instance, Refs.~\citep{2008MNRAS.384L..19B, 2008MNRAS.386L..28G} suggest that a bulk Lorentz factor above 50 is necessary to explain the minute-scale TeV variability of PKS~2155-304.
However, when no external radiation fields are taken into account, the trend breaks down because the $\gamma$-ray emission in this case is produced by the SSC process that depends on the intensity of the synchrotron radiation and the size of the dissipation zone. The KN effect also plays an important role in the SSC-dominated case. As a result, an extremely large Doppler factor $\delta_{\rm D,0}=136.4^{+15.6}_{-22.5}$ is inferred by the fit. Hence, we do not consider this case as a reasonable solution, at least for the flares of PKS~2155-304. The best-fit results do not correspond well to the observed data, which also brings a very large uncertainty. For a detailed analysis, we refer readers to Appendix~\ref{sec:appendixa}. \begin{table*} \caption{\label{tab:parameters2_B}Indicative parameters for the three illustrative scenarios considered for BL~Lac.} \begin{ruledtabular} \begin{tabular}{c|ccc|ccc|ccc} & \multicolumn{3}{c|}{Scenario A} & \multicolumn{3}{c|}{Scenario B} & \multicolumn{3}{c}{Scenario C} \\ & $0.1\,$pc & $2.0\,$pc & $40.0\,$pc & $0.1\,$pc & $2.0\,$pc & $40.0\,$pc & $0.1\,$pc & $2.0\,$pc & $40.0\,$pc \\ \hline $L_{\rm disk}$ ($10^{44}~\rm erg~s^{-1}$) & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{} \\ $R_0$($10^{16}~$cm) & \multicolumn{3}{c|}{$1$} & \multicolumn{3}{c|}{$1$} & \multicolumn{3}{c}{$1$} \\ {$B_0$\footnote{$B(r)=B_0$ is employed in Scenario C.}}(G) & \multicolumn{3}{c|}{$0.3$} & \multicolumn{3}{c|}{$2$} & \multicolumn{3}{c}{$0.05$} \\ {$\delta_{\rm D,0}$\footnote{$\delta_{\rm D}(r)=\delta_{\rm D,0}$ is employed in Scenario C.}} & \multicolumn{3}{c|}{$60$} & \multicolumn{3}{c|}{$5$} & \multicolumn{3}{c}{$25$} \\ $p_{\rm 1}$ & \multicolumn{3}{c|}{$1.5$} & \multicolumn{3}{c|}{$1.5$} & \multicolumn{3}{c}{$1.5$} \\ $p_{\rm 2}$ & \multicolumn{3}{c|}{$4.8$} & \multicolumn{3}{c|}{$4.8$} & \multicolumn{3}{c}{$4.8$} \\ $\gamma_{\rm min}$ & \multicolumn{3}{c|}{$10$} & \multicolumn{3}{c|}{$10$} & \multicolumn{3}{c}{$10$} \\ $\gamma_{\rm break} (10^3)$ & $7.0$ & $41.0$ & $348.3$ & $20.0$ & $70.4$ & $247.7$ & \multicolumn{3}{c}{26.6} \\ $\gamma_{\rm max}$ ($10^6$) & $2.0$ & $11.7$ & $99.5$ & $2.0$ & $7.0$ & $24.8$ & \multicolumn{3}{c}{7.6} \\ $L_{\rm e}^{\rm inj} (10^{43}~\rm erg~s^{-1})$ & $4.1$ & $48.1$ & $267.5$ & $0.7$ & $0.7$ & $0.35$ & $43.1$ & $43.1$ & $2.2$ \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{tab:parameters2_F}Indicative parameters for the three illustrative scenarios considered for FSRQ.} \begin{ruledtabular} \begin{tabular}{c|ccc|ccc|ccc} & \multicolumn{3}{c|}{Scenario A} & \multicolumn{3}{c|}{Scenario B} & \multicolumn{3}{c}{Scenario C} \\ & $0.1\,$pc & $2.0\,$pc & $40.0\,$pc & $0.1\,$pc & $2.0\,$pc & $40.0\,$pc & $0.1\,$pc & $2.0\,$pc & $40.0\,$pc \\ \hline $L_{\rm disk}$ ($10^{44}~\rm erg~s^{-1}$) & \multicolumn{3}{c|}{$8$} & \multicolumn{3}{c|}{$8$} & \multicolumn{3}{c}{$8$} \\ $R_0$($10^{16}~$cm) & \multicolumn{3}{c|}{$5$} & \multicolumn{3}{c|}{$1$} & \multicolumn{3}{c}{$5$} \\ {$B_0$}(G) & \multicolumn{3}{c|}{$1.8$} & \multicolumn{3}{c|}{$0.16$} & \multicolumn{3}{c}{$0.05$} \\ {$\delta_{\rm D,0}$} & \multicolumn{3}{c|}{$60$} & \multicolumn{3}{c|}{$15$} & \multicolumn{3}{c}{$25$} \\ $p_{\rm 1}$ & \multicolumn{3}{c|}{$1.5$} & \multicolumn{3}{c|}{$1.5$} & \multicolumn{3}{c}{$1.5$} \\ $p_{\rm 2}$ & \multicolumn{3}{c|}{$4.8$} & \multicolumn{3}{c|}{$4.8$} & \multicolumn{3}{c}{$4.8$} \\ $\gamma_{\rm min}$ & \multicolumn{3}{c|}{$10$} & \multicolumn{3}{c|}{$10$} & \multicolumn{3}{c}{$10$} \\ $\gamma_{\rm 
break} (10^3)$ & $0.2$ & $1.2$ & $10.0$ & $1.0$ & $3.5$ & $12.4$ & \multicolumn{3}{c}{1.9} \\ $\gamma_{\rm max}$ ($10^6$) & $2.0$ & $11.7$ & $99.5$ & $20.0$ & $11.7$ & $99.5$ & \multicolumn{3}{c}{18.6} \\ $L_{\rm e}^{\rm inj} (10^{43}~\rm erg~s^{-1})$ & $0.07$ & $27.3$ & $211.4$ & $0.3$ & $0.3$ & $15$ & $1.4$ & $42.1$ & $1.4$ \\ \end{tabular} \end{ruledtabular} \end{table*} \section{Influence of the Compton dominance on the types of blazar flares}\label{sec:sec4} To produce an orphan flare in a certain energy band within our model, the flux of the flaring zone in that energy band should significantly exceed the flux of the background emission, while the flux in any other energy band should remain below the background emission by definition. Therefore, the key to producing an orphan flare, provided a sufficient flux from the flaring zone can be produced, is the comparison between the shape of the flaring zone's SED and that of the background emission's SED. The main feature of the double-humped SED is the relative amplitude of the two humps, which can be described by the Compton dominance $q$. The ratio between the Compton dominance of the flaring zone $q$ and that of the jet's non-flaring emission, which we hereafter denote as $\chi$, determines the type of the blazar flare. Blazars tend to present an orphan $\gamma$-ray flare if $\chi\gg1$. On the contrary, an orphan optical flare would appear if $\chi\ll 1$. If the Compton dominances of the flaring and quiet states are comparable, a multi-wavelength flare is most likely to be produced within our model. We therefore conduct a more general study of the influence of model parameters on the Compton ratio of the flaring zone and the multi-wavelength properties of a blazar flare in this section. For this purpose, we consider three generic scenarios. The first one is the same as the model introduced in Section \ref{sec:method}, and is referred to as Scenario A. In addition, we consider an accelerating jet case (Scenario B) in which the Doppler factor increases along the jet and we employ $\delta_{\rm D}(r)=\delta_{\rm D,0}\left(r/{0.1~\rm pc}\right)^{0.16}$, following observations of the M87 jet \citep{2019ApJ...887..147P}. In Scenario C, the magnetic field strength $B$ and the Doppler factor $\delta_D$ are assumed independent of the jet radius. For each scenario, an FSRQ (including the so-called ``masquerading'' BL~Lacs, i.e., \cite{2013MNRAS.431.1914G}) and a true BL~Lac (without external radiation fields) will be studied, with the flaring zone located at 0.1, 2.0 and 40.0\,pc from the SMBH, respectively. The results are displayed in Fig.~\ref{fig:1} and the parameters can be found in Table~\ref{tab:parameters2_B} for the BL~Lac and Table~\ref{tab:parameters2_F} for the FSRQ. For this example, we intentionally choose the Compton dominance of the jet's background emission to be unity so that $\chi=q$. The parameters in Tables~\ref{tab:parameters2_B} and \ref{tab:parameters2_F} are selected to ensure that all three types of flares (multi-wavelength flares and orphan optical/$\gamma$-ray flares) appear, as far as possible, when the flaring region is located at the three different positions. For example, a smaller $\delta_{\rm D,0}$ may prevent an orphan $\gamma$-ray flare from appearing for FSRQs in all three scenarios.
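This trichotomy can be phrased as a simple rule of thumb. The toy function below (Python) classifies a flare from $\chi$, assuming the flaring zone is bright enough to outshine the background in its favoured band; the numerical threshold is an arbitrary illustrative choice, not a value derived in this work.
\begin{verbatim}
def flare_type(q_flare, q_background=1.0, threshold=3.0):
    """Classify a flare from chi = q_flare / q_background.

    chi >> 1 favours an orphan gamma-ray flare, chi << 1 an orphan optical flare,
    and chi ~ 1 a multi-wavelength flare; 'threshold' sets how far from unity chi
    must be and is an arbitrary illustrative choice. The flaring zone is assumed
    to outshine the background emission in its favoured band.
    """
    chi = q_flare / q_background
    if chi > threshold:
        return "orphan gamma-ray flare"
    if chi < 1.0 / threshold:
        return "orphan optical flare"
    return "multi-wavelength flare"

# Example: with the background Compton dominance set to unity (chi = q), a flaring zone
# with q = 20 favours an orphan gamma-ray flare and q = 0.05 an orphan optical flare.
print(flare_type(20.0), flare_type(0.05))
\end{verbatim}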
In Scenario A, orphan $\gamma$-ray flares in an FSRQ source are more likely to show up when the flaring zone is comparatively close to the SMBH, while orphan optical flares are apt to appear far away from the SMBH (see the second row from the top in Fig.~\ref{fig:1}). For a BL~Lac object, the results in Scenario A show the opposite trend (see the first row from the top in Fig.~\ref{fig:1}): a more intense $\gamma$-ray flare arises when the flaring zone is located far away from the SMBH. The reason is the same as discussed for PKS~2155-304 (see also Appendix~\ref{sec:appendixa}). In Scenario B, the situation for BL~Lacs is opposite to that in Scenario A and is more akin to that of FSRQs in Scenario A. Orphan $\gamma$-ray flares tend to occur at small distances, while orphan optical flares can be found at large distances from the SMBH. On the other hand, orphan optical flares can hardly appear in FSRQs, in contrast to Scenario A. The Compton dominance ($q$) is still the key to understanding the difference. Contrary to the decelerating jet case, the increasing Doppler factor in Scenario B can slow down the decrease of the Compton dominance along the jet. Therefore, it is necessary that the dissipation occurs at a distance very far from the SMBH to produce an orphan optical flare with a Compton dominance much less than unity. A detailed analysis can be found in Appendix~\ref{sec:appendixb}. The results in Scenario C are similar to those in Scenario B, except that the orphan $\gamma$-ray flare can appear at a medium distance ($r=2$~pc) in Scenario B. This can also be explained by considering the Compton dominance. The energy density of the magnetic field is fixed in Scenario C. So, the Compton dominance $q(r)$ decreases with distance as $r^{-3}$ (or $r^{-4}$) for the BLR (DT) if the flaring zone is located beyond the characteristic distances of the BLR (DT) in an FSRQ. For a BL~Lac object, the high-energy emission arises from the SSC process. To focus on the influence of the flaring zone's position (i.e., $r$) on the Compton dominance, let us assume a fixed synchrotron luminosity for the flare. In this case we can obtain the energy density of the synchrotron radiation $u_{\rm syn}\propto r^{-2}$. Thus, the Compton dominance $q(r)$ decreases as $r^{-2}$. The KN factor ($f_{\rm KN}$) remains unchanged given a fixed magnetic field strength and Doppler factor. If the synchrotron luminosity of the flaring zone also decreases with increasing distance, it would lead to a faster decline of the Compton dominance $q(r)$ with respect to $r$. As a result, it becomes more difficult to generate $\gamma$-ray flares at a larger distance. \section{Discussion}\label{sec:sec5} \subsection{Duration of blazar flares} Although we mainly focus on the SED of blazar flares in the present work, the flare duration is another important property. We here briefly discuss the expectations in our model. The duration timescale of a flare cannot be shorter than the light-crossing time of the dissipation zone in the observer's frame, i.e., $t_{\rm lc}\sim(1+z)R(r)/(c\delta_{\rm D})$, which depends on the size of the dissipation zone $R$ or its distance $r$ from the SMBH. In reality, the particle radiative cooling timescale $t_{\rm cool}$, the adiabatic timescale $t_{\rm ad}$ or the escape timescale $t_{\rm esc}$ may determine the flare duration if they are longer than $t_{\rm lc}$.
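As a quick numerical illustration of $t_{\rm lc}$, the sketch below (Python) evaluates the observer-frame light-crossing time from the best-fit values in Table~\ref{tab:parameters}, using $R(r)$ from Eq.~(\ref{eq:2}) and the decelerating $\delta_{\rm D}(r)$ from Eq.~(\ref{eq:4}); the function names are ours and the snippet is not part of any released code.
\begin{verbatim}
import numpy as np

C = 3e10        # speed of light in cm/s
DAY = 86400.0

def doppler(r_pc, dD0):
    """Decelerating-jet Doppler factor: constant below 0.1 pc, then decreasing
    with log(r) down to 2 at 100 pc."""
    if r_pc <= 0.1:
        return dD0
    return dD0 - (dD0 - 2.0) / np.log10(100.0 / 0.1) * np.log10(r_pc / 0.1)

def t_lc_days(r_pc, R0_cm, dD0, z):
    """Observer-frame light-crossing time t_lc = (1 + z) R(r) / (c delta_D)."""
    R = R0_cm * (r_pc / 0.1)        # conical-jet relation R(r) = R0 (r / 0.1 pc)
    return (1.0 + z) * R / (C * doppler(r_pc, dD0)) / DAY

# 3C 279 (z = 0.536, R0 = 0.91e16 cm, delta_D0 = 70.6), best-fit distances of the three flares:
for r in (0.201, 3.08, 12.5):
    print(f"r = {r:5.2f} pc  ->  t_lc = {t_lc_days(r, 0.91e16, 70.6, 0.536):.2f} days")
# This reproduces the ~0.17, ~4.6 and ~30 day light-crossing times listed for 3C 279.
\end{verbatim}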
In addition, the particle acceleration timescale $t_{\rm acc}$ or the injection timescale $t_{\rm inj}$ of accelerated particles into the radiation zone could affect the flare duration. These timescales depend on the mechanism that triggered the dissipation, which is not specified in our work. Hence, for the following discussion, we focus on the radiative loss and escape timescales. According to our setup in Section~\ref{sec:method}, all these timescales are shorter for smaller $R$, thus the light-crossing time can be used as a proxy for the flare duration. \begin{table*}[htbp] \caption{\label{tab:timescale}The light-crossing times and observed durations of the flaring states for 3C~279 and PKS~2155-304.} \begin{ruledtabular} \begin{tabular}{c|cc|cc|cc} & \multicolumn{2}{c|}{3C~279} & \multicolumn{2}{c|}{PKS~2155-304} & \multicolumn{2}{c}{PKS~2155-304 (no BLR/DT)}\\ & $t_{\rm lc}$ & $\Delta t_{\rm dur}$ & $t_{\rm lc}$ & $\Delta t_{\rm dur}$ & $t_{\rm lc}$ & $\Delta t_{\rm dur}$ \\ \hline Flare 1 & $0.17^{+0.05}_{-0.04}$ days & $0.5$ days\footnote{The observed duration of Flare 1 for 3C~279 is reported by Ref.~\citep{2019ApJ...884..116L}. The others are approximate durations estimated from the light curves of the flares.} (G)\footnote{`G' denotes that this duration is estimated from the GeV band, `O' from the optical, `X' from the X-ray, and `T' from the TeV band.} & $0.03^{+0.02}_{-0.01}$ days & $\sim20$ days (X/G) & $0.08^{+0.10}_{-0.03}$ days & $\sim20$ days (X/G)\\ Flare 2 & $4.55^{+0.96}_{-0.94}$ days & $\sim2$ days (G) & $0.13^{+0.10}_{-0.05}$ days & $\sim20$ days (T) & $0.03^{+0.03}_{-0.01}$ days & $\sim20$ days (T)\\ Flare 3 & $30^{+13}_{-9}$ days & $\sim23$ days (G) & $22^{+12}_{-7}$ days & $\sim108$ days (O) & $0.05^{+0.06}_{-0.03}$ days & $\sim108$ days (O)\\ \end{tabular} \end{ruledtabular} \end{table*} In the stochastic dissipation model, orphan optical flares may arise if the flaring zone occurs at a distance far from the SMBH, and the range of distances is from a few parsecs to hundreds of parsecs. On the contrary, orphan $\gamma$-ray flares arise for a small range of distances comparatively close to the SMBH for FSRQs in both Scenarios A and B, as well as for BL~Lac objects in Scenario C. As a consequence, we may expect that the duration of orphan optical flares is generally longer than that of orphan $\gamma$-ray flares, regardless of whether the duration is determined by $t_{\rm lc}$ or the other three timescales. We calculate $t_{\rm lc}$ for each flare studied in Section~\ref{sec:sec3} using the best-fit parameters shown in Table~\ref{tab:parameters} and compare it with the observed duration of the flaring state $\Delta t_{\rm dur}$, which is approximated by the time span between the moments when the flare's flux rises above and drops back below half of its peak value (i.e., the full width at half maximum of the flare's light curve). The results are shown in Table~\ref{tab:timescale}, and we can see that $t_{\rm lc}\lesssim \Delta t_{\rm dur}$ is generally satisfied for all flares. Also, as expected, the observed flare duration increases with the size of the dissipation zone derived in our model. Observations of other blazar flares are also consistent with this expectation. For example, the durations of all the other reported orphan optical flares are on month-long scales: about 3 months for PKS~0208-512~\citep{2013ApJ...763L..11C} and a few months for PKS~2155-304~\citep{2019hepr.confE..27W}.
In contrast, some of the reported orphan $\gamma$-ray flares show intraday durations: 12 hours for 3C~279~\citep{2019ApJ...884..116L} and about 5 hours for PKS~1222+21~\citep{2016MNRAS.463L..26B}. Scenario A for BL~Lacs, on the other hand, suggests that the duration of orphan $\gamma$-ray flares could be longer than that of orphan optical flares for BL~Lac objects. This is consistent with the reported orphan $\gamma$-ray flare of the blazar 1055+018, the duration of which is above 100 days~\citep{2017ApJ...850...87M}. Note that this object could be classified as a quasar \citep{1995ApJ...443..578M} based on the rest-frame equivalent width ($8~\mathring{\rm A}$) of the C III $\lambda1909$ emission line, but in some literature it is referred to as a BL~Lac object because of its relatively weak emission lines \cite[e.g.][]{2014ApJ...789..135W,2015ATel.7114....1J}. \subsection{Orphan flare rate} Ref.~\cite{2019ApJ...880...32L} suggested that the true orphan flare rates were $54.5~\%$ and $20~\%$ for optical and $\gamma$-ray flares, respectively, by analysing a sample of 107 BL~Lac objects, 64 FSRQs, 4 radio galaxies and 3 unclassified sources. In the stochastic dissipation model, the volume of the jet that is likely to produce an orphan optical flare is much larger than that likely to produce an orphan $\gamma$-ray flare, at least for FSRQs or BL~Lacs with weak BLR radiation. The model would predict a much larger intrinsic rate of orphan optical flares than that of orphan $\gamma$-ray flares, if, for example, dissipation takes place in the jet with an equal probability per unit distance or per unit volume. However, since the magnetic field strength and the radiation field intensity are weaker (i.e., the radiation efficiency of electrons is lower) at larger distances, only very strong dissipation occurring at large distances could manifest itself as a distinct flare. This may explain why the observed rate of orphan optical flares is only $\sim 2.5$ times greater than that of orphan $\gamma$-ray flares, and is also consistent with the large electron injection luminosity for flares occurring at large $r$, as shown in Table~\ref{tab:parameters}. In addition, the observed rates of orphan flares might also imply that the dissipation process tends to occur at smaller rather than larger distances in our model. This is not unreasonable because we may generally expect the jet to be more magnetized at small distances \citep{2019ARA&A..57..467B, 2021MNRAS.502.1145Z}, and hence instabilities or magnetic reconnection may develop more readily there, while the magnetic energy might already have been (partially) consumed at large distances due to radiation or adiabatic expansion of the jet. From Section~\ref{sec:sec4} we see that Scenario A for BL~Lacs suggests an opposite preference for the locations of orphan optical and orphan $\gamma$-ray flares compared with FSRQs and with Scenarios B and C for BL~Lacs. Such a difference arises from the different spatial evolution of the jet's parameters. Therefore, if the orphan flare rate can be obtained from a sample composed only of BL~Lacs, it may be possible to distinguish these three different scenarios for BL~Lacs and study the parameter evolution along the jet. It is worth noting that the rates of orphan flares quoted above have been determined after accounting for sampling and instrumental sensitivity limitations.
To determine the fraction of observed true orphan flares, one must estimate the fraction of observed orphan events which are simply due to the limited sensitivity of the telescopes used to monitor the sources at various wavelengths. In the analysis of Ref.~\citep{2019ApJ...880...32L}, the true fraction of orphan $\gamma$-ray flares was estimated by assuming that the true fraction of multi-wavelength (non-orphan) flares is constant as a function of the brightness of the source, and by comparing the expected number of multi-wavelength flares at infinite instrumental sensitivity to that observed. \subsection{Jet's background emission} In our model, we consider that dissipation may occur along the entire jet and form numerous emitting blobs. The sum of the emission from those blobs that are not undergoing intense dissipation constitutes the background emission of the jet, which may represent the low-state emission of the blazar. The envisaged scenario somewhat resembles the conical jet model, which was proposed by Ref.~\citep{1979ApJ...232...34B} and further developed in many studies \cite[e.g.][]{1980ApJ...235..386M,1981ApJ...243..700K,1985A&A...146..204G,2006MNRAS.367.1083K, 2012MNRAS.423..756P}, in that both models consider the jet as an extended emission region. The difference lies in the particle injection process: in the conical jet model, particles are injected at the jet base and advected to larger distances. Re-acceleration of the particles along their propagation is needed in order to compensate for the severe radiative cooling and adiabatic energy losses \citep{2015MNRAS.453.4070P, 2019MNRAS.485.1210Z}. In our model, by contrast, relativistic particles are injected locally at both small and large distances, rather than being injected at the jet base and transported to larger distances. \begin{figure}[htbp] \centering \resizebox{\hsize}{!}{\includegraphics{VHE3.eps}} \caption{Optical depth, $\tau_{\gamma\gamma}$, and Compton dominance, $q$, as a function of distance from the jet base, $r$. This result is parameter dependent, and here we calculate it with the best-fit parameters for 3C~279. For a much stronger initial magnetic field, $q\approx 1$ even inside the BLR. \label{fig:VHE}} \end{figure} In the previous sections, we ignored the jet's background emission as a target photon field for the IC process of electrons in the flaring zone and for the $\gamma\gamma$ absorption process. To accurately evaluate its contribution, we would need to model the distribution of emissivity of the background component along the entire jet, which is beyond the scope of this work. However, it may be safe to ignore its influence in the model. Taking the flares of 3C~279 and PKS~2155-304 as examples, we consider the background emission as a target photon field for Compton scattering. For simplicity, we assume that the entire background emission is emitted from the same region as the flaring zone, which significantly overestimates the number density of the background radiation field. We find that the resulting fluxes change little compared to those shown in Figs.~\ref{fig:3C 279}-\ref{fig:PKS 2155-304_2} for $E_\gamma<1\,$TeV for the same parameters listed in Table~\ref{tab:parameters}. The most significant change is found in the case of PKS~2155-304 without external photon fields: the flux increases by 30\% around 10\,MeV for Flare 3 (where no data are available) and by 20\% around 100\,GeV for Flare 1 (where only one data point from HESS is affected); hence our conclusions are not influenced.
In the presence of an external photon field, which would then dominate the IC process, the influence of the background emission is negligible. \subsection{Observational tests of the stochastic dissipation model} \subsubsection{Absorption of the gamma-ray emission} High-energy $\gamma$-ray photons may not be able to escape from their production site because of the absorption caused by the BLR and the DT radiation via the Breit-Wheeler pair production process. The cross section of the process peaks at 1\,MeV in the center-of-momentum frame. Since the typical photon energy of the BLR radiation is about 10\,eV (i.e. the Lyman-$\alpha$ emission) and the DT radiation is at the infrared band, the absorption is particularly important for photons of energy $\gtrsim 100$\,GeV. Therefore, the location of the emission zone can be determined by searching for the absorption features in the very-high-energy (VHE, energy above 100\,GeV) $\gamma$-ray spectrum. For example, Ref.~\citep{2011ApJ...730L...8A} found that the MAGIC observations of the FSRQ PKS~1222+21 show no spectral cutoff, and concluded that the $\gamma$-ray emission region is located outside the BLR. The gamma-ray opacity is related to the density of the target radiation field, which is also relevant for the IC emission. We show the opacity $\tau_{\gamma\gamma}$ and the Compton dominance as a function of distance $r$ in Fig.~\ref{fig:VHE} with the best-fit parameters for 3C~279. It can be seen that the Compton dominance approaches unity at jet distance $r\gtrsim 10\,$pc, where the VHE gamma-ray opacity is much smaller than unity. This implies that there should not be an absorption feature at the VHE band in the spectrum of a multi-wavelength flare which has comparable synchrotron flux and IC flux. This is consistent with previous studies \cite[e.g.][]{2014A&A...569A..46A,2021A&A...648A..23H} reporting that there is no such absorption feature during multi-wavelength blazar flares. Furthermore, Ref.~\citep{2019A&A...627A.159H} suggested that the emission zone is confidently beyond the BLR and placed it at $r\gtrsim 1.7\times10^{17}~{\rm cm}$ by fitting the $\gamma$-ray data of 3C~279 observed in June 2015, which corresponds to Flare 2 of 3C~279 in this paper. This is consistent with our fitting results. Although a clear absorption feature in the VHE spectrum of blazar flares has not been reported, our model predicts that such a feature could appear in orphan gamma-ray flares with a high Compton dominance $q\gg 1$. Future observations with next-generation VHE gamma-ray telescopes such as CTA will thus be in a position to test the stochastic dissipation model. In addition to the VHE $\gamma$-ray photons, even $\gtrsim 10\,$GeV photons may be absorbed by the BLR radiation for high-redshift sources, given favorable conditions (e.g., a very compact radiation zone and intense BLR radiation). Many studies tested this scenario and found no evidence for the expected BLR absorption in the Fermi-LAT spectra \cite[e.g.][]{2012MNRAS.425.2015P,2013MNRAS.435L..24T,2014ApJ...790...45P,2018MNRAS.477.4749C,2019ApJ...877...39M}. On the other hand, there are studies suggesting that the $\gamma$-ray emission must arise within the BLR, because the $\gamma$-ray spectra can not be described by a simple power law for some FSRQ sources \cite[e.g.][]{2010ApJ...717L.118P,2021MNRAS.500.5297A}, although this feature may also be related to the maximum energy of electrons or the KN effect. 
In any case, observations of dozens of sources in the GeV band are probably also a promising way to study the position of blazars' $\gamma$-ray flares and test the stochastic dissipation model. \subsubsection{Shift of the radiation center} The stochastic dissipation model suggests that different types of flares can arise when strong energy dissipation occurs at different distances from the base of the jet. Thus, we may expect a shift of the radiation centroid of the jet during the flaring state. Instruments with high spatial resolution may resolve the flaring zone. The Very Long Baseline Interferometry (VLBI) technique in the radio band (i.e. $\sim1-40$~GHz) may reach a sub-milliarcsecond resolution and could provide a decisive test of the model. Ref.~\citep{2017A&A...598L...1K} found that a flare of an AGN induces a change of distance between the apparent jet base and the absolute radio VLBI reference point. In the framework of the stochastic dissipation model, the number of electrons in the strong dissipation zone must be significantly enhanced in order to produce the blazar flare. On the other hand, however, the synchrotron self-absorption (SSA) may be strong and severely attenuate the radio emission of the flaring zone. To be more quantitative, we can write the synchrotron luminosity from a spherical dissipation zone as \citep{2001A&A...367..809K}: \begin{equation}\label{eq:luminosity} L(\nu)=4\pi^{2}R^{2}\frac{j(\nu)}{k(\nu)}\left\{1-\frac{2}{\tau^{2}}\left[1-e^{-\tau}\left(\tau-1\right) \right] \right\}, \end{equation} where $j(\nu)$ is the synchrotron emission coefficient, $k(\nu)$ is the absorption coefficient and $\tau$ is the opacity of the SSA effect. The ratio of $j(\nu)$ to $k(\nu)$ is independent of the electron injection luminosity. So the synchrotron luminosity is proportional to \begin{equation}\label{eq:luminosity2} L_{\rm SSA}(\nu)\propto{R^2B^{-\frac{1}{2}}\nu^{\frac{5}{2}}} \end{equation} in the case of $\tau\gg 1$. Therefore, the radio emission may not be sensitive to the enhanced electron injection luminosity during strong flares for a given $R$ and $B$. This can be seen in Fig.~\ref{fig:7}, which shows the relation between the electron injection luminosity and the radio flux of the flaring region at different radii. We see that at a comparatively low frequency such as 8\,GHz, a huge electron injection luminosity does not significantly enhance the radio flux at small radii (e.g., at 0.01\,pc), but may be revealed when the intense dissipation occurs at a large distance with $r>10\,$pc. Therefore, we may expect an orphan optical flare to be accompanied by the brightening or emergence of a radio knot at a large distance along the jet. Of course, this also depends on the ratio of the radio flux during the flare to that in the low state. If the low-state radio emission is already quite strong, the shift of the radio center during the flare may not be easy to confirm. Observations at a higher frequency can alleviate the SSA effect. At 230\,GHz, as shown in Fig.~\ref{fig:7}, the SSA effect is already unimportant at $r>0.1\,$pc. The Event Horizon Telescope (EHT) operates at this frequency and can resolve the innermost jet of 3C~279 with an angular resolution of $\sim20~{\rm\mu as}$ \citep{2020A&A...640A..69K}, which corresponds to a physical length of about 3.7 pc for a viewing angle of $2^{\circ}$ \citep{2017ApJ...846...98J}. With this resolution, it may resolve the dissipation zones of orphan optical flares and even some multi-wavelength flares in 3C~279.
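Returning to the scaling in (\ref{eq:luminosity2}), the short sketch below evaluates the optically thick flux ratio between 230~GHz and 8~GHz; the numbers are illustrative placeholders rather than fitted parameters of this work, and serve only to show how weakly the SSA-thick emission responds at low frequencies.
\begin{verbatim}
# Illustrative evaluation of the SSA-thick scaling
# L_SSA ~ R^2 * B^(-1/2) * nu^(5/2); all numbers are placeholders.

def l_ssa(R_cm, B_gauss, nu_hz):
    return R_cm**2 * B_gauss**-0.5 * nu_hz**2.5   # arbitrary units

R, B = 1e16, 0.3                      # assumed blob radius [cm], field [G]
ratio = l_ssa(R, B, 230e9) / l_ssa(R, B, 8e9)
print(f"L(230 GHz)/L(8 GHz) = {ratio:.1e}")   # (230/8)^2.5 ~ 4e3
# The ratio is independent of R and B, so while the zone stays optically
# thick an enhanced electron injection barely changes the low-frequency flux.
\end{verbatim}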
A closer blazar would be a better target for such EHT imaging studies. Note that, even at small radii with $r=0.01\,$pc, where orphan $\gamma$-ray flares are more likely to take place, the 230\,GHz radio flux can be increased by about a factor of 3 during the flare. Although the EHT cannot spatially resolve the flaring zone at such a small distance, it is possible to observe a moderate enhancement of the radio flux in the innermost core during an orphan $\gamma$-ray flare. Again, this depends on the radio flux ratio of the flaring state to the low state. \begin{figure}[htbp] \centering \resizebox{\hsize}{!}{\includegraphics{radio3.eps}} \caption{Synchrotron radio flux from the flaring region as a function of electron injection luminosity assuming dissipation at different jet distances. The solid lines represent results at 230~GHz, and the dotted lines represent results at 8~GHz. The parameters are: $z=0.536$, $B_0=0.3\,$G, $R_0=10^{16}\,$cm, $\delta_{\rm D,0}=60$, $p_1=1.5$, $p_2=4.8$, $\gamma_{\rm min}=10$, $\gamma_{\rm break}=1.6\times10^{3}$, $\gamma_{\rm max}=10^{6}$. \label{fig:7}} \end{figure} The space mission $Gaia$ provides optical centroid positions for AGN with (sub-)milliarcsecond accuracy. Based on $Gaia$, a number of publications have found significant radio-optical offsets for AGN by analyzing VLBI positions and $Gaia$ photocenters \cite[e.g.][]{2017MNRAS.467L..71P,2017MNRAS.471.3775P,2019MNRAS.482.3023P,2019ApJ...871..143P,2017A&A...598L...1K,2020MNRAS.493L..54K}. We can also try to measure the shift of the $Gaia$ optical photocenter during the flaring state to test the stochastic dissipation model. For example, let us assume that the optical photocenter of 3C~279 in the low state is located 12 pc away from the SMBH. In Flare 1, which occurs at 0.2 pc away from the SMBH based on our fitting, the optical flux is about ten times higher than that in the low state (see Fig.~\ref{fig:3C 279}). This translates to a shift of the optical photocenter by $\sim 0.2$ mas (given the source redshift and the viewing angle of $2.1^{\circ}$ from Ref.~\citep{2012ApJ...754..114H}). Such a shift exceeds the pointing accuracy of $Gaia$ for a bright source and hence is measurable by $Gaia$ \citep{2021MNRAS.505.4616B}. \section{Conclusions}\label{sec:sec6} In this paper, we have succeeded in interpreting the spectral variety of blazar flares in a unified physical picture. In the considered framework, dissipation events may take place and accelerate particles along the jet at random distances from the SMBH, where the electromagnetic environments can be quite different. As a result, different spectral shapes of the emergent radiation can arise from these dissipation zones at different distances. Our model, which we have coined the ``stochastic dissipation model'', and the main conclusions of the paper are summarized below. (i) In our stochastic dissipation model, there are at least two emission components during a blazar flare. One component is the jet's background emission, which can be thought of as a superposition of radiation from numerous but comparatively weak dissipation zones along the jet. The other component originates from a flaring zone (with stronger dissipation) that is responsible for the flare. (ii) We assume that the flaring zone may randomly appear at different positions along the jet.
The physical quantities describing flares from the same blazar, such as the radius of the flaring zone, the magnetic field strength, and the Doppler factor, are not independent, but are intrinsically related to each other through the distance of the flaring zone from the SMBH. (iii) We have applied our model to explain the SEDs of three flaring states of 3C~279 and PKS~2155-304. The SEDs of the different flaring states can be explained by our model with six common parameters and four separate parameters for both PKS~2155-304 and 3C~279. Our model for PKS~2155-304, which is categorized as a BL~Lac object, strongly favors the presence of a weak BLR radiation of luminosity $\sim 10^{41}~\rm erg~s^{-1}$, which is consistent with the reported upper limit to the BLR luminosity of this source. (iv) The ratio $\chi$ between the Compton dominance of the flaring zone and that of the jet's background emission determines the spectral feature of the blazar flare. If the ratio is much larger than unity, the blazar tends to present an orphan $\gamma$-ray flare; on the contrary, if the ratio is much smaller than unity, an orphan optical flare is more likely to occur. (v) For FSRQs, including ``masquerading'' BL~Lacs, the Compton dominance ratio $\chi$ would be much larger than unity (corresponding to an orphan $\gamma$-ray flare) when the dissipation occurs comparatively close to the SMBH (e.g., $r\lesssim 1\,$pc), while the ratio would be much smaller than unity (corresponding to an orphan optical flare) when the dissipation occurs far away from the SMBH (e.g., $r\gtrsim 10\,$pc). (vi) For (true) BL~Lacs, the situation is similar to that of FSRQs if the model parameters, such as the magnetic field and the Doppler factor, do not vary with the distance of the dissipation zone from the SMBH ($r$). On the other hand, if these parameters decrease with $r$, the situation becomes more complex due to the KN effect. A dedicated study is needed to elucidate the influence of the KN effect in the latter case. (vii) The flare duration and the orphan flare rates expected in the model are consistent with orphan flare observations made to date. In this work we only consider the radiation of electrons in the jet. In principle, protons can also be accelerated in the dissipation zone and radiate neutrinos via hadronic interactions with the radiation field in blazars. Indeed, an orphan neutrino flare from TXS 0506+056 was reported by the IceCube Neutrino Observatory \cite{2018Sci...361..147I}. In the work of Ref.~\citep{2020arXiv201103681X} it was shown that the neutrino flare may have been produced by a dissipation event occurring at the jet base, where the external radiation field is dominated by the X-ray corona of the SMBH. This interpretation is consistent with our model. \acknowledgements We thank the anonymous referee for the enlightening suggestions and Alicja Wierzcholska for her help with the data of PKS~2155-304. The work is supported by the NSFC grants 11625312, U2031105 and 11851304, and the National Key R$\&$D program of China under the grant 2018YFA0404203. MP acknowledges support from the MERAC Foundation. Part of this work is based on archival data, software or online services provided by the Space Science Data Center - ASI. The calculation of radiation is based on the \texttt{naima} Python package, and the data fitting is based on the \texttt{emcee} Python package.
\section{Introduction} \label{intro} The astrophysical plasmas characterized by a high Lundquist number $S\equiv Lv_A/\eta$ ($L\equiv$ length scale of the magnetic field \textbf{B} variability, $v_A\equiv$ Alfv\'en speed, and $\eta\equiv$ magnetic diffusivity) satisfy Alfv\'en's flux-freezing theorem in the presence of laminar plasma flow, ensuring that magnetic field lines remain tied to fluid parcels \citep{Alfven}. The scenario is different in a turbulent magnetofluid; see \citet{Vishnaic1999, Vishnaic2000, Eyink} for details. An inherently large $L$ implies a large $S$ and ensures flux freezing in astrophysical plasmas. In particular, the solar corona, with global $L\approx 100 ~\rm Mm$, $v_{A}\approx10^{6}$ ms$^{-1}$, ${\rm B}\approx10$ G, and $\eta\approx1$ m$^2$s$^{-1}$ (calculated using the Spitzer resistivity), has $S\approx10^{14}$ \citep{Aschwanden}. However, the coronal plasma also exhibits diffusive behavior in the form of solar transients---such as solar flares, coronal mass ejections (CMEs), and jets. All of these are manifestations of magnetic reconnections that in turn lead to dissipation of magnetic energy into heat and kinetic energy of plasma flow, accompanied by a rearrangement of magnetic field lines \citep{Arnab}. Since magnetic reconnections are dissipative processes, their onset is due to the generation of small scales as a consequence of the large-scale dynamics, which ultimately increases the magnetic field gradient and thereby renders the plasma intermittently diffusive. The small scales may naturally occur as current sheets (CSs) \citep{ParkerECS}, magnetic nulls \citep{Parnell96,Ss2020} and quasi-separatrix layers (QSLs) \citep{Demoulin, avijeet2020}, or can develop spontaneously during the evolution of the magnetofluid. Such spontaneous developments (owing to discontinuities in the magnetic field) are expected from Parker's magnetostatic theorem \citep{ParkerECS} and have also been established numerically by MHD simulations \citep{Ss2020, DKRB, SKRB, Sanjay2016, SK2017, avijeet2017, avijeet2018, Ss2019, Sanjay2021}. Identification of the small (viz. the dissipation) scale depends on the specific physical system under consideration. For example, the length scale at which the reconnection occurs is found to be $L_{\eta}\equiv\sqrt{\tau_{d}\eta}\approx 32$~m, based on $\eta\approx1$ m$^2$s$^{-1}$ and the magnetic diffusion time scale $\tau_{d}$ approximated by the impulsive rise time of hard X-ray flux $\approx 10^3$ s \citep{PF200} during a flare. Consequently, the estimated ion inertial length scale $\delta_i\approx 2.25$ m in the solar corona \citep{PF200} suggests that the order of the dissipation term, $1/S\approx 10^{-5}$ (approximated with $L_{\eta}$), is smaller than the order of the Hall term, $\delta_i/L_\eta\approx 10^{-2}$, in the standard dimensionless induction equation \citep{Westerberg07, 2021ApJ...906..102B} \begin{equation} \label{inducresist} \frac{{\partial\bf{B}}}{\partial t} = \nabla\times \left({\bf{v}}\times{\bf{B}}\right) -\frac{1}{S}\nabla\times{\bf{J}} -\frac{\delta_i}{L_\eta}\nabla\times\left({\bf{J}}\times{\bf{B}}\right)~, \end{equation} where ${\bf{J}}(=\nabla\times{\bf{B}})$ and ${\bf{v}}$ are the volume current density and the plasma flow velocity, respectively.
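For a quick numerical check of the scales quoted above, the short sketch below evaluates the dissipation length $L_\eta$ and the Hall coefficient $\delta_i/L_\eta$ from the same estimates of $\eta$, $\tau_d$, and $\delta_i$; it is included only as an illustrative back-of-the-envelope calculation.
\begin{verbatim}
# Back-of-the-envelope check of the dissipation scale and the Hall
# coefficient; eta, tau_d and delta_i follow the estimates in the text.

eta = 1.0        # magnetic diffusivity [m^2 s^-1]
tau_d = 1.0e3    # diffusion time ~ impulsive hard X-ray rise time [s]
delta_i = 2.25   # ion inertial length in the corona [m]

L_eta = (tau_d * eta) ** 0.5        # dissipation length scale [m]
hall_coeff = delta_i / L_eta        # coefficient of the Hall term

print(f"L_eta         = {L_eta:.1f} m")      # ~32 m
print(f"delta_i/L_eta = {hall_coeff:.3f}")   # a few times 10^-2
\end{verbatim}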
This difference in order of magnitude indicates the importance of the Hall term in the diffusive limit {\bf{\citep{BIRN, BhattacharjeeReview}}} of the solar coronal plasma, which further signifies that the HMHD can play a crucial role in coronal transients, as magnetic reconnections are their underlying mechanism. Importantly, the aforesaid activation of the Hall term only in the diffusive limit is crucial in setting up an HMHD-based numerical simulation, invoked later in the paper. Important insight into magnetic reconnection can be gained by casting (\ref{inducresist}) in the absence of dissipation as \begin{equation} \label{inducresist1} \frac{{\partial\bf{B}}}{\partial t} = \nabla\times \left({\bf{w}}\times{\bf{B}}\right)~, \end{equation} \noindent following \citet{hornig-schindler}. The velocity ${\bf{w}}={\bf{v}}-(\delta_i/L_\eta){\bf{J}}$, which is also the electron fluid velocity, conserves magnetic flux \citep{schindler} and topology \citep{hornig-schindler} since field lines are tied to it. Consequently, field lines slip out from the fluid parcels advecting with velocity {\bf{v}}, to which the lines are frozen in ideal MHD. Importantly, the resulting breakdown of the flux freezing is localized to the region where the current density is large and the Hall term is effective. Because of the slippage, two fluid parcels do not remain connected with the same field lines over time---a change in field line connectivity. Quoting \citet{schindler}, such localized breakdown of flux freezing along with the resulting change in connectivity can be considered as the basis of reconnection \citep{axford}. Additional slippage of field lines occurs in the presence of the dissipation term, but with a change in magnetic topology. The present paper extensively relies on this interpretation of reconnection as the slippage of magnetic field lines and the resulting change in magnetic connectivity. The importance of HMHD is by no means limited to coronal transients. For example, HMHD is important in the Earth's magnetosphere, particularly at the magnetopause and the magnetotail where CSs are present \citep{Mozer2002}. Generally, the HMHD is expected to support faster magnetic reconnections, even though the Hall term in the induction equation does not directly affect the dissipation rate of magnetic energy and helicity \citep{PF200, chenshi}. The faster reconnection may be associated with a more effective slippage of field lines in HMHD compared to the resistive MHD, compatible with the arguments presented earlier. Nevertheless, these unique properties of the HMHD are expected to bring subtle changes in the dynamical evolution of the plasma, particularly at the small scales dominated by magnetic reconnections, presumably leading to changes at the large scales as a consequence. Such subtle changes were found in the recent HMHD simulation \citep{2021ApJ...906..102B}, performed by extending the computational model EULAG-MHD \citep{PiotrJCP} to include the Hall effects. Notably, the faster reconnection compared to MHD led to the breakage of a magnetic flux rope generated from analytically constructed initial bipolar magnetic field lines \citep{Sanjay2016}. In turn, the flux rope breakage resulted in the generation of magnetic islands, as theorized by \citet{Shibata}. Clearly, it is compelling to study the HMHD evolution in a more realistic scenario, with the initial magnetic field obtained from a solar magnetogram.
To attain this objective, we select the recently reported active region (AR) NOAA 12734 \citep{2021Joshi}, which produced a C1.3 class flare. In the absence of reliable direct measurements of the coronal magnetic field, several extrapolation models such as the nonlinear force-free field (NLFFF) \citep{2008Wglman, 2012WglmnSakurai} and the non-force-free field (non-FFF) \citep{HuDas08, Hu2010} have been developed to construct the coronal magnetic field using photospheric magnetograms. The NLFFF is the standard, and recent data-based MHD simulations initialized with it have been reasonably successful in simulating the dynamics of various coronal transients \citep{2013Jiang, 2014NaturAm, 2014Innoue, 2016Savcheva}. However, NLFFF extrapolations require the photosphere to be treated as force-free, while it is actually not \citep{Gary}. Hence, a ``preprocessing technique'' is usually employed to minimize the Lorentz force on the photosphere in order to provide a boundary condition suitable for NLFFF extrapolations \citep{2006SoPhWgl, 2014SoPhJiang}, thereby compromising the realism. Recently, the non-force-free-field (non-FFF) model, based on the principle of the minimum energy dissipation rate \citep{bhattaJan2004, bhattaJan2007}, has emerged as a plausible alternative to the force-free models \citep{HuDas08, Hu2010, 2008ApJHu}. In the non-FFF model, the magnetic field \textbf{B} satisfies the double-curl-Beltrami equation \citep{MahajanYoshida}, and the corresponding Lorentz force on the photosphere is non-zero while it decreases to small values at coronal heights \citep{avijeet2018, Ss2019, avijeet2020}---concurring with the observations. In this paper, we use the non-FFF extrapolation \citep{Hu2010} to obtain the coronal magnetic field from the photospheric vector magnetogram obtained with the Helioseismic and Magnetic Imager (HMI) \citep{HMI} onboard the Solar Dynamics Observatory (SDO) \citep{SDO}. The paper is organized as follows. Section \ref{obs} describes the flaring event in AR NOAA 12734, Section \ref{extrapolation} presents the magnetic field line morphology of AR NOAA 12734 along with the preferable sites for magnetic reconnection, such as QSLs, a 3D null point, and a null line, found from the non-FFF extrapolation. Section \ref{simulation-results} focuses on the numerical model, the numerical set-up, and the evolution of the magnetic field lines obtained from the extrapolation, along with their realizations in observations. Section \ref{summary} highlights the key findings. \section{Salient features of the C1.3 class flare in AR NOAA 12734} \label{obs} The AR NOAA 12734 produced an extended C1.3 class flare on March 08, 2019 \citep{2021Joshi}. The impulsive phase of the flare started at 03:07 UT, as reported in Figure 3 of \citet{2021Joshi}, which shows the X-ray flux in the 1-8 {\AA} and 0.5-4 {\AA} bands detected by the Geostationary Operational Environmental Satellite (GOES) \citep{Gracia}. The flux exhibits two subsequent peaks after the onset of the flare, one around 03:19 UT and another roughly around 03:38 UT. \citet{2021Joshi} suggested that the eruptive event took place in a coronal sigmoid with two distinct stages of energy release. Additional observations using the multi-wavelength channels of the Atmospheric Imaging Assembly (AIA) \citep{AIA} onboard SDO are listed below to highlight important features pertaining to the simulations reported in this paper. Figure \ref{observations} illustrates a spatio-temporal observational overview of the event.
Panel (a) shows the remote semicircular brightening (C1) prior to the impulsive phase of the flare (indicated by the yellow arrow). Panels (b) to (d) indicate the flare by yellow arrow and the eruption by the white arrow in the 94 {\AA}, 171 {\AA}, and 131 {\AA} channels respectively. Notably, the W-shaped brightening appears in panels (b) to (d) along with the flare in different wavelength channels of SDO/AIA. Panel (e) shows the circular structure of the chromospheric material (C2) during the impulsive phase of the flare. It also highlights the developed W-shaped flare ribbon (enclosed by the white box) which has a tip at the center (marked by the white arrow). Panel (f) depicts the post-flare loops in 171 {\AA} channel, indicating the post-flare magnetic field line connectivity between various negative and positive polarities on the photosphere. \section{non-FFF Extrapolation of the AR NOAA 12734} \label{extrapolation} As stated upfront, the non-FFF extrapolation technique proposed by \citet{HuDas08} and based on the minimum dissipation rate theory (MDR) \citep{bhattaJan2004, bhattaJan2007} is used to obtain the coronal magnetic field for the AR NOAA 12734. The extrapolation essentially solves the equation \begin{eqnarray} \label{tc} \nabla\times\nabla\times\nabla\times \textbf{B}+a_1 \nabla\times\nabla\times \textbf{B}+b_1 \nabla\times\textbf{B}=0~, \end{eqnarray} where parameters $a_1$ and $b_1$ are constants. Following \citep{Hu2010}, the field is constructed as \begin{eqnarray} \textbf{B}=\sum_{i=1,2,3} \textbf{B}_{i}~,~~ \nabla\times \textbf{B}_{i} =\alpha_{i} \textbf{B}_{i}~, \end{eqnarray} where $\alpha_i$ is constant for a given $\textbf{B}_i$. The subfields $\textbf{B}_1$ and $\textbf{B}_3$ are linear force-free having $\alpha_1\neq\alpha_3$, whereas $\textbf{B}_2$ is a potential field with $\alpha_2=0$. An optimal pair of $\alpha=\{\alpha_1,\alpha_3\}$ is iteratively found by minimizing the average deviation between the observed transverse field ($\textbf{B}_t$) and the computed ($\textbf{b}_t$) transverse field, quantified by \begin{equation} \label{En} E_n=\left(\sum_{i=1}^{M} |\textbf{B}_{t,i}-\textbf{b}_{t,i}|\times |\textbf{B}_{t,i}|\right)/\left(\sum_{i=1}^{M}|\textbf{B}_{t,i}|^2\right)~, \end{equation} on the photosphere. Here, $M=N^2$ represents the total number of grid points on the transverse plane. The grid points are weighted with respect to the strength of the observed transverse field to minimize the contribution from weaker fields, see \citep{HuDas08, Hu2010} for further details. Since (\ref{tc}) involves the evaluation of the second-order derivative, $(\nabla\times\nabla\times \textbf{B})_z=-(\nabla^2 \textbf{B})_z$ at $z=0$, evaluation of \textbf{B} requires magnetograms at two different values of $z$. In order to work with the generally available single-layer vector magnetograms, an algorithm was introduced by \cite{Hu2010} that involves additional iterations to successively fine-tune the potential subfield $\textbf{B}_2$. The system is reduced to second order by taking initial guess $\textbf{B}_2=0$, which makes it easier to determine the boundary condition for $\textbf{B}_1$ and $\textbf{B}_3$. If the calculated value of $E_n$ turns out unsatisfactory---i.e., overly large---then a potential field corrector to $\textbf{B}_2$ is calculated from the difference in the observed and computed transverse fields and subsequently summed with the previous $\textbf{B}_2$ to further reduce $E_n$. 
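For concreteness, the following minimal sketch evaluates the error metric $E_n$ of (\ref{En}) for given observed and computed transverse fields on an $N\times N$ grid; the array names and the toy data are placeholders, not part of the actual extrapolation code.
\begin{verbatim}
# Minimal sketch of the field-weighted error metric E_n:
# E_n = sum(|Bt_obs - bt_mod| * |Bt_obs|) / sum(|Bt_obs|^2),
# with Bt_obs and bt_mod holding the two transverse components on an
# N x N photospheric grid. Toy random data are used for illustration.

import numpy as np

def transverse_error(Bt_obs, bt_mod):
    diff = np.linalg.norm(Bt_obs - bt_mod, axis=-1)   # |B_t,i - b_t,i|
    mag = np.linalg.norm(Bt_obs, axis=-1)             # |B_t,i|
    return (diff * mag).sum() / (mag ** 2).sum()

rng = np.random.default_rng(0)
Bt_obs = rng.normal(size=(64, 64, 2))                  # observed (Bx, By)
bt_mod = Bt_obs + 0.1 * rng.normal(size=Bt_obs.shape)  # computed field
print(transverse_error(Bt_obs, bt_mod))                # ~0.1 for this toy
\end{verbatim}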
Notably, recent simulations initiated with the non-FFF model have successfully explained the circular ribbon-flares in AR NOAA 12192 \citep{avijeet2018} and AR NOAA 11283 \citep{avijeet2020} as well as a blowout jet in AR NOAA 12615 \citep{Ss2019}, thus validating the credibility of the non-FFF approach. The vector magnetogram is selected for 2019 March 08, at 03:00 UT ($\approx$ 7 minutes prior to the start of the flare). The original magnetogram cutout, of dimensions 342$\times$195 pixels with a pixel resolution of 0.5 arcsec per pixel and an extent of $124~ \rm Mm\times 71$ Mm, is taken from the ``hmi.sharp$\_$cea$\_$720s'' series and ensures an approximate magnetic flux balance at the bottom boundary. To optimize the computational cost with the available resources, the original field is re-scaled and non-FFF-extrapolated over a volume of 256$\times$128$\times$128 pixels while keeping the physical extent the same and preserving all magnetic structures throughout the region. The reduction, in effect, changes the conversion factor to 1 pixel $\approx 0.484$ Mm along $x$ and $\approx 0.554$ Mm along the $y$ and $z$ directions of the employed Cartesian coordinate system. Panel (a) of Figure~\ref{lfcombnd} shows $E_n$ in the transverse field, defined in (\ref{En}), as a function of the number of iterations. It shows that $E_n$ tends to saturate at a value of $\approx$0.22. Panel (b) of Figure \ref{lfcombnd} shows the logarithmic decay of the normalized horizontally averaged magnetic field, current density, and Lorentz force with height. It is clear that the Lorentz force is appreciable on the photosphere but decays rapidly with height, agreeing with the general perception that the corona is force-free while the photosphere is not \citep{Liu2020, Yalim20}. Panel (c) shows that the Pearson-r correlation between the extrapolated and observed transverse fields is $\approx$0.96, implying a strong correlation. The direct volume rendering of the Lorentz force in panel (d) also reveals a sharp decay of the Lorentz force with height, expanding on the result of panel~(b). To facilitate the description, Figure \ref{regions}~(a) shows the SDO/AIA 304 {\AA} image at 03:25 UT, where the flare ribbon brightening has been divided into four segments marked as B1-B4. Figure \ref{regions}~(b) shows the initial global magnetic field line morphology of AR NOAA 12734, partitioned into four regions R1-R4, corresponding to the flare ribbon brightening segments B1-B4. The bottom boundary of panel (b) comprises $B_z$ maps in grey scale, where the lighter shade indicates positive-polarity regions and the darker shade marks negative-polarity regions. The magnetic field line topologies and structures belonging to a specific region and contributing to the flare are documented below. \bigskip \noindent {\bf{Region R1:}} The top-down view of the global magnetic field line morphology is shown in panel (a) of Figure~\ref{region1}. To help locate QSLs, the bottom boundary is overlaid with the $\log Q$ map of the squashing factor $Q$ \citep{Liu} in all panels of the figure. The distribution of high $Q$ values along with $B_z$ on the bottom boundary helps in identifying differently connected regions. Regions with large $Q$ are prone to the onset of slipping magnetic reconnections \citep{Demoulin}. Foot points of the magnetic field lines constituting QSL1 and QSL2 trace along the high $Q$ values near the bottom boundary. QSL1, involving the magnetic field lines Set I (green) and Set II (maroon), is shown in panel (b).
In particular, the magnetic field lines of Set I (green) extend higher into the corona, forming the largest loops in R1. Panel~(c) illustrates a closer view of QSL2 (multicolored) and the flux rope (black) beneath, situated between the positive and negative polarities P1, P2 and N1, respectively. In panel~(d), the flux rope (constituted by the twisted black magnetic field lines) is depicted in a side view. The twist value $T_w$ \citep{Liu} in the three vertical planes along the cross section of the flux rope is also overlaid. Notably, the twist value is 2 at the center of the rope and decreases outward (cf. the vertical plane in the middle of the flux rope in panel (d)). \bigskip \noindent {\bf{Region R2:}} Figure~\ref{R2R3R4exp} (a) shows the side view of a 3D null point geometry of magnetic field lines and the bottom boundary $B_z$ overlaid with $\log Q$ ranging between 5 and 10. Panel~(b) depicts an enlarged view of the 3D null location, marked black. The height of the null is found to be $\approx$ 3~Mm from the photosphere. The null is detected using the bespoke procedure \citep{DKRB, Ss2020} that approximates the Dirac delta on the grid as \begin{equation} \label{ndefine} n(B_i) = \exp\big[-\sum_{i=x,y,z}{(B_{i} -B_{o})^2}/{d_{o}^2}\big]~, \end{equation} where the small constants $B_o$ and $d_o$ correspond to the isovalue of $B_i$ and the Gaussian spread. The function $n(B_i)$ takes significant values only if $B_i\approx 0~\forall i$, whereupon a 3D null is the point where the three isosurfaces having isovalues $B_i=B_o$ intersect.\bigskip \noindent {\bf{Region R3:}} A side view of the magnetic field line morphology in region R3 is shown in Figure \ref{R2R3R4exp} (c), where the yellow surface corresponds to $n=0.9$. Panel~(d) highlights a ``fish-bone-like'' structure, similar to the schematic in Figure 5 of \citet{WangFB}. To show that in the limiting case the $n=0.9$ surface reduces to a null line, we plot the corresponding contours in the range $0.6\leq n \leq 0.9$ on three pre-selected planes highlighted in panel (e). The size reduction of the contours with increasing $n$ indicates that the surface converges to a line. Such null lines are also conceptualized as favorable reconnection sites \citep{WangFB}. \bigskip \noindent {\bf{Region R4:}} Figure \ref{R2R3R4exp} (f) shows the magnetic field lines relevant to the plasma rotation in B4. Notably, the null line from R3 intrudes into R4, and the extreme left plane in R3 (Figure \ref{R2R3R4exp} (e)) is also shared by R4. \section{HMHD and MHD simulations of AR NOAA 12734} \label{simulation-results} \subsection{Governing Equations and Numerical Model} In the spirit of our earlier related works \citep{avijeet2018, Ss2019, avijeet2020}, the plasma is idealized to be incompressible and thermodynamically inactive, as well as explicitly nonresistive. While this relatively simple idealization is naturally limited, it exposes the basic dynamics of magnetic reconnections unobscured by the effects of compressibility and heat transfer. Albeit the latter are important for coronal loops \citep{2002ApJ...577..475R}, they do not directly affect the magnetic topology---the focus of this paper. Historically rooted in classical hydrodynamics, such idealizations have a proven record in theoretical studies of geo/astrophysical phenomena \citep{Rossby38, 1991ApJ...383..420D, RBCLOW, 2021ApJ...906..102B}. Inasmuch as their cognitive value depends on an a posteriori validation against the observations, the present study offers yet another opportunity to do so.
The Hall forcing has been incorporated \citep{2021ApJ...906..102B} in the computational model EULAG-MHD \citep{PiotrJCP} to solve the dimensionless HMHD equations, \begin{eqnarray} \label{momtransf} \frac{\partial{\bf v}}{\partial t} +({\bf v}\cdot \nabla){\bf v}&=& -\nabla p + (\nabla\times{\bf B})\times{\bf B} + \frac{1}{R_F^A}\nabla^2 {\bf v}~,\\ \label{induc} \frac{\partial{\bf B}}{\partial t}&=& \nabla\times(\textbf{v}\times{\bf B}) -d_H\nabla\times((\nabla\times{\bf B})\times{\bf B})~,\\ \label{incompv} \nabla\cdot {\bf v}&=& 0~, \\ \label{incompb} \nabla\cdot {\bf B}&=& 0~, \end{eqnarray} where $R_F^A=(v_A L/\nu)$, $\nu$ being the kinematic viscosity---is an effective fluid Reynolds number, having the plasma speed replaced by the Alfv\'en speed $v_A$. Hereafter $R_F^A$ is denoted as fluid Reynolds number for convenience. The transformation of the dimensional quantities (expressed in cgs-units) into the corresponding non-dimensional quantities, \begin{equation} \label{norm} {\bf{B}}\longrightarrow \frac{{\bf{B}}}{B_0}, \quad{\bf{x}}\longrightarrow \frac{\bf{x}}{L_0}, \quad{\bf{v}}\longrightarrow \frac{\bf{v}}{v_A}, \quad t \longrightarrow \frac{t}{\tau_A}, \quad p \longrightarrow \frac{p}{\rho_0 {v_{A}}^2}~, \end{equation} assumes arbitrary $B_0$ and $L_0$ while the Alfv\'en speed $v_A \equiv B_0/\sqrt{4\pi\rho_0}$. Here $\rho_0$ is a constant mass density, and $d_H$ is the Hall parameter. In the limit of $d_H=0$, (\ref{momtransf})-(\ref{incompb}) reduce to the MHD equations \citep{avijeet2018}. The governing equations (\ref{momtransf})-(\ref{incompb}) are numerically integrated using EULAG-MHD---a magnetohydrodynamic extension \citep{PiotrJCP} of the established Eulerian/Lagrangian comprehensive fluid solver EULAG \citep{Prusa08} predominantly used in atmospheric research. The EULAG solvers are based on the spatio-temporally second-order-accurate nonoscillatory forward-in-time advection scheme MPDATA (for {\it multidimensional positive definite advection transport algorithm}) \citep{Piotrsingle}. Importantly, unique to MPDATA is its widely documented dissipative property mimicking the action of explicit subgrid-scale turbulence models wherever the concerned advective field is under-resolved; the property known as implicit large-eddy simulations (ILES) \citep{Grinstein07}. In effect, magnetic reconnections resulting in our simulations dissipate the under-resolved magnetic field along with other advective field variables and restore the flux freezing. These reconnections being intermittent and local, successfully mimic physical reconnections. \subsection{Numerical Setup} The simulations are carried out by mapping the physical domain of $256\times128\times128$ pixels on the computational domain of $x\in\{-1, 1\}$, $y\in\{-0.5,0.5\}$, $z\in\{-0.5,0.5\}$ in a Cartesian coordinate system. The dimensionless spatial step sizes are $\Delta x=\Delta y=\Delta z \approx 0.0078$. The dimensionless time step is $\Delta t=5\times 10^{-4}$, set to resolve whistler speed---the fastest speed in incompressible HMHD. The rationale is briefly presented in the Appendix \ref{appnd}. The corresponding initial state is motionless ($\textbf{v}=0$) and the initial magnetic field is provided from the non-FFF extrapolation. The non-zero Lorentz force associated with the extrapolated field pushes the magnetofluid to initiate the dynamics. 
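As a rough consistency check of the chosen time step, the sketch below compares $\Delta t$ with a whistler-based CFL estimate, assuming the usual incompressible-HMHD dispersion $\omega\sim d_H k^2|{\bf B}|$ (phase speed $d_H k|{\bf B}|$ in Alfv\'en units) and anticipating the value $d_H=0.004$ adopted below; the estimate is indicative only.
\begin{verbatim}
# Indicative whistler CFL check for the set-up described above.
# Assumes phase speed ~ d_H * k * |B| with |B| ~ 1 in Alfven units;
# d_H = 0.004 anticipates the Hall run described later in the text.

import math

dx = 2.0 / 256          # dimensionless grid spacing (~0.0078)
dt = 5.0e-4             # dimensionless time step
d_H = 0.004             # Hall parameter of the HMHD run

k_max = math.pi / dx                   # largest resolved wavenumber
v_whistler = d_H * k_max               # fastest whistler phase speed
print(f"whistler speed     ~ {v_whistler:.2f} v_A")
print(f"whistler CFL limit ~ {dx / v_whistler:.1e}")   # ~5e-3 > dt
\end{verbatim}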
Since the maximal variation of the magnetic flux through the photosphere is only 2.28$\%$ of its initial value during the flare (not shown), $B_z$ at the bottom boundary (at $z=0$) is kept fixed throughout the simulation, while all other boundaries are kept open. For the velocity, all boundaries are set open. The mass density is set to $\rho_0=1$. The fluid Reynolds number is set to $500$, which is roughly two orders of magnitude smaller than its coronal value of $\approx 25000$ (calculated using the kinematic viscosity $\nu=4\times 10^9 ~\rm m^2s^{-1}$ \citep{Aschwanden} in the solar corona). Without any loss of generality, the reduction in $R_F^A$ can be envisaged as causing a reduction in the computed Alfv\'en speed, $v_A|_\text{computed} \approx 0.02\times v_A|_\text{corona}$, where $L$ for the computational and coronal length scales is set to 71 Mm and 100 Mm, respectively. This diminished Alfv\'en speed reduces the requirement of computational resources and also relates the simulation time to the observation time. The results presented herein pertain to a run of 1200$\Delta t$, which, along with the normalizing $\tau_A\approx 3.55\times 10^3$ s, roughly corresponds to an observation time of $\approx$ 35 minutes. For ease of reference in comparison with observations, we present the time in units of 0.005$\tau_A$ (which is 17.75 s) in the discussions of the figures in subsequent sections. Although the idealization of the coronal plasma with a reduced Reynolds number is inconsequential here---since the focus is a comparison of the MHD and HMHD evolution---we believe the above rationale merits further contemplation. Undeniably, such a coronal plasma is not a reality. Nevertheless, the reduced $R_F^A$ does not affect the reconnection or its consequences, but slows down the dynamics between two such events and, importantly, reduces the computational cost, making data-based simulations realizable even with reasonable computing resources. A recent work by \citet{JiangNat} used a homologous approach toward simulating a realistic and self-consistent flaring region. In the present simulations, all parameters are identical for the MHD and the HMHD except for $d_H$, set to 0 and 0.004, respectively. The value 0.004 is motivated by recognizing that the ILES dissipation models intermittent magnetic reconnections at ${\mathcal O}(\parallel\Delta{\bf x}\parallel)$ length scales. Consistent with the thesis put forward in the Introduction, we specify an appreciable Hall coefficient as $d_H = 0.5 \Delta z/L \approx 0.004$, where $L=1$ is the smallest extent of the computational volume and $\Delta y= \Delta z \approx 0.0078$ are the dissipation scales because of the ILES property of the model. Correspondingly, the value is also at the lower bound of the pixel or scale order approximation and, in particular, an order of magnitude smaller than its coronal value valid at the actual dissipation scale. An important practical benefit of this selection is the optimization of the computational cost while keeping the magnetic field line dynamics tractable. Importantly, with the dissipation and Hall scales being tied, an increased current density at the dissipation scale introduces additional slippage of field lines in HMHD over MHD (due to the Hall term) and may be responsible for the more effective and faster reconnections found in the Hall simulation reported below. \subsection{Comparison of the HMHD and MHD simulations} The simulated HMHD and MHD dynamics leading to the flare show unambiguous differences.
This section documents these differences by comparing methodically simulated evolution of the magnetic structures and topologies in the AR NOAA 12734---namely, the flux rope, QSLs, and null points---identified in the extrapolated initial data in the regions R1-R4. \subsubsection{Region R1} The dynamics of region R1 are by far the most complex among the four selected regions. To facilitate future reference as well as to outline the organization of the discussion that follows, Table~\ref{tab:r1} provides a brief summary of our findings---in a spirit of theses to be proven by the simulation results. \begin{table} \caption{Salient features of magnetic field lines dynamics in R1} \label{tab:r1} \begin{tabular}{ |p{3cm}|p{5.5cm}|p{5.5cm}| } \hline Magnetic field lines structure& HMHD & MHD \\ [4ex] \hline QSL1 & Fast reconnection followed by a significant rise of loops, eventually reconnecting higher in the corona. &Slow reconnection followed by a limited rise of loops. \\ [6ex] \hline QSL2 & Fast reconnection causing the magnetic field lines to entirely disconnect from the polarity P2. & Due to slow reconnection magnetic field lines remain connected to P2. \\ [6ex] \hline Flux rope &Fast slipping reconnection of the flux-rope foot points, followed by the expansion and rise of the rope envelope. & Slow slipping reconnection and rise of the flux-rope envelope; the envelope does not reach the QSL1. \\ [6ex] \hline \end{tabular} \end{table} \bigskip The global dynamics of magnetic field lines in region R1 is illustrated in Figure~\ref{fullR1}; consult Figure~\ref{region1} for the initial condition and terminology. The snapshots from the HMHD and MHD simulations are shown in panels (a)-(d) and (e)-(f), respectively. In panels (a) and (b), corresponding to $t=19$ and $t=46$, the foot points of magnetic field lines Set II (near P2, marked maroon) exhibit slipping reconnection along high values of the squashing factor $Q$ indicated by black arrows. Subsequently, between $t=80$ and 81 in panels (c) and (d), the magnetic field lines Set II rise in the corona and reconnect with magnetic field lines Set I to change connectivity. The MHD counterpart of the slipping reconnection in panels (e) and (f), corresponds to magnetic field lines Set II between t=19 and t=113. It lags behind the HMHD displays, thus implying slower dynamics. Furthermore, the magnetic field lines Set II, unlike for the HMHD, do not reach up to the magnetic field lines Set I constituting QSL1 and hence do not reconnect. A more informative visualization of the highlighted dynamics is supplemented in an online animation. The decay index is calculated for each time instant for both the simulations and is found to be less than 1.5 above the flux rope, indicating an absence of the torus instability \citep{Torok}. For more detail, Figures~\ref{R1QSL} and \ref{ropeHMHD-MHD} illustrate evolution of QSL2 and flux rope separately. Figure~\ref{R1QSL} panels (a)-(b) and (c)-(d) show, respectively, the instants from the HMHD and MHD simulations of QSL2 between P1, P2 and N1. The HMHD instants show magnetic field lines that were anchored between P2 and N1 at $t=10$ have moved to P1 around t=102, marked by black arrows in both panels. The magnetic field lines anchored at P2 moved to P1 along the high $Q$ values---signifying the slipping reconnection. The MHD instants in panels (c)-(d) show the connectivity changes of the violet and white colored magnetic field lines. 
The white field line initially connected P1 and N1, whereas the violet field line connected P2 and N1. As a result of reconnection along the QSL, the white field line changed its connectivity from P1 to P2 and the violet field line changed its connectivity from P2 to P1 (marked by black arrows). Notably, in contrast to the HMHD evolution, not all magnetic field lines initially anchored in P2 change their connectivity from P2 to P1 during the MHD evolution, indicating slower dynamics. The flux rope was introduced in panels (c) and (d) of Figure~\ref{region1}, below QSL2 and in an enlarged view, respectively. Its HMHD and MHD evolutions, along with the twists on three different vertical cross sections, are shown in panels (a)-(f) and (g)-(i) of Figure~\ref{ropeHMHD-MHD}, respectively. The magnetic field lines constituting the rope rise substantially higher during the HMHD evolution as a result of slipping reconnection along the high $Q$ values in panels (c)-(f). In panel (c) at $t=32$, the foot points of the rope that are anchored on the right side (marked by the black arrow) change their connectivity from one high $Q$ regime to another in panel (d) at $t=33$; i.e., the foot points on the right have moved to the left side (marked by the black arrow). Afterwards, the magnetic field lines rise because of the continuous slipping reconnection, as evidenced in panels (e) to (f) and the supplemented animation. Comparing panels (a) with (g) at $t=10$ and (c) with (h) at $t=32$, we note that the twist value $T_w$ is higher in the HMHD simulation. Panels (h)-(i) highlight the displaced foot points of the flux rope due to slipping reconnection at $t=32$ and $t=120$ (cf. black arrow). The rope is preserved throughout the HMHD and MHD simulations. The rise and expansion of the flux-rope envelope owing to slipping reconnection are remarkable in the HMHD simulation. \citet{dudik} have already shown such a flux-rope reconnection along a QSL in a J-shaped current region, with slipping reconnection causing the flux rope to form a sigmoid (an S-shaped hot channel observed in EUV images of SDO/AIA), followed by its rise and expansion. Further insight is gained by overlaying the flux rope evolution shown in Figure \ref{ropeHMHD-MHD} with the direct volume rendering of $|{\bf J}|/|{\bf B}|$ (Figures \ref{ropecs} and \ref{ropecsmhd}) as a measure of the magnetic field gradient for the HMHD and MHD simulations. In the HMHD case, the appearance of large values of $|{\bf J}|/|{\bf B}|>475$ inside the rope (panels (a) to (c)) and at the foot points on the left of the rope (panels (d) to (e)) is apparent. The development of the large $|{\bf J}|/|{\bf B}|$ is indicative of reconnection within the rope. In contrast, the MHD simulation lacks such high values of $|{\bf J}|/|{\bf B}|$ in the same time span (panels (a)-(b)) and the field lines show no slippage---agreeing with the proposal that large currents magnify the Hall term, resulting in more effective slippage of field lines. \subsubsection{Region R2} To compare the simulated magnetic field line dynamics in region R2 with the observed tip of the W-shaped flare ribbon B2 (Figure \ref{extrapolation} (a)) during the HMHD and MHD evolution, we present the instants from both simulations at $t=70$ in panels (a) and (b) of Figure \ref{R2comp}, respectively. Importantly, the lower spine remains anchored to the bottom boundary during the HMHD simulation (evident from the supplemented animation along with Figure \ref{R2comp}).
Further, Figure \ref{R2comp-CS} shows the evolution of the lower spine along with $|\textbf{J}|/|\textbf{B}|$ on the bottom boundary for the HMHD (panels (a) to (d)) and MHD (panels (e) to (h)) cases. In the HMHD case, noteworthy is the slipping motion of the lower spine (marked by the black arrows) tracing the $|\textbf{J}|/|\textbf{B}|>350$ regions on the bottom boundary (panels (a) to (b)). In contrast, in the MHD case such high values of $|\textbf{J}|/|\textbf{B}|$ are absent on the bottom boundary---suggesting that the slippage of the field lines on the bottom boundary is less effective than in the HMHD. The finding is in agreement with the idea of enhanced slippage of field lines due to high current densities, as conceptualized in the Introduction. The anchored lower spine provides a path for the plasma to flow downward to the brightening segment B2. In the actual corona, such flows result in flare brightening \citep{Benz}. In contrast, the lower spine gets completely disconnected from the bottom boundary (Figure \ref{R2comp} (b)) in the MHD simulation, hence failing to explain the tip of the W-shaped flare ribbon in B2. The anchored lower spine in the HMHD simulation is caused by a complex series of magnetic field line reconnections at the 3D null and along the QSLs in R2, as depicted in the animation. \subsubsection{Region R3} HMHD and MHD simulations of the magnetic field line dynamics around the null line are shown in Figures~\ref{R3HMHD} and \ref{R3MHD}, respectively. Figure~\ref{R3HMHD} shows the blue magnetic field lines prior to and after the reconnections (indicated by black arrows) between $t=4$ and 5 (panels (a)-(b)), $t=52$ and 53 (panels (c)-(d)), and $t=102$ and 103 (panels (e)-(f)) during the HMHD simulation. Figure \ref{R3MHD} shows the same blue magnetic field lines prior to and after the reconnections (indicated by black arrows) between $t=12$ and 13 (panels (a)-(b)), $t=59$ and 60 (panels (c)-(d)), and $t=114$ and 115 (panels (e)-(f)) during the MHD simulation. A comparison of panels (a)-(f) of Figure \ref{R3HMHD} with the same panels of Figure \ref{R3MHD} reveals earlier reconnections of the blue magnetic field lines in the HMHD simulation. In both figures, the green velocity vectors on the right represent the local plasma flow. They become aligned downward along the foot points of the fan magnetic field lines as reconnection progresses. Consequently, the plasma flows downward and impacts the denser and cooler chromosphere to give rise to the brightening in B3. The velocity vectors pointing upward represent a flow toward the null line. The vertical $yz$-plane passing through the cross section of the null-line surface (also shown in Figure \ref{R2R3R4exp} (d)) in all the panels of Figures \ref{R3HMHD} and \ref{R3MHD} shows the variation of $n$ with time. It is evident that the null is not destroyed throughout the HMHD and MHD evolution. The structural changes in the field lines caused by reconnection are near-identical for both simulations, indicating the inefficacy of the Hall term. This inefficacy is justifiable, as $|\textbf{J}|/|\textbf{B}|$ remains small ($\approx 10$; not shown) in both the HMHD and MHD evolution. \subsubsection{Region R4} The development of the circular motion of magnetic field lines in region R4 during the HMHD simulation is depicted in Figure \ref{lftcrclrmotion}.
It shows the global dynamics of the magnetic field lines in R4, with the inset images providing a zoomed view to highlight their circular motion. The bottom boundary shows $B_z$ in the main figure, while the inset images show the $z$-component of the plasma flow at the bottom boundary (on the $xy$-plane). The red vectors represent the plasma flow direction as well as its magnitude in all panels of Figure \ref{lftcrclrmotion}, where the anticlockwise pattern of the plasma flow is evident. The global dynamics highlight the reconnection of the loop anchored between the positive and negative polarities at $t=60$, as it gets disconnected from the bottom boundary in panels (c)-(d) of Figure \ref{lftcrclrmotion}. The animation accompanying Figure \ref{lftcrclrmotion} highlights an anticlockwise motion of the foot points in the same direction as the plasma flow, indicating that the field lines are frozen into the fluid. The trapped plasma may cause the rotating structure B4 in the observations (cf. Figure \ref{extrapolation} (a)). However, no such motion is present during the MHD evolution of the same magnetic field lines (not shown). An interesting feature noted in the animation is the clockwise slippage of field lines after the initial anticlockwise rotation. Further analysis of R4 using the direct volume rendering of $|\textbf{J}|/|\textbf{B}|$ is presented in Figure \ref{lftcrclrmotion-SV}. The figure shows that $|\textbf{J}|/|\textbf{B}|$ attains high values ($\ge225$, enclosed by the blue rectangles) within the rotating field lines from $t\approx86$ onward. This suggests that the slippage of field lines is, once again, related to the high magnetic field gradients. \par For completeness, we present snapshots of the overall magnetic field line morphology, including the magnetic structures and topology of regions R1, R2, R3, and R4 together, overlaid with 304 {\AA} and 171 {\AA} images from the HMHD and MHD simulations. Figure \ref{Tv304171} (a) shows an instant (at $t=75$) from the HMHD simulation where the topologies and magnetic structures in R1, R2, R3, and R4, plus the additionally drawn locust-colored magnetic field lines between R2 and R3, are shown collectively. It shows an excellent match of the magnetic field lines in R2 with the observed tip of the W-shaped flare ribbon at B2, which is pointed out by the pink arrow in panel (a). The foot points of the spine-fan geometry around the 3D null orient themselves in the same fashion as the observed tip of the W-shaped flare ribbon at B2, as seen in the 304 {\AA} channel of SDO/AIA. The rising loops indicated by the white arrow correspond to the same evolution as shown in Figure \ref{fullR1}. The corresponding overall magnetic field line morphology at the same time ($t=75$) during the MHD simulation, overlaid with the 304 {\AA} image, is given in Figure \ref{Tv304171} (b). Importantly, unlike the HMHD simulation, the MHD simulation does not account for the anchored lower spine and fan magnetic field lines of the 3D null at the center of B2. Also, the significant rise of the overlying maroon magnetic field lines and the circular motion of the material in B4 are captured in the HMHD simulation only. In panel (c), the magnetic field lines overlaid with the 171 {\AA} image show that the field lines higher up in the solar atmosphere resemble the post-flare loops during the HMHD evolution. Overall, the HMHD evolution seems to be in better agreement with the observations in comparison to the MHD evolution.
\section{Summary and Discussion} \label{summary} The paper compares data-based HMHD and MHD simulations using the flaring Active Region NOAA 12734 as a test bed. The importance of the HMHD stems from the realization that the Hall term in the induction equation cannot be neglected in the presence of magnetic reconnection---the underlying cause of solar flares. The event selected for the aforementioned comparison is the C1.3 class flare on March 08, 2019, around 03:19 UT. Although the event has been analyzed and reported in the literature, it is further explored here using the multi-wavelength observations from SDO/AIA. The identified important features are: an elongated extreme ultraviolet (EUV) counterpart of the eruption on the western side of the AR, a W-shaped flare ribbon, and a circular motion of cool chromospheric material on the eastern part. The magnetic field line dynamics near these features are utilized to compare the simulations. Notably, the simulations idealize the corona to have an Alfv\'en speed which is two orders of magnitude smaller than its typical value. Congruent with the general understanding, the Hall parameter is selected to tie the Hall dynamics to the dissipation scale $\mathcal{O} (\Delta \textbf{x})$, in the spirit of the ILES carried out in the paper. The magnetic reconnection here is associated with the slippage of magnetic field lines from the plasma parcels, effective at the dissipation scale due to the local enhancement of the magnetic field gradient. The same enhancement also amplifies the Hall contribution, presumably enhancing the slippage and thereby making the reconnection faster and more effective than in the MHD. The coronal magnetic field is constructed by extrapolating the photospheric vector magnetic field obtained from the SDO/HMI observations employing the non-FFF technique \citep{Hu2010}. The concentrated distribution of the Lorentz force on the bottom boundary and its decrease with height justify the use of the non-FFF extrapolation for the solar corona. The initial non-zero Lorentz force is also crucial in generating self-consistent flows that initiate the dynamics and cause the magnetic reconnections. Analyses of the extrapolated magnetic field reveal several magnetic structures and topologies of interest: a flux rope on the western part at the flaring location, a 3D null point along with a fan-spine configuration at the centre, and a ``fish-bone-like'' structure surrounding the null line on the eastern part of the AR. All of these structures are found to be co-spatial with the observed flare ribbon brightening. \par The HMHD simulation shows faster slipping reconnection of the flux rope foot points and the overlying magnetic field lines (constituting QSLs above the flux rope) at the flaring location. Consequently, the overlying magnetic field lines rise, eventually reaching higher up in the corona and reconnecting to provide a path for plasma to eject out. The finding is in agreement with the observed elongated EUV counterpart of the eruption on the western part of the AR. In contrast, such a significant rise of the flux rope and the overlying field lines, followed by reconnection higher up in the corona, is absent in the MHD simulation---signifying that the reconnection is slower compared to the HMHD. Intriguingly, the rise and expansion of a flux rope and overlying field lines owing to slipping reconnection on QSLs have also been modelled and observed in an earlier work by \citet{dudik}.
These are typical features of the ``standard solar flare model in 3D'', which allows for a consistent explanation of events that are not causally connected \citep{dudik}. It also advocates that null-points and true separatrices are not required for eruptive flares to occur---concurring with the results of this work. The HMHD evolution of the fan-spine configuration surrounding the 3D null point is in better agreement with the tip of the W-shaped flare ribbon at the centre of the AR. The lower spine and fan magnetic field lines remain anchored to the bottom boundary throughout the evolution, which can account for the plasma flowing downward after the reconnection and causing the brightening. In the MHD, by contrast, the lower spine gets disconnected and cannot account for the brightening. The reconnection dynamics around the null-line and the corresponding plasma flow direction are the same in the HMHD and the MHD simulations and agree with the observed brightening. Nevertheless, the reconnection occurs earlier in the HMHD. The HMHD evolution captures an anticlockwise circular motion of magnetic field lines in the left part of the AR, which is co-spatial with the location of the rotating chromospheric material on the eastern side of the AR. No such motion was found in the MHD simulation. Importantly, the simulations explicitly associate the generation of large magnetic field gradients with the HMHD compared to the MHD, resulting in faster and more efficient field line slippage because of the enhanced Hall term. Overall, the results documented in the paper show that the HMHD explains the flare brightening better than the MHD, prioritizing the requirement to include HMHD in future state-of-the-art data-based numerical simulations. \section{Acknowledgement} The simulations are performed using the 100TF cluster Vikram-100 at Physical Research Laboratory, India. We wish to acknowledge the visualization software VAPOR (\url{www.vapor.ucar.edu}) for generating relevant graphics. Q.H. and A.P. acknowledge partial support of NASA grants 80NSSC17K0016, 80NSSC21K1671, LWS 80NSSC21K0003 and NSF awards AGS-1650854 and AGS-1954503. This research was also supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262622, as well as through the Synergy Grant number 810218 (ERC-2018-SyG) of the European Research Council.
\section{Introduction}\label{sec:intro} Space provides a useful vantage point for monitoring large-scale trends on the surface of the Earth~\cite{manfreda2018use,albert2017using,yeh2020using}. Accordingly, numerous EO satellite missions have been launched or are being planned. Many EO satellites carry multispectral or hyperspectral sensors that measure the electromagnetic radiations emitted or reflected from the surface, which are then processed to form \emph{data cubes}. The data cubes are the valuable inputs to the EO applications. However, two thirds of the surface of the Earth is under cloud cover at any given point in time~\cite{jeppesen2019cloud}. In many EO applications, the clouds occlude the targets of interest and reduce the value of the data. In fact, many weather prediction tasks actually require clear-sky measurements~\cite{liu2020hyperspectral}. Dealing with cloud cover is part-and-parcel of practical EO processing pipelines~\cite{transon2018survey, li2019deep-ieee, paoletti2019deep, mahajan2020cloud, yuan2021review}. Cloud mitigation strategies include segmenting and masking out the portion of the data that is affected by clouds~\cite{griffin2003cloud,gomez-chova2007cloud}, and restoring the cloud-affected regions~\cite{li2019cloud,meraner2020cloud,zi2021thin} as a form of data enhancement. Increasingly, deep learning forms the basis of the cloud mitigation routines~\cite{li2019deep-ieee,castelluccio2015land,sun2020satellite,yang2019cdnet}. \begin{figure}[t]\centering \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/rgb_cloudy.pdf} \caption{Cloudy image (in RGB).} \end{subfigure} \hspace{0.5em} \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/rgb_notcloudy.pdf} \caption{Non-cloudy image (in RGB).} \end{subfigure} \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/b128_patch.pdf} \caption{Adversarial cube to bias the detector in the cloud-sensitive bands.} \label{fig:falsecolor} \end{subfigure} \hspace{0.5em} \begin{subfigure}[b]{0.47\linewidth} \centering \includegraphics[width=\linewidth]{./figures/intro/rgb_patch.pdf} \caption{Adversarial cube blended in the environment in the RGB domain.} \end{subfigure} \vspace{-0.5em} \caption{(Row 1) Cloudy and non-cloudy scenes. (Row 2) Our \emph{adversarial cube} fools the multispectral cloud detector~\cite{giuffrida2020cloudscout} to label the non-cloudy scene as cloudy with high confidence.} \label{fig:example} \end{figure} As the onboard compute capabilities of satellites improve, it has become feasible to conduct cloud mitigation directly on the satellites~\cite{li2018onboard,giuffrida2020cloudscout}. A notable example is CloudScout~\cite{giuffrida2020cloudscout}, which was tailored for the PhiSat-1 mission~\cite{esa-phisat-1} of the European Space Agency (ESA). PhiSat-1 carries the HyperScout-2 imager~\cite{esposito2019in-orbit} and the Eyes of Things compute payload~\cite{deniz2017eyes}. Based on the multispectral measurements, a convolutional neural network (CNN) is executed on board to perform cloud detection, which, in the case of~\cite{giuffrida2020cloudscout}, involves making a binary decision on whether the area under a data cube is \emph{cloudy} or \emph{not cloudy}; see Fig.~\ref{fig:example} (Row 1). To save bandwidth, only \emph{non-cloudy} data cubes are downlinked, while \emph{cloudy} ones are not transmitted to ground~\cite{giuffrida2020cloudscout}. 
However, deep neural networks (DNNs) in general and CNNs in particular are vulnerable towards adversarial examples, \ie, carefully crafted inputs aimed at fooling the networks into making incorrect predictions~\cite{akhtar2018threat, yuan2019adversarial}. A particular class of adversarial attacks called physical attacks insert adversarial patterns into the environment that, when imaged together with the targeted scene element, can bias DNN inference~\cite{athalye2018synthesizing, brown2017adversarial, eykholt2018robust, sharif2016accessorize, thys2019fooling}. In previous works, the adversarial patterns were typically colour patches optimised by an algorithm and fabricated to conduct the attack. It is natural to ask if DNNs for EO data are susceptible to adversarial attacks. In this paper, we answer the question in the affirmative by developing a physical adversarial attack against a multispectral cloud detector~\cite{giuffrida2020cloudscout}; see Fig.~\ref{fig:example} (Row 2). Our adversarial pattern is optimised in the multispectral domain (hence is an \emph{adversarial cube}) and can bias the cloud detector to assign a \emph{cloudy} label to a \emph{non-cloudy} scene. Under the mission specification of CloudScout~\cite{giuffrida2020cloudscout}, EO data over the area will not be transmitted to ground. \vspace{-1em} \paragraph{Our contributions} Our specific contributions are: \begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item We demonstrate the optimisation of adversarial cubes to be realised as an array of exterior paints that exhibit the multispectral reflectance to bias the cloud detector. \item We propose a novel multi-objective adversarial attack concept, where the adversarial cube is optimised to bias the cloud detector in the cloud sensitive bands, while remaining visually camouflaged in the visible bands. \item We investigate mitigation strategies against our adversarial attack and propose a simple robustification method. \end{enumerate} \vspace{-1em} \paragraph{Potential positive and negative impacts} Research into adversarial attacks can be misused for malicious activities. On the other hand, it is vital to highlight the potential of the attacks so as to motivate the development of mitigation strategies. Our contributions above are aimed towards the latter positive impact, particularly \#3 where a defence method is proposed. We are hopeful that our work will lead to adversarially robust DNNs for cloud detection. \section{Related work}\label{sec:related_work} Here, we review previous works on dealing with clouds in EO data and adversarial attacks in remote sensing. \subsection{Cloud detection in EO data}\label{sec:related_hyperspectral} EO satellites are normally equipped with multispectral or hyperspectral sensors, the main differences between the two being the spectral and spatial resolutions~\cite{madry2017electrooptical,transon2018survey}. Each ``capture'' by a multi/hyperspectral sensor produces a data cube, which consists of two spatial dimensions with as many channels as spectral bands in the sensor. Since 66-70\% of the surface of the Earth is cloud-covered at any given time~\cite{jeppesen2019cloud,li2018onboard}, dealing with clouds in EO data is essential. 
Two major goals are: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Cloud detection, where the location and extent of cloud coverage in a data cube are typically estimated; \item Cloud removal~\cite{li2019cloud,meraner2020cloud,zi2021thin}, where the values in the spatial locations occluded by clouds are restored. \end{itemize} Since our work relates to the former category, the rest of this subsection is devoted to cloud detection. Cloud detection assigns a \emph{cloud probability} or \emph{cloud mask} to each pixel of a data cube. The former indicates the likelihood of cloudiness at each pixel, while the latter indicates discrete levels of cloudiness at each pixel~\cite{sinergise-cloud-masks}. In the extreme case, a single binary label (\emph{cloudy} or \emph{not cloudy}) is assigned to the whole data cube~\cite{giuffrida2020cloudscout}; our work focusses on this special case of cloud detection. Cloud detectors use either \emph{hand-crafted features} or \emph{deep features}. The latter category is of particular interest because the methods have shown state-of-the-art performance~\cite{lopezpuigdollers2021benchmarking,liu2021dcnet}. The deep features are extracted from data via a series of hierarchical layers in a DNN, where the highest-level features serve as optimal inputs (in terms of some loss function) to a classifier, enabling discrimination of subtle inter-class variations and high intra-class variations~\cite{li2019deep-ieee}. The majority of cloud detectors that use deep features are based on an extension or variation of Berkeley's fully convolutional network architecture~\cite{long2015fully, shelhamer2017fully}, which was designed for pixel-wise semantic segmentation and demands nontrivial computing resources. For example, \cite{li2019deep} is based on SegNet~\cite{badrinarayanan2017segnet}, while \cite{mohajerani2018cloud, jeppesen2019cloud, yang2019cdnet, lopezpuigdollers2021benchmarking, liu2021dcnet, zhang2021cnn} are based on U-Net~\cite{ronneberger2015u-net}, none of which are suitable for on-board implementation. \subsection{On-board processing for cloud detection} On-board cloud detectors can be traced back to the thresholding-based Hyperion Cloud Cover algorithm~\cite{griffin2003cloud}, which operated on 6 of the hyperspectral bands of the EO-1 satellite. Li \etal's on-board cloud detector~\cite{li2018onboard} is an integrative application of the techniques of decision tree, spectral angle map~\cite{decarvalhojr2000spectral}, adaptive Markov random field~\cite{zhang2011adaptive} and dynamic stochastic resonance~\cite{chouhan2013enhancement}, but no experimental feasibility results were reported. Arguably the first DNN-based on-board cloud detector is CloudScout~\cite{giuffrida2020cloudscout}, which operates on the HyperScout-2 imager~\cite{esposito2019in-orbit} and the Eyes of Things compute payload~\cite{deniz2017eyes}. As alluded to above, the DNN assigns a single binary label to the whole input data cube; details of the DNN will be provided in Sec.~\ref{sec:training}. \subsection{Adversarial attacks in remote sensing} Adversarial examples can be \emph{digital} or \emph{physical}. Digital attacks apply pixel-level perturbations to legitimate test images, subject to the constraints that these perturbations look like natural occurrences, \eg, electronic noise. 
Classic white-box attacks such as the FGSM~\cite{goodfellow2015explaining} have been applied to attack CNN-based classifiers for RGB images~\cite{xu2021assessing}, multispectral images~\cite{kalin2021automating} and synthetic aperture radar images~\cite{li2021adversarial}. A key observation is the generalisability of attacks from RGB to multispectral images~\cite{ortiz2018integrated, ortiz2018on}. Generative adversarial networks have been used to generate natural-looking hyperspectral adversarial examples~\cite{burnel2021generating}. Physical attacks, as defined in Sec.~\ref{sec:intro}, need only access to the environment imaged by the victim, whereas digital attacks need access to the victim's test images (\eg, in a memory buffer); in this sense, physical attacks have weaker operational requirements and the associated impact is more concerning. For \emph{aerial/satellite RGB imagery}, physical attacks on a classifier~\cite{czaja2018adversarial}, aircraft detectors~\cite{den2020adversarial, lu2021scale} and a car detector~\cite{du2022physical} have been investigated, but only \cite{du2022physical} provided real-world physical test results. For \emph{aerial/satellite multi/hyperspectral imagery}, our work is arguably the first to consider physical adversarial attacks. \section{Threat model}\label{sec:threat_model} We first define the threat model that serves as a basis for our proposed adversarial attack. \begin{description}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item[Attacker's goals] The attacker aims to generate an adversarial cube that can bias a pretrained multispectral cloud detector to label non-cloudy space-based observations of scenes on the surface as cloudy. In addition, the attacker would like to visually camouflage the cube in a specific \textbf{region of attack (ROA)}; see Fig.~\ref{fig:rgb_scenes} for examples. Finally, the cube should be physically realisable. \begin{figure}[ht]\centering \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{./figures/threat_model/hills-roa.pdf} \caption{Hills.} \label{fig:hills} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{./figures/threat_model/desert-roa.pdf} \caption{Desert.} \label{fig:desert} \end{subfigure} \vspace{-0.5em} \caption{Sample regions of attack.} \label{fig:rgb_scenes} \end{figure} \item[Attacker's knowledge] The attacker has full information of the targeted DNN, including architecture and parameter values, \ie, a white-box attack. This is a realistic assumption due to the publication of detailed information on the model and training data~\cite{giuffrida2020cloudscout}. Moreover, from a threat mitigation viewpoint, assuming the worst case is useful. \item[Attacker's strategy] The attacker will optimise the adversarial cube on training data sampled from the same input domain as the cloud detector; the detailed method will be presented in Sec.~\ref{sec:attacking}. The cube will then be fabricated and placed in the environment, including the ROA, although Sec.~\ref{sec:limitations} will describe limitations on real-world evaluation of the proposed attack in our study. \end{description} \section{Building the cloud detector}\label{sec:training} We followed Giuffrida \etal.~\cite{giuffrida2020cloudscout} to build a multispectral cloud detector suitable for satellite deployment. 
\subsection{Dataset}\label{sec:cloud_detectors} We employed the Cloud Mask Catalogue~\cite{francis_alistair_2020_4172871}, which contains cloud masks for 513 Sentinel-2A~\cite{2021sentinel-2} data cubes collected from a variety of geographical regions, each with 13 spectral bands and 20 m ground resolution (1024$\times$1024 pixels). Following Giuffrida \etal., who also used Sentinel-2A data, we applied the Level-1C processed version of the data, \ie, top-of-atmosphere reflectance data cubes. We further spatially divide the data into 2052 data (sub)cubes of 512$\times$512 pixels each. To train the cloud detector model, the data cubes were assigned a binary label (\textit{cloudy} vs.~\textit{not cloudy}) by thresholding the number of cloud pixels in the cloud masks. Following Giuffrida \etal., two thresholds were used: 30\%, leading to dataset version TH30, and 70\%, leading to dataset version TH70 (the rationale will be described later). Each dataset was further divided into training, validation, and testing sets. Table~\ref{tab:cm_dataset} in the supp.~material summarises the datasets. \subsection{Model} We employed the CNN of Giuffrida \etal., which contains four convolutional layers in the feature extraction layers and two fully connected layers in the decision layers (see Fig.~\ref{fig:cnn_model} in the supp.~material for more details). The model takes as input 3 of the 13 bands of Sentinel-2A: band 1 (coastal aerosol), band 2 (blue), and band 8 (NIR). These bands correspond to the cloud-sensitive wavelengths; see Fig.~\ref{fig:falsecolor} for a false colour image in these bands. Using only 3 bands also leads to a smaller CNN ($\le 5$ MB) which allows it to fit on the compute payload of CloudScout~\cite{giuffrida2020cloudscout}. Calling the detector ``multispectral'' can be inaccurate given that only 3 bands are used. However, in Sec.~\ref{sec:mitigation}, we will investigate adversarial robustness by increasing the input bands and model parameters of Giuffrida \etal.'s model. \subsection{Training} Following~\cite{giuffrida2020cloudscout}, a two stage training process was applied: \begin{enumerate}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Train on TH30 to allow the feature extraction layers to recognise ``cloud shapes''. \item Then, train on TH70 to fine-tune the decision layers, while freezing the weights in the feature extraction layers. \end{enumerate} The two stage training is also to compensate for unbalanced distribution of training samples. Other specifications (\eg, learning rate and decay schedule, loss function) also follow that of Giuffrida \etal.; see~\cite{giuffrida2020cloudscout} for details. Our trained model has a memory footprint of 4.93 MB (1,292,546 32-bit float weights), and testing accuracy and false positive rate of 95.07\% and 2.46\%, respectively. \section{Attacking the cloud detector}\label{sec:attacking} Here, we describe our approach to optimising adversarial cubes to attack multispectral cloud detectors. 
\subsection{Adversarial cube design}\label{sec:material_selection} Digitally, an adversarial cube $\mathbf{P}$ is the tensor \begin{equation*} \mathbf{P} = \begin{pmatrix} \mathbf{p}_{1,1} & \mathbf{p}_{1,2} & \cdots & \mathbf{p}_{1,N} \\ \mathbf{p}_{2,1} & \mathbf{p}_{2,2} & \cdots & \mathbf{p}_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{p}_{M,1} & \mathbf{p}_{M,2} & \cdots & \mathbf{p}_{M,N} \end{pmatrix} \in [0,1]^{M \times N \times 13}, \end{equation*} where $M$ and $N$ (in pixels) are the sizes of the spatial dimensions, and $\mathbf{p}_{i,j} \in [0,1]^{13}$ is the intensity at pixel $(i,j)$ corresponding to the 13 multispectral bands of Sentinel-2A. Physically, $\mathbf{P}$ is to be realised as an array of exterior paint mixtures (see Fig.~\ref{fig:colour_swatches}) that exhibit the multispectral responses to generate the attack. The real-world size of each pixel of $\mathbf{P}$ depends on the ground resolution of the satellite-borne multispectral imager (more on this in Sec.~\ref{sec:limitations}). \subsubsection{Material selection and measurement} To determine the appropriate paint mixtures for $\mathbf{P}$, we first build a library of multispectral responses of exterior paints. Eighty exterior paint swatches (see Fig.~\ref{fig:colour_swatches_real}) were procured and scanned with a Field Spec Pro 3 spectrometer~\cite{asd2008fieldspec3} to measure their reflectance (Fig.~\ref{fig:paint_reflectance}) under uniform illumination. To account for solar illumination when viewed from the orbit, the spectral power distribution of sunlight (specifically, using the AM1.5 Global Solar Spectrum\cite{astm2003specification}; Fig.~\ref{fig:solar_spectrum}) was factored into our paint measurements via element-wise multiplication to produce the apparent reflection; Fig.~\ref{fig:paint_apparent_reflectance}. Lastly, we converted the continuous spectral range of the apparent reflectance of a colour swatch to the 13 Sentinel-2A bands by averaging over the bandwidth of each band; Fig.~\ref{fig:paint_13bands}. The overall result is the matrix \begin{align} \mathbf{C} = \left[ \begin{matrix} \mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_{80} \end{matrix} \right] \in [0,1]^{13 \times 80} \end{align} called the \emph{spectral index}, where $\mathbf{c}_q \in [0,1]^{13}$ contains the reflectance of the $q$-th colour swatch over the 13 bands. 
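To make the construction of the spectral index concrete, the following minimal Python sketch illustrates the band-averaging step described above; it assumes that the paint reflectance spectra and the AM1.5 Global Solar Spectrum are sampled on a common wavelength grid, and the variable names, band edges and final rescaling are our own illustrative assumptions rather than a description of the exact measurement pipeline.
\begin{verbatim}
# Sketch of building the spectral index C, under the stated assumptions.
import numpy as np

def build_spectral_index(wavelengths, reflectance, solar_spd, band_edges):
    # wavelengths : (W,) wavelength grid in nm
    # reflectance : (80, W) measured reflectance of the 80 paint swatches
    # solar_spd   : (W,) AM1.5 Global Solar Spectrum on the same grid
    # band_edges  : 13 (lo, hi) wavelength intervals of the Sentinel-2A bands
    apparent = reflectance * solar_spd[None, :]    # apparent reflectance
    C = np.zeros((len(band_edges), reflectance.shape[0]))
    for b, (lo, hi) in enumerate(band_edges):
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        C[b] = apparent[:, mask].mean(axis=1)      # average over the bandwidth
    return C / C.max()                             # assumed rescaling into [0, 1]
\end{verbatim}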
\begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches_diagram.pdf} \vspace{-2.0em} \caption{The adversarial cube (digital size $4 \times 5$ pixels in the example) is to be physically realised as a mixture of exterior paint colours that generate the optimised multispectral responses.} \label{fig:colour_swatches} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{./figures/methods/colour_swatches.pdf} \vspace{-1.5em} \caption{A subset of our colour swatches (paint samples).} \label{fig:colour_swatches_real} \end{figure} \begin{figure*}[ht]\centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/ybr_reflectance.pdf} \caption{Reflectance of a colour swatch.} \label{fig:paint_reflectance} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/solar_spectrum.pdf} \caption{AM1.5 Global Solar Spectrum.} \label{fig:solar_spectrum} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/ybr_apparent_reflectance.pdf} \caption{Apparent reflectance of (a).} \label{fig:paint_apparent_reflectance} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{./figures/methods/ybr_13bands.pdf} \caption{13 Sentinel-2 bands of (c).} \label{fig:paint_13bands} \end{subfigure} \vspace{-0.5em} \caption{Process of obtaining the 13 Sentinel-2 spectral bands of a colour swatch.} \label{fig:spectrometer} \end{figure*} \subsubsection{Adversarial cube parametrisation} We obtain $\mathbf{p}_{i,j}$ as a linear combination of the spectral index \begin{align}\label{eq:convex} \mathbf{p}_{i,j} = \mathbf{C}\cdot \sigma(\mathbf{a}_{i,j}), \end{align} where $\mathbf{a}_{i,j}$ is the real vector \begin{align} \mathbf{a}_{i,j} = \left[ \begin{matrix} a_{i,j,1} & a_{i,j,2} & \dots & a_{i,j,80} \end{matrix} \right]^T \in \mathbb{R}^{80}, \end{align} and $\sigma$ is the softmax function \begin{align} \sigma(\mathbf{a}_{i,j}) = \frac{1}{\sum^{80}_{d=1} e^{a_{i,j,d}}} \left[ \begin{matrix} e^{a_{i,j,1}} & \dots & e^{a_{i,j,80}} \end{matrix} \right]^T. \end{align} Effectively, $\mathbf{p}_{i,j}$~\eqref{eq:convex} is a convex combination of $\mathbf{C}$. Defining each $\mathbf{p}_{i,j}$ as a linear combination of $\mathbf{C}$ supports the physical realisation of each $\mathbf{p}_{i,j}$ through proportional mixing of the existing paints, as in colour printing~\cite{sharma2017digital}. Restricting the combination to be convex, thereby placing each $\mathbf{p}_{i,j}$ in the convex hull of $\mathbf{C}$, contributes to the sparsity of the coefficients~\cite{caratheodory-theorem}. In Sec.~\ref{sec:opimisation}, we will introduce additional constraints to further enhance physical realisability. 
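The convex-combination parametrisation in~\eqref{eq:convex} is straightforward to express in code. The snippet below is a hedged PyTorch sketch (the tensor names and sizes are ours, not those of an official implementation) showing how every pixel of the cube is obtained from the softmax of its free coefficients; by construction, every pixel then lies in the convex hull of the paint measurements.
\begin{verbatim}
# Hedged sketch of the convex-combination parametrisation p_{i,j} = C sigma(a_{i,j}).
import torch

M, N = 100, 100
C = torch.rand(13, 80)                          # spectral index (placeholder values)
A = torch.zeros(M, N, 80, requires_grad=True)   # free coefficients a_{i,j}

def adversarial_cube(A, C):
    weights = torch.softmax(A, dim=-1)          # convex weights, shape (M, N, 80)
    return weights @ C.T                        # cube P(A), shape (M, N, 13)
\end{verbatim}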
To enable the optimal paint mixtures to be estimated, we collect the coefficients for all $(i,j)$ into the set \begin{align} \mathcal{A} = \{ \mathbf{a}_{i,j} \}^{j = 1,\dots,N}_{i=1,\dots,M}, \end{align} and parametrise the adversarial cube as \begin{equation*} \mathbf{P}(\mathcal{A}) = \begin{pmatrix} \mathbf{C}\sigma(\mathbf{a}_{1,1}) & \mathbf{C}\sigma(\mathbf{a}_{1,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{1,N}) \\ \mathbf{C}\sigma(\mathbf{a}_{2,1}) & \mathbf{C}\sigma(\mathbf{a}_{2,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{2,N}) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{C}\sigma(\mathbf{a}_{M,1}) & \mathbf{C}\sigma(\mathbf{a}_{M,2}) & \cdots & \mathbf{C}\sigma(\mathbf{a}_{M,N}) \end{pmatrix}, \end{equation*} where $\mathbf{p}_{i,j}(\mathcal{A})$ is pixel $(i,j)$ of $\mathbf{P}(\mathcal{A})$. Optimising a cube thus reduces to estimating $\mathcal{A}$. \subsection{Data collection for cube optimisation}\label{sec:data_collection} Based on the attacker's goals (Sec.~\ref{sec:threat_model}), we collected Sentinel-2A Level-1C data products~\cite{2021copernicus} over the globe with a distribution of surface types that resembles the Hollstein dataset~\cite{hollstein2016ready-to-use}. The downloaded data cubes were preprocessed following~\cite{francis_alistair_2020_4172871}, including spatial resampling to achieve a ground resolution of 20~m and size $512 \times 512 \times 13$. Sen2Cor~\cite{main-knorn2017sen2cor} was applied to produce probabilistic cloud masks, and a threshold of 0.35 was applied on the probabilities to decide \textit{cloudy} and \textit{not cloudy} pixels. The binary cloud masks were further thresholded with 70\% cloudiness (Sec.~\ref{sec:cloud_detectors}) to yield a single binary label for each data cube. The data cubes were then evaluated with the cloud detector trained in Sec.~\ref{sec:training}. Data cubes labelled \emph{not cloudy} by the detector were separated into training and testing sets \begin{align} \mathcal{D} = \{ \mathbf{D}_k \}^{2000}_{k=1}, \;\;\;\; \mathcal{E} = \{ \mathbf{E}_\ell \}^{400}_{\ell=1}, \end{align} for adversarial cube training. One data cube $\mathbf{T} \in \mathcal{D}$ is chosen as the ROA (Sec.~\ref{sec:threat_model}). \begin{figure*}[ht]\centering \includegraphics[width=0.95\linewidth]{./figures/methods/pipeline.pdf} \vspace{-0.5em} \caption{Optimisation process for generating adversarial cubes.} \label{fig:pipeline} \end{figure*} \subsection{Optimising adversarial cubes}\label{sec:patch} We adapted Brown \etal's~\cite{brown2017adversarial} method, originally developed for optimising adversarial patches (visible domain). Fig.~\ref{fig:pipeline} summarises our pipeline for adversarial cube optimisation, with details provided in the rest of this subsection. \vspace{-1em} \paragraph{Subcubes} First, we introduce the subcube notation. Let $b \subseteq \{1,2,\dots,13\}$ index a subset of the Sentinel-2A bands. Using $b$ in the superscript of a data cube, e.g., $\mathbf{P}^{b}$, implies extracting the subcube of $\mathbf{P}$ with the bands indexed by $b$. Of particular interest are the following two band subsets: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item $c = \{1, 2, 8\}$, \ie, the cloud sensitive bands used in~\cite{giuffrida2020cloudscout}. \item $v = \{2, 3, 4\}$, \ie, the visible bands. 
\end{itemize} \subsubsection{Cube embedding and augmentations}\label{sec:augmentations} Given the current $\mathcal{A}$, adversarial cube $\mathbf{P}(\mathcal{A})$ is embedded into a training data cube $\mathbf{D}_k$ through several geometric and spectral intensity augmentations that simulate the appearance of the adversarial cube when captured in the field by a satellite. The geometric augmentations include random rotations and positioning to simulate variations in placement of $\mathbf{P}(\mathcal{A})$ in the scene. The spectral intensity augmentations include random additive noise, scaling and corruption to simulate perturbation by ambient lighting. \subsubsection{Loss function and optimisation}\label{sec:opimisation} Define $\mathbf{D}_k(\mathcal{A})$ as the training data cube $\mathbf{D}_k$ embedded with $\mathbf{P}(\mathcal{A})$ (with the augmentations described in Sec.~\ref{sec:augmentations}). The data cube is forward propagated through the cloud detector $f$ to estimate the \emph{confidence} \begin{align} \hat{y}_k = f(\mathbf{D}^c_k(\mathcal{A})) \end{align} of $\mathbf{D}_k(\mathcal{A})$ being in the \emph{cloudy} class. Note that the cloud detector considers only the subcube $\mathbf{D}^c_k(\mathcal{A})$ corresponding to the cloud sensitive bands. Since we aim to bias the detector to assign high $\hat{y}_k$ to $\mathbf{D}_k(\mathcal{A})$, we construct the loss \begin{align}\label{eq:loss} \Psi(\mathcal{A},\mathcal{D}) = \sum_k -\log(f(\mathbf{D}^c_k(\mathcal{A}))). \end{align} In addition to constraining the spectral intensities in $\mathbf{P}(\mathcal{A})$ to be in the convex hull of $\mathbf{C}$, we also introduce the multispectral non-printability score (NPS) \begin{align}\label{eq:nps_loss} \Phi(\mathcal{A}, \mathbf{C}) = \frac{1}{M N} \sum_{i,j} \left( \min_{\textbf{c} \in \mathbf{C}} \left\| \textbf{p}_{i,j}(\mathcal{A}) - \mathbf{c}\right\|_2 \right). \end{align} Minimising $\Phi$ encourages each $\textbf{p}_{i,j}(\mathcal{A})$ to be close to (one of) the measurements in $\textbf{C}$, which sparsifies the coefficients $\sigma(\mathbf{a}_{i,j})$ and helps with the physical realisability of $\mathbf{P}(\mathcal{A})$. The multispectral NPS is an extension of the original NPS for optimising (visible domain) adversarial patches~\cite{sharif2016accessorize}. To produce an adversarial cube that is ``cloaked'' in the visible domain in the ROA defined by $\mathbf{T}$, we devise the term \begin{align}\label{eq:cloaking_loss} \Omega(\mathcal{A}, \mathbf{T}) = \left\| \textbf{P}^{v}(\mathcal{A}) - \mathbf{T}^v_{M \times N} \right\|_2, \end{align} where $\mathbf{T}^v_{M \times N}$ is a randomly cropped subcube of spatial height $M$ and width $N$ in the visible bands $\mathbf{T}^v$ of $\mathbf{T}$. The overall loss is thus \begin{equation} L(\mathcal{A}) = \underbrace{\Psi(\mathcal{A},\mathcal{D})}_{\textrm{cloud sensitive}} + \alpha\cdot \underbrace{\Phi(\mathcal{A}, \mathbf{C})}_{\textrm{multispectral}} + \beta \cdot \underbrace{\Omega(\mathcal{A}, \mathbf{T})}_{\textrm{visible domain}}, \label{eq:overall_loss} \end{equation} where weights $\alpha, \beta \ge 0$ control the relative importance of the terms. Notice that the loss incorporates multiple objectives across different parts of the spectrum. \vspace{-1em} \paragraph{Optimisation} Minimising $L$ with respect to $\mathcal{A}$ is achieved using the Adam~\cite{kingma2014adam} stochastic optimisation algorithm. Note that the pre-trained cloud detector $f$ is not updated. 
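As a concrete illustration of how the terms in~\eqref{eq:overall_loss} fit together, the following hedged PyTorch sketch assembles the three losses and performs one Adam step; \texttt{detector}, \texttt{embed\_with\_augmentations}, the batch of non-cloudy cubes, the ROA crop and the learning rate are placeholders for the components described above, not a released implementation.
\begin{verbatim}
# Hedged sketch of the overall loss and one optimisation step (assumptions noted).
import torch

CLOUD_BANDS, VISIBLE_BANDS = [0, 1, 7], [1, 2, 3]  # bands 1, 2, 8 and 2, 3, 4 (0-indexed)
alpha, beta = 5.0, 0.05

def total_loss(A, C, batch, roa_patch, detector, embed_with_augmentations):
    P = torch.softmax(A, dim=-1) @ C.T                    # cube P(A), shape (M, N, 13)
    psi = 0.0                                             # adversarial biasing term
    for D in batch:                                       # D: (H, W, 13) non-cloudy cube
        Dk = embed_with_augmentations(D, P)               # random placement / intensity jitter
        y_hat = detector(Dk[..., CLOUD_BANDS])            # cloudy confidence in [0, 1]
        psi = psi - torch.log(y_hat + 1e-8)
    dists = torch.cdist(P.reshape(-1, 13), C.T)           # multispectral NPS
    phi = dists.min(dim=1).values.mean()
    omega = torch.norm(P[..., VISIBLE_BANDS] - roa_patch)  # visible-domain cloaking
    return psi + alpha * phi + beta * omega

# optimiser = torch.optim.Adam([A], lr=1e-2)   # the detector weights stay frozen
# loss = total_loss(A, C, batch, roa_patch, detector, embed_with_augmentations)
# optimiser.zero_grad(); loss.backward(); optimiser.step()
\end{verbatim}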
\vspace{-1em} \paragraph{Parameter settings} See Sec.~\ref{sec:results}. \subsection{Limitations on real-world testing}\label{sec:limitations} While our adversarial cube is optimised to be physically realisable, two major constraints prevent physical testing: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Lack of precise knowledge of and control over the operation of a real satellite makes it difficult to perform coordinated EO data capture with the adversarial cube. \item Cube dimensions of about 100$\times$100 pixels are required for effective attacks, which translates to 2 km$\times$2 km = 4 km$^2$ ground size (based on the ground resolution of the data; see Sec.~\ref{sec:data_collection}). This prevents full-scale fabrication on an academic budget. However, the size of the cube is well within the realm of possibility, \eg, solar farms and airports can be much larger than $4$ km$^2$~\cite{ong2013land}. \end{itemize} We thus focus on evaluating our attack in the digital domain, with real-world testing left as future work. \section{Measuring effectiveness of attacks}\label{sec:metrics} Let $\mathbf{P}^\ast = \mathbf{P}(\mathcal{A}^\ast)$ be the adversarial cube optimised by our method (Sec.~\ref{sec:attacking}). Recall from Sec.~\ref{sec:data_collection} that both datasets $\mathcal{D}$ and $\mathcal{E}$ contain \emph{non-cloudy} data cubes. We measure the effectiveness of $\mathbf{P}^\ast$ on the training set $\mathcal{D}$ via two metrics: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item Detection accuracy of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, i.e., \begin{equation}\label{eq:accuracy} \text{Accuracy}({\mathcal{D}}) \triangleq \frac{1}{|\mathcal{D}|} \sum^{|\mathcal{D}|}_{k=1} \mathbb{I}(f(\mathbf{D}^c_k(\mathcal{A}^\ast)) \le 0.5), \end{equation} where the lower the accuracy, the less often $f$ predicted the correct class label (\emph{non-cloudy}, based on confidence threshold $0.5$), hence the more effective the $\mathbf{P}^\ast$. \item Average confidence of the pretrained cloud detector $f$ (Sec.~\ref{sec:training}) on $\mathcal{D}$ embedded with $\mathbf{P}^\ast$, i.e., \begin{equation}\label{eq:average_probability} \text{Cloudy}({\mathcal{D}}) \triangleq \frac{1}{|\mathcal{D}|} \sum^{|\mathcal{D}|}_{k=1} f(\mathbf{D}^c_k(\mathcal{A}^\ast)). \end{equation} The higher the average confidence, the more effective the $\mathbf{P}^\ast$. \end{itemize} To obtain the effectiveness measures on the testing set $\mathcal{E}$, simply swap $\mathcal{D}$ in the above with $\mathcal{E}$. \section{Results}\label{sec:results} We optimised adversarial cubes of size 100$\times$100 pixels on $\mathcal{D}$ (512$\times$512 pixel dimension) under different loss configurations and evaluated them digitally (see Sec.~\ref{sec:limitations} on obstacles to real-world testing). Then, we investigated different cube designs and mitigation strategies for our attack. \subsection{Ablation tests}\label{sec:ablation} Based on the data collected, we optimised adversarial cubes under different combinations of loss terms: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item $\Psi$: Adversarial biasing in the cloud-sensitive bands~\eqref{eq:loss}. \item $\Phi$: Multispectral NPS~\eqref{eq:nps_loss}. \item $\Omega$-Hills: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Hills (Fig.~\ref{fig:hills}). 
\item $\Omega$-Desert: Cloaking~\eqref{eq:cloaking_loss} with $\mathbf{T}$ as Desert (Fig.~\ref{fig:desert}). \end{itemize} The weights in~\eqref{eq:overall_loss} were empirically determined to be $\alpha = 5.0$ and $\beta = 0.05$. \vspace{-1em} \paragraph{Convex hull and NPS} Fig.~\ref{fig:cubes_hull} shows the optimised cubes $\mathbf{P}^\ast$ and their individual spectral intensities $\mathbf{p}^\ast_{i,j}$ in the cloud sensitive bands (false colour) and visible domain. Note that without the convex hull constraints, the intensities (green points) are scattered quite uniformly, which complicates physical realisability of the paint mixtures. The convex hull constraints predictably limit the mixtures to be in the convex hull of $\mathbf{C}$. Carath{\'e}odory's Theorem~\cite{caratheodory-theorem} ensures that each $\mathbf{p}^\ast_{i,j}$ can be obtained by mixing at most 13 exterior paints. In addition, the multispectral NPS term encourages the mixtures to cluster closely around the columns of $\mathbf{C}$ (red points), \ie, close to an existing exterior paint colour. \vspace{-1em} \paragraph{Visual camouflage} Fig.~\ref{fig:cubes_loss_images} illustrates optimised cubes $\mathbf{P}^\ast$ embedded in the ROA Hills and Desert, with and without including the cloaking term~\eqref{eq:cloaking_loss} in the loss function. Evidently, the cubes optimised with $\Omega$ are less perceptible. \vspace{-1em} \paragraph{Effectiveness of attacks} Table~\ref{tab:result_loss} shows quantitative results on attack effectiveness (in terms of the metrics in Sec.~\ref{sec:metrics}) on the training $\mathcal{D}$ and testing $\mathcal{E}$ sets---again, recall that these datasets contain only \emph{non-cloudy} data cubes. The results show that the optimised cubes are able to strongly bias the pretrained cloud detector, by lowering the accuracy by at least $63\%$ (1.00 to 0.37) and increasing the cloud confidence by more than $1000\%$ (0.05 to 0.61). The figures also indicate the compromise an attacker would need to make between the effectiveness of the attack, the physical realisability and the visual imperceptibility of the cube. \begin{table}[ht] \setlength\tabcolsep{1pt} \centering \begin{tabular}{p{4.0cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}} \rowcolor{black} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\ \hline \textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\ \hline - (no adv.~cubes) & 1.00 & 1.00 & 0.05 & 0.05 \\ $\Psi$ (no convex hull constr.) & 0.04 & 0.03 & 0.95 & 0.95 \\ $\Psi$ & 0.13 & 0.12 & 0.81 & 0.83 \\ $\Psi + \alpha\Phi$ & 0.22 & 0.19 & 0.73 & 0.75 \\ $\Psi + \beta\Omega$-Hills & 0.17 & 0.14 & 0.77 & 0.80 \\ $\Psi + \beta\Omega$-Desert & 0.23 & 0.25 & 0.72 & 0.73 \\ $\Psi + \alpha\Phi + \beta\Omega$-Hills & 0.25 & 0.28 & 0.71 & 0.70 \\ $\Psi + \alpha\Phi + \beta\Omega$-Desert & 0.37 & 0.37 & 0.61 & 0.61 \\ \end{tabular} \vspace{-0.5em} \caption{Effectiveness of 100$\times$100 adversarial cubes optimised under different loss configurations (Sec.~\ref{sec:ablation}). Lower accuracy = more effective attack. 
Higher cloud confidence = more effective attack.} \label{tab:result_loss} \end{table} \begin{figure*}[ht]\centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{./figures/results/hull/log_nohull.pdf} \caption{$L = \Psi$ (without convex hull constraints).} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{./figures/results/hull/log_hull.pdf} \caption{$L = \Psi$.} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{./figures/results/hull/log+nps_hull.pdf} \caption{$L = \Psi + \alpha \cdot \Phi$.} \end{subfigure} \vspace{-0.5em} \caption{Effects of convex hull constraints and multispectral NPS on optimised cube $\mathbf{P}^\ast$. The top row shows the cube and individual pixels $\mathbf{p}^\ast_{i,j}$ (green points) in the visible bands $v$, while the bottom row shows the equivalent values in the cloud sensitive bands $c$ (in false colour). In the 3-dimensional plots, the red points indicate the columns of the spectral index $\mathbf{C}$ and black lines its convex hull.} \label{fig:cubes_hull} \end{figure*} \begin{figure}[ht]\centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/not_camo_hills.pdf} \caption{$L = \Psi + \alpha \Phi$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/camo_hills.pdf} \caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Hills}$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/not_camo_desert.pdf} \caption{$L = \Psi + \alpha \Phi$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/loss/camo_desert.pdf} \caption{$L = \Psi + \alpha \Phi + \beta \Omega$-$\textrm{Desert}$.} \end{subfigure} \vspace{-0.5em} \caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ with and without the cloaking term~\eqref{eq:cloaking_loss}.} \label{fig:cubes_loss_images} \end{figure} \subsection{Different cube configurations}\label{sec:multcube} Can the physical footprint of the adversarial cube be reduced to facilitate real-world testing? To answer this question, we resize $\mathbf{P}$ to 50$\times$50 pixels and optimise a number of them (4 or 6) instead. We also tested random configurations with low and high proximity amongst the cubes. The training pipeline for the multi-cube setting remains largely the same. Fig.~\ref{fig:cubes_config_images} shows (in visible domain) the optimised resized cubes embedded in a testing data cube. Quantitative results on the effectiveness of the attacks are given in Table~\ref{tab:result_cubeconfig}. Unfortunately, the results show a significant drop in attack effectiveness when compared against the 100$\times$100 cube on all loss configurations. This suggests that the size and spatial continuity of the adversarial cube are important factors to the attack. 
\begin{table}[ht] \setlength\tabcolsep{1pt} \centering \begin{tabular}{p{0.7cm} | p{1.50cm} | p{1.80cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}} \rowcolor{black} \multicolumn{3}{l |}{\textcolor{white}{\textbf{Cube configurations}}} & \multicolumn{2}{l |}{\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l}{\textcolor{white}{\textbf{Cloudy}}} \\ \hline \textbf{\#} & \textbf{Size} & \textbf{Proximity} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\ \hline \multicolumn{3}{l |}{- (no adv.~cubes)} & 1.00 & 1.00 & 0.05 & 0.05 \\ 4 & 50$\times$50 & Low & 0.87 & 0.87 & 0.26 & 0.27 \\ % 6 & 50$\times$50 & Low & 0.71 & 0.72 & 0.33 & 0.33 \\ 4 & 50$\times$50 & High & 0.63 & 0.62 & 0.42 & 0.44 \\ 6 & 50$\times$50 & High & 0.63 & 0.63 & 0.40 & 0.41 \\ \end{tabular} \vspace{-0.5em} \caption{Effectiveness of 50$\times$50 adversarial cubes under different cube configurations (Sec.~\ref{sec:multcube}) optimised with loss $L = \Psi + \alpha\Phi$. Lower accuracy = more effective attack. Higher cloud confidence = more effective attack. Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.} \label{tab:result_cubeconfig} \end{table} \begin{figure}[ht]\centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/four_random.pdf} \caption{Four 50$\times$50 cubes (low prox).} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/six_random.pdf} \caption{Six 50$\times$50 cubes (low prox).} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/four_fixed.pdf} \caption{Four 50$\times$50 cubes (high prox).} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{./figures/results/config/six_fixed.pdf} \caption{Six 50$\times$50 cubes (high prox).} \end{subfigure} \vspace{-0.5em} \caption{Optimised cubes $\mathbf{P}^\ast$ shown in the visible domain $v$ of different cube configurations.} \label{fig:cubes_config_images} \end{figure} \subsection{Mitigation strategies}\label{sec:mitigation} We investigated several mitigation strategies against our adversarial attack: \begin{itemize}[leftmargin=1em,itemsep=2pt,parsep=0pt,topsep=2pt] \item 13 bands: Increasing the number of input bands of the cloud detector from 3 to 13 (all Sentinel-2A bands); \item $\sqrt{2}$: Doubling the model size of the cloud detector by increasing the number of filter/kernels in the convolutional layers and activations in the fully connected layers by $\sqrt{2}$ \item $2\times$ CONV: Doubling the model size of the cloud detector by adding two additional convolutional layers. \end{itemize} Table~\ref{tab:result_mitigations} shows that using a ``larger'' detector (in terms of the number of input channels and layers) yielded slightly worse cloud detection accuracy. However, increasing the number of input bands significantly reduced our attack effectiveness, possibly due to the increased difficulty of biasing all 13 channels simultaneously. This argues for using greater satellite-borne compute payloads than that of~\cite{giuffrida2020cloudscout}. 
\begin{table}[ht] \setlength\tabcolsep{1pt} \centering \begin{tabular}{p{1.5cm} | p{2.5cm} | p{1.0cm} p{1.0cm} | p{1.0cm} p{1.0cm}} \rowcolor{black} & & \multicolumn{2}{l |} {\textcolor{white}{\textbf{Accuracy}}} & \multicolumn{2}{l} {\textcolor{white}{\textbf{Cloudy}}} \\ \hline \textbf{Detectors} & \textbf{Loss functions} & \textbf{Train} & \textbf{Test} & \textbf{Train} & \textbf{Test} \\ \hline 13 bands & - (no adv.~cubes) & 1.00 & 1.00 & 0.06 & 0.06 \\ & $\Psi + \alpha\Phi$ & 0.94 & 0.96 & 0.15 & 0.14 \\ \hline $\sqrt{2}$ & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\ & $\Psi + \alpha\Phi$ & 0.36 & 0.38 & 0.62 & 0.60 \\ \hline $2\times$CONV & - (no adv.~cubes) & 1.00 & 1.00 & 0.08 & 0.08 \\ & $\Psi + \alpha\Phi$ & 0.26 & 0.25 & 0.74 & 0.73 \\ \end{tabular} \vspace{-0.75em} \caption{Effectiveness of 100$\times$100 adversarial cubes optimised for different cloud detector designs (Sec.~\ref{sec:mitigation}). Lower accuracy = more effective attack. Higher cloud confidence = more effective attack. Compare with single 100$\times$100 adversarial cube results in Table~\ref{tab:result_loss}.} \label{tab:result_mitigations} \end{table} \section{Conclusions and limitations}\label{sec:conclusion} We proposed a physical adversarial attack against a satellite-borne multispectral cloud detector. Our attack is based on optimising exterior paint mixtures that exhibit the required spectral signatures to bias the cloud detector. Evaluation in the digital domain illustrates the realistic threat of the attack, though the simple mitigation strategy of using all input multispectral bands seems to offer good protection. As detailed in Sec.~\ref{sec:limitations}, our work is limited to digital evaluation due to several obstacles. Real-world testing of our attack and defence strategies will be left as future work. \vfill \section{Usage of existing assets and code release} The results in this paper were partly produced from ESA remote sensing data, as accessed through the Copernicus Open Access Hub~\cite{2021copernicus}. Source code and/or data used in our paper will be released subject to securing permission. \vfill \section*{Acknowledgements}\label{sec:acknowledgement} Tat-Jun Chin is SmartSat CRC Professorial Chair of Sentient Satellites. {\small \bibliographystyle{ieee_fullname}
\section{Limitations and Conclusion} \label{sec:conclusion} A major limitation of NeRF-SR{} is that it does not enjoy the arbitrary-scale property. It also introduces extra computational overhead, albeit consuming no more time than training an HR NeRF. In conclusion, we presented NeRF-SR{}, the first pipeline for HR novel view synthesis with mostly low-resolution inputs, which achieves photorealistic renderings without any external data. Specifically, we exploit the 3D consistency in NeRF from two perspectives: a supersampling strategy that finds corresponding points across multiple views at the sub-pixel level, and depth-guided refinement that hallucinates details from relevant patches on an HR reference image. Finally, region-sensitive supersampling and generalized NeRF super-resolution may be explored in future work. \section{Related Work} \label{sec:related-work} \noindent\textbf{Novel View Synthesis.} Novel view synthesis can be categorized into image-based, learning-based, and geometry-based methods. Image-based methods warp and blend relevant patches in the observation frames to generate novel views based on measurements of quality \cite{gortler1996lumigraph, levoy1996light}. Learning-based methods predict blending weights and view-dependent effects via neural networks and/or other hand-crafted heuristics\cite{hedman2018deep, choi2019extreme, riegler2020free, thies2020image}. Deep learning has also facilitated methods that can predict novel views from a single image, but they often require a large amount of data for training\cite{tucker2020single, wiles2020synsin, shih20203d, niklaus20193d, rockwell2021pixelsynth}. Different from image-based and learning-based methods, geometry-based methods first reconstruct a 3D model \cite{schonberger2016structure} and render images from target poses. For example, Aliev \etal\cite{aliev2020neural} assigned multi-resolution features to point clouds and then performed neural rendering, while Thies \etal\cite{thies2019deferred} stored neural textures on 3D meshes and then rendered the novel view with the traditional graphics pipeline. Other geometry representations include multi-plane images \cite{zhou2018stereo, mildenhall2019local, flynn2019deepview, srinivasan2019pushing, li2020crowdsampling, li2021mine}, voxel grids \cite{henzler2020learning, penner2017soft, kalantari2016learning}, depth \cite{wiles2020synsin, flynn2019deepview, riegler2020free, riegler2021stable} and layered depth \cite{shih20203d, tulsiani2018layer}. Although these methods produce relatively high-quality results, their discrete representations require abundant data and memory, and the rendered resolutions are also limited by the accuracy of the reconstructed geometry. \vspace{2mm} \noindent\textbf{Neural Radiance Fields.} Implicit neural representations have demonstrated their effectiveness in representing shapes and scenes, usually leveraging multi-layer perceptrons (MLPs) to encode signed distance fields \cite{park2019deepsdf, duan2020curriculum}, occupancy \cite{mescheder2019occupancy, peng2020convolutional, chen2019learning} or volume density \cite{mildenhall2020nerf, niemeyer2020differentiable}. Together with differentiable rendering \cite{kato2018neural, liu2019soft}, these methods can reconstruct both geometry and appearance of objects and scenes \cite{sitzmann2019scene, saito2019pifu, niemeyer2020differentiable, sitzmann2019deepvoxels, liu2020neural}. 
Among them, Neural Radiance Fields (NeRF) \cite{mildenhall2020nerf} achieved remarkable results for synthesizing novel views of a static scene given a set of posed input images. A growing number of NeRF extensions have emerged, \eg reconstruction without input camera poses\cite{wang2021nerf, lin2021barf}, modelling non-rigid scenes \cite{pumarola2021d, park2021nerfies, park2021hypernerf, martin2021nerf}, unbounded scenes\cite{zhang2020nerf++} and object categories \cite{yu2021pixelnerf, trevithick2021grf, jang2021codenerf}. Relevant to our work, Mip-NeRF~\cite{barron2021mip} also considers the issue of \textit{resolution} in NeRF. They showed that NeRFs rendered at various resolutions introduce aliasing artifacts, and resolved this by proposing an integrated positional encoding that featurizes conical frustums instead of single points. Yet, Mip-NeRF only considers rendering with downsampled resolutions. To our knowledge, no prior work studies how to increase the resolution of NeRF. \vspace{2mm} \noindent\textbf{Image Super-Resolution.} Our work is also related to image super-resolution. Classical approaches in single-image super-resolution (SISR) utilize priors such as image statistics \cite{kim2010single, zontak2011internal} or gradients \cite{sun2008image}. CNN-based methods aim to learn the relationship between HR and LR images by minimizing the mean-square error between SR images and ground truths \cite{dong2014learning, wang2015deep, dong2015image}. Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} are also popular in super-resolution, hallucinating high-resolution details by adversarial learning \cite{ledig2017photo, menon2020pulse, sajjadi2017enhancenet}. These methods mostly gain knowledge from large-scale datasets or existing HR and LR pairs for training. Besides, these 2D image-based methods, especially GAN-based ones, do not take view consistency into consideration and are sub-optimal for novel view synthesis. Reference-based image super-resolution (Ref-SR) upscales input images with additional reference high-resolution (HR) images. Existing methods match the correspondences between HR references and LR inputs with patch-match \cite{zhang2019image, zheng2017combining}, feature extraction \cite{xie2020feature, yang2020learning} or attention \cite{yang2020learning}. Although we also aim to learn HR details from given reference images, we work from the 3D geometry perspective and can bring details to all novel views instead of a single image. \section{Introduction} \label{sec:intro} Synthesizing photorealistic views from a novel viewpoint given a set of posed images, known as \textit{novel view synthesis}, has been a long-standing problem in the computer vision community, and an important technique for VR and AR applications such as navigation and telepresence. Traditional approaches mainly fall in the range of image-based rendering and follow the process of warping and blending source frames to target views \cite{gortler1996lumigraph, levoy1996light}. Image-based rendering methods heavily rely on the quality of input data and only produce reasonable renderings with dense observed views and accurate proxy geometry. 
\begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{figure/teaser1.pdf} \caption{NeRF, the state-of-the-art novel view synthesis method, can synthesize photorealistic outputs at the resolution of training images but struggles at higher resolutions as shown in (a), while NeRF-SR{} produces high-quality novel views (b) even with low-resolution inputs.} \label{fig:teaser} \end{figure} Most recently, \textit{neural rendering} has made significant progress on novel view synthesis by leveraging learnable components with 3D geometry context to reconstruct novel views with respect to input images. As the current state-of-the-art method, neural radiance fields (NeRF) \cite{mildenhall2020nerf} have emerged as a promising direction for neural scene representation even on sparse image sets of complex real-world scenes. NeRF uses the weights of multi-layer perceptrons (MLPs) to encode the radiance field and volume density of a scene. Most importantly, the implicit neural representation is continuous, which enables NeRF to take as input any position in the volume at inference time and render images at any arbitrary resolution. At the same time, a high-resolution 3D scene is essential for many real-world applications, \eg a prerequisite to providing an immersive virtual environment in VR. However, a trained NeRF struggles to generalize directly to resolutions higher than that of the input images and generates blurry views (see \figref{fig:teaser}), which presents an obstacle for real-world scenarios, \eg images collected from the Internet may be low-resolution. To tackle this problem, we present NeRF-SR{}, a technique that extends NeRF and creates high-resolution (HR) novel views with better quality even with low-resolution (LR) inputs. We first observe there is a sampling gap between the training and testing phases for super-resolving a 3D scene, since the sparse inputs are far from satisfying Nyquist view sampling rates~\cite{mildenhall2019local}. To this end, we derive inspiration from the traditional graphics pipeline and propose a supersampling strategy to better enforce the multi-view consistency embedded in NeRF in a sub-pixel manner, enabling the generation of both SR images and SR depth maps. Second, in the case of limited HR images such as panoramas and light field imaging systems that have a trade-off between angular and spatial resolutions, we find that directly incorporating them in the NeRF training only improves renderings \textit{near the HR images} by a small margin. Thus, we propose a patch-wise warp-and-refine strategy that utilizes the estimated 3D geometry and propagates the details of the HR reference \textit{all over the scene}. Moreover, the refinement stage is efficient and introduces negligible running time compared with NeRF rendering. To the best of our knowledge, we are the first to produce visually pleasing results for novel view synthesis under mainly low-resolution inputs. Our method requires only posed multi-view images of the target scene, whose internal statistics we exploit, and does not rely on any external priors. We show that NeRF-SR{} outperforms baselines that require LR-HR pairs for training. 
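To make the supersampling idea concrete, a minimal Python sketch of how the sub-pixel sample locations inside a low-resolution pixel can be generated is given below; the pixel-centre coordinate convention and function name are illustrative assumptions rather than a description of the exact implementation. Each location is subsequently converted to a camera ray, as in NeRF, and the rendered sub-pixel colours are averaged to supervise the corresponding LR pixel.
\begin{verbatim}
# Illustrative sketch of sub-pixel sampling: each LR pixel (i, j) is split
# into an s x s grid and one ray is later cast through every sub-pixel centre.
import numpy as np

def subpixel_coordinates(i, j, s):
    offsets = (np.arange(s) + 0.5) / s                # e.g. s = 2 -> [0.25, 0.75]
    u, v = np.meshgrid(j + offsets, i + offsets)
    return np.stack([u.ravel(), v.ravel()], axis=-1)  # (s * s, 2) image-plane points
\end{verbatim}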
Our contributions are summarized as follows: \begin{itemize} \item the first framework that produces decent multi-view super-resolution results with mostly LR input images \item a supersampling strategy that exploits the view consistency in images and supervises NeRF in the sub-pixel manner \item a refinement network that blends details from any HR reference by finding relevant patches with available depth maps \end{itemize} \section{Limitation} \section{Experiments} \label{sec:experiments} \begin{table*}[htbp] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|ccc|ccc|ccc|ccc} & \multicolumn{3}{c|}{Blender$\times 2$ ($100 \times 100$)} & \multicolumn{3}{c|}{Blender$\times 4$ ($100 \times 100$)} & \multicolumn{3}{c}{Blender$\times 2$ ($200 \times 200$)} & \multicolumn{3}{c}{Blender$\times 4$ ($200 \times 200$)} \\ Method & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\ \hline NeRF~\cite{mildenhall2020nerf} & $\underline{27.54}$ & $\underline{0.921}$ & $0.100$ & $\underline{25.56}$ & $0.881$ & $0.170$ & $\underline{29.16}$ & $\underline{0.935}$ & $0.077$ & $\underline{27.47}$ & $0.910$ & $0.128$ \\ NeRF-Bi & $26.42$ & $0.909$ & $0.151$ & $24.74$ & $0.868$ & $0.244$ & $28.10$ & $0.926$ & $0.109$ & $26.67$ & $0.900$ & $0.175$ \\ NeRF-Liif & $27.07$ & $0.919$ & $\underline{0.067}$ & $25.36$ & $\underline{0.885}$ & $0.125$ & $28.81$ & $0.934$ & $\underline{0.058}$ & $27.34$ & $\underline{0.912}$ & $0.096$ \\ NeRF-Swin & $26.34$ & $0.913$ & $0.075$ & $24.85$ & $0.881$ & $\underline{0.108}$ & $28.03$ & $0.926$ & $\underline{0.058}$ & $26.78$ & $0.906$ & $\underline{0.086}$ \\ Ours-SS & $\boldsymbol{29.77}$ & $\boldsymbol{0.946}$ & $\boldsymbol{0.045}$ & $\boldsymbol{28.07}$ & $\boldsymbol{0.921}$ & $\boldsymbol{0.071}$ & $\boldsymbol{31.00}$ & $\boldsymbol{0.952}$ & $\boldsymbol{0.038}$ & $\boldsymbol{28.46}$ & $\boldsymbol{0.921}$ & $\boldsymbol{0.076}$ \end{tabular} } \caption{Quality metrics for novel view synthesis on blender dataset. We report PSNR/SSIM/LPIPS for scale factors $\times2$ and $\times4$ on two input resolutions ($100 \times 100$ and $200 \times 200$) respectively. } \label{table:blender-results} \end{table*} \setlength{\tabcolsep}{1.4pt} In this section, we provide both quantitative and qualitative comparisons to demonstrate the advantages of the proposed NeRF-SR{}. We first show results and analysis of super-sampling, and then demonstrate how the refinement network adds more details to it. Our result only with super-sampling is denoted as Ours-SS and our result after patch-based refinement is denoted as Ours-Refine. \subsection{Dataset and Metrics} To evaluate our methods, we train and test our model on the following datasets. We evaluate the quality of view synthesis with respect to ground truth from the same pose using three metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) \cite{wang2003multiscale} and LPIPS\cite{zhang2018unreasonable}. \topic{Blender Dataset} The Realistic Synthetic $360^{\circ}$ of \cite{mildenhall2019local} (known as Blender dataset) contains 8 detailed synthetic objects with 100 images taken from virtual cameras arranged on a hemisphere pointed inward. As in NeRF\cite{mildenhall2020nerf}, for each scene we input 100 views for training and hold out 200 images for testing. 
\topic{LLFF Dataset} The LLFF dataset~\cite{mildenhall2019local, mildenhall2020nerf} consists of 8 real-world scenes that contain mainly forward-facing images. We train on all the images and report the average metrics on the whole set. \subsection{Training Details} In super-sampling, we implement all experiments on top of NeRF~\cite{mildenhall2020nerf} using PyTorch. As we train on different image resolutions independently, for a fair comparison we train on the blender dataset and the LLFF dataset for 20 and 30 epochs, respectively, where each epoch contains an iteration over the whole training set. We choose Adam as the optimizer (with hyperparameters $\beta_1 = 0.9$, $\beta_2 = 0.999$) with the batch size set to 2048 (2048 rays a batch for all experimented scales) and the learning rate decayed exponentially from $5\cdot 10^{-4}$ to $5 \cdot 10^{-6}$. Following NeRF, NeRF-SR{} also uses hierarchical sampling with same-sized ``coarse'' and ``fine'' MLPs. The numbers of coarse samples and fine samples are both set to 64. \begin{figure*}[htbp] \centering \includegraphics[width=1.0\linewidth]{results/results-final.pdf} \caption{Qualitative comparison on the blender dataset when the input images are $200 \times 200$ and upscaled by 2 and 4. Note how NeRF-SR{} recovers correct details through super-sampling even when inputting low-resolution images, such as \textit{Lego}'s gears, \textit{Hotdog}'s sausage and sauce, \textit{Mic}'s magnets and shiny brackets. Note that NeRF-SR{} is able to synthesize consistently over different viewpoints; here we provide two for \textit{Hotdog}, and videos can be found on our \href{https://cwchenwang.github.io/NeRF-SR}{website}. Please zoom in for a better inspection of the results. } \label{fig:res-blender} \end{figure*} \input{results/llff-results} \subsection{Comparisons} Since there is no previous work that deals with super-resolving NeRF, we devise several reasonable baselines for comparisons, detailed as follows: \topic{NeRF} Vanilla NeRF is already capable of synthesizing images at any resolution due to its implicit formulation. Therefore, we train NeRF on LR inputs using the same hyperparameters as in our method and directly render HR images. \topic{NeRF-Bi} aims to super-resolve a trained LR NeRF. We use the same trained model as in the NeRF baseline, but render LR images directly and upsample them with the commonly used bicubic upsampling. \topic{NeRF-Liif} Liif~\cite{chen2021learning} achieves state-of-the-art performance on continuous single image super-resolution. Similar to the NeRF-Bi baseline, we super-resolve LR images using a pretrained Liif model instead. Note that the training process of Liif requires LR-HR pairs; therefore, it introduces external data priors. \topic{NeRF-Swin} SwinIR~\cite{liang2021swinir} is the state-of-the-art method on single image super-resolution. Like NeRF-Bi and NeRF-Liif, NeRF-Swin performs super-resolution on an LR NeRF with the released SwinIR models under the ``Real-World Image Super-Resolution'' setting, which has a training set of more than 10k LR-HR pairs. \subsection{Effectiveness of supersampling} For the blender dataset, we super-sample on two resolutions, $100 \times 100$ and $200 \times 200$, and test scales $\times 2$ and $\times 4$. For the LLFF dataset, the input resolution is $504 \times 378$ and we also upscale by $\times 2$ and $\times 4$. The downscaling of images in the dataset from the original resolution to the training resolution is done with the default Lanczos method in the Pillow package.
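For reference, a minimal sketch of this downscaling step is given below; it assumes Pillow's standard resampling API, and the file names and target size are illustrative placeholders rather than taken from our pipeline.
\begin{verbatim}
from PIL import Image

# Illustrative: downscale a ground-truth image to the training resolution
# with Lanczos resampling (file names and target size are placeholders).
img = Image.open("lego_gt.png")
lr = img.resize((100, 100), Image.LANCZOS)
lr.save("lego_train_100x100.png")
\end{verbatim}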
\figref{fig:res-blender} shows qualitative results for all methods on a subset of blender scenes. Renderings from NeRF-Bi exhibit correct global shapes but lack high-frequency details. Vanilla NeRF produces renderings that have more details than NeRF-Bi if the scene is already well-reconstructed at the input resolution. However, it is still restricted by the information in the input image. NeRF-Liif can recover some details, but lacks enough texture. NeRF-SR{} finds sub-pixel-level correspondences through supersampling, which means missing details in the input can be found from other views that lie in the neighboring region in 3D space. Quantitative results for the blender dataset are summarized in \tabref{table:blender-results}. NeRF-SR{} outperforms the other baselines in all scenarios. NeRF-Liif or NeRF-Swin have the second-best LPIPS, providing good visual quality, but cannot even compete with NeRF in PSNR and SSIM. The reason may be that the blender dataset is synthetic and has a different domain than the datasets these models were trained on, resulting in false predictions (see NeRF-Swin on \textit{Lego} and \textit{Hotdog}). The qualitative and quantitative results for the LLFF dataset are demonstrated in \figref{fig:res-llff} and \tabref{table:llff-results}, respectively. NeRF and NeRF-Bi suffer from blurry outputs. While NeRF-Liif and NeRF-Swin recover some details and achieve satisfying visual quality (comparable LPIPS to Ours-SS) since they are trained on external datasets, they tend to be oversmooth and even predict false colors or geometry (see the leaves of \textit{Flower} in \figref{fig:res-llff}). NeRF-SR{} fills in the details on the complex scenes and outperforms the other baselines significantly. Therefore, we can conclude that learning-based 2D baselines struggle to perform faithful super-resolution, especially in the multi-view case. In \secref{subsec:ss}, we mentioned that the supervision is performed by comparing the average color of sub-pixels due to the unknown nature of the degradation process (we call it the ``average kernel''). However, in our experiments, the degradation kernel is actually Lanczos, resulting in an asymmetric downscale and upscale operation. We further experiment on the condition that the degradation from high-resolution to input images is also the ``average kernel'' for the blender data at resolution $100 \times 100$. Results show that this symmetric downscale and upscale operation provides better renderings than the asymmetric one. PSNR, SSIM, and LPIPS are all improved, to $30.94$ dB, $0.956$, $0.023$ for scale $\times 2$ and to $28.28$ dB, $0.925$ and $0.061$ for $\times 4$, respectively. The sensitivity to the degradation process is similar to that exhibited in single-image super-resolution. Detailed renderings can be found in the \href{https://cwchenwang.github.io/NeRF-SR/data/supp.pdf}{supplementary}. \subsection{Refinement network} The LLFF dataset contains real-world pictures that have a much more complex structure than the blender dataset, and super-sampling alone is not enough for photorealistic renderings.
We further boost its outputs with the refinement network introduced in \secref{subsec:refine}. We use a fixed number of reference patches ($K = 8$) and the dimensions of the patches are set to $64 \times 64$. At inference time, the input images are divided into non-overlapping patches and stitched together after refinement. Without loss of generality, we set the reference image to the first image in the dataset for all scenes, which is omitted when calculating the metrics. The inference time of the refinement stage is negligible compared to NeRF's volumetric rendering: for example, it takes about 48 seconds for NeRF's MLP to render a $1008 \times 756$ image, and it only takes another 1.3 seconds in the refinement stage on a single 1080Ti. The quantitative results of refinement can be found in \tabref{table:llff-results}. After refinement, the metrics are improved substantially at the scale of 4. For the scale of 2, PSNR increases only slightly after refining; a possible reason is that supersampling already learns a decent high-resolution neural radiance field for small upscale factors, and the refinement only improves subtle details (please refer to the \href{https://cwchenwang.github.io/NeRF-SR/data/supp.pdf}{supplementary} for an example). However, we can see that LPIPS still improves, meaning the visual appearance gets better. This issue does not occur for larger magnifications such as 4, since supersampling derives far fewer details from low-resolution inputs, making the refinement process necessary. We demonstrate the renderings qualitatively before and after refining in \figref{fig:res-llff}. It is clear to see that the refinement network boosts supersampling with texture details and edge sharpness. \begin{table}[htbp] \centering \resizebox{\linewidth}{!}{% \begin{tabular}{l|ccc|ccc} & \multicolumn{3}{c|}{LLFF$\times 2$} & \multicolumn{3}{c}{LLFF$\times 4$} \\ Method & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\ \hline NeRF~\cite{mildenhall2020nerf} & $26.36$ & $0.805$ & $0.225$ & $24.47$ & $0.701$ & $0.388$ \\ NeRF-Bi & $25.50$ & $0.780$ & $0.270$ & $23.90$ & $0.676$ & $0.481$ \\ NeRF-Liif & $\underline{26.81}$ & $\underline{0.823}$ & $\underline{0.145}$ & $\underline{24.76}$ & $\underline{0.723}$ & $0.292$ \\ NeRF-Swin & $25.18$ & $0.793$ & $0.147$ & $23.26$ & $0.685$ & $\underline{0.247}$ \\ Ours-SS & $\boldsymbol{27.31}$ & $\boldsymbol{0.838}$ & $\boldsymbol{0.139}$ & $\boldsymbol{25.13}$ & $\boldsymbol{0.730}$ & $\boldsymbol{0.244}$ \\ \hline Ours-Refine & $\boldsymbol{27.34}$ & $\boldsymbol{0.842}$ & $\boldsymbol{0.103}$ & $\boldsymbol{25.59}$ & $\boldsymbol{0.759}$ & $\boldsymbol{0.165}$ \\ \end{tabular} } \caption{Quality metrics for view synthesis on the LLFF dataset. We report PSNR/SSIM/LPIPS for scale factors $\times2$ and $\times4$ on an input resolution of $504 \times 378$. } \label{table:llff-results} \end{table} \section{Background} \label{sec:background} Neural Radiance Fields (NeRF) \cite{mildenhall2020nerf} encodes a 3D scene as a continuous function which takes as input a 3D position $\mathbf{x} = (x, y, z)$ and an observed viewing direction $\mathbf{d} = (\boldsymbol{\theta}, \boldsymbol{\phi})$, and predicts the radiance $\mathbf{c}(\mathbf{x}, \mathbf{d}) = (r, g, b)$ and volume density $\sigma(\mathbf{x})$. The color depends on both the viewing direction $\mathbf{d}$ and the position $\mathbf{x}$ to capture view-dependent effects, while the density only depends on $\mathbf{x}$ to maintain view consistency.
NeRF is typically parametrized by a multilayer perceptron (MLP) $f: (\mathbf{x}, \mathbf{d}) \rightarrow (\mathbf{c}, \sigma)$. NeRF is an emission-only model (the color of a pixel only depends on the radiance along a ray, with no other lighting factors). Therefore, according to volume rendering \cite{kajiya1984ray}, the color along the camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ that shoots from the camera center $\mathbf{o}$ in direction $\mathbf{d}$ can be calculated as: \begin{equation} \mathbf{C}(\mathbf{r}) = \int_{t_n}^{t_f}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t), \mathbf{d}) \mathrm{d}t \label{equ:render} \end{equation} where \begin{equation} T(t) = \mathrm{exp}\Big(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,\mathrm{d}s\Big) \end{equation} is the accumulated transmittance that indicates the probability that a ray travels from $t_n$ to $t$ without hitting any particle. NeRF is trained to minimize the mean-squared error (MSE) between the predicted renderings and the corresponding ground-truth colors: \begin{equation} \mathcal{L}_{\mathrm{MSE}} = \sum_{\mathbf{p} \in \mathcal{P}}\| \hat{\mathbf{C}}(\mathbf{r}_{\mathbf{p}}) - \mathbf{C}(\mathbf{r}_{\mathbf{p}}) \|_2^{2} \label{equ:mse} \end{equation} where $\mathcal{P}$ denotes all pixels of the training set images, and $\mathbf{r}_{\mathbf{p}}(t) = \mathbf{o} + t\mathbf{d}_{\mathbf{p}}$ denotes the ray shooting from the camera center to the corners (or centers in some variants \cite{barron2021mip}) of a given pixel $\mathbf{p}$. $\hat{\mathbf{C}}(\mathbf{r}_{\mathbf{p}})$ and $\mathbf{C}(\mathbf{r}_{\mathbf{p}})$ are the predicted and ground-truth color of $\mathbf{p}$, respectively. In practice, the integral in \eqnref{equ:render} is approximated by numeric quadrature that samples a finite number of points along the rays and computes the summation of radiances according to the estimated per-point transmittance. The sampling in NeRF follows a \textit{coarse-to-fine} mechanism with two MLPs, \ie the coarse network is queried on equally spaced samples, whose outputs are used to sample another group of points for a more accurate estimation, and the fine network is then queried on both groups of samples. \section{Approach} \label{sec:approach} In this section, we introduce the details of NeRF-SR{}. The overall structure is presented in \figref{fig:framework}. The supersampling strategy and the patch refinement network will be introduced in \secref{subsec:ss} and \secref{subsec:refine}, respectively. \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{figure/framework.pdf} \end{center} \caption{An overview of the proposed NeRF-SR{} that includes two components. (a) We adopt a supersampling strategy to produce super-resolution novel views from only low-resolution inputs. (b) Given a high-resolution reference at any viewpoint, from which we utilize the depth map at hand to extract relevant patches, NeRF-SR{} generates more details for the synthesized images.} \label{fig:framework} \end{figure} \subsection{Supersampling} \label{subsec:ss} NeRF optimizes a 3D radiance field by enforcing multi-view color consistency and samples rays based on camera poses and pixel locations in the training set. Although NeRF can be rendered at any resolution and retains great performance when the input images satisfy the Nyquist sampling rate, this is impossible in practice. Compared to the infinitely many possible incoming ray directions in the space, the sampling is quite sparse given the limited input image observations.
NeRF can create plausible novel views because the output resolution is the same as the input one and it relies on the interpolation property of neural networks. However, this becomes a problem when we render an image at a higher resolution than the training images; specifically, there is a gap between the training and testing phases. Suppose a NeRF was trained on images of resolution $\mathrm{H} \times \mathrm{W}$; the most straightforward way to reconstruct a training image at scale factor $s$, \ie an image of resolution $s\mathrm{H} \times s\mathrm{W}$, is to sample a grid of $s^{2}$ rays in each original pixel. Obviously, not only were the sampled ray directions never seen during training, but each queried pixel also corresponds to a smaller region in the 3D space. Regarding this issue, we propose a supersampling strategy that tackles the problem of rendering SR images for NeRF. The intuition of supersampling is explained as follows and illustrated in \figref{fig:super-sampling}. \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{figure/super-sampling.pdf} \end{center} \caption{Original NeRF casts a single ray through a pixel (solid line) and performs the MSE loss directly (left), while our method (right) splits a pixel into multiple sub-pixels (dashed lines) and draws a ray for each sub-pixel; the radiances of the sub-pixels are then averaged for the MSE loss. Compared to vanilla NeRF, more 3D points in the scene can be matched and constrained in supersampling.} \label{fig:super-sampling} \end{figure} We start from the image formation process. The pixel values are mapped from scene irradiance through a \textit{camera response function} (CRF). For simplification, we assume a pinhole camera model as in NeRF and treat ISO gain and shutter speed as implicit factors. Let $\mathcal{R}(\mathbf{p})$ denote the set of all possible ray directions for pixel $\mathbf{p}$ from a training image; then: \begin{equation} \mathcal{C}(\mathbf{p}) = f(E_{\mathcal{R}(\mathbf{p})}) \end{equation} where $\mathcal{C}(\mathbf{p})$ indicates the color of $\mathbf{p}$, $f$ is the CRF, and $E$ is the incident irradiance over the area covered by $\mathbf{p}$, which is the integration of radiance over all incoming rays in $\mathcal{R}(\mathbf{p})$. Although ideally the training ray directions should be sampled from $\mathcal{R}(\mathbf{p})$, it is both computationally expensive and challenging for the network to fit this huge amount of data. Therefore, in our work, to super-resolve images at the scale of $s$, we first evenly split a pixel from the training set into an $s \times s$ grid of sub-pixels $\mathcal{S}(\mathbf{p})$. As in NeRF, we do not model the CRF and directly output the color of each sub-pixel using a multi-layer perceptron. During the training stage, ray directions for a pixel $\mathbf{p}$ will be sampled from the sub-pixels instead, denoted as $\mathcal{R}^{\prime}(\mathbf{p}) = \{\mathbf{r}_{\mathbf{j}}\:|\: \mathbf{j} \in \mathcal{S}(\mathbf{p}) \} \subset \mathcal{R}(\mathbf{p})$. At the inference stage, an $s\mathrm{H} \times s\mathrm{W}$ image can be obtained by directly rendering and organizing the sub-pixels, erasing the sampling gap between the training and testing phases. Another concern is how to perform supervision with only ground-truth images at resolution $\mathrm{H} \times \mathrm{W}$. Similar to the blind-SR problem, the degradation process from $s\mathrm{H} \times s\mathrm{W}$ is unknown and may be affected by many factors.
Inspired by the graphics pipeline, we tackle this issue by computing the radiance for the sub-pixels in $\mathcal{R}^{\prime}(\mathbf{p})$ using Equation \ref{equ:render} and then averaging them to compare with the color of $\mathbf{p}$. Thus, Equation \ref{equ:mse} can be extended as: \begin{equation} \mathcal{L}_{\mathrm{MSE}} = \sum_{\mathbf{p} \in \mathcal{P}}\Big\| \frac{1}{|\mathcal{R}^{\prime}(\mathbf{p})|}\sum_{\mathbf{r}^{\prime} \in \mathcal{R}^{\prime}(\mathbf{p})}\hat{\mathbf{C}}(\mathbf{r}^{\prime}) - \mathbf{C}(\mathbf{r}_{\mathbf{p}}) \Big\|_2^{2} \label{equ:l_mse} \end{equation} where $\mathcal{R}^{\prime}(\mathbf{p})$ is the sub-pixel grid for pixel $\mathbf{p}$, $|\mathcal{R}^{\prime}(\mathbf{p})|$ is the number of sub-pixels in $\mathcal{R}^{\prime}(\mathbf{p})$, $\mathbf{r}^{\prime}$ is the ray direction for a single sub-pixel, and $\hat{\mathbf{C}}(\mathbf{r}^{\prime})$ is the color of a sub-pixel predicted by the network. On the other hand, the LR images can be seen as downsampled from HR ones by averaging the pixel colors in a grid (we call it the ``average'' kernel). This avoids any complex assumptions on the downsampling operation and makes our method robust in various situations. To summarize, supersampling extends the original NeRF in two aspects: first, it samples ray directions from the $s \times s$ grid of sub-pixels of pixel $\mathbf{p}$ instead of a single ray direction; second, it averages the color of the sub-pixels for supervision. In computer graphics, supersampling and averaging are often used in the rendering process to handle the problem of aliasing. In our work, we show that it fully exploits the cross-view consistency introduced by NeRF to a sub-pixel level, \ie a position can be matched across multiple viewpoints. While NeRF only shoots one ray for each pixel and optimizes points along that ray, supersampling constrains more positions in the 3D space and better utilizes the multi-view information in the input images. In other words, supersampling directly optimizes a denser radiance field at training time. \begin{figure*}[htbp] \centering \includegraphics[width=1.0\linewidth]{figure/refinement.pdf} \caption{Our refinement module encodes synthesized patches $\widetilde{P}$ from images produced by supersampling and reference patches $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ from $\mathcal{I}_{\mathrm{REF}}$. The encoded features of $\mathcal{I}_{\mathrm{REF}}$ are maxpooled and concatenated with those of $\widetilde{P}$, which is then decoded to generate the refined patch. In the training phase, $\widetilde{P}$ is sampled from the synthesized SR image at the camera pose of $\mathcal{I}_{\mathrm{REF}}$ and $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ is sampled at adjacent regions. When testing, $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ is obtained via depth warping. (The input and output patches are zoomed for better illustration; zoom in to see the details on the leaves after refinement.)} \label{fig:refine} \end{figure*} \subsection{Patch-Based Refinement} \label{subsec:refine} With supersampling, the synthesized image achieves much better visual quality than vanilla NeRF. However, when the images of a scene do not have enough sub-pixel correspondences, supersampling cannot recover enough details for high-resolution synthesis. Also, there are often only a limited number of high-resolution images from which HR content is available for further improving the results.
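Before turning to the refinement stage, we give a minimal PyTorch-style sketch of the supersampling loss in \eqnref{equ:l_mse}; the renderer interface and tensor shapes are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import torch

def supersampling_loss(render_fn, sub_rays, lr_pixels, s):
    # sub_rays:  (N, s*s, 6) ray origins+directions, one s x s grid per LR pixel
    # lr_pixels: (N, 3) ground-truth LR colors
    # render_fn: maps a batch of rays to RGB colors (the NeRF MLPs + quadrature)
    n = sub_rays.shape[0]
    colors = render_fn(sub_rays.reshape(n * s * s, 6))      # (N*s*s, 3)
    colors = colors.reshape(n, s * s, 3).mean(dim=1)        # average the sub-pixels
    return ((colors - lr_pixels) ** 2).sum(dim=-1).mean()   # squared error vs. LR pixel
\end{verbatim}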
Here, we present a patch-based refinement network to recover high-frequency details that works even in the \textit{extreme} case, \ie when only one HR reference is available, as shown in \figref{fig:refine}. Our system is, however, not limited to one HR reference and can be easily extended to settings with multiple HR references. The core design consideration focuses on how to ``blend'' details of the reference image $\mathcal{I}_{\mathrm{REF}}$ into NeRF-synthesized SR images that already capture the overall structure. We adopt a patch-by-patch refinement strategy that turns an SR patch $\widetilde{P}$ into the refined patch $P$. Other than $\widetilde{P}$, the input should also include an HR patch from $\mathcal{I}_{\mathrm{REF}}$ that reveals how the objects or textures in $\widetilde{P}$ appear in high resolution. However, due to occlusion and the inaccuracy of depth estimation, multiple HR patches are required to cover the region in $\widetilde{P}$, and we use $K$ patches $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ for reference. Also, patches in $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ cover larger regions than $\widetilde{P}$ and contain less relevant information. The refinement stage aims at local detail enhancement and preserves the view-consistent structure from super-sampling with the spatial information of the depth predictions. We use a U-Net based convolutional architecture for the refinement network, which has demonstrated its efficacy in several existing novel view synthesis methods \cite{choi2019extreme, riegler2021stable, riegler2020free}. In earlier attempts, we modeled the refinement procedure as an image-to-image translation \cite{isola2017image} and found that channel-wise stacking of $\widetilde{P}$ and $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ was unable to fit the training set perfectly. Therefore, inspired by \cite{choi2019extreme, riegler2020free}, we instead encode each patch separately with an encoder consisting of seven convolutional layers. The decoder of the network takes as input the nearest-neighbor upsampled features from previous layers concatenated with both the encoded features of $\widetilde{P}$ and the maxpooled features of $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ at the same spatial resolution. All convolutional layers are followed by a ReLU activation. \topic{Training} The training of the refinement network requires SR and HR patch pairs, which are only available at the camera pose of $\mathcal{I}_{\mathrm{REF}}$. Therefore, $\widetilde{P}$ is randomly sampled from the SR image and $P$ is the patch on $\mathcal{I}_{\mathrm{REF}}$ at the same location. We apply perspective transformations to $\widetilde{P}$ and $P$ because, during testing, the input patches are mostly from different camera poses. Moreover, to account for the inaccuracy of reference patches at testing time, we sample $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ within a fixed window around $P$. In order to preserve the spatial structure of $\widetilde{P}$ while improving its quality, our objective function combines a reconstruction loss $\mathcal{L}_{rec}$ and a perceptual loss $\mathcal{L}_{per}$, where \begin{equation} \mathcal{L_\mathrm{refine}} = \mathcal{L}_{rec} + \mathcal{L}_{per} = ||\widetilde{P} - P ||_1 + \Sigma_{l}\lambda_{l}||\phi_{l}(\widetilde{P}) - \phi_{l}(P) ||_1 \end{equation} $\phi_{l}$ is a set of layers in a pretrained VGG-19 and $\lambda_{l}$ is the reciprocal of the number of neurons in layer $l$.
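A minimal sketch of this combined objective is shown below, assuming torchvision's pretrained VGG-19 as the feature extractor; the chosen layer indices are illustrative, and the mean reduction of each per-layer term plays the role of the $\lambda_{l}$ weighting (ImageNet input normalization is omitted for brevity).
\begin{verbatim}
import torch.nn.functional as F
import torchvision

vgg = torchvision.models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
feat_layers = {3, 8, 17, 26}   # hypothetical choice of VGG-19 layers (phi_l)

def refinement_loss(pred_patch, ref_patch):
    # reconstruction term (l1) + perceptual term over selected VGG features
    rec = F.l1_loss(pred_patch, ref_patch)
    per, x, y = 0.0, pred_patch, ref_patch
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in feat_layers:
            per = per + F.l1_loss(x, y)   # mean over neurons ~ lambda_l * ||.||_1
    return rec + per
\end{verbatim}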
Note that we adopt the $\mathnormal{l}_1$-norm instead of MSE in $\mathcal{L}_{rec}$ because MSE is already minimized in supersampling, and the $\mathnormal{l}_1$-norm sharpens the results. \topic{Testing} At inference time, given a patch $\widetilde{P}$ on a synthesized image $\mathcal{I}_n$, we can find a high-resolution reference patch on the reference image $\mathcal{I}_{\mathrm{REF}}$ for each pixel on $\widetilde{P}$: \begin{equation} P_{i,j}^{\mathrm{REF}} = K_{\mathrm{REF}}T(K_{n}^{-1}d_{i,j}\widetilde{P}_{i,j}) \label{equ:warp} \end{equation} where $i,j$ denotes a location on patch $\widetilde{P}$, $d$ is the estimated depth, $T$ is the transformation between camera extrinsic matrices from $\mathcal{I}_n$ to $\mathcal{I}_{\mathrm{REF}}$, and $K_{\mathrm{REF}}$ and $K_{n}$ refer to the camera intrinsic matrices of $\mathcal{I}_{\mathrm{REF}}$ and $\mathcal{I}_n$. Therefore, \eqnref{equ:warp} computes the 3D world coordinate of $i,j$ based on $d_{i,j}$ and the camera parameters, then backprojects it to a pixel on $\mathcal{I}_{\mathrm{REF}}$ and extracts the corresponding patch at that location (points that fall outside $\mathcal{I}_{\mathrm{REF}}$ are discarded). In summary, to obtain the refined $P$, we first sample $K$ patches from $\{P_{i,j}^{\mathrm{REF}}\}$ to construct the set $\{P^{\mathrm{REF}}\}_{k=1}^{K}$ and then input them together with $\widetilde{P}$ into the network. More details of the refinement network can be found in the \href{https://cwchenwang.github.io/NeRF-SR/data/supp.pdf}{supplementary material}. The training of NeRF requires correspondences of the input images in the 3D space. As long as the HR reference falls in the camera frustums of the input images, it can be easily warped to other views and bring in enough details. Therefore, our refinement network is well-suited for any NeRF-compatible dataset.
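To make the per-pixel warping in \eqnref{equ:warp} concrete, the following is a minimal sketch of how a pixel of a synthesized view is projected onto $\mathcal{I}_{\mathrm{REF}}$; the matrix shapes and the homogeneous-coordinate convention are our assumptions rather than the exact implementation.
\begin{verbatim}
import numpy as np

def warp_to_reference(i, j, depth, K_n, K_ref, T_n2ref):
    # depth:        estimated depth d_{i,j} at pixel (i, j) of the synthesized view
    # K_n, K_ref:   3x3 intrinsics of the synthesized and reference cameras
    # T_n2ref:      4x4 relative transform from the synthesized to the reference camera
    pix = np.array([j, i, 1.0])                    # homogeneous pixel coordinates
    cam = depth * (np.linalg.inv(K_n) @ pix)       # back-project into the camera frame
    ref_cam = (T_n2ref @ np.append(cam, 1.0))[:3]  # move into the reference camera frame
    uvw = K_ref @ ref_cam                          # project with reference intrinsics
    return uvw[0] / uvw[2], uvw[1] / uvw[2]        # discard if outside I_REF
\end{verbatim}
The returned location is where the corresponding HR patch is extracted from $\mathcal{I}_{\mathrm{REF}}$.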
\section{Introduction} Machine Learning (ML) applications have recently seen widespread adoption in many critical missions, as a way to deal efficiently with large-scale and noisy datasets for which human expertise cannot be used due to practical reasons. Although ML-based approaches have achieved impressive results in many data processing tasks, including classification and object recognition, they have been shown to be vulnerable to small adversarial perturbations, and thus tend to misclassify, or fail to recognize, minimally perturbed inputs. Figure~\ref{fig:adversarial-input} illustrates how an adversarial sample can be generated by adding a small perturbation, and as a result can get misclassified by a trained Neural Network (NN). \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{{adversarial-input}.png} \caption{By adding an unnoticeable perturbation to an image of "panda", an adversarial sample is created, and it was misclassified as "gibbon" by the trained network. (Image credit: ~\cite{goodfellow2015})\label{fig:adversarial-input}} \end{figure} Adversarial perturbation can be achieved either through \emph{white-box} or \emph{black-box} attacks. In the threat model of \emph{white-box} attacks, an attacker is assumed to have full knowledge of the target NN model, including the model architecture and all relevant hyperparameters. For \emph{black-box} attacks, an attacker has no access to the NN model and its associated parameters; thus, the attacker relies on generating adversarial samples using an NN model at hand (known as the \emph{attacker model}), and then uses these adversarial samples on the target NN model (known as the \emph{victim model}). White-box attacks are considered to be difficult to launch in real-world scenarios, as it is often not possible for an attacker to have access to the full information of the victim model. Thus, in this paper, we focus on \emph{black-box} attacks, which pose practical threats for many ML applications, and evaluate the strategies of generating adversarial samples (which can be used for launching black-box attacks) and their transferability to victim models. {\bf\textit{Transferability}} is the ability of an adversarial sample that is generated by a machine learning attack on a particular machine learning model (i.e., the attacker model) to be effective against a different, and potentially unknown, machine learning model (i.e., the victim model). The attacker model refers to the model used in generating the adversarial samples (i.e., malicious inputs that are modified to yield erroneous outputs while appearing unmodified to a human or an agent), whereas the target model refers to the NN model to which the adversarial samples will be transferred. There is an extensive literature on the transferability of adversarial samples and the machine learning attacks that generate them; however, these works often analyze transferability from the perspective of a specific network model~\citep{szegedy2014, goodfellow2015, papernot2016, demontis2019}. That is, they have tried to explain why transferability occurs based on the NN model properties (of a given specific target model). Hence, we say that most research has taken a \emph{model-centric} approach. In contrast, we present an {\bf \textit{attack-centric}} approach in this paper.
In the \textit{attack-centric} approach, we provide insights into why adversarial samples actually transfer by analyzing the adversarial samples generated using different machine learning attacks. A particular insight that we want to build is whether machine learning attacks and the input set have any inherent features that cause or increase the likelihood of adversarial samples transferring effectively to the victim models. In the following, we provide motivation for studying the transferability of adversarial samples and exemplify ML-based applications in which they may pose significant security and reliability threats. \subsection{Motivation for Research on Transferability of Adversarial Samples} Machine learning has become a driving force for many data-intensive innovative technologies in different domains, including (but not limited to) health care, automotive, finance, security, and predictive analytics, thanks to the widespread availability of data sources and the computational power that allows processing them in a reasonable time. However, machine learning systems may have security concerns which can be detrimental (and even life-threatening) for many application use cases. To motivate the reader regarding the importance of transferability of adversarial samples, and to demonstrate the feasibility and possible consequences of machine learning attacks, we highlight here some practical security threats that exploit the transferability of adversarial samples. \cite{thys2019} generated adversarial samples that were able to successfully hide a person from a person-detector camera which relies on a machine learning model. They showed that this kind of attack can be used to maliciously circumvent surveillance systems: intruders can sneak around undetected by holding the adversarial sample/patch, in the form of a piece of cardboard, in front of their body aimed towards the surveillance camera. Another sector that heavily relies on ML approaches, due to the high volume of data being processed, is health care. A particular example of exploiting adversarial samples in this domain is as follows. Dermatologists usually operate under a ``fee-for-service'' revenue model in which physicians get paid for procedures they perform for a patient. This has caused unethical dermatologists to apply unnecessary procedures to increase their revenue. To avoid fraud of this nature, insurance companies often rely on machine learning models that analyze patient data (e.g., dermatoscopy images) to confirm that suggested procedures are indeed necessary. According to the hypothetical scenario presented by~\cite{finlayson2018}, an attacker could generate adversarial samples composed of dermatoscopy images such that when they are analyzed with the machine learning model used by the insurance company (the victim model), it would (incorrectly) report that a suggested procedure is appropriate and necessary for the patient. For security applications that rely on audio commands (which are processed by an ML-based speech recognition system), an attacker can construct adversarial audio samples to be used in breaking into the targeted system. Such an attack, if successful, may lead to information leakage, denial of service, or the execution of unauthorized commands. The feasibility of an attack on a speech recognition system was demonstrated by~\cite{carlini2016}, who generated adversarial audio samples (called obfuscated commands) that were used in attacking Google Now's speech recognition system.
~\cite{jia2017} used the Stanford Question Answering Dataset (SQuAD) to test whether reading comprehension systems can answer questions about paragraphs that contain adversarial sentences inserted by a malicious user. These adversarial samples were automatically generated to mislead the system without changing the correct answers or misleading humans. Their results showed that the accuracy of sixteen published models drops from an average F1 score of 75\% to 36\%, and when the attacker was allowed to add ungrammatical sequences of words, the average accuracy on four of the tested models dropped further down to 7\%. As machine learning approaches find their way into many application domains, the concerns associated with the reliability and security of systems are becoming profound. While covering all application areas is out of scope for this paper, our goal is to motivate the study of the transferability of adversarial samples to better understand the mechanisms and factors that influence their effectiveness. Without loss of generality, we focus primarily on image classification as a use case to demonstrate the impact of machine learning attacks and their role in the effectiveness of transferability of adversarial samples in this paper (though the findings and insights obtained can be generalized to other use cases). \section{Related Work} The study of machine learning attacks and the transferability of adversarial samples has gained momentum, following the widespread use of Deep Neural Networks (DNNs) in many application domains. In the following, we detail the recent studies in this area, and discuss their relevance to our work. \cite{szegedy2014} studied the transferability of adversarial samples on different models that were trained using the MNIST dataset. They focused on examining why DNNs were so vulnerable to images with little perturbation. In particular, they examined non-linearity and overfitting in neural networks as the cause of DNNs' vulnerability to adversarial samples. Their experiments and methodology, however, were limited to the NN model characteristics to gain intuition on transferability. \cite{goodfellow2015} carried out a new study on the transferability of adversarial samples which was built on the previous study of~\cite{szegedy2014}. In contrast, they argued that the non-linearity of NN models actually helps to reduce the vulnerability to adversarial samples, and that the linearity of a model is what makes adversarial samples work. They further suggest that transferability is more likely when the adversarial perturbation or noise is highly aligned with the weight vector of the model. The entire analysis was based on an attack called the Fast Gradient Sign Method (FGSM), which computes the gradient of the loss function once, and then finds the minimum step size that generates the adversarial samples. Another study on transferability was conducted by~\cite{papernot2016}, in which they aimed at examining how transferability works across traditional machine learning classifiers, such as Support Vector Machines (SVMs), Decision Trees (DT), K-nearest neighbors (KNN), Logistic Regression (LR) and DNNs. Their motivation was to determine whether adversarial samples constitute a threat for a specific type or implementation of machine learning model. In other words, they wanted to analyze whether adversarial samples would transfer to any of these models; and if so, which of the classifiers (or models) are more prone to such black-box attacks.
They also examined intra-technique and cross-technique transferability across the models, and provided an in-depth explanation of why DNNs and LR were more prone to intra-technique transferability when compared to SVM, DT, and KNN. However, similar to previous studies, their analysis did not consider the possible impacts of intrinsic properties of attacks on the transferability of adversarial samples. \cite{papernot2017} extended their earlier findings by demonstrating how a black-box attack can be launched on a hosted DNN without prior knowledge of the model structure or its training dataset. The attack strategy employed consists of training a local model (i.e., a substitute/attacker model) using data synthetically generated by the adversary and labeled by the targeted DNN. They demonstrated the feasibility of this strategy to launch black-box attacks on machine learning services hosted by Amazon, Google and MetaMind. A similar study was conducted by~\cite{liu2017}, in which they assumed that the model and training process, including both the training and test datasets, are unknown to the attacker before launching the attack. \cite{demontis2019} presented a comprehensive analysis of transferability for both test-time evasion and training-time poisoning attacks. They showed that there are two main factors contributing to the success of the attack: the intrinsic adversarial vulnerability of the target model, and the complexity of the substitute model used to optimize the attack. They further defined three metrics/factors that impact transferability: i) the size of the input gradient, ii) the alignment of the input gradients of the loss function computed using the target and the substitute (attacker) models, and iii) the variability of the loss landscape. All these findings and factors, while essential, are restricted to explaining transferability from a model-centric perspective. However, our investigation is not limited to the assessment of models, but extends the analysis to various attack implementations and the adversarial samples they generate, to see whether there are underlying characteristics that contribute to increasing or decreasing the chances of transferability among NN models. \section{Machine Learning Attacks} The adversarial perturbations crafted to generate adversarial samples for fooling a trained network are referred to as machine learning attacks. The full list of machine learning attacks presented in the literature is extensive; we present the subset of attacks analyzed in this work with a brief description of their characteristics in Table~\ref{tab:attacks}. Following the categorization presented by~\cite{rauber2018}, we categorize the attacks used in this paper into two main families: i) gradient-based, and ii) decision-based attacks. Gradient-based attacks try to generate adversarial samples by finding the minimum perturbation through a gradient descent mechanism. Decision-based attacks involve the use of image processing techniques to generate adversarial samples. They are called decision-based because the algorithms rely on comparing the generated adversarial samples with the original output until misclassification occurs.
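As a concrete illustration of the gradient-based family, a minimal FGSM-style sketch in PyTorch is given below; the model, inputs, and $\epsilon$ are placeholders, and this is not meant to reproduce the exact Foolbox implementations used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    # One-step L-inf attack: move the inputs along the sign of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range
\end{verbatim}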
\begin{longtable}{| p{.25\textwidth} | p{.18\textwidth} | p{.46\textwidth}|} \hline Name of Attack & Attack Family & Short Description\\ \hline\hline Deep Fool Attack & gradient-based & It obtains minimum perturbation by approximating the model classifier with a linear classifier~\citep{moosavi2016}.\vspace{0.1cm} \\ \hline Additive Noise Attack & decision-based & Adds Gaussian or uniform noise and gradually increases the standard deviation until misprediction occurs~\citep{rauber2018}.\vspace{0.1cm}  \\ \hline Basic Iterative Attack & gradient-based & Applies a gradient with small step size and clips pixel values of intermediate results to ensure that they are in the neighborhood of the original image~\citep{kurakin2017}. \vspace{0.1cm} \\ \hline Blended Noise Attack & decision-based & Blends the input image with a uniform noise until the image is misclassified.\vspace{0.1cm}\\ \hline Blur Attack & decision-based & Finds the minimum  blur needed to turn an input image into an adversarial sample by linearly increasing the standard deviation of a Gaussian filter. \vspace{0.1cm}\\ \hline Carlini Wagner Attack & gradient-based & Generates adversarial sample by finding the smallest noise added to an image that will change the classification of the image~\citep{carlini2017}.\vspace{0.1cm}\\ \hline Contrast Reduction Attack & decision-based & Reduces the contrast of an input image by performing a line-search internally to find minimal adversarial perturbation. \vspace{0.1cm}\\ \hline Search Contrast Reduction Attack& decision-based & Reduces the contrast of an input image by performing a binary search internally to find minimal adversarial perturbation. \vspace{0.1cm}\\ \hline Decoupled Direction and Norm (DDN) Attack & gradient-based & Induces misclassifications with low L2-norm, through decoupling the direction and norm of the adversarial perturbation that is added to the image~\citep{rony2019}. The attack compensates for the slowness of Carlini Wagner attack.\vspace{0.1cm}\\ \hline Fast Gradient Sign Attack & gradient-based & Uses a one-step method that computes the gradient of the loss function with respect to the image once and then tries to find the minimum step size that will generate an adversarial sample~\citep{goodfellow2015}.\\ \hline Inversion Attack & decision-based & Creates a negative image (i.e., image complement of the original image, in which the light pixels appear dark, and vice versa) by inverting the pixel values~\citep{hosseini2017}.\vspace{0.1cm}\\ \hline Newton Fool Attack & gradient-based & Finds small adversarial perturbation on an input image by significantly reducing the confidence probability~\citep{jang2017}.\vspace{0.1cm}\\ \hline Projected Gradient Descent Attack & gradient-based & Attempts to find the perturbation that maximizes the loss of a model (using gradient descent)  on an input. 
It is ensured that the size of the perturbation is kept smaller than the specified error by clipping the samples generated~\citep{madry2017}.\vspace{0.1cm}\\ \hline Salt and Pepper Noise Attack & decision-based & Involves adding salt and pepper noise to an image in each iteration until the image is misclassified, while keeping the perturbation size within the specified epsilon $\epsilon$.\vspace{0.1cm}\\ \hline Virtual Adversarial Attack & gradient-based & Calculates an untargeted adversarial perturbation by performing an approximated second-order optimization step on the Kullback–Leibler divergence between the unperturbed predictions and the predictions for the adversarial perturbation~\citep{miyato2015}. \vspace{0.1cm}\\ \hline Sparse Descent Attack & gradient-based & A version of the basic iterative method that minimizes the L1 distance. \vspace{0.1cm}\\ \hline Spatial Attack & decision-based & Relies on spatially chosen rotations, translations, and scaling~\citep{engstrom2019}.\vspace{0.1cm}\\ \hline \hline \caption{The machine learning attacks used in this work.} \label{tab:attacks} \end{longtable} \section{Methodology} In the following, we detail the Convolutional Neural Network (CNN) models, infrastructure and tools used in the evaluation, as well as the procedure employed in carrying out the experiments. \subsection{Infrastructure and Tools} To build, train and test the CNNs that we use in our evaluation, we rely on PyTorch and TorchVision. We also use Foolbox~\citep{rauber2018}, a Python library for generating adversarial samples. It provides reference implementations for many of the published adversarial attacks, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. We use Python version 3.7.3 on Jupyter Notebook. We run our experiments on Google Colab, which provides an interactive environment for writing and executing Python code. It is similar to a Jupyter notebook, but rather than being installed locally, it is hosted in the cloud. It is heavily customized for data science workloads, as it contains most of the core libraries used in data science/machine learning research. We used this environment in training the neural networks as it provides a large memory capacity and access to GPUs, thereby reducing the training time. \subsection{CNNs Used in This Study} Here, we provide a brief description and details of the CNNs used in this work. Note that a particular CNN may be in one of two roles, namely it can be either an attacker model (on which the adversarial samples are generated), or a victim model (to which the adversarial samples are transferred in an attack). {\bf LeNet:} It is a simple, yet popular CNN architecture that was first introduced in 1995 but came to the limelight in 1998 after it demonstrated success in the handwritten digit recognition task~\citep{lecun1998}. The LeNet architecture used for this work is slightly modified to train on the CIFAR-10 dataset (instead of MNIST). {\bf AlexNet:} It is an advanced form of the LeNet architecture, with a depth of 8 layers. It showed groundbreaking results in the 2012 ILSVRC competition by reducing the error rate from 25.8\% to 16.4\% on the ImageNet dataset with about 60 million trainable parameters~\citep{krizhevsky2017}. It also uses different optimization techniques such as dropout, activation functions and Local Response (LR) normalization. Since LR normalization has shown minimal (if any) contribution in practice, it was not included in the AlexNet model trained for this work.
Aside from the increase in the depth of the network, another difference between the LeNet and AlexNet models trained in this work is that AlexNet has a dropout layer added to it. {\bf Vgg-11:} It was introduced by~\cite{simonyan2015} to improve the image classification accuracy on the ImageNet dataset. Compared to LeNet and AlexNet, Vgg-11 has an increased network depth, and it made use of small ($3 \times 3$) convolutional filters. The architecture secured second place at the ILSVRC 2014 competition after reducing the error rate on the ImageNet dataset down to 7.3\%. Hence, the architecture is an improvement over AlexNet. There are different variants of Vgg: Vgg-11, 13, 16 and 19; only Vgg-11 is used in this paper. In addition to being deeper than the AlexNet architecture, the Vgg-11 used in this work also introduces batch normalization. Table~\ref{tab:cnn-models} summarizes the major features of these three CNN models. We choose these models to evaluate how machine learning attacks and the corresponding adversarial samples generated respond to them. \begin{longtable}{| p{.08\textwidth} | p{.072\textwidth} | p{.12\textwidth}| p{.109\textwidth} | p{.125\textwidth} | p{.065\textwidth} | p{.12\textwidth} | p{.1\textwidth}|} \hline CNN& \# Conv. Layers&\# Inner activation func., type&Output activation func.& \# Pooling Layers, type& \# FC Layers&\# Dropout Layers (rate)&\# BatchNorm Layers \vspace{0.1cm}\\ \hline LeNet&2&4, ReLU &Softmax& 2, maxpool& 3 &None & None \vspace{0.1cm}\\ \hline AlexNet&5&7, ReLU&Softmax &3, maxpool& 3 & 2 (0.5)& None \vspace{0.1cm}\\ \hline Vgg-11& 8&8, ReLU&Softmax&4, maxpool& 3 & 2 (0.5) & 8 \vspace{0.1cm}\\ \hline \hline \caption{Features of the CNN models used in this paper.} \label{tab:cnn-models} \end{longtable} \subsection{Data Processing and Training} {\bf Dataset:} We used the CIFAR-10 dataset~\citep{Krizhevsky2009} for our analysis, since it is arguably one of the most widely used datasets in the field of image processing and computer vision research. It contains 60,000 images, each of which belongs to one of ten classes. The training dataset contains 45,000 images, the validation dataset has 500 images, whereas the testing dataset contains 10,000 images. To generate adversarial samples, 500 images are selected from the testing dataset (50 images picked from each class to have a balanced dataset). \noindent {\bf Preprocessing:} At the very beginning, we performed training transformations, including random rotation, random horizontal flip, random cropping, converting the dataset to tensors, and normalization. Likewise, we performed test transformations, including converting the dataset to tensors and normalizing it. Random rotation and horizontal flip introduce complexity to the input data, which helps the model to learn in a more robust way. It is necessary to convert inputs to tensors because PyTorch works with tensor objects. The three channels are normalized (dividing by 255) to increase learning accuracy. The final step of data pre-processing was forming batches of size 256 and creating data loaders for the training and validation data (256 images are loaded in each iteration during training and validation). We choose a batch size of 256 as it is large enough to make the training faster. \noindent {\bf Training:} For the training, we first created the network model, which comprises feature extraction, classification and forward propagation. In each epoch, we calculated the training loss, training accuracy, validation loss and validation accuracy.
To perform training, we specified the following parameters for the train function: model, training iterator, optimizer (Adam) and criterion (cross-entropy loss). To perform validation, we specified the following parameters for the evaluation function: model, validation iterator, and criterion (cross-entropy loss). After completing the training phase, we saved the parameter values for the given model. \begin{longtable}{| p{.3\textwidth} | p{.2\textwidth}| p{.2\textwidth} | p{.2\textwidth} |} \hline Characteristics & LeNet & AlexNet & Vgg-11 \vspace{0.1cm}\\ \hline \hline Epoch number & 25 & 25 & 10 \vspace{0.1cm}\\ \hline Training loss & 0.953 & 0.631 & 0.244 \vspace{0.1cm}\\ \hline Validation loss & 0.956 & 0.695 & 0.468 \vspace{0.1cm}\\ \hline Training accuracy & 66.34\% & 78.34\% & 91.94\% \vspace{0.1cm}\\ \hline Validation accuracy & 66.70\% & 76.74\% & 87.11\% \vspace{0.1cm}\\ \hline Testing accuracy & 66.64\% & 76.03\% & 85.87\% \vspace{0.1cm}\\ \hline \hline \caption{Training characteristics for NN models.} \label{tab:training-characteristics} \end{longtable} The final step is the testing stage. To test the trained models, we loaded the saved model parameters, including the trained weights. Then, we checked the testing accuracy of the networks. Table~\ref{tab:training-characteristics} summarizes the training characteristics and reports the training, validation and testing accuracy obtained. \subsection{Adversarial Samples Generation} {\bf Machine learning attacks:} Table~\ref{tab:attacks} details the 17 unique machine learning attacks employed in the evaluation. However, for some of the attacks, more than one norm (L1, L2, L-infinity) is used for estimating the error ($\epsilon$), thus increasing the number of unique attack configurations evaluated to 40. For the sake of brevity, we enumerate the attacks from 1 to 40 (as listed in Table~\ref{tab:attack-enumeration}), and use this enumeration as labels, instead of providing the full name and the norm used, when showing the results in the following figures.
\begin{longtable}{| p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} || p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} | } \hline Label & Attack Name & Norm & Label & Attack Name & Norm \\ \hline \hline 1& Deep Fool Attack& L-inf & 21& BSCR Attack& L2\\ \hline 2& Deep Fool Attack& L2 & 22& BSCR Attack& L-inf\\ \hline 3& Additive Gaussian Noise (AGN) Attack& L2 & 23& Linear Search Contrast Reduction (LSCR) Attack& L1\\ \hline 4& Additive Uniform Noise (AUN) Attack& L2 & 24& LSCR Attack& L2\\ \hline 5& AUN Attack& L-inf & 25& LSCR Attack& L-inf\\ \hline 6& Repeated AGN Attack& L2 & 26& Decoupled Direction and Norm Attack& L2\\ \hline 7& Repeated AUN Attack& L2 & 27& Fast Gradient Sign Attack& L1\\ \hline 8& Repeated AUN Attack& L-inf & 28& Fast Gradient Sign Attack& L2\\ \hline 9& Basic Iterative Attack& L1 & 29& Fast Gradient Sign Attack& L-inf\\ \hline 10& Basic Iterative Attack& L2& 30& Inversion Attack& L1\\ \hline 11& Basic Iterative Attack& L-inf& 31& Inversion Attack& L2\\ \hline 12& Blended Uniform Noise Attack& L1 & 32& Inversion Attack& L-inf\\ \hline 13& Blended Uniform Noise Attack& L2 & 33& Newton Fool Attack& L2\\ \hline 14& Blended Uniform Noise Attack& L-inf & 34& Projected Gradient Descent Attack& L1\\ \hline 15& Blur Attack& L1 & 35& Projected Gradient Descent Attack& L2\\ \hline 16& Blur Attack& L2 & 36& Projected Gradient Descent Attack& L-inf\\ \hline 17& Blur Attack& L-inf & 37& Salt and Pepper Attack& L2\\ \hline 18& Carlini Wagner Attack& L2 & 38& Sparse Descent Attack& L1\\ \hline 19& Contrast Reduction Attack& L2 & 39& Virtual Adversarial Attack& L2\\ \hline 20& Binary Search Contrast Reduction (BSCR) Attack& L1 & 40& Spatial Attack& N/A\\ \hline \caption{Labels of attacks and norms used to generate adversarial samples.} \label{tab:attack-enumeration} \end{longtable} {\bf Adversarial Sample Formulation:} Given a classification function $f(x)$, class $C_x$, adversarial classification function $f(x\prime)$, distance $D(x, x\prime)$ and epsilon $\epsilon$ (the maximum allowable perturbation, or error), an adversarial sample $x\prime$ can be mathematically expressed as: \[ f(x)\; = \;C_x \land f(x\prime)\;\neq\;C_x \land D(x,x\prime) \leq \epsilon. \] To craft adversarial samples via Foolbox~\citep{rauber2018}, we need to specify a criterion that defines the impact of the adversarial action (misclassification in our case), and a distance measure that defines the size of a perturbation (i.e., L1-norm, L2-norm, and/or L-inf, which must be less than the specified $\epsilon$). Then, these are taken into consideration in an attacker model to generate an adversarial sample. The following equation shows the general distance formula. Depending on the value of $p$, the L1, L2 or L-inf norm is obtained. \[ ||x - \hat{x}||_p \; = \; \Big(\; \sum_{i=1}^{d} | x_i - \hat{x}_i|^p \;\Big)^{1/p} \] We picked the value of epsilon as 1.0, since it allows generating a significant number of adversarial samples for all the attack methods used. Because it takes a lot of time to generate adversarial samples using the attack algorithms, we used 500 balanced inputs (i.e., 50 images from each of the 10 classes) from the test data. To demonstrate how well adversarial samples transfer, we use a confusion matrix as a visual guide. In a given confusion matrix, each row represents instances in a predicted class, whereas each column represents instances in a true/actual class to which a given input belongs.
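Such a matrix can be assembled directly from the victim model's predictions; a minimal sketch is given below (the array names and number of classes are illustrative).
\begin{verbatim}
import numpy as np

def confusion_matrix(true_labels, predicted_labels, num_classes=10):
    # Rows are predicted classes, columns are true classes (as in our figures).
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[p, t] += 1
    return cm
\end{verbatim}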
The diagonal of the confusion matrix shows the number of samples of each class that were correctly predicted after an attack is launched. For example, Figure~\ref{fig:confusion-linf} shows a confusion matrix of adversarial samples generated by using the Deep Fool attack (with L-inf norm) on LeNet. It has all-zero entries on the diagonal, which means that the inputs (i.e., adversarial samples) were misclassified in all classes. This implies that the attack that generated the adversarial samples is very powerful, since they were all misclassified. On the other hand, Figure~\ref{fig:confusion-l2} shows a confusion matrix of adversarial samples generated by using the Additive Gaussian Noise attack (with L2 norm) on LeNet. In this confusion matrix, however, the diagonal has non-zero, larger positive entries, which illustrates that the attack used in generating the adversarial samples is less powerful, leading to many of the samples being correctly classified. \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{{confusion-linf}.png} \caption{Confusion matrix of adversarial samples generated using Deep Fool attack with L-inf norm on LeNet. \label{fig:confusion-linf}} \vspace{-0.2cm} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{{confusion-l2}.png} \caption{Confusion matrix of adversarial samples generated using Additive Gaussian Noise attack with L2 norm on LeNet.\label{fig:confusion-l2}} \vspace{-0.2cm} \end{figure} \subsection{Experimental Procedure} Here, we describe the procedure for performing the analysis and generating the results shown in the Evaluation. First, the adversarial samples are generated by using the attack method and the original dataset on an attacker model (which can be one of LeNet, AlexNet, or Vgg-11 in any given scenario). Once the adversarial samples are generated on the attacker model, they are used on the victim models (which can be one of LeNet, AlexNet or Vgg-11). Then, the statistics regarding the number of mispredictions, as well as their prediction classes, are collected. We also calculate the Structural Similarity Index Measure (SSIM) between the adversarial samples and the original sample to compare how visually similar they are (the SSIM value ranges from 0 to 1; a higher value indicates more similarity). This measure has been reported in the literature to correlate better with human perception than the Mean Absolute Distance (MAD). Hence, it serves as a metric for estimating how much the perturbed (adversarial) and the original images differ visually. \section{Evaluation} We obtained three kinds of results using adversarial samples generated on attacker models: i) the number of mispredictions when adversarial samples are used on victim models; ii) the classes to which the (mis)predictions belong when adversarial samples are used on victim models; and iii) the SSIM value between original and adversarial samples. We used these results to assess the effectiveness of the attacks used in generating adversarial samples. This assessment led us to identify four main factors that contribute immensely towards the transferability of adversarial samples. In the following, we discuss these factors and provide the results obtained to back up our findings for each factor's implication. \subsection{Factor 1: The attack itself} We observed that some of the attacks used in generating adversarial samples are just more powerful than others (regardless of the victim model).
That is, the adversarial samples generated by these attacks are easily transferable, hence leading to a high number of mispredictions on the target model.
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{attacks}.png} \caption{Average number of mispredictions for adversarial samples transferred to the LeNet, AlexNet and Vgg-11. \label{fig:attacks}} \end{figure}
Figure~\ref{fig:attacks} shows that the attacks with labels 1, 5, 8, 11, 14, 17, 25, 29, 32, 36, and 40 have a higher number of mispredictions when adversarial samples are used on victim models. Hence, those attacks are more powerful. Further, attacks with labels 11, 29 and 36 appear to have the highest number of mispredictions (on any victim model). This result shows that the transferability of an adversarial sample highly depends on the attack that generated the given adversarial sample.
\subsection{Factor 2: Norm Used in the Attack}
We observed that a particular attack using different norms to generate adversarial samples yielded varying degrees of transferability. In general, the attacks that use L-inf tend to produce adversarial samples that exhibit a higher number of mispredictions compared to attacks using L2 and L1. Figures~\ref{fig:lenet-attacker-distances},~\ref{fig:alexnet-attacker-distances} and \ref{fig:vgg11-attacker-distances} show results for attacks that use different norms when generating adversarial samples. In particular, Figure~\ref{fig:lenet-attacker-distances} shows the average number of mispredictions per attack for adversarial samples that are generated on LeNet. Among the attacks, Deep Fool, AUN and RAUN are implemented using only the L-inf and L2 norms, whereas the rest have implementations for the L1, L2 and L-inf norms. Clearly, the adversarial samples generated with the L-inf norm have a stronger ability to transfer, compared to the ones generated with the L1 and L2 norms. Likewise, Figures~\ref{fig:alexnet-attacker-distances} and~\ref{fig:vgg11-attacker-distances} show the average number of mispredictions per attack for adversarial samples that are generated on AlexNet and Vgg-11, respectively. The findings are consistent among the victim models, indicating that the norm used for a given attack has a significant impact on the transferability of adversarial samples.
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{lenet-attacker-distances}.png} \caption{Average number of mispredictions per attack for adversarial samples generated on LeNet. \label{fig:lenet-attacker-distances}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{alexnet-attacker-distances}.png} \caption{Average number of mispredictions per attack for adversarial samples generated on AlexNet. \label{fig:alexnet-attacker-distances}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{vgg11-attacker-distances}.png} \caption{Average number of mispredictions per attack for adversarial samples generated on Vgg-11. \label{fig:vgg11-attacker-distances}} \end{figure}
While the L-inf norm yields adversarial samples that transfer better compared to other norms, it should be noted that the disturbance made to the input sample may become more pronounced. Comparing SSIM values of adversarial samples generated by using different norms shows that L-inf always produces significantly perturbed samples.
In Figure~\ref{fig:ssim}, the ranges for SSIM values are labeled as: Excellent = ( 0.75 $\leq$ SSIM $\leq$ 1.0 ), Good = ( 0.55 $\leq$ SSIM $\leq$ 0.74 ), Poor = (0.35 $\leq$ SSIM $\leq$ 0.54), and Bad = (0.00 $\leq$ SSIM $\leq$ 0.34). We observed that many of the adversarial samples generated with the L-inf norm have lower SSIM values, indicating that the perturbations made may be perceptible to humans. Therefore, checking the SSIM values can be used to gauge the effectiveness of a given attack. Although an attack aims to maximize the number of mispredictions, it should be considered stronger if it can keep the SSIM high while yielding a high number of mispredictions at the same time.
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{ssim}.png} \caption{SSIM values for adversarial samples generated on AlexNet. \label{fig:ssim}} \end{figure}
\subsection{Factor 3: Closeness of the Target Model to the Attacker Model}
Not surprisingly, we observed that adversarial samples yielded a higher number of mispredictions for the models on which they were generated (i.e., the case in which attacker and victim models are the same). For example, adversarial samples generated on AlexNet lead to a higher number of mispredictions when these samples are used on AlexNet, or on a closer model (e.g., a variation of AlexNet). However, when these adversarial samples are used on other (or dissimilar) victim models, they lead to a comparably lower number of mispredictions. These findings are shown in Figures~\ref{fig:lenet-attacker-model},~\ref{fig:alexnet-attacker-model} and \ref{fig:vgg11-attacker-model}.
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{lenet-attacker-model}.png} \caption{Number of mispredictions for adversarial samples that are generated on LeNet.\label{fig:lenet-attacker-model}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{alexnet-attacker-model}.png} \caption{Number of mispredictions for adversarial samples that are generated on AlexNet. \label{fig:alexnet-attacker-model}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{vgg11-attacker-model}.png} \caption{Number of mispredictions for adversarial samples that are generated on Vgg-11.\label{fig:vgg11-attacker-model}} \end{figure}
The implication of this factor is that if an attacker can generate adversarial samples on a model that is similar to the victim models, then the probability that the generated adversarial samples transfer effectively is higher. This methodology can be used by industry experts to test how well adversarial samples can transfer to their ML models. One way to exploit this observation for security-critical applications is to build multiple ML models that are dissimilar in structure but provide similar prediction accuracy, and then use a majority vote (or a similar scheme) to decide what the proper prediction should be. If a particular attack transfers and is effective on one of the ML models, it is very likely (as evident from the analysis) that the other, dissimilar ML models would be less sensitive to the same attack, providing a way to detect the anomaly and avoid the undesired consequences of adversarial samples. Building ML models that are different in structure but yield similar accuracy would be an active research direction, useful not just for security-related concerns, but also for reliability, power management, performance and scalability.
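A minimal sketch of such a majority-vote scheme over structurally dissimilar models is given below; the model handles, the abstention rule and the use of PyTorch are illustrative assumptions rather than a prescribed implementation.
\begin{verbatim}
# Sketch: majority vote over structurally dissimilar models; lack of a
# strict majority can additionally be flagged as a potential adversarial
# input (names and models are assumptions).
import torch

def ensemble_predict(models, x):
    votes = torch.stack([m(x).argmax(dim=1) for m in models])   # (M, N)
    majority, _ = torch.mode(votes, dim=0)                      # per-sample vote
    agree = (votes == majority.unsqueeze(0)).sum(dim=0)
    suspicious = agree < (len(models) // 2 + 1)   # no strict majority: flag
    return majority, suspicious

pred, flag = ensemble_predict([lenet, alexnet, vgg11], inputs)
\end{verbatim}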
\subsection{Factor 4: Sensitivity of an Input}
The inherent sensitivity of an input to a particular attack can determine the strength of the resulting adversarial sample and how well it transfers to a victim model. We can summarize our observations about the sensitivity of the inputs used in the attacks as follows. \begin{enumerate} \item Some inputs are very sensitive to almost any attack, thus the adversarial samples generated for them can effectively transfer to victim models (e.g., input images with index 477, 479, 480 and 481 in Figure~\ref{fig:vgg11-misprediction}). \item Some inputs are insensitive to attacks, thus the adversarial samples generated are ineffective and cannot get mispredicted, regardless of the victim model (e.g., input images with index 481, 484, 494 in Figure~\ref{fig:vgg11-misprediction}). \item Some inputs are sensitive to specific attacks on a particular victim model, meaning the adversarial samples become effective when they are generated by a particular subset of attacks, targeting a particular model (but not effective when used on other models). For example, the input images with index 465 and 467 in Figure~\ref{fig:vgg11-misprediction} become more sensitive (thus the corresponding adversarial samples are more effective) when they are transferred to the LeNet and AlexNet models, respectively (but not to other models). \end{enumerate}
\begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{vgg11_models_last40_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on Vgg-11 as an attacker model (zoomed in to see the last 40 input images). \label{fig:vgg11-misprediction}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{lenet_models_total_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on LeNet as an attacker model. \label{fig:lenet-misprediction-all}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{alexnet_models_total_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on AlexNet as an attacker model. \label{fig:alexnet-misprediction-all}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{vgg11_models_total_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on Vgg-11 as an attacker model. \label{fig:vgg11-misprediction-all}} \end{figure}
\begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{{collective-histogram}.png} \caption{Histogram that summarizes the sensitivity of inputs to attacks. The x-axis indicates the number of effective attacks for a given input (i.e., the generated adversarial sample would transfer to the victim model successfully regardless of the attacker model), and the y-axis indicates the number of inputs whose adversarial samples (generated by a set of attacks) would transfer effectively to the victim models. \label{fig:collective-histogram} } \end{figure}
Figure~\ref{fig:vgg11-misprediction} shows the number of effective attacks used to generate adversarial samples on Vgg-11.
For better visibility, only the last 40 input images (out of 500) are shown in Figure~\ref{fig:vgg11-misprediction}, where the x-axis shows the index of the input image and the y-axis shows the number of attacks that lead to misprediction of the generated adversarial samples on victim models (please see Figure~\ref{fig:vgg11-misprediction-all} for all 500 inputs used on Vgg-11). Since there are 40 attacks used to generate adversarial samples, the y-axis can be at most 40 (in which case all of the attacks yielded adversarial samples that result in misprediction). The results obtained for the complete set of 500 input images are shown in Figures~\ref{fig:alexnet-misprediction-all} and~\ref{fig:lenet-misprediction-all} for AlexNet and LeNet (as attacker models), respectively. The implication of this factor is that the inherent characteristics of the input may play a role in how effectively the generated adversarial samples transfer to victim models. When combined with the strength of an attack, some inputs that are sensitive to the given set of attacks (irrespective of the attacker model) may yield more effective adversarial samples than other inputs. Figure~\ref{fig:collective-histogram} illustrates this phenomenon. It can be seen that most of the input images are sensitive to roughly 10 attacks out of the 40 (regardless of the attacker model being used), but relatively few inputs are sensitive to all the attacks (23 input images yield adversarial samples that were mispredicted on all the victim models, regardless of the attacker model and attack used).
\section{Conclusion}
In its simplest form, \textit{transferability} can be defined as the ability of adversarial samples generated using the attacker model to be mispredicted when transferred to the victim model. We identified that most of the literature on transferability focuses on interpreting and evaluating transferability from the machine learning model perspective alone, which we refer to as the model-centric approach. In this work, we took an alternative path, which we call the attack-centric approach, that focuses on investigating machine learning attacks to interpret and evaluate how adversarial samples transfer to the victim models. For each attacker model, we generated adversarial samples that are transferred to the three victim models (i.e., LeNet, AlexNet and Vgg-11). We identified four factors that influence how well an adversarial sample would transfer. Our hope is that these factors serve as useful guidelines for researchers and practitioners in the field to mitigate the adverse impact of black-box attacks and to build more attack-resistant/secure machine learning systems. \vskip 0.2in
\section{Introduction}
Surface codes are an important class of error correcting codes in fault tolerant quantum computation. In the literature, rigorous constructions of them are always done in the case of $\mathbb{Z}_2$-vector spaces, which is reasonable because the theory of qubit quantum computation is highly successful and qubit quantum codes are still dominant in today's research. However, higher dimensional qudit quantum systems have been proved to have some advantages in fault tolerant schemes, and some numerical studies have been done using special qudit surface codes; therefore a general discussion about the basic construction of qudit surface codes would be helpful. \par The basic introduction to the general theory of qudit stabilizer and surface codes is \cite{Bombin}, where the author defines them via symplectic codes. In this article, we give a more direct construction of surface codes with arbitrary qudit dimension $D\geq{2}$, in a way similar to that of qubit surface codes in the prevailing literature, for example \cite{Projective}. We follow \cite{standard} and define a stabilizer code simply as the subspace stabilized by a subgroup $\mathcal{S}$ of the qudit Pauli group; then we use the usual CSS construction to obtain $\mathcal{S}$ from an arbitrary 2-complex as defined in \cite{Bombin}. When the 2-complex comes from a surface, we get a qudit surface code. In particular, even in arbitrary qudit dimension, there is a size theorem proved in \cite{standard} that relates the `size' of a stabilizer code to the size of its stabilizer group, which, in this article, helps us relate the size of the homology group of a 2-complex to that of its homological quantum code. This is more general than Theorem \uppercase\expandafter{\romannumeral3}.2 in \cite{Bombin}, whose proof relies on the dimension theory of vector spaces, while in the general $D$-qudit case we usually do not have a vector space but only $\mathbb{Z}_D$-modules. \par As an application, we generalize the hypermap-homology quantum code defined in \cite{Martin} to the qudit case. Both the group structure and the construction\footnote{Which means constructing a topological hypermap from a combinatorial one; see \cite{Martin}, or my article at https://arxiv.org/abs/2105.01608, for more details. } of a topological hypermap rely heavily on the orientability of surfaces, but $\mathbb{Z}_2$ homology eliminates this reliance, so that qubit hypermap codes can actually be constructed without the group structure and even on non-orientable surfaces. It is only the general $D$-qudit hypermap codes that fully reflect the beautiful orientation-related structure rooted in topological hypermaps. However, in this article, we do not build hypermap quantum codes from topological hypermaps as Martin does, but define them directly from combinatorial hypermaps, which makes statements more convenient and rigorous at the sacrifice of losing geometric intuition. Moreover, for a given hypermap quantum code, we construct an abstract 2-complex whose topological quantum code, as defined in this article, equals it exactly. This was motivated by the work of Pradeep Sarvepalli \cite{Pradeep}, which shows that any (canonical) hypermap quantum code equals a surface code that can be built directly upon its underlying surface.
\section{ Qudit systems of dimension \(D^n\)}
A qudit is a finite dimensional quantum system of dimension $D\geq2$.
As in the qubit case, two operators $X$ and $Z$ act on a single qudit, and they are defined as\footnote{In some other papers like \cite{standard}, $X$ is defined as the adjoint $X^\dag$ of ours. }: \begin{align} X & =\sum_{j\in \mathbb{Z}_D}{|j+1\rangle\langle{j}|} \\ Z & =\sum_{j\in \mathbb{Z}_D}{\omega^j |j\rangle\langle{j}|} \end{align} where $\omega=e^{2\pi{i}/D}$ and $\{|j\rangle\mid j\in {\mathbb{Z}_D}\}$ is an orthonormal basis for the qudit Hilbert space $\mathcal{H}$; also, the addition of integers in equation (1) is modulo \(D\). From the above equations, we have \(ZX=\omega{XZ}\) and $X^D=Z^D=1$. As with the qubit Hadamard gate, there is a so-called \emph{Fourier gate} which maps the $\omega^k$-eigenvector \(|k\rangle\) of $Z$ to an $\omega^k$-eigenvector $|H_k\rangle$ of $X$, with \begin{equation} |H_k\rangle=\frac{1}{\sqrt{D}}\sum_{j}\omega^{-jk}|j\rangle. \end{equation} For n qudits, the Hilbert space is denoted by $\mathcal{H}_n$ and we have \begin{equation} \mathcal{H}_n=\bigotimes_{i=1}^{n}\mathcal{H} \end{equation} with a canonical basis given by the tensor products of the \(|j\rangle\). Denoting by $X_i$ and $Z_i$ the corresponding $X$, $Z$ operators acting on the $i$-th qudit, we call expressions of the form \cite{standard} \begin{equation} \omega^\lambda{X^\mathbf{x}Z^\mathbf{z}}=\omega^\lambda{X_1^{x_1}Z_1^{z_1}}\otimes{X_2^{x_2}Z_2^{z_2}}\otimes{\cdots}{\otimes}X_n^{x_n}Z_n^{z_n} \end{equation} the \emph{Pauli products}, where \(\lambda\) is an integer and the n-tuples $\mathbf{x}=(x_1,x_2,\cdots,x_n)$, $\mathbf{z}=(z_1,z_2,\cdots,z_n)$ belong to \(\mathbb{Z}_D^n\). These Pauli products are closed under multiplication and form the \emph{Pauli group} \(\mathcal{P}_n\). An n-qudit \emph{stabilizer code} is a subspace $\mathcal{C}$ of $\mathcal{H}_n$ together with a subgroup \(\mathcal{S}\) of \(\mathcal{P}_n\) satisfying two conditions\footnote{In \cite{standard}, there is one more condition about the maximality of $\mathcal{S}$, which is actually not necessary in the proof of the next size theorem. }: \begin{itemize} \item For every $s$ in $\mathcal{S}$ and every \(|\phi\rangle\) in $\mathcal{C}$ \begin{equation} s|\phi\rangle=|\phi\rangle \end{equation} \item $\mathcal{C}$ is maximal in the sense that any ket $|\phi\rangle\in{\mathcal{H}_n}$ that satisfies equation (6) for all $s$ in $\mathcal{S}$ lies in $\mathcal{C}$. \end{itemize} We call $\mathcal{S}$ the stabilizer of $\mathcal{C}$. For any subgroup $\mathcal{S}$, the stabilizer code \(\mathcal{C}\) always exists, but $\mathcal{S}$ must be abelian and must not contain any scalar multiplication \(e^{i\theta}I\) other than $I$ itself when $\mathcal{C}\neq{\{0\}}$\footnote{If any of these happens, we would have that for all \(|\phi\rangle\in \mathcal{C} \), \(e^{i\theta}|\phi\rangle=|\phi\rangle\) for some $e^{i\theta}\neq{1}$, which implies \(\mathcal{C}=\{0\}\).}. Unlike in the qubit case, $\mathcal{C}$ does not have to consist of `logical qudits', i.e., its dimension does not have to be \(D^k\) for some integer $k\geq{0}$; fortunately, we still have the following size theorem \cite{standard}, whose proof we omit. \begin{theorem} Let \(\mathcal{C}\) be an n-qudit stabilizer code with stabilizer \(\mathcal{S}\) which does not contain any scalar multiplication other than the identity\footnote{This implies that $\mathcal{S}$ is abelian, but the converse is not true. Also, this was not stated in the original paper \cite{standard}, although the proof relies on it. }.
Then \begin{equation} K\times|\mathcal{S}|=D^n, \end{equation} where $K$ is the dimension of $\mathcal{C}$, \(|\mathcal{S}|\) is the size\footnote{Size means the cardinality of the set $\mathcal{S}$.} of the stabilizer group $\mathcal{S}$ and $D$ is the dimension of the Hilbert space of one carrier qudit. \end{theorem}
\section{2-complexes and Qudit surface code }
To construct surface codes, unlike in the qubit case, the orientation of the underlying 2-complex now matters, so we adopt the definition of 2-complex used in \cite{Bombin}. An oriented graph is a graph with an orientation added to each edge. From a combinatorial point of view, an oriented graph consists of a set of vertices $V$, a set of edges $E$, and two incidence functions \(I_s,I_t : E\rightarrow V\) which we call \emph{source} and \emph{target}, and we say an edge \(e\) goes or points from $I_s(e)$ to $I_t(e)$. In addition, there is also the set of `inverse edges' $E^{-1}=\{e^{-1}\mid e\in{E}\}$. We define $(e^{-1})^{-1}\mathrel{\mathop:}={e}$ and $I_s(e^{-1})=I_t(e)$, $I_t(e^{-1})=I_s(e)$, which allows the inverse operation and the functions $I_s,I_t$ to be extended to the whole set $\bar{E}=E\cup{E^{-1}}$. Now we can define the concept of a closed walk. First, an n-tuple of extended edges is $(e_0,e_1,\cdots,e_{n-1})$ where $e_i\in \bar{E}$, with indices \(i\in \mathbb{Z}_n\), satisfying $I_t(e_i)=I_s(e_{i+1})$. Then a \emph{closed walk of length n} is an equivalence class of these n-tuples under the equivalence relation generated by cyclic permutations, i.e., \((e_0,e_1,\cdots,e_{n-1})\sim(\Tilde{e}_0,\Tilde{e}_1,\cdots,\Tilde{e}_{n-1})\Leftrightarrow e_{i+k}=\Tilde{e}_i\) for some $k\in \mathbb{Z}_n$, and we denote the class of $(e_0,e_1,\cdots,e_{n-1})$ by \begin{equation} \omega=[e_0,e_1,\cdots,e_{n-1}] \end{equation} which has a well defined inverse \begin{equation} \omega^{-1}\mathrel{\mathop:}=[\epsilon_0,\epsilon_1,\cdots,\epsilon_{n-1}] \end{equation} with $\epsilon_i=e_{n-1-i}^{-1}$. The 2-dimensional generalization of graphs is 2-complexes. Combinatorially, an \emph{oriented 2-complex} is a graph \(\Gamma=(V,E,I_s,I_t)\) with a set of faces $F$ plus a function $B:F\rightarrow{W_\Gamma}$, which comes from the gluing map of a 2-cell along its boundary in algebraic topology, where $W_\Gamma$ is the set of all closed walks. Similarly, we extend $F$ to $\bar{F}=F\cup{F^{-1}}$, together with the domain of the inverse operation and the gluing map $B$: \begin{equation} B(f^{-1})=B(f)^{-1}, \quad\forall{f\in \bar{F}}. \end{equation} Intuitively speaking, a face \(f\in F\) is a closed disk with a normal vector field which gives its orientation; then an induced orientation of its boundary circle is also given, say, counterclockwise around the normal vector field. When the face is attached to a graph, this orientation of the boundary circle determines the orientation of the closed walk. As is said in \cite{Bombin}, the combinatorial definition of 2-complexes leaves out the possibility of gluing 2-cells into a single point, but is more than enough to define the surface codes.
Every compact surface has a finite cell division and can be combinatorially represented by a 2-complex; in particular, when the surface is closed, it can consist of a vertex $v$, $g$ ($g>{0}$) edges \(\{a_1,a_2,\cdots,a_g\}\) and a face $f$ with\footnote{By equation (11), we understand that there exists an intrinsic index set $\mathbb{Z}_{2g}$ such that $(a_1,a_1,\cdots, a_g,a_g)=(e_1,e_2,\cdots,e_{2g})$ and \(I_t(e_i)=I_s(e_{i+1})\), thus \([a_1,a_1,\cdots, a_g,a_g]\) means \([e_1,e_2,\cdots,e_{2g}]\); the same goes for equation (12). } \begin{equation} B(f)=[a_1,a_1,\cdots, a_g,a_g] \end{equation} if the surface is non-orientable, and of a vertex $v$, $2g$ ($g>0$) edges \(\{a_1,b_1,\cdots,a_g,b_g\}\) and a face $f$ with \begin{equation} B(f)=[a_1,b_1,a_1^{-1},b_1^{-1},\cdots, a_g,b_g,a_g^{-1},b_g^{-1}] \end{equation} if the surface is orientable but not a sphere. In both cases, we say the surface has genus $g$, which is predetermined by its homeomorphism class. For a sphere, the genus is defined to be \(g\mathrel{\mathop:}=0\), and it has a 2-complex representation with two vertices \(v_0,v_1\), an edge $e$ pointing from $v_0$ to $v_1$ and a face $f$ with \(B(f)=[e,e^{-1}]\). A surface has many 2-complex representations other than those given above, which may be more useful for quantum error correcting codes. On the other hand, not every 2-complex represents a surface; those that do come from a surface must satisfy the conditions of a \emph{surface 2-complex}. However, the definition of a surface 2-complex is unnecessary for our purpose and we omit it; for details, consult \cite{Bombin}. Given a 2-complex \(\Sigma=(V,E,I_s,I_t,F,B)\), we can define three $\mathbb{Z}_D$-modules \(C_0(\Sigma),C_1(\Sigma),C_2(\Sigma)\) as the free modules generated by the sets \(V\), \(E\), \(F\); for example, \(C_0(\Sigma)\) consists of all the formal sums \(r_1v_1+r_2v_2+\cdots+r_{|V|}v_{|V|}\) with $r_i\in \mathbb{Z}_D, v_i\in V$. Then a boundary operator $\partial_1: C_1(\Sigma)\rightarrow C_0(\Sigma)$ is defined to be the unique homomorphism such that \(\partial_1(e)=I_t(e)-I_s(e)\) for each $e\in E$. To define the boundary $\partial_2: C_2(\Sigma)\rightarrow C_1(\Sigma)$, first, for any closed walk \(\omega=[e_1^{\sigma_1},e_2^{\sigma_2},\cdots,e_h^{\sigma_h}],e_i\in E,\sigma_i=\pm 1\), we define \( c_\omega\mathrel{\mathop:}=\sum_{i=1}^h\sigma_i e_i \); then $\partial_2$ is the unique homomorphism such that \(\partial_2(f)=c_{B(f)}\) for any $f\in F$. Now, there is a simple but important equation \begin{equation} \partial_1\circ \partial_2=0. \end{equation} We denote by $Z_1(\Sigma)\mathrel{\mathop:}=\ker\partial_1$ the module whose elements are called \emph{cycles} and by $B_1(\Sigma)\mathrel{\mathop:}=\im\partial_2$ the module whose elements are called \emph{boundaries}. Equation (13) tells us that $B_1(\Sigma)\subset Z_1(\Sigma) $; in particular, \(B_1(\Sigma)\) is a normal subgroup of $Z_1(\Sigma)$, and we have the \emph{first homology group} $H_1(\Sigma)$ as the quotient group\footnote{We forget the scalar multiplication for the moment, but $H_1(\Sigma)$ is actually a $\mathbb{Z}_D$-module too. } \begin{equation} H_1(\Sigma)\mathrel{\mathop:}=Z_1(\Sigma)/B_1(\Sigma). \end{equation} By writing out the matrices of \(\partial_i\) under the natural bases $V,E,F$, a special kind of stabilizer codes, called homological quantum codes, can be constructed \cite{Martin}.
However, we do not want to use matrix arguments in the context of surface codes\footnote{Surface codes are a special kind of homological quantum codes.} and $\mathbb{Z}_D$-modules; instead, we introduce basic cohomology terms \cite{Bombin,Martin}, which makes things compact and geometrically insightful. First, some algebraic remarks. If $A$ is a module over a commutative ring $R$, then the set of all homomorphisms\footnote{$R$ itself is an $R$-module.} from $A$ to $R$ is an $R$-module called the \emph{dual module} of $A$ and is denoted by \(A^*\mathrel{\mathop:}=Hom_R(A,R)\). Now if $F$ is a free $R$-module with a finite basis \(X\), for each $x\in X$, let \(x^*: F\rightarrow R\) be the homomorphism given by \(x^*(y)=\delta_{xy}\) ( \(\forall y\in X\) )\footnote{\(\delta_{xy}\) denotes \(0\in R\) if \(x\neq{y}\), \(1_R\) if \(x=y\).}; then a basic fact is that \(F^*\) is a free $R$-module with basis $\{x^*\mid x\in X\}$. Denote $C^i(\Sigma)\mathrel{\mathop:}=C^*_i(\Sigma)$, and also $(c^i,c_i)\mathrel{\mathop:}=c^i(c_i)$ for any \(c^i\in C^i(\Sigma)\) and \(c_i\in C_i(\Sigma)\); the coboundary operator $\delta_{i}: C^{i-1}\rightarrow C^{i}$ ($i\in \{1,2\}$) is defined by \begin{equation} (\delta_i(c^{i-1}),c_i)\mathrel{\mathop:}=(c^{i-1},\partial_i(c_i)),\quad i=1,2. \end{equation} Then, by equation (13), we have the \emph{cochain complex} relation \begin{equation} \delta_2\circ \delta_1=0, \end{equation} along with the so-called first \emph{cohomology group} \(H^1(\Sigma)\mathrel{\mathop:}=Z^1(\Sigma)/B^1(\Sigma)\), where the \emph{cocycles} are defined by $Z^1(\Sigma)\mathrel{\mathop:}=\ker \delta_2$, and the \emph{coboundaries} by \(B^1(\Sigma)\mathrel{\mathop:}=\im \delta_1\). Now, let the star of a vertex \(v\in V\) be the set \cite{Bombin} \(\stAr(v)\mathrel{\mathop:}= \{(e,\sigma)\in E\times\{1,-1\}\mid I_t(e^\sigma)=v\}\). Then we have a geometric expression for $\delta_1$, \begin{equation} \delta_1(v^*)=\sum_{(e,\sigma)\in \stAr(v)}\sigma e^* \end{equation} which is important in the construction of surface codes. To construct a stabilizer code, we attach a qudit to each edge of a 2-complex, thus obtaining a $D^{|E|}$ dimensional Hilbert space $\mathcal{H}_{|E|}$; what we need is to find a suitable subgroup of $\mathcal{P}_{|E|}$. First, we define two sets of operators. \begin{itemize} \item \emph{Face operators}: For each face $f$, we have \(\partial_2(f)=c_{B(f)}=\sum_{i=1}^h\sigma_i e_i\), where $\sigma_i\in\{1,-1\}$, and an operator is defined by \begin{equation} B_f\mathrel{\mathop:}= \prod_{i=1}^h Z_i^{\sigma_i} \end{equation} with $Z_i$ the $Z$ operator on $e_i$'s qudit. \item \emph{Vertex operators}: For each vertex $v$, in view of equation (17), an operator is defined by \begin{equation} A_v\mathrel{\mathop:}=\prod_{(e,\sigma)\in \stAr(v)}X_e^\sigma \end{equation} with $X_e$ the \(X\) operator on $e$'s qudit. \end{itemize} Notice that we can extend the index to all edges by setting some exponents $\sigma$ equal to $0$, i.e., we can write \(B_f= \bigotimes_{i=1}^{|E|} Z_i^{\sigma_i}\) and \(A_v=\bigotimes_{i=1}^{|E|}X_i^{\sigma_i}\); thus there is an $|E|$-tuple $\mathbf{v}_f=(\sigma_1,\sigma_2,\cdots,\sigma_{|E|})$\footnote{Unlike those in equation (18), it is possible that $|\sigma_i|>1$ for some $i$, because the closed walk may intersect itself, for example in the case of a non-orientable surface. } for each face operator, and an $|E|$-tuple $\mathbf{u}_v=(\sigma'_1,\sigma'_2,\cdots,\sigma'_{|E|})$ for each vertex operator.
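Before relating these operators to the chain complex, here is a small numerical illustration (anticipating the commutation property established in the next lemma): for the 2-complex with two vertices $v_0,v_1$, two edges $e_1,e_2$ going from $v_0$ to $v_1$ and a face $f$ with $B(f)=[e_1,e_2^{-1}]$, we have $\partial_2(f)=e_1-e_2$, $\stAr(v_1)=\{(e_1,1),(e_2,1)\}$ and $\stAr(v_0)=\{(e_1,-1),(e_2,-1)\}$; numpy and the small dimension $D=3$ are assumptions made purely for the check.
\begin{verbatim}
# Numerical sketch (assumes numpy, D = 3): face and vertex operators for the
# toy 2-complex described above, built from equations (1)-(2).
import numpy as np

D = 3
w = np.exp(2j * np.pi / D)
X = np.roll(np.eye(D), 1, axis=0)          # X|j> = |j+1 mod D>
Z = np.diag(w ** np.arange(D))             # Z|j> = w^j |j>
Xi = np.linalg.inv(X)                      # X^{-1}
Zi = np.linalg.inv(Z)                      # Z^{-1}

B_f  = np.kron(Z, Zi)                      # from partial_2(f) = e1 - e2
A_v1 = np.kron(X, X)                       # star(v1) = {(e1,+1), (e2,+1)}
A_v0 = np.kron(Xi, Xi)                     # star(v0) = {(e1,-1), (e2,-1)}

print(np.allclose(B_f @ A_v1, A_v1 @ B_f))   # True: the operators commute
print(np.allclose(B_f @ A_v0, A_v0 @ B_f))   # True
\end{verbatim}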
Multiplication of two face (vertex) operators $B_f,B_{f'}$ (\(A_v,A_{v'}\)) corresponds to addition of their \(|E|\)-tuples $\mathbf{v}_f+\mathbf{v}_{f'}$ (\(\mathbf{u}_v+\mathbf{u}_{v'}\)) in \(\mathbb{Z}_{D}^{|E|}\), which indicates that the subgroup \(\mathcal{B}\) ($\mathcal{A}$) of \(\mathcal{P}_{|E|}\) generated by all the face (vertex) operators $B_f$ ($A_v$) corresponds to a submodule \(r(\mathcal{B})\) ($r(\mathcal{A})$) of the free module $\mathbb{Z}_D^{|E|}$. Indeed, \(r(\mathcal{B})\) ($r(\mathcal{A})$) is simply the set of coordinates of elements in $\im\partial_2$ ($\im\delta_1$) under the basis \(\{e\mid e\in E\}\) (\(\{e^*\mid e\in E\}\)). \begin{lemma} The elements of \(\mathcal{B}\) commute with elements of \(\mathcal{A}\). \end{lemma} \begin{proof} For any $f\in F$ and $v\in V$, we have \((\delta_1(v^*),\partial_2(f))=(v^*,\partial_1\circ\partial_2(f))=0\) by equations (15) and (13), which implies that the inner product \(\mathbf{v}_f\cdot\mathbf{u}_v=0\) in $\mathbb{Z}_D^{|E|}$. If we denote $g^+\mathrel{\mathop:}=\sum_{i\in{I^+}}\sigma_i\sigma'_i$ with $I^+\mathrel{\mathop:}=\{i\in\{1,2,\cdots,|E|\}\mid \sigma_i\sigma'_i>0\}$, and $g^-\mathrel{\mathop:}=-\sum_{i\in{I^-}}\sigma_i\sigma'_i$ with $I^-\mathrel{\mathop:}=\{i\in\{1,2,\cdots,|E|\}\mid \sigma_i\sigma'_i<0\}$, we have $g^+-g^-\equiv 0\mod D$. Now, from the basic relation \(ZX=\omega{XZ}\), we have \(Z^{-1}X^{-1}=\omega{X^{-1}Z^{-1}}\), \(Z^{-1}X=\omega^{-1}{XZ^{-1}}\) and \(ZX^{-1}=\omega^{-1}{X^{-1}Z}\), which show that if we interchange $B_f$ and $A_v$, factors $\omega^{g^+}$ and $(\omega^{-1})^{g^-}$ are generated, and these together give $1$. \end{proof} Let \(\mathcal{S}\) be the subgroup generated by all $B_f$ and $A_v$; then by Lemma 2 it is abelian, so that any element $s$ of $\mathcal{S}$ can be written as \begin{equation} s=b\cdot a \end{equation} with \(b\in\mathcal{B}\) and \(a\in\mathcal{A}\), and thus cannot be a scalar multiplication other than the identity. Then by Theorem 1, the stabilizer code $\mathcal{C}$ defined by $\mathcal{S}$ has dimension $K=D^{|E|}/{|\mathcal{S}|}$, and it is called a surface code when the 2-complex comes from a surface with or without boundary. In the qubit case, it can further be shown that the number of logical qubits contained in \(\mathcal{C}\) equals the dimension of the first homology group, i.e., $\dim H_1(\Sigma)$. However, the arguments using the dimension property of vector spaces cannot be applied in the general $D$-qudit case, for a $\mathbb{Z}_D$-module may not be a vector space when $D$ is not prime. Fortunately, the next theorem shows that even for arbitrary $D$, the size of $H_1(\Sigma)$ still determines \(K\). \begin{theorem} For any 2-complex $\Sigma$, let $\mathcal{S}$ be the subgroup generated by all face and vertex operators defined by equations (18) and (19); then the dimension $K$ of its stabilizer code $\mathcal{C}$ equals the size of \(H_1(\Sigma)\), i.e., we have \begin{equation} K=|H_1(\Sigma)|. \end{equation} \end{theorem} \begin{proof} By equation (20), we have \(|\mathcal{S}|=|\mathcal{B}||\mathcal{A}|=|r(\mathcal{B})||r(\mathcal{A})|\)\footnote{Here we have also used the property that the operators of the form \(X^xZ^z\) (\(x,z\in \mathbb{Z}_D\)) form a basis of $L(\mathcal{H})$. }, so \(K=D^{|E|}/{|\mathcal{S}|}=|C_1(\Sigma)|/(|\im\partial_2||\im \delta_1|)\); thus we only need to prove \(|C_1(\Sigma)|/|\im \delta_1|=|\ker\partial_1|\), i.e., $|\ker\partial_1|\cdot |\im \delta_1|=D^{|E|}$.
Notice that if \(x\in \ker\partial_1\), then for every \(\alpha\in\im\delta_1\) there is a \(\beta\in{C^0(\Sigma)}\) such that $\alpha=\delta_1 \beta$, and we have $(\alpha,x)=(\beta,\partial_1x)=0$. On the other hand, if $y\in C_1(\Sigma)$ is such that \((\alpha,y)=0\) for all $\alpha\in\im\delta_1$, then for all \(\beta\in C^0(\Sigma)\) we have $(\beta,\partial_1y)=(\delta_1\beta,y)=0$, which means $\partial_1y=0$, i.e., $y\in\ker\partial_1$. Together, these show that the set of coordinates of the elements in $\ker\partial_1$ is the submodule \(r(\mathcal{A})^{\perp}\) of $\mathbb{Z}_D^{|E|}$. Now, Theorem 3.2 in \cite{Free} tells us that $|r(\mathcal{A})||r(\mathcal{A})^{\perp}|=D^{|E|}$, which proves our result. \end{proof} As examples, let us calculate the dimensions of the codes from the projective plane $\mathbb{P}^2$ and the torus $\mathbb{T}^2$. For $\mathbb{P}^2$, a 2-complex consists of a vertex $v$, an edge $e$ with $I_s(e)=I_t(e)=v$, and a face $f$ with $B(f)=[e,e]$. So $\partial_2(f)=e+e=2e$, and we have $\im\partial_2\simeq2\mathbb{Z}_D$. Moreover, since $\partial_1(e)=v-v=0$, we have $\ker \partial_1=C_1(\Sigma)\simeq \mathbb{Z}_D$. Therefore \(H_1(\Sigma)\simeq{\mathbb{Z}_D/{2\mathbb{Z}_D}}\), which has two elements when $D$ is even and one element when $D$ is odd. Thus by Theorem 3, we could say that the code $\mathcal{C}$ contains a (logical) qubit when $D$ is even and only a `half' qubit when $D$ is odd\footnote{Since \(H_1(\Sigma)\) only depends on the homeomorphism class of the underlying surface, these results will not change when we choose some other 2-complex representation. }. For $\mathbb{T}^2$, a 2-complex consists of a vertex $v$, two edges \(\{e_1,e_2\}\) with $I_s(e_i)=I_t(e_i)=v$, and a face $f$ with $B(f)=[e_1,e_2,e_1^{-1},e_2^{-1}]$. We have \(\partial_1(e_i)=0\) and \(\partial_2(f)=e_1+e_2-e_1-e_2=0\), which means $\im\partial_2=0$ and $\ker\partial_1=C_1(\Sigma)$. Therefore, $H_1(\Sigma)\simeq\mathbb{Z}_D^{2}$, and the code $\mathcal{C}$ contains two $D$-qudits.
\section{Qudit hypermap code }
In a general 2-complex construction, even if the 2-complex comes from an oriented surface, there seems to be no canonical way of orienting the edges, i.e., of defining the functions \(I_s,I_t\). In this section, we show that this arbitrariness can be avoided when the 2-complex comes in a certain way from a hypermap. A hypermap\footnote{More precisely, a combinatorial hypermap. } consists of a number set \(B_n=\{1,2,\cdots,n\}\) with a pair of permutations $\alpha, \sigma\in S_n$ such that the subgroup $<\alpha,\sigma>$ generated by them is transitive on \(B_n\)\footnote{`Transitive' means that for every two elements $i,j\in B_n$, there is a permutation \(\gamma\in <\alpha,\sigma>\) such that $\gamma(i)=j$.}. For each element $\gamma\in<\alpha,\sigma>$, define its orbits to be the equivalence classes of $B_n$ under the relation \(i\sim j\Leftrightarrow \exists \gamma'\in <\gamma>, \gamma'(i)=j \); then for each $i\in B_n$, there is a positive integer \(r\) so that the \(\gamma\)-orbit it belongs to is \(orb_\gamma(i)=\{i,\gamma(i),\cdots,\gamma^{r-1}(i)\}\), with \(\gamma^r(i)=i\). We call the orbits of \(\alpha\) \emph{hyperedges}, the orbits of \(\sigma\) \emph{hypervertices}, and the orbits of \(\alpha^{-1}\sigma\) \emph{faces}\footnote{For \(\alpha^{-1}\sigma\), we take the convention in \cite{Martin}, i.e., acting from left to right.}; in addition, we call the elements of $B_n$ themselves \emph{darts}.
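As a concrete illustration of these definitions, the hyperedges, hypervertices and faces of a small combinatorial hypermap can be enumerated directly as orbits; the permutations $\alpha=(1\,2)(3\,4)$ and $\sigma=(2\,3)$ used below are an arbitrary illustrative choice (one checks that $<\alpha,\sigma>$ is transitive on $B_4$), and plain Python is assumed.
\begin{verbatim}
# Sketch (plain Python): orbits of a small combinatorial hypermap; the
# permutations alpha and sigma are an arbitrary illustrative choice.
def orbits(perm, darts):
    seen, out = set(), []
    for i in darts:
        if i not in seen:
            orb, j = [], i
            while j not in orb:
                orb.append(j); seen.add(j); j = perm[j]
            out.append(orb)
    return out

darts = [1, 2, 3, 4]
alpha = {1: 2, 2: 1, 3: 4, 4: 3}               # hyperedges = alpha-orbits
sigma = {1: 1, 2: 3, 3: 2, 4: 4}               # hypervertices = sigma-orbits
alpha_inv = {v: k for k, v in alpha.items()}
face_perm = {i: sigma[alpha_inv[i]] for i in darts}  # alpha^{-1}sigma, left to right

print(orbits(alpha, darts))      # [[1, 2], [3, 4]]
print(orbits(sigma, darts))      # [[1], [2, 3], [4]]
print(orbits(face_perm, darts))  # [[1, 3, 4, 2]]
\end{verbatim}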
Also, we denote by \(e_{\owns i}\), \(v_{\owns i}\), and \(f_{\owns i}\) the hyperedge, the hypervertex, and the face that the dart $i$ belongs to. Let \(\mathcal{V},\mathcal{E},\mathcal{F}\) be the free $\mathbb{Z}_D$-modules generated by all hypervertices, hyperedges, and faces, and let $\mathcal{W}$ be the free $\mathbb{Z}_D$-module generated by all darts $B_n$. We define a homomorphism $d_2: \mathcal{F}\rightarrow\mathcal{W}$ by \(d_2(f)=\sum_{i \in f}i\), and a homomorphism $d_1: \mathcal{W}\rightarrow \mathcal{V}$ by \(d_1(i)=v_{\owns \alpha^{-1}(i)}-v_{\owns i}\); then we have \begin{lemma} \(d_1\circ d_2=0\). \end{lemma} \begin{proof} For a face $f$, we write its elements as $f=\{i_0,i_1,\cdots,i_{k-1}\}$ with subscripts $s\in\mathbb{Z}_k$ and $i_{s+1}=\alpha^{-1}\sigma(i_s)$, which implies \(v_{\owns \alpha^{-1}(i_s)}=v_{\owns i_{s+1}}\), thus \(d_1\circ d_2(f)=d_1\sum_{s\in\mathbb{Z}_k}i_s=v_{\owns \alpha^{-1}(i_0)}-v_{\owns i_0}+v_{\owns \alpha^{-1}(i_1)}-v_{\owns i_1}+\cdots+v_{\owns \alpha^{-1}(i_{k-1})}-v_{\owns i_{k-1}}=0\). \end{proof} \noindent Also, there is a homomorphism $\iota:\mathcal{E}\rightarrow\mathcal{W}$ with \(\iota(e)=\sum_{i\in e}i\), which is very similar to $d_2$, and we have \begin{lemma} \( d_1\circ\iota=0\). \end{lemma} \noindent Lemma 5 guarantees a well defined homomorphism $\Delta_1$ from the quotient module $\mathcal{W}/\iota(\mathcal{E})$ to \(\mathcal{V}\), with $\Delta_1[\omega]=d_1\omega$, where $[\omega]$ denotes the equivalence class of $\omega$. Furthermore, if we define \(\Delta_2:\mathcal{F}\rightarrow \mathcal{W}/\iota(\mathcal{E})\) by $\Delta_2=\rho \circ d_2$, where $\rho$ is the natural projection from \(\mathcal{W}\) to \(\mathcal{W}/\iota(\mathcal{E})\), we have $\Delta_1\circ \Delta_2=0$. \begin{wrapfigure}{r}{0.47\textwidth} \centering \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!0, fill=green!0, very thick, minimum size=2mm}, ] \node[squarednode] (site 1) {\(\mathcal{W}\)}; \node[squarednode] (site 2) [right=1.6 of site 1] {\(\mathcal{V}\)}; \node[squarednode] (site 3) [left=1.6 of site 1] {\(\mathcal{F}\)}; \node[squarednode] (site 4) [below=1 of site 1] {\(\mathcal{W}/\iota(\mathcal{E})\)}; \node[squarednode] (fake l) [left=0.3 of site 4]{}; \node[squarednode] (fake r) [right=0.3 of site 4]{}; \node[squarednode] (p1) [above=0.1 of fake l]{\(\Delta_2\)}; \draw[->] (site 3) -- (site 4); \node[squarednode] (p2) [above=0.1 of fake r]{\(\Delta_1\)}; \draw[->] (site 4) -- (site 2); \node[squarednode] (fake l+) [left=0.4 of site 1]{}; \node[squarednode] (fake r+) [right=0.4 of site 1]{}; \node[squarednode] (d1) [above=-0.2 of fake l+]{\(d_2\)}; \node[squarednode] (d2) [above=-0.2 of fake r+]{\(d_1\)}; \draw[->] (site 1) -- (site 2); \draw[->] (site 3) -- (site 1); \node[squarednode] (center) [below=0.3 of site 1]{}; \node[squarednode] (p) [right=-0.1 of center]{\(\rho\)}; \draw[->] (site 1) -- (site 4); \end{tikzpicture} \caption{\(\Delta_i\) are defined to make the diagram commute.} \label{jiaohuantu} \end{wrapfigure} To construct a homological quantum code, we only have to show that $\mathcal{W}/\iota(\mathcal{E})$ is a free module with a specified basis; then we can use the matrix argument in \cite{Martin} and obtain a so-called \emph{hypermap-homology} quantum code. For that, we choose a \emph{special dart} in every hyperedge and denote by $S\subset B_n$ the set of these special darts.
Then we have \begin{lemma} \(\mathcal{W}/\iota(\mathcal{E})\) is a free module with a basis \(\{[i]\mid i\in B_n\setminus S\}\). \end{lemma} \begin{proof} First, we show that this is a linearly independent set. Suppose there are $k_i\in \mathbb{Z}_D$ such that $\sum_{i\in B_n\setminus S}k_i[i]=0$; then we have $\sum_{i\in B_n\setminus S}k_ii=\sum_eR_e\iota(e)$ for some $R_e\in \mathbb{Z}_D$. If we use $s_e$ to denote the special dart in the hyperedge $e$, then the right side of the equation becomes \(\sum_eR_es_e+\sum_{i\in B_n\setminus S}h_ii\) for some $h_i\in \mathbb{Z}_D$, thus $R_e=0$ and $h_i-k_i=0$ by linear independence of the set $B_n$ in \(\mathcal{W}\), which further indicates $k_i=0$. On the other hand, for every $\omega\in\mathcal{W}$, we have some \(R_e,h_i\in\mathbb{Z}_D\) such that \begin{align*} [\omega]&=[\sum_eR_es_e+\sum_{i\in B_n\setminus S}h_ii]\\&=\sum_eR_e[s_e]+\sum_{i\in B_n\setminus S}h_i[i]\\&=\sum_eR_e(-{\sum_{i\in{e\setminus\{s_e\}}}[i]})+\sum_{i\in B_n\setminus S}h_i[i], \end{align*} which shows \(\mathcal{W}/\iota(\mathcal{E})=span\{[i]\mid i\in B_n\setminus S\}\). \end{proof} \noindent However, we will instead construct an abstract 2-complex \(\Sigma=(V,E,I_s,I_t,F,B)\) whose \emph{chain} $C_2(\Sigma)\stackrel{\partial_2}{\longrightarrow}C_1(\Sigma)\stackrel{\partial_1}{\longrightarrow}C_0(\Sigma)$ is isomorphic to the chain of hypermap homology $\mathcal{F}\stackrel{\Delta_2}{\longrightarrow}\mathcal{W}/\iota(\mathcal{E})\stackrel{\Delta_1}{\longrightarrow}\mathcal{V}$, which leads us to the situation of the previous section and, in particular, helps us avoid using matrix arguments. In order to define $\Sigma$, we let $V$ be the set of all hypervertices, $E$ be the set $B_n\setminus S$, and $F$ be the set of all faces; then, clearly, we have \(C_2(\Sigma)= \mathcal{F} \), \(C_1(\Sigma)\simeq \mathcal{W}/\iota(\mathcal{E})\), and \(C_0(\Sigma)= \mathcal{V}\). Furthermore, for every $e\in E$, which is a non-special dart, i.e., $e=i\in B_n\setminus S $, define \(I_t(e)=v_{\owns \alpha^{-1}(i)}\), \(I_s(e)=v_{\owns i}\); then we have $\partial_1(e)=I_t(e)-I_s(e)=v_{\owns \alpha^{-1}(i)}-v_{\owns i}=d_1i=\Delta_1[i]$, which means $\partial_1\simeq\Delta_1$. To define $B$, notice that for every $f\in F$, there is a positive integer $r$ such that \(f=\{i_0,i_1,\cdots,i_{r-1}\}\) with subscripts in $\mathbb{Z}_r$, satisfying \(i_{k+1}=\alpha^{-1}\sigma (i_k)\) for all $k$. Suppose that the subset consisting of all special darts in $f$ is $S_f=\{i_{k_1},i_{k_2},\cdots, i_{k_s}\}$; we have $[i_{k_t}]=[i_{k_t}-\iota(e_{\owns{i_{k_t}}})]=-\sum_{l=1}^{|e_{\owns{i_{k_t}}}|-1}[i^t_l]$, with $i^t_{l+1}=\alpha (i^t_{l})$ for all $l\in \{1,2,\cdots,|e_{\owns{i_{k_t}}}|-2 \}$, plus $\alpha(i_{k_t})=i^t_1$ and $\alpha(i^t_{|e_{\owns{i_{k_t}}}|-1})=i_{k_t}$, where $i^t_l\in B_n\setminus S$. Thus we have \begin{equation} \Delta_2(f)=\sum_{i\in f\setminus S}[i]-\sum_{t=1}^s{\sum_{l=1}^{|e_{\owns{i_{k_t}}}|-1}[i^t_l]} \end{equation} and \begin{lemma} In the r-tuple \((i_0,i_1,\cdots,i_{r-1})\) from $f$, if we replace each $i_{k_t}\in S_f $ by the tuple $\mathbf{p}_t=((i_1^t)^{-1},(i_2^t)^{-1},\cdots,(i_{|e_{\owns{i_{k_t}}}|-1}^t)^{-1})$ in \(E\cup{E^{-1}}\), then we get a closed walk \begin{equation*} [i_0,i_1,\cdots,\mathbf{p}_1,i_{k_1+1},\cdots,\mathbf{p}_s,i_{k_s+1},\cdots,i_{r-1}].
\end{equation*} \end{lemma} \begin{proof} Re-indexing the closed walk as $e_i\in E\cup{E^{-1}}$ with \(i \in \mathbb{Z}_K\), where $K$ is its length, we only need to check that $I_s(e_{i+1})=I_t(e_i)$. For example, $I_s((i^1_1)^{-1})=v_{\owns{\alpha^{-1}(i^1_1)}}=v_{\owns i_{k_1}}=v_{\owns \alpha^{-1}\sigma(i_{{k_1}-1})}=v_{\owns \alpha^{-1}(i_{{k_1}-1})}=I_t(i_{{k_1}-1})$, when \(i_{k_1-1}\notin S_f\). \end{proof} \noindent Now, if we define \(B(f)\) to be the closed walk in Lemma 7, then by equation (22) we have \(\partial_2\simeq\Delta_2\). We have shown that every hypermap code is the homological quantum code constructed from a 2-complex $\Sigma$. The most interesting observation about \(\Sigma\) is that it should be a surface 2-complex. Actually, every hypermap \((\alpha,\sigma)\) has a geometrical representation $H=(M,\Gamma)$ called a \emph{topological hypermap}, where $M$ is an oriented surface and $\Gamma$ is a bipartite graph embedded in $M$ whose edges correspond to the darts in $B_n$; the normal vector field given by $M$'s orientation determines the maps $\alpha$ and $\sigma$. Then Pradeep Sarvepalli showed in \cite{Pradeep} that we can obtain, by adding curves on $M$, an ordinary surface code which equals the original hypermap code constructed by Martin Leslie in \cite{Martin}. In our language, Pradeep's curves, together with the vertices of $\Gamma$ they connect and $M$ itself, form exactly the 2-complex $\Sigma$ we constructed, whose homological quantum code is Pradeep's surface code. A subtlety is that in \cite{Pradeep} the curves are not oriented, since that work only deals with qubit quantum codes; this can be easily fixed. However, we do not try to prove directly that $\Sigma$ is a surface 2-complex, as the argument would most likely be tedious, but instead show the simple fact that $\Sigma$ is orientable. When a 2-complex comes from an orientable surface and, moreover, the function $B$ is determined by a global normal vector field\footnote{The restriction of the global field to each face will induce an orientation of its closed walk. }, we must have \cite{Bombin} \begin{equation} \sum_{f\in F} \partial_2(f)=0. \end{equation} Let us define orientable 2-complexes to be those satisfying equation (23); then we have \begin{theorem} The 2-complex $\Sigma$ constructed above is orientable\footnote{Also, the transitivity of \(<\alpha,\sigma>\) implies the connectivity of this 2-complex; for the definition of connectivity, consult \cite{Bombin}.}. \end{theorem} \begin{proof} We only have to show that \(\sum_{f\in F}\Delta_2(f)=0\), which is correct because \(\sum_{f\in F}d_2(f)=\sum_{i\in B_n}i=\sum_{e} \iota(e)\), where the last sum is over all hyperedges. \end{proof} \section{Discussion} We did not discuss the error correcting ability of our codes in this article, but for qudit stabilizer codes the basic spirit is similar to the qubit case. For a qudit Pauli error $E$, we can also define its \emph{syndrome} (with respect to $s_i$) to be the integer $0\leq g_i\leq D-1$ such that $Es_i=\omega^{g_i}s_iE$, where \(\{s_i\}\) is a set of generators of the stabilizer group $\mathcal{S}$, and for any pair of such errors $E_i$ and $E_j$, we also have that $E_i^{\dag}E_j\in C(\mathcal{S})$\footnote{\(C(\mathcal{S})\) denotes the centralizer of \(\mathcal{S}\) in $\mathcal{P}_n$, which may not equal the normalizer $N(\mathcal{S})$ when $D>2$. } if and only if they have the same syndrome for each $s_i$.
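As a sanity check on this definition, for Pauli products the syndrome can be read off from a symplectic form: if $E=X^{\mathbf{x}}Z^{\mathbf{z}}$ and $s=X^{\mathbf{a}}Z^{\mathbf{b}}$, then $Es=\omega^{g}sE$ with $g=\mathbf{z}\cdot\mathbf{a}-\mathbf{x}\cdot\mathbf{b} \pmod D$. The following numerical sketch (numpy and the small dimension $D=3$ being assumptions made only for the check) verifies this on a two-qudit example.
\begin{verbatim}
# Sketch (assumes numpy, D = 3): syndrome of a Pauli-product error against a
# Pauli-product stabilizer generator, E s = w^g s E with g = z.a - x.b mod D.
import numpy as np

D = 3
w = np.exp(2j * np.pi / D)
X = np.roll(np.eye(D), 1, axis=0)
Z = np.diag(w ** np.arange(D))

def pauli(xs, zs):
    op = np.eye(1, dtype=complex)
    for x, z in zip(xs, zs):
        op = np.kron(op, np.linalg.matrix_power(X, x % D)
                         @ np.linalg.matrix_power(Z, z % D))
    return op

def syndrome(ex, ez, sx, sz):
    return (np.dot(ez, sx) - np.dot(ex, sz)) % D

E = pauli([1, 0], [0, 0])        # error:      X on the first qudit
s = pauli([0, 0], [1, 1])        # stabilizer: Z on both qudits
g = syndrome([1, 0], [0, 0], [0, 0], [1, 1])
print(g, np.allclose(E @ s, w ** g * (s @ E)))   # 2 True
\end{verbatim}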
Now suppose that $|\psi\rangle\in\mathcal{C}$ is the state we want to protect, and a Pauli error $E_j$ happens on it. The corrupted state $E_j|\psi\rangle$ is obviously an $\omega^{-{g_i}}$ eigenstate of $s_i$, with $g_i$ the $s_i$-syndrome of $E_j$. Thus we can obtain the syndromes of \(E_j\) by measuring the normal operators $s_i$ on $E_j|\psi\rangle$. With these data at hand, we try to produce, by some algorithm, an `error correcting operator' $E_i$ whose syndromes equal those of $E_j$. Now if $E_i^{\dag}E_j\in \mathcal{S}$, we have $E_i^{\dag}E_j|\psi\rangle=|\psi\rangle$, and the error is successfully corrected. However, if $E_i^{\dag}E_j\in C(\mathcal{S})\setminus{\mathcal{S}}$, we may induce a nontrivial linear transformation on the code space $\mathcal{C}$, and the correction algorithm fails. Theoretically, how far these algorithms can be applied to the correction of an arbitrary error in the form of a quantum operation $\mathcal{E}(\rho)=\sum_i E_i^\dag\rho E_i$ is an interesting question; even some basic facts seem to be challenging in the qudit case. For example, Theorem 10.8 in \cite{Nielsen}, which gives a basic error-correction condition for stabilizer codes, cannot be directly generalized to the qudit case, for the proof relies on the specific form of the projection operators onto the eigenspaces of elements of the qubit Pauli group. One more thing: in practice, one always encounters so-called planar codes, which are constructed on surfaces with two kinds of boundaries, namely smooth boundaries and rough boundaries. Smooth boundaries are ordinary boundaries which do not cause any problem. However, at rough boundaries, the definitions of the functions $I_s$, $I_t$, and $B$ should be slightly modified in order to fit all that we developed in Section 3, and in the calculation of $H_1(\Sigma)$ some techniques from relative homology theory may also be helpful. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} There are numerous links between probabilistic cellular automata (PCA) \cite{Louis18} and percolation (see \cref{subsec:models} for the definitions of our models of interest) \cite{Grimmett99}. In the case of additive PCA this link is very apparent, since they may be viewed as oriented percolation models (see e.g.\ \cite{Hartarsky21GOSP}). Moreover, percolation is often used as a reference model for comparison in more complex cases (see e.g.\ \cite{Marcovici19}). Our first goal will be to import a recent technique \cite{Duminil-Copin19} for proving the sharpness of phase transitions from percolation to the setting of attractive PCA. This allows us to establish that they all `die out' exponentially fast throughout their `subcritical' phase. This comes to complement a classical result of Bezuidenhout and Gray \cite{Bezuidenhout94} showing that a certain `supercritical' phase is also well-behaved. Besides PCA, our other main motivation for pursuing this result comes from bootstrap percolation (BP). We establish a correspondence between the two, so as to deduce exponential decay of the probability of remaining healthy above criticality previously conjectured for a class of BP models. This also has implications for related kinetically constrained models (KCM), taking into account previous work of the author \cite{Hartarsky21}. Finally, we show other uses of the correspondence between PCA and BP. Namely, it provides an equivalence between the non-triviality of the phase transition of certain BP models and the stability w.r.t.\ noise of certain deterministic cellular automata (CA). The former was studied recently in the framework of BP universality by Bollob\'as, Smith and Uzzell \cite{Bollobas15}, Balister, Bollob\'as, Przykucki and Smith \cite{Balister16} and Balister, Bollob\'as, Morris and Smith \cite{Balister22}, while the latter was investigated over four decades ago by Toom \cite{Toom80} and subsequently by a number of authors \cites{Gacs21,Bramson91,Lebowitz90,Gray99,Berman88,Swart22}. Bridging their viewpoints yields results in both directions. \subsection{Models} \label{subsec:models} \subsubsection{General setting} \textbf{Convention} As it is common in set systems, we will denote points with lower case letters, sets of points with upper case ones (or with Greek lower case letters), families of such sets by capital calligraphic letters, systems of such families with capital script letters and in the rare case of classes of such systems, we will use capital fraktur letters. Here and below we use the words `set', `family', `system' and `class' as synonyms, but we will reserve their usage to the corresponding levels as much as possible. Throughout, unless otherwise stated, we fix an integer dimension $d\ge 1$ and \emph{range} $r\in[1,\infty)$. We set $R=([-r,r]^{d}\times [-r,0))\cap{\ensuremath{\mathbb Z}} ^{d+1}$, so as to allow non-zero memory. Models \emph{without memory} will be defined identically, taking $R=([-r,r]^d\times\{-1\})\cap{\ensuremath{\mathbb Z}} ^{d+1}$. A \emph{configuration} is any element of $\O=\{0,1\}^{{\ensuremath{\mathbb Z}} ^{d}\times([-r,0)\cap{\ensuremath{\mathbb Z}} )}$. For any set $X$, we identify any ${\ensuremath{\eta}}\in\{0,1\}^X$ with a subset of $X$ in the natural way. An \emph{up-set} of a partially ordered set $(P,\ge)$ is a subset $U\subset P$ such that for any $(p,u)\in P\times U$ such that $p\ge u$ we have $p\in U$. We similarly define \emph{down-sets} (which are the complements of up-sets). 
Let $\ensuremath{\mathscr U} $ denote the system of all up-families of $\O_R:=\{0,1\}^R$ equipped with the partial order ${\ensuremath{\eta}}\ge\o$ if ${\ensuremath{\eta}}_x\ge \o_x$ for all $x\in R$ (that is, if ${\ensuremath{\eta}}\supset\o$). Note that $\varnothing\in\ensuremath{\mathscr U} $ and $\O_R\in\ensuremath{\mathscr U} $. We will further need to consider the class $\ensuremath{\mathfrak D} $ of down-sets of the partially ordered system $(\ensuremath{\mathscr U} ,\supset)$. An \emph{attractive PCA}\footnote{More generally, PCA are defined identically by a rates measure supported not only on $\ensuremath{\mathscr U} $, but on the entire power set of $\O_R$. However, attractiveness is essential for everything we will say, so we restrict directly to the relevant setting. See e.g.\ \cite{Salo21} for problems arising immediately without this assumption.} will be defined by the \emph{rates} $\u(\{\ensuremath{\mathcal U}\})\in[0,1]$ for $\ensuremath{\mathcal U}\in\ensuremath{\mathscr U} $. We require $\sum_{\ensuremath{\mathcal U}\in\ensuremath{\mathscr U} }\u(\{\ensuremath{\mathcal U}\})=1$ and view $\u$ as a probability measure on $\ensuremath{\mathscr U} $. We say simply \emph{attractive CA}, if $\u$ is a Dirac measure. An attractive PCA is said to be \emph{additive} if $\u(\{\ensuremath{\mathcal U}\})=0$ unless $\ensuremath{\mathcal U}$ is generated by singletons, that is, there exists $X\subset R$ (possibly empty) such that $\ensuremath{\mathcal U}=\{Y\subset R:X\cap Y\neq\varnothing\}$. We further say that it is \emph{absorbing} if $\u(\{\O_R\})=0$, which will be equivalent to saying that the $\ensuremath{\mathbf 0} $ configuration (which we identified with the set $\varnothing$) is an absorbing state. Hence, attractive PCA identify with a finite dimensional simplex equipped with the standard topology (that is, the topology of weak convergence of the corresponding $\u$ measures) and similarly for additive, absorbing attractive and absorbing additive ones. Given the rates $\u$, we can construct the associated finite memory Markov chain on $\{0,1\}^{{\ensuremath{\mathbb Z}} ^d}$ graphically as follows. In words, at each time step $t\in{\ensuremath{\mathbb Z}} $, the state of each site $x\in{\ensuremath{\mathbb Z}} ^{d}$ becomes $1$ if and only if the restriction of the current configuration to $R+(x,t)$ belongs to a randomly chosen up-family with law $\u$. More formally, let $(\ensuremath{\mathcal U}_{x,t})_{(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}}$ be an i.i.d.\ random field of up-families with law $\u$. Given the state of the PCA ${\ensuremath{\eta}}$ at times $t-r,\dots,t-1$ (which form a configuration) and $x\in{\ensuremath{\mathbb Z}} ^d$, we define \begin{equation} \label{eq:def:PCA} {\ensuremath{\eta}}_{x}(t)=\begin{cases}1&\text{if }\left\{(y,s)\in R:{\ensuremath{\eta}}_{x+y}(t+s)=1\right\}\in \ensuremath{\mathcal U}_{x,t}\\ 0&\text{otherwise}.\end{cases} \end{equation} This defines the trajectory of the PCA, given the initial state and the up-family field. When we want to specify that the initial state is $\o\in\O$, we write ${\ensuremath{\eta}}^\o$ for the corresponding process. When the PCA has no memory (i.e.\ for all $\ensuremath{\mathcal U}\in\operatorname{supp}\u$ and $U\in \ensuremath{\mathcal U}$ we have $(U\cap[-r,r]^d\times \{-1\})\in\ensuremath{\mathcal U}$), we may abusively write ${\ensuremath{\eta}}^\o$ with $\o\in\{0,1\}^{{\ensuremath{\mathbb Z}} ^d}$ and it is understood that this is the state of the process at time $-1$. 
We write ${\ensuremath{\mathbb P}} _\u$ for the law of the $\ensuremath{\mathcal U}_{x,t}$ field, from which the process is constructed. \subsubsection{Examples} \label{subsec:examples} Let us now introduce a few relevant examples. \paragraph{Toom rule with death} The Toom rule \cite{Toom80}*{Example 1} is a deterministic CA in two dimensions, which updates the state of each site $x\in{\ensuremath{\mathbb Z}} ^2$ to the more common value (in $\{0,1\}$) among the current state of $x$, $x+(1,0)$ and $x+(0,1)$. We further subject this CA to a specific type of noise, obtaining a PCA that we will refer to as \emph{Toom rule with death}. Namely, at each step and each site independently with probability $1-p\in[0,1]$, instead of applying the previous rule, we directly set it to state $0$. With our notation this corresponds to $d=2$, $r=1$ and $\u$ charging only two up-families: $\u(\{\varnothing\})=1-p$ (we will systematically put accolades to avoid confusing e.g.\ the singleton system consisting of the empty up-family appearing above and the empty system, which naturally verifies $\u(\varnothing)=0$) and \[\u\left(\left\{\left\{X\subset R:|X\cap\{(0,0,-1),(1,0,-1),(0,1,-1)\}|\ge 2\right\}\right\}\right)=p.\] This PCA is attractive, but not additive. For $p=1$ it degenerates into the (deterministic) CA called Toom rule. In fact, Toom \cite{Toom80} studied random perturbations of attractive CA in much greater generality, but we will come back to this later. \paragraph{Generalised oriented site percolation} Fix $X\subset R$ and $p\in[0,1]$. We define GOSP to be the additive attractive PCA with neighbourhood $X$, given by $\u(\{\varnothing\})=1-p$ and $\u(\{\{Y\subset R:Y\cap X\neq\varnothing\}\})=p$. The name comes from the observation that ${\ensuremath{\eta}}^{\{0\}}(t)\neq\ensuremath{\mathbf 0} $ if and only if there is a path with steps in $-X$ from $0$ to some site of the form $(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}$, using only sites $(y,s)\in{\ensuremath{\mathbb Z}} ^{d+1}$ such that $\ensuremath{\mathcal U}_{y,s}\neq \varnothing$. The standard oriented site percolation model is recovered by taking $X=\{(-1,-1),(1,-1)\}$ in one dimension. \paragraph{Bootstrap percolation} A BP model is specified by an \emph{update family}: a finite family $\ensuremath{\mathcal X}$ of finite subsets of ${\ensuremath{\mathbb Z}} ^d\setminus\{0\}$, both the sets and the family being possibly empty. At each time step the state of a site $x\in{\ensuremath{\mathbb Z}} ^d$ becomes $1$ if it is already $1$ or there exists $X\in\ensuremath{\mathcal X}$ such that all elements of $x+X$ are in state $1$. In fact, this is just another way to parametrise the set of all attractive CA with no memory which are monotone in time in the sense that ${\ensuremath{\eta}}_x(t+1)\ge {\ensuremath{\eta}}_x(t)$ for all $x\in{\ensuremath{\mathbb Z}} ^d$ and $t\ge 0$.\footnote{This property is sometimes called \emph{freezing} to distinguish from attractiveness, which is also a type of monotonicity.} With our notation BP corresponds to taking as $\u$ the Dirac measure on the minimal up-family $\ensuremath{\mathcal U}\subset \O_R$ such that $\{X\times\{-1\}:X\in(\ensuremath{\mathcal X}\cup\{\{0\}\})\}\subset \ensuremath{\mathcal U}$. If we drop the assumption that $\{0\}\times\{-1\}\in \ensuremath{\mathcal U}$, we retrieve the class of all attractive CA with no memory. The latter is essentially the class of models whose random perturbations were studied by Toom \cite{Toom80}. 
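To make the BP update rule concrete, the following minimal sketch simulates the classical $2$-neighbour model in two dimensions, whose update family $\ensuremath{\mathcal X}$ consists of all $2$-element subsets of the four nearest neighbours of the origin; the torus size, the initial density and the stopping rule are illustrative assumptions made only for this sketch.
\begin{verbatim}
# Sketch (assumes numpy): 2-neighbour bootstrap percolation on an n x n
# torus; a site becomes 1 if it is already 1 or at least two of its four
# nearest neighbours are 1.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def two_neighbour_bp(q=0.05, n=200, max_steps=10**4):
    eta = rng.random((n, n)) < q              # initial 1s with density q
    for _ in range(max_steps):
        nbrs = (np.roll(eta, 1, 0).astype(int) + np.roll(eta, -1, 0)
                + np.roll(eta, 1, 1) + np.roll(eta, -1, 1))
        new = eta | (nbrs >= 2)
        if (new == eta).all():                # reached the fixed point
            return new.mean()
        eta = new
    return eta.mean()

print(two_neighbour_bp())                     # final density of 1s
\end{verbatim}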
More generally, we define \emph{inhomogeneous BP} by a measure $\chi$ on update families. Then each site $x\in{\ensuremath{\mathbb Z}} ^d$ is assigned an i.i.d.\ update family $\ensuremath{\mathcal X}_x$ with law $\chi$ and at each step at site $x$ we use the minimal up-family $\ensuremath{\mathcal U}_x\subset \O_R$ such that $\{X\times \{-1\}:X\in(\ensuremath{\mathcal X}_x\cup\{\{0\}\})\}\subset\ensuremath{\mathcal U}_x$ and define the evolution via \cref{eq:def:PCA}\footnote{There may be issues defining this if $\chi$ has infinite support. We will only consider finitely supported $\chi$ measures in this work.} with $\ensuremath{\mathcal U}_{x,t}=\ensuremath{\mathcal U}_x$ for all $t$. Clearly, this is no longer a CA or a PCA, but rather what one would call \emph{an inhomogeneous attractive CA}. \paragraph{PCA with death} Given an attractive PCA, we define its version with death by considering $\ensuremath{\widetilde\u} =p\u+(1-p)\d_{\varnothing}$, so that $\ensuremath{\widetilde\u} $ defines another attractive PCA. In words, we run the original PCA with probability $p$ and put state $0$ with probability $1-p$, as we did for the Toom rule with death and GOSP. \paragraph{Kinetically constrained models} KCM are continuous time Markov processes with state space $\{0,1\}^{{\ensuremath{\mathbb Z}} ^d}$ informally defined as follows, given an \emph{update family}: a finite family $\ensuremath{\mathcal X}$ of finite subsets of ${\ensuremath{\mathbb Z}} ^d\setminus\{0\}$, and a parameter $q\in[0,1]$ (see \cite{Cancrini08}). Each site $x\in{\ensuremath{\mathbb Z}} ^d$ attempts to update at rate $1$ to an independent Bernoulli state with parameter $q$, but is only allowed to do so if the configuration on $x+X$ for some $X\in\ensuremath{\mathcal X}$ is $\ensuremath{\mathbf 1} $ (the all $1$ configuration). Otherwise, the state of $x$ cannot change until the above constraint becomes satisfied. Superficially, KCM are not closely related to the attractive PCA that we study, as they are neither attractive, nor discrete time, nor synchronous. Nevertheless, we will see that our treatment entails new results for KCM as well. \subsection{The phase diagram of PCA} \label{subsec:phases} In 1994 Bezuidenhout and Gray \cite{Bezuidenhout94} established the following fundamental result (see their work for a more formal statement). \begin{thm} \label{th:BG} Within the set of attractive PCA $\u$ without memory, the set of those with $\u(\{\varnothing\})>0$ and ${\ensuremath{\mathbb P}} _\u\left(\forall t>0,{\ensuremath{\eta}}^{\{0\}}(t)\neq\ensuremath{\mathbf 0} \right)>0$ is open. \end{thm} The main corollary of this is that the phase transition of survival from a single point for attractive PCA without memory but with positive death rate is continuous. Moreover, they showed that, within this `supercritical' phase described in the theorem, one can perform renormalisation to highly supercritical oriented percolation. This entails a number of results and is a key step towards establishing that models in this phase are `well-behaved' (see \cite{Hartarsky21GOSP} for more detail on what we mean by this). Our first main result is of a similar flavour and complements \cref{th:BG}. \begin{thm} \label{th:main} Let $S$ be the set of attractive PCA such that $\d_\ensuremath{\mathbf 0} $ is their unique invariant measure. Let $\operatorname{Int}$ and $\overline{\cdot}$ be the interior and the closure within the set of absorbing attractive PCA.
Then for any $\u\in \operatorname{Int}(S)$ there exist $c,C>0$ such that for all $t>0$ and finite $A\subset {\ensuremath{\mathbb Z}} ^d\times\{-r,\dots,-1\}$ it holds that \begin{align} \label{eq:main:1}{\ensuremath{\mathbb P}} _\u\left({\ensuremath{\eta}}_0^{\ensuremath{\mathbf 1} }(t)\neq 0\right)&{}\le Ce^{-ct},\\ \label{eq:main:2}{\ensuremath{\mathbb P}} _\u\left({\ensuremath{\eta}}^A(t)\neq\ensuremath{\mathbf 0} \right)&{}\le C|A|e^{-ct}. \end{align} Moreover, $S\subset \overline{\operatorname{Int} (S)}$. \end{thm} This result can be informally rephrased as `the subcritical phase of attractive PCA is well-behaved.' We leave the discussion of how to use this on concrete models to \cref{subsec:parametrisation}. Unfortunately, the terms `supercritical' for \cref{th:BG} and `subcritical' for \cref{th:main} are quite misleading. Indeed, it is known that there is an intermediate regime, which may naturally be called \emph{cooperative survival} phase. More specifically, the Toom rule with death for small enough death rates exhibits such behaviour (see \cites{Toom80}) and more general examples will be discussed in \cref{subsec:BP}. \subsection{Parametrised models and applications} \label{subsec:parametrisation} In practice one is usually not interested in the set of \emph{all} PCA, but rather has one specific PCA in mind, possibly with a parameter to tune. As it stands, \cref{th:main} does not assert that for a specific model, as we vary the parameter, we will have a well-behaved subcritical phase immediately followed by a non-uniqueness one. Our next goal is to provide a simple sufficient criterion for this to happen. Indeed, hypotheses are needed, since the model of interest may glide along the boundary of the subcritical phase, in which case no sharpness of the phase transition is to be expected. To preclude this scenario we will require a few more definitions. We say that a measure ${\ensuremath{\nu}}$ on a partially ordered set $(X,\ge)$ \emph{stochastically dominates} another one, $\u$, if for every down-set $D$ of $X$ we have ${\ensuremath{\nu}}(D)\le\u(D)$. A \emph{parametrised model} is a continuously differentiable curve $(\u_p)_{p\in[p_1,p_2]}$ in the space of attractive PCA for some $p_1<p_2\in{\ensuremath{\mathbb R}} $, so that $\u_p$ are probability measures on $\ensuremath{\mathscr U} $. Here and below derivatives $\u'_p=\lim_{q\to p}(\u_q-\u_p)/(q-p)$ are viewed as signed measures on $\ensuremath{\mathscr U} $ equipped with the weak topology. Note that the function $p\mapsto \u_p$ is nondecreasing for stochastic domination (i.e.\ for all $p\le q$ the measure $\u_q$ stochastically dominates $\u_p$) iff \begin{equation} \label{eq:def:monotonicity}\forall\ensuremath{\mathscr D} \in\ensuremath{\mathfrak D} \setminus\{\varnothing,\ensuremath{\mathscr U} \},\forall p\in[p_1,p_2],\u'_p(\ensuremath{\mathscr D} )\le 0. \end{equation} If that is the case, we say that the parametrised model is \emph{nondecreasing} and, if additionally $\u_p\neq\u_q$ whenever $p\neq q$, we say it is \emph{increasing}. Morally, any increasing parametrised model should have a sharp phase transition in the sense of \cref{th:main} (see \cref{th:parametrised} below). However, even for percolation some issues may arise if the model increases in a `nonessential' way. Furthermore, there has been some trouble establishing a necessary and sufficient condition for a modification being `essential' \cites{Balister14,Aizenman91}.
Circumventing these issues, we propose a sufficient criterion, which is satisfied in natural models. We say the parametrised model is \emph{strongly increasing} if there exists $c>0$ such that \begin{equation} \label{eq:def:strong:monotonicity} \forall \ensuremath{\mathscr D} \in \ensuremath{\mathfrak D} \setminus\{\varnothing,\ensuremath{\mathscr U} \}, \forall p\in[p_1,p_2],\u_p'(\ensuremath{\mathscr D} )\le -c(1-\u_p(\ensuremath{\mathscr D} )), \end{equation} where the derivative is w.r.t.\ $p$. For a nondecreasing parametrised model we define the \emph{critical parameter} \begin{equation} \label{eq:def:pc} \ensuremath{p_{\mathrm{c}}} =\inf\left\{p\in[p_1,p_2]: \d_\ensuremath{\mathbf 0} \text{ is not the unique invariant measure}\right\} \end{equation} with $\inf\varnothing=p_2$. Note that $\u_p(\{\varnothing\})=0$ iff $\d_\ensuremath{\mathbf 1} $ is invariant, while $\u_p(\{\O_R\})=0$ iff $\d_\ensuremath{\mathbf 0} $ is invariant. We will say that a nondecreasing parametrised model has a \emph{sharp transition} if for any $p\in[p_1,\ensuremath{p_{\mathrm{c}}} )$ there exist $c,C>0$ such that \cref{eq:main:1,eq:main:2} hold for all $t>0$ and finite $A\subset{\ensuremath{\mathbb Z}} ^{d}\times\{-r,\dots,-1\}$. \begin{thm} \label{th:parametrised} Every strongly increasing parametrised model has a sharp transition. \end{thm} \begin{rem} \Cref{th:main,th:parametrised} and their proofs extend naturally beyond the binary setting. More precisely, one may replace the base set $\{0,1\}$ of $\O$ by an arbitrary finite partially ordered set with unique minimal and maximal elements called $0$ and $1$ (the maximal one is not essential). We have chosen to reason directly in the binary case for the sake of readability. \end{rem} We next state a few interesting applications of \cref{th:parametrised} to specific models. The first one recovers a classical result of Aizenman and Barsky \cite{Aizenman87}*{Theorem 7.3} (also see their section 8.1 for a discussion of additive PCA) and Menshikov \cite{Menshikov86}, which also has other proofs \cites{Duminil-Copin16,Duminil-Copin19}. \begin{cor} \label{cor:GOSP} Every GOSP has a sharp transition. \end{cor} Indeed, by the definition in \cref{subsec:examples} of GOSP with neighbourhood $X\subset R$, for all $p\in(0,1]$ (the case $p=0$ is similar) and $\ensuremath{\mathscr D} \in\ensuremath{\mathfrak D} \setminus\{\varnothing\}$ we have \[\u'(\ensuremath{\mathscr D} )=\u'(\{\varnothing\})+{\ensuremath{\mathbbm{1}}} _{\ensuremath{\mathcal U}\in\ensuremath{\mathscr D} }\u'(\{\ensuremath{\mathcal U}\})=-1+{\ensuremath{\mathbbm{1}}} _{\ensuremath{\mathcal U}\in\ensuremath{\mathscr D} }=\frac{-1+\u(\ensuremath{\mathscr D} )}{p}\le-1+\u(\ensuremath{\mathscr D} ),\] where we set $\ensuremath{\mathcal U}=\{Y\subset R:Y\cap X\neq\varnothing\}$. However, we may also directly use \cref{th:linear} below instead of \cref{th:parametrised} to deduce \cref{cor:GOSP}. We next state a consequence of \cref{th:parametrised} in the BP context. \begin{thm} \label{th:BP} Consider a BP with update family $\ensuremath{\mathcal X}$ contained in a half-space: there exists $u\in{\ensuremath{\mathbb R}} ^d$ such that $\forall X\in\ensuremath{\mathcal X},\forall x\in X,\<x,u\>< 0$. We use i.i.d.\ Bernoulli initial condition with parameter $q$, whose law we denote by ${\ensuremath{\mu}}_q$ (BP being a CA, this is the only randomness). 
Then, setting \begin{equation} \label{eq:def:qc} \ensuremath{q_{\mathrm{c}}} =\inf\left\{q\in[0,1]:\lim_{t\to\infty}{\ensuremath{\mu}}_q({\ensuremath{\eta}}_0(t)\neq 1)=0\right\},\end{equation} we have \begin{equation} \label{eq:BP:exp:decay} \forall q>\ensuremath{q_{\mathrm{c}}} ,\exists c,C>0,\forall t>0,\quad{\ensuremath{\mu}}_q\left({\ensuremath{\eta}}_0(t)\neq 1\right)\le Ce^{-ct}. \end{equation} \end{thm} \Cref{th:BP} makes further progress towards proving \cite{Hartarsky21}*{Conjecture 8.1}, which asks for this result for all families, not necessarily contained in a half-space. This conjecture itself generalised a question of Schonmann \cite{Schonmann92} from 1992, which remains open, asking if the same result holds when restricted to $\ensuremath{\mathcal X}$ contained in the set of nearest neighbours of the origin instead of a half-space. It should be noted that \cref{th:BP} is not a direct application of \cref{th:parametrised}, since BP has $\u(\{\varnothing\})=0$ by definition. Instead we will rely on the correspondence between BP and CA of \cref{prop:correspondence}. This correspondence, presented in \cref{sec:BP}, is also at the root of the results discussed in \cref{subsec:BP}. To give a simple instance of it, let us focus on standard oriented site percolation from \cref{subsec:examples} viewed as a one-dimensional CA with death. Consider the two-dimensional BP update family $\ensuremath{\mathcal X}=\{\{(-1,-1),(1,-1)\}\}$ and the initial condition given by the space-time sites whose (random) up-family is $\varnothing$ (death). Then the space-time sites $(x,t)$ which remain in state 0 until time $t$ in the BP process are exactly the ones which are in state 1 for the oriented percolation CA with death with initial condition $\ensuremath{\mathbf 1} $. \Cref{th:BP} generalises directly to inhomogeneous BP as follows. \begin{thm} \label{th:inhomogeneous} Consider an inhomogeneous BP with measure $\chi$ supported on a finite set of update families contained in the same half-space (see \cref{th:BP}). Denote by ${\ensuremath{\mu}}_q$ the law of the i.i.d.\ update families with distribution $\chi$ and i.i.d.\ initial condition with Bernoulli law of parameter $q$. Then \cref{eq:BP:exp:decay} holds with $\ensuremath{q_{\mathrm{c}}} $ defined by \cref{eq:def:qc}. \end{thm} Moving on to KCM, the following is a direct consequence of \cref{th:BP} together with \cite{Hartarsky21}*{Theorem~3.7}. \begin{cor} \label{cor:KCM} Consider a KCM with update family $\ensuremath{\mathcal X}$ contained in a half-space (see \cref{th:BP}). Then the spectral gap of its generator is positive for all $q>\ensuremath{q_{\mathrm{c}}} $ and $0$ for all $q<\ensuremath{q_{\mathrm{c}}} $, where $\ensuremath{q_{\mathrm{c}}} $ is the quantity defined in \cref{eq:def:qc} for the BP with the same choice of update family $\ensuremath{\mathcal X}$. \end{cor} \subsection{Bootstrap percolation and Toom perturbations of attractive cellular automata} \label{subsec:BP} Finally, we discuss consequences of the PCA representation of BP used to prove \cref{th:BP}. \paragraph{Bootstrap percolation universality} We first need to introduce a few notions from BP universality, whose aim is to classify BP update families according to their behaviour at or around their phase transition. We say that a BP update family $\ensuremath{\mathcal X}$ is \emph{supercritical} if there exists a finite set $A\subset{\ensuremath{\mathbb Z}} ^d$ such that $\bigcup_{t>0}{\ensuremath{\eta}}^A(t)$ is infinite.
A more geometric characterisation is available based on the notion of stable direction. We say that $u\in S^{d-1}$ (the unit sphere) is \emph{unstable} if there exists $X\in\ensuremath{\mathcal X}$ such that for all $x\in X$ we have $\<u,x\><0$ and it is \emph{stable otherwise}. It was proved in \cite{Bollobas15}*{Theorem 7.1, Definition 1.3} that in two dimensions $\ensuremath{\mathcal X}$ is supercritical if and only if there exists an open hemisphere of unstable directions. This is expected to generalise to any dimension. Instead, we say that $\ensuremath{\mathcal X}$ is \emph{subcritical} if $\ensuremath{q_{\mathrm{c}}} >0$ (recall \cref{eq:def:qc}). It was proved in \cite{Balister22}*{Definition 1.2, Corollary 1.6} that $\ensuremath{\mathcal X}$ is subcritical if and only if every open hemisphere contains an open set of stable directions. \paragraph{Toom perturbations} Toom \cite{Toom80} considered random space-time perturbations of attractive CA with $\u(\{\varnothing\})=0$ (otherwise every configuration becomes $\ensuremath{\mathbf 0} $ after one step, since CA are deterministic), in order to assess whether the $\ensuremath{\mathbf 0} $ and $\ensuremath{\mathbf 1} $ states are stable. We will not define the exact noise used there, but using attractiveness to suppress `positive' noise, it can be brought down to a noise stochastically dominated by a product measure with low parameter. For simplicity, we will work directly with the resulting product noise. Namely, we consider the CA with death $\ensuremath{\widetilde\u} $. We say that the CA $\u$ is an \emph{eroder} if, starting from any configuration $\o$ such that $\o_{(x,s)}=0$ for finitely many $(x,s)\in{\ensuremath{\mathbb Z}} ^{d}\times \{-r,\dots,-1\}$, we have ${\ensuremath{\eta}}^\o(t)=\ensuremath{\mathbf 1} $ for any $t$ large enough depending on $\o$. \paragraph{Interplay} We can recover the following result, first established by Toom \cite{Toom80}, directly from the BP--PCA correspondence and BP results. \begin{thm} \label{th:Toom} In dimension $d=1$ any attractive CA is an eroder iff its version with death rate $1-p$ small enough has an invariant measure different from $\d_\ensuremath{\mathbf 0} $. \end{thm} Let us note that in our alternative proof of \cref{th:Toom} we will not at all require the full power of \cite{Balister22}*{Corollary 1.6} or its two-dimensional version \cite{Balister16}*{Theorem 1}. Instead, we only rely on the much easier partial result \cite{Balister16}*{Theorem 9} in view of \cref{lem:directions} below. We believe the renormalisation approach of \cite{Balister16}*{Theorem 9} to be simpler than the original proof of Toom, as well as its subsequent versions in \cites{Gacs21,Berman88,Swart22, Hartarsky22Toom} based on somewhat involved Peierls arguments. Naturally, a carefully performed Peierls argument often gives a better quantitative bound on the critical parameter (see e.g.\ \cites{Swart22,Hartarsky22Toom}), though this is seldom important. We also mention the more complex renormalisation approach of \cite{Bramson91}. Inversely, the viewpoint of \cite{Toom80}*{Theorem 5} provides an interesting consequence for BP, thanks to the BP-PCA correspondence of \cref{prop:correspondence}. Namely, it allows us to recover the main results of \cites{Balister16,Bollobas15} restricted to families contained in half-spaces, but extended to arbitrary dimension, which have not been treated until present (but see the subsequent work \cite{Balister22}). 
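As an aside, (un)stability of a given direction for a concrete update family is elementary to check mechanically. The short Python fragment below (purely illustrative, with the standard oriented site percolation family from \cref{subsec:examples} as an example) implements the definition verbatim; only the signs of the inner products matter, so integer vectors can be used in place of unit ones.
\begin{verbatim}
# Check whether a direction u is unstable for a BP update family,
# i.e. whether some update rule X satisfies <u,x> < 0 for all x in X.
def is_unstable(u, update_family):
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    return any(all(dot(u, x) < 0 for x in X) for X in update_family)

# Standard oriented site percolation viewed as a two-dimensional BP.
X_osp = [[(-1, -1), (1, -1)]]
print(is_unstable((0, 1), X_osp))   # True: (0,1) is unstable
print(is_unstable((0, -1), X_osp))  # False: (0,-1) is stable
\end{verbatim}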
\begin{thm} \label{th:orientation:BP} A BP in any dimension whose update family is contained in a half-space (recall \cref{th:BP}) is either supercritical or subcritical (that is, either there are finite sets with infinite offspring or $\ensuremath{q_{\mathrm{c}}} >0$). \end{thm} Although in two dimensions this can be checked easily (see \cref{lem:directions} below) from geometric considerations based on the characterisations of supercritical and subcritical models from \cites{Bollobas15,Balister16}, it does not seem to have been noticed. We should note that the half-space condition cannot be removed, as without it already in two dimensions another, \emph{critical}, class emerges \cite{Bollobas15}. \subsection{Organisation} The remainder of the paper is organised as follows. In \cref{sec:decay} we prove our general results---\cref{th:main,th:parametrised} by adapting the randomised algorithm approach of Duminil-Copin, Raoufi and Tassion \cite{Duminil-Copin19}. In \cref{sec:BP} we introduce the correspondence between BP and CA and deduce \cref{th:BP,th:inhomogeneous,th:Toom,th:orientation:BP}. \section{Sharpness of the transition} \label{sec:decay} \subsection{Linear parametrisation} \label{subsec:linear} We first seek to prove the following preliminary result, from which \cref{th:main,th:parametrised} will be deduced in \cref{subsec:parametrised}. \begin{thm} \label{th:linear} Let $\u$ be a rates measure and let $\u_p=p\u+(1-p)\d_{\varnothing}$ for $p\in[0,1/(1-\u(\{\varnothing\}))]$. This nondecreasing parametrised model has a sharp transition. \end{thm} The first step is a Russo formula adapted to our situation. We say that a space-time site $(x_0,t_0)$ is \emph{pivotal} for an event $A$ and realisation of the $\ensuremath{\mathcal U}_{x,t}$ variables, if $A$ occurs for $(\ensuremath{\mathcal U}_{x,t})_{(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}}$, but it does not occur if we replace $\ensuremath{\mathcal U}_{x_0,t_0}$ by $\varnothing$. \begin{lem} \label{prop:russo} Let $\u$ be a rates measure and let $\u_p=p\u+(1-p)\d_{\varnothing}$ for $p\in[0,1]$. Let $A_n$ be the event that ${\ensuremath{\eta}}_0^{\ensuremath{\mathbf 1} }(n)\neq 0$ and set $\theta_n(p)={\ensuremath{\mathbb P}} _{\u_p}(A_n)$. Then for all $n\ge 0$ and $p\in(0,1]$ \[\theta_n'(p)=\sum_{x,t}{\ensuremath{\mathbb P}} _{\u_p}\left((x,t)\text{ is pivotal for }A_n\right).\] \end{lem} \begin{proof} Fix an i.i.d.\ random field $\ensuremath{\mathcal U}_{x,t}^1$ with law $\u$ and i.i.d.\ random variables $X_{x,t}$ uniform on $[0,1]$ and define \[\ensuremath{\mathcal U}_{x,t}^p=\begin{cases}\ensuremath{\mathcal U}_{x,t}^1&\text{if }X_{x,t}\le p\\ \varnothing&\text{if }X_{x,t}> p.\end{cases}\] Conditioning on the $\ensuremath{\mathcal U}_{x,t}^1$, we can then proceed as in the proof of the standard Russo formula (see e.g.\ \cite{Grimmett99}*{Sec. 2.4}). We then average over $\ensuremath{\mathcal U}_{x,t}^1$ to obtain the desired equality. \end{proof} The next step is to adapt the Duminil-Copin--Raoufi--Tassion version of the O'Donnell--Saks--Schramm--Servedio decision tree result (see \cites{ODonnell05,Duminil-Copin19} for background). It is important to note that the support of $\u$ is not necessarily binary or even totally ordered, so it is not clear how to adapt \cite{Duminil-Copin19}*{Theorem 1.1}. We circumvent this problem by restricting our attention to events rather than real-valued observables. 
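Before stating the adapted inequality, let us note that the quantities $\theta_n(p)$ appearing in \cref{prop:russo} are straightforward to approximate by direct Monte Carlo simulation. The following Python sketch does so for standard oriented site percolation from \cref{subsec:examples}; it is included only as an illustration of the objects involved, plays no role in the proofs, and all numerical choices in it are arbitrary.
\begin{verbatim}
# Monte Carlo estimate of theta_n(p) = P(eta_0^1(n) != 0) for standard oriented
# site percolation in d=1, started from the all-one configuration at time -1.
import random

def survives(n, p):
    width = 2 * n + 1              # the sites |x| <= n suffice for the origin
    prev = [1] * width             # time -1: everything is in state 1
    for t in range(n + 1):
        cur = [0] * width
        for x in range(width):
            left = prev[x - 1] if x > 0 else (1 if t == 0 else 0)
            right = prev[x + 1] if x < width - 1 else (1 if t == 0 else 0)
            open_site = random.random() < p
            cur[x] = 1 if open_site and (left or right) else 0
        prev = cur
    return prev[n]                 # the origin sits at index n

def theta(n, p, samples=10000):
    return sum(survives(n, p) for _ in range(samples)) / samples

for p in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(p, theta(10, p))
\end{verbatim}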
\begin{lem} \label{lem:DCRT:var} Let $n\in{\ensuremath{\mathbb N}} $ and $A\subset \ensuremath{\mathscr U} ^n$ be an increasing event (w.r.t.\ the pointwise inclusion order). Fix a randomised algorithm determining the occurrence of $A$ and denote by $\d_i$ the probability that it reveals the value of the $i$-th up-family. Then \[\operatorname{Var}_\u({\ensuremath{\mathbbm{1}}} _A)\le 2\sum_{i=1}^{n}\d_i\cdot{\ensuremath{\mathbb P}} _\u\left(i\text{ is pivotal for }A\right)\] where $\operatorname{Var}$ and ${\ensuremath{\mathbb P}} $ are w.r.t.\ a product measure $\bigotimes_{i=1}^n\u$ on $\ensuremath{\mathscr U} ^n$. \end{lem} \begin{proof} The result is proved like Theorem 1.1.\ of \cite{Duminil-Copin19} (also see Remark 2.2.\ there). It suffices to replace their equation (5) by the fact that if ${\ensuremath{\eta}}\in\ensuremath{\mathscr U} ^n$ and $\o\in\ensuremath{\mathscr U} ^n$ differ only at the $i$-th up-family, then \begin{equation} \label{eq:DCRT:piv} \left|{\ensuremath{\mathbbm{1}}} _A({\ensuremath{\eta}})-{\ensuremath{\mathbbm{1}}} _A(\o)\right|\le {\ensuremath{\mathbbm{1}}} _{i\text{ is pivotal for $A$}}({\ensuremath{\eta}})+{\ensuremath{\mathbbm{1}}} _{i\text{ is pivotal for $A$}}(\o). \end{equation} To see this, assume w.l.o.g.\ that ${\ensuremath{\eta}}\not\in A$ (if ${\ensuremath{\mathbbm{1}}} _A({\ensuremath{\eta}})={\ensuremath{\mathbbm{1}}} _A(\o)$ there is nothing to prove). Then replacing ${\ensuremath{\eta}}_i$ by $\varnothing$ does not trigger the occurrence of $A$, as $A$ is increasing. Hence, either $\o\not\in A$ and the l.h.s.\ in \cref{eq:DCRT:piv} is 0, or $i$ is pivotal for $A$ and $\o$, so the r.h.s.\ is 1. \end{proof} Our next task is to define a suitable randomised algorithm to which \cref{lem:DCRT:var} will be applied. \begin{lem} \label{lem:algo} Fix a parametrised model and $p\in[p_1,p_2]$. Recall $A_n=\{{\ensuremath{\eta}}_0^{\ensuremath{\mathbf 1} }(n)\neq 0\}$ and $\theta_n(p)={\ensuremath{\mathbb P}} _p(A_n)$. For each integer $n\ge 1$ there exists a randomised algorithm determining the occurrence of $A_n$ such that for every $(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}$ its revealment probability $\d_{x,t}$ satisfies \[\d_{x,t}\le \frac{2}{n}\sum_{i=0}^{n-1}\theta_i(p).\] \end{lem} \begin{proof} Clearly, it suffices to restrict our attention to the up-families in \begin{equation} \label{eq:def:S} S=\left\{(x,t)\in {\ensuremath{\mathbb Z}} ^{d+1}:|x|\le r\cdot (n-t),t\in[0,n]\right\}, \end{equation} where $r$ is the range of the process. The algorithm proceeds as follows (following \cite{Duminil-Copin19}*{Lemma 3.2} and \cite{Hartarsky21}*{Lemma 7.3}). Select a number $k$ uniformly at random in $\{1,\dots,n\}$. Contrary to \cite{Hartarsky21}, we will explore forward in time, starting from time $k$. For $t\in\{0,\dots,n-k\}$ denote by $\o^\ensuremath{\mathbf 1} (t)$ the configuration with initial condition $\ensuremath{\mathbf 1} $ and up-families $\ensuremath{\mathcal U}'_{x,t}=\ensuremath{\mathcal U}_{x,t+k}$, where $\ensuremath{\mathcal U}_{x,k}$ are the up-families in the graphical construction of ${\ensuremath{\eta}}$. We initialise the algorithm at $t=0$. We explore the state of each $\ensuremath{\mathcal U}_{x,t+k}$ for $(x,t+k)\in S$ such that the currently explored up-families do not allow to conclude that $\o^\ensuremath{\mathbf 1} _x(t)=0$ (so, at the first step that is all $\ensuremath{\mathcal U}_{x,k}$ for $(x,k)\in S$). Then we increment $t$ and repeat the previous operation. 
Upon reaching $t=n-k$, if we are able to conclude that $\o^{\ensuremath{\mathbf 1} }_0(n-k)=0$, we terminate the algorithm (since we know by attractiveness that ${\ensuremath{\eta}}^\ensuremath{\mathbf 1} _0(n)\le \o^{\ensuremath{\mathbf 1} }_0(n-k)=0$). Otherwise we reveal all up-families in $S$ to determine if $A_n$ occurs and terminate. Next observe that, conditionally on $k$, the probability of revealing $U_{x,t}$ for $(x,t)\in S$ is at most $\theta_{n-k}(p)+{\ensuremath{\mathbbm{1}}} _{t\ge k}\theta_{t-k}(p)$, the first term bounding the probability that we reach $t=n-k$ without having $\o_0^{\ensuremath{\mathbf 1} }(n-k)=0$. Averaging over the law of $k$, we obtain the desired conclusion. \end{proof} We are now ready to assemble the proof of \cref{th:linear}. \begin{proof}[Proof of \cref{th:linear}] Fix $\u\neq\d_\varnothing$ and $\ensuremath{p_{\mathrm{c}}} >0$ (otherwise the statement is trivial). Up to linear reparametrisation of the model, we may further assume that $\u(\{\varnothing\})=0$, $p_1=0$ and $p_2=1$. Let $\u_p=p\u+(1-p)\d_{\varnothing}$ for $p\in[0,1]$. This clearly defines a nondecreasing parametrised model, since $\ensuremath{\mathcal U}\supset\varnothing$ for all $\ensuremath{\mathcal U}\in\ensuremath{\mathscr U} $. Define $A_n$, $\theta_n(p)$ and $S$ as above (see \cref{lem:algo} and \cref{eq:def:S}). By \cref{lem:DCRT:var,lem:algo}, for all $p\in[0,1]$ we have \[\theta_n(p)(1-\theta_n(p))\le \frac{4}{n}{\ensuremath{\mathbb E}} _{\u_p}[N]\sum_{i=0}^{n-1}\theta_{i}(p),\] where $N$ is the number of $(x,t)\in S$ which are pivotal for $A_n$. Then, applying \cref{prop:russo} we get that for $p\in(0,1]$ \[\theta_n'(p)\ge \frac{n\theta_n(p)(1-\theta_n(p))}{4\sum_{i=0}^{n-1}\theta_{i}(p)}.\] Fix $\varepsilon \in(0,\ensuremath{p_{\mathrm{c}}} )$. Observe that for any $p\in[\varepsilon,\ensuremath{p_{\mathrm{c}}} -\varepsilon]$ we have $1-\theta_n(p)\ge \u_p(\{\varnothing\})\ge \varepsilon$. Thus, we may apply \cite{Duminil-Copin19}*{Lemma 3.1} in this interval to get that either \cref{eq:main:1} holds for $\u_p$ for all $p\in[{\ensuremath{\varepsilon}},\ensuremath{p_{\mathrm{c}}} -\varepsilon)$, or there exists $p\in(\varepsilon,\ensuremath{p_{\mathrm{c}}} -\varepsilon)$ such that $\theta(p)>0$. However, the latter contradicts the definition of $\ensuremath{p_{\mathrm{c}}} $, \cref{eq:def:pc}. Since \cref{eq:main:1} is trivial for $\u_0=\d_\varnothing$, we have established \cref{eq:main:1} for $\u_p$ for all $p\in[0,\ensuremath{p_{\mathrm{c}}} )$. Finally, it remains to derive \cref{eq:main:2}. Starting from a finite set $A$, the only sites which may be in state $1$ at time $t$ are those at distance at most $rt$ from a site in $A$. Hence, taking the union bound of \cref{eq:main:1} over these sites and using attractiveness, we get that for all $p\in[0,\ensuremath{p_{\mathrm{c}}} )$ \[{\ensuremath{\mathbb P}} _{\u_p}\left({\ensuremath{\eta}}^A(t)\neq \ensuremath{\mathbf 0} \right)\le |A|rt\max_{x\in{\ensuremath{\mathbb Z}} ^d}{\ensuremath{\mathbb P}} _{\u_p}\left({\ensuremath{\eta}}^A_{x}(t)\neq0\right)\le C|A|rte^{-ct}\le C'|A|e^{-c't}\] for a suitable choice of $C',c'>0$ depending on $p$, yielding \cref{eq:main:2}. \end{proof} \subsection{Parametrisations} \label{subsec:parametrised} With \cref{th:linear}, we are in position to conclude the proof of \cref{th:main}. 
\begin{proof}[Proof of \cref{th:main}] Let $\u$ be in the interior of the set of absorbing attractive PCA such that $\d_\ensuremath{\mathbf 0} $ is their unique invariant measure, denoted by $\operatorname{Int}(S)$. Clearly, we cannot have $\u(\{\varnothing\})=0$, since in that case $\d_\ensuremath{\mathbf 1} $ is invariant. Therefore, we may consider the linearly parametrised model $\u_p=p\u+(1-p)\d_\varnothing$ for $p\in[0,1/(1-\u(\{\varnothing\}))]$. Since this parametrised model is absorbing and $\u_1\in \operatorname{Int}(S)$, necessarily $\ensuremath{p_{\mathrm{c}}} >1$. But then \cref{th:linear} yields \cref{eq:main:1,eq:main:2} for $\u=\u_1$. Similarly, assume that $\d_\ensuremath{\mathbf 0} $ is the unique invariant measure of the $\u$-PCA, but not necessarily $\u\in \operatorname{Int}(S)$. Then \cref{th:linear} gives that $\ensuremath{p_{\mathrm{c}}} \ge 1$, so that \cref{eq:main:1,eq:main:2} hold for any $p<1$. Then \cref{prop:open} below allows us to deduce that $\u_p\in \operatorname{Int}(S)$ for $p\in[0,1)$ and to conclude that $\u\in\overline{\operatorname{Int}(S)}$. \end{proof} \begin{prop} \label{prop:open} The set of attractive absorbing PCA for which there exist $c,C>0$ so that \cref{eq:main:1} holds for all $t>0$ is open. \end{prop} \begin{proof} We use a standard renormalisation argument. Fix $\u$ such that \cref{eq:main:1} holds for some $c,C>0$ and let ${\ensuremath{\varepsilon}}>0$ be small enough, to be chosen later. Partition ${\ensuremath{\mathbb Z}} ^{d+1}$ into the boxes $B_{x,t}=(xL,tL)+[0,L)^{d+1}$ for some large $L$ to be chosen later. Let $\ensuremath{\mathcal B}_{x,t}$ denote the event that the $\u$-PCA with initial condition $\ensuremath{\mathbf 1} $ and up-family field translated by $-L(x,t)$ is in state $1$ at some $(y,s)\in [0,L)^d\times [L-r,L)$. By \cref{eq:main:1} and the union bound we have that for $L$ large enough ${\ensuremath{\mathbb P}} _\u(\ensuremath{\mathcal B}_{x,t})<{\ensuremath{\varepsilon}}$ for any $(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}$. Moreover, $\ensuremath{\mathcal B}_{x,t}$ only depends on the up-families $\ensuremath{\mathcal U}_{y,s}$ for $(y,s)\in B_{x',t'}$ with $(x',t')$ at distance at most $r$ from $(x,t)$. Fix an attractive absorbing PCA ${\ensuremath{\nu}}$ sufficiently close to $\u$ (depending on ${\ensuremath{\varepsilon}}$ and $L$). Then we can couple the corresponding up-family fields $\ensuremath{\mathcal U}_{y,s}$ and $\ensuremath{\mathcal U}'_{y,s}$, so that they differ with small probability and independently for each $(y,s)\in{\ensuremath{\mathbb Z}} ^{d+1}$. We say that the box $B_{x,t}$ is \emph{bad} if $\ensuremath{\mathcal B}_{x,t}$ occurs or there exists a site $(y,s)$ at distance at most $rL$ from $B_{x,t}$ such that $\ensuremath{\mathcal U}_{y,s}\neq\ensuremath{\mathcal U}'_{y,s}$. By the well-known Liggett--Schonmann--Stacey theorem \cite{Liggett97} the good boxes stochastically dominate a product measure with density approaching $1$ if we choose ${\ensuremath{\varepsilon}}$ small enough. Finally, we consider oriented percolation of bad boxes with range $r$ and observe that if a box does not belong to a bad connected component reaching time $0$, then the state at its top boundary is necessarily $\ensuremath{\mathbf 0} $ in both the $\u$ and the ${\ensuremath{\nu}}$-PCA with initial condition $\ensuremath{\mathbf 1} $. Indeed, if the box $B_{x,t}$ itself is not bad, that is enough by the definition of bad boxes, while otherwise we can proceed by induction.
Namely, we observe that if all boxes $B_{y,t-1}$ for $\|x-y\|_\infty\le r$ are good, then their top boundaries are in state $\ensuremath{\mathbf 0} $ and no absorbing PCA of range $r$ can reach a nonzero state at the top of $B_{x,t}$, starting from that. We can then conclude that \cref{eq:main:1} holds for ${\ensuremath{\nu}}$ (and different $c,C>0$), since in highly subcritical independent percolation of range $r$ the size of clusters has this exponential decay property (e.g.\ by \cref{cor:GOSP}, which was a consequence of \cref{th:linear} without going through \cref{th:parametrised}). \end{proof} Our next goal is to prove \cref{th:parametrised}. We will deduce this from the specific case, \cref{th:linear}, and the following result. \begin{lem} \label{lem:strong:monotonicity} Let $(\u_p)_{p\in[0,1]}$ be a strongly increasing parametrised model. Then there exists $c'>0$ such that for all $p\in(0,1]$ there exists ${\ensuremath{\varepsilon}}>0$ such that for all $p'\in [1-{\ensuremath{\varepsilon}},1]$ it holds that ${\ensuremath{\nu}}_{p'}:=p'\u_{p}+(1-p')\d_{\varnothing}$ stochastically dominates $\u_{p-(1-p')/c'}$. \end{lem} \begin{proof} Notice that for any down-set $\ensuremath{\mathscr D} \in\ensuremath{\mathfrak D} \setminus\{\varnothing,\ensuremath{\mathscr U} \}$ the desired inequality ${\ensuremath{\nu}}_{p'}(\ensuremath{\mathscr D} )\le \u_{p-(1-p')/c'}(\ensuremath{\mathscr D} )$ is an equality for $p'=1$. Therefore, it suffices to show that the derivatives satisfy the reverse inequality: \begin{equation} \label{eq:ineq:domination} \u_p(\ensuremath{\mathscr D} )-1 ={\ensuremath{\nu}}'_{p'}(\ensuremath{\mathscr D} )\ge \frac{\partial\u_{p-(1-p')/c'}}{\partial p'}(\ensuremath{\mathscr D} )=\frac{\u'_{p-(1-p')/c'}(\ensuremath{\mathscr D} )}{c'}\end{equation} in a neighbourhood of $p'=1$. If $\u_p(\ensuremath{\mathscr D} )=1$, we may conclude directly by the fact that the parametrised model is nondecreasing. Otherwise, recalling \cref{eq:def:strong:monotonicity} and setting $c'=c/2$, we have \[\frac{\u'_{p-(1-p')/c'}(\ensuremath{\mathscr D} )}{c'}\le 2\left(\u_{p-(1-p')/c'}(\ensuremath{\mathscr D} )-1\right)\xrightarrow{p'\to1}2\left(\u_p(\ensuremath{\mathscr D} )-1\right)<\u_p(\ensuremath{\mathscr D} )-1.\] Hence, for $p'$ sufficiently close to $1$, \cref{eq:ineq:domination} does hold for all $\ensuremath{\mathscr D} \in\ensuremath{\mathfrak D} \setminus\{\varnothing,\ensuremath{\mathscr U} \}$, since $\ensuremath{\mathfrak D} $ is finite. \end{proof} \begin{proof}[Proof of \cref{th:parametrised}] Consider a strongly increasing parametrised model $(\u_p)_{p\in[p_1,p_2]}$. Up to linear reparametrisation, we may assume that $p_1=0,p_2=1$. We further suppose that $\ensuremath{p_{\mathrm{c}}} >0$, as otherwise there is nothing to prove. Fix $p\in(0,\ensuremath{p_{\mathrm{c}}} )$ and set ${\ensuremath{\nu}}_{p'}=p'\u_{p}+(1-p')\d_{\varnothing}$ for $p'\in [1-{\ensuremath{\varepsilon}},1]$, where ${\ensuremath{\varepsilon}}$ is from \cref{lem:strong:monotonicity}. Let $\ensuremath{p_{\mathrm{c}}} '$ denote the critical value of this parametrised model. Then \cref{th:linear} shows that \cref{eq:main:1,eq:main:2} hold for ${\ensuremath{\nu}}_{p'}$ for $p'<\ensuremath{p_{\mathrm{c}}} '$. By \cref{lem:strong:monotonicity} and the fact that $(\u_p)_{p\in[0,1]}$ is nondecreasing, it suffices to prove that $\ensuremath{p_{\mathrm{c}}} '=1$.
But this is clear, because the definition of $\ensuremath{p_{\mathrm{c}}} $ gives that $\d_\ensuremath{\mathbf 0} $ is the unique invariant measure for $\u_p={\ensuremath{\nu}}_1$. \end{proof} \section{Bootstrap percolation and cellular automata} \label{sec:BP} \subsection{The correspondence} It was noticed already by Schonmann \cite{Schonmann92} that standard oriented site percolation, instead of a one-dimensional PCA or a two-dimensional percolation, can be viewed as a two-dimensional BP with an i.i.d.\ initial condition. This was exploited in \cite{Hartarsky21} and extended to GOSP subsequently studied in \cite{Hartarsky21GOSP}. We now show that this correspondence in fact extends to all attractive PCA.\footnote{We refer the reader to \cite{Lebowitz90} for a related correspondence between PCA and equilibrium statistical mechanics models.} \begin{prop}[BP--CA correspondence] \label{prop:correspondence} Consider an attractive CA with $\u=\d_{\ensuremath{\mathcal U}}$ for some $\ensuremath{\mathcal U}\in\ensuremath{\mathscr U} $. Let $\th$ be its version with death with rates measure $\ensuremath{\widetilde\u} =p\u+(1-p)\d_\varnothing$ for some $p\in[0,1]$. Let $\ensuremath{\mathcal U}_{x,t}$ be the up-family field used to construct the $\ensuremath{\widetilde\u} $-PCA. Define the $d+1$-dimensional up-family \begin{equation} \label{eq:correspondence} \ensuremath{\mathcal X}=\left\{R\setminus D:D\in\O_R\setminus\ensuremath{\mathcal U}\right\}. \end{equation} Consider the BP process $\o$ defined by the update family $\ensuremath{\mathcal X}$ with initial condition $({\ensuremath{\mathbbm{1}}} _{\ensuremath{\mathcal U}_{x,t}=\varnothing})_{(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}}$. Define its \emph{closure} \begin{equation} \label{eq:def:closure} C=\left\{(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}:\exists s\in\{0,1,\dots\},\o_{(x,t)}(s)=1\right\} \end{equation} and the configuration $C_0=({\ensuremath{\mathbb Z}} ^d\times\{-r,\dots,-1\})\setminus C$. Then for all $t\ge 0$ \begin{equation} \label{eq:etatilde} \th^{C_0}(t)=\{x\in{\ensuremath{\mathbb Z}} ^d:(x,t)\not\in C\} \end{equation} and this is a version of the stationary $\ensuremath{\widetilde\u} $-PCA at its upper invariant measure, that is, the limit as $t\to\infty$ of the law of $\widetilde{\ensuremath{\eta}}^{\ensuremath{\mathbf 1} }(t)$. In particular, $C={\ensuremath{\mathbb Z}} ^{d+1}$ a.s.\ if and only if $\d_\ensuremath{\mathbf 0} $ is the unique invariant measure of the $\ensuremath{\widetilde\u} $-PCA. Moreover, the map $\ensuremath{\mathcal U}\leftrightarrow\ensuremath{\mathcal X}$ is a one-to-one correspondence between $d$-dimensional attractive CA and $d+1$-dimensional BP with update family contained in the lower half-space ${\ensuremath{\mathbb Z}} ^{d}\times\{-1,-2,\dots\}$. \end{prop} \begin{proof} The bijectiveness follows from the fact that the \emph{double complement} map $\ensuremath{\mathcal U}\mapsto\{R\setminus D:D\in\O_R\setminus\ensuremath{\mathcal U}\}$ is an involution of the power set of $\O_R$, since the two complements commute and each of them is an involution. Fix an attractive CA and its corresponding BP as in the statement. Let us first verify \cref{eq:etatilde}. By induction on $t$, it suffices to verify this for $t=0$. By definition $(x,0)\in C$ iff $\ensuremath{\mathcal U}_{x,0}=\varnothing$ (this is the initial condition of $\o$) or there exists a minimal $s>0$ such that $\o_{(x,0)}(s)=1$. The latter, happens iff $R\setminus (\o(s-1)-(x,0))\not\in\ensuremath{\mathcal U}$ and $\o_{(x,0)}(s-1)=0$. 
Hence, $(x,0)\in C$ iff $\ensuremath{\mathcal U}_{x,0}=\varnothing$ or $R\cap (C_0-(x,0))\not\in\ensuremath{\mathcal U}$. But this is equivalent to $\th_x^{C_0}(0)=0$ by definition, so \cref{eq:etatilde} indeed holds. Thus, $1-{\ensuremath{\mathbbm{1}}} _C$ is a trajectory of the $\ensuremath{\widetilde\u} $-PCA. Moreover, its law is clearly invariant by translation in ${\ensuremath{\mathbb Z}} ^{d+1}$, so the process $\th^C_0$ is stationary. It remains to verify that this corresponds to the upper invariant measure. To see this it suffices to prove that \begin{equation} \label{eq:omegaeta} 1-\o_{(x,s)}(t)\ge \th^\ensuremath{\mathbf 1} _x(s)\end{equation} for all $(x,t,s)\in{\ensuremath{\mathbb Z}} ^{d}\times\{1,2,\dots\}^2$ such that $s\ge rt$, since $1-{\ensuremath{\mathbbm{1}}} _C$ is the decreasing limit of $1-\o(t)$. Notice that $1-\o_{(x,s)}(t)$ in fact only depends on the initial condition of $\o$ for $(y,u)\in{\ensuremath{\mathbb Z}} ^d\times\{0,\dots,s\}$, since $s\ge rt$. Define the closure $C'$ as in \cref{eq:def:closure} with the process $\o'$ defined like $\o$ but with initial condition ${\ensuremath{\mathbbm{1}}} _{\ensuremath{\mathcal U}_{y,s}=\varnothing}$ for $(y,u)\in{\ensuremath{\mathbb Z}} ^d\times\{0,\dots,s\}$ and 0 elsewhere. Then in fact $1-\o_{(x,s)}(t)= 1-\o'_{(x,s)}(t)\ge 1-{\ensuremath{\mathbbm{1}}} _{(x,s)\in C'}=\th^\ensuremath{\mathbf 1} _x(s)$. The last equality is \cref{eq:etatilde} applied to a suitable choice of initial condition (which we may choose freely, since the relation is deterministic and not just a.s.). Thus, \cref{eq:omegaeta} is established and the proof is complete. \end{proof} Amusingly, since BP is an attractive CA, one can iterate this correspondence. As an example, consider the $0$-dimensional CA with no memory given by the identity map $0\mapsto 0;1\mapsto1$ (there are four 0-dimensional CA without memory---the identity, the constant $0$, the constant $1$ and a non-attractive one). It is clearly not an eroder, since $0$ does not become $1$. Its corresponding 1-dimensional BP model is East-BP, which makes the right neighbour of a 1 also 1. When applying the correspondence a second time, we obtain the North-East-BP model (up to a linear transformation of the lattice), which is equivalent to standard oriented site percolation. \begin{prop} \label{prop:inhomogeneous} The correspondence of \cref{prop:correspondence} extends to a one-to-one mapping from the set of all $d$-dimensional attractive PCA (death may be integrated directly into $\u$) to $d+1$-dimensional inhomogeneous BP with $\ensuremath{\mathbf 0} $ initial condition and update families measure $\chi$ supported on the set of update families contained in the lower half-space (deaths corresponding to $\varnothing\in\ensuremath{\mathcal X}_x$). Namely, $\chi$ is the image of $\u$ via the mapping $\ensuremath{\mathcal U}\mapsto\ensuremath{\mathcal X}$ of \cref{eq:correspondence} and vice versa. Then the same conclusions still hold (\cref{eq:etatilde} and it being a stationary $\u$-PCA at its upper invariant measure). \end{prop} The proof is identical to the one of \cref{prop:correspondence} and therefore omitted. \subsection{Proof of Theorems \ref{th:BP} and \ref{th:inhomogeneous}} Fix a BP $\o$ with update family $\ensuremath{\mathcal X}$ contained in a half-space. Up to an invertible linear transformation of the lattice, we may assume that this is the lower half-space and denote by $\u=\d_{\ensuremath{\mathcal U}}$ the corresponding attractive CA via \cref{prop:correspondence}. 
The proposition gives us that $\ensuremath{q_{\mathrm{c}}} $ in \cref{eq:def:qc} for $\ensuremath{\mathcal X}$ is in fact equal to $1-\ensuremath{p_{\mathrm{c}}} $ with $\ensuremath{p_{\mathrm{c}}} $ from \cref{eq:def:pc} for the $\ensuremath{\widetilde\u} $-PCA with death rate $1-p$. \Cref{th:linear} applies to the $\ensuremath{\widetilde\u} $-PCA, so \cref{eq:main:1} holds for $p<\ensuremath{p_{\mathrm{c}}} $. Applying \cref{eq:etatilde} to the initial condition equal to ${\ensuremath{\mathbbm{1}}} _{\ensuremath{\mathcal U}_{x,t}=\varnothing}$ for $t\ge 0$ and $0$ otherwise and denoting by $C'$ the corresponding closure, we get that $\th_x^\ensuremath{\mathbf 1} (t)=1$ iff $(x,t)\not\in C'$. Hence, for $p<\ensuremath{p_{\mathrm{c}}} $ there are $c,C>0$ such that for all $(x,t)\in{\ensuremath{\mathbb Z}} ^{d+1}$ \[{\ensuremath{\mathbb P}} _{\ensuremath{\widetilde\u} }\left((x,t)\not\in C'\right)={\ensuremath{\mathbb P}} _{\ensuremath{\widetilde\u} }\left(\th^\ensuremath{\mathbf 1} _x(t)\neq 0\right)\leq Ce^{-ct}.\] Denote by $\o'$ the $\ensuremath{\mathcal X}$-BP with the above initial condition. Then, $(x,t)\not\in C'$ iff $\o'_{(x,t)}(t)=0$, since, by induction on $t$, the $\o'$ process becomes stationary in ${\ensuremath{\mathbb Z}} ^d\times\{0,\dots,t\}$ after $t$ steps. Finally, since $\o'_{(x,t)}(t)\le\o_{(x,t)}(t)$ by attractiveness, the proof of \cref{th:BP} is concluded. \Cref{th:inhomogeneous} is proved identically in view of \cref{prop:inhomogeneous}. \subsection{Proof of Theorems \ref{th:Toom} and \ref{th:orientation:BP}} Fix an attractive CA $\u=\d_\ensuremath{\mathcal U}$ and let $\ensuremath{\mathcal X}$ be its corresponding BP update family via \cref{prop:correspondence}. Consider a configuration ${\ensuremath{\xi}}$ with finitely many $0$s. Applying \cref{eq:etatilde} to this initial condition and its closure $C$, we get that if $\ensuremath{\mathcal U}$ is not an eroder, then $\ensuremath{\mathcal X}$ is supercritical. Inversely, if $\ensuremath{\mathcal X}$ is supercritical, considering the intersection of the closure of a finite initial set with infinite offspring and a horizontal strip of width $r$ (which is clearly finite), we similarly obtain from \cref{eq:etatilde} that $\ensuremath{\mathcal U}$ is not an eroder. Furthermore, \cref{prop:correspondence} grants that $\ensuremath{\mathcal X}$ is subcritical iff $\ensuremath{\mathcal U}$ has at least two invariant measures when the death rate $1-p$ is small enough. Thus, \cref{th:Toom} is equivalent to \cref{th:orientation:BP} restricted to two dimensions and, inversely, \cref{th:Toom} without restrictions on the dimension is equivalent to \cref{th:orientation:BP}. Hence, \cref{th:orientation:BP} follows from \cref{th:Toom}, which is valid in all dimensions \cite{Toom80}*{Theorem 5}. Turning to \cref{th:Toom} in one dimension, in order to avoid circular reasoning, we recall from \cref{subsec:BP} that by \cites{Bollobas15,Balister16} it suffices to verify the following fact, which we prove for completeness (see \cite{Toom80}*{Theorem 6} or \cite{Toom95}*{Theorem 3}). \begin{lem} \label{lem:directions} Fix a $d$-dimensional update family $\ensuremath{\mathcal X}$ contained in a half-space. Then for any $u\in S^{d-1}$ the set of stable directions $v\in S^{d-1}$ such that $\<u,v\><0$ is either empty or has a nonempty interior. Moreover, if $d=2$ and there is no open semicircle of unstable directions, then there exist two opposite directions in the interior of the set of stable directions.
\end{lem} \begin{proof} Let $e_1,\dots, e_{d}$ denote the canonical basis of ${\ensuremath{\mathbb R}} ^{d}$. For concreteness let us assume that the family is contained in the upper half-space $\{x\in{\ensuremath{\mathbb R}} ^{d}:\<x,e_{d}\>>0\}$. It is not hard to see from the definition (see \cite{Bollobas15}*{Remark 3.3}) that the set of stable directions can be written as a finite intersection of finite unions of closed hemispheres containing the direction $e_{d}$ in their interior. Clearly, any closed hemisphere contains the geodesic between any of its points and any point in its interior. Therefore, for any stable direction the geodesics to a neighbourhood of $e_d$ consist of stable directions. Hence, the set of stable directions is the closure of its own interior. The general conclusion then follows immediately. In two dimensions, matters are simpler. We already established that stable directions form a connected closed set, that is, a closed interval of $S^1$. Depending on whether it is smaller or larger than a semicircle, this gives the desired conclusion. \end{proof} To conclude, let us mention that, given opposite stable directions provided in \cref{lem:directions}, the renormalisation argument sketched in \cite{Balister16} proceeds as follows. Divide the plane into large rhombi in the usual way, so that their sides are close to being perpendicular to these directions. Then perform renormalisation, saying that a rhombus is good if it is initially in state $\ensuremath{\mathbf 0} $ for the $\ensuremath{\mathcal X}$-BP, which happens with high probability if the parameter $q$ is small. Then it suffices to verify that an infinite oriented path of good rhombi will remain in state $0$ forever, which follows from a suitable choice of their geometry. \section{Further directions} To conclude, let us mention a few directions for further work. Firstly, in view of \cref{th:BG,th:main}, one would naturally like to know what happens in the cooperative survival phase. That is the interior of the set of absorbing attractive PCA $\u$ such that ${\ensuremath{\mathbb P}} _\u(\forall t>0,{\ensuremath{\eta}}^{\{0\}}(t)\neq\ensuremath{\mathbf 0} )=0$ and with more than one invariant measure. We are aware of no results in this direction. For instance one may expect the following to be true. \begin{ques} For an attractive PCA $\u$ in the cooperative survival phase does one have exponential convergence to the upper invariant measure starting from the $\ensuremath{\mathbf 1} $ initial condition? Equivalently, in the corresponding inhomogeneous BP, does the truncated infection time have an exponential tail, that is, setting $\t_0=\inf\{t> 0:{\ensuremath{\eta}}_0(t)=1\}$, do we have \[\limsup_{t\to\infty}\frac{\log {\ensuremath{\mathbb P}} (\t_0>t|\t_0<\infty)}{t}<0?\] \end{ques} For models, such as the Toom rule with death, for which one can prove that they are in the cooperative survival phase, this follows from the corresponding proof, but we rather ask for non-perturbative results valid throughout the phase. It would also be very interesting to obtain an analogue of \cref{th:main} for PCA with a unique invariant measure not necessarily equal to $\d_\ensuremath{\mathbf 0} $. See \cites{Taati21,Louis04} for progress in this direction under other conditions. Furthermore, it is natural to seek to extend \cref{th:main,th:linear} to continuous time absorbing attractive interacting particle systems with single spin flips.
In the case of the contact process this is a well-known result of Bezuidenhout and Grimmett \cite{Bezuidenhout91} (also see \cite{Swart18}). Finally, in the light of the BP--PCA correspondence of \cref{prop:correspondence,prop:inhomogeneous}, can one transfer more interesting information between the two settings? \section*{Acknowledgements} This work is supported by ERC Starting Grant 680275 ``MALIG.'' We thank Ir\`ene Marcovici, Jan Swart, R\'eka Szab\'o, Siamak Taati and Cristina Toninelli for enlightening discussions. We also thank Réka for bringing important references to our attention. We thank the anonymous referees for helpful remarks on the presentation. \paragraph{Subsequent developments} Since the submission of this manuscript several closely related works have been completed and call for comment. Firstly, as expected, BP universality was extended to higher dimensions by Balister, Bollob\'as, Morris and Smith. In particular, \cite{Balister22} established that update families such that every open hemisphere contains an open set of stable directions are subcritical. One may recover \cref{th:Toom} in any dimension (this result in any dimension is exactly the content of \cite{Toom80}) from \cite{Balister22}*{Corollary 1.6} in the same way that we deduced \cref{th:Toom} from \cite{Balister16}*{Theorem 9}. However, the multiscale renormalisation of \cite{Balister22} is arguably more complex than the Peierls argument of \cite{Toom80} (see also \cite{Swart22}), making this alternative proof less appealing. Similarly, when restricted to families $\ensuremath{\mathcal X}$ contained in a half-space, \cite{Balister22} gives an alternative proof of the most difficult part of \cref{th:orientation:BP}. Secondly, based on a recent generalisation of Toom's approach due to Swart, Szab\'o and Toninelli \cite{Swart22}, Szab\'o and the author \cite{Hartarsky22Toom} gave an alternative proof of the main result of \cite{Balister22} cited above not necessarily restricted to families contained in a half-space, unlike \cref{th:orientation:BP}. To that end they employed a connection between PCA and BP complementary to the one of \cref{prop:correspondence}. Moreover, using both connections simultaneously, they established improved quantitative bounds in Toom's setting of CA with death. \let\d\oldd \let\k\oldk \let\l\oldl \let\L\oldL \let\o\oldo \let\O\oldO \let\r\oldr \let\t\oldt \let\u\oldu \bibliographystyle{plain}
\section{Introduction} \label{intro} \IEEEPARstart{F}{ace} detection, one of the most popular, fundamental and practical tasks in computer vision, is to detect human faces from images and return the spatial locations of faces via bounding boxes~\cite{pascalvoc}, as shown in Fig.~\ref{fig:fd_intro}. Starting with the Viola-Jones (V-J) detector~\cite{Haar-like} in 2001, the solution to face detection has been significantly improved from handcrafting features such as Haar-like features~\cite{Haar-like}, to end-to-end convolutional neural networks (CNNs) for better feature extraction. Face detection is the first step for many face-related applications, such as face recognition, face tracking, facial expression recognition, facial landmarks detection and so on. Those technologies can achieve an overall better performance with faster and more accurate face detectors. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/face_det_example.pdf} \end{center} \caption{Examples of face detection from WIDER Face~\cite{fd-widerface}. A simple case (a) where there is only one clear frontal face. Common variations are in scale (b), pose (c), occlusion (d), expression (e), illumination (f). Red boxes are faces in extreme conditions.} \label{fig:fd_intro} \end{figure} Before deep learning was employed for face detection, the cascaded AdaBoost classifier was the dominant method for face detection. Some algorithms were specifically designed for face detection by using certain kinds of features, such as Haar-like features~\cite{Haar-like}, SURF~\cite{facesurf_li} and Multi-Block LBP~\cite{multi_block_LBP}. In recent years, deep learning has been proven to be more powerful for feature extraction and helps to achieve very impressive accuracy on object detection. Numerous deep models have been designed for generic object detection, which is much more challenging than face detection. Therefore, many models for face detection are adapted from or inspired by models for generic object detection. We can train a deep face detector directly using Faster R-CNN~\cite{fasterrcnn}, YOLO~\cite{yolo} or SSD~\cite{ssd}, and much better detection results can be obtained than with traditional cascaded classifiers. Some similar works can be found, such as Face R-CNN~\cite{fd-facercnn} and Face R-FCN~\cite{fd-facerfcn}, which are modified and improved based on Faster R-CNN and R-FCN~\cite{rfcn}, respectively. Additionally, some other detectors, such as MTCNN~\cite{fd-mtcnn}, HR~\cite{fd-hr}, SSH~\cite{fd-ssh}, were originally designed for face detection. Some techniques in generic object detection have also been adapted into face detection, such as the multi-scale mechanism from SSD, the feature enhancement from FPN~\cite{fpn}, and the focal loss from RetinaNet~\cite{focalloss}, according to the special pattern of human faces for face detection. These techniques led to the proposal of various outstanding face detectors such as S$^3$FD~\cite{fd-s3fd}, PyramidBox~\cite{fd-pyramidbox}, SRN~\cite{fd-srn}, DSFD~\cite{fd-dsfd}, and RetinaFace~\cite{retinaface}. Face detection is sometimes considered a solved problem because the average precision (AP) on many face detection datasets such as PASCAL Face~\cite{fd-pascalface}, AFW~\cite{fd-afw} and FDDB~\cite{fd-fddb} has reached or exceeded 0.990 since 2017\footnote{State-of-the-art AP can be found in the official result pages of the datasets, and \url{https://paperswithcode.com/task/face-detection} which also collects results from published papers.}. 
On the most popular and challenging WIDER Face dataset~\cite{fd-widerface}, the AP has reached 0.921 even on the hard test set. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/highest_ap_yearwise.pdf} \end{center} \caption{ The best AP on the easy, medium and hard subsets of the WIDER Face~\cite{fd-widerface} test set in recent years. } \label{fig:highest_ap_yearwise} \end{figure} But face detection is not a solved problem. If we observe the best results of each year in Fig.~\ref{fig:highest_ap_yearwise}, we can see that the AP is still improving, but slowly, in the recent three years. Therefore, with such near-to-saturated performance improvement, one question should be asked: if a tiny improvement is achieved by a much heavier deep model with a great computational cost, should we consider the model to be a good one? If we look slightly deeper into the implementation of some recent models, we can find that multi-scale testing is heavily used in the evaluations on the WIDER Face benchmark. If we resize the input image to many different scales, such as 1/4, 1/2, 1, 3/2, 2, 4 and more, and feed all those resized images into a detector, the combined results will have a better AP; in other words, the gain is achieved by assembling the multi-scale outputs and suppressing duplicates via non-maximum suppression (NMS), and it is independent of the backbone of the underlying face detector. We list the scales used by some models in Table~\ref{tab:model_test_scales}. None of them tested an image using only one scale. The recent trend is to use even more scales. There is a risk that multiple scales with a heavy computational cost are employed and outstanding accuracy is claimed, which overshadows the performance gained by the detector itself, while the computational cost of such a multi-scale operation is not known. It is also worth noting that most benchmarks do not evaluate the computational cost. Most often, it is difficult for users to know whether the improvement is achieved by a better backbone or by the follow-up computation-intensive multi-scale ensemble strategy. \begin{table}[htbp] \centering \caption{ Different models adopt different ranges and different presets of test scales. '0.25x' denotes shrinking the width and height to 0.25 of their original values, and the others follow similarly. Specifically, 'Sx' and 'Ex' denote shrinking and enlarging the image, respectively, while 'Fx' denotes enlarging the image to a fixed size. Test image sizes stand for re-scaling the smaller side of the image to the given value, and the other side follows the same ratio. } \begin{tabular}{l|l} \hline Model & Test image scales \\ \hline HR,2017\cite{fd-hr} & 0.25x, 0.5x, 1x, 2x \\ S3FD,2017\cite{fd-s3fd} & 0.5x, 1x, Sx, Ex \\ SRN,2019\cite{fd-srn} & 0.5x, 1x, 1.5x, 2.25x, Fx \\ DSFD,2019\cite{fd-dsfd} & 0.5x, 1x, 1.25x, 1.75x, 2.25x, Sx, Ex \\ CSP,2019\cite{fd-csp} & 0.25x, 0.5x, 0.75x, 1x, 1.25x, 1.5x, 1.75x, 2x \\ \hline Model & Test image sizes \\ \hline SSH,2017\cite{fd-ssh} & 500, 800, 1200, 1600 \\ SFA,2019\cite{SFA} & 500, 600, 700, 800, 900, 1000, 1100, 1200, 1600 \\ SHF,2020\cite{SHF} & 100, 300, 600, 1000, 1400 \\ RetinaFace,2020\cite{retinaface} & 500, 800, 1100, 1400, 1700 \\ \hline \end{tabular} \label{tab:model_test_scales} \end{table} We do expect a perfect face detector that is robust and accurate even for faces in extremely difficult conditions, while being extremely fast with a low computational cost. However, we all know the \textit{no free lunch theorem}. 
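To make the multi-scale ensemble strategy explicit, a simplified sketch is given below: the input image is resized with several factors, the same detector is run on every resized copy, the resulting boxes are mapped back to the original resolution and merged with NMS. The detector interface, its output format (rows of $[x_1,y_1,x_2,y_2,\mathrm{score}]$) and the scale set are assumptions made only for illustration and do not correspond to any specific published implementation.
\begin{verbatim}
# Sketch of multi-scale testing with NMS. The `detector` callable and its
# output format (an (N,5) array of [x1, y1, x2, y2, score]) are assumptions.
import cv2
import numpy as np

def nms(boxes, iou_thr=0.4):
    """Greedy non-maximum suppression on an (N,5) array sorted by score."""
    if len(boxes) == 0:
        return boxes
    boxes = boxes[boxes[:, 4].argsort()[::-1]]
    keep = []
    while len(boxes):
        best, boxes = boxes[0], boxes[1:]
        keep.append(best)
        if len(boxes) == 0:
            break
        x1 = np.maximum(best[0], boxes[:, 0]); y1 = np.maximum(best[1], boxes[:, 1])
        x2 = np.minimum(best[2], boxes[:, 2]); y2 = np.minimum(best[3], boxes[:, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_b = (best[2] - best[0]) * (best[3] - best[1])
        area_o = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (area_b + area_o - inter + 1e-9)
        boxes = boxes[iou < iou_thr]
    return np.stack(keep)

def multi_scale_detect(image, detector, scales=(0.5, 1.0, 1.5, 2.0)):
    all_boxes = []
    for s in scales:
        resized = cv2.resize(image, (int(image.shape[1] * s), int(image.shape[0] * s)))
        det = np.asarray(detector(resized), dtype=float)   # assumed (N,5) output
        if len(det):
            det[:, :4] /= s                                 # back to original scale
            all_boxes.append(det)
    if not all_boxes:
        return np.zeros((0, 5))
    return nms(np.concatenate(all_boxes, axis=0))
\end{verbatim}
Note that the cost of such an ensemble grows roughly with the sum of squared scale factors; for example, the scale set $\{0.5, 1, 1.5, 2\}$ costs about $7.5\times$ a single-scale run, before the detection quality of the underlying model is even considered.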
Therefore, in this survey, we investigate the recent deep learning based face detection methods and evaluate them in terms of accuracy and computational cost. The main contributions are as follows. \begin{enumerate} \item Different from previous face detection surveys~\cite{fd-survey2019, fd-survey2015, fd-survey2010, fd-survey2002, fd-survey2001} in which the content is mainly built on reviewing traditional methods, our survey focuses on deep learning-based face detectors. We have noted the existence of surveys~\cite{objectdetection-survey-zhao-2019, objectdetection-survey-zou-2019, objectdetection-survey-wu-2020} on deep learning; however, they focus on generic object detection, not specifically on face detection. In this paper, we provide a clear view of the path by which deep learning based face detection has evolved in recent years. \item Accuracy and efficiency are both studied and analyzed in the paper. In addition to detailed introductions to deep learning based face detectors, some experiments are carried out to analyze different deep face detectors using different metrics. Some tricks to improve accuracy are also introduced. Thus, the paper can help readers better understand how good accuracy and efficiency can be achieved. \item With a focus on the efficiency of face detectors, comprehensive experiments are carried out to evaluate the accuracy and particularly the efficiency of different face detectors. In addition to latency, we also propose an accurate metric for the computational cost of a CNN model. It is \textbf{FL}oating point \textbf{OP}eration\textbf{s} (FLOPs) under certain rules. FLOPs is more neutral than latency, which heavily depends on the hardware and the deep network structure. The code to compute the FLOPs has been released at \url{https://github.com/fengyuentau/PyTorch-FLOPs.git}. \end{enumerate} The rest of the paper is organized as follows. Some key challenges in face detection are summarized in Section~\ref{challenges}. In Section~\ref{review_arch}, we provide a roadmap to describe the development of deep learning-based face detection with detailed reviews. In Section~\ref{face_rep}, we review several fundamental subproblems including backbones, context modeling, the handling of face scale variations and proposal generation. Popular datasets for face detection and state-of-the-art performances are presented in Section~\ref{review_eval}. Section~\ref{comp} reveals the relation between computational cost and AP by conducting extensive experiments on several open-source one-stage face detectors. In addition, speed-focusing face detectors collected from GitHub are reviewed in Section~\ref{review_speed}. Finally, we conclude the paper with a discussion on future challenges in face detection in Section~\ref{conclusion}. \section{Main Challenges} \label{challenges} Most face-related applications need clear frontal faces. Detecting a clear frontal face is a relatively easy task. Some may argue that tiny or occluded faces are useless for the next step, such as face recognition; but this is not the case. Effectively detecting any face in extremely difficult conditions can greatly improve the perception capability of a computer, but it is still a challenging task. If a face is detected and evaluated as a bad quality sample, the subject can be asked to move closer to the camera, or the camera can adjust automatically for a better image. Face detection is still a problem far from being well solved. Many challenges still exist. 
\textbf{Accuracy-related challenges} are from face appearance and imaging conditions. In real-world scenes, there are many different kinds of face appearance, varying in skin color, makeup, expression, the wearing of glasses or masks, and so on. In unconstrained environments, imaging a face can be impacted by various lighting, viewing angles and distances, backgrounds, and weather conditions. The face images will vary in illumination, pose, scale, occlusion, blur and distortion. Face samples in difficult conditions can be found in Fig.~\ref{fig:fd_intro}. There have been several datasets and competitions featuring face detection in unconstrained conditions, such as FDDB~\cite{fd-fddb}, WIDER Face~\cite{fd-widerface} and the WIDER Face Challenge 2019~\footnote{\url{https://competitions.codalab.org/competitions/20146}}. More than 45\% of faces are smaller than $20\times 20$ pixels in WIDER Face. In most face-related applications, we seldom need small faces whose sizes are less than $20$ pixels. However, if we can detect small or even tiny faces, we can resize the original large images to smaller ones and send them to a face detector. Then, the computational cost can be greatly reduced since we only need to detect faces in smaller images. Therefore, a better accuracy sometimes also means a higher efficiency. \textbf{Masked face detection} is becoming more important since people are wearing and will continue to wear masks to prevent COVID-19 in the next few years. Face-related applications did not consider this situation in the past. Wearing masks obviously reduces detection accuracy. Some masks are even printed with logos or cartoon figures. All of these can disrupt face detection. If a face has a mask and sunglasses at the same time, face detection will be even more difficult. Therefore, in the next few years, masked face detection should be explored and studied. \textbf{Efficiency-related challenges} are brought by the great demands on edge devices. With the increasing demands on edge devices, such as smartphones and intelligent CCTV cameras, massive amounts of data are generated every day. We frequently take selfies and photos of others, hold long video meetings, etc. Modern CCTV cameras record 1080P videos constantly at 30 FPS. These result in a great demand for facial data analysis, and the amount of data is considerable. In contrast, edge devices have limited computational capability, storage and battery life to run advanced deep learning-based algorithms. In this case, efficient face detection is essential for face applications on edge devices. \begin{figure*}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/timeline.pdf} \end{center} \caption{ Timeline of milestone face detectors~\cite{fd-cascadecnn, fd-faceness, fd-mtcnn, fd-cmsrcnn, fd-facerfcn, farpn, fd-hr, fd-ssh, fd-s3fd, fd-pyramidbox, fd-srn, fd-dsfd, fd-csp, retinaface, HAMBox, BFBox, ProgressFace}, and remarkable works from object recognition~\cite{bb-vgg, bb-resnet} and object detection~\cite{fasterrcnn, rfcn, ssd, fpn, focalloss, cornernet} (marked in blue, attached to the middle branch). Since the proposal of AlexNet~\cite{alexnet}, various face detection works inspired by deep learning techniques from object recognition and object detection were published in the post-2012 deep learning-based face detection era. The top branch is two/multi-stage face detectors, while the bottom branch is one-stage detectors, which have become the most popular network design adopted by researchers. 
} \label{fig:timeline} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/multi_stage-archs.pdf} \end{center} \caption{ Diagrams of milestone multi/two-stage face detectors~\cite{fd-cascadecnn, fd-mtcnn, fd-cmsrcnn}. Others share similar architectures to these three. } \label{fig:multi-stage-archs} \end{figure*} \section{Face Detection Frameworks} \label{review_arch} Before deep learning was used for face detection, cascaded AdaBoost-based classifiers were the most popular classifiers for face detection. The features used in AdaBoost were designed specifically for faces, not generic objects. For example, the Haar-like~\cite{Haar-like} feature can describe facial patterns of the eyes, mouth and other parts. In recent years, facial features can be automatically learnt from data via deep learning techniques. Therefore, many deep learning-based face detectors are inspired by modern network architectures designed for object detection. Following the popular manner of organizing object detection frameworks, we organize deep learning-based face detectors into three main categories: \begin{itemize} \item Multi-stage face detection frameworks. They are inspired by cascaded classifiers in face detection and are an early exploration of applying deep learning techniques to face detection. \item Two-stage face detection frameworks. The first stage generates some proposals, and the proposals are confirmed in the second stage. Their efficiency should be better than that of multi-stage ones. \item One-stage face detection frameworks. Feature extraction and proposal generation are performed in a single unified network. These frameworks can be further categorized into anchor-based methods and anchor-free methods. \end{itemize} To show how deep learning-based face detection evolves, milestone face detectors and some important object detectors are plotted in Fig.~\ref{fig:timeline}. The two-stage and multi-stage face detectors are on the top branch, and the single-stage ones are on the bottom branch. The generic object detectors are in the middle branch and in blue. A more detailed introduction of those detectors is provided in the following subsections. \subsection{Multi-Stage and Two-Stage Face Detectors} In the early era when deep learning techniques entered face detection, face detectors were designed to have multiple stages, also known as the cascade structure, which has been widely used in most early face detectors. With the remarkable breakthrough brought by Faster R-CNN~\cite{fasterrcnn}, some researchers turned to improving Faster R-CNN on face data. In the cascade structure, features are usually extracted and refined one or multiple times before being fed into classifiers and regressors, so as to reject most of the sliding windows and improve efficiency. As shown on the result page\footnote{{\url{http://vis-www.cs.umass.edu/fddb/results.html}}} of FDDB~\cite{fd-fddb}, Li et al. made an early attempt and proposed their CNN-based face detector, named \textbf{CascadeCNN}~\cite{fd-cascadecnn}. CascadeCNN consists of 3 stages of CNNs, as shown in Fig.~\ref{fig:multi-stage-archs}. Sliding windows are first resized to $12 \times 12$ pixels and fed into the shallow 12-net to reduce candidate windows by 90\%. The remaining windows are then processed by the 12-calibration-net to refine their sizes for face localization. 
Retained windows are then resized to $24 \times 24$ as the input for the combination of the 24-net and 24-calibration-net, and so on for the next CNN combination. CascadeCNN achieved state-of-the-art performance on AFW~\cite{fd-afw} and FDDB, while reaching a compelling speed of 14 FPS for the typical $640 \times 480$ VGA images on a 2.0 GHz CPU. Another attempt at cascaded CNNs for face detection is the well-known \textbf{MTCNN}~\cite{fd-mtcnn} proposed by Zhang et al. MTCNN is composed of 3 subnetworks, which are P-Net for obtaining candidate facial windows, R-Net for rejecting false candidates and refining the remaining candidates, and O-Net for producing the final output with both face bounding boxes and landmarks in a multi-task manner. P-Net is a shallow fully convolutional network with 6 \texttt{CONV} layers, which can take images of any size as input. MTCNN was a great success, achieving state-of-the-art results by large margins on WIDER Face~\cite{fd-widerface}, FDDB and AFW, while reaching 16 FPS on a 2.6 GHz CPU. In two-stage network architectures following the fashion of generic object detection, a region proposal network (RPN)~\cite{fasterrcnn} is required to generate object proposals. The RPN can be considered as a straightforward classification CNN, which generates proposals based on the preset anchors on CNN features, filters out non-objects and refines object proposals. However, as the CNNs shrink the image to extract features, the corresponding output features for tiny faces can occupy less than 1 pixel, which is insufficient to encode rich information. To address this problem, Zhu et al. proposed \textbf{CMS-RCNN}~\cite{fd-cmsrcnn}, which is equipped with a contextual multi-scale design for both the RPN and the final detection. As shown in Fig.~\ref{fig:multi-stage-archs}, multi-scale features from \textit{conv3}, \textit{conv4} and \textit{conv5} are concatenated by shrinking them to the same shape as \textit{conv5} as the input for the RPN, so as to collect more information for tiny faces and also improve the localization capability from low-level layers. CMS-RCNN achieved an AP of 0.899, 0.874, 0.624 on the easy, medium and hard sets of the WIDER Face dataset respectively, outperforming MTCNN by 0.051 (Easy), 0.049 (Medium) and 0.016 (Hard). In addition to CMS-RCNN, there are other works making improvements based on Faster R-CNN. \textbf{Bootstrapping Faster R-CNN}~\cite{Xiaomi} builds a training dataset by iteratively adding false positives from a model's output to optimize Faster R-CNN. \textbf{Face R-CNN}~\cite{fd-facercnn} adopts the same architecture as Faster R-CNN with center loss, online hard example mining and a multi-scale training strategy. \textbf{FDNet}~\cite{fd-fdnet} exploits multi-scale training and testing and a vote-based NMS strategy on top of Faster R-CNN with a light-head design. Position-sensitive average pooling was proposed in \textbf{Face R-FCN}~\cite{fd-facerfcn} to assign different weights to different parts of the face based on R-FCN~\cite{rfcn}. With these improvements considering the special patterns of face data, the methods achieved better performance than their original versions on the same WIDER Face dataset. For both the cascaded multi-stage and the two-stage network designs, the computation is heavily dependent on the number of faces in the image, since more faces lead to more proposals being passed to the next stage inside the network. 
Notably, the multi-scale test metric, which usually enlarges the images multiple times to make tiny faces detectable, can dramatically increase the computational cost on this basis. Considering that the number of faces in an image from an actual scene varies from one face in a selfie to many faces in a large group photo, we have concerns about the runtime robustness of cascaded or two-stage networks. \subsection{One-Stage Face Detectors} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{figs/one_stage-archs.pdf} \end{center} \caption{ Diagrams of milestone one-stage face detectors~\cite{fd-hr, fd-s3fd, fd-ssh, fd-pyramidbox, fd-dsfd, fd-csp}. } \label{fig:single-stage-archs} \end{figure*} In many face-related applications, face detection must be performed in real time. If the system is deployed on edge devices, the available computing power is low. In those kinds of situations, one-stage face detectors are more suitable since their processing time is stable regardless of how many faces there are in the images. Different from the multi/two-stage detectors, the one-stage face detectors perform feature extraction, proposal generation and face detection in a single and unified convolutional neural network, whose runtime efficiency is independent of the number of faces. Dense anchors are designed to replace the proposals in two-stage detectors~\cite{fd-ssh}. Starting from CornerNet~\cite{cornernet}, an increasing number of works use the anchor-free mechanism in their frameworks. \textbf{HR}~\cite{fd-hr} proposed by Hu \textit{et al.} is one of the first to perform anchor-based face detection in a unified convolutional neural network. The backbone of HR is ResNet-101~\cite{bb-resnet} with layers truncated after \texttt{conv4\_5}. Early feature fusion on layers \texttt{conv3\_4} and \texttt{conv4\_5} is performed to encode context, since high-resolution features are beneficial for small face detection. Through experiments on faces clustered into 25 scales, 25 anchors are defined for the 2X, 1X and 0.5X inputs to achieve the best performance over the three input scales. HR outperformed CMS-RCNN~\cite{fd-cmsrcnn} by 0.199 on the WIDER Face validation hard set, and more importantly, the run-time of HR is independent of the number of faces in the image, while that of CMS-RCNN scales up linearly with the number of faces. Different from HR, \textbf{SSH}~\cite{fd-ssh} attempts to detect faces at different scales on different levels of features, as shown in Fig.~\ref{fig:single-stage-archs}. Taking VGG-16~\cite{bb-vgg} as the backbone, SSH detects faces on the enhanced features from \texttt{conv4\_3}, \texttt{conv5\_3} and \texttt{pool5} for small, medium and large faces respectively. SSH introduces a module (the SSH module) that greatly enriches receptive fields to better model the context of faces. The SSH module is widely adopted by later works~\cite{fd-pyramidbox, fd-dsfd, retinaface, HAMBox}, and it turns out to be effective for boosting performance. Since \textbf{S$^3$FD}~\cite{fd-s3fd}, many one-stage face detectors~\cite{fd-pyramidbox, fd-srn, fd-dsfd, fd-csp, retinaface, HAMBox, BFBox, ProgressFace} fully utilize multi-scale features in an attempt to achieve scale-invariant face detection. S$^3$FD extends the headless VGG-16~\cite{bb-vgg} with more convolutional layers, whose strides gradually double from 4 to 128 pixels, so as to cover a larger range of face scales. 
\textbf{PyramidBox}~\cite{fd-pyramidbox} adopts the same backbone as S$^3$FD, integrates FPN~\cite{fpn} to fuse adjacent-level features for semantic enhancement, and improves the SSH module with wider and deeper convolutional layers inspired by Inception-ResNet~\cite{Inception-v4} and DSSD~\cite{dssd}. \textbf{DSFD}~\cite{fd-dsfd} also inherits the backbone from S$^3$FD, but enhances the multi-scale features by the Feature Enhance Module (FEM), so that detection can be made on two shots: one from the non-enhanced multi-scale features, and the other from the enhanced features. The same-scale features from the second shot have larger RFs than those from the first shot, but also have smaller RFs than the next-level features from the first shot, indicating that the face scales are split more finely across these multi-scale detection layers. Similarly, \textbf{SRN}~\cite{fd-srn} has a dual-shot network but is trained differently on multi-scale features: low-level features need two-step classification for refinement, since they have higher resolution and contribute the vast majority of anchors and also negative samples; high-level features have lower resolution, for which two-step regression following Cascade R-CNN~\cite{cascadercnn} is worthwhile to obtain more accurate bounding boxes. There are also some significant anchor-based methods using the FPN~\cite{fpn} as the backbone. \textbf{RetinaFace}~\cite{retinaface} adds one more pyramid layer on top of the FPN and replaces the \texttt{CONV} layers with the deformable convolution network (DCN)~\cite{DCN, DCNv2} within FPN's lateral connections and context module. RetinaFace models a face in three ways: a 3D mesh (1k points), a 5-landmark mask (5 points), and a bounding box (2 points). Cascade regression~\cite{cascadercnn} is employed with a multi-task loss in RetinaFace to achieve better localization. Instead of using handcrafted structures, Liu \textit{et al.} proposed \textbf{BFBox}~\cite{BFBox}, which explores face-appropriate FPN architectures using the successful Neural Architecture Search (NAS). Liu \textit{et al.} decouple the FPN into the backbone and the FPN connections, the former of which can be replaced by VGG~\cite{bb-vgg}, ResNet~\cite{bb-resnet} or a backbone from NAS, and the latter of which can be top-down, bottom-up or cross-level fusion from NAS. Since the proposal of CornerNet~\cite{cornernet} back in 2018, which directly predicts the top-left and bottom-right points of bounding boxes instead of relying on prior anchors, many explorations~\cite{fcos, zhoucenternet, extremenet, reppoints} have been made to remodel object detection more semantically using the anchor-free design. \textbf{CSP}~\cite{fd-csp} models a face bounding box as a center point and the scale of the box, as shown in Fig.~\ref{fig:single-stage-archs}. CSP takes multi-scale features from a modified ResNet-50~\cite{bb-resnet}, and concatenates them using transposed convolution layers to take advantage of rich global and local information for the detection heads. In particular, the anchor-free detection head can also be an enhancement module for anchor-based heads. \textbf{ProgressFace}~\cite{ProgressFace} appends an anchor-free module to provide more positive anchors for the highest-resolution feature maps in the FPN, so as to reduce the imbalance of positive and negative samples for small faces. One-stage frameworks have been popular for face detection in recent years for the following three reasons. (a) The runtime of one-stage face detectors is independent of the number of faces in an image by design. 
Therefore, their runtime efficiency is robust. (b) It is computationally efficient and straightforward for one-stage detectors to reach near scale invariance by contextual modeling and multi-scale feature sampling. (c) Face detection is a relatively less complex task than general object detection. This means that innovations and advanced network designs in object detection can be quickly adapted to face detection by considering the special patterns of faces. \section{Face Representation} \label{face_rep} The key idea of face detection has never changed, whether in the traditional era or the deep learning era: to find the common patterns of all faces in a dataset. In the traditional era, many handcrafted features, such as SIFT~\cite{SIFT}, Haar~\cite{Haar-like} and HOG~\cite{HOG}, were employed to extract local features from the image, which were aggregated by approaches such as AdaBoost into a higher-level representation of faces. Different from traditional methods, which require rich prior knowledge to design handcrafted features, deep convolutional neural networks can directly learn even more powerful features from face images. A deep learning-based face detection model can be considered as two parts: a CNN backbone and several detection branches. Starting from some popular CNN backbones, we introduce the feature extraction methods that work towards face scale invariance, as well as several strategies to generate proposals for face detection. \subsection{Popular CNN Backbones} \label{backbones} In most deep face detectors, there is a CNN backbone for feature extraction. Some popular backbone networks are listed in Table~\ref{tab:cnn-backbones}. They are VGG-16 from the VGGNet~\cite{bb-vgg} series, ResNet-50/101/152 from the ResNet~\cite{bb-resnet} series, and MobileNet~\cite{MobileNet-v1}. These models are powerful and can achieve good accuracy on face detection, but they are a little heavy. \begin{table}[htbp] \centering \caption{ CNN backbones commonly used by modern deep learning-based face detectors. FC layers of these CNNs are ignored when calculating '\#CONV Layers', '\#Params' and 'FLOPs'. The input size for calculating 'FLOPs' is $224 \times 224$. The calculation of FLOPs is discussed in Section~\ref{comp}. 'Top-1 Error' refers to the performance on the ImageNet~\cite{imagenet} validation set. Note that 9 of the 20 \texttt{CONV} layers in MobileNet~\cite{MobileNet-v1} are depth-wise. } \begin{tabular}{|c|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}CNN\\ Backbones\end{tabular} & \begin{tabular}[c]{@{}c@{}}\#CONV\\ Layers\end{tabular} & \begin{tabular}[c]{@{}c@{}}\#Params\\ ($\times 10^6$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}FLOPs\\ ($\times 10^9$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Top-1\\ Error\end{tabular} \\ \hline VGG-16 & 13 & 14.36 & 30.72 & 28.07\% \\ \hline ResNet-50 & 52 & 23.45 & 8.25 & 22.85\% \\ \hline ResNet-101 & 136 & 42.39 & 15.72 & 21.75\% \\ \hline ResNet-152 & 188 & 56.87 & 23.19 & 21.43\% \\ \hline MobileNet & 20 & 3.22 & 1.28 & 29.40\% \\ \hline \end{tabular} \label{tab:cnn-backbones} \end{table} \begin{figure}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/scale_dis.pdf} \end{center} \caption{ The distribution of face scales on the WIDER Face~\cite{fd-widerface} dataset. } \label{fig:Scale_dis} \end{figure} Early attempts on deep learning-based face detection were cascaded structures that did not adopt the above CNN architectures. 
Even a simply structured CNN is much more computationally heavy than AdaBoost, so cascaded CNNs are also computationally heavy. With breakthroughs in object detection, some of the techniques have been borrowed and applied to face detection. VGG-16~\cite{bb-vgg}, which has 13 \texttt{CONV} layers, is the first choice of baseline backbone for many face detectors, such as SSH~\cite{fd-ssh}, S$^3$FD~\cite{fd-s3fd} and PyramidBox~\cite{fd-pyramidbox}. Performance improvements can easily be obtained by simply swapping the backbone from VGG-16 to ResNet-50/101/152~\cite{bb-resnet}, as shown in~\cite{fd-dsfd}. Since state-of-the-art methods have achieved AP \textgreater 0.900 even on the WIDER Face hard sets, it is common for recent face detectors~\cite{fd-dsfd, ProgressFace, tinaface} to be equipped with a deeper and wider backbone for a higher AP, such as ResNet-152 and ResNets with FPN~\cite{fpn} connections. Liu \textit{et al.} employ Neural Architecture Search (NAS) to search for face-appropriate backbones and FPN connections. One of the most inexpensive choices is ResNet-50, listed in Table~\ref{tab:cnn-backbones}, which has fewer parameters and fewer FLOPs while achieving very similar performance compared to deeper nets. Another choice for state-of-the-art face detectors to reach real-time speed is to change the backbone to MobileNet~\cite{MobileNet-v1}, which has similar performance to VGG-16 but is one order of magnitude smaller in '\#Params' and FLOPs. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/face_in_scales.pdf} \end{center} \caption{ A face in different scales. Could you tell whether the images of sizes $4 \times 4$ and $8 \times 8$ contain a face? } \label{fig:face_in_scales} \end{figure} \subsection{Towards Face Scale Invariance} One of the major challenges for face detection is the large span of face scales. As the statistics in Fig.~\ref{fig:Scale_dis} show, there are 157,025 and 39,123 face bounding boxes in the training and validation sets respectively; in both sets, more than 45\% of the face bounding boxes are $16 \times 16$ pixels or smaller, and a non-negligible $\sim$1\% are $256 \times 256$ or larger. We choose these scales to perform clustering to match the strides of the feature maps selected for detection; for example, there is only 1 pixel in the feature maps of stride 4 for encoding a face of size equal to or less than $4 \times 4$. We also present the visual differences among scales in Fig.~\ref{fig:face_in_scales}. It is challenging even for humans to tell whether the image of size $16 \times 16$ contains a face. In the following, we describe the mechanisms by which face detectors approach face scale invariance, even for tiny faces. Most of the modern face detectors are anchor-based. Anchors are predefined boxes of different scales and aspect ratios attached to each pixel in the feature maps, which serve as proposals to be matched with the ground-truth faces. More details about anchors are provided in Section~\ref{proposal_generation}. As \cite{fd-s3fd} noted, since the predefined anchor scales are discrete while face scales in the wild change continuously, faces whose scales are distributed away from the anchor scales cannot match enough anchors, which results in a low recall rate. A simple solution for a trained face detector is to perform a multi-scale test on an image pyramid, which is built by progressively resizing the original image. This is equivalent to re-scaling the faces, and hopefully brings out-of-range faces back into the detectable range of scales. 
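To make this multi-scale test procedure concrete, the following is a minimal sketch in Python; \texttt{detect\_faces} and \texttt{nms} are hypothetical placeholders for a trained single-scale detector and a standard NMS routine, and the scale ratios are illustrative only.
\begin{verbatim}
import cv2
import numpy as np

def multi_scale_detect(image, detect_faces, nms,
                       ratios=(0.5, 1.0, 2.0), iou_thresh=0.4):
    # Run a single-scale detector on an image pyramid and
    # merge the detections across scales.
    all_boxes = []
    for r in ratios:
        resized = cv2.resize(image, None, fx=r, fy=r)
        # each row: x1, y1, x2, y2, score
        boxes = np.asarray(detect_faces(resized), dtype=np.float32)
        if boxes.size == 0:
            continue
        boxes[:, :4] /= r   # map boxes back to the original resolution
        all_boxes.append(boxes)
    if not all_boxes:
        return np.zeros((0, 5), dtype=np.float32)
    merged = np.vstack(all_boxes)
    keep = nms(merged, iou_thresh)  # suppress duplicates across scales
    return merged[keep]
\end{verbatim}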
This solution does not require retraining the detector, but it may come with a sharp increase in redundant computation, since there is no definite answer to how deep a pyramid we should build to match the limited scale invariance of a trained CNN. A better solution to face scale invariance is to make full use of the feature maps produced in CNNs. One can easily observe that the layers of standard CNN backbones gradually decrease in size. The subsampling of these layers naturally builds up a pyramid with different strides and receptive fields (RFs). It produces multi-scale feature maps. In general, high-level feature maps produced by later layers with large RFs are encoded with strong semantic information, leading to robustness to variations such as illumination, rotation and occlusion. Low-level feature maps produced by early layers with small RFs are less sensitive to semantics, but have high resolution and rich details, which are beneficial for localization. To take advantage of both, a number of methods have been proposed, which can be categorized into \textbf{modeling context}, \textbf{detecting on a feature pyramid}, and \textbf{predicting face scales}. \textbf{Modeling context:} Additional context is essential for detecting faces, especially for detecting small ones. HR~\cite{fd-hr} shows that context modeling by fusing feature maps of different scales can dramatically improve the accuracy of detecting small faces. Following a fusion strategy similar to HR, \cite{SHF} detects on three different dilated \texttt{CONV} branches, aiming to enlarge the RF without too much increase in computation. \cite{fd-cmsrcnn} downsamples feature maps of strides 4 and 8 to concatenate with those of stride 16, so as to improve the capability of the RPN to produce proposals for faces at different scales. SSH~\cite{fd-ssh} exploits an approach similar to Inception~\cite{Inception-V1}, which concatenates the output from three \texttt{CONV} branches that have $3 \times 3$, $5 \times 5$ and $7 \times 7$ filters respectively. PyramidBox~\cite{fd-pyramidbox} first adopts an FPN~\cite{fpn} module to build up context and is further enhanced by deeper and wider SSH modules. \cite{fd-dsfd} improves the SSH module by replacing \texttt{CONV} layers with dilated \texttt{CONV} layers. \cite{fd-csp} upsamples feature maps of strides 8 and 16 to concatenate with those of stride 4, which are fed to an FCN to produce center, scale and offset heatmaps. The fusion of feature maps combines rich semantics from high-level feature maps with rich geometric information from low-level feature maps, based on which the detectors can improve their capability of localization and classification towards face scale invariance. Meanwhile, the fusion of feature maps also introduces more layers, such as \texttt{CONV} and \texttt{POOL} layers to adjust scales and channels, which creates additional computational overhead. \textbf{Detecting on a feature pyramid}: Inspired by SSD~\cite{ssd}, a majority of recent approaches, such as \cite{fd-ssh,fd-s3fd,fd-pyramidbox,fd-srn,fd-dsfd,retinaface}, detect on multiple feature maps of different scales and combine the detection results. It is considered to be an effective method for balancing speed and accuracy. SSD~\cite{ssd} puts default boxes on each pixel of the feature maps from 6 detection layers that have strides of 8, 16, 32, 64 and 128. 
Sharing a similar CNN backbone with SSD, \cite{fd-s3fd,fd-pyramidbox} detect on a wider range of layers, which have strides gradually doubling from 4 to 128 pixels. SRN~\cite{fd-srn} and DSFD~\cite{fd-dsfd} introduce the two-stream mechanism, which detects on both the detection layers from the backbone and extra layers applied on the detection layers for feature enhancement. Different from subsampling on more layers, \cite{fd-ssh,SFA,retinaface} detect only on the last three levels of feature maps, which are enhanced by their context modeling methods. By detecting on a feature pyramid, the detection layers are implicitly trained to be sensitive to different scales, while this also leads to an increase in model size and redundant computation, since the dense sampling may cause some duplicate results from adjacent-level layers. \textbf{Predicting face scales}: To eliminate the redundancy from pyramids, several approaches~\cite{SAFD,RSA,S2AP} predict the face scales before making a detection. \cite{SAFD} first generates a global face scale histogram from the input image by the Scale Proposal Network (SPN), which is trained with image-level ground truth histogram vectors and without face location information. A sparse image pyramid is built according to the output histogram, so as to have faces rescaled to the detectable range of the later single-scale RPN. Similarly, \cite{RSA} detects on a feature pyramid without unnecessary scales, which is built by applying the scale histogram to sequential ResNet~\cite{bb-resnet} blocks that downsample feature maps recursively. \cite{S2AP} predicts not only face scales but also face locations by a shallow ResNet18~\cite{bb-resnet} with scale attention and spatial attention attached, named S$^2$AP. S$^2$AP generates a 60-channel feature map, meaning face scales are mapped to 60 bins, each of which is a spatial heatmap that has a high response to its responsible face scale. With the 60-channel feature maps, it is possible to skip unnecessary computation on the low-response channels and the low-response spatial areas by a masked convolution. \subsection{Proposal Generation} \label{proposal_generation} Faces in the wild can appear at any possible location and scale in the image. The general pipeline for most of the early successful face detectors is to first generate proposals in the sliding-window manner, extract features from the windows using handcrafted descriptors~\cite{Haar-like,fd-afw,6619289,1410446} or CNNs~\cite{fd-cascadecnn,fd-mtcnn}, and finally apply face classifiers. However, inspired by RPN~\cite{fasterrcnn} and SSD~\cite{ssd}, modern anchor-based face detectors generate proposals by applying $k$ anchor boxes on each pixel of the extracted CNN features. Specifically, 3 scales and 3 aspect ratios are used in Faster R-CNN~\cite{fasterrcnn}, yielding $k=9$ anchors on each pixel of the feature maps. Moreover, the detection layer takes the same feature maps as input, yielding $4k$ outputs encoding the coordinates of the $k$ anchor boxes from the regressor and $2k$ outputs for face scores from the classifier. Considering that most face boxes are nearly square, modern face detectors tend to set the aspect ratio of anchors to 1, while the scale settings vary. HR~\cite{fd-hr} defines 25 scales so as to match the cluster results on the WIDER Face~\cite{fd-widerface} training set. 
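As an illustration of such dense anchor tiling, the following minimal sketch attaches square anchors (aspect ratio 1) to every pixel of a single detection layer; the stride and scale values are illustrative only and do not correspond to any particular detector.
\begin{verbatim}
import numpy as np

def generate_anchors(feat_h, feat_w, stride, scales=(16, 32, 64)):
    # Attach one square anchor per scale (aspect ratio 1) to the
    # center of every feature-map pixel, in image coordinates.
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx = (x + 0.5) * stride
            cy = (y + 0.5) * stride
            for s in scales:
                anchors.append([cx - s / 2, cy - s / 2,
                                cx + s / 2, cy + s / 2])
    return np.array(anchors, dtype=np.float32)

# e.g. a stride-8 layer of a 640x640 input:
# 80 * 80 * 3 = 19,200 anchors
anchors = generate_anchors(80, 80, stride=8)
\end{verbatim}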
S$^3$FD assigns the anchor scale of 4 times the stride of the current layer to keep anchor sizes smaller than effective receptive fields~\cite{ERF} and ensure the same density of different scale anchors on the image. PyramidBox~\cite{fd-pyramidbox} introduces PyramidAnchors, which generates a group of anchors with larger regions corresponding to a face, such as head and body boxes, to have more context to help detect faces. In~\cite{zcc}, extra shifted anchors are added to increase the anchor sample density, which significantly increases the average IoU between anchors and small faces. GroupSampling~\cite{GroupSampling} assigns anchors of different scales only on the bottom pyramid layer of FPN~\cite{fpn}, but it groups all training samples according to the anchor scales, and randomly samples from groups to ensure the positive and negative sample ratios between groups are the same. \section{Datasets and Evaluation} \label{review_eval} \begin{table*}[htbp] \centering \caption{Comparison of currently accessible face detection datasets, listed in the order of publication or start year. Note that UCCS~\cite{uccs} and WILDEST Face~\cite{wildestface} are not included because their data is not currently available. 'Blur', 'App.', 'Ill.', 'Occ.', 'Pose' in the 'Variations' columns denote blur, appearance, illumination, occlusion and pose respectively.} \label{tab:dataset-stats} \begin{tabular}{|c|c|c|c|c|c|c|c|ccccc|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{\#Images} & \multirow{2}{*}{\#Faces} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#Faces\\ Per Image\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}} AVG Resolution\\ ($W \times H$)\end{tabular}} & \multicolumn{3}{c|}{Split} & \multicolumn{5}{c|}{ Variations} \\ \cline{6-13} & & & & & Train & Val & Test & Blur & App. & Ill. & Occ. & Pose \\ \hline FDDB~\cite{fd-fddb} & 2,845 & 5,171 & 1.8 & $377 \times 399$ & - & - & 100\% & \checkmark & & & \checkmark & \checkmark \\ \hline AFW~\cite{fd-afw} & 205 & 468 & 2.3 & $1491 \times 1235$ & - & - & 100\% & & \checkmark & & & \checkmark \\ \hline PASCAL Face~\cite{fd-pascalface} & 851 & 1,335 & 1.5 & - & - & - & 100\% & & & & & \\ \hline MALF~\cite{fd-malf} & 5,250 & 11,931 & 2.2 & - & - & - & 100\% & & \checkmark & & \checkmark & \checkmark \\ \hline WIDER Face~\cite{fd-widerface} & 32,203 & 393,703 & 12.2 & $1024 \times 888$ & 40\% & 10\% & 50\% & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline MAFA~\cite{mafa} & 30,811 & 39,485 & 1.2 & $516 \times 512$ & 85\% & - & 15\% & & \checkmark & & \checkmark & \\ \hline IJB-A~\cite{ijba} & 48,378 & 497,819 & 10.2 & $1796 \times 1474$ & 50\% & - & 50\% & & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline IJB-B~\cite{ijbb} & 76,824 & 135,518 & 1.7 & $894 \times 599$ & - & - & 100\% & & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline IJB-C~\cite{ijbc} & 138,836 & 272,335 & 1.9 & $1010 \times 671$ & - & - & 100\% & & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline 4K-Face~\cite{4kface} & 5,102 & 35,217 & 6.9 & $3840 \times 2160$ & - & - & 100\% & & & & & \\ \hline UFDD~\cite{ufdd} & 6,425 & 10,897 & 1.6 & $1024 \times 774$ & - & - & 100\% & \checkmark & & \checkmark & & \\ \hline DARK Face~\cite{darkface} & 6,000 & 43,849 & 7.3 & $1080 \times 720$ & 100\% & - & - & & & \checkmark & & \\ \hline \end{tabular} \end{table*} To evaluate different face detection algorithms, datasets are needed. 
There have been several public datasets, which are FDDB~\cite{fd-fddb}, AFW~\cite{fd-afw}, PASCAL Face~\cite{fd-pascalface}, MALF~\cite{fd-malf}, WIDER Face~\cite{fd-widerface}, MAFA~\cite{mafa}, 4K-Face~\cite{4kface}, UFDD~\cite{ufdd} and DARK Face~\cite{darkface}. These datasets all consist of colored images from real-life scenes. Different datasets may utilize different evaluation criteria. In Section~\ref{datasets}, we present overviews of the different datasets covering some statistics such as the number of images and faces, the source of images, the rules of labeling and the challenges brought by each dataset. A detailed analysis of the face detection evaluation criteria is also included in Section~\ref{eval-criterion}. Detection results on the datasets are provided and analyzed in Section~\ref{datasets-results}. \subsection{Datasets} \label{datasets} Some essential statistics of currently accessible datasets are summarized in Table~\ref{tab:dataset-stats}, including the total number of images and faces, faces per image, how the data was split into different sets, etc. More details are introduced in the following part. \textbf{FDDB}\footnote{\url{http://vis-www.cs.umass.edu/fddb/}}~\cite{fd-fddb} is short for \textbf{F}ace \textbf{D}etection \textbf{D}ataset and \textbf{B}enchmark, which has been one of the most popular datasets for face detector evaluation since its publication in 2010. The images of FDDB were collected from Yahoo! News, 2,845 of which were selected after filtering out duplicate data. Faces were excluded based on the following factors: (a) height or width less than 20 pixels, (b) the two eyes being non-visible, (c) the angle between the nose and the ray from the camera to the head being less than 90 degrees, (d) failure in estimating the position, size or orientation of a face by a human. This left 5,171 faces, which were annotated by drawing elliptical face regions covering from the forehead to the chin vertically, and from the left cheek to the right cheek horizontally. FDDB helped advance unconstrained face detection in terms of robustness to expression, pose, scale and occlusion. However, its images can be heavily biased toward celebrity faces since they were collected from the news. It is also worth noting that although the elliptical style of face label adopted by FDDB is closer to human cognition, it is not adopted by later datasets and deep learning-based face detectors, which favor the bounding box style with a relatively easier method for defining positive/negative samples by calculating the Intersection over Union (IoU). Zhu et al. built an annotated faces in-the-wild (\textbf{AFW}\footnote{\url{http://www.cs.cmu.edu/~deva/papers/face/index.html}}) dataset~\cite{fd-afw} by randomly sampling images with at least one large face from Flickr. 468 faces were annotated from 205 images, each of which is labeled with a bounding box and 6 landmarks. \textbf{PASCAL Face}\footnote{\url{http://host.robots.ox.ac.uk/pascal/VOC/}}~\cite{fd-pascalface} was constructed by selecting 851 images from the PASCAL VOC~\cite{pascalvoc} test set with 1,335 faces annotated. Since the two datasets were built to help evaluate the face detectors proposed by~\cite{fd-afw} and~\cite{pascalvoc}, they only contain a few hundred images, resulting in limited variations in face appearance and background. Yang et al. 
created the \textbf{M}ulti-\textbf{A}ttribute \textbf{L}abelled \textbf{F}aces~\cite{fd-malf} (\textbf{MALF}\footnote{\url{http://www.cbsr.ia.ac.cn/faceevaluation/}}) dataset for fine-grained evaluation of face detection in the wild. The MALF dataset contains 5,250 images from Flickr and Baidu Search with 11,931 faces labeled, which is an evidently larger dataset than FDDB, AFW and PASCAL Face. The faces in MALF were annotated by drawing axis-aligned square bounding boxes, attempting to contain a complete face with the nose in the center of the bounding box. This may introduce noise for training face detectors since a square bounding box containing a 90-degree side face can have over half of its content being cluttered background. In addition to labeling faces, some attributes were also annotated, such as gender, pose and occlusion. In 2016, \textbf{WIDER Face}\footnote{\url{http://shuoyang1213.me/WIDERFACE/}}~\cite{fd-widerface} was released, which has been the most popular and widely used face detection benchmark. The images in WIDER Face were collected from popular search engines for predefined event categories following LSCOM~\cite{lscom} and examined manually to filter out similar images and images without faces, resulting in 32,203 images in total for 61 event categories, which were split into 3 subsets for training, validation and testing. To keep large variations in scale, occlusion and pose, the annotation was performed following two main policies: (a) a bounding box should tightly contain the forehead, chin and cheeks and is drawn for each recognizable face, and (b) an estimated bounding box should be drawn for an occluded face, producing 393,703 annotated faces in total. The number of faces per image reaches 12.2, and 50\% of the faces have a height between 10 and 50 pixels. WIDER Face outnumbers the other datasets in Table~\ref{tab:dataset-stats} by a large margin. It means WIDER Face pays never-seen-before attention to small face detection by providing a large number of images with the densest small faces for training, validation and testing. Furthermore, the authors of WIDER Face defined 'easy', 'medium' and 'hard' levels for the validation and test sets based on the detection rate of EdgeBox~\cite{edgebox}. This offers a much more detailed and fine-grained evaluation for face detectors. Hence, the WIDER Face dataset greatly advances the research on CNN-based face detectors, especially multi-scale CNN designs and the utilization of context. The remaining datasets listed in Table~\ref{tab:dataset-stats} are less generic than those reviewed above, and focus on face detection in specified and different aspects. The \textbf{MAFA}\footnote{\url{http://www.escience.cn/people/geshiming/mafa.html}}~\cite{mafa} dataset focuses on masked face detection, containing 30,811 images with 39,485 masked faces labeled. In addition to the location of eyes and masks, the orientation of the face, the occlusion degree and the mask type were also annotated for each face. The IJB series\footnote{\url{https://www.nist.gov/programs-projects/face-challenges}}~\cite{ijba, ijbb, ijbc} were collected for multiple tasks, including face detection, verification, identification, and identity clustering. IJB-C is the combination of IJB-A and IJB-B with some new face data. \textbf{4K-Face}\footnote{\url{https://github.com/Megvii-BaseDetection/4K-Face}}~\cite{4kface} was built for the evaluation of large face detection, and contains 5,102 4K-resolution images with 35,217 large faces (\textgreater512 pixels). 
\textbf{UFDD}\footnote{\url{https://ufdd.info}}~\cite{ufdd} provides a test set with 6,425 images and 10,897 faces with variations in weather conditions and degradations such as lens impediments. \textbf{DARK Face}\footnote{\url{https://flyywh.github.io/CVPRW2019LowLight/}}~\cite{darkface} concentrates on face detection in low-light conditions, and provides 6,000 low-light images for training dark face detectors. Since the images are captured in real-world nighttime scenes such as streets, each image in DARK Face contains 7.3 faces on average, which is relatively dense. \begin{figure*}[htbp] \centering \subfloat[Discontinuous ROC curves]{ \includegraphics[width=0.45\linewidth]{figs/fddb-disc_roc.png} } \qquad \subfloat[Continuous ROC curves]{ \includegraphics[width=0.45\linewidth]{figs/fddb-cont_roc.png} } \caption{ The results on the FDDB dataset, which are from the result page of FDDB {\url{http://vis-www.cs.umass.edu/fddb/results.html}}. } \label{fig:fddb_sota} \end{figure*} \begin{figure*}[htbp] \subfloat[WIDER Face Validation Set]{ \includegraphics[width=1.0\linewidth]{figs/widerface-val.png} } \\ \subfloat[WIDER Face Test Set]{ \includegraphics[width=1.0\linewidth]{figs/widerface-test.png} } \caption{ The results on the WIDER Face validation and test sets. The figures are from the WIDER Face homepage \url{http://shuoyang1213.me/WIDERFACE/}. } \label{fig:wider_sota} \end{figure*} \subsection{Accuracy Evaluation Criteria} \label{eval-criterion} There are mainly two accuracy evaluation criteria adopted by the datasets reviewed above. One is the receiver operating characteristic (ROC) curve obtained by plotting the true positive rate (TPR) against the number of false positives, as adopted by FDDB~\cite{fd-fddb}, MALF~\cite{fd-malf}, UCCS~\cite{uccs} and IJB~\cite{ijbc}. The other is the most popular evaluation criterion from PASCAL VOC~\cite{pascalvoc}, which plots the precision against the recall and calculates the average precision (AP), as adopted by AFW~\cite{fd-afw}, PASCAL Face~\cite{fd-pascalface}, WIDER Face~\cite{fd-widerface}, MAFA~\cite{mafa}, 4K-Face~\cite{4kface}, UFDD~\cite{ufdd}, DARK Face~\cite{darkface} and Wildest Face~\cite{wildestface}. Since these two kinds of evaluation criteria are two different methods for revealing the performance of detectors under the same calculation of the confusion matrix~\footnote{\url{https://en.wikipedia.org/wiki/Confusion_matrix}}, we choose the most popular evaluation criterion, AP, calculated from the precision-against-recall curve, in this paper. To obtain a precision-against-recall curve, the confusion matrix, which defines the true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN) from the detections and ground truths, should first be calculated. A true positive is a detection result matched with a ground truth; otherwise, it is a false positive. The unmatched ground truths are defined as the false negatives. True negatives are not applied here since the background can be a large part of the image. To define whether two regions are matched or not, the commonly used intersection over union (IoU), also known as the Jaccard overlap, is applied: \begin{equation} \label{eq:ioudef} IoU = \frac{area(P \cap GT)}{area(P \cup GT)} \end{equation} where $P$ is the predicted region, and $GT$ is the ground truth region. 
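For clarity, the following minimal sketch evaluates Eq.~(\ref{eq:ioudef}) for two axis-aligned boxes given in $(x_1, y_1, x_2, y_2)$ form.
\begin{verbatim}
def iou(pred, gt):
    # Boxes are given as (x1, y1, x2, y2).
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_p + area_g - inter
    return inter / union if union > 0 else 0.0

# 2500 / 17500 ~= 0.14 for two partially overlapping boxes
print(iou((0, 0, 100, 100), (50, 50, 150, 150)))
\end{verbatim}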
In a widely used setting, the IoU threshold is set to 0.5, meaning that if the IoU of a predicted region and a ground truth region is greater than or equal to 0.5, the predicted region is marked as matched and thus a true positive; otherwise, it is a false positive. After determining the true or false positives for each detection, the next step is to calculate the precision and recall from the detection result list sorted by score in descending order to plot the precision-against-recall curve. A granular confidence gap can be defined to sample more precision and recall values, but for a simple explanation, we sample at every detection result. In the $n$th sampling, we calculate the precision and recall from the top-$n$ detection results: \begin{align} \label{eq:prec-recall-def} Precision_{n} = \frac{TP_{n}}{TP_{n} + FP_{n}} \\ Recall_{n} = \frac{TP_{n}}{TP_{n} + FN_{n}} \end{align} where $TP_{n}$, $FP_{n}$ and $FN_{n}$ are the true positives, false positives and false negatives from the top-$n$ results respectively. Let us say we have 1,000 detection results; then, we have 1,000 pairs of $(recall_i, precision_i)$, which are enough for plotting the curve. We can compute the area under the precision-against-recall curve, which is the AP, to represent the overall performance of a face detector. Under the single IoU threshold setting of 0.5 in the WIDER Face evaluation, the top AP for the hard test subset of WIDER Face reached 0.924. In the WIDER Face Challenge 2019, which uses the same data as the WIDER Face dataset but evaluates face detectors at 10 IoU thresholds of 0.50:0.05:0.95, the top average AP reaches 0.5756. \subsection{Results on Accuracy} \label{datasets-results} To understand the progress in recent years on face detection, the results on different datasets are collected from their official homepages. Because of space limitations, only the results from the two most popular datasets are listed. They are Fig.~\ref{fig:fddb_sota} for FDDB~\cite{fd-fddb} and Fig.~\ref{fig:wider_sota} for WIDER Face~\cite{fd-widerface}. The FDDB results since 2004 are listed. The current ROC curves are much better than those in the past. This means that the detection accuracy is much higher than in the past. The true positive rate is reaching 1.0. If you look into the samples in FDDB, you can find that there are some tiny and blurred faces in the ground truth data. Sometimes it is hard to decide whether they should be faces, even by humans. Therefore, we can say that the current detectors achieve almost perfect accuracy on FDDB, and almost all faces can be detected. The WIDER Face dataset is newer, larger and more challenging than FDDB. Most recent face detectors have been tested on it. From Fig.~\ref{fig:wider_sota}, it can be found that the accuracy is also very high even on the hard set. The improvement in mAP is not so obvious now. The mAP is almost saturated, similar to FDDB. We must note that the current benchmarks, regardless of FDDB, WIDER Face or others, only evaluate the accuracy of detection and do not evaluate efficiency. If two detectors achieve similar mAP, but the computational cost of one is just half of that of the other, surely we will think the detector with half the computational cost is the better one. Since the accuracy metric is almost saturated, it is time to include efficiency in the evaluation. 
\section{Evaluation of Computational Cost} \label{comp} \begin{table*}[htbp] \caption{Equations of FLOPs calculation of different layers.} \label{tab:flops-calc} \centering \begin{tabularx}{\textwidth}{|c|c|X|} \hline NN layers & FLOPs & Explanation \\ \hline Conv & $C_{out} H_{out} W_{out} (2 C_{in} K^2 - 1)$ & \begin{tabular}{l} For each element in the output tensor, there are $C_{in}K^2$ multiplications between the \\ kernels and sliding windows, and $C_{in}K^2-1$ additions to sum up. If bias is used, 1 \\ FLOPs should be added to the FLOPs calculation of each element.~\cite{conv-eq} \end{tabular} \\ \hline Max Pool & $K^2 C_{out} H_{out} W_{out}$ & \begin{tabular}{l} For each element in the output tensor, we consider the worst situation where every \\ element in the kernel requires a comparison with each other. \end{tabular}\\ \hline ReLU & $2 C_{out} H_{out} W_{out}$ & \begin{tabular}{l} ReLU is usually implemented as $x*(x>0)$, which is much faster than directly \\ comparing $x$ with $0$. We consider a comparison as 1 FLOPs for simplicity. \end{tabular}\\ \hline Batch Norm & $6 C_{out} H_{out} W_{out}$ & \begin{tabular}{l} As~\cite{batchnorm} stated, the variances and means are fixed during inference. Therefore, 6 FLOPs\\ is accounted for applying the linear transform to each element. \end{tabular}\\ \hline L2-Norm & $3 C_{out} H_{out} W_{out}$ & \begin{tabular}{l} The L2-norm layer was proposed by~\cite{l2norm} to help features of late fusion work well, \\ which is defined as $L_2\text{-}norm(x)=\frac{x}{||x||_2}=\frac{x}{\sqrt{\sum^d_{i=1}|x_i|^2}}$, where $d$ usually stands for \\ channels. It takes approximately $2CHW$ FLOPs to calculate the $L_2$ norm \\ channel-wisely and $CHW$ FLOPs to perform $L_2$ norm element-wisely. \end{tabular} \\ \hline Bilinear Upsample & $19 C_{out} H_{out} W_{out}$ & \begin{tabular}{l} The definition of bilinear upsampling~\footnote{\url{https://en.wikipedia.org/wiki/Bilinear_interpolation}} contains 9 non-duplicate additions \\ and subtractions and 10 multiplications/divisions for calculating one element in the \\ output. \end{tabular} \\ \hline Sigmoid & $3 C_{out} H_{out} W_{out}$ & \begin{tabular}{l} The definition of sigmoid~\footnote{\url{https://en.wikipedia.org/wiki/Sigmoid_function}} contains 1 exponentiation, 1 addition and 1 division to \\ calculate one element in the output. \end{tabular} \\ \hline Softmax & $3E$ & \begin{tabular}{l} $E$ denotes the total number of elements in the output tensor. It takes approximately \\ $2E$ FLOPs to calculate the sum of the exponentiation of each element in different \\ channels, and $E$ FLOPs to calculate the final result. \end{tabular} \\ \hline \end{tabularx} \end{table*} Deep learning techniques have brought momentous improvement to face detection, and can detect faces more robustly in unconstrained environments. Most of the recent works train and test their models on WIDER Face~\cite{fd-widerface}. As shown in Fig.~\ref{fig:highest_ap_yearwise}, we can find a large AP leap from 2016 to 2017. However, the line has been flat since 2017. If we look deep into the officially released code of recent works, it can be easily found that newer models tend to use larger scales and a wider range of scales, as shown in Table~\ref{tab:fd-test-scales}. These test scales are usually not mentioned in the papers, but can lead to a non-negligible increase in computational cost just for slightly boosting the AP. We may even question: Is the AP improved by a better algorithm or by the usage of a wider range of test scales? 
\subsection{Rules of FLOPs Calculation} \label{sec:rule-flops} \textbf{What kind of models are we going to re-evaluate?} First, the models must be open-source, at least with the release of their test code and a trained model. We do not re-implement the methods since we want to ensure that the accuracy is 100\% the same as the original authors claimed. Second, it is essential for us to choose one-stage models, as their FLOPs are independent of the number of faces in the images, and they have been the most studied frameworks in recent years. Third, we mainly choose the models from the WIDER Face result page for fair comparisons. \begin{table*}[htbp] \centering \caption{Test scales used by open-source one-stage face detectors~\cite{fd-hr, fd-ssh, fd-s3fd, fd-pyramidbox, fd-srn, fd-dsfd, fd-csp}. Note that the double check marks denote image flipping vertically in addition to the image at the current scale. SSH shrinks and enlarges images to several preset fixed sizes. Since S$^3$FD, two adaptive test scales are used to save GPU memory, one of which is "S" for adaptive shrinking, the other of which is "E" for recursively adaptive enlarging. Scale "F" denotes enlarging the image to the preset largest size.} \label{tab:fd-test-scales} \begin{tabular}{cc|cccccccccccc} \hline \multicolumn{1}{c|}{\multirow{2}{*}{Model}} & \multirow{2}{*}{Publication} & \multicolumn{12}{c}{Test scales (ratio)} \\ \cline{3-14} \multicolumn{1}{c|}{} & & 0.25 & 0.5 & 0.75 & 1 & 1.25 & 1.5 & 1.75 & 2.0 & 2.25 & S & E & F \\ \hline \multicolumn{1}{c|}{HR} & CVPR'17 & \checkmark & \checkmark & & \checkmark & & & & & & & & \\ \multicolumn{1}{c|}{S$^3$FD} & ICCV'17 & & \checkmark & & \checkmark & & & & & & \checkmark & \checkmark & \\ \multicolumn{1}{c|}{PyramidBox} & ECCV'18 & \checkmark & & \checkmark & \checkmark \checkmark & \checkmark & \checkmark & \checkmark & & & \checkmark & \checkmark & \\ \multicolumn{1}{c|}{SRN} & AAAI'19 & & \checkmark & & \checkmark \checkmark & & \checkmark & & & \checkmark & & & \checkmark \\ \multicolumn{1}{c|}{DSFD} & CVPR'19 & & \checkmark & & \checkmark \checkmark & \checkmark & & \checkmark & & \checkmark & \checkmark & \checkmark & \\ \multicolumn{1}{c|}{CSP} & CVPR'19 & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & & & \\ \hline & & \multicolumn{12}{c}{Test sizes (resize longer side)} \\ \cline{3-14} & & 100 & 300 & 500 & 600 & 700 & 800 & 900 & 1000 & 1100 & 1200 & 1400 & 1600 \\ \hline \multicolumn{1}{c|}{SSH} & ICCV'17 & & & \checkmark & & & \checkmark & & & & \checkmark & & \checkmark \\ \multicolumn{1}{c|}{SHF} & WACV'20 & \checkmark & \checkmark & & \checkmark & & & & \checkmark & & & \checkmark & \\ \multicolumn{1}{c|}{RetinaFace} & CVPR'20 & & & \checkmark & & & \checkmark & & & \checkmark & & \checkmark & \checkmark \\ \hline \end{tabular} \end{table*} \textbf{How do we calculate the FLOPs of different models?} We first validate whether the officially released trained models can perform as well as the authors state in their papers. It should be noted that we do not calculate the pre-processing and post-processing stages of a model's pipeline. In other words, only FLOPs of neural network layers such as convolution, activation, normalization, pooling and other layers are calculated. 
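As a simplified illustration of this per-layer counting (a sketch only, not the released calculator), the convolution and ReLU rules from Table~\ref{tab:flops-calc} can be applied as follows; the layer shape in the example is illustrative.
\begin{verbatim}
def conv_flops(c_in, c_out, h_out, w_out, k, bias=False):
    # FLOPs of a convolution layer: for each output element,
    # c_in * k^2 multiplications and c_in * k^2 - 1 additions.
    flops = c_out * h_out * w_out * (2 * c_in * k * k - 1)
    if bias:
        flops += c_out * h_out * w_out  # one addition per element
    return flops

def relu_flops(c_out, h_out, w_out):
    # ReLU as x * (x > 0): one comparison and one multiplication.
    return 2 * c_out * h_out * w_out

# e.g. a 3x3 convolution from 3 to 64 channels with a
# 224 x 224 output: about 0.17 GFLOPs
print(conv_flops(3, 64, 224, 224, 3))
\end{verbatim}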
Given a 4D input tensor of size $N \times C_{in} \times H_{in} \times W_{in}$, a neural network layer produces a 4D output tensor of size $N\times C_{out}\times H_{out}\times W_{out}$, where $N$ is the batch size, which is omitted for simplicity in the following since it is usually set to 1 during testing, and $C$, $H$ and $W$ are the channels, height and width of the tensor, respectively. Additionally, $K$ is introduced to represent the kernel size for layers utilizing kernels, such as convolution and pooling layers. Specifically, we treat floating-point operations such as addition, subtraction, multiplication, division and exponentiation identically, each counting as 1 FLOPs for simplicity. With these assumptions, we derive the equations for calculating the FLOPs of different layers as listed in Table~\ref{tab:flops-calc}. We implement our FLOPs calculator based on PyTorch following all the rules and equations discussed above; it accelerates the calculation of FLOPs by skipping any computation related to the values of tensors, propagating only tensor sizes and FLOPs. The calculator also allows us to reuse the authors' model-definition code with minor changes, which reduces the statistics workload. We release our source code at \url{https://github.com/fengyuentau/PyTorch-FLOPs}. \begin{figure*}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/ap_flops-val.pdf} \end{center} \caption{ The FLOPs vs. multi-scale AP on the WIDER Face validation set. 7 models from the WIDER Face result page are listed, which are HR~\cite{fd-hr}, SSH~\cite{fd-ssh}, S$^3$FD~\cite{fd-s3fd}, PyramidBox~\cite{fd-pyramidbox}, SRN~\cite{fd-srn}, DSFD~\cite{fd-dsfd}, CSP~\cite{fd-csp}. (\textit{The TFLOPs for some speed-focusing face detectors are listed in Table~\ref{tab:speedy-detectors-comparison} because their TFLOPs are on a much smaller scale and cannot fit in this figure.})} \label{fig:val_ap_comp} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=1.0\linewidth]{figs/ap_flops-test.pdf} \end{center} \caption{ The FLOPs vs. multi-scale test AP on the WIDER Face test set. 7 models from the WIDER Face result page are listed, which are HR~\cite{fd-hr}, SSH~\cite{fd-ssh}, S$^3$FD~\cite{fd-s3fd}, PyramidBox~\cite{fd-pyramidbox}, SRN~\cite{fd-srn}, DSFD~\cite{fd-dsfd}, CSP~\cite{fd-csp}.} \label{fig:test_ap_comp} \end{figure*}
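To make the counting rules concrete, the per-layer equations in Table~\ref{tab:flops-calc} can be collected into a small shape-only calculator. The following Python sketch is illustrative only (it is not the released tool, and the function names and the example layer configuration are ours); it computes FLOPs purely from tensor shapes, which is also what makes such a calculator fast, since tensor values are never touched.
\begin{verbatim}
# Shape-only FLOPs counting following the per-layer equations above (sketch).
def conv2d_flops(c_in, c_out, h_out, w_out, k, bias=False):
    # Each output element needs c_in*k^2 multiplications and c_in*k^2 - 1 additions.
    flops_per_elem = 2 * c_in * k * k - 1
    if bias:
        flops_per_elem += 1               # one extra addition per output element
    return c_out * h_out * w_out * flops_per_elem

def relu_flops(c_out, h_out, w_out):
    return 2 * c_out * h_out * w_out      # counted as 2 FLOPs per element

def maxpool_flops(c_out, h_out, w_out, k):
    return k * k * c_out * h_out * w_out  # worst case: K^2 comparisons per element

# Example: 3x224x224 input, 7x7 conv with 64 filters and stride 2 -> 64x112x112.
total = conv2d_flops(3, 64, 112, 112, 7) + relu_flops(64, 112, 112)
print(f"{total / 1e9:.3f} GFLOPs")
\end{verbatim}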
\subsection{FLOPs vs. AP in Multi-Scale Test} The multi-scale test metric tests a model on a set of images derived from one image at its original and several other scales (with the aspect ratio fixed). The detection results of the different scales are then merged, and non-maximum suppression (NMS) is applied to suppress overlapping bounding boxes and reduce false positives. Depending on the training data and scheme, a model has a \textit{comfort zone}, which is the range of face scales it can detect. The multi-scale test metric can improve a model's AP by re-scaling out-of-zone faces back into the comfort zone. However, since we cannot determine which faces in the test set are out-of-zone, we have to apply re-scaling to every image in the set, which leads to a multiplied increase in FLOPs per image. \begin{table*}[htbp] \centering \caption{How different scales impact the AP of PyramidBox~\cite{fd-pyramidbox}. We use Scale $= \{1\}$ as the baseline, and then try adding different scales one by one to test how the AP is impacted by each scale.} \label{tab:pyramidbox-1} \begin{tabular}{llllll|lll|l} \multicolumn{6}{c|}{Test Scales} & \multirow{2}{*}{$AP_{easy}$} & \multirow{2}{*}{$AP_{medium}$} & \multirow{2}{*}{$AP_{hard}$} & \multirow{2}{*}{TFLOPs} \\ 0.25 & 0.75 & 1 & 1.25 & 1.5 & 1.75 & & & & \\ \hline & & \checkmark & & & & 0.947 & 0.936 & 0.875 & 1.37 \\ \hline \checkmark & & \checkmark & & & & 0.954(+0.007) & 0.939(+0.003) & 0.872(-0.003) & 1.45(+0.008) \\ & \checkmark & \checkmark & & & & 0.952(+0.005) & 0.940(+0.004) & 0.874(-0.001) & 2.14(+0.77) \\ & & \checkmark & \checkmark & & & 0.948(+0.001) & 0.938(+0.002) & 0.884(+0.009) & 2.72(+1.35) \\ & & \checkmark & & \checkmark & & 0.947(+0.000) & 0.937(+0.001) & 0.881(+0.006) & 2.46(+1.09) \\ & & \checkmark & & & \checkmark & 0.946(-0.001) & 0.936(+0.000) & 0.874(-0.001) & 1.63(+0.26) \end{tabular} \end{table*} \begin{table*}[htbp] \centering \caption{How much will the AP and FLOPs decrease if a scale is removed? The detector PyramidBox is employed.} \label{tab:pyramidbox-2} \begin{tabular}{llllll|lll|l} \multicolumn{6}{c|}{Test Scales} & \multirow{2}{*}{$AP_{easy}$} & \multirow{2}{*}{$AP_{medium}$} & \multirow{2}{*}{$AP_{hard}$} & \multirow{2}{*}{TFLOPs} \\ 0.25 & 0.75 & 1 & 1.25 & 1.5 & 1.75 & & & & \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & 0.957 & \multicolumn{1}{l|}{0.945} & 0.886 & 4.94 \\ \hline & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & 0.949(-0.008) & \multicolumn{1}{l|}{0.940(-0.005)} & 0.884(-0.002) & 4.85(-0.009) \\ \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & 0.954(-0.003) & \multicolumn{1}{l|}{0.942(-0.003)} & 0.885(-0.001) & 4.16(-0.780) \\ \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & 0.955(-0.002) & \multicolumn{1}{l|}{0.940(-0.005)} & 0.850(-0.013) & 3.58(-1.360) \\ \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & 0.957(+0.000) & \multicolumn{1}{l|}{0.944(-0.001)} & 0.880(-0.006) & 3.58(-1.360) \\ \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & 0.958(+0.001) & \multicolumn{1}{l|}{0.945(+0.000)} & 0.884(-0.002) & 3.84(-1.100) \\ \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & 0.957(+0.000) & \multicolumn{1}{l|}{0.945(+0.000)} & 0.886(+0.000) & 4.67(-0.270) \end{tabular} \end{table*} Fig.~\ref{fig:val_ap_comp} and Fig.~\ref{fig:test_ap_comp} show the multi-scale test AP and FLOPs of different models on the validation and test sets of the WIDER Face dataset, respectively. We can find a clear trend in the two figures: the FLOPs increase and the AP improves in the sequence of HR~\cite{fd-hr}, SSH~\cite{fd-ssh}, S$^3$FD~\cite{fd-s3fd}, PyramidBox~\cite{fd-pyramidbox}, SRN~\cite{fd-srn} and CSP~\cite{fd-csp}. Two methods do not follow the trend. The first is DSFD~\cite{fd-dsfd}, which has more than 3 times the FLOPs of SRN and CSP while its AP is similar to theirs, meaning that DSFD has an unreasonably high computational cost. The second is RetinaFace~\cite{retinaface}, which achieves the best AP while its computational cost is much lower than that of most other methods. The two figures (Fig.~\ref{fig:val_ap_comp} and Fig.~\ref{fig:test_ap_comp}) give us a clear view of different face detection models and can help us understand them more deeply.
\subsection{FLOPs vs. AP in Single-Scale Test} FLOPs can increase sharply in two ways: fundamentally, by introducing more complex modules into the network, and through multi-scale testing. As Table~\ref{tab:fd-test-scales} shows, these models are all tested at various scales. However, why models are tested at these particular scales is seldom discussed. How much does each scale contribute to the AP? Are any of the scales unnecessary? \textbf{Single-scale test on a single model}. Table~\ref{tab:pyramidbox-1} shows the AP contribution of different scales. The easy subset of WIDER Face~\cite{fd-widerface} contains a large proportion of faces of regular size and some large faces, so shrinking images can help improve the AP. We can observe that $AP_{hard}$ gains the most from scale sets $\{1, 1.25\}$ and $\{1, 1.5\}$, but not from $\{1, 1.75\}$. Looking at the FLOPs as well, we can observe an increase up to the peak at scale set $\{1, 1.25\}$ and then a sharp drop for larger scales. The reason is that a threshold on the largest image size is set to avoid exceeding the GPU memory, which means that not all 1.75x resized images were actually sent to the detector in the experiments. Table~\ref{tab:pyramidbox-2} shows how much the AP and FLOPs decrease if the model is tested without one of the scales. As the missing scale becomes larger, the decrease of $AP_{easy}$ becomes smaller. However, this pattern does not apply to $AP_{medium}$ and $AP_{hard}$. The reason is that enlarged images are skipped if their size exceeds the preset limit, so as to avoid exceeding GPU memory; the larger the scale, the fewer images are actually re-scaled and tested. This is also why the drop in FLOPs is much smaller when scale 1.75 is removed. Note also that the PyramidBox pretrained model is mainly trained on scale 1. The two tables~\ref{tab:pyramidbox-1} and~\ref{tab:pyramidbox-2} imply that $AP_{easy}$ is the most sensitive to scale 0.25, $AP_{medium}$ is the most sensitive to scales 0.25 and 1, and $AP_{hard}$ is the most sensitive to scale 1. This is highly related to the training scale; if the model is trained differently, the conclusion may change accordingly. \textbf{Single-scale test on multiple models}. Table~\ref{tab:multi-models-single-scale} shows the AP and FLOPs of different models at scale 1. The large overall leap is brought by PyramidBox~\cite{fd-pyramidbox}, which mainly introduces the FPN~\cite{fpn} module to fuse features from two adjacent scales and the context-enhancing module from SSH~\cite{fd-ssh}. The computational cost of PyramidBox is about $2\times$ that of SSH but less than half that of DSFD, yet the APs achieved by PyramidBox and DSFD are comparable. \begin{table}[htp] \centering \caption{AP and FLOPs of different models at scale 1.} \label{tab:multi-models-single-scale} \begin{tabular}{c|ccc|c} Model & $AP_{easy}$ & $AP_{medium}$ & $AP_{hard}$ & TFLOPs \\ \hline RetinaFace & 0.952 & 0.942 & 0.776 & 0.198 \\ S$^3$FD & 0.924 & 0.906 & 0.816 & 0.571 \\ CSP & 0.948 & 0.942 & 0.774 & 0.571 \\ SSH & 0.925 & 0.909 & 0.731 & 0.587 \\ PyramidBox & 0.947 & 0.936 & 0.875 & 1.387 \\ DSFD & 0.949 & 0.936 & 0.845 & 1.532 \end{tabular} \end{table} If benchmarks also evaluated FLOPs or other similar efficiency measurements, different face detectors could be compared more fairly, which would also promote face detection research to a better stage.
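To make the multi-scale test procedure discussed above concrete, the following Python sketch rescales an image, maps the detections of each scale back to the original resolution, and merges them with NMS. The detector interface, the IoU threshold and the maximum image side are assumptions for illustration and are not taken from any particular release.
\begin{verbatim}
import numpy as np
import cv2  # assumed available for image resizing

def nms(dets, iou_thr=0.3):
    # dets: (N, 5) array of [x1, y1, x2, y2, score]; greedy NMS.
    order = dets[:, 4].argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(dets[i, 0], dets[rest, 0])
        yy1 = np.maximum(dets[i, 1], dets[rest, 1])
        xx2 = np.minimum(dets[i, 2], dets[rest, 2])
        yy2 = np.minimum(dets[i, 3], dets[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (dets[i, 2] - dets[i, 0]) * (dets[i, 3] - dets[i, 1])
        area_r = (dets[rest, 2] - dets[rest, 0]) * (dets[rest, 3] - dets[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-12)
        order = rest[iou <= iou_thr]
    return dets[keep]

def multi_scale_test(detect_fn, image, scales=(0.25, 0.5, 1.0, 1.25, 1.5),
                     iou_thr=0.3, max_side=2100):
    # detect_fn(img) is assumed to return an (N, 5) array [x1, y1, x2, y2, score].
    all_dets = []
    for s in scales:
        h, w = image.shape[:2]
        if max(h, w) * s > max_side:   # skip scales exceeding the preset size limit
            continue
        resized = cv2.resize(image, None, fx=s, fy=s)
        dets = detect_fn(resized)
        if len(dets):
            dets = dets.astype(np.float64)
            dets[:, :4] /= s           # map boxes back to the original resolution
            all_dets.append(dets)
    merged = np.vstack(all_dets) if all_dets else np.zeros((0, 5))
    return nms(merged, iou_thr)
\end{verbatim}
The loop also shows why the FLOPs grow multiplicatively with the number of scales: every kept scale triggers a full forward pass of the network.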
\subsection{FLOPs vs Latency} To compare the two measurements, we convert existing models to the Open Neural Network Exchange (ONNX) format and run them using ONNXRUNTIME\footnote{\url{https://github.com/microsoft/onnxruntime}} for a fair comparison. Note that, due to the varying ONNX export support of different DL frameworks, we were able to convert RetinaFace~\cite{retinaface}, SRN~\cite{fd-srn}, DSFD~\cite{fd-dsfd} and CSP~\cite{fd-csp} to ONNX format. The results are in Table~\ref{tab:flops-time}. These models are evaluated using an NVIDIA QUADRO RTX 6000 with CUDA 10.2, and an INTEL Xeon Gold 6132 CPU @ 2.60 GHz. The powerful GPU contains 4,609 CUDA parallel-processing cores and 24GB memory. We can observe that both FLOPs and forward latency increase from RetinaFace~\cite{retinaface} to DSFD~\cite{fd-dsfd}. Note that although the average FLOPs of RetinaFace are just one-fifth of SRN's, the forward latency of RetinaFace is nearly half of SRN's, implying that FLOPs are not linearly correlated with latency due to differences in implementation, hardware settings, memory efficiency and so on. The reason why the post-processing latency of DSFD and CSP increases sharply is that they do not use GPU-accelerated NMS as the others do. \begin{table}[htbp] \centering \caption{ State-of-the-art open-source models tested with a 720P image containing several faces at scale=1.0 only. We average the FLOPs (AVG TFLOPs) and latency (AVG Latency) by running the test for each model 100 times. Note that 'Post-Proc' denotes post-processing stages, such as decoding from anchors, NMS and so on. For this stage, we adopt the original processing code of each model.} \label{tab:flops-time} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}AVG\\ TFLOPs\end{tabular}} & \multicolumn{3}{c|}{AVG Latency (ms)} \\ \cline{3-5} & & \begin{tabular}[c]{@{}c@{}}Forward\\ (GPU)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Forward\\ (CPU)\end{tabular} & Post-Proc \\ \hline RetinaFace & 0.201 & 131.60 & 809.24 & 8.74 (GPU) \\ \hline CSP & 0.579 & 154.55 & 1955.20 & 27.74 (CPU) \\ \hline SRN & 1.138 & 204.77 & 2933.16 & 8.71 (GPU) \\ \hline DSFD & 1.559 & 219.63 & 3671.46 & 76.32 (CPU) \\ \hline \end{tabular} \end{table}
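The latency measurements above can be reproduced with a few lines of onnxruntime code. The sketch below is illustrative (the model file name, input size and number of runs are placeholders); it times only the forward pass, excluding pre- and post-processing, in line with the protocol described above.
\begin{verbatim}
import time
import numpy as np
import onnxruntime as ort

def average_forward_latency(onnx_path, input_shape=(1, 3, 720, 1280),
                            runs=100, use_gpu=True):
    providers = ["CUDAExecutionProvider"] if use_gpu else ["CPUExecutionProvider"]
    sess = ort.InferenceSession(onnx_path, providers=providers)
    name = sess.get_inputs()[0].name
    # NCHW float32 dummy input; real models may expect different preprocessing.
    x = np.random.rand(*input_shape).astype(np.float32)
    sess.run(None, {name: x})          # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {name: x})
    return (time.perf_counter() - start) / runs * 1000.0  # milliseconds

# Example with a hypothetical file name:
# print(average_forward_latency("retinaface.onnx", use_gpu=False))
\end{verbatim}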
\section{Speed-Focusing Face Detectors} \label{review_speed} \begin{table*}[htbp] \centering \caption{ Popular and active open-source face detectors on GitHub. Note that 'AVG GFLOPs' are computed on the WIDER Face validation set in the single-scale test with scale=1.0 only. Also note that latency is measured on the CPU. } \label{tab:speedy-detectors-comparison} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#CONV\\ Layers\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#Params\\ ($\times10^6$)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}AVG\\ GFLOPs\end{tabular}} & \multicolumn{3}{c|}{WIDER Face Val Set} & \multicolumn{2}{c|}{Latency (ms)} \\ \cline{5-9} & & & & $AP_{easy}$ & $AP_{medium}$ & $AP_{hard}$ & Forward & Post-Proc \\ \hline FaceBoxes~\cite{FaceBoxes} & 33 & 1.013 & 1.541 & 0.845 & 0.777 & 0.404 & 16.52 & 7.16 \\ \hline ULFG-slim-320~\cite{ULFG} & \multirow{2}{*}{42} & \multirow{2}{*}{0.390} & \multirow{2}{*}{2.000} & 0.652 & 0.646 & 0.520 & \multirow{2}{*}{19.03} & \multirow{2}{*}{2.37} \\ \cline{1-1} \cline{5-7} ULFG-slim-640~\cite{ULFG} & & & & 0.810 & 0.794 & 0.630 & & \\ \cline{1-9} ULFG-RFB-320~\cite{ULFG} & \multirow{2}{*}{52} & \multirow{2}{*}{0.401} & \multirow{2}{*}{2.426} & 0.683 & 0.678 & 0.571 & \multirow{2}{*}{21.27} & \multirow{2}{*}{1.90} \\ \cline{1-1} \cline{5-7} ULFG-RFB-640~\cite{ULFG} & & & & 0.816 & 0.802 & 0.663 & & \\ \hline YuFaceDetectNet~\cite{YuFaceDetectNet} & 43 & 0.085 & 2.549 & 0.856 & 0.842 & 0.727 & 23.47 & 32.81 \\ \hline LFFD-v2~\cite{LFFD} & 45 & 1.520 & 37.805 & 0.875 & 0.863 & 0.752 & 178.47 & 6.70 \\ \hline LFFD-v1~\cite{LFFD} & 65 & 2.282 & 55.555 & 0.910 & 0.880 & 0.778 & 229.35 & 10.08 \\ \hline \end{tabular} \end{table*} For the face detectors introduced in the previous sections, the main target is to reach a better AP. Their computational costs are heavy, normally on the order of TFLOPs, which makes it unrealistic to deploy such models in practical face-related systems. There are other open-source face detectors whose target is to make face detection run in real time for practical applications; their computational costs are on the order of GFLOPs or tens of GFLOPs, much lower than those of the AP-focused models. Here we group them as speed-focusing face detectors. We collect the most popular ones from \url{github.com} and review them in terms of network architectures, AP, FLOPs and efficiency. \textbf{FaceBoxes}~\cite{FaceBoxes} is one of the first one-stage deep learning-based models to achieve real-time face detection. FaceBoxes rapidly downsamples feature maps to stride 32 with two convolution layers with large kernels. Inception blocks~\cite{Inception-V1} are introduced to enhance feature maps at stride 32. Following the multi-scale mechanism of SSD~\cite{ssd}, FaceBoxes detects on layers \texttt{inception3}, \texttt{conv3\_2} and \texttt{conv4\_2} for faces at different scales, resulting in an AP of 0.960 on FDDB~\cite{fd-fddb} and 20 FPS on an INTEL E5-2660v3 CPU at 2.60 GHz. \textbf{YuFaceDetectNet}~\cite{YuFaceDetectNet} adopts a light MobileNet~\cite{MobileNet-v1} as the backbone. Compared to FaceBoxes, YuFaceDetectNet has more convolution layers on each stride to obtain fine-grained features, and it also detects on the extra layer at stride 16, which improves the recall of small faces. The evaluation results of the model on the WIDER Face~\cite{fd-widerface} validation set are 0.856 (Easy), 0.842 (Medium) and 0.727 (Hard). The main and well-known repository, libfacedetection~\cite{libfacedetection}, takes YuFaceDetectNet as the detection model and offers a pure C++ implementation without dependence on DL frameworks, achieving from 77.34 FPS for $640 \times 480$ images to 2,027.74 FPS for $128 \times 96$ images on an INTEL i7-1065G7 CPU at 1.3 GHz.
\textbf{LFFD}~\cite{LFFD} introduces residual blocks for feature extraction and proposes receptive fields as natural anchors. Its faster version, LFFD-v2, achieves 0.875 (Easy), 0.863 (Medium) and 0.754 (Hard) on the WIDER Face validation set, while running at 472 FPS using CUDA 10.0 and an NVIDIA RTX 2080Ti GPU. \textbf{ULFG}~\cite{ULFG} adds even more convolution layers on each stride, taking advantage of depth-wise convolution, which is friendly to edge devices in terms of FLOPs and forward latency. As reported, the slim version of ULFG has an AP of 0.770 (Easy), 0.671 (Medium) and 0.395 (Hard) on the WIDER Face validation set, and can run at 105 FPS with an input resolution of $320 \times 240$ on an ARM A72 at 1.5 GHz. These light-weight models are developed using various frameworks and tested on different hardware. For a fair comparison, we export these models from their original frameworks to ONNX and test them using ONNXRUNTIME on an INTEL i7-5930K CPU at 3.50 GHz. Results are shown in Table~\ref{tab:speedy-detectors-comparison}. We can observe that more \texttt{CONV} layers do not necessarily lead to more parameters (FaceBoxes and the ULFG series) or more FLOPs (YuFaceDetectNet and the ULFG series), mainly because of the extensive use of depth-wise convolution in ULFG. Additionally, note that more FLOPs do not necessarily lead to higher forward latency, again due to depth-wise convolution. The post-processing latency across different face detectors seems inconsistent with the forward latency; we verified that this is caused by the different numbers of bounding boxes sent to NMS and the different implementations of NMS (Python-based or Cython-based). \section{Conclusions and Discussions} \label{conclusion} Face detection is one of the most important and popular topics in computer vision, yet it is still challenging. Deep learning has brought remarkable breakthroughs for face detectors, which are now more robust and accurate even in unconstrained real-world environments. In this paper, recent deep learning-based face detectors and benchmarks are introduced. From the evaluations of accuracy and efficiency of different deep face detectors, we can see that very high accuracy can be reached if the computational cost is not a concern. However, there should be a simple and beautiful solution for face detection, since it is simpler than generic object detection. Future research on face detection can focus on the following topics. \textbf{Superfast Face Detection}. There is no formal definition of superfast face detection. Ideally, a superfast face detector should be able to run in real time on low-cost edge devices even when the input image is 1080P. Empirically speaking, we would expect it to require less than 100M FLOPs for a 1080P input image. For real-world applications, efficiency is one of the key issues. Efficient face detectors can help save energy and hardware cost, and improve the responsiveness of edge devices, such as CCTV cameras and mobile phones. \textbf{Detecting Faces in the Long-tailed Distribution}. Face samples can be regarded as following a long-tailed distribution. Most face detectors are trained for the dominant part of the distribution. We already have enough samples of faces with variations in illumination, pose, scale, occlusion, blur and distortion in the WIDER Face dataset. But what about other faces, such as old and damaged ones?
As people get older, wrinkles appear on their faces; and people who suffer from illnesses or accidents may have damaged faces, such as faces with burn scars. Face detection is not only a technical problem but also a humanitarian one, meaning that this technology should serve all people, not only the dominant part of the population. Ideally, face detectors should be able to detect all kinds of faces. However, in most face datasets and benchmarks, most faces are from young people. The final goal of face detection is to detect faces with very high accuracy and high efficiency, so that the algorithms can be deployed to many kinds of edge devices and centralized servers to improve the perception capability of computers; currently, there is still a considerable gap. Face detectors can achieve good accuracy but still require considerable computation. Improving efficiency should be the next step. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Radar detector design has been a problem of long-standing interest in the radar literature \cite{Meyer1973}. Optimal target detection is achieved by a test derived from the Neyman-Pearson (NP) criterion, which guarantees the highest probability of detection for a given probability of false alarm \cite{Kay1998}. For example, it is well known that the square-law detector is optimal in the NP sense when targets, clutter and interference follow Gaussian distributions \cite{Richards2005}. However, such Gaussian models do not always reflect the actual operating conditions. For example, in high-resolution radar applications and/or at low grazing angles, the probability of observing large values of the clutter amplitude is greater than predicted by Rayleigh statistics \cite{Richards2010}. Accordingly, the clutter amplitude is commonly modeled by non-Gaussian distributions, such as the Weibull, K, and lognormal distributions \cite{Gini2007}. In most cases involving heavy-tailed non-Gaussian models, the structure of the optimal detectors requires intractable numerical integrations, which makes the implementation of such detectors computationally intensive \cite{Farina1986}. Moreover, detectors designed based on specific models suffer performance degradation when the actual signals behave differently from their assumed mathematical models \cite{Gini1998}. Deep learning has been successfully applied in a variety of fields to solve problems for which reliable mathematical models are unavailable or too complex to yield feasible optimal solutions \cite{osvaldo2}. In the radar field, deep learning-based approaches have been proposed for implementing NP detectors \cite{Moya2013}. These approaches rely on the assumption that the training and the actual operating environments have similar statistical properties. However, in practice, a mismatch between training and testing conditions may result in detection performance loss \cite{Wei2019NN}. To deal with this problem, a straightforward approach is to re-train the detector from scratch based on newly collected data from the current operating environment. However, this approach requires a large overhead in terms of data collection and training time. In \cite{Wei2021}, the authors train the detector with a mixture of data from different environments to robustify the detection performance with respect to the characteristics of the observed data. Transfer learning and meta-learning are two different learning paradigms in machine learning that can be used to address different operating conditions. In transfer learning, the goal is to extract knowledge from source tasks so as to improve training efficiency on a target task \cite{transfer}. In contrast, meta-learning, or ``learning to learn", aims to infer an inductive bias given the data from multiple related tasks, to enable efficient training on a new task within a certain class \cite{meta2020}. The inferred inductive bias can take the form of a learning procedure or a prior over the model parameters \cite{osvaldo3}. Notably, reference \cite{MAML2017} proposes the model-agnostic meta-learning (MAML) algorithm, which optimizes the initialization of the parameters of a neural network to enable fast adaptation to a new task.
Recently, the two learning paradigms have been applied to solve problems in communication systems to achieve fast adaptation, such as resource allocation in wireless networks \cite{resource}, downlink beamforming optimization \cite{beamform}, demodulating over fading channels \cite{osvaldo_journal}, and decoding for convolutional and turbo codes \cite{Mind}. In this work, we aim to design detectors that adapt quickly to the radar operating environment based on few data samples. Unlike the techniques in \cite{Moya2013}, the deep learning-based detector design is separated into two stages, i.e., an offline training stage and an adaptation stage. The goal of the offline training stage is to leverage prior knowledge from multiple radar environments via transfer learning or meta-learning. Being offline, this phase may be implemented with large amounts of data. In the adaptation phase, we refine the training based on few data samples collected from the current radar operating environment. Specific contributions of this work are: (1) we propose a two-stage learning procedure that enables detectors to adapt quickly to the current radar environment; (2) we develop a deep transfer learning-based algorithm for fast adaptation of radar detection; and (3) we develop a meta-learning-based algorithm by leveraging MAML for the design of fast adaptation of radar detection. \section{Problem Formulation} Consider a pulse-compression radar system, in which the system seeks to detect the presence of a single target over a clutter field. The transmitter emits $K$ modulated chips with deterministic complex amplitudes forming a coded waveform $\mathbf{y}=[y_1,\ldots, y_K]^T$. After chip matched filtering and sampling, the discrete-time $K$-dimensional column received signal, for the range cell under test containing a point target, is given by \begin{equation} \mathbf{z} = \alpha\mathbf{y} + \mathbf{c} + \mathbf{n}, \end{equation} where $\alpha$ is the complex target gain; $\mathbf{c}$ is the clutter vector; and $\mathbf{n}$ is the noise vector. Detection of the presence of a target in the range cell under test leads to the following binary hypothesis test \begin{equation} \left\{ \begin{aligned} &\mathcal{H}_0:{\mathbf{z}}={\mathbf{c}}+{\mathbf{n}} \\ &\mathcal{H}_1:{\mathbf{z}}=\alpha\mathbf{y}+{\mathbf{c}}+{\mathbf{n}}, \end{aligned} \right. \label{eq:binary hypo} \end{equation} where $\mathcal{H}_0$ and $\mathcal{H}_1$ represent the hypotheses under which the target is absent and present, respectively. The deep learning-based detection system under study is illustrated in Fig. \ref{f:detector}. The receiver is implemented as a parametric function $f_{\boldsymbol{\phi}}(\cdot)$ with trainable parameter vector $\boldsymbol{\phi}$. The radar operating environment is modeled as a stochastic system that produces the vector $\mathbf{z}\in \mathbb{C}^K$ from a likelihood function $p(\mathbf{z}|\mathcal{H}_i)$. The absence or presence of a target is indicated by the values $i=0$ and $i=1$, respectively. The receiver passes the vector $\mathbf{z}$ through a trainable mapping $p=f_{\boldsymbol{\phi}}(\mathbf{z})$, which produces the scalar $p\in (0,1)$. The final decision $\hat{i}\in \{0,1\}$ is made by comparing the output of the receiver $p$ to a hard threshold in the interval $(0,1)$. \begin{figure}[H] \vspace{-8ex} \hspace{16ex} \includegraphics[width=1.4 \linewidth]{figs/illustrate_system} \vspace{-179ex} \caption{A deep-learning-based detector. 
The receiver is implemented as a parametric function $f_{\boldsymbol{\phi}}(\cdot)$ with trainable parameter vector $\boldsymbol{\phi}$.} \label{f:detector} \end{figure} In the following, we detail an implementation of the receiver $f_{\boldsymbol{\phi}}(\cdot)$ based on a feedforward neural network. Denote by $M_0$ and $M_L$ the number of neurons at the input and output layers, respectively. A feedforward neural network is a parametric function that defines a mapping $\mathbf{u}_{L}={f}_{\boldsymbol{\phi}}(\mathbf{u}_0)$ from an input real-valued vector $\mathbf{u}_0\in\mathbb{R}^{M_0}$ to an output real-valued vector $\mathbf{u}_L\in\mathbb{R}^{M_L}$ via $L$ successive layers. At the $l$th layer, the intermediate output is \begin{equation} \mathbf{u}_l={f}_{\boldsymbol{\phi}_l} (\mathbf{u}_{l-1})= \sigma(\mathbf{W}_l\mathbf{u}_{l-1}+\mathbf{b}_l), \end{equation} where $\sigma(\cdot)$ represents an element-wise activation function, and $\boldsymbol{\phi}_l=\{\mathbf{W}_l, \mathbf{b}_l\}$ includes the trainable parameters of the $l$th layer, consisting of the weight matrix $\mathbf{W}_l$ and the bias vector $\mathbf{b}_l$. The receiver trainable parameter set contains all parameters of the network, and is denoted $\boldsymbol{\phi}=\{\boldsymbol{\phi}_1,\ldots,\boldsymbol{\phi}_L\}$. The input real-valued vector $\mathbf{u}_0$ of the receiver $f_{\boldsymbol{\phi}}(\cdot)$ is the concatenation of the real and imaginary parts of the received vector $\mathbf{z}$. The last layer of the neural network $f_{\boldsymbol{\phi}}(\cdot)$ is a logistic regression layer. The absence or presence of the target is determined by comparing the output of the neural network $f_{\boldsymbol{\phi}}(\cdot)$ with a threshold selected via the false alarm probability. \section{Two-stage Design of Fast Adaptive Receiver} This section proposes a two-stage learning procedure that enables the receiver to adapt quickly to the current radar environment. As illustrated in Fig. \ref{f:receiver}, the two-stage learning procedure consists of an offline training stage and an adaptation stage. The goal of the offline training stage is to leverage prior knowledge from data collected in multiple radar environments. The offline training stage can be carried out either via transfer learning or via meta-learning. The adaptation stage refines the receiver parameter vector based on few samples from the current radar operating environment. \begin{figure}[H] \vspace{-8ex} \hspace{10ex} \includegraphics[width=1.5 \linewidth]{figs/illustrate} \vspace{-182ex} \caption{Illustration of the two-stage design of the fast adaptive receiver. Offline training can leverage either transfer learning or meta-learning.} \label{f:receiver} \end{figure} The offline dataset is a collection of data from $N$ radar operating environments, and is denoted as $\mathcal{D}=\{\mathcal{D}_n \}_{n=1}^N$, where $\mathcal{D}_n=\big\{ \mathbf{z}^{(q)}_n\sim p_n(\mathbf{z}|\mathcal{H}_{i^{(q)}_n}), {i^{(q)}_n}\in\{0,1\} \big\}_{q=1}^{Q}$ contains $Q$ independent and identically distributed (i.i.d.) training samples collected from the $n$th radar environment. The adaptation dataset is denoted as $\mathcal{D}_a=\big\{ \mathbf{z}^{(q)}_a\sim p_a(\mathbf{z}|\mathcal{H}_{i^{(q)}_a}), {i^{(q)}_a}\in\{0,1\} \big\}_{q=1}^{Q_a}$, which contains $Q_a$ i.i.d. samples collected from the current radar operating environment. Note that the number of samples used during the adaptation phase can be much smaller than that used during the offline training phase.
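For concreteness, a minimal PyTorch sketch of the receiver $f_{\boldsymbol{\phi}}(\cdot)$ introduced in the problem formulation is given below. It is only an illustration: the class and function names are ours, and the layer widths and sigmoid activations follow the configuration used later in the numerical results.
\begin{verbatim}
import torch
import torch.nn as nn

class Receiver(nn.Module):
    """Feedforward receiver f_phi: maps [Re(z); Im(z)] to a scalar p in (0, 1)."""
    def __init__(self, K, hidden=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * K, hidden), nn.Sigmoid(),
            nn.Linear(hidden, hidden), nn.Sigmoid(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        # z: complex tensor of shape (batch, K); stack real and imaginary parts.
        u0 = torch.cat([z.real, z.imag], dim=-1)
        return self.net(u0).squeeze(-1)

def decide(receiver, z, threshold):
    # Hard decision: threshold chosen to meet the required false alarm probability.
    return (receiver(z) > threshold).long()
\end{verbatim}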
The standard cross-entropy \cite{Moya2013} is adopted as the loss function for the receiver. For any dataset $\mathcal{D}_0=\big\{ \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathcal{H}_{i^{(q)}}), {i^{(q)}}\in\{0,1\} \big\}_{q=1}^{Q_0}$ containing $Q_0$ pairs of received signal $\mathbf{z}$ and target state indicator $i$, the empirical cross-entropy loss is a function of the trainable parameter vector $\boldsymbol{\phi }$, and is given by \begin{equation} \begin{aligned} {\mathcal{L}}_{\mathcal{D}_0}(\boldsymbol{\phi})=\frac{1}{Q_0}\sum_{q=1}^{Q_0} -i^{(q)}\ln f_{\boldsymbol{\phi}}(\mathbf{z}^{(q)})-(1-i^{(q)})\ln\big[1- f_{\boldsymbol{\phi}}(\mathbf{z}^{(q)})\big]. \end{aligned} \label{eq: rx loss grad} \end{equation} \subsection{Design of Fast Adaptive Receiver via Transfer Learning} The goal of transfer learning is to extract prior knowledge from the offline dataset $\mathcal{D}$ during the offline training stage, so as to improve training efficiency based on the adaptation dataset $\mathcal{D}_a$. In the offline training stage, we aim to find a shared parameter vector, denoted as $\boldsymbol{\psi}_{\text{TL}}$, that performs well over the $N$ operating environments. Based on the offline dataset $\mathcal{D}$, the shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$ is obtained by minimizing the sum of the empirical losses (\ref{eq: rx loss grad}) over the $N$ operating environments. The shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$ is optimized iteratively according to the stochastic gradient descent (SGD) rule \begin{equation} \boldsymbol{\psi}_{\text{TL}} \leftarrow \boldsymbol{\psi} _{\text{TL}}- \beta\nabla_{\boldsymbol{\psi}_{\text{TL}}}\sum_{n=1}^{N} {\mathcal{L}}_{\mathcal{D}_{n}}(\boldsymbol{\psi}_{\text{TL}}), \label{eq: transfer sgd} \end{equation} where $\beta>0$ is the learning rate. In the adaptation stage, we refine the training based on the adaptation dataset $\mathcal{D}_a$. The receiver parameter $\boldsymbol{\phi}$ is updated as follows: \begin{equation} \boldsymbol{\phi}^{(m)}=\boldsymbol{\phi}^{(m-1)} -\alpha {\nabla}_{\boldsymbol{\phi}}{\mathcal{L}}_{\mathcal{D}_a}(\boldsymbol{\phi}^{(m-1)}) \label{eq: rx sgd} \end{equation} across iterations $m=1,2,\ldots$ with ${\boldsymbol{\phi}}^{(0)}=\boldsymbol{\psi}_{\text{TL}}$, where $\alpha>0$ is the learning rate. The algorithm for the fast adaptive detector via transfer learning is summarized in Algorithm 1. It is finally noted that the approach described in this subsection is also known as joint learning or fine-tuning (e.g., \cite{osvaldo_journal}). A more general implementation of transfer learning would also assume the availability of data from the operating environment, which is not considered here.
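As a complement to Algorithm 1, the following PyTorch sketch illustrates the offline joint training rule for $\boldsymbol{\psi}_{\text{TL}}$ and the subsequent adaptation updates. It is a simplified illustration: each environment is represented by a single full batch of labeled samples, the iteration counts are placeholders, and the function and variable names are ours.
\begin{verbatim}
import torch

bce = torch.nn.functional.binary_cross_entropy

def offline_transfer_training(receiver, env_batches, lr_beta=0.002, iters=1000):
    # Minimize the empirical cross-entropy summed over the N offline environments.
    # env_batches[n] = (z_n, i_n): received signals and labels of environment n.
    opt = torch.optim.SGD(receiver.parameters(), lr=lr_beta)
    for _ in range(iters):                      # iteration count is a placeholder
        loss = sum(bce(receiver(z), i.float()) for z, i in env_batches)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return receiver                             # parameters now hold psi_TL

def adapt(receiver, z_a, i_a, lr_alpha=0.002, steps=40):
    # Fine-tune on the small adaptation dataset D_a, starting from psi_TL.
    opt = torch.optim.SGD(receiver.parameters(), lr=lr_alpha)
    for _ in range(steps):
        opt.zero_grad()
        bce(receiver(z_a), i_a.float()).backward()
        opt.step()
    return receiver
\end{verbatim}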
\begin{algorithm}[] \DontPrintSemicolon \SetAlgoLined \tcc{offline training stage} initialize shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$\; \While{stopping criterion not satisfied}{ evaluate the overall empirical loss $\sum_{n=1}^{N} {\mathcal{L}}_{\mathcal{D}_{n}}(\boldsymbol{\psi}_{\text{TL}})$ over $N$ radar operating environments \; update shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$ via (\ref{eq: transfer sgd}) } \BlankLine \tcc{adaptation stage} initialize $\boldsymbol{\phi}^{(m)}=\boldsymbol{\psi}_{\text{TL}}$, and set $m=0$\; \While{stopping criterion not satisfied}{ update receiver parameter vector $\boldsymbol{\phi}$ via (\ref{eq: rx sgd}) } \caption{Design of Fast Adaptive Detector via Transfer Learning} \end{algorithm} \vspace{-2ex} \subsection{Design of Fast Adaptive Detector via Meta-Learning} As discussed in Section I, MAML aims to find the inferred inductive bias in the form of the initialization of the receiver neural network, denoted as $\boldsymbol{\psi}_{\text{MAML}}$, to enable fast adaptation on the current operating environment. In the offline training stage, the dataset $\mathcal{D}_n$ for the $n$th operating environment is randomly divided into two subsets. One subset of $\mathcal{D}_n$ is referred to as \emph{support set} $\mathcal{D}_n^s$, which is used to update the local receiver parameter $\boldsymbol{\theta}_n$ for the $n$th operating environment. The other subset of $\mathcal{D}_n$ is referred to as \emph{query set} $\mathcal{D}_n^q$, which is used to estimate the meta-learning empirical loss. Mathematically, with a single SGD iteration, we obtain the local update $\boldsymbol{\theta}_n$ according to the support set $\mathcal{D}_n^s$ for the $n$th operating environment \begin{equation} \boldsymbol{\theta}_{n}=\boldsymbol{\psi}_{\text{MAML}}-\alpha \nabla _{\boldsymbol{\psi }_{\text{MAML}} }{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}}). \label{eq: local_update} \end{equation}% Based on the query set $\mathcal{D}_n^q$ and the local update $ \boldsymbol{\theta}_n$ (\ref{eq: local_update}), the meta-training empirical loss is given by \begin{equation} \begin{aligned} {\mathcal{L}}(\boldsymbol{\psi }_{\text{MAML}})&=\sum_{n=1}^{N_b}{\mathcal{L}}_{\mathcal{D}_{n}^{q}}(% \boldsymbol{\theta}_{n})\\ &=\sum_{n=1}^{N_b}{\mathcal{L}}_{\mathcal{D}_{n}^{q}}\big( \boldsymbol{\psi}_{\text{MAML}}-\alpha \nabla _{\boldsymbol{\psi }_{\text{MAML}} }{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}})\big), \end{aligned} \label{eq:empirical} \end{equation} where $N_b$ represents the number of environments selected randomly from $N$ offline operating environments for each meta training update, and is referred to as \emph{meta batch size}. 
The initialization of the receiver parameter vector $\boldsymbol{\psi}_{\text{MAML}}$ is learned by minimizing (\ref{eq:empirical}) via SGD \begin{equation} \begin{aligned} \boldsymbol{\psi}_{\text{MAML}}\leftarrow \boldsymbol{\psi}_{\text{MAML}}-\beta \nabla _{\boldsymbol{\psi}_{\text{MAML}}}{\mathcal{L}}(\boldsymbol{\psi}_{\text{MAML}}) &= \boldsymbol{\psi}_{\text{MAML}} - \beta\sum_{n=1}^{N_b} \nabla _{\boldsymbol{\psi}_{\text{MAML}}}{\mathcal{L}}_{\mathcal{D}_{n}^{q}}(\boldsymbol{\theta}_{n})\\ & = \boldsymbol{\psi}_{\text{MAML}} - \beta \sum_{n=1}^{N_b}\big(\boldsymbol{I}-\alpha \nabla _{\boldsymbol{\psi}_{\text{MAML}}}^2{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}}) \big) \nabla_{\boldsymbol{\theta}_n}\mathcal{L}_{\mathcal{D}_{n}^{q}}(\boldsymbol{\theta}_{n}). \end{aligned} \label{eq:initial_update} \end{equation} Note that the update (\ref{eq:initial_update}) requires calculating the second-order gradient $\nabla _{\boldsymbol{\psi}_{\text{MAML}}}^2{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}})$, which may be treated as a constant and ignored to reduce the offline training time \cite{MAML2017}. During the adaptation stage, in a manner similar to (\ref{eq: rx sgd}), the receiver parameter vector $\boldsymbol{\phi}$ is updated according to the gradient ${\nabla}_{\boldsymbol{\phi}}{\mathcal{L}}_{\mathcal{D}_{a}}(\boldsymbol{\phi})$ with ${\boldsymbol{\phi}}^{(0)}=\boldsymbol{\psi}_{\text{MAML}}$. The algorithm for the fast adaptive detector via MAML is summarized in Algorithm 2. \begin{algorithm} \DontPrintSemicolon \SetAlgoLined \tcc{offline training stage} initialize parameter vector $\boldsymbol{\psi}_{\text{MAML}}$\; \While{stopping criterion not satisfied}{ select $N_b$ environments randomly from $N$ offline operating environments\; \For{each selected operating environment $n$}{ compute local parameter vector $\boldsymbol{\theta}_n$ from (\ref{eq: local_update}) based on $\mathcal{D}_n^{s}$ } compute meta-training empirical loss ${\mathcal{L}}(\boldsymbol{\psi}_{\text{MAML}})$ from (\ref{eq:empirical}) based on $\mathcal{D}_n^{q}$\; update parameter vector $\boldsymbol{\psi}_{\text{MAML}}$ via (\ref{eq:initial_update}) } \BlankLine \tcc{adaptation stage} initialize $\boldsymbol{\phi}^{(m)}=\boldsymbol{\psi}_{\text{MAML}}$, and set $m=0$\; \While{stopping criterion not satisfied}{ update receiver parameter vector $\boldsymbol{\phi}$ via (\ref{eq: rx sgd}) } \caption{Design of Fast Adaptive Detector via Meta-Learning} \end{algorithm}
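To illustrate Algorithm 2, the following PyTorch sketch implements the meta-training loop with the first-order approximation mentioned above, i.e., the second-order gradient term in (\ref{eq:initial_update}) is ignored. It is a simplified illustration: each environment is represented by one support batch and one query batch, the iteration count is a placeholder, and the names are ours.
\begin{verbatim}
import copy
import random
import torch

bce = torch.nn.functional.binary_cross_entropy

def maml_offline_training(receiver, env_data, alpha=0.2, beta=0.002,
                          meta_batch=10, iters=1000):
    # env_data[n] = ((z_s, i_s), (z_q, i_q)): support and query batches of env n.
    # First-order variant: the second-order gradient term is ignored.
    meta_opt = torch.optim.SGD(receiver.parameters(), lr=beta)
    for _ in range(iters):                      # iteration count is a placeholder
        meta_opt.zero_grad()
        for (z_s, i_s), (z_q, i_q) in random.sample(env_data, meta_batch):
            # Local update on the support set: theta_n = psi - alpha * grad.
            local = copy.deepcopy(receiver)
            grads = torch.autograd.grad(bce(local(z_s), i_s.float()),
                                        local.parameters())
            with torch.no_grad():
                for p, g in zip(local.parameters(), grads):
                    p -= alpha * g
            # Query loss evaluated at theta_n; its gradient drives the meta-update.
            q_grads = torch.autograd.grad(bce(local(z_q), i_q.float()),
                                          local.parameters())
            for p, g in zip(receiver.parameters(), q_grads):
                p.grad = g if p.grad is None else p.grad + g
        meta_opt.step()
    return receiver                             # parameters now hold psi_MAML
\end{verbatim}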
\section{Numerical Results} This section first introduces the models and parameters used in the simulation setup, and then provides numerical results to evaluate the detection performance of the two proposed approaches. \subsection{Models and Parameters} The target is assumed stationary with a Rayleigh envelope $\alpha\sim\mathcal{CN}(0, \sigma_{\alpha}^2)$, where $\sigma_{\alpha}^2$ is the target power. The noise vector $\mathbf{n}$ has a zero-mean, complex Gaussian distribution with correlation matrix $\boldsymbol{\Omega}_n = \sigma^2_w \mathbf{I} + \sigma_I^2\boldsymbol{\Omega}_I$, where $\sigma^2_w$ is the thermal noise power level, $\sigma^2_I$ is the signal-independent interference power level, and $\boldsymbol{\Omega}_I$ is the correlation matrix of the signal-independent interference. The signal-to-noise ratio is defined as $\text{SNR}\triangleq 10\log_{10}\{\sigma_{\alpha}^2/\sigma_w^2\}$. The signal-to-interference ratio is defined as $\text{SIR}\triangleq 10\log_{10}\{\sigma_{\alpha}^2/\sigma_I^2\}$. The signal-independent interference is located in the frequency band $[f_l, f_u]$, where $f_l$ and $f_u$ represent the lower and upper frequencies normalized by the sampling frequency $f_s$, respectively. Accordingly, the correlation matrix is $[\boldsymbol{\Omega}_I]_{v,h}=f_u-f_l$ if $v=h$, and $[\boldsymbol{\Omega}_I]_{v,h}=[e^{j2\pi f_u (v-h)}-e^{j2\pi f_l (v-h)}]/[j2\pi(v-h)]$ otherwise, with $(v,h)\in\{1,\ldots,K\}^2$. The clutter vector $\mathbf{c}$ is the superposition of returns from $2K-1$ range cells \cite{Stoica2012}, namely \begin{equation} {\mathbf{c}}=\sum_{g=-K+1}^{K-1}{\gamma }_{g}\mathbf{J}_{g}{\mathbf{y}}, \end{equation} where $\mathbf{J}_g$ and $\gamma_g$ represent the shift matrix and the complex clutter scattering coefficient for the $g$th range cell, respectively. The elements of the shift matrix are given by $[\mathbf{J}_g]_{v,h}=1$ if $v-h=g$, and $[\mathbf{J}_g]_{v,h}=0$ otherwise, with $(v,h)\in\{1,\ldots,K\}^2$. The clutter scattering coefficient $\gamma_g$ follows a coherent Weibull distribution with shape parameter $\lambda$ and median ${\sigma}_m$ \cite{Richards2010}. Note that the nominal range of the shape parameter is $0.25\leq\lambda\leq2$ \cite{shape}. When the shape parameter is $\lambda=2$, the clutter scattering coefficients are complex Gaussian random variables. Based on the assumed mathematical models of target, clutter, and noise, the optimal detector in the NP sense is available (see Appendix D of \cite{Wei2021} for details).
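The interference and clutter models above can be simulated directly from their definitions. The following NumPy sketch is illustrative (the function names are ours); it builds the interference correlation matrix $\boldsymbol{\Omega}_I$ entry by entry and forms the clutter return as the superposition of shifted copies of the waveform, with the draw of the coefficients $\gamma_g$ indicated only as a comment.
\begin{verbatim}
import numpy as np

def interference_corr(K, f_l, f_u):
    # Build Omega_I entry by entry from its closed-form expression above.
    v, h = np.meshgrid(np.arange(K), np.arange(K), indexing="ij")
    d = v - h
    omega = np.empty((K, K), dtype=complex)
    np.fill_diagonal(omega, f_u - f_l)
    off = d != 0
    num = np.exp(2j * np.pi * f_u * d[off]) - np.exp(2j * np.pi * f_l * d[off])
    omega[off] = num / (2j * np.pi * d[off])
    return omega

def clutter_return(y, gamma):
    # c = sum_g gamma_g * J_g * y, with g = -K+1, ..., K-1 and J_g a shift matrix;
    # gamma has length 2K-1 and would be drawn from a coherent Weibull distribution
    # with shape parameter lambda and median sigma_m.
    K = len(y)
    c = np.zeros(K, dtype=complex)
    for g in range(-K + 1, K):
        shifted = np.zeros(K, dtype=complex)
        if g >= 0:
            shifted[g:] = y[:K - g]
        else:
            shifted[:K + g] = y[-g:]
        c += gamma[g + K - 1] * shifted
    return c
\end{verbatim}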
The coded waveform $\mathbf{y}$ is a unit-norm linear frequency modulated pulse with $K=16$ chips, namely $\mathbf{y} (k)= e^{j\pi R(k/f_s)^2} / \sqrt{K}$, for $k=0,\ldots, K-1$, with a chirp rate $R=(100\times10^3)/(40\times10^{-6})$ Hz/s and a sampling rate $f_s=200$ kHz. A homogeneous clutter environment is considered, and the median of the clutter scattering coefficient is set to $\sigma_m=0.0004$. The offline training stage is performed at $\text{SNR}_{\text{tr}}= 24\text{ dB}$ and consists of $N=40$ different operating environments, which include two types of clutter distributions $\lambda_{\text{tr}}\in\{0.25, 2\}$, two signal-to-interference ratios $\text{SIR}_{\text{tr}}\in \{10\text{ dB}, 17\text{ dB}\}$, and ten different correlation matrices $\boldsymbol{\Omega}_I$ with a fixed frequency difference between the upper and lower normalized frequencies, i.e., $f_{u, \text{tr}}-f_{l,\text{tr}}=0.1$. The offline dataset $\mathcal{D}_n$ for each operating environment consists of $Q=4\times 10^5$ samples, equally divided between the $\mathcal{H}_0$ and $\mathcal{H}_1$ hypotheses. The adaptation stage is performed at $\text{SNR}_{\text{a}}=20\text{ dB}$ and $\text{SIR}_{\text{a}}=16\text{ dB}$. The upper and lower normalized frequencies are $f_{u, a}=0.6$ and $f_{l, a}=0.4$, respectively. The adaptation dataset $\mathcal{D}_a$ contains $Q_a=8000$ samples, equally divided between the $\mathcal{H}_0$ and $\mathcal{H}_1$ hypotheses. Unless stated otherwise, we use $40$ gradient updates for adaptation in the two proposed approaches, i.e., $m=40$. In the testing stage, $2\times 10^5$ samples under hypothesis $\mathcal{H}_0$ are used to estimate the probability of false alarm $P_{fa}$, while $5\times10^4$ samples under hypothesis $\mathcal{H}_1$ are used to estimate the probability of detection $P_{d}$. Unless stated otherwise, the testing stage is performed at $\text{SNR}_{\text{te}}=13\text{ dB}$. Receiver operating characteristic (ROC) curves are obtained via Monte Carlo simulation by varying the threshold applied at the output of the receiver. The receiver is a feedforward neural network with four layers, i.e., an input layer with $2K=32$ neurons, two hidden layers with $48$ neurons each, and an output layer with $1$ neuron. The sigmoid function is adopted as the activation function for all neurons. In the offline training stage, we use a minibatch of size 128. The learning rates are set to $\alpha=0.2$ and $\beta=0.002$, respectively. For the design of the fast adaptive detector via MAML, the meta batch size is set to $N_b=10$. In the adaptation stage, the batch of adaptation samples is used to estimate the empirical cross-entropy loss (\ref{eq: rx loss grad}), with the learning rate $\alpha=0.002$. \subsection{Results} Fig. \ref{f:adapt_2} illustrates the adaptation capability of the two proposed approaches in the presence of Gaussian clutter, i.e., $\lambda=2$, for a probability of false alarm $P_{fa} = 10^{-3}$. We compare the performance of the two proposed approaches with: (1) training from scratch (no prior knowledge) \cite{Moya2013}, whereby the offline training data $\mathcal{D}$ is not used and the receiver parameter vector $\boldsymbol{\phi}$ is initialized randomly; and (2) an ideal Gaussian detector \cite{Wei2021} having access to the actual operating environment conditions. As shown in the figure, the detection performance of the two proposed approaches is close to the upper bound set by the ideal Gaussian detector \cite{Wei2021} even for a small number of gradient updates. Moreover, MAML is seen to adapt faster than transfer learning. In contrast, conventional learning from scratch performs poorly due to the limited number of gradient updates. \begin{figure}[H] \vspace{-4ex} \hspace{20ex} \includegraphics[width=0.6 \linewidth]{figs/adapt11_2} \vspace{-2ex} \caption{Illustration of the adaptation capability of the two proposed approaches in the presence of Gaussian clutter with $P_{fa}=10^{-3}$.} \label{f:adapt_2} \end{figure} Fig. \ref{f:roc_2} compares the ROC curves of the two proposed approaches in Gaussian clutter with a fixed number of gradient updates during adaptation. Given the limited number of gradient updates, the ROC curve obtained with conventional learning from scratch is not shown due to its poor performance. It is observed that the MAML-based detector outperforms the transfer learning-based detector in the presence of Gaussian clutter. For instance, for $P_{fa}=5\times10^{-3}$, the MAML-based detector yields $P_d=0.74$, while the transfer learning-based detector yields $P_d=0.6$. \begin{figure}[H] \vspace{-2ex} \hspace{20ex} \includegraphics[width=0.6 \linewidth]{figs/roc11_2} \vspace{-2ex} \caption{ROC curves of the transfer learning-based detector and the MAML-based detector in Gaussian clutter.} \label{f:roc_2} \end{figure} The detection performance of the two proposed approaches in non-Gaussian clutter ($\lambda=0.25$) is shown in Fig. \ref{f:roc_025}. When the clutter is non-Gaussian, the optimal detector is not available. Thus, receiver training with a large number of training samples and gradient updates is adopted as the benchmark. The testing stage is performed at $\text{SNR}_{\text{te}}=25$ dB. As shown in the figure, both the transfer learning-based detector and the MAML-based detector provide detection performance comparable to the benchmark.
Moreover, the ideal Gaussian detector \cite{Wei2021} provides the worst detection performance, due to the mismatch between the assumed mathematical models and the actual non-Gaussian testing environment. \begin{figure}[H] \vspace{-2ex} \hspace{20ex} \includegraphics[width=0.6 \linewidth]{figs/roc11_025} \vspace{-2ex} \caption{ROC curves of the transfer learning-based detector and the MAML-based detector in non-Gaussian clutter.} \label{f:roc_025} \end{figure} \section{Conclusions} This paper proposes two methods for learning radar detectors. Each method comprises an offline training stage and an adaptation stage. We have developed two offline training algorithms, both of which enable fast adaptation of detectors with limited data. The offline training stage may be implemented either via transfer learning or via meta-learning. Numerical results have shown that the two proposed approaches achieve remarkable gains over receiver training with no prior knowledge. Moreover, the meta-learning-based detector outperforms the transfer learning-based detector in both Gaussian and non-Gaussian clutter. \section*{Acknowledgment} Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-20-2-0219. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
However, in practice, mismatch between training and testing conditions may result in detection performance loss \cite{Wei2019NN}. To deal with this problem, a straightforward approach is to re-train the detector from scratch based on newly collected data from the current operating environment. However, this approach requires large overhead in terms of data collection and training time. In \cite{Wei2021}, the authors train the detector with a mixture of data from different environments to robustify detection performance with respect to the characteristics of observed data. Transfer learning and meta-learning are two different learning paradigms in machine learning that can be used to address different operating conditions. In transfer learning, the goal is to extract knowledge from source tasks, so as to improve training efficiency on a target task \cite{transfer}. In contrast, meta-learning, or ``learning to learn", aims to infer an inductive bias given the data from multiple related tasks, to enable efficient training on a new task within a certain class \cite{meta2020}. The inferred inductive bias can take the form of a learning procedure or a prior over the model parameters \cite{osvaldo3}. Notably, reference \cite{MAML2017} proposes the model-agnostic meta-learning (MAML) algorithm, which optimizes the initialization for the parameters of a neural network to enable fast adaptation on a new task. Recently, the two learning paradigms have been applied to solve problems in communication systems to achieve fast adaptation, such as resource allocation in wireless networks \cite{resource}, downlink beamforming optimization \cite{beamform}, demodulating over fading channels \cite{osvaldo_journal}, and decoding for convolutional and turbo codes \cite{Mind}. In this work, we aim to design detectors that adapt quickly to the radar operating environment based on few data samples. Unlike the techniques in \cite{Moya2013}, the deep learning-based detector design is separated into two stages, i.e., an offline training stage and an adaptation stage. The goal of the offline training stage is to leverage prior knowledge from multiple radar environments via transfer learning or meta-learning. Being offline, this phase may be implemented with large amounts of data. In the adaptation phase, we refine the training based on few data samples collected from the current radar operating environment. Specific contributions of this work are: (1) we propose a two-stage learning procedure that enables detectors to adapt quickly to the current radar environment; (2) we develop a deep transfer learning-based algorithm for fast adaptation of radar detection; and (3) we develop a meta-learning-based algorithm by leveraging MAML for the design of fast adaptation of radar detection. \section{Problem Formulation} Consider a pulse-compression radar system, in which the system seeks to detect the presence of a single target over a clutter field. The transmitter emits $K$ modulated chips with deterministic complex amplitudes forming a coded waveform $\mathbf{y}=[y_1,\ldots, y_K]^T$. After chip matched filtering and sampling, the discrete-time $K$-dimensional column received signal, for the range cell under test containing a point target, is given by \begin{equation} \mathbf{z} = \alpha\mathbf{y} + \mathbf{c} + \mathbf{n}, \end{equation} where $\alpha$ is the complex target gain; $\mathbf{c}$ is the clutter vector; and $\mathbf{n}$ is the noise vector. 
Detection of the presence of a target in the range cell under test leads to the following binary hypothesis test \begin{equation} \left\{ \begin{aligned} &\mathcal{H}_0:{\mathbf{z}}={\mathbf{c}}+{\mathbf{n}} \\ &\mathcal{H}_1:{\mathbf{z}}=\alpha\mathbf{y}+{\mathbf{c}}+{\mathbf{n}}, \end{aligned} \right. \label{eq:binary hypo} \end{equation} where $\mathcal{H}_0$ and $\mathcal{H}_1$ represent the hypotheses under which the target is absent and present, respectively. The deep learning-based detection system under study is illustrated in Fig. \ref{f:detector}. The receiver is implemented as a parametric function $f_{\boldsymbol{\phi}}(\cdot)$ with trainable parameter vector $\boldsymbol{\phi}$. The radar operating environment is modeled as a stochastic system that produces the vector $\mathbf{z}\in \mathbb{C}^K$ from a likelihood function $p(\mathbf{z}|\mathcal{H}_i)$. The absence or presence of a target is indicated by the values $i=0$ and $i=1$, respectively. The receiver passes the vector $\mathbf{z}$ through a trainable mapping $p=f_{\boldsymbol{\phi}}(\mathbf{z})$, which produces the scalar $p\in (0,1)$. The final decision $\hat{i}\in \{0,1\}$ is made by comparing the output of the receiver $p$ to a hard threshold in the interval $(0,1)$. \begin{figure}[H] \vspace{-8ex} \hspace{16ex} \includegraphics[width=1.4 \linewidth]{figs/illustrate_system} \vspace{-179ex} \caption{A deep-learning-based detector. The receiver is implemented as a parametric function $f_{\boldsymbol{\phi}}(\cdot)$ with trainable parameter vector $\boldsymbol{\phi}$.} \label{f:detector} \end{figure} In the following, we detail an implementation of the receiver $f_{\boldsymbol{\phi}}(\cdot)$ based on a feedforward neural network. Denote the number of neurons at the input and output layers $M_0$ and $M_L$, respectively. A feedforward neural network is a parametric function that defines a mapping $\mathbf{u}_{L}={f}_{\boldsymbol{\phi}}(\mathbf{u}_0)$ from an input real-valued vector $\mathbf{u}_0\in\mathbb{R}^{M_0}$ to an output real-valued vector $\mathbf{u}_L\in\mathbb{R}^{M_L}$ via $L$ successive layers. At the $l$th layer, the intermediate output is \begin{equation} \mathbf{u}_l={f}_{\boldsymbol{\phi}_l} (\mathbf{u}_{l-1})= \sigma(\mathbf{W}_l\mathbf{u}_{l-1}+\mathbf{b}_l), \end{equation} where $\sigma(\cdot)$ represents an element-wise activation function, and $\boldsymbol{\phi}_l=\{\mathbf{W}_l, \mathbf{b}_l\}$ includes the trainable parameters of the $l$th layer consisting of the weight matrix $\mathbf{W}_l$ and the bias vector $\mathbf{b}_l$. The receiver trainable parameter set contains all parameters of the network, and is denoted $\boldsymbol{\phi}=\{\boldsymbol{\phi}_1,\ldots,\boldsymbol{\phi}_L\}$. The input real-valued vector $\mathbf{u}_0$ of the receiver $f_{\boldsymbol{\phi}}(\cdot)$ is the concatenation of the real and imaginary parts of the received vector $\mathbf{z}$. The last layer of the neural network $f_{\boldsymbol{\phi}}(\cdot)$ is selected as a logistic regression layer. The absence or presence of the target is determined by comparing the output of the neural network $f_{\boldsymbol{\phi}}(\cdot)$ with a threshold selected via the false alarm probability. \section{Two-stage Design of Fast Adaptive Receiver} This section proposes a two-stage learning procedure that enables the receiver to adapt quickly to the current radar environment. As illustrated in Fig. \ref{f:receiver}, the two-stage learning procedure consists of an offline training stage and an adaptation stage. 
The goal of the offline training stage is to leverage prior knowledge from data collected in multiple radar environments. The offline training stage can be carried out either via transfer learning or meta-learning. The adaptation stage refines the receiver parameter vector based on few samples from the current radar operating environment. \begin{figure}[H] \vspace{-8ex} \hspace{10ex} \includegraphics[width=1.5 \linewidth]{figs/illustrate} \vspace{-182ex} \caption{Illustration of the two-stage design of the fast adaptive receiver. Offline training can be carried out either via transfer learning or meta-learning.} \label{f:receiver} \end{figure} The offline dataset is a collection of data from $N$ radar operating environments, and is denoted as $\mathcal{D}=\{\mathcal{D}_n \}_{n=1}^N$, where $\mathcal{D}_n=\big\{ \mathbf{z}^{(q)}_n\sim p_n(\mathbf{z}|\mathcal{H}_{i^{(q)}_n}), {i^{(q)}_n}\in\{0,1\} \big\}_{q=1}^{Q}$ contains $Q$ independent and identically distributed (i.i.d.) training samples collected from the $n$th radar environment. The adaptation dataset is denoted as $\mathcal{D}_a=\big\{ \mathbf{z}^{(q)}_a\sim p_a(\mathbf{z}|\mathcal{H}_{i^{(q)}_a}), {i^{(q)}_a}\in\{0,1\} \big\}_{q=1}^{Q_a}$, which contains $Q_a$ i.i.d. samples collected from the current radar operating environment. Note that the number of samples used during the adaptation phase could be much smaller than that used during the offline training phase. The standard cross-entropy \cite{Moya2013} is adopted as the loss function for the receiver. For any dataset $\mathcal{D}_0=\big\{ \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathcal{H}_{i^{(q)}}), {i^{(q)}}\in\{0,1\} \big\}_{q=1}^{Q_0}$ containing $Q_0$ pairs of received signal $\mathbf{z}$ and target state indicator $i$, the empirical cross-entropy loss is a function of the trainable parameter vector $\boldsymbol{\phi}$, and is given by \begin{equation} \begin{aligned} {\mathcal{L}}_{\mathcal{D}_0}(\boldsymbol{\phi})=\frac{1}{Q_0}\sum_{q=1}^{Q_0} -i^{(q)}\ln f_{\boldsymbol{\phi}}(\mathbf{z}^{(q)})-(1-i^{(q)})\ln\big[1- f_{\boldsymbol{\phi}}(\mathbf{z}^{(q)})\big]. \end{aligned} \label{eq: rx loss grad} \end{equation} \subsection{Design of Fast Adaptive Receiver via Transfer Learning} The goal of transfer learning is to extract prior knowledge from the offline dataset $\mathcal{D}$ during the offline training stage, so as to improve training efficiency based on the adaptation dataset $\mathcal{D}_a$. In the offline training stage, we aim to find a shared parameter vector, denoted as $\boldsymbol{\psi}_{\text{TL}}$, that performs well over the $N$ operating environments. Based on the offline dataset $\mathcal{D}$, the shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$ is obtained by minimizing the sum of the empirical losses (\ref{eq: rx loss grad}) over the $N$ operating environments. The shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$ is optimized iteratively according to the stochastic gradient descent (SGD) rule \begin{equation} \boldsymbol{\psi}_{\text{TL}} \leftarrow \boldsymbol{\psi}_{\text{TL}}- \beta\nabla_{\boldsymbol{\psi}_{\text{TL}}}\sum_{n=1}^{N} {\mathcal{L}}_{\mathcal{D}_{n}}(\boldsymbol{\psi}_{\text{TL}}), \label{eq: transfer sgd} \end{equation} where $\beta>0$ is the learning rate. In the adaptation stage, we refine the training based on the adaptation dataset $\mathcal{D}_a$.
The receiver parameter $\boldsymbol{\phi}$ is updated as follows: \begin{equation} \boldsymbol{\phi}^{(m)}=\boldsymbol{\phi}^{(m-1)} -\alpha {\nabla}_{\boldsymbol{\phi}}{\mathcal{L}}_{\mathcal{D}_a}(\boldsymbol{\phi}^{(m-1)}) \label{eq: rx sgd} \end{equation} across iterations $m=1,2,\ldots$ with ${\boldsymbol{\phi}}^{(0)}=\boldsymbol{\psi}_{\text{TL}}$, where $\alpha>0$ is the learning rate. The algorithm of fast adaptive detector via transfer learning is summarized in Algorithm 1. It is finally noted that the approach described in this subsection is also known as joint learning or fine-tuning (e.g., \cite{osvaldo_journal}). A more general implementation of transfer learning would also assume the availability of data from the operating environment, which is not considered here. \begin{algorithm}[] \DontPrintSemicolon \SetAlgoLined \tcc{offline training stage} initialize shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$\; \While{stopping criterion not satisfied}{ evaluate the overall empirical loss $\sum_{n=1}^{N} {\mathcal{L}}_{\mathcal{D}_{n}}(\boldsymbol{\psi}_{\text{TL}})$ over $N$ radar operating environments \; update shared parameter vector $\boldsymbol{\psi}_{\text{TL}}$ via (\ref{eq: transfer sgd}) } \BlankLine \tcc{adaptation stage} initialize $\boldsymbol{\phi}^{(m)}=\boldsymbol{\psi}_{\text{TL}}$, and set $m=0$\; \While{stopping criterion not satisfied}{ update receiver parameter vector $\boldsymbol{\phi}$ via (\ref{eq: rx sgd}) } \caption{Design of Fast Adaptive Detector via Transfer Learning} \end{algorithm} \vspace{-2ex} \subsection{Design of Fast Adaptive Detector via Meta-Learning} As discussed in Section I, MAML aims to find the inferred inductive bias in the form of the initialization of the receiver neural network, denoted as $\boldsymbol{\psi}_{\text{MAML}}$, to enable fast adaptation on the current operating environment. In the offline training stage, the dataset $\mathcal{D}_n$ for the $n$th operating environment is randomly divided into two subsets. One subset of $\mathcal{D}_n$ is referred to as \emph{support set} $\mathcal{D}_n^s$, which is used to update the local receiver parameter $\boldsymbol{\theta}_n$ for the $n$th operating environment. The other subset of $\mathcal{D}_n$ is referred to as \emph{query set} $\mathcal{D}_n^q$, which is used to estimate the meta-learning empirical loss. Mathematically, with a single SGD iteration, we obtain the local update $\boldsymbol{\theta}_n$ according to the support set $\mathcal{D}_n^s$ for the $n$th operating environment \begin{equation} \boldsymbol{\theta}_{n}=\boldsymbol{\psi}_{\text{MAML}}-\alpha \nabla _{\boldsymbol{\psi }_{\text{MAML}} }{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}}). \label{eq: local_update} \end{equation}% Based on the query set $\mathcal{D}_n^q$ and the local update $ \boldsymbol{\theta}_n$ (\ref{eq: local_update}), the meta-training empirical loss is given by \begin{equation} \begin{aligned} {\mathcal{L}}(\boldsymbol{\psi }_{\text{MAML}})&=\sum_{n=1}^{N_b}{\mathcal{L}}_{\mathcal{D}_{n}^{q}}(% \boldsymbol{\theta}_{n})\\ &=\sum_{n=1}^{N_b}{\mathcal{L}}_{\mathcal{D}_{n}^{q}}\big( \boldsymbol{\psi}_{\text{MAML}}-\alpha \nabla _{\boldsymbol{\psi }_{\text{MAML}} }{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}})\big), \end{aligned} \label{eq:empirical} \end{equation} where $N_b$ represents the number of environments selected randomly from $N$ offline operating environments for each meta training update, and is referred to as \emph{meta batch size}. 
The initialization of the receiver parameter vector $\boldsymbol{\psi}_{\text{MAML}}$ is learned via minimizing (\ref{eq:empirical}) through SGD \begin{equation} \begin{aligned} \boldsymbol{\psi}_{\text{MAML}}\leftarrow \boldsymbol{\psi}_{\text{MAML}}-\beta \nabla _{\boldsymbol{\psi}_{\text{MAML}}}{\mathcal{L}}(\boldsymbol{\psi}_{\text{MAML}}) &= \boldsymbol{\psi}_{\text{MAML}} - \beta\sum_{n=1}^{N_b} \nabla _{\boldsymbol{\psi}_{\text{MAML}}}{\mathcal{L}}_{\mathcal{D}_{n}^{q}}(\boldsymbol{\theta}_{n})\\ & = \boldsymbol{\psi}_{\text{MAML}} - \beta \sum_{n=1}^{N_b}\big(\boldsymbol{I}-\alpha \nabla _{\boldsymbol{\psi}_{\text{MAML}}}^2{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}}) \big) \nabla_{\boldsymbol{\theta}_n}\mathcal{L}_{\mathcal{D}_{n}^{q}}(\boldsymbol{\theta}_{n}). \end{aligned} \label{eq:initial_update} \end{equation} Note that the update (\ref{eq:initial_update}) requires calculating the second-order gradient $\nabla _{\boldsymbol{\psi}_{\text{MAML}}}^2{\mathcal{L}}_{\mathcal{D}_{n}^{s}}(\boldsymbol{\psi}_{\text{MAML}})$, which may be treated as constant and further ignored to speed up offline training time \cite{MAML2017}. During the adaptation stage, in a manner similar to (\ref{eq: rx sgd}), the receiver parameter vector $\boldsymbol{\phi}$ is updated according to the gradient ${\nabla}_{\boldsymbol{\phi}}{\mathcal{L}}_{\mathcal{D}_{a}}(\boldsymbol{\phi})$ with ${\boldsymbol{\phi}}^{(0)}=\boldsymbol{\psi}_{\text{MAML}}$. The algorithm of the design of fast adaptive detector via MAML is summarized in Algorithm 2. \begin{algorithm} \DontPrintSemicolon \SetAlgoLined \tcc{offline training stage} initialize parameter vector $\boldsymbol{\psi}_{\text{MAML}}$\; \While{stopping criterion not satisfied}{ select $N_b$ environments randomly from $N$ offline operating environments\; \For{each selected operating environment $n$}{ compute local parameter vector $\boldsymbol{\theta}_n$ from (\ref{eq: local_update}) based on $\mathcal{D}_n^{s}$ } compute meta-training empirical loss ${\mathcal{L}}(\boldsymbol{\psi}_{\text{MAML}})$ from (\ref{eq:empirical}) based on $\mathcal{D}_n^{q}$\; update parameter vector $\boldsymbol{\psi}_{\text{MAML}}$ via (\ref{eq:initial_update}) } \BlankLine \tcc{adaptation stage} initialize $\boldsymbol{\phi}^{(m)}=\boldsymbol{\psi}_{\text{MAML}}$, and set $m=0$\; \While{stopping criterion not satisfied}{ update receiver parameter vector $\boldsymbol{\phi}$ via (\ref{eq: rx sgd}) } \caption{Design of Fast Adaptive Detector via Meta-Learning} \end{algorithm} \section{Numerical Results} This section first introduces models and parameters used in the simulation setup, and provides numerical results to evaluate the detection performance of the two proposed approaches. \subsection{Models and Parameters} The target is assumed stationary with a Rayleigh envelope $\alpha\sim\mathcal{CN}(0, \sigma_{\alpha}^2)$, where $\sigma_{\alpha}^2$ is the target power. The noise vector $\mathbf{n}$ has a zero-mean, complex Gaussian distribution with correlation matrix $\boldsymbol{\Omega}_n = \sigma^2_w \mathbf{I} + \sigma_I^2\boldsymbol{\Omega}_I$, where $\sigma^2_w$ is the thermal noise power level, $\sigma^2_I$ is signal-independent interference power level, and $\boldsymbol{\Omega}_I$ is the correlation matrix of the signal-independent interference. The signal-to-noise ratio is defined as $\text{SNR}\triangleq 10\log_{10}\{\sigma_{\alpha}^2/\sigma_w^2\}$. 
The signal-to-interference ratio is defined as $\text{SIR}\triangleq 10\log_{10}\{\sigma_{\alpha}^2/\sigma_I^2\}$. The signal-independent interference is located in the frequency band $[f_l, f_u]$, where $f_l$ and $f_u$ represent the lower and upper frequencies normalized by the sampling frequency $f_s$, respectively. Accordingly, the correlation matrix is $[\boldsymbol{\Omega}_I]_{v,h}=f_u-f_l$ if $v=h$, and $[\boldsymbol{\Omega}_I]_{v,h}=[e^{j2\pi f_u (v-h)}-e^{j2\pi f_l (v-h)}]/[j2\pi(v-h)]$ otherwise, with $(v,h)\in\{1,\ldots,K\}^2$. The clutter vector $\mathbf{c}$ is the superposition of returns from $2K-1$ range cells \cite{Stoica2012}, namely \begin{equation} {\mathbf{c}}=\sum_{g=-K+1}^{K-1}{\gamma}_{g}\mathbf{J}_{g}{\mathbf{y}}, \end{equation} where $\mathbf{J}_g$ and $\gamma_g$ represent the shift matrix and the complex clutter scattering coefficient for the $g$th range cell, respectively. Elements of the shift matrix are given by $[\mathbf{J}_g]_{v,h}=1$ if $v-h=g$, and $[\mathbf{J}_g]_{v,h}=0$ otherwise, with $(v,h)\in\{1,\ldots,K\}^2$. The clutter scattering coefficient $\gamma_g$ follows a coherent Weibull distribution with shape parameter $\lambda$ and median ${\sigma}_m$ \cite{Richards2010}. Note that the nominal range of the shape parameter is $0.25\leq\lambda\leq2$ \cite{shape}. When the shape parameter $\lambda=2$, the clutter scattering coefficients are complex Gaussian random variables. Based on the assumed mathematical models of target, clutter, and noise, the optimal detector in the NP sense is available (see Appendix D of \cite{Wei2021} for details). The coded waveform $\mathbf{y}$ is a unit norm linear frequency modulated pulse with $K=16$ chips, namely $\mathbf{y}(k)= e^{j\pi R(k/f_s)^2} / \sqrt{K}$, for $k=0,\ldots, K-1$, with a chirp rate $R=(100\times10^3)/(40\times10^{-6})$ Hz/s and a sampling rate $f_s=200$ kHz. A homogeneous clutter environment is considered. The median of the clutter scattering coefficient is set to $\sigma_m=0.0004$. The offline training stage is performed at $\text{SNR}_{\text{tr}}= 24\text{ dB}$ and consists of $N=40$ different operating environments, which include two types of clutter distributions $\lambda_{\text{tr}}\in\{0.25, 2\}$, two signal-to-interference ratios $\text{SIR}_{\text{tr}}\in \{10\text{ dB}, 17\text{ dB}\}$, and ten different correlation matrices $\boldsymbol{\Omega}_I$ with a fixed frequency difference between the upper and lower normalized frequencies, i.e., $f_{u, \text{tr}}-f_{l,\text{tr}}=0.1$. The offline dataset for each operating environment $\mathcal{D}_n$ consists of $Q=4\times 10^5$ samples, equally divided between the $\mathcal{H}_0$ and $\mathcal{H}_1$ hypotheses. The adaptation stage is performed at $\text{SNR}_{\text{a}}=20\text{ dB}$ and $\text{SIR}_{\text{a}}=16\text{ dB}$. The upper and lower normalized frequencies are $f_{u, a}=0.6$ and $f_{l, a}=0.4$, respectively. The adaptation dataset $\mathcal{D}_a$ contains $Q_a=8000$ samples, equally divided between the $\mathcal{H}_0$ and $\mathcal{H}_1$ hypotheses. Unless stated otherwise, we use $40$ gradient updates for adaptation in the two proposed approaches, i.e., $m=40$. In the testing stage, $2\times 10^5$ samples under hypothesis $\mathcal{H}_0$ are used to estimate the probability of false alarm $P_{fa}$, while $5\times10^4$ samples under hypothesis $\mathcal{H}_1$ are used to estimate the probability of detection $P_{d}$.
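For reference, the waveform and clutter models specified above can be generated as in the sketch below (Python/NumPy). The shape--scale parametrization of the Weibull amplitude and the independent uniform phase are one possible reading of the coherent Weibull model and should be viewed as assumptions of the sketch rather than part of the formal setup.
\begin{verbatim}
import numpy as np

K, fs = 16, 200e3
R = 100e3 / 40e-6                                        # frequency sweep rate (Hz/s)
k = np.arange(K)
y = np.exp(1j * np.pi * R * (k / fs)**2) / np.sqrt(K)    # unit-norm LFM waveform

def shift_matrix(g):
    # [J_g]_{v,h} = 1 if v - h = g, and 0 otherwise
    v, h = np.indices((K, K))
    return (v - h == g).astype(float)

def clutter(rng, shape_lam=2.0, median=4e-4):
    # c = sum_g gamma_g J_g y; amplitude ~ Weibull(shape, scale), phase ~ uniform
    scale = median / np.log(2.0) ** (1.0 / shape_lam)    # median = scale * ln(2)^(1/shape)
    c = np.zeros(K, dtype=complex)
    for g in range(-K + 1, K):
        gamma = scale * rng.weibull(shape_lam) * np.exp(2j * np.pi * rng.random())
        c += gamma * (shift_matrix(g) @ y)
    return c

rng = np.random.default_rng(0)
c = clutter(rng)                                         # one clutter realization
\end{verbatim}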
Unless stated otherwise, the testing stage is performed at $\text{SNR}_{\text{te}}=13\text{ dB}$. Receiver operating characteristic (ROC) curves are obtained via Monte Carlo simulation by varying the threshold applied at the output of the receiver. The receiver is a feedforward neural network with four layers, i.e., an input layer with $2K=32$ neurons, two hidden layers with $48$ neurons, and an output layer with $1$ neuron. The sigmoid function is adopted as the activation function for all neurons. In the offline training stage, we use a minibatch of size 128. The learning rates are set to $\alpha=0.2$ and $\beta=0.002$, respectively. For the design of fast adaptive detector via MAML, the meta batch size is set to $N_b=10$. In the adaptation stage, the batch of adaptation samples is adopted to estimate the empirical cross-entropy loss (\ref{eq: rx loss grad}) with the learning rate $\alpha=0.002$. \subsection{Results} Fig. \ref{f:adapt_2} illustrates the adaptation capability of the two proposed approaches in the presence of Gaussian clutter, i.e., $\lambda=2$, given the probability of false alarm $P_{fa} = 10^{-3}$. We compare the performance of the two proposed approaches with: (1) training from scratch (no prior knowledge) \cite{Moya2013}, whereby offline training data $\mathcal{D}$ is not used, and the receiver parameter vector $\boldsymbol{\phi}$ is initialized randomly; (2) an ideal Gaussian detector \cite{Wei2021} having access to the actual operating environment conditions. As shown in the figure, the detection performance of the proposed two approaches is close to the upper bound set by the ideal Gaussian detector \cite{Wei2021} even for a small number of gradient updates. Moreover, MAML is seen to adapt faster as compared with transfer learning. In contrast, conventional learning from scratch performs poorly due to the limited number of gradient updates. \begin{figure}[H] \vspace{-4ex} \hspace{20ex} \includegraphics[width=0.6 \linewidth]{figs/adapt11_2} \vspace{-2ex} \caption{Illustration of the adaptation capability of the two proposed approaches in the presence of Gaussian clutter with $P_{fa}=10^{-3}$.} \label{f:adapt_2} \end{figure} Fig. \ref{f:roc_2} compares ROC curves of the two proposed approaches in Gaussian clutter with the fixed number of gradient updates during the adaptation. With the limited number of gradient updates, the ROC curve obtained based on conventional learning from scratch is not shown due to poor performance. It is observed that the MAML-based detector outperforms the transfer learning-based detector in the presence of Gaussian clutter. For instance, for $P_{fa}=5\times10^{-3}$, MAML-based detector yields $P_d=0.74$, while transfer learning-based detector yields $P_d=0.6$. \begin{figure}[H] \vspace{-2ex} \hspace{20ex} \includegraphics[width=0.6 \linewidth]{figs/roc11_2} \vspace{-2ex} \caption{ROC curves of transfer learning-based detector and MAML-based detector in Gaussian clutter.} \label{f:roc_2} \end{figure} Detection performance of the proposed two approaches in non-Gaussian clutter $\lambda=0.25$ is shown in Fig. \ref{f:roc_025}. When the clutter is non-Gaussian, the optimal detector is not available. Thus, receiver training with a large number of training samples and gradient updates is adopted as the benchmark. The testing stage is performed at $\text{SNR}_{\text{te}}=25$ dB. As shown in the figure, both the transfer learning-based detector and MAML-based detector provide comparable detection performance with the benchmark. 
Moreover, the ideal Gaussian detector \cite{Wei2021} provides the worst detection performance due to the mismatch between the assumed mathematical models and the actual non-Gaussian testing environment. \begin{figure}[H] \vspace{-2ex} \hspace{20ex} \includegraphics[width=0.6 \linewidth]{figs/roc11_025} \vspace{-2ex} \caption{ROC curves of transfer learning-based detector and MAML-based detector in non-Gaussian clutter.} \label{f:roc_025} \end{figure} \section{Conclusions} This paper proposes two methods for learning radar detectors. Each method comprises an offline training stage and an adaptation stage. We have developed two offline training algorithms, both of which enable fast adaptation of detectors with limited data. The offline training stage may be implemented either via transfer learning or meta-learning. Numerical results have shown that the two proposed approaches achieve substantial gains over training the receiver with no prior knowledge. Moreover, the meta-learning-based detector outperforms the transfer learning-based detector in both Gaussian and non-Gaussian clutter. \section*{Acknowledgment} Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-20-2-0219. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
\section{Introduction} Binary prediction or classification is a fundamental problem in statistics and machine learning with applications in many scientific disciplines including economics. The predictive ability of statistical models used for binary classification is frequently evaluated through the receiver operating characteristic (ROC) curve, designed to summarize the tradeoffs between the probability of a true positive prediction (vertical axis) and a false positive prediction (horizontal axis) as one combines a predictive index with a varying classification threshold. Though its origins are in the signal detection and medical diagnostics literature, in recent years ROC analysis has become increasingly common in financial and economic applications as well (e.g., Anjali and Bossaerts 2014; Bazzi et al.\ 2021; Bonfim et al.\ 2021; Berge and Jorda 2011; Kleinberg et al.\ 2018; Lahiri and Wang 2013; Lahiri and Yang 2018; McCracken et al.\ 2021; Schularik and Taylor 2012 and many others). While there is a large literature on the statistical properties of empirical ROC curves, the standard distributional theory assumes that the signal or predictive index used for classification is either directly observed---it is ``raw data''---or that it is a fixed function of raw data. However, if the signal itself is generated from an underlying regression model with estimated coefficients, conducting in-sample inference about ROC curves based on the traditional theory can be highly misleading. For instance, Demler et al.\ (2012) point out that the standard DeLong et al.\ (1988) test for comparing AUCs for different (but potentially correlated) signals can lead to flawed inference if the signals come from nested models with estimated coefficients.\footnote{AUC stands for ``area under [the ROC] curve''. It is an overall performance measure for binary prediction models. AUC=1/2 corresponds to no predictive power.} Similarly, Lieli and Hsu (2019) demonstrate that the asymptotic normality results in Bamber (1975) are inappropriate for testing AUC=1/2 for models with estimated parameters. The central contribution of this paper is the development of a general functional limit theory for the empirical ROC curve that takes the pre-estimation effect into account. Regarding the ROC curve as a random function defined over the [0,1] interval, we provide a uniform influence function representation theorem, and show that the difference between the empirical and population ROC curves converges weakly to a mean zero Gaussian process with a given covariance structure at the parametric rate. These results constitute a non-trivial extension of the functional limit result in Hsieh and Turnbull (1996), who work under the assumption that the observations available on the predictive index are i.i.d.\ conditional on the outcome. (If the predictive index depends on coefficients estimated in-sample, this assumption is no longer valid, as the parameter estimates depend on all data points.) Our results not only allow for the construction of a uniform confidence band for the ROC curve but also facilitate model selection through handling virtually any comparison between two correlated ROC curves (e.g., testing dominance or partial dominance; testing the difference between AUCs or partial AUCs, etc.). In terms of implementation, we propose two methods to simulate the limiting distribution of the empirical ROC curve, one of which is the weighted bootstrap by Ma and Kosorok (2005).
Although some type of bootstrap procedure would be a natural way to approach the pre-estimation problem in practice even without a theory, our results provide rigorous justification and guidance for doing this. A second contribution of the theory developed in this paper is that it provides insight into what determines the impact of the first stage estimation error on the asymptotic distribution of the ROC curve. The derivatives of the true and false positive prediction rates with respect to the first stage model parameters play a central role --- if these gradients vanish at the pseudo-true parameter values, then so does the estimation effect. Nevertheless, the gradients also depend on the classification threshold and will not generally be negligible along the entire ROC curve. For associated functionals, it is the gradient of the functional that drives the estimation effect. Some functionals, e.g., the area under the curve, have the property that they are maximal when the predictive index is given by $p(X)=P(Y=1|X)$, the conditional probability of a positive outcome $Y$ given the covariates $X$. For such functionals the estimation effect is negligible when the first stage model is correctly specified for $p(X)$ and the first stage estimator converges at the root-n rate. Nevertheless, the first stage estimation error will generally affect the asymptotic distribution under misspecification; thus, our results facilitate robust inference. There are additional technical contributions that are more subtle. In employing standard empirical process techniques to derive our results, we make most of the fact that the population and sample ROC curves are invariant to monotone increasing transformations of the predictive index. This observation allows us to leverage powerful assumptions that may seem restrictive at first glance. In particular, we use the assumption that the density $f_0$ of the predictive index conditional on $Y=0$ is bounded away from zero to derive various uniform approximations. The problem is that even if the individual predictors have densities bounded away from zero (already a big \textit{if}), the predictive index may not share this property, as it often involves a linear combination of the predictors.\footnote{Think of the sum of two independent uniform[0,1] random variables or the central limit theorem for that matter.} Nevertheless, one can always find a strictly increasing transformation of the predictive index, say, the probability integral transform, so that the post-transform $f_0$ will be greater than some $\epsilon>0$ across the whole support. What matters for the asymptotic theory is the properties of the likelihood ratio $f_1/f_0$, which is invariant to monotone increasing transformations (here $f_1$ is the conditional density of the predictive index given $Y=1$). In particular, uniform inference is possible only for parts of the ROC curve that are generated by thresholds falling into some interval $[c_L,c_U]$ over which $f_1/f_0$ is bounded and bounded away from zero.\footnote{That such an interval exists is a weak assumption; that it coincides with the support of $f_0$ is a much stronger one.} But given the properties of the likelihood ratio, one is free to assume the theoretically most convenient scenario about the individual density $f_0$ that is achievable through monotone transformations, even if this transformation is not implemented or even identified. We must also point out some technical limitations of the paper. 
First, we do not allow for serial dependence in the data, precluding time series applications such as the evaluation of recession forecasting models. Nevertheless, our proofs rely mostly on high level conditions; specifically, the asymptotically linear representation of the first stage estimator, the stochastic equicontinuity of the empirical process defined by the (pseudo-true) predictive index, and the uniform continuity of some derivatives. Given the availability of these conditions for stationary, weakly dependent time series, we conjecture that our representation results should generalize to this setting with relatively straightforward modifications. However, simulating the asymptotic distribution of the limiting process would require more complex procedures and we do not pursue this extension here. Second, the predictive model evaluated at the pseudo-true parameter values must have strictly positive variance. This is not an innocent assumption in that it rules out a completely uninformative predictive model. For example, our results are not suitable for testing the hypothesis that AUC=1/2; see Lieli and Hsu (2019) for some specialized results in this very non-standard case. More generally, in using our results to compare two ROC curves, the difference of the two influence functions evaluated at the pseudo-true parameter values must have strictly positive variance as well. This condition can be violated when the first stage models are nested, and we are currently working on some test procedures that are applicable in this scenario as well. Finally, we only consider parametric estimators of $p(X)$ as the first stage model; nonparametric estimators that converge slower than the root-n rate are ruled out. One might discount the practical relevance of our theoretical results discussed above based on the fact that the first stage estimation problem can be avoided by conducting out-of-sample evaluation. If the ROC curve is constructed over a test sample that is independent of the training sample used to estimate the first stage model, then the asymptotic validity of the standard inference procedures is restored. We acknowledge this point but offer two responses. First, out-of-sample evaluation is costly: it leads to power loss in model comparisons and the potential dependence of the results on the particular split(s) used. In fact, one could argue that out-of-sample evaluation is a necessity forced on practitioners by the fact that it is often very difficult to characterize analytically or in a practically useful way how goodness-of-fit measures behave over the training sample so that one can compensate for overfitting. In this case we do provide such a result. Second, apart from dealing with the pre-estimation problem, our results provide a unified framework for conducting uniform inference, and comparing ROC curves estimated over the same sample in virtually any way. This work has ties to several strands of the statistics and econometrics literature. We have already cited a number of classic works on the statistical properties of the empirical ROC curve that maintain the assumption of a directly observed signal (Bamber 1975, DeLong et al.\ 1988, Hsieh and Turnbull 1996). It is the last of these papers that is closest to ours; however, the pre-estimation effect is obviously missing from their framework and they actually do not exploit their functional limit result for inference apart from re-deriving the asymptotic normality result of Bamber (1975) for the empirical AUC. 
Instead, they focus on estimating the ROC curve under an additional ``binormal'' assumption, i.e., when the signal has a normal distribution conditional on both outcomes. Pre-estimation problems have a long history in the literature; for example, Pagan (1984) studied the distributional consequences of including ``generated regressors'' into a regression model. As mentioned above, Demler et al.\ (2012) pointed out the relevance of the pre-estimation effect in the context of ROC analysis. More generally, our work is related to papers dealing with two-step estimators where the first step involves estimating some nuisance parameter whose sampling variation potentially affects the otherwise well-understood second stage. Abadie and Imbens (2016) is a relatively recent example in the context of matching estimators. The application of our results in testing for dominance relations and AUC differences across ROC curves has similarities to stochastic dominance tests; see, e.g., Barrett and Donald (2003), Linton, Maasoumi and Whang (2005), Linton, Song and Whang (2010) and Donald and Hsu (2016). Some papers, such as Linton, Maasoumi and Whang (2005) and Linton, Song and Whang (2010), even allow for generated variables in this context. Finally, the paper speaks indirectly to the forecasting literature on the relative merits of in-sample vs.\ out-of-sample model evaluation (e.g., Inoue and Kilian 2004, Clark and McCracken 2012). The connection lies in the fact that we extend the scope of in-sample evaluation methods in binary prediction. The rest of the paper is organized as follows. Section \ref{sec: binpred} sets up the prediction framework and introduces the ROC curve along with some of its basic properties. Section \ref{sec: pointwise results} discusses the estimation effect in detail and presents pointwise (fixed-cutoff) asymptotic results. The functional limit theory is contained in Section \ref{sec: uniform results}. In Section \ref{sec: inference} we show how to use the abstract results for conducting inference about the ROC curve; we discuss dominance testing, AUC comparisons, etc. Section \ref{sec: simulations} presents Monte Carlo results highlighting the impact of first stage estimation on the distribution of the ROC curve. Section~\ref{sec: concl} concludes. \section{Making and evaluating binary predictions}\label{sec: binpred} \subsection{Cutoff rules and the ROC curve} Let $Y\in\{0,1\}$ be a Bernoulli random variable representing some outcome of interest and $X$ be a $k\times 1$ vector of covariates (predictors). We consider point forecasts (classifications) of $Y$ that are constructed by combining a scalar predictive index $G(X)$ with a suitable cutoff (threshold) $c$. More specifically, the prediction rule for $Y$ is given by \begin{equation}\label{eq: cutoff rule} \hat Y(c)=1[G(X)>c], \end{equation} where $1[.]$ is the indicator function. The role of the function $G: \reals^k\rar\reals$ is to aggregate the information that the predictors contain about $Y$ while the choice of $c$ governs the use of this information. With $G$ and $c$ unrestricted, (\ref{eq: cutoff rule}) represents a very general class of prediction rules. In many binary prediction problems there is also a loss function $\ell(\hat y, y)$ that describes the cost of predicting $Y=\hat y$ when the realized outcome is $Y=y$. If the decision maker's objective is to minimize expected loss conditional on $X$, the optimal choice of $G$ and $c$ is determined as follows. 
Given the observed value of $X$, the optimal point forecast of $Y$ solves \begin{equation}\label{exp loss min} \min_{\hat y\in\{0,1\} } E[\ell(\hat y, Y)|X]=\min_{\hat y\in\{0,1\} }\big[\ell(\hat y, 1)p(X)+\ell(\hat y, 0)(1-p(X))\big], \end{equation} where $p(X)=\P(Y=1|X)$ is the conditional probability of $Y=1$ given $X$. Adopting the normalization $\ell(0,0)=\ell(1,1)=0$ and assuming $\ell(0,1)>0$ and $\ell(1,0)>0$, it is straightforward to verify that the optimal prediction rule is given by \begin{equation}\label{eq: opt dec rule} \hat Y^*(c_\ell)=1[p(X)>c_\ell], \end{equation} where $c_\ell=\ell(1,0)/[\ell(0,1)+\ell(1,0)]$.\footnote{In case $p(X)=c_\ell$, which is often a zero probability event, the decision maker is indifferent between predicting 0 or 1. The formula stated above arbitrarily specifies $\hat Y=0$ in this case.} Equation (\ref{eq: opt dec rule}) reveals that the optimal predictive index is $p(X)$ regardless of $\ell$ while the optimal choice of $c$ is fully determined by $\ell$ (specifically, by the relative cost of a false alarm versus a miss). This simple observation motivates a two-step empirical strategy in binary prediction.\footnote{See Elliott and Lieli (2013) for an alternative approach where the decision rule (\ref{eq: opt dec rule}) is estimated in a single step based on a specific loss function.} First, one models and estimates $p(X)$ using data on $(Y,X)$; a common approach is to specify a parametric model $G(X,\beta)$ for $p(X)$ and to estimate it by maximum likelihood (logistic regression is a leading example). In the second step a point forecast is obtained by combining the estimated conditional probability $G(X,\hat\beta)$ with a suitable cutoff, which depends on the forecaster's or forecast user's preferences. Thus, there is a separation between the construction of the predictive index, representing the objective information available to the forecaster, and the use of that information, governed by the loss function. We will now define the population \emph{receiver operating characteristic} (ROC) curve. Let $G(X,\beta)$ be a predictive index with a fixed value of the parameter $\beta$.\footnote{The following definitions do not depend on the parametric structure and generalize immediately to any predictive index $G(X)$. We work with a parametric specification in anticipation of studying the pre-estimation step.} Combined with a cutoff $c$, the resulting prediction rule produces true positive predictions and false positive predictions (false alarms) with the following probabilities: \begin{eqnarray*} TP(c,\beta)&=&\P[\hat Y(c)=1\mid Y=1]=\P[G(X,\beta)>c\mid Y=1]\\ FP(c,\beta)&=&\P[\hat Y(c)=1\mid Y=0]=\P[G(X,\beta)>c\mid Y=0], \end{eqnarray*} where TP and FP stand for the rate of ``true positive'' and ``false positive'' predictions, respectively. As the cutoff $c$ varies, both quantities change in the same direction; in general, TP can be increased only at the cost of increasing FP as well. The ROC curve traces out all attainable (FP, TP) pairs in the $[0,1]\times[0,1]$ unit square, i.e., it is the locus \[ \Big\{\big(FP(c,\beta), TP(c,\beta)\big): c\in\reals\Big\}. \] Intuitively, the ROC curve is a way of summarizing the information content of $G(X,\beta)$ about the outcome $Y$ without committing to any particular cutoff, i.e., loss function. The use of such a forecast evaluation tool is particularly appropriate in situations in which there are many potential forecast users with diverse loss functions; see Lieli and Nieto-Barthaburu (2010).
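To fix ideas, once a predictive index is available, the (FP, TP) locus is easily traced out in a sample. The following sketch (Python/NumPy) computes the points of an empirical ROC curve for an arbitrary index; the data-generating process shown is purely illustrative, and the formal sample analog is defined in Section~\ref{subsec: sample ROC} below.
\begin{verbatim}
import numpy as np

def empirical_roc(index, y):
    # trace out (FP(c), TP(c)) using each observed index value (and -inf) as a cutoff c
    index, y = np.asarray(index, dtype=float), np.asarray(y, dtype=int)
    cutoffs = np.concatenate(([-np.inf], np.unique(index)))
    tp = np.array([(index[y == 1] > c).mean() for c in cutoffs])
    fp = np.array([(index[y == 0] > c).mean() for c in cutoffs])
    return fp, tp

# illustrative data-generating process with a logit-type index
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))   # p(X) = P(Y=1|X)
y = rng.binomial(1, p)
fp, tp = empirical_roc(p, y)                 # points on the sample ROC curve
\end{verbatim}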
It is also clear from the definition that the ROC curve is invariant to strictly monotone transformations of $G(X,\beta)$. The ROC curve based on the true conditional probability function $p(X)$ possesses some optimality properties. To state these in a parametric modeling framework, we introduce the following correct specification assumption. \begin{assumption}\label{assn: correct spec} There exists some point $\beta^\circ$ in the parameter space $\mathcal{B}\subseteq\reals^p$ such that $G(X,\beta^\circ)=p(X)$ almost surely. \end{assumption} We state the following result. \begin{proposition}\label{prop: p(X) ROC opt} (i) Given Assumption~\ref{assn: correct spec}, $\beta^\circ$ solves the following maximization problem for any value of $c$: \begin{equation* \max_\beta \big[ (1-c)\pi TP(c,\beta)-c(1-\pi)FP(c,\beta)\big], \end{equation*} where $\pi=\P(Y=1)$. (ii) Given Assumption~\ref{assn: correct spec}, define $F^\circ_c= FP(c,\beta^\circ)$. Then $\beta^\circ$ also solves the following constrained maximization problem for any value of $c$: \begin{equation* \max_\beta TP(c,\beta)\text{ s.t. }FP(c,\beta)=F^\circ_c \end{equation*} \end{proposition} \paragraph{Remarks:} \begin{enumerate} \item Part (i) is a consequence of the predictor (\ref{eq: opt dec rule}) solving (\ref{exp loss min}) for any given value of $X$. To see this, note that by the law of iterated expectations and the monotonicity of the expectation operator, $\hat Y^*(c_\ell)$ also solves the unconditional expected loss minimization problem $\min_{\hat Y}E_{XY}[\ell(\hat Y,Y)]$, where the minimization is over all random variables $\hat Y$ that are (measurable) transformations of $X$. It is easy to verify that $E[\ell(\hat Y,Y)]$ can be written as \[ [\ell(0,1)+\ell(1,0)]\cdot\big[ c_\ell(1-\pi)\P(\hat Y=1\mid Y=0)-(1-c_\ell)\pi \P(\hat Y=1\mid Y=1)\big]+\pi\ell(0,1), \] which immediately implies the result. \item Part (ii) is a consequence of part (i) and it means that for any given FP rate, it is the ROC curve based on $p(X)$ that achieves the largest possible TP rate. In other words, the ROC curve associated with $p(X)$ weakly dominates any other ROC curve that is constructed based on some index $G(X)$.\footnote{To see this, fix a false positive rate $F_0\in[0,1]$, and find the cutoff $c_0$ that produces $FP(\beta^\circ, c_0)=F_0$ (for simplicity, assume that exact equality can be achieved). Let $\beta'$ be any other parameter value satisfying $FP(\beta', c_0)=F_0$. Proposition~\ref{prop: p(X) ROC opt}(i) implies $$(1-c_0)\pi TP(c_0,\beta^\circ)-c_0(1-\pi)FP(c_0,\beta^\circ)\ge (1-c_0)\pi TP(c_0,\beta')-c_0(1-\pi)FP(c_0,\beta')$$ so that $TP(c_0,\beta^\circ)\ge TP(c_0,\beta')$.} \item These results are not new; they have appeared in the ROC literature in alternative formulations. See, e.g., Egan (1975) and Pepe (2003, Section 4). \end{enumerate} \subsection{The sample ROC curve and conventional inference}\label{subsec: sample ROC} Throughout the paper we maintain the assumption that the available data consists of a random sample. More formally: \begin{assumption}\label{assn: iid} The sample $\{(Y_i,X_i)\}_{i=1}^n$ consists of independent and identically distributed observations on the random vector $(Y,X)\in\{0,1\}\times\reals^k$. 
\end{assumption} Given the sample and a \emph{fixed} value of $\beta$, the empirical ROC curve is defined as the locus $\big\{(\widehat{FP}(c,\beta), \widehat{TP}(c,\beta)): c\in\reals\big\}\subset [0,1]\times[0,1]$, where \begin{eqnarray*} \widehat{TP}(c,\beta)&=&\frac{1}{n_1}\sum_{i=1}^n 1[G(X_i,\beta)>c, Y_i=1]\\ \widehat{FP}(c,\beta)&=&\frac{1}{n_0}\sum_{i=1}^n 1[G(X_i,\beta)>c, Y_i=0], \end{eqnarray*} and $n_1=\sum_{i=1}^n Y_i$, $n_0=n-n_1$. The simplest type of inference about an ROC curve involves constructing (joint) confidence intervals for $TP(c,\beta)$ and $FP(c,\beta)$ for one threshold $c$ at a time and for fixed values of the coefficient vector $\beta$. In this case one can use the CLT to arrive at the normal approximations: \begin{eqnarray} &&\sqrt{n}[\widehat{TP}(c,\beta)-TP(c,\beta)]\rar_d N\big[0, TP(1-TP)/\pi\big] \label{eq: TP asy dist}\\ &&\sqrt{n}[\widehat{FP}(c,\beta)-FP(c,\beta)]\rar_d N\big[0, FP(1-FP)/(1-\pi)\big], \label{eq: FP asy dist} \end{eqnarray} where TP is a shorthand for $TP(c,\beta)$ (and similarly for FP). These results immediately provide asymptotic confidence intervals for TP and FP, and a joint confidence rectangle is also easy to construct due to the independence of $\widehat{TP}(c,\beta)$ and $\widehat{FP}(c,\beta)$; see Pepe (2003), Section 2.2.2 for details.\footnote{The two statistics are independent because they are computed from two disjoint sets of observations; namely, the $Y=1$ and $Y=0$ subsamples.} Furthermore, for fixed $\beta$ one can apply the asymptotic normality results in Bamber (1975) to conduct inference about the AUC, and the DeLong et.\ al.\ (1988) test for comparing the areas under ROC curves based on different (but non-random) values of $\beta$. The nonparametric functional limit result in Hsieh and Turnbull (1996) also applies. \section{In-sample inference: pointwise asymptotics}\label{sec: pointwise results} We will now develop a comprehensive theory of in-sample inference about individual points on the ROC curve taking the pre-estimation effect into account. We present both analytical results and results based on the weighted bootstrap. We start by describing the setup and stating some technical conditions. \subsection{First stage estimation and technical assumptions} The sample $\{(Y_i,X_i)\}_{i=1}^n$ now plays a dual role. First, it is used to construct an estimated parameter vector $\hat\beta=\hat\beta_n$. Typically, $\hat\beta$ consists of an intercept and slope coefficients from some type of regression of $Y$ on $X$ (e.g., linear, logit or probit). Second, the same sample is used to compute the predictive index values $G(X_i,\hat\beta)$, $i=1,\ldots, n$, and to construct the empirical ROC curve as described in Section \ref{subsec: sample ROC}. We impose the following high level condition on $\hat\beta$. \begin{assumption}\label{assn: beta-est} (i) Let $\hat\beta$ be an M-estimator of $\beta$ so that \begin{align*} \hat{\beta}\equiv \arg\max_{\beta\in \mathcal{B}} \frac{1}{n} \sum_{i=1}^n q(Y_i,X_i,\beta). \end{align*} (ii) There is a point $\beta^*$ in the interior of the compact parameter space $\mathcal{B}\subset\reals^p$ so that \begin{align} \sqrt{n}(\hat{\beta}_n-\beta^*)=\frac{1}{\sqrt{n}}\sum_{i=1}^n\psi_\beta(Y_i,X_i,\beta^*)+o_p(1), \label{eq: alpha-linear} \end{align} where $\psi_\beta: \reals^{1+k+p}\rightarrow \reals^p$ is a given function with $E[\psi_\beta(Y,X, \beta^*)]=0$ and $E\|\psi_\beta(Y,X, \beta^*)\|^{2+\epsilon}<\infty$ for some $\epsilon>0$. 
Furthermore, $\beta^*=\beta^\circ$ under Assumption~\ref{assn: correct spec}. \end{assumption} Assumption~\ref{assn: beta-est} states that $\hat\beta$ is an $M$-estimator with an asymptotically linear representation, implying that $\hat\beta$ is asymptotically normally distributed. The stated conditions do not require the first stage model to be correctly specified for $p(X)$; $\beta^*$ simply stands for the probability limit of $\hat\beta$, i.e., the pseudo-true value of the parameter vector. We nevertheless assume that $\hat\beta$ is consistent for $\beta^\circ$ under correct specification (Assumption \ref{assn: correct spec}). The reason for allowing for misspecification is twofold. First, the first stage predictive model is often simply an approximation of $p(X)$, e.g., a linear regression. Second, as we will see, the estimation effect can depend on whether or not the model is correctly specified. \begin{assumption}\label{assn: empirical process} The empirical processes \[ (c,\beta)\mapsto\sqrt{n}(\hat \P_0-\P_0)1[G(X,\beta)\le c]\text{ and }(c,\beta)\mapsto\sqrt{n}(\hat \P_1-\P_1)1[G(X,\beta)\le c] \] are stochastically equicontinuous over $\reals\times \mathcal{B}$, where $\P_j$ denotes probability conditional on $Y=j$ and $\hat\P_j$ is the corresponding empirical measure in the $Y=j$ subsample. \end{assumption} The stochastic equicontinuity requirement in Assumption \ref{assn: empirical process} limits the complexity of the model $G(X,\beta)$ and plays an important role in handling the estimation effect. It holds, for example, if $G(X,\beta)=X'\beta$ or $G(X,\beta)=G(X'\beta)$ with $G$ bounded (see the definition of a type I class in Andrews 1994). Apart from a small degree of added generality, we state stochastic equicontinuity as a high level condition to make it more transparent what is required for our results. The final assumption states the differentiability of $TP$ and $FP$ with respect to the components of $\beta$. Let $\nabla_\beta$ denote the corresponding gradient operator and $B^*(r)$ the open ball with radius $r>0$ centered on $\beta^*$. \begin{assumption}\label{assn: gradient} For any given cutoff $c$, the gradient vectors $\nabla_\betaTP(c,\beta)$ and $\nabla_\betaFP(c,\beta)$ exist and are continuous over $B^*(r)$ for some $r>0$. \end{assumption} As we will shortly see, the first stage estimation of $\beta$ affects the asymptotic distribution of the ROC curve through the derivatives presented in Assumption~\ref{assn: gradient}. \subsection{Theoretical illustration of the estimation effect}\label{subsec: est eff simple theory} Let $c$ be a given value of the cutoff; we want to conduct inference about the corresponding point $(FP(c,\beta^*), TP(c,\beta^*))$ on the limiting ROC curve. 
To isolate the effect of the first stage estimator $\hat\beta$ on the asymptotic distribution of $\widehat{TP}(c,\hat\beta)$, we can write \begin{eqnarray} &&\sqrt{n}[\widehat{TP}(c,\hat\beta)-TP(c,\beta^*)]\nonumber\\ &&=\sqrt{n}[\widehat{TP}(c,\hat\beta)-TP(c,\hat\beta)]+\sqrt{n}[TP(c,\hat\beta)-TP(c,\beta^*)]\nonumber\\ &&=\sqrt{n}[\widehat{TP}(c,\beta^*)-TP(c,\beta^*)]+\sqrt{n}[TP(c,\hat\beta)-TP(c,\beta^*)]+o_p(1),\label{eq: est effect decomp 2nd eq} \end{eqnarray} where the second equality is due to the fact that the process $\sqrt{n}(\widehat{TP}-TP)$ is stochastically equicontinuous (Assumption~\ref{assn: empirical process}), implying \[ \sqrt{n}[\widehat{TP}(c,\hat\beta)-TP(c,\hat\beta)]-\sqrt{n}[\widehat{TP}(c,\beta^*)-TP(c,\beta^*)]=o_p(1), \] given that $\hat\beta\rar_p\beta^*$. As $\beta^*$ is fixed and $Var[G(X,\beta^*)]>0$, the first term in equation (\ref{eq: est effect decomp 2nd eq}) has the asymptotic distribution given by (\ref{eq: TP asy dist}) and the second term represents the effect of estimating $\beta^*$. Does this term have a non-negligible effect on the asymptotic distribution of $\widehat{TP}(c,\hat\beta)$, and if yes, how do we characterize it? To address these questions, we can use Assumption \ref{assn: gradient} to expand the second term in (\ref{eq: est effect decomp 2nd eq}) around $\beta^*$ to obtain \begin{eqnarray} &&\sqrt{n}[\widehat{TP}(c,\hat\beta)-TP(c,\beta^*)]\nonumber\\ &&=\sqrt{n}[\widehat{TP}(c,\beta^*)-TP(c,\beta^*)]+\sqrt{n}\nabla_\beta TP(c,\beta^*)(\hat\beta-\beta^*)+o_p(1).\label{est eff exp: eg} \end{eqnarray} Equation (\ref{est eff exp: eg}) shows that the estimation effect is negligible whenever $\nabla_\beta TP(c,\beta^*)=0$. However, as Proposition \ref{prop: p(X) ROC opt}(ii) shows, this condition does not generally hold even if $G(X,\beta)$ is correctly specified (i.e., $\beta^*=\beta^\circ$), because $TP(c,\beta^\circ)$ solves a \emph{constrained} (rather than unconstrained) optimization problem.\footnote{The first order conditions are $\nabla_\beta TP(c,\beta^\circ)=\lambda\nabla_\beta FP(c,\beta^\circ)$ for some scalar $\lambda$ and $FP(c,\beta)=F^\circ_c$. The Lagrange multiplier $\lambda$ is generally non-zero, at least when $TP(c,\beta^\circ)<1$.} Under misspecification Proposition \ref{prop: p(X) ROC opt}(ii) does not apply, but of course there is still no general reason for $\nabla_\beta TP(c,\beta^*)$ to vanish. Therefore, in either case the asymptotic distribution of $\widehat{TP}(c,\hat\beta)$ and $\widehat{FP}(c,\hat\beta)$ will generally differ from that stated under (\ref{eq: TP asy dist}) and (\ref{eq: FP asy dist}) because $\sqrt{n}(\hat\beta-\beta^*)=O_p(1)$.\footnote{Proposition~\ref{prop: p(X) ROC opt}(i) implies that under correct specification one can conduct inference about the linear combination $(1-c)\piTP-c(1-\pi)FP$ without the need to consider the pre-estimation effect. This is because the true value of $\beta$ maximizes this linear combination and hence the corresponding gradient driving the estimation effect vanishes. 
More generally, the estimation effect is negligible for a functional of the ROC curve if (i) the ROC curve based on $p(X)$ maximizes that functional and (ii) Assumption~\ref{assn: correct spec} holds.\label{fn: no est effect}} \subsection{Pointwise inference based on analytical results} To describe the asymptotic distribution of $\widehat{TP}(c,\hat\beta)$ in more detail, we can further expand the decomposition in (\ref{est eff exp: eg}) by substituting in the asymptotically linear (influence function) representation of the two terms. Using the definition of $\widehat{TP}(c,\beta^*)$ and Assumption~\ref{assn: beta-est}, it is straightforward to verify that \begin{align} \sqrt{n}[\widehat{TP}(c,\hat\beta)-TP(c,\beta^*)]&=\frac{1}{\sqrt{n}}\sum_{i=1}^n \Big\{\frac{Y_i}{\pi}\big[1(G(X,\beta^*)>c)-TP(c,\beta^*)\big]\notag\\ &~~+\nabla_\betaTP(c,\beta^*)\psi_\beta(Y_i,X_i,\beta^*)\Big\}+o_p(1)\notag\\ &\equiv \frac{1}{\sqrt{n}}\sum_{i=1}^n \psi_{TP}(Y_i,X_i,c,\beta^*)+o_p(1),\label{eq: inf-TP} \end{align} where the definition of $\psi_{TP}$ is enclosed by the braces on the previous line. Of course, $\widehat{FP}(c,\hat\beta)$ has a corresponding asymptotically linear representation with influence function \[ \psi_{FP}(Y_i,X_i,c,\beta^*)=\frac{1-Y_i}{1-\pi}\big[1(G(X_i,\beta^*)>c)-FP(c,\beta^*)\big]+\nabla_\betaFP(c,\beta^*)\psi_\beta(Y_i,X_i,\beta^*). \] Stacking the influence functions as $$\psi(Y_i,X_i,c,\beta^*)=[\psi_{TP}(Y_i,X_i,c,\beta^*),\psi_{FP}(Y_i,X_i,c,\beta^*)]'$$ and applying the multivariate CLT gives the asymptotic joint distribution of an individual point $(\widehat{TP}(c,\hat\beta),\widehat{FP}(c,\hat\beta))$ on the sample ROC curve. \begin{proposition}\label{prop: est effect gen} Suppose that Assumptions \ref{assn: iid} to \ref{assn: gradient} are satisfied. Then \begin{equation}\label{eq: TF joint dist} \sqrt{n} \begin{pmatrix} \widehat{TP}(c,\hat\beta)-TP(c,\beta^*)\\ \widehat{FP}(c,\hat\beta)-FP(c,\beta^*)\\ \end{pmatrix} =\frac{1}{\sqrt{n}}\sum_{i=1}^n \psi(Y_i,X_i,c,\beta^*)+o_p(1) \rightarrow_d N[0,E(\psi\psi')] \end{equation} for cutoffs $c$ for which $E[\psi^2_{TP}(Y_i,X_i,c,\beta^*)]>0$ and $E[\psi^2_{FP}(Y_i,X_i,c,\beta^*)]>0$. \end{proposition} \paragraph{Remarks} \begin{enumerate} \item Using Proposition~\ref{prop: est effect gen}, it is easy to obtain the asymptotic distribution of any linear combination $a\widehat{TP}(c,\hat\beta)+b\widehat{FP}(c,\hat\beta)$. \item Proposition \ref{prop: est effect gen} is a ``pointwise'' result in the sense that the cutoff $c$ is assumed to be fixed. It is straightforward to generalize the setup so that one can make joint inference about points that are associated with a finite number of different cutoffs. One can simply stack the values of the influence function $\psi$ evaluated at these cutoffs and a result analogous to (\ref{eq: TF joint dist}) will continue to hold. \item The variance condition $E[\psi^2_{TP}]>0$ will generally hold for interior points $TP(c,\beta^*)\in (0,1)$ but fail for $TP(c,\beta^*)\in \{0,1\}$. The same is true for $FP$. \end{enumerate} We supplement Proposition \ref{prop: est effect gen} by some results that reveal the structure of $\nabla_\beta TP$ and $\nabla_\beta FP$ and facilitate their estimation. Let $\partial_{j}$ denote the partial derivative operator with respect to the $j$th component of $\beta$. \begin{assumption}\label{assn: gradient2} (i) $G(X,\beta)$ is twice continuously differentiable (a.s.) 
w.r.t.\ $\beta$ on $B^*(r)$ for some $r>0$ with $\sup_{\beta\in B^*(r)}|\partial_{jj}G(X,\beta)|\le M$ (a.s.) for some $M>0$. (ii) The conditional density of $G(X,\beta^*)$ given $Y=0,1$ exists. The conditional density of $G(X,\beta^*)$ given $\partial_jG(X,\beta^*)$ and $Y=y$ also exists and is bounded uniformly by some $M>0$ for almost all values of $\partial_jG(X,\beta^*)$, $y=0,1$, and all $j$. (iii) $E\big[|\partial_j G(X,\beta^*)|\,\big|\,Y=1\big]<\infty$ for all $j$. \end{assumption} \begin{proposition}\label{prop: gradient TP} Suppose that Assumptions~\ref{assn: gradient} and \ref{assn: gradient2} hold. Then: \begin{equation}\label{eq: gradient TP} \nabla_\beta TP(c,\beta^*)=E\Big[\nabla_\beta G(X,\beta^*)\Big|\,G(X,\beta^*)=c, Y=1 \Big]f^*_1(c), \end{equation} where $f^*_1(c)$ is the conditional density of $G(X,\beta^*)$ given $Y=1$. If, in addition, Assumption~\ref{assn: correct spec} is satisfied ($\beta^*=\beta^\circ$), then the expectation in equation (\ref{eq: gradient TP}) does not need to be conditioned on $Y=1$. The formula for $\nabla_\beta FP(c,\beta^*)$ is analogous; it conditions on $Y=0$ throughout. \end{proposition} Finally, we specialize Propositions~\ref{prop: est effect gen} and \ref{prop: gradient TP} by imposing a logit first stage. \begin{assumption}\label{assn: logit} Suppose that the first stage estimation consists of a logit regression of $Y$ on $X$ and a constant so that $G(X,\hat\beta)=\Lambda(\tilde X'\hat\beta)$, where $\Lambda(\cdot)$ is the logistic c.d.f., $\tilde X=(1, X')'$ and $\hat\beta$ is the maximum likelihood estimator. \end{assumption} \begin{proposition}\label{prop: logit} Suppose that Assumption~\ref{assn: logit} is satisfied. Then: \begin{itemize} \item [(a)] $\psi_\beta(Y_i,X_i,\beta)=A_\beta^{-1}\tilde X_i[Y_i-\Lambda(\tilde X_i'\beta)]$, where $A_\beta=E\{\Lambda(\tilde X'\beta)[1-\Lambda(\tilde X'\beta)]\tilde X\tilde X'\}$. \item [(b)] The components of $\nabla_\beta TP(c,\beta^*)$ are given by: \begin{equation}\label{eq: TP grad sp} c(1-c)E\big[X_j\big|\,\Lambda(\tilde X'\beta^*)=c, Y=1 \big]f_1^*(c),\; j=0,1,\ldots,k, \end{equation} where $X_0\equiv 1$, $X_j$, $j=1,\ldots,k$ is the $j$th component of $X$, and $f_1^*(c)$ is the conditional density of $\Lambda(\tilde X'\beta^*)$ given $Y=1$. \end{itemize} \end{proposition} \paragraph{Remarks:} \begin{enumerate} \item The proofs of Propositions \ref{prop: gradient TP} and \ref{prop: logit} are presented in Appendix B. \item The existence of $f_1^*(c)$ requires that $X$ has a continuous component and the corresponding coefficient in $\beta^*$ is nonzero. This rules out $X$ and $Y$ being independent. \item The expression for $\psi_\beta$ follows from formulas (12.16), (15.18) and (15.19) in Wooldridge (2002) when specialized to the logit case. \item One can estimate the unknown quantities in (\ref{eq: TP grad sp}) nonparametrically to obtain a semiparametric estimator for $\nabla_\beta TP(c,\beta^*)$. More precisely, expression (\ref{eq: TP grad sp}) may actually be estimated in a single step as \begin{equation}\label{1 step grad est} c(1-c)\frac{1}{n_1h}\sum_{i: Y_i=1}X_{ji}K\left(\frac{\Lambda(\tilde X_i'\hat\beta)-c}{h}\right), \end{equation} where $K(\cdot)$ is a kernel function and $h$ is a bandwidth that may be chosen according to Silverman's rule of thumb.
\item Alternatively, if correct specification is assumed in the first stage ($\beta^*=\beta^\circ$), then one can estimate $E[X_j|\,\Lambda=c, Y=1]=E[X_j|\,\Lambda=c]$ by a kernel regression on the \emph{full sample} and $f^*_1(c)$ by a kernel estimator on the $Y=1$ subsample. \end{enumerate} \subsection{Pointwise inference based on the weighted bootstrap}\label{subsec: weighted} Here we provide an alternative method for making pointwise inference about the ROC curve by utilizing the weighted bootstrap for M-estimators proposed by Ma and Kosorok (2005). The main advantage of this approach is that it sidesteps the estimation of the gradient vectors $\nabla_\beta TP(c,\beta^*)$ and $\nabla_\beta FP(c,\beta^*)$. Furthermore, the method is similar to the simulation-based procedure that we propose for functional inference in Section \ref{sec: uniform results}. The weighted bootstrap employs a sequence of (pseudo) random variables as multipliers to simulate the sampling variation of an estimator. \begin{assumption}\label{assn: weighted bootstrap W} Let $\{W_i\}_{i=1}^n$ be a sequence of i.i.d.\ (pseudo) random variables, independent of the sample path $\{(Y_i,X_i)\}_{i=1}^n$, with $E(W_i)=1$ and $Var(W_i)=1$. \end{assumption} We first define the weighted bootstrap version of the first stage estimator of $\beta$: \begin{align*} \hat{\beta}^w= \arg\max_{\beta\in \mathcal{B}} \frac{1}{n} \sum_{i=1}^n W_i \cdot q(Y_i,X_i,\beta). \end{align*} Given $\hat{\beta}^w$, the weighted bootstrap estimators of $TP(c,\beta)$ and $FP(c,\beta)$ are defined as \begin{eqnarray*} \widehat{TP}^w(c,\beta)&=&\frac{1}{\sum_{i=1}^n W_i\cdot Y_i} \sum_{i=1}^n W_i \cdot 1[G(X_i,\beta)>c, Y_i=1]\\ \widehat{FP}^w(c,\beta)&=&\frac{1}{\sum_{i=1}^n W_i \cdot(1-Y_i)} \sum_{i=1}^n W_i \cdot 1[G(X_i,\beta)>c, Y_i=0]. \end{eqnarray*} \begin{assumption}\label{assn: beta-est-weighted boot} Assume that \begin{align} \sqrt{n}(\hat{\beta}^w-\beta^*)=\frac{1}{\sqrt{n}}\sum_{i=1}^n W_i \cdot \psi_\beta(Y_i,X_i,\beta^*)+o_p(1), \end{align} where $\beta^*$ and $\psi_\beta(Y_i,X_i,\beta^*)$ are given in Assumption \ref{assn: beta-est}. \end{assumption} Assumption~\ref{assn: beta-est-weighted boot} ensures that the weighted bootstrap is valid for the first stage estimator, i.e., conditional on the data, $\sqrt{n}(\hat{\beta}^w-\hat{\beta})$ has the same limiting distribution as $\sqrt{n}(\hat\beta-\beta^*)$ unconditionally. Furthermore, by Theorem 2 of Ma and Kosorok (2005), the validity of the weighted bootstrap for $\widehat{TP}^w(c,\hat\beta^w)$ follows from showing that (i) $\widehat{TP}$, $\widehat{FP}$, $\widehat{TP}^w$ and $\widehat{FP}^w$ can be represented as M-estimators and (ii) that these estimators are $\sqrt{n}$-consistent and asymptotically linear. Item (i) is verified by noting that \begin{eqnarray*} \widehat{TP}(c,\hat{\beta})&=&\arg\min_{t\in \reals }\frac{1}{n}\sum_{i=1}^n Y_i\cdot \big(1[G(X_i,\hat{\beta})>c]-t \big)^2\\ \widehat{TP}^w(c,\hat{\beta}^w)&=&\arg\min_{t\in \reals}\frac{1}{n}\sum_{i=1}^n W_i\cdot Y_i\cdot \big(1[G(X_i,\hat{\beta}^w)>c]-t \big)^2, \end{eqnarray*} and similarly for $\widehat{FP}$ and $\widehat{FP}^w$.
As for item (ii), Proposition \ref{prop: est effect gen} establishes the asymptotically linear representation of $(\widehat{TP}(c,\hat\beta), \widehat{FP}(c,\hat\beta))$; essentially the same argument also yields \begin{equation*} \sqrt{n} \begin{pmatrix} \widehat{TP}^w(c,\hat\beta^w)-TP(c,\beta^*)\\ \widehat{FP}^w(c,\hat\beta^w)-FP(c,\beta^*)\\ \end{pmatrix} =\frac{1}{\sqrt{n}}\sum_{i=1}^n W_i\cdot \psi(Y_i,X_i,c,\beta^*)+o_p(1)\label{eq: weight B 1}. \end{equation*} Thus, we obtain the following result. \begin{proposition}\label{prop: est effect gen-bootstrap} Suppose that Assumptions \ref{assn: iid}-\ref{assn: gradient}, \ref{assn: weighted bootstrap W} and \ref{assn: beta-est-weighted boot} are satisfied. Then, conditional on the sample path of the data, \begin{equation*} \sqrt{n} \begin{pmatrix} \widehat{TP}^w(c,\hat\beta^w)-\widehat{TP}(c,\hat{\beta})\\ \widehat{FP}^w(c,\hat\beta^w)-\widehat{FP}(c,\hat{\beta})\\ \end{pmatrix} =\frac{1}{\sqrt{n}}\sum_{i=1}^n (W_i-1)\cdot \psi(Y_i,X_i,c,\beta^*) \rightarrow_d N[0,E(\psi\psi')] \end{equation*} with probability approaching one for cutoffs $c$ such that $E[\psi^2_{TP}]>0$ and $E[\psi^2_{FP}]>0$. \end{proposition} \paragraph{Remarks} \begin{enumerate} \item In applications we suggest letting the weights $W_i$ take the values 0 and 2 with equal probability. The main reason is that with non-negative weights the weighted objective function remains concave if the $q(Y_i,X_i,\beta)$ is concave in $\beta$. This makes it computationally easier to obtain $\hat\beta^w$. \item The weighted bootstrap estimator of the asymptotic variance-covariance matrix $\Psi(c)\equiv E(\psi\psi')$ can be constructed as follows. With a minor abuse of notation, let $\widehat R^w(c)=(\widehat{TP}^w(c,\hat\beta^w), \widehat{FP}^w(c,\hat\beta^w))'$ denote the ROC estimate from the $w$th bootstrap cycle, $w=1,\ldots, \mathcal{W}$. Then one can estimate $\Psi(c)$ by \begin{align*} &\widehat\Psi_\mathcal{W}(c)=\frac{n}{\mathcal{W}}\sum_{w=1}^\mathcal{W} \big(\widehat{R}^w(c)-\overline{\widehat{R}}^w(c)\big)\big(\widehat{R}^w(c)-\overline{\widehat{R}}^w(c)\big)',\text{ where}\\ &\overline{\widehat{R}}^w(c)=\frac{1}{\mathcal{W}}\sum_{w=1}^\mathcal{W}\widehat{R}^w(c). \end{align*} We have that conditional on sample path with probability approaching one, \begin{align*} &\widehat{\Psi}_{\mathcal{W}}(c)\stackrel{p}{\rightarrow}_w \frac{1}{n}\sum_{i=1}^n \psi(Y_i,X_i,c,\beta^*)\psi(Y_i,X_i,c,\beta^*)'+o_p(1), \end{align*} where $\stackrel{p}{\rightarrow}_w$ denotes probability limit under the law of the $W_i$'s. It follows that \begin{align*} \lim_{\mathcal{W}\rightarrow \infty} \widehat{\Psi}_\mathcal{W}(c)\stackrel{p}{\rightarrow} {\Psi}(c). \end{align*} \end{enumerate} \section{In-sample inference: uniform asymptotics}\label{sec: uniform results} To derive uniform results, we first express the ROC curve explicitly as a function over the interval [0,1]. Let the inverse of the \emph{decreasing} function $c\mapsto FP(c,\beta)$ be defined as \[ FP^{-1}_{\beta}(t)=\inf\{c: FP(c,\beta)\le t\},\; t\in [0,1]. \] The more compact notation on the l.h.s.\ emphasizes that the inverse is taken with respect to the cutoff $c$ for a fixed value of $\beta$. 
Thus, $FP^{-1}_{\beta}(t)$ can be interpreted as the ``first'' (smallest) cutoff value at which the false positive rate is equal to $t$ or falls below $t$.\footnote{Of course, if $FP(c,\beta)$ is strictly decreasing and continuous in $c$, then $FP^{-1}_{\beta}(t)$ is the unique solution to the equation $FP(c,\beta)=t$.} Because $1-FP(c,\beta)$ is the c.d.f.\ of the conditional distribution of $G(X,\beta)$ given $Y=0$, an equivalent interpretation of $FP^{-1}_{\beta}(t)$ is that it is the $(1-t)$-quantile of this distribution. We can now represent the ROC curve as a function that returns the true positive rate associated with a given false positive rate $t$: \begin{equation}\label{def: ROC pop explicit} R(t,\beta)=TP\big(FP^{-1}_{\beta}(t),\beta\big),\quad t\in[0,1]. \end{equation} For a given parameter value $\beta$, the sample ROC curve is defined by replacing $TP(\cdot,\beta)$ and $FP^{-1}_\beta(\cdot)$ by sample analogs: $\widehat R(t,\beta)=\widehat{TP}\big(\widehat{FP}^{-1}_{\beta}(t),\beta\big),\;t\in[0,1]$. \subsection{Additional technical assumptions for uniform inference} Our goal is to characterize the statistical behavior of the random function $t\mapsto \hat R(t,\hat\beta)$ over the interval $[0,1]$. This requires some additional assumptions. \begin{assumption}\label{assn: cond densities} (i) The conditional distribution of $G(X,\beta^*)$ given $Y=0$ has compact support $[a_0,b_0]$ and probability density function $f_0^*(c)$ that is continuous (and hence bounded) over $[a_0,b_0]$ and satisfies $\inf\{f_0^*(c): c\in[a_0,b_0]\}\ge\delta$ for some $\delta>0$. (ii) The conditional distribution of $G(X,\beta^*)$ given $Y=1$ has compact support $[a_1,b_1]$ and a probability density function $f_1^*(c)$. (iii) There exists a subinterval $[c_{0,L},c_{0,U}]\subseteq [a_0,b_0]$ such that $f^*_1(c)/f^*_0(c)$ is continuous (and hence bounded) over $[c_{0,L},c_{0,U}]$ and satisfies $\inf\{f_1^*(c)/f_0^*(c): c\in[c_{0,L},c_{0,U}]\}\ge\delta$ for some $\delta>0$. \end{assumption} Assumption~\ref{assn: cond densities} merits careful discussion. An immediate practical implication of part (i) is that the limiting model $G(X,\beta^*)$ must depend on at least one continuous predictor in a nontrivial way. For instance, if the model is based on a linear index, this rules out $X$ being completely independent of $Y$; see Remark~1 after Proposition~\ref{prop: logit}. Part (iii) implies that $supp(f_0^*)$ and $supp(f_1^*)$ overlap, ensuring that the classification problem is nontrivial. Nevertheless, the overlap does not need to be complete; we allow for applications in which extreme values of the index are associated exclusively with one of the two outcomes. From a technical standpoint, the main purpose of Assumption~\ref{assn: cond densities} is to facilitate uniform inference by controlling the behavior of the likelihood ratio $f_1^*/f_0^*$. In particular, $f_1^*/f_0^*$ is required to be bounded and bounded away from zero on an interval $[c_{0,L},c_{0,U}]$. Our uniform influence function representation result for $\hat R(t,\hat\beta)$ holds only for quantiles $t$ satisfying $FP^{-1}_{\beta^*}(t)\in [c_{0,L},c_{0,U}]$ or, equivalently, for $t\in [FP(c_{0,U},\beta^*), FP(c_{0,L},\beta^*)]$. While this representation depends on $f_1^*$ and $f_0^*$ only through $f_1^*/f_0^*$, the derivation of the result relies on the additional condition that $f_0^*$ is bounded away from zero (Assumption~\ref{assn: cond densities}(i)).
This may seem overly restrictive at first glance---for example, if $G(X,\beta^*)=X'\beta^*$, then predictors with unbounded support are ruled out. Furthermore, it is easy to see that even if all components of $X$ have densities bounded away from zero, their linear combinations will generally not share this property.\footnote{For example, consider the sum of two independent uniform [-0.5,0.5] random variables. The resulting density is $(1-|x|)1_{[-1,1]}(x)$, which tends to zero as $x$ approaches $-1$ or $1$. } However, one can always find a monotone increasing transformation $\Phi(\cdot)$ such that the density of $\Phi[G(X,\beta^*)]$ conditional on $Y=0$ is bounded away from zero, e.g., one can use the probability integral transform to arrive at a uniform[0,1] density. At the same time, such a transformation leaves the ROC curve as well as the range of the likelihood ratio $f_1^*/f_0^*$ unchanged. Thus, the last part of Assumption \ref{assn: cond densities}(i) is simply a theoretical normalization that does not need to be imposed on the data in practice (see Figure~1 for an illustration). \begin{figure}[!t]\label{fig:densities} \begin{center} \includegraphics[scale=0.95]{Density_plots.pdf} \caption{ {\footnotesize On the left panel, $f_0^*$, the blue curve, is the $\beta(2,3)$ pdf so that $[a_0,b_0]=[0,1]$ and $f^*_0$ is not bounded away from zero. $f_1^*$, the red curve, is $\beta(3,4)+0.1$ so that $[a_1,b_1]=[0.1,1.1]$. The likelihood ratio is zero below 0.1 and becomes unbounded just below 1. On the right panel, we apply the transformation $\Phi=\,$cdf of $\beta(2,3)$. $f^*_0$ is now the uniform[0,1] density so that $f_1^*$ transforms into the likelihood ratio. Assumption~\ref{assn: cond densities}(iii) is satisfied over, say, $[c_{0,L},c_{0,U}]=[0.15,0.9]$ so that uniform inference about $R(t,\beta^*)$ is possible over $[FP(c_{0,U},\beta^*), FP(c_{0,L},\beta^*)]\approx [0.004,0.89]$. }} \end{center} \end{figure} Of course, Assumption~\ref{assn: cond densities} allows for scenarios in which $[FP(c_{0,U},\beta^*), FP(c_{0,L},\beta^*)]=[0,1]$, i.e., uniform inference is possible along the entire ROC curve. This is the case, for example, if the ``propensity score'' function $P(Y=1|X=x)$ takes values from an interval $[\delta, 1-\delta]$ for some $0<\delta<1/2$, which implies $supp(f_0^*)=supp(f_1^*)$ and that (iii) holds with $[c_{0,L},c_{0,U}]=[a_0,b_0]$.\footnote{To see this, let $f_x(x)$ denote the density function of $X$. Note that \begin{align*} &f_0(c)=\frac {\int_{G(x)=c} (1-p(x)) f_x(x) dx}{\int_{G(x)=c} (1-p(x)) dx } \quad\text{ and }\quad f_1(c)=\frac {\int_{G(x)=c} p(x) f_x(x) dx}{\int_{G(x)=c} p(x) dx }. \end{align*} It follows that \begin{align*} \frac{f_0(c)}{f_1(c)}=\frac {\int_{G(x)=c} (1-p(x)) f_x(x) dx}{\int_{G(x)=c} p(x) f_x(x) dx} \frac{\int_{G(x)=c} p(x) dx } {\int_{G(x)=c} (1-p(x)) dx }, \end{align*} which is bounded below by $ \delta^2/(1-\delta)^2$ and bounded above by $ (1-\delta)^2/\delta^2$. } More generally, Assumption~\ref{assn: cond densities}(iii) allows $f^*_1(c)/f^*_0(c)$ to reach zero or explode for cutoffs $c$ outside the range $[c_{0,L}, c_{0,U}]$. For example, the likelihood ratio vanishes as $c$ approaches $a_0$ from above whenever $a_0<a_1$. In this case the lowest index values imply $Y=0$ and the ROC curve reaches the top of the unit square for some $FP$ rate below unity. Similarly, $f^*_1(c)/f^*_0(c)$ may become unbounded as $c$ approaches $b_0$ from below.
This can easily happen when $b_0<b_1$, i.e., the largest index values are associated exclusively with the $Y=1$ outcome. In this case the ROC curve has a positive vertical intercept at $FP=0$. Again, see Figure~1 for an example. The next assumption is a strengthening of Assumption~\ref{assn: gradient}. These stricter conditions on the gradient vectors $\nabla_\beta TP(c,\beta)$ and $\nabla_\beta FP(c,\beta)$ also play a key role in establishing a uniform influence function representation for the sample ROC curve. Recall that $B^*(r)$ denotes the open ball with radius $r>0$ centered on $\beta^*$. \begin{assumption}\label{assn: gradient unif} Let $\mathcal{C}=[a_1,b_1]$. $\nabla_\beta TP(c,\beta)$ exists and is continuous over $\mathcal{C}\times B^*(r)$ for some $r>0$ with $\sup_{(c,\beta)\in\mathcal{C}\times B^*(r)}\|\nabla_\beta TP(c,\beta)\|\le M$ for some $M>0$. The same applies to $\nabla_\beta FP(c,\beta)$ with $\mathcal{C}=[a_0,b_0]$. \end{assumption} \subsection{Functional limit results} Letting $c^*_t={FP}_{\beta^*}^{-1}(t)\in[a_0,b_0]$ and $\hat c_t= \widehat{FP}_{\hat\beta}^{-1}(t)\in[a_0,b_0]$, we start from a decomposition of $\sqrt{n}[\widehat{TP}(\hat c_t,\hat\beta)-TP(c^*_t,\beta^*)]$ similar to (\ref{eq: est effect decomp 2nd eq}). There are two added layers of difficulty. First, functional results require uniform approximations to these terms as $t$ varies over the $[0,1]$ interval. Second, instead of being fixed, the cutoff is now estimated for any given value of $t$. The sampling variation in $\hat c_t$ contributes another non-trivial term to the asymptotic distribution. We express the centered and scaled empirical ROC curve as the sum of three terms: \begin{multline} \sqrt{n}[\widehat{R}(t,\hat\beta)-R(t,\beta^*)]=\sqrt{n}[\widehat{TP}(\hat c_t,\hat\beta)-TP(c^*_t,\beta^*)]\\ =\sqrt{n}[\widehat{TP}(\hat c_t,\hat\beta)-TP(\hat c_t,\hat\beta)]+\sqrt{n}[TP(\hat c_t,\hat\beta)-TP(\hat c_t,\beta^*)]\\ +\sqrt{n}[TP(\hat c_t,\beta^*)-TP(c^*_t,\beta^*)]\label{eq: func decomp} \end{multline} The first term in equation (\ref{eq: func decomp}) can be expanded similarly to the second equality in (\ref{eq: est effect decomp 2nd eq}): \begin{equation}\label{eq: R1} \sqrt{n}[\widehat{TP}(\hat c_t,\hat\beta)-TP(\hat c_t,\hat\beta)]=\sqrt{n}[\widehat{TP}(c^*_t,\beta^*)-TP(c^*_t,\beta^*)]+R_{1n}(t), \end{equation} where $\sup_{t\in[0,1]}|R_{1n}(t)|=o_p(1)$. The uniform convergence of the remainder term is a consequence of the stochastic equicontinuity of the process $(c,\beta)\mapsto \sqrt{n}(\widehat{TP}(c,\beta)-TP(c,\beta))$, stated directly in Assumption~\ref{assn: empirical process}, coupled with the fact that $\hat\beta\rar_p\beta^*$ (Assumption~\ref{assn: beta-est}) and $\sup_{t\in[0,1]}|\hat c_t-c^*_t|\rar_p 0$ (Lemma~\ref{lm: hatct to cstart} in Appendix~A). This last result makes use of Assumption~\ref{assn: cond densities}(i), which requires that the density $f_0^*$ be bounded away from zero on its compact support. The second term in equation (\ref{eq: func decomp}) is due to the estimation of $\beta$ and is again handled by a standard mean value expansion: \begin{equation}\label{eq: R2} \sqrt{n}[TP(\hat c_t,\hat\beta)-TP(\hat c_t,\beta^*)]=\nabla_\beta TP(c^*_t,\beta^*)\sqrt{n}(\hat\beta-\beta^*)+R_{2n}(t), \end{equation} where $\sup_{t\in[0,1]}|R_{2n}(t)|=o_p(1)$. The uniformity of the approximation is ensured by Assumption~\ref{assn: gradient unif}, which implies that $\nabla_\beta TP(c,\beta^*)$ is uniformly continuous.
Finally, the third term in (\ref{eq: func decomp}) arises because of the need to estimate the cutoff value associated with a given false positive rate $t$; it therefore does not arise in the fixed-cutoff setting. Starting with a mean value expansion of $TP(\hat c_t,\beta^*)$ around $c^*_t$, one can write \begin{eqnarray} \sqrt{n}[TP(\hat c_t,\beta^*)- TP(c^*_t,\beta^*)]&=&f_1^*(c^*_t)\sqrt{n}(\hat c_t-c^*_t)+R_{3n}(t)\nonumber\\ &=&f_1^*(c^*_t)\sqrt{n}[\widehat{FP}_{\hat\beta}^{-1}(t)- FP^{-1}_{\beta^*}(t)]+R_{3n}(t).\label{eq: R3} \end{eqnarray} The remainder term $R_{3n}(t)$ converges in probability to zero uniformly over the interval \[ \big\{t: c^*_t\in[c_{0,L}, c_{0,U}]\big\}=\big[FP(c_{0,U}, \beta^*), FP(c_{0,L},\beta^*)\big], \] where $c_{0,L}$ and $c_{0,U}$ are specified in Assumption~\ref{assn: cond densities}(iii). The asymptotic distribution of the process $t\mapsto \sqrt{n}[\widehat{FP}_{\hat\beta}^{-1}(t)- FP^{-1}_{\beta^*}(t)]$ can be analyzed in two steps: First, we establish an asymptotically linear representation for the ``base process'' $c\mapsto \sqrt{n}[\widehat{FP}(c,\hat\beta)-FP(c,\beta^*)]$ that holds uniformly in $c$ (and implies a mean zero Gaussian limit process). Second, we apply the functional delta method to the inverse map $\phi(F)=F^{-1}$ to characterize the contribution of the term (\ref{eq: R3}) to the asymptotic distribution of the empirical ROC curve. Lemma~\ref{lm: FP delta meth} summarizes and completes the development of the approximations presented in equations (\ref{eq: R1}), (\ref{eq: R2}) and (\ref{eq: R3}). \begin{lemma}\label{lm: FP delta meth} Suppose that Assumptions \ref{assn: iid}, \ref{assn: beta-est}, \ref{assn: empirical process}, \ref{assn: cond densities} and \ref{assn: gradient unif} are satisfied. Then: (i) $\sup_{t\in [0,1]}R_{1n}(t)=o_p(1)$; (ii) $\sup_{t\in [0,1]}R_{2n}(t)=o_p(1)$; (iii) $\sup_{t\in T} R_{3n}(t)=o_p(1)$, where $T=\big[FP(c_{0,U}, \beta^*), FP(c_{0,L},\beta^*)\big]$; (iv) $\widehat{FP}(c,\hat\beta)$ admits an asymptotically linear representation that holds uniformly in $c$: \begin{equation}\label{eq: R4} \sqrt{n}\big(\widehat{FP}(c,\hat\beta)-FP(c,\beta^*)\big)=\frac{1}{\sqrt{n}}\sum_{i=1}^n \psi_{FP}(Y_i,X_i,c,\beta^*)+R_{4n}(c), \end{equation} where $\sup_{c\in[a_0,b_0]}R_{4n}(c)=o_p(1)$; (v) and, by the functional delta method, \begin{equation}\label{eq: R5} \sqrt{n}[\widehat{FP}_{\hat\beta}^{-1}(t)- FP^{-1}_{\beta^*}(t)]= -\frac{1}{f_0^*(c^*_t)}\sqrt{n}[\widehat{FP}(c^*_t, \hat\beta)- FP(c^*_t,\beta^*)]+R_{5n}(t), \end{equation} where $\sup_{t\in(0,1)}|R_{5n}(t)|=o_p(1)$. \end{lemma} \paragraph{Remarks} \begin{enumerate} \item The proof of Lemma~\ref{lm: FP delta meth} is provided in Appendix~B; it simply adds some technical details to the arguments outlined in the main text. \item The fact that $f^*_0(c)$ is bounded away from zero (Assumption \ref{assn: cond densities}(i)) plays a critical role in ensuring that the remainder term $R_{5n}(t)$ associated with the delta method converges to zero uniformly over the entire $[0,1]$ interval. \end{enumerate} Combining equations (\ref{eq: func decomp}) through (\ref{eq: R5}) with the influence function representations of $\sqrt{n}[\widehat{TP}(c,\beta^*)-TP(c,\beta^*)]$ and $\sqrt{n}(\hat\beta-\beta^*)$ yields the following proposition, which is the central result of the paper.
\begin{proposition} \label{prop: ROC curve asymptotics} Suppose that Assumptions \ref{assn: iid}, \ref{assn: beta-est}, \ref{assn: empirical process}, \ref{assn: cond densities} and \ref{assn: gradient unif} are satisfied. Define \begin{align} &\psi_{R}(y,x,t,\beta^*)=\psi_{TP}(y,x,c^*_t,\beta^*)- \frac{f_{1}^*(c^*_t)}{f_{0}^*(c^*_t)}\psi_{FP}(y,x,c^*_t,\beta^*),\nonumber \end{align} where $c^*_t=FP^{-1}_{\beta^*}(t)$. Then: (i) The empirical ROC curve admits an asymptotically linear representation that holds uniformly over $T=\big[FP(c_{0,U}, \beta^*), FP(c_{0,L},\beta^*)\big]$: \begin{align} \sup_{t\in T}\Big|\sqrt{n}&\big(\widehat{R}(t,\hat\beta)-R(t,\beta^*)\big)-\frac{1}{\sqrt{n}}\sum_{i=1}^n \psi_{R}(Y_i,X_i,t,\beta^*)\Big|=o_p(1), \label{eq: ROC-hat-linear} \end{align} where $c_{0,L}$ and $c_{0,U}$ are chosen in accordance with Assumption~\ref{assn: cond densities}(iii), i.e., $f_1^*/f_0^*$ is continuous and bounded away from zero on $[c_{0,L},c_{0,U}]$. (ii) The process $t\mapsto \frac{1}{\sqrt{n}}\sum_{i=1}^n \psi_{R}(Y_i,X_i,t,\beta^*)$ is stochastically equicontinuous over $T$. (iii) Therefore, \begin{align*} \sqrt{n}(\widehat{R}_n(t,\hat\beta)-{R}(t,\beta^*))\Rightarrow \Psi_{h_{R}}(t)\text{ in the space $L^\infty(T)$}, \end{align*} where ``$\Rightarrow$'' denotes weak convergence, $L^\infty(T)$ is the space of bounded functions over $T$, and $\Psi_{h_R}(\cdot)$ is a zero mean Gaussian process defined on $T$ with covariance kernel $h_{R}(t_1,t_2)=E[\psi_{R}(Y,X,t_1,\beta^*)\psi_{R}(Y,X,t_2,\beta^*)]$. \end{proposition} \paragraph{Remarks} \begin{enumerate} \item The precise notion of weak convergence employed in part (iii) is given by Definition 1.3.3 of van der Vaart and Wellner (1996). \item Given the arguments leading up to Proposition~\ref{prop: ROC curve asymptotics}, the proof of part (i) is practically complete (technically, it still requires showing that the influence function representation of $\sqrt{n}[\widehat{TP}(c,\beta^*)-TP(c,\beta^*)]$ holds uniformly in $c$, but this is essentially covered by Lemma~\ref{lm: FP delta meth}(iv)). The proof of part (ii) relies on Assumptions~\ref{assn: empirical process}, \ref{assn: cond densities} and \ref{assn: gradient unif}. Part (iii) follows immediately from parts (i) and (ii). Details are presented in Appendix~B. \end{enumerate} \subsection{Simulating the asymptotic distribution of the ROC curve} In order to employ Proposition~\ref{prop: ROC curve asymptotics} for statistical inference, we need a method to approximate $\Psi_{h_{R}}(t)$, the distributional limit of the process $\sqrt{n}(\widehat{R}_n(t,\hat\beta)-{R}(t,\beta^*))$. To this end, we offer two methods: the weighted bootstrap as in Ma and Kosorok (2005) and the multiplier bootstrap as in Hsu (2016). We first present the discussion of the weighted bootstrap. Define $\widehat{TP}^w(c,\hat{\beta}^w)$ and $\widehat{FP}^w(c,\hat{\beta}^w)$ precisely as in Section \ref{subsec: weighted} and let $\hat c^w_t= \big(\widehat{FP}^w_{\hat{\beta}^w}\big)^{-1}(t)$. We can then construct the weighted ROC curve and its estimated limit process as $\widehat{R}^w_n(t,\hat\beta^w)=\widehat{TP}^w(\hat{c}^w_t,\hat{\beta}^w)$ and $\widehat{\Psi}^w_{R,n}(t)=\sqrt{n}(\widehat{R}^w_n(t,\hat\beta^w)- \widehat{R}_n(t,\hat\beta))$. \begin{proposition}\label{prop: weighted bootstrap} Suppose that Assumptions \ref{assn: iid}-\ref{assn: gradient} and \ref{assn: weighted bootstrap W}-\ref{assn: cond densities} are satisfied.
Then, conditional on the sample path of the data, $\widehat{\Psi}^w_{R,n}(\cdot)\Rightarrow \Psi_{h_{R,2}}(\cdot)$ in the space $L^\infty(T)$ with probability approaching one. \end{proposition} Under the conditions of Proposition \ref{prop: ROC curve asymptotics}, one can apply the arguments in Theorem 2 of Ma and Kosorok (2005) to show that $\widehat{\Psi}^w_{R,n}(t)$ also approximates the distribution of $\Psi_{h_{R,2}}(t)$ in the sense of Proposition \ref{prop: multiplier bootstrap}. That is, conditional on the sample path of the data, $\widehat{\Psi}^w_{R,n}(t)\Rightarrow \Psi_{h_{R,2}}(t)$ with probability approaching one. We now turn to the discussion of the multiplier bootstrap method that is based on the conditional multiplier central limit theorem (see, e.g., van der Vaart and Wellner 1996, Section~2.9). The method requires consistent estimation of the components of the influence function $\psi_R$, uniformly in $t$. However, this estimation needs to be performed only once, over the original data set, given that the method does not rely on successive resampling and reestimation. Let $\widehat{\psi}_\beta(y,x, \hat{\beta})$ denote the estimated influence function of $\hat\beta$, where we replace any unknown parameters or functions within $\psi_\beta$ with consistent estimators (note that this function does not depend on $t$). We make the general assumption that the asymptotic variance-covariance matrix of $\hat\beta$ is consistently estimable using $\widehat{\psi}_\beta$ and the sample analog principle: \begin{assumption}\label{assn: consistent V} Let $V=E[{\psi}_\beta(Y,X, \beta^*){\psi}_\beta(Y,X, \beta^*)']$. Then: \begin{align*} \widehat{V}_n=n^{-1}\sum_{i=1}^n \widehat{\psi}_\beta(Y_i,X_i, \hat{\beta}_n)\widehat{\psi}_\beta(Y_i,X_i, \hat{\beta}_n)'\stackrel{p}{\rightarrow} V. \end{align*} \end{assumption} Let $\mathcal{C}=[c_{0,L},c_{0,U}]$. We further assume that there exist uniformly consistent estimators for $\nabla_\beta FP(c,\beta^*)$, $\nabla_\beta TP(c,\beta^*)$ and $f_{1}^*(c)/f_{0}^*(c)$ on $\mathcal{C}$. Here we state the existence of these estimators as a high level assumption and provide concrete implementations and additional assumptions in Appendix C. \begin{assumption}\label{assn: consistent estimator} The estimators $\nabla_\beta \widehat{FP}(c,\hat{\beta}_n)$, $\nabla_\beta \widehat{TP}(c,\hat{\beta}_n)$, and $\hat{f}_{1}(c)/\hat{f}_{0}(c)$ are Lipschitz continuous in $c$ on $\mathcal{C}$ which is compact and satisfy \begin{align*} &\sup_{c\in\mathcal{C}}\|\nabla_\beta\widehat{FP}(c,\hat{\beta}_n)-\nabla_\beta FP(c,\beta^*)\|=o_p(1),\\ &\sup_{c\in\mathcal{C}}\|\nabla_\beta\widehat{TP}(c,\hat{\beta}_n)-\nabla_\beta TP(c,\beta^*)\|=o_p(1),\\ &\sup_{c\in\mathcal{C}}\left|\frac{\hat{f}_{1}(c)}{\hat{f}_{0}(c)}-\frac{f_{1}(c)}{f_{0}(c)}\right|=o_p(1). \end{align*} In addition, the estimator $\hat{c}_t$ is uniformly consistent for $c_t$ for $t\in T$. \end{assumption} We now present the multiplier bootstrap. Let $U_1,\ldots, U_n$ be i.i.d.\ random variables independent of the data with moments $E[U]=0$, $E[U^2]=1$, and $E|U|^{2+\delta_u}<\infty$ for some $\delta_u>0$. 
For $t\in[0,1]$, we define the simulated stochastic process $\widehat{\Psi}^u_{R,n}(t)$ as \begin{align} \widehat{\Psi}^u_{R,n}(t)=\frac{1}{\sqrt{n}}\sum^n_{i=1}U_i\cdot\widehat{\psi}_R(Y_i,X_i,t,\hat\beta), \label{eq: simu-psi} \end{align} where \begin{align*} &\hat{\psi}_{R}(y,x,t,\beta)=\hat{\psi}_{TP}(y,x,\hat{c}_{t},\beta) -\frac{\hat{f}_{1}(\hat c_t)}{\hat{f}_{0}(\hat{c}_{t})}\hat{\psi}_{FP}(y,x,\hat{c}_{t},\beta),\\ &\hat{\psi}_{TP}(y,x,c,\beta)=\frac{y}{\hat{\pi}}\Big(1[G(x,\beta)> c] - \widehat{TP}(c,\beta)\Big)+ \nabla_\beta \widehat{TP}(c,\beta)\hat{\psi}_\beta(y,x,\beta) ,\\ &\hat{\psi}_{FP}(y,x,c,\beta)=\frac{1-y}{1-\hat{\pi}}\Big(1[G(x,\beta)> c] - \widehat{FP}(c,\beta)\Big)+ \nabla_\beta \widehat{FP}(c,\beta)\hat{\psi}_\beta(y,x,\beta) ,\\ &\hat{\pi}=\frac{1}{n}\sum_{i=1}^n Y_i. \end{align*} The next result shows that the distribution of the simulated process $\widehat{\Psi}^u_{R,n}(t)$ approximates that of the true limiting process $\Psi_{h_{R,2}}(t)$ in large samples. \begin{proposition}\label{prop: multiplier bootstrap} Suppose that Assumptions \ref{assn: iid}-\ref{assn: gradient} and \ref{assn: cond densities}-\ref{assn: consistent estimator} are satisfied. Then, conditional on the sample path of the data, $\widehat{\Psi}^u_{R,n}(\cdot)\Rightarrow \Psi_{h_{R,2}}(\cdot)$ in the space $L^\infty(T)$ with probability approaching one. \label{thm: simulated-process} \end{proposition} The weighted bootstrap has the advantage that it does not require explicit estimation of the components of the influence function $\psi_R$, but it is computationally somewhat more costly. The proof for the weighted bootstrap is also less involved, because the multiplier method relies heavily on Assumption \ref{assn: consistent estimator}, i.e., the availability of uniformly consistent estimators for the components of $\psi_R$. Obtaining estimators that satisfy Assumption \ref{assn: consistent estimator} requires additional conditions; in Appendix C we provide concrete estimators along with the additional assumptions under which Assumption \ref{assn: consistent estimator} holds, so that the multiplier method can be applied. \section{Applications to various inference problems}\label{sec: inference} In this section, we provide some examples of how the results in Section \ref{sec: uniform results} can be applied. \subsection{Uniform confidence bands}\label{subsec: uniform CB} Let $\widehat{\sigma}^2_t$ denote a uniformly consistent estimator for ${\sigma}^2_t$, the asymptotic variance of $\sqrt{n}(\widehat{R}_n(t,\hat\beta)-{R}(t,\beta^*))$ for $t\in T$. Later in this subsection we provide two such estimators, one based on the weighted bootstrap and one based on analytic results. Let $\widehat{\sigma}_{t,\epsilon}=\max\{\widehat{\sigma}_t,\epsilon\}$, where $\epsilon>0$ is a small fixed number. We are interested in a standardized version of the confidence bands, and truncating $\widehat{\sigma}_t$ at $\epsilon$ ensures that we do not divide by a quantity close to zero when $t$ is close to 0 or 1.
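\paragraph{A computational sketch.} The following is a minimal illustrative sketch (ours, not part of the formal development) of how the simulated process in (\ref{eq: simu-psi}) and the truncated standard error $\widehat{\sigma}_{t,\epsilon}$ can be combined in practice. It assumes that the estimated influence functions $\widehat{\psi}_R(Y_i,X_i,t_k,\hat\beta)$ have already been computed on a grid $t_1,\ldots,t_T$ and stored in an $n\times T$ array \texttt{psi\_hat}; standard normal multipliers are used as one admissible choice for the $U_i$'s, and the variance is estimated by the analytic formula $\widehat{\sigma}^2_t=n^{-1}\sum_i\widehat{\psi}^2_R(Y_i,X_i,t,\hat\beta)$ discussed later in this subsection.
\begin{verbatim}
# Illustrative sketch: multiplier-bootstrap sup-statistics on a grid of t values.
# Assumes psi_hat is an (n x T) array with entries psi_hat_R(Y_i, X_i, t_k, beta_hat).
import numpy as np

def multiplier_sup_stats(psi_hat, n_draws=1000, eps=1e-3, two_sided=True, seed=0):
    rng = np.random.default_rng(seed)
    n = psi_hat.shape[0]
    # analytic variance estimate: sigma_hat_t^2 = (1/n) sum_i psi_hat(i, t)^2
    sigma_eps = np.maximum(np.sqrt(np.mean(psi_hat ** 2, axis=0)), eps)
    sups = np.empty(n_draws)
    for b in range(n_draws):
        U = rng.standard_normal(n)                 # E[U] = 0, E[U^2] = 1
        proc = (U @ psi_hat) / np.sqrt(n)          # simulated process over the grid
        ratio = proc / sigma_eps
        sups[b] = np.max(np.abs(ratio)) if two_sided else np.max(ratio)
    return sups, sigma_eps

# The (1 - alpha) empirical quantile of `sups` plays the role of the critical
# value in the uniform confidence bands constructed below, e.g.
#   c_hat = np.quantile(sups, 0.90)
#   band:  R_hat(t) +/- c_hat * sigma_eps / sqrt(n)
\end{verbatim}
The only inputs that require model-specific work are the estimated influence functions themselves; once they are available, the simulation step amounts to a few matrix operations.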
For a nominal significance level $\alpha$ and for $\tau_\ell,\tau_u\in T$ with $\tau_\ell\leq \tau_u$, let $\widehat{C}^\text{1-sided}_\alpha$ and $\widehat{C}^\text{2-sided}_\alpha$ respectively denote the one- and two-sided critical values that satisfy \begin{align} \widehat{C}^\text{1-sided}_\alpha&=\inf_{ a \in \reals}\left\{P\left(\sup_{t\in[\tau_\ell,\tau_u]} \frac{\widehat{\Psi}^w_{R,n}(t) }{\widehat{\sigma}_{t,\epsilon}}\leq a\right)\geq1-\alpha\right\}, \label{eq: one side CV}\\ \widehat{C}^\text{2-sided}_\alpha&=\inf_{a \in \reals}\left\{P\left(\sup_{t\in[\tau_\ell,\tau_u]}\frac{|\widehat{\Psi}^w_{R,n}(t)|}{\widehat{\sigma}_{t,\epsilon}}\leq a\right)\geq1-\alpha\right\}.\label{eq: two side CV} \end{align} Here, $\widehat{C}^\text{1-sided}_\alpha$ and $\widehat{C}^\text{2-sided}_\alpha$ are, respectively, the $(1-\alpha)$th quantile of $\sup_{t\in[\tau_\ell,\tau_u]}\widehat{\Psi}^w_{R,n}(t) \big/\widehat{\sigma}_{t,\epsilon}$ and the $(1-\alpha)$th quantile of $\sup_{t\in[\tau_\ell,\tau_u]}\big|\widehat{\Psi}^w_{R,n}(t) \big/\widehat{\sigma}_{t,\epsilon}\big|$. Note that one can replace $\widehat{\Psi}^w_{R,n}(t) $ with $\widehat{\Psi}^u_{R,n}(t) $ to construct $\widehat{C}^\text{1-sided}_\alpha$ and $\widehat{C}^\text{2-sided}_\alpha$ as well. Once the critical values are constructed, we can also obtain one- and two-sided uniform confidence bands for ${R}(t,\beta^*)$ over $[\tau_\ell,\tau_u]$. Specifically, the one-sided $(1-\alpha)$ uniform confidence band is given by \begin{equation} \label{band:one} \left(\widehat{R}_n(t,\hat\beta)-\widehat{C}^\text{1-sided}_\alpha\frac{\widehat{\sigma}_{t,\epsilon}}{\sqrt{n}},\quad+\infty\right),\quad t\in[\tau_\ell,\tau_u] \end{equation} and the two-sided $(1-\alpha)$ uniform confidence band is \begin{equation} \label{band:two} \left(\widehat{R}_n(t,\hat\beta)-\widehat{C}^\text{2-sided}_\alpha\frac{\widehat{\sigma}_{t,\epsilon}}{\sqrt{n}},\quad\widehat{R}_n(t,\hat\beta)+\widehat{C}^\text{2-sided}_\alpha\frac{\widehat{\sigma}_{t,\epsilon}}{\sqrt{n}}\right),\quad t\in[\tau_\ell,\tau_u]. \end{equation} \bigskip \noindent {\bf Implementation of Uniform Confidence Bands} We now provide a step-by-step implementation for constructing uniform confidence bands. \begin{enumerate} \item Obtain $\widehat{R}_n(t,\hat\beta)$ from Section~\ref{sec: uniform results} and $\widehat{\sigma}_{t,\epsilon}$ from Section \ref{subsec: uniform CB} with $t\in\{\tau_\ell,\tau_\ell+0.01,\dotsc,\tau_u\}$. \item Draw i.i.d.\ pseudo random variables $\{W_1,\dotsc,W_n\}$, where the $W_i$'s are normally distributed with mean and variance equal to one, $B$ times for, say, $B=1000$. For each repetition $b=1,\dotsc,B$, calculate the simulated process $\widehat{\Psi}^w_{R,n}(t)$ according to (\ref{eq: simu-psi}). \item For the one-sided case, store the maximum value of ${\widehat{\Psi}^w_{R,n}(t) }\big/{\widehat{\sigma}_{t,\epsilon}}$ over the grid of $t$ values set up in Step 1; that is, let $M_b=\max_{t\in\{\tau_\ell,\tau_\ell+0.01,\dotsc,\tau_u\}}{\widehat{\Psi}^w_{R,n}(t) }\big/{\widehat{\sigma}_{t,\epsilon}}$ for $b=1,\dotsc,B$. \item Rank the $M_b$ values in ascending order so that $M_{(1)}\leq\dotsc\leq M_{(B)}$. Next, define $M_{(\lfloor(1-\alpha)B\rfloor)}$ as the critical value $\widehat{C}^\text{1-sided}_\alpha$, where $\lfloor a \rfloor$ is the floor function returning the largest integer not greater than $a$. The one-sided $(1-\alpha)$ uniform confidence band for $\{{R}(t,\beta^*):t\in[\tau_\ell,\tau_u]\}$ is given by \eqref{band:one}.
\item For the two-sided case, simply replace ${\widehat{\Psi}^w_{R,n}(t) }\big/{\widehat{\sigma}_{t,\epsilon}}$ in Step 3 with $\big|{\widehat{\Psi}^w_{R,n}(t) }\big|\big/{\widehat{\sigma}_{t,\epsilon}}$ and repeat Step 4 for the critical value $\widehat{C}^\text{2-sided}_\alpha$. The two-sided $(1-\alpha)$ uniform confidence band for $\{{R}(t,\beta^*):t\in[\tau_\ell,\tau_u]\}$ is given by \eqref{band:two}. \end{enumerate} \bigskip \noindent {\bf Uniformly consistent estimators for ${\sigma}^2_t$}\\ We consider two estimators here. The first estimator is based on the weighted bootstrap and is similar to Remark 2 after Proposition \ref{prop: est effect gen-bootstrap}. Let $\widehat{R}^w_n(t,\hat\beta^w)$ denote the ROC estimate from the $w$th bootstrap cycle, $w=1,\ldots, \mathcal{W}$. Then one can estimate ${\sigma}^2_t$ by \begin{align*} &\widehat{\sigma}^2_t=\frac{n}{\mathcal{W}}\sum_{w=1}^\mathcal{W} \big(\widehat{R}^w_n(t,\hat\beta^w)-\overline{\widehat{R}}^w_n(t,\hat\beta^w)\big)\big(\widehat{R}^w_n(t,\hat\beta^w)-\overline{\widehat{R}}^w_n(t,\hat\beta^w)\big)',\text{ where}\\ &\overline{\widehat{R}}^w_n(t,\hat\beta^w)=\frac{1}{\mathcal{W}}\sum_{w=1}^\mathcal{W}\widehat{R}^w_n(t,\hat\beta^w). \end{align*} We have that, conditional on the sample path, with probability approaching one, \begin{align*} &\widehat{\sigma}^2_t \stackrel{p}{\rightarrow}_w \frac{1}{n}\sum_{i=1}^n \psi^2_{R}(Y_i,X_i,t,\beta^*)+o_p(1), \end{align*} where $\stackrel{p}{\rightarrow}_w$ denotes the probability limit under the law of the $W_i$'s. It follows that uniformly over $t\in[\tau_\ell,\tau_u]$, \begin{align*} \lim_{\mathcal{W} \rightarrow \infty} \widehat{\sigma}^2_t\stackrel{p}{\rightarrow} {\sigma}^2_t. \end{align*} The second estimator is based on analytic results. Recall that $\widehat{\psi}_R(Y_i,X_i,t,\hat\beta)$ is the estimated influence function for $\widehat{R}_n(t,\hat\beta)$ used in the multiplier bootstrap method. A uniformly consistent estimator for ${\sigma}^2_t$ is given by \begin{align*} \widehat{\sigma}^2_t=\frac{1}{n}\sum_{i=1}^n \widehat{\psi}^2_R(Y_i,X_i,t,\hat\beta) \end{align*} and this is shown in the proof of Proposition \ref{prop: multiplier bootstrap}. \subsection{ROC dominance test} \label{subsec: ROC dominance} For two predictive index models $G_1(X,\beta_1)$ and $G_2(X,\beta_2)$, we may want to test whether $G_1$ has (weakly) better predictive power than $G_2$ in the sense that the ROC curve associated with $G_1$ dominates the ROC curve associated with $G_2$. Dominance means that for any given false positive rate, $G_1$ delivers a true positive rate at least as high as $G_2$, i.e., the ROC curve for $G_1$ never falls below the ROC curve for $G_2$. Any decision maker, regardless of their loss function and their optimal cutoff, would then prefer model $G_1$ over $G_2$. Let $R_1(t,\beta_1^*)$ and $R_2(t,\beta_2^*)$ denote the ROC curves associated with $G_1$ and $G_2$, respectively. The hypothesis that $R_1(t,\beta_1^*)$ dominates $R_2(t,\beta_2^*)$ can be formally stated as \begin{align} &H_0: R_2(t,\beta_2^*)\leq R_1(t,\beta_1^*) ~~\text{ for all $t\in[0,1]$},\nonumber\\ &H_1: R_2(t,\beta_2^*) >R_1(t,\beta_1^*) ~~\text{ for some $t\in[0,1]$}. \label{eq: null-ROC-Dominace-1} \end{align} Our test for ROC dominance is similar to the test for first order stochastic dominance in Barrett and Donald (2003) and Donald and Hsu (2016) except that we need to consider the estimation effect of $\hat{\beta}$ as in Linton, Maasoumi and Whang (2005) and Linton, Song and Whang (2010).
Let $\widehat{R}_{j,n}(t,\hat{\beta}_j)$ be the estimators for $R_j(t,\beta_j^*)$ for $j=1,2$. Define $\psi_{j,R}(Y_i,X_i,t,\beta_j^*)$ for $j=1$ and 2 as above. Let $\widehat{\sigma}^2_{RD}(t)$ denote a uniformly consistent estimator for ${\sigma}^2_{RD}(t)$, the asymptotic variance of $\sqrt{n}\big(\widehat{R}_{2,n}(t,\hat{\beta}_2)-\widehat{R}_{1,n}(t,\hat{\beta}_1)-({R}_{2}(t,{\beta}^*_2)-{R}_{1}(t,{\beta}^*_1))\big)$. Let $\widehat{\sigma}_{RD,\epsilon}(t)=\max\{\widehat{\sigma}_{RD}(t),\epsilon\}$, where $\epsilon>0$ is a small fixed number. A uniformly consistent estimator $\widehat{\sigma}^2_{RD}(t)$ can be obtained similarly to $\widehat{\sigma}^2_t$ in Section \ref{subsec: uniform CB}, so we omit the details. We define the test statistic as $\widehat{S}_n=\sqrt{n}\sup_{t\in[0,1]}(\widehat{R}_{2,n}(t,\hat{\beta}_2)-\widehat{R}_{1,n}(t,\hat{\beta}_1))/\widehat{\sigma}_{RD,\epsilon}(t)$. Define the weighted bootstrap process $\widehat{\Psi}^w_{RD,n}(t)$ as $\widehat{\Psi}^w_{RD,n}(t)=\sqrt{n}\big(\widehat{R}^w_{2,n}(t,\hat\beta^w_2)-\widehat{R}^w_{1,n}(t,\hat\beta^w_1) -(\widehat{R}_{2,n}(t,\hat{\beta}_2)-\widehat{R}_{1,n}(t,\hat{\beta}_1))\big) $ and define the multiplier bootstrap process $\widehat{\Psi}^u_{RD,n}(t)$ as \begin{align*} \widehat{\Psi}^u_{RD,n}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^nU_i\cdot(\widehat{\psi}_{2,R}(Y_i,X_i,t,\hat{\beta}_2)-\widehat{\psi}_{1,R}(Y_i,X_i,t,\hat{\beta}_1)). \end{align*} Under the least favorable configuration, we define the weighted bootstrap critical value as \begin{align} \hat{c}_n=\sup \Big\{c\big|P^{w}\Big(\sup_{t\in[0,1]}\frac{\widehat{\Psi}^w_{RD,n}(t)}{\widehat{\sigma}_{RD,\epsilon}(t)}\leq c\Big) \leq 1-\alpha \Big\}, \end{align} with significance level $\alpha$. The decision rule is \begin{align} \text{Reject $H_0$ if $\widehat{S}_n>\hat{c}_n$.}\label{eq: decision rule} \end{align} One can also use $\widehat{\Psi}^u_{RD,n}(t)$ to construct the critical value $\hat{c}_n$. Similarly to the stochastic dominance testing literature, we can show that under the null hypothesis the asymptotic size of the test with the decision rule defined in (\ref{eq: decision rule}) is less than or equal to $\alpha$. That is, we can control the asymptotic size of our ROC dominance test well. Also, under a fixed alternative, the test statistic converges to positive infinity while the critical value converges to a finite number, so the test is consistent. Our test is based on the least favorable configuration, so it is conservative in that the asymptotic size is strictly smaller than $\alpha$ unless $R_2(t,\beta_2^*)= R_1(t,\beta_1^*)$ for all $t\in[0,1]$. One can improve the power of our test by using the recentering method in Hansen (2005) and Donald and Hsu (2016), which is similar to the generalized moment selection method in Andrews and Soares (2010) and Andrews and Shi (2013), or the contact set approach in Linton, Song and Whang (2010). In this paper, we do not adopt this approach but the extension is straightforward. \subsection{Comparing AUCs} Recall that the AUC is defined as the integral of the ROC curve from 0 to 1. Following Section \ref{subsec: ROC dominance}, let $R_1(t,\beta_1^*)$ and $R_2(t,\beta_2^*)$ denote the ROC curves for two predictive index models $G_1(X,\beta_1)$ and $G_2(X,\beta_2)$. Let $AUC_j=\int_{0}^1 R_j(t,\beta_j^*) dt$ and its estimator be $\widehat{AUC}_j=\int_{0}^1 \widehat{R}_{j,n}(t,\hat{\beta}_j)dt $ for $j=1$ and 2.
Then it is true that \begin{align*} \sqrt{n}(\widehat{AUC}_2-\widehat{AUC}_1)= \frac{1}{\sqrt{n}}\sum_{i=1}^n \int_{0}^{1}\big({\psi}_{2,R}(Y_i,X_i,t,\beta^*_2)-{\psi}_{1,R}(Y_i,X_i,t,{\beta}^*_1)\big)dt+o_p(1) \end{align*} and \begin{align*} &\sqrt{n}(\widehat{AUC}_2-\widehat{AUC}_1){\rightarrow}_d N[0, \mathcal{V}_{a}],\\ & \mathcal{V}_{a}=E\Big[\Big(\int_{0}^{1} \big({\psi}_{2,R}(Y,X,t,\beta^*_2)-{\psi}_{1,R}(Y,X,t,{\beta}^*_1)\big)dt\Big)^2\Big]. \end{align*} To conduct inference, one can use the weighted bootstrap to approximate the limiting distribution $N[0, \mathcal{V}_{a}]$, or one can estimate $\mathcal{V}_{a}$ analytically by \begin{align*} \widehat{\mathcal{V}}_{a}= \frac{1}{n}\sum_{i=1}^n \Big(\int_{0}^1(\widehat{\psi}_{2,R}(Y_i,X_i,t,\hat{\beta}_2)-\widehat{\psi}_{1,R}(Y_i,X_i,t,\hat{\beta}_1))dt\Big)^2. \end{align*} For brevity, we omit the details here. \bigskip \noindent {\bf Remark}\\ Suppose that $G(X,\beta)$ is a correct specification for the propensity score function, $P(Y=1|X)$, in that there exists $\beta^*$ such that $G(X,\beta^*)=P(Y=1|X)$ a.s. Then the effect of estimating $\beta^*$ on the distribution of the estimated AUC is negligible. It is then true that when two predictive index models, $G_1(X,\beta_1)$ and $G_2(X,\beta_2)$, are both correctly specified for $P(Y=1|X)$, we have $\mathcal{V}_{a}=0$, i.e., the limiting distribution of $\sqrt{n}(\widehat{AUC}_2-\widehat{AUC}_1)$ is degenerate. \section{Simulations: the relevance of the estimation effect}\label{sec: simulations} We now present a small Monte Carlo simulation to illustrate the theoretical discussion of the estimation effect and the pointwise asymptotic results in Section~\ref{sec: pointwise results}. The data generating process (DGP) is specified as follows. Let $X=(X_1, X_2, X_3)'$ be a $3\times 1$ vector of predictors and $\tilde X=(1,X')'$. The components of $X$ are either independent $N(0,1)$ or unif$[-0.5,1.5]$ variables. The outcomes $Y$ are generated according to the conditional probability function \[ p(X)=G(X,\beta^\circ)=G(\tilde X'\beta^\circ)\text{ with } \beta^\circ=(0, 0.5, 0.25, 1)'\text{ and }G\in\{\text{logit},\text{cauchit}\}. \] In the majority of the exercises we use a logistic link in the DGP (so that the logit first stage is correctly specified), but we also conduct some simulations with a cauchit link (so that the logit first stage is mildly misspecified). Given a sample of observations and a cutoff $c$, we construct nominally 90\% confidence intervals for TP$(c,\beta^\circ)$ and TP$(c,\beta^\circ)-FP(c,\beta^\circ)$ in three different ways: (i) using the true predictive index $p(X)=G(\tilde X'\beta^\circ)$ with the conventional limit distributions (\ref{eq: TP asy dist}) and (\ref{eq: FP asy dist}); (ii) using the estimated predictive index $\Lambda(\tilde X'\hat\beta)$ with the conventional limit distributions (so that the estimation effect is ignored); (iii) using the estimated predictive index $\Lambda(\tilde X'\hat\beta)$ with the corrected limit distribution (\ref{eq: TF joint dist}). We simulate 10,000 samples and compute the actual coverage probability of these intervals.
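For concreteness, the following is a minimal illustrative sketch of a single replication of this exercise (it is not the code used to produce Table~\ref{tbl: est effect}). It simulates the logit DGP, fits the first stage by Newton--Raphson, and compares the ``conventional'' standard error for $\widehat{TP}(c,\hat\beta)$, which ignores the estimation effect, with the ``corrected'' one based on the influence function $\psi_{TP}$ from Propositions~\ref{prop: est effect gen} and \ref{prop: logit}. The Gaussian kernel and the Silverman-type bandwidth used for the gradient estimator (\ref{1 step grad est}) are illustrative choices.
\begin{verbatim}
# Illustrative sketch of one Monte Carlo replication (not the authors' code).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, c = 500, 0.5
beta0 = np.array([0.0, 0.5, 0.25, 1.0])          # (constant, X1, X2, X3)

# data generating process: logit link
X = rng.standard_normal((n, 3))
Xt = np.column_stack([np.ones(n), X])             # tilde X = (1, X')'
Y = (rng.uniform(size=n) < 1 / (1 + np.exp(-Xt @ beta0))).astype(float)

# first stage: logit MLE via Newton-Raphson
beta = np.zeros(4)
for _ in range(50):
    lam = 1 / (1 + np.exp(-Xt @ beta))
    A = (Xt * (lam * (1 - lam))[:, None]).T @ Xt  # n * A_beta_hat
    beta = beta + np.linalg.solve(A, Xt.T @ (Y - lam))
index = 1 / (1 + np.exp(-Xt @ beta))              # Lambda(tilde X' beta_hat)

# empirical TP and the conventional (fixed-index) influence function
pi_hat = Y.mean()
TP_hat = np.mean(Y * (index > c)) / pi_hat
psi_conv = (Y / pi_hat) * ((index > c) - TP_hat)

# one-step kernel estimate of the gradient of TP with respect to beta
idx1 = index[Y == 1]
h = 1.06 * idx1.std() * len(idx1) ** (-0.2)       # Silverman-type rule (illustrative)
w = norm.pdf((idx1 - c) / h)                      # Gaussian kernel
grad_TP = c * (1 - c) * (Xt[Y == 1].T @ w) / (len(idx1) * h)

# estimated influence function of beta_hat and the corrected influence function
psi_beta = np.linalg.solve(A / n, (Xt * (Y - index)[:, None]).T).T
psi_corr = psi_conv + psi_beta @ grad_TP

z = norm.ppf(0.95)                                # two-sided 90% CI
for name, psi in [("conventional", psi_conv), ("corrected", psi_corr)]:
    se = np.sqrt(np.mean(psi ** 2) / n)
    print(f"{name:>12s} CI: [{TP_hat - z*se:.3f}, {TP_hat + z*se:.3f}]")
\end{verbatim}
Repeating this over many simulated samples and recording how often the intervals cover the true $TP(c,\beta^\circ)$ reproduces the kind of coverage comparison reported in Table~\ref{tbl: est effect}.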
\begin{table}[thbp] {\footnotesize \begin{center} \topcaption{Illustration of the estimation effect: actual coverage probabilities} \begin{tabular}{l cccc|cccc} &\multicolumn{4}{c|}{Nominal\ 90\% CIs for TP}&\multicolumn{4}{c}{Nominal\ 90\% CIs for TP-FP}\\ &True&\multicolumn{2}{c}{\underline{Conventional}}&\multicolumn{1}{c|}{\underline{Corrected}}&True&\multicolumn{2}{c}{\underline{Conventional}}&\multicolumn{1}{c}{\underline{Corrected}}\\[-5pt] $c$ & value & $G(\tilde X' \beta^\circ)$ & $\Lambda(\tilde X' \hat\beta)$ & $\Lambda(\tilde X' \hat\beta)$ & value\ & $G(\tilde X' \beta^\circ)$ & $\Lambda(\tilde X' \hat\beta)$ & $\Lambda(\tilde X' \hat\beta)$\\ \hline &\multicolumn{8}{c}{(A) $n=200$, $X_1, X_2, X_3\sim$iid $N(0,1)$}\\ \hline 0.2 &0.970 &0.794 & 0.793 & 0.885 & 0.166 & 0.897 & 0.628 & 0.850\\[-5pt] 0.33 &0.884 &0.887 & 0.856 & 0.892 & 0.313 & 0.893 & 0.778 & 0.885\\[-5pt] 0.5 &0.694 &0.894 & 0.768 & 0.890 & 0.388 & 0.896 & 0.889 & 0.896\\[-5pt] 0.67 &0.429 &0.894 & 0.620 & 0.876 & 0.313 & 0.897 & 0.769 & 0.878\\[-5pt] 0.8 &0.196 &0.895 & 0.533 & 0.847 & 0.166 & 0.894 & 0.615 & 0.842\\ \hline &\multicolumn{8}{c}{(B) $n=500$, $X_1, X_2, X_3\sim$iid $N(0,1)$}\\ \hline 0.2 & 0.970 & 0.861 & 0.843 & 0.893 & 0.166 &0.905 & 0.625 & 0.862\\[-5pt] 0.33 & 0.884 & 0.895 & 0.861 & 0.891 & 0.313 &0.903 & 0.779 & 0.888\\[-5pt] 0.5 & 0.694 & 0.892 & 0.766 & 0.891 & 0.388 &0.901 & 0.896 & 0.899\\[-5pt] 0.67 & 0.429 & 0.897 & 0.618 & 0.886 & 0.313 &0.900 & 0.773 & 0.886\\[-5pt] 0.8 & 0.196 & 0.895 & 0.531 & 0.862 & 0.166 &0.895 & 0.627 & 0.864\\ \hline &\multicolumn{8}{c}{(C) $n=2500$, $X_1, X_2, X_3\sim$iid $N(0,1)$}\\ \hline 0.2 & 0.970 & 0.893 & 0.857 & 0.901 & 0.166 & 0.902 & 0.630 & 0.885\\[-5pt] 0.33 & 0.884 & 0.898 & 0.863 & 0.899 & 0.313 & 0.899 & 0.779 & 0.899\\[-5pt] 0.5 & 0.694 & 0.899 & 0.779 & 0.902 & 0.388 & 0.901 & 0.902 & 0.899\\[-5pt] 0.67 & 0.429 & 0.898 & 0.633 & 0.896 & 0.313 & 0.894 & 0.776 & 0.896\\[-5pt] 0.8 & 0.196 & 0.895 & 0.545 & 0.881 & 0.166 & 0.898 & 0.630 & 0.885\\ \hline &\multicolumn{8}{c}{(D) $n=500$, $X_1, X_2, X_3\sim$unif $[-0.5,1.5]$}\\ \hline 0.5 & 0.934 & 0.889 & 0.515 & 0.876 &0.117 & 0.888 & 0.628 &0.855\\[-5pt] 0.67& 0.671 & 0.897 & 0.595 & 0.904 &0.257 & 0.894 & 0.897 &0.926\\[-5pt] 0.8 & 0.304 & 0.899 & 0.378 & 0.862 &0.182 & 0.894 & 0.675 &0.899\\ \hline &\multicolumn{8}{c}{(E) $n=500$, $X_1, X_2, X_3\sim$iid $N(0,1)$, $G$=cauchit}\\ \hline 0.2 & 0.963 & 0.878 & 0.841 & 0.879 & 0.157 &0.899 &0.602 & 0.862\\[-5pt] 0.33 & 0.862 & 0.902 & 0.626 & 0.657 & 0.339 &0.898 &0.701 & 0.858\\[-5pt] 0.5 & 0.702 & 0.898 & 0.769 & 0.898 & 0.404 &0.896 &0.890 & 0.895\\[-5pt] 0.67 & 0.476 & 0.901 & 0.494 & 0.814 & 0.339 &0.903 &0.701 & 0.857\\[-5pt] 0.8 & 0.194 & 0.895 & 0.523 & 0.861 & 0.157 &0.899 &0.601 & 0.858\\ \hline \end{tabular} \label{tbl: est effect} \end{center} Note: $c$ is the cutoff; ``True value'' is the true value of $TP(c,\beta^\circ)$ and $TP(c,\beta^\circ)$-$FP(c,\beta^\circ)$. All other numbers in the table are actual coverage probabilities. $G(\tilde X'\beta^\circ)$ means using the true value of $\P(Y=1|X)$ as the predictive index; $\Lambda(\tilde X'\hat\beta)$ means pre-estimating the predictive index by a logit regression of $Y$ on $\tilde X=(1,X')'$. The columns labeled ``Conventional'' report CIs based on the limit distributions (\ref{eq: TP asy dist}) and (\ref{eq: FP asy dist}). The columns labeled ``Corrected'' report CIs based on (\ref{eq: TF joint dist}), which accounts for the pre-estimation effect. 
} \end{table} Table \ref{tbl: est effect} reports the results from this exercise for $c\in\{0.2, 0.33, 0.5, 0.67, 0.8\}$ and $n\in\{200, 500, 2500\}$. The first message is that failing to account for the pre-estimation effect can cause substantial distortions in the coverage probability of the conventional CIs. In panels (A) through (D) the estimation effect can be seen by comparing the columns titled ``Conventional $G(\tilde X'\beta^\circ)$'' and ``Conventional $\Lambda(\tilde X'\hat\beta)$.'' In the former case there is no estimation effect and any deviation from the nominal confidence level of 90\% is a small sample phenomenon.\footnote{For example, for $c=0.2$ the value of TP$(c,\beta^\circ)$ is close to the upper bound 1, and the coverage probability of the fixed-$\beta$ CI is only 80\% for $n=200$.} Over the various cases, the estimation effect ranges from essentially zero to as large as a 30 to 40 percentage point difference in coverage probability. In panel (E), the comparison between the same two columns includes the estimation effect as well as some ``bias'' due to the fact that the first stage logit regression is misspecified. The theory presented in Section~\ref{subsec: est eff simple theory} gives insight into why the estimation effect is negligible in some cases. In particular, consider the parameter $TP-FP$ on panels (A) through (C) with $c=0.5$. As the predictors are independent standard normal variables and there is no constant in the DGP, the symmetry of the logistic cdf gives $\pi=\P(Y=1)=0.5$. Therefore, when $c=0.5$, $TP-FP$ is a scalar multiple of $(1-c)\pi TP-c(1-\pi)FP$. As explained in footnote~\ref{fn: no est effect}, inference about this particular linear combination is not impacted by the pre-estimation effect. This is clearly reflected in the simulation results. By contrast, in panel (D) the predictor distribution is not symmetric around zero, so $\pi\neq 0.5$, and the estimation effect is indeed present for $TP-FP$ even when $c=0.5$. The second main message is that the proposed analytical correction works well in virtually all the cases considered here. This includes panel (A), where the sample size is small, and panel (E), where the first stage logit model is misspecified. Not surprisingly, under misspecification the corrected CI can also fall somewhat short of the 90\% confidence level, but it still represents a large improvement over conventional inference. \section{Conclusions}\label{sec: concl} We provided both pointwise and uniform asymptotic results that describe the distribution of an empirical ROC curve based on a pre-estimated index. The core theory is complete. Ongoing work consists of: (i) developing appropriate test procedures when the first stage models are nested and the ROC influence functions are the same under the null; (ii) additional simulations that illustrate the small sample performance of the uniform asymptotic results, the practical use of the tests, and the power gains afforded by in-sample inference. \newpage \begin{apabib} Abadie, A.\ and G.W.\ Imbens (2016): ``Matching on the Estimated Propensity Score''. \textit{Econometrica}, 84: 781-807. Andrews, D.W.K.\ (1994): ``Empirical Process Methods in Econometrics,'' in Handbook of Econometrics, vol.\ IV, eds.\ R.F.\ Engle and D.L.\ McFadden, Elsevier. Andrews, D.\ W.\ K.\ and G.\ Soares (2010): ``Inference for Parameters Defined by Moment Inequalities Using Generalized Moment Selection''. \textit{Econometrica}, 78: 119-157.
Andrews, D.\ W.\ K.\ and X.\ Shi (2013): ``Inference Based on Conditional Moment Inequalities". \textit{Econometrica}, 81: 609-666. Anjali D.N.\ and P. Bossaerts (2014): ``Risk and Reward Preferences under Time Pressure''. \textit{Review of Finance}, 18: 999-1022. Bamber, D.\ (1975): ``The Area above the Ordinal Dominance Graph and the Area below the Receiver Operating Characteristic Graph''. \textit{Journal of Mathematical Psychology} 12: 387-415. Barrett, G.F.\ and S.G.\ Donald (2003): ``Consistent Tests for Stochastic Dominance''. \textit{Econometrica}, 71: 71-104. Bazzi, S., R.A.\ Blair, C.\ Blattman, O.\ Dube, M.\ Gudgeon and R.\ Peck (2021): ``The Promise and Pitfalls of Conflict Prediction: Evidence from Colombia and Indonesia''. \textit{The Review of Economics and Statistics}, forthcoming. Berge, T.J. and O.\ Jorda (2011): ``Evaluating the Classification of Economic Activity into Recessions and Expansions''. \textit{American Economic Journal: Macroeconomics} 3: 246-247. Bonfim, D., G. Nogueira and S.\ Ongena (2021): `` `Sorry, We're Closed' Bank Branch Closures, Loan Pricing, and Information Asymmetries''. \textit{Review of Finance}, 25: 1211-1259. Clark, T.E.\ and M.W.\ McCracken (2012): ``In-sample Tests of Predictive Ability: A New Approach''. \textit{Journal of Econometrics} 170: 1-14. DeLong, E.R., D.M. DeLong and D.L. Clarke-Pearson (1988): ``Comparing Areas under Two or More Correlated Receiver Operating Characteristic Curves: A Nonparametric Approach''. \textit{Biometrics} 44: 837-845. Demler, O.V., M.J.\ Pencina and R.B.\ D'Agostino, Sr. (2012): ``Misuse of DeLong Test to Compare AUCs for Nested Models''. \textit{Statistics in Medicine} 31: 2577-2587. Egan, J.P.\ (1975): \emph{Signal Detection Theory and ROC Analysis}. Academic Press: New York. Donald, S.G.\ and Y.-C.\ Hsu (2014): ``Estimation and Inference for Distribution Functions and Quantile Functions in Treatment Effect Models". \textit{Journal of Econometrics}, 178: 383-397. Donald, S.G.\ and Y.-C.\ Hsu (2016): ``Improving the Power of Tests of Stochastic Dominance". \textit{Econometric Reviews}, 35: 553-585. Donald, S.G.\ , Y.-C.\ Hsu and G.F.\ Barrett (2012): ``Incorporating Covariates in the Measurement of Welfare and Inequality: Methods and Applications". \textit{Econometrics Journal}, 15: C1-C30. Elliott, G.\ and R.P.\ Lieli (2013): ``Predicting Binary Outcomes''. \textit{Journal of Econometrics}, 174: 15-26. Hansen, P.\ R.\ (2005): ``A Test for Superior Predictive Ability". \textit{Journal of Business and Economic Statistics}, 23: 365--380. Hsieh, F.\ and Turnbull, B.W.\ (1996): ``Nonparametric and Semiparametric Estimation of the Receiver Operating Characteristic Curve''. \textit{The Annals of Statistics}, 24: 25-40. Inoue, A.\ and Kilian, L.\ (2004): ``In-sample or out-of-sample tests of predictability? Which one should we use?''. \textit{Econometric Reviews}, 23: 371-402. Kleinberg, J., H.\ Lakkaraju, J.\ Leskovec, J.\ Ludwig and S.\ Mullainathan (2018): ``Human Decisions and Machine Predictions''. \textit{The Quarterly Journal of Economics}, 133: 237–293. Lahiri, K.\ and L.\ Yang (2018): ``Confidence Bands for ROC Curves With Serially Dependent Data''. \textit{Journal of Business and Economic Statistics}, 36: 115-130. Lahiri, K. and J.G. Wang (2013): ``Evaluating Probability Forecasts for GDP Declines Using Alternative Methodologies''. \textit{International Journal of Forecasting}, 29: 175-190. Lieli, R.P.\ and Y-C.\ Hsu (2019): ``Using the Estimated AUC to Test the Adequacy of Binary Predictors''. 
\textit{Journal of Nonparametric Statistics}, 31: 100-130. Lieli, R.P.\ and A.\ Nieto-Barthaburu (2010): ``Optimal Binary Prediction for Group Decision Making''. \textit{Journal of Business and Economic Statistics}, 28: 308-319. Linton, O., E.\ Maasoumi and Y.-J.\ Whang (2005): ``Consistent Testing for Stochastic Dominance under General Sampling Schemes". \textit{The Review of Economic Studies}, 72: 735-765. Linton, O., K.\ Song and Y.-J.\ Whang (2010): ``An Improved Bootstrap Test of Stochastic Dominance". \textit{Journal of Econometrics}, 154: 186-202. Ma, S.\ and M.R.\ Kosorok (2005): ``Robust Semiparametric M-estimation and the Weighted Bootstrap''. \textit{Journal of Multivariate Analysis}, 96: 190-270. McCracken, M.W., J.T.\ McGillicuddy and M.T.\ Owyang (2021): ``Binary Conditional Forecasts,'' \textit{Journal of Business and Economic Statistics}, forthcoming. Pagan, A.\ (1984): ``Econometric Issues in the Analysis of Regressions with Generated Regressors''. \textit{International Economic Review}, 25: 221–247. Pepe, M.S.\ (2003): \emph{The Statistical Evaluation of Medical Tests for Classification and Prediction.} Oxford University Press: Oxford. Pollard, D.\ (1990): \emph{Empirical Processes: Theory and Application.} CBMS Conference Series in Probability and Statistics, Vol.\ 2. Hayward, CA: Institute of Mathematical Statistics. Schularik, M. and A.M. Taylor (2012): ``Credit Booms Gone Bust: Monetary Policy, Leverage Cycles, and Financial Crises, 1870-2008''. \textit{American Economic Review} 102: 1029-1061. Van der Vaart, A.\ W.\ and J.\ A.\ Wellner (1996): \emph{Weak Convergence and Empirical Processes: With Application to Statistics.} New York: Springer-Verlag. Wooldridge, J.M.\ (2002): \emph{Econometric Analysis of Cross Section and Panel Data.} The MIT Press: Cambridge. \end{apabib} \newpage
\section{Introduction}\label{s1} A {\it mixed graph} is a graph in which any two distinct vertices are joined by at most one edge, which may be either directed or undirected. To distinguish the two types, we refer to a directed edge in a mixed graph as an {\it arc}. An {\it undirected graph} is a mixed graph without arcs. For a mixed graph $G$ of order $n$, let the vertex set of $G$ be $V(G)=[n]=\{1,2,\ldots,n\}$, and let the set $E(G)$ collect all arcs $\overrightarrow{ij}$ and all undirected edges $ij$ in $G$, where $i, j\in V(G)$ are distinct. The {\it size} of a mixed graph $G$ is defined to be the number of arcs plus twice the number of undirected edges in $G$. The {\it adjacency matrix} $A=(a_{ij})$ of $G$ is a square $01$-matrix of order $n$ defined by $a_{ij}=1$ if and only if $\vv{ij}\in E(G)$ or $ij\in E(G)$. The diagonal matrix $D^+={\rm diag}(d^+_1, d^+_2, \ldots, d^+_n)$ is called the {\it out-degree matrix} of $G$, where $d^+_i=|\{j~:~\vv{ij}\in E(G){~\rm or~}ij\in E(G)\}|$. For a real number $\alpha\in[0,1]$, the matrix $A_\alpha(G)$ of $G$ is defined to be $\alpha D^++(1-\alpha)A$. The concept of the $A_\alpha$ matrix of a graph was first introduced by Nikiforov~\cite{n:17} in 2017, and Liu et al.~\cite{l:19} then started to consider the $A_\alpha$ matrix of digraphs. Since $A_\alpha$ is nonnegative, and it is well known that the spectral radius of a nonnegative matrix is itself an eigenvalue, let $\rho_\alpha(G)$ denote the largest real eigenvalue $\rho(A_\alpha(G))$ of the $A_\alpha$ matrix $A_\alpha(G)$ of $G$, and refer to $\rho_\alpha(G)$ as the {\it $A_\alpha$-spectral radius}, or $\alpha$-{\it index}, of $G$. For previous studies on $A_\alpha$-spectral radii of undirected graphs and mixed graphs, see \cite{gb:20,gz:20,lcm:19,nprs:17,r:20,xsw:20}. The {\it underlying graph} of a mixed graph $G$ is the undirected graph obtained from $G$ by removing the directions on arcs. The {\it distance} $\partial(a,b)$ for vertices $a,b$ in $G$ is their distance in the underlying graph of $G$. The {\it diameter} of $G$ is defined to be $\max_{a,b\in V(G)}\partial(a,b)$. Similarly, a {\it mixed tree}, {\it mixed path}, and {\it mixed star} are defined as mixed graphs whose underlying graphs are a tree, a path, and a star, respectively, where a star is a tree of diameter at most $2$. We denote the mixed path of order $k$ and size $2k-2$ by $P_k$. The main goal of this paper is to find the sharp upper bound and the sharp lower bound of the $A_\alpha$-spectral radii of mixed trees of order $n$ and size $m$, where $n-1\leq m\leq 2n-2$. The following two theorems are our main results. \begin{Theorem}\label{upr} If $\alpha\in [0, 1]$ and $T$ is a mixed tree of order $n$ and size $m$, then $$\rho_\alpha(T)\leq\frac{1}{2}\left(\alpha n+\sqrt{\alpha^2n^2-4\alpha^2(n-1)+4(1-\alpha)^2(m-n+1)}\right).$$ Moreover, every mixed star of order $n$ and size $m$ with maximum out-degree $n-1$ attains the upper bound. \end{Theorem} \begin{Theorem}\label{lwr} If $T$ is a mixed tree of order $n$ and size $m$, and $k=\lceil \frac{n}{2n-m-1}\rceil$, then $$\rho_\alpha(T)\geq \rho_\alpha(P_k).$$ Moreover, the lower bound is attained when $T=P_n$. \end{Theorem} It is worth mentioning that in the special case $m=2n-2$, Theorem~\ref{upr} and Theorem~\ref{lwr} are proved in \cite{nprs:17}. The main tool in the proof of Theorem~\ref{upr} is the Kelmans transformation for matrices \cite{kw:21}, which will be introduced in Section~\ref{s2}.
A partially ordered set (poset) $\mathcal{G}(n, m)$ of mixed graphs of order $n$ and size $m$ that respects the order of $A_\alpha$-spectral radii is introduced and the maximal elements in $\mathcal{G}(n, m)$ are characterized in Section~\ref{s2.5}. The maximal elements in the subposet $\mathcal{T}(n, m)$ of $\mathcal{G}(n, m)$ induced on the mixed trees are determined in Section~\ref{s2.7}. Theorem~\ref{upr} will be proven in Section~\ref{s3}. Theorem~\ref{lwr} is essentially a consequence of \cite{nprs:17}. We mention this in Section~\ref{s4}. \section{Preliminaries}\label{s2} To compare the $A_\alpha$-spectral radii of mixed graphs, we first introduced a useful tool called the Kelmans transformation. The Kelmans transformation, of an undirected graph, or called graph compression, was defined by A.K. Kelmans in 1981\cite{k:81}. The authors Kao and Weng have generalized it into a matrix version in \cite{kw:21}, and here we will further generalize the transformation to fit the $A_\alpha$ matrix. Let $C=(c_{ij})$ be a nonnegative square matrix of order $n$. Fix a $2$-subset $\{a, b\}$ of $[n]$, and assume that $c_{ab}=c_{ba}$. Choose $k$ such that \begin{equation}\label{e1} \max(0,c_{bb}-c_{aa})\leq k\leq c_{bb}\end{equation} and for $i\in [n]-\{a, b\}$, choose $t_i$ and $s_i$ such that \begin{equation}\label{e2} \max(0, c_{ib}-c_{ia})\leq t_i\leq c_{ib}, \quad \max(0, c_{bi}-c_{ai})\leq s_i\leq c_{bi}. \end{equation} We define a new matrix $C_b^a=C_b^a(t_i; s_i; k)$ of order $n$ from $C$ by shifting the portion $k$ from $c_{bb}$ to $c_{aa}$, the portion $t_i$ of $c_{ib}$ to $c_{ia}$ and the portion $s_i$ of $c_{bi}$ to $c_{ai}$ such that in the new matrix $C_b^a=(c'_{ij})$ we have $c'_{aa}\geq c'_{bb}$, $c'_{ia}\geq c'_{ib}$, and $c'_{ai}\geq c'_{bi}$, for all $i\in [n]-\{a, b\}$. The following is an illustration of $C_b^a$: $$C_b^a=~~ \kbordermatrix{~ & & j & a & b\\ &~~& & & \\ i && c_{ij} & c_{ia}+t_i & c_{ib}-t_i \\ && & & \\ a && c_{aj}+s_j & c_{aa}+k & c_{ab} \\ b && c_{bj}-s_j &c_{ba} & c_{bb}-k \\ }\qquad \left\{ \begin{array}{l} i, j\in [n]-\{a, b\}, \\ c_{ab}=c_{ba}, \\ \max(0, c_{ib}-c_{ia})\leq t_i\leq c_{ib}, \\ \max(0, c_{bj}-c_{aj})\leq s_j\leq c_{bj}, \\ \max(0,c_{bb}-c_{aa})\leq k\leq c_{bb}. \end{array} \right.$$ \noindent Formally, the matrix $C_b^a=(c'_{ij})$ is defined from $C=(c_{ij})$ by setting \begin{equation}\label{C'}c'_{ij}=\left\{ \begin{array}{ll} c_{ij}, & \hbox{if $i, j\in [n]-\{a, b\}$ or $(i, j)\in \{(a, b), (b, a)\}$;} \\ c_{ia}+t_i, & \hbox{if $j=a$ and $i\in [n]-\{a, b\}$;} \\ c_{ib}-t_i, & \hbox{if $j=b$ and $i\in [n]-\{a, b\}$;} \\ c_{aj}+s_j, & \hbox{if $i=a$ and $j\in [n]-\{a, b\}$;} \\ c_{bj}-s_j, & \hbox{if $i=b$ and $j\in [n]-\{a, b\}$;} \\ c_{aa}+k, & \hbox{if $i=j=a$;} \\ c_{bb}-k, & \hbox{if $i=j=b$.} \\ \end{array} \right. \end{equation} The matrix $C_b^a$ is referred to as the {\it Kelmans transformation of $C$ from $b$ to $a$ with respect to $(t_i;s_i;k)$}. In the above setting, if $C=(c_{ij})$ is the adjacency matrix of an undirected graph of order $n$ and assume that $C_b^a$ is also a symmetric binary matrix, then by the assumptions in (\ref{e1})-(\ref{e2}), $t_i=\max(0, c_{ib}-c_{ia})=\max(0, c_{bi}-c_{ai})=s_i\in \{0, 1\}$ and $k=0$ are uniquely determined from $C$. In this situation, we don't need to mention $(t_i;s_j;k)$ and the Kelmans transformation $C_b^a$ of $C$ from $b$ to $a$ is the adjacency matrix of the graph obtained by the Kelmans transformation of an undirected graph defined by A.K. Kelmans \cite{k:81}. P. 
Csikv\'{a}ri proved that the largest real eigenvalues of adjacency matrices will not be decreased after a Kelmans transformation of an undirected graph \cite{c:09}. The following theorem is a generalization of this result to a nonnegative matrix which is not necessary to be symmetric. \begin{Theorem}\label{thm1}\cite{kw:21} Let $C=(c_{ij})$ denote a nonnegative square matrix of order $n$ such that $c_{ab}=c_{ba}$ for some $1\leq a, b\leq n$. Choose $k, t_i, s_i$ for $i\in [n]-\{a, b\}$ that satisfy (\ref{e1}),(\ref{e2}). Let $C_b^a=C_b^a(t_i; s_i; k)$ be the Kelmans transformation from $b$ to $a$ with respect to $(t_i;s_j;k)$. Then $\rho(C)\leq \rho(C_b^a).$ \qed \end{Theorem} It worths mentioning that Theorem~\ref{thm1} appearing in \cite{kw:21} has the additional assumption $c_{aa}=c_{bb}$. The interested reader might trace the proof in \cite{kw:21} and find that the same proof works fine for this slightly more general situation here. As in the case of undirected graph, if $C$ in Theorem~\ref{thm1} is the adjacency matrix of a mixed graph $G$ and assume that $C_b^a$ is also an adjacency matrix of some mixed graph, then $t_i, s_i\in \{0, 1\}$ and $k=0$ are uniquely determined from $C$. We use $G_b^a$ to denote the mixed graph whose adjacency matrix is $C_b^a$ and called $G_b^a$ the {\it Kelmans transformation} of mixed graph $G$ from $b$ to $a$. Notice that when the notation $G_b^a$ appears, we always assume that $a, b\in V(G)$ are distinct and have no arc, i.e. $\vv{ab}\notin E(G)$ and $\vv{ba}\notin E(G)$. Figure~\ref{fig1} shows how the Kelmans transformation on mixed graph works. \begin{figure} \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm] \draw [stealth-](0.05,9.75)-- (0.95,5.25); \draw [-stealth](0.15,9.75)-- (2.85,5.25); \draw (0.2,9.75)-- (3.8,5.25); \draw [stealth-](0.25,9.75)-- (4.75,5.25); \draw (0.35,9.75)-- (6.65,5.25); \draw (7.6,5.25)-- (0.4,9.75); \draw [-stealth](0.4,5.25)-- (7.60,9.75); \draw [stealth-](1.35,5.25)-- (7.65,9.75); \draw [stealth-](7.05,5.25)-- (7.95,9.75); \draw [-stealth](8.,5.25)-- (8.,9.75); \draw [-stealth] (0.,9.75) -- (0.,5.25); \draw [fill=black] (0.,10.) circle (1.5pt); \draw [fill=black] (0.,5.) circle (1.5pt); \draw [fill=black] (1.,5.) circle (1.5pt); \draw [fill=black] (3.,5.) circle (1.5pt); \draw [fill=black] (4.,5.) circle (1.5pt); \draw [fill=black] (5.,5.) circle (1.5pt); \draw [fill=black] (7.,5.) circle (1.5pt); \draw [fill=black] (8.,5.) circle (1.5pt); \draw [fill=black] (8.,10.) circle (1.5pt); \draw [-stealth](19.75,9.75)-- (15.25,5.25); \draw (19.8,9.75)-- (16.2,5.25); \draw [stealth-](19.85,9.75)-- (17.15,5.25); \draw [-stealth](12.35,9.75)-- (18.65,5.25); \draw [-stealth](19.6,5.25)-- (12.4,9.75); \draw (12.4,5.25)-- (19.60,9.75); \draw (13.35,5.25)-- (19.65,9.75); \draw (19.05,5.25)-- (19.95,9.75); \draw (20.,5.25)-- (20.,9.75); \draw [fill=black] (12.,10.) circle (1.5pt); \draw [fill=black] (12.,5.) circle (1.5pt); \draw [fill=black] (13.,5.) circle (1.5pt); \draw [fill=black] (15.,5.) circle (1.5pt); \draw [fill=black] (16.,5.) circle (1.5pt); \draw [fill=black] (17.,5.) circle (1.5pt); \draw [fill=black] (19.,5.) circle (1.5pt); \draw [fill=black] (20.,5.) circle (1.5pt); \draw [fill=black] (20.,10.) 
circle (1.5pt); \draw (0,10.5) node{$b$}; \draw (8,10.5) node{$a$}; \draw (12,10.5) node{$b$}; \draw (20,10.5) node{$a$}; \draw (4,4) node{$G$}; \draw (16,4) node{$G_b^a$}; \end{tikzpicture} \caption{The Kelmans transformation on a mixed graph $G$.} \label{fig1} \end{figure} For a mixed graph $G$, let $N^+_{G}(u):=\{v \colon \vv{uv}\in E(G)~{\rm or~} uv\in E(G)\}$ be the set of {\it out-neighbors} of $u$, $N^-_{G}(u):=\{v \colon \vv{vu}\in E(G)~{\rm or~} uv\in E(G)\}$ be the set of {\it in-neighbors} of $u$, and $N_G(u):=N^+_{G}(u)\cup N^-_{G}(u)$ be the set of {\it neighbors} of $u$. The number $d^+_G(u):=|N^+_{G}(u)|$ is called the {\it out-degree} of $u$ in $G$, and the number $d_G(u):=|N^+_{G}(u)|+|N^-_{G}(u)|$ is called the {\it degree} of $u$ in $G$. The sequence $d(G):=(d_G(u))_{u\in V(G)}$ in descending order is called the {\it degree sequence} of $G$. In the rest of this paper, degree sequences are compared in dictionary order, that is, $(a_1,a_2,\ldots,a_n)>(b_1,b_2,\ldots,b_n)$ if for the minimum $i$ with $a_i\not=b_i$, we have $a_i>b_i$. Part~(i) of the following lemma generalizes its undirected graph version \cite{k:81} to mixed graphs. \begin{Lemma}\label{l2.2} Let $G$ be a mixed graph and let distinct $a, b\in V(G)$ have no arc. Then the following (i)-(ii) hold. \begin{enumerate} \item[(i)] The involution $f:V(G_b^a)\rightarrow V(G_a^b)$ defined by $$f(x)=\left\{ \begin{array}{ll} a, & \hbox{if $x=b$;} \\ b, & \hbox{if $x=a$;} \\ x, & \hbox{otherwise} \end{array} \right.$$ is a graph isomorphism from $G_b^a$ to $G_a^b$. \item[(ii)] In dictionary order, $d(G_b^a)\geq d(G)$. Moreover, the following (a)-(c) are equivalent. \begin{enumerate} \item[(a)] $d(G_b^a)=d(G)$; \item[(b)] $G$ is isomorphic to $G_b^a$; \item[(c)] $N^+_G(a)-\{b\}\subseteq N^+_G(b)-\{a\}$ and $N^-_G(a)-\{b\}\subseteq N^-_G(b)-\{a\}$; or $N^+_G(b)-\{a\}\subseteq N^+_G(a)-\{b\}$ and $N^-_G(b)-\{a\}\subseteq N^-_G(a)-\{b\}$. \end{enumerate} \end{enumerate} \end{Lemma} \begin{proof} By the assumption, the two vertices $a, b$ are either joined by an undirected edge or not adjacent at all. Excluding $a, b$ themselves, we have the following three observations on neighbor sets from the definition of the Kelmans transformation of $G$ from $b$ to $a$. (1) the set of out-neighbors (resp. in-neighbors) of $a$ in $G_b^a$ is the union of the set of out-neighbors (resp. in-neighbors) of $a$ in $G$ and the set of out-neighbors (resp. in-neighbors) of $b$ in $G$; (2) the set of out-neighbors (resp. in-neighbors) of $b$ in $G_b^a$ is the intersection of the set of out-neighbors (resp. in-neighbors) of $a$ in $G$ and the set of out-neighbors (resp. in-neighbors) of $b$ in $G$; (3) the set of out-neighbors (resp. in-neighbors) of $x\not=a, b$ in $G_b^a$ is the same as that in $G$. From the above three observations, we find that the vertices $a, b, x$ in $G_b^a$ play the roles of $b, a, x$, respectively, in $G_a^b$. This proves (i). \medskip \noindent (ii) In the proof of (i), we also have $d_G(x)=d_{G_b^a}(x)$ for $x\in V(G)-\{a, b\}$ and in dictionary order $(d_{G_b^a}(a), d_{G_b^a}(b))\geq (\max(d_G(a), d_G(b)), \min(d_G(a), d_G(b)))$, together implying $d(G_b^a)\geq d(G)$. Next we prove that (a), (b) and (c) are equivalent. \medskip \noindent ((b) $\Rightarrow$ (a)) This is clear. \medskip \noindent ((a) $\Rightarrow$ (c)) Suppose $d(G_b^a)=d(G)$. From the proof of (ii) above, we have $\{d_G(a), d_G(b)\}=\{d_{G_b^a}(a), d_{G_b^a}(b)\}$.
If $d_G(a)=d_{G_b^a}(b)$ then $d_G(b)=d_{G_b^a}(a)\geq d_{G_b^a}(b)=d_G(a)$, which implies $N^+_G(a)-\{b\}\subseteq N^+_G(b)-\{a\}$ and $N^-_G(a)-\{b\}\subseteq N^-_G(b)-\{a\}$. If $d_G(a)=d_{G_b^a}(a)$ then $d_G(b)=d_{G_b^a}(b)$, which implies $N^+_G(b)-\{a\}\subseteq N^+_G(a)-\{b\}$ and $N^-_G(b)-\{a\}\subseteq N^-_G(a)-\{b\}$. \medskip \noindent ((c) $\Rightarrow$ (b)) If $N^+_G(a)-\{b\}\subseteq N^+_G(b)-\{a\}$ and $N^-_G(a)-\{b\}\subseteq N^-_G(b)-\{a\}$ then $G=G_a^b$ and the latter is isomorphic to $G_b^a$ by (i). If $N^+_G(b)-\{a\}\subseteq N^+_G(a)-\{b\}$ and $N^-_G(b)-\{a\}\subseteq N^-_G(a)-\{b\}$ then $G=G_b^a$. \end{proof} For a square matrix $M$, let ${\rm char}(M):={\rm det}(\lambda I-M)$ denote the {\it characteristic polynomial} of $M$. The following lemma is immediate from the definition of the characteristic polynomial of $M$. \begin{Lemma}\label{l2.6} For an $n\times n$ nonnegative matrix $M$, if $$M=\begin{pmatrix} M_1& M_2\\ 0& M_3 \end{pmatrix}\quad {\rm ~or~}\quad \begin{pmatrix} M_1& 0\\ M_2 & M_3 \end{pmatrix}$$ where $M_1,M_3$ are square matrices, then ${\rm char}(M)={\rm char}(M_1)\cdot {\rm char}(M_3)$.\qed \end{Lemma} For an $n\times n$ matrix $M$ and a partition $\Pi=\{\pi_1,\pi_2,\ldots,\pi_\ell\}$ of $[n]$, the $\ell\times\ell$ matrix $\Pi(M)=(m'_{ij})$, where $$m'_{ab}=\frac{1}{|\pi_a|}\sum_{i\in\pi_a,j\in\pi_b}m_{ij}\qquad (1\leq a, b\leq \ell),$$ is called the {\it quotient matrix} of $M$ with respect to $\Pi$. Furthermore, if for all $1\leq a,b\leq\ell$ and $i\in\pi_a$, $\sum_{j\in\pi_b}m_{ij}=m'_{ab}$, then $\Pi(M)$ is called the {\it equitable quotient matrix} of $M$ with respect to $\Pi$. The following well-known lemma is useful for calculating the spectral radius. See \cite{ap:18,clw:22} for recent proofs. \begin{Lemma}\label{equ_quo} If $\Pi(M)$ is the equitable quotient matrix of a nonnegative matrix $M$, then $\rho(M)=\rho(\Pi(M))$. \qed \end{Lemma} The following is a well-known consequence of the Perron–Frobenius theorem \cite{bh:12}. \begin{Lemma}\label{perron} If $N$ is a nonnegative square matrix and $M$ is a nonnegative matrix of the same size with $M\leq N$, or $M$ is a nonnegative submatrix of $N$, then $\rho(M)\leq \rho(N)$. \qed \end{Lemma} \section{The poset $\mathcal{G}(n, m)$ of mixed graphs}\label{s2.5} For a mixed graph $G$ of order $n$ and size $m$, let $[G]$ denote the set of mixed graphs that are isomorphic to $G$. Let \begin{equation}\label{iso} \mathcal{G}(n, m):=\{ [G]~\colon~G~\hbox{is a mixed graph of order $n$ and size $m$}\}. \end{equation} We will define a reflexive and transitive relation $\leq$ in $\mathcal{G}(n, m)$ as follows. \begin{Definition} Let $\leq $ be the relation in $\mathcal{G}(n, m)$ such that for all $[G], [H]\in \mathcal{G}(n, m)$, $[G]\leq [H]$ if and only if $H$ is isomorphic to $G$, or $H$ is isomorphic to a graph which is obtained from $G$ by a finite sequence of Kelmans transformations. \end{Definition} \begin{Lemma}\label{l2.3} $(\mathcal{G}(n, m), \leq)$ is a partially ordered set (poset). \end{Lemma} \begin{proof} The relation $\leq$ is reflexive and transitive from its definition, so we only need to prove the anti-symmetric property. Suppose $[G]\leq [H]$ and $[H]\leq [G]$, where $[G], [H]\in \mathcal{G}(n, m)$. Then $d(G)\leq d(H)\leq d(G)$ by Lemma~\ref{l2.2}(ii). Hence $d(G)=d(H)$. By Lemma~\ref{l2.2}(ii)(a)$\Rightarrow$(b), we have $[G]=[H]$.
\end{proof} \begin{Lemma}\label{l2.4} Let $\alpha\in [0, 1]$, and let $[G]\in \mathcal{G}(n, m)$ with distinct vertices $a, b\in V(G)$ having no arc, adjacency matrix $A=(c_{ij})$ and $A_\alpha$ matrix $A_\alpha(G)$ of $G$. Set $k:=\alpha |N^+_G(b)-N^+_G(a)|$, $t_i=(1-\alpha) \max(0, c_{ib}-c_{ia})$ and $s_i=(1-\alpha) \max(0, c_{bi}-c_{ai})$ for $i\in V(G)-\{a, b\}$. Then the Kelmans transformation matrix $A_\alpha(G)_b^a$ of $A_\alpha(G)$ from $b$ to $a$ with respect to $(t_i;s_i;k)$ is the $A_\alpha$ matrix $A_\alpha(G_b^a)$ of $G_b^a$, i.e., $$A_\alpha(G)_b^a=A_\alpha(G_b^a).$$ \end{Lemma} \begin{proof} We only need to check that the $ij$ entries in the matrices $A_\alpha(G)_b^a$ and $A_\alpha(G_b^a)$ are equal when at least one of $i, j$ is in $\{a, b\}$. Indeed they are equal from the setting, listed in the order $aa$, $bb$, $ia$, $ib$, $aj$ and $bj$ below: \begin{align*} \alpha d^+_G(a)+k=&\alpha d^+_{G_b^a}(a),\\ \alpha d^+_G(b)-k=&\alpha d^+_{G_b^a}(b),\\ (1-\alpha)c_{ia}+t_i =&(1-\alpha)(c_{ia}+\max(0, c_{ib}-c_{ia})),\\ (1-\alpha)c_{ib}-t_i=&(1-\alpha)(c_{ib}-\max(0, c_{ib}-c_{ia})),\\ (1-\alpha)c_{aj}+s_j=&(1-\alpha)(c_{aj}+\max(0, c_{bj}-c_{aj})),\\ (1-\alpha)c_{bj}-s_j=&(1-\alpha)(c_{bj}-\max(0, c_{bj}-c_{aj})), \end{align*} where $i, j\in V(G)-\{a, b\}$. \end{proof} \begin{Proposition}\label{p2.5} If $\alpha\in [0, 1]$ and $[G], [H]\in \mathcal{G}(n, m)$ are such that $[G]\leq [H]$, then $\rho_\alpha(G)\leq \rho_\alpha(H).$ \end{Proposition} \begin{proof} We may assume $H=G_b^a$ by Lemma~\ref{l2.3}. Applying Theorem~\ref{thm1} and Lemma~\ref{l2.4}, we have $$\rho_\alpha(G)=\rho(A_\alpha(G))\leq \rho(A_\alpha(G)_b^a)=\rho(A_\alpha(G_b^a))=\rho_\alpha(H).$$ \end{proof} \section{The poset $\mathcal{T}(n, m)$ of mixed trees}\label{s2.7} Let $n, m\in \mathbb{N}$ with $n-1\leq m\leq 2n-2$, and let $$\mathcal{T}(n, m):=\{[T]\in \mathcal{G}(n, m)\colon T \hbox{~is a mixed tree}\}.$$ The set $\mathcal{T}(n, m)$ is not closed under Kelmans transformations. We need the following lemma. \begin{Lemma}\label{tree} Let $[T]\in \mathcal{T}(n, m)$ with distinct $a, b\in V(T)$ having no arc. Then $[T_b^a]\in \mathcal{T}(n, m)$ if and only if $ab\in E(T)$, or $\partial(a, b)=2$ and the unique vertex $x\in V(T)$ with $\partial(a,x)=\partial(x, b)=1$ satisfies one of the following: (i) $ax\in E(T)$ is an undirected edge, (ii) $xb\in E(T)$ is an undirected edge, (iii) $\overrightarrow{ax}, \overrightarrow{bx}\in E(T)$ are arcs, or (iv) $\overrightarrow{xa}, \overrightarrow{xb}\in E(T)$ are arcs. \end{Lemma} \begin{proof} The assumption implies $\partial(a, b)\geq 1$, and if $\partial(a, b)=1$ then $ab\in E(T)$ is an undirected edge. If $\partial(a, b)=2$ and the necessary condition about $x$ fails, then $a,b$ belong to different components of the underlying graph of $T_b^a$, so $T_b^a$ is not a mixed tree. If $\partial(a, b)\geq 3$ then the underlying graph of $T_b^a$ contains a cycle of order $\partial(a, b)$, so $T_b^a$ is not a mixed tree. On the other hand, it is straightforward to observe that $[T_b^a]\in \mathcal{T}(n, m)$ when $a,b$ satisfy the conditions. \end{proof} We use the notation $a-b$, $a-x\rightarrow b$, $a-x\leftarrow b$, $a\leftarrow x-b$, $a\rightarrow x-b$, $a\rightarrow x\leftarrow b$ and $a\leftarrow x \rightarrow b$ to denote the seven situations in the condition of Lemma~\ref{tree}. We then give $\mathcal{T}(n, m)$ a poset structure by extending $[T]\leq [T_b^a]$ for any $[T]\in \mathcal{T}(n, m)$ and any $a, b\in V(T)$ that satisfy one of the seven situations.
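As a concrete illustration of Proposition~\ref{p2.5}, the following minimal Python sketch (the example mixed graph and the choice of $a, b$ are ours, for illustration only) performs a Kelmans transformation directly on an adjacency matrix and checks that the $A_\alpha$-spectral radius does not decrease; note that, in line with Lemma~\ref{tree}, the transformed mixed graph need not remain a mixed tree.

\begin{verbatim}
import numpy as np

def kelmans(A, a, b):
    """Kelmans transformation of an adjacency matrix from b to a (assumes c_ab = c_ba)."""
    B = A.copy()
    for i in range(A.shape[0]):
        if i in (a, b):
            continue
        B[i, a], B[i, b] = max(A[i, a], A[i, b]), min(A[i, a], A[i, b])  # column side
        B[a, i], B[b, i] = max(A[a, i], A[b, i]), min(A[a, i], A[b, i])  # row side
    return B

def rho_alpha(A, alpha):
    A_alpha = alpha * np.diag(A.sum(axis=1)) + (1 - alpha) * A
    return max(np.linalg.eigvals(A_alpha).real)   # Perron root of a nonnegative matrix

# example mixed tree: arc 0 -> 1, undirected edge 1 -- 2, arc 2 -> 3
A = np.zeros((4, 4))
A[0, 1] = 1.0
A[1, 2] = A[2, 1] = 1.0
A[2, 3] = 1.0
for alpha in (0.0, 0.3, 0.7, 1.0):
    # Kelmans transformation from b = 3 to a = 0; the transformed value is never smaller
    print(alpha, rho_alpha(A, alpha), rho_alpha(kelmans(A, 0, 3), alpha))
\end{verbatim}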
\begin{Proposition}\label{p3.2} Let $[T]\in\mathcal{T}(n,m).$ Then $[T]$ is a maximal element in $\mathcal{T}(n,m)$ if and only if $T$ is a mixed star, or $T$ is a mixed tree without undirected edges (i.e. $m=n-1$) and whenever the subgraph $a\rightarrow x \leftarrow b$ or $a\leftarrow x \rightarrow b$ appears in $T$, one of $a$ and $b$ is a leaf. \end{Proposition} \begin{proof} $(\Leftarrow)$ If $T$ is a mixed star and one of $a-b$, $a-x\rightarrow b$, $a-x\leftarrow b$, $a\leftarrow x-b$, $a\rightarrow x-b$, $a\rightarrow x\leftarrow b$ and $a\leftarrow x \rightarrow b$ appears in $T$, then one of $a$ and $b$ is a leaf, so Lemma~\ref{l2.2}(ii)(c) with $G=T$ holds, which implies that $T_b^a$ is isomorphic to $T$. If $T$ is a mixed tree without undirected edges, then we only need to consider $a\rightarrow x\leftarrow b$ and $a\leftarrow x \rightarrow b$ in $T$. By the assumption, $a$ or $b$ is a leaf, and for the same reason as above, $T_b^a$ is isomorphic to $T$. Hence in both cases, $[T]$ is a maximal element in $\mathcal{T}(n,m)$. \medskip \noindent $(\Rightarrow)$ Let $[T]$ be a maximal element in $\mathcal{T}(n, m)$ such that $T$ is not a mixed star, so $T$ has diameter at least $3$. Keep in mind that the maximality of $[T]$ implies that Lemma~\ref{l2.2}(ii)(c) with $G=T$ holds for any $a, b\in V(T)$ satisfying one of the conditions $a-b$, $a-x\rightarrow b$, $a-x\leftarrow b$, $a\leftarrow x-b$, $a\rightarrow x-b$, $a\rightarrow x\leftarrow b$ or $a\leftarrow x \rightarrow b$ of Lemma~\ref{tree}; thus at least one of $a$ and $b$ is a leaf. To exclude the situations $a-b$, $a-x\rightarrow b$, $a-x\leftarrow b$, $a\leftarrow x-b$ and $a\rightarrow x-b$, suppose on the contrary that $T$ contains an undirected edge $uv$ with a leaf $u$. Since the diameter of $T$ is at least $3$, there are two other vertices $y,z\in V(T)$ such that $\partial(v, y)=\partial(y, z)=1$ and $\partial(u, z)=3$. Since $v, y$ are not leaves in $T$, they are joined by an arc (an undirected edge $vy$ would force one of $v, y$ to be a leaf), say $v\rightarrow y$ (the case $v\leftarrow y$ is similar), in $E(T)$. Hence $T_y^u\in \mathcal{T}(n, m)$ is well-defined, $v\in (N^+_T(u)-\{y\})-(N^+_T(y)-\{u\})$, and $z\in N_T(y)-N_T(u)$, a contradiction to the maximality of $[T]$. Thus $T$ has no undirected edges. \end{proof} \section{The upper bound of $\rho_\alpha(T)$}\label{s3} If an arc in a mixed tree $T$ is deleted, then we have two mixed trees. Thus if the arcs in a mixed tree $T$ of order $n$ and size $m$ are all removed, then what remains is an undirected graph without cycles, with $2n-m-1$ components. We call these $2n-m-1$ components the {\it components} of $T$. \begin{Lemma}\label{char} If $\alpha\in [0, 1]$, $[T]\in \mathcal{T}(n, m)$, and $T$ has components $C_1$, $C_2$, $\ldots$, $C_t$, then $${\rm char} (A_\alpha(T))=\prod_{i\in [t]} {\rm char} (A_\alpha(T)[C_i]),$$ where $A_\alpha(T)[C_i]$ is the principal submatrix of $A_\alpha(T)$ restricted to $C_i$. \end{Lemma} \begin{proof} If $\overrightarrow{ij}\in E(T)$ is deleted to obtain two mixed trees with vertex sets $V$ and $W$, then besides $\overrightarrow{ij}$ there are no arcs or undirected edges between a vertex in $V$ and a vertex in $W$. With $M=A_\alpha(T)$, $M_1=M[V]$ and $M_3=M[W]$, we find that $M$ satisfies the assumption of Lemma~\ref{l2.6}. Hence ${\rm char}(M)={\rm char}(M_1)\cdot {\rm char}(M_3)$. We obtain the lemma by applying this process to $M_1$ and $M_3$, and repeating until each remaining matrix corresponds to a component of $T$.
\end{proof} Note that $A_\alpha(T)[C_i]$ in Lemma~\ref{char} is not the $A_\alpha$ matrix of the component $C_i$ in $T$. \begin{Corollary}\label{n-1} If $\alpha\in[0,1]$ and $[T]\in\mathcal{T}(n,n-1)$, then $${\rm char} (A_\alpha(T))=\prod_{i\in [n]}(\lambda-\alpha d_i^+).$$ \end{Corollary} \begin{proof} For $[T]\in\mathcal{T}(n,n-1)$, each vertex forms a component. Since $A_\alpha(T)[\{i\}]$ is an $1\times 1$ matrix with entries $\alpha d_i^+$, the result is straightforward from Lemma~\ref{char}. \end{proof} \begin{Proposition}\label{p3.3} Let $S$ be mixed star of order $n$, size $m$ and maximum out-degree $m-n+k+1$ for some $0\leq k\leq 2n-m-2$. Then for $\alpha\in [0, 1]$, the $A_\alpha$-spectral radius $\rho_\alpha(S)$ of $S$ is the maximal root of the following quadratic polynomial in $\lambda$: \begin{equation}\label{e7} (\lambda-\alpha)(\lambda-\alpha(m-n+k+1))-(1-\alpha)^2(m-n+1). \end{equation} \end{Proposition} \begin{proof} Note that there are $m-n+1$ undirected edges in $S$. For convenience, assume that $1$ has the maximum degree $n-1$, $N^+_S(1)=[m-n+k+2]-\{1\}$ and $N^-_S(1)=([m-n+2]-\{1\})\cup \{m-n+k+3, m-n+k+4, \ldots, n\}$. \begin{figure} \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \draw [stealth-](-0.85,9.9)-- (1.85,8.1); \draw [stealth-](-0.85,8.95)-- (1.85,8.05); \draw [stealth-](-0.85,8.)-- (1.85,8.); \draw [stealth-](-0.85,7.05)-- (1.85,7.95); \draw (0.575,10.85)-- (1.925,8.15); \draw (1.525,10.85)-- (1.975,8.15); \draw (3.425,10.85)-- (2.075,8.15); \draw (2.475,10.85)-- (2.025,8.15); \draw [stealth-](2.15,8.1)-- (4.85,9.9); \draw [stealth-](2.15,8.05)-- (4.85,8.95); \draw [stealth-](2.15,8.)-- (4.85,8.); \draw [stealth-](2.15,7.95)-- (4.85,7.05); \draw [rotate around={90.:(-1.,8.5)},line width=1.pt,dash pattern=on 5pt off 5pt] (-1.,8.5) ellipse (1.825140769936451cm and 1.0397782600555754cm); \draw [rotate around={1.145762838175079:(1.94,11.01)},line width=1.pt,dash pattern=on 5pt off 5pt] (1.94,11.01) ellipse (2.0607414666183628cm and 1.412712069828955cm); \draw [rotate around={90.:(5.,8.5)},line width=1.pt,dash pattern=on 5pt off 5pt] (5.,8.5) ellipse (1.825140769936462cm and 1.0397782600555816cm); \draw [line width=1.pt,dash pattern=on 5pt off 5pt] (2.,8.) circle (1.cm); \begin{scriptsize} \draw [fill=black] (-1.,10.) circle (1.5pt); \draw [fill=black] (2.,8.) circle (1.5pt); \draw [fill=black] (-1.,9.) circle (1.5pt); \draw [fill=black] (-1.,8.) circle (1.5pt); \draw [fill=black] (-1.,7.) circle (1.5pt); \draw [fill=black] (0.5,11.) circle (1.5pt); \draw [fill=black] (1.5,11.) circle (1.5pt); \draw [fill=black] (3.5,11.) circle (1.5pt); \draw [fill=black] (2.5,11.) circle (1.5pt); \draw [fill=black] (5.,10.) circle (1.5pt); \draw [fill=black] (5.,9.) circle (1.5pt); \draw [fill=black] (5.,8.) circle (1.5pt); \draw [fill=black] (5.,7.) circle (1.5pt); \end{scriptsize} \draw (2,7.5) node{$\pi_1$}; \draw (2,11.5) node{$\pi_2$}; \draw (5.5,8.5) node{$\pi_4$}; \draw (-1.5,8.5) node{$\pi_3$}; \end{tikzpicture} \caption{The partition $\Pi$ of the vertices of a mixed star.}\label{fig2} \end{figure} Set $\pi_1=\{1\},$ $\pi_2=\{2, 3, \ldots, m-n+2\},$ $\pi_3=\{m-n+3, m-n+4, \ldots, m-n+k+2\},$ and $\pi_4=[n]-\pi_1-\pi_2-\pi_3$ as illustrated in \ref{fig2}. 
With respect to the partition $\Pi=\{\pi_1, \pi_2, \pi_3, \pi_4\}$ of $[n]$, the adjacency matrix $A$ and the diagonal out-degree matrix $D^+$ of $S$ have equitable quotient matrices $$\Pi(A)=\begin{pmatrix} 0 & m-n+1 & k & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{pmatrix}\mbox{ and }\Pi(D^+)=\begin{pmatrix} m-n+k+1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix},$$ respectively, which implies that the $A_\alpha$ matrix of $S$ has equitable quotient $$\Pi(A_\alpha)=\begin{pmatrix} \alpha(m-n+k+1) & (1-\alpha)(m-n+1) & (1-\alpha)k & 0\\ 1-\alpha & \alpha & 0 & 0\\ 0 & 0 & 0 & 0\\ 1-\alpha & 0 & 0 & \alpha \end{pmatrix}.$$ Since the characteristic polynomial of $\Pi(A_\alpha)$ is $$\lambda(\lambda-\alpha)((\lambda-\alpha)(\lambda-\alpha(m-n+k+1))-(1-\alpha)^2(m-n+1)),$$ and the largest zero of (\ref{e7}) is at least $\alpha$, we complete the proof. \end{proof} \bigskip \noindent {\bf Proof of Theorem~\ref{upr}.} By Proposition~\ref{p2.5}, it suffices to show that for each maximal element $[T]\in \mathcal{T}(n, m)$ characterized in Proposition~\ref{p3.2}, $\rho_\alpha(T)$ is at most the upper bound appearing in Theorem~\ref{upr}. Suppose $T=S$ is a mixed star with maximum out-degree $m-n+k+1$. Since the largest root of the quadratic polynomial in (\ref{e7}) increases as $k$ increases, we may assume $k=2n-m-2$, and then (\ref{e7}) becomes $$\lambda^2-\alpha n \lambda+\alpha^2(n-1)-(1-\alpha)^2(m-n+1),$$ whose largest root is the upper bound appearing in Theorem~\ref{upr}. For the remaining mixed trees $[T]\in\mathcal{T}(n,n-1)$, from Corollary~\ref{n-1} we know that the $A_\alpha$ matrix of $T$ has characteristic polynomial $\prod_{i\in[n]}(\lambda-\alpha d_i^+)$, so $\rho_\alpha(T)=\alpha\cdot(\max_{i\in[n]} d_i^+)\leq\alpha (n-1)$, where equality holds when $T$ is the mixed star in which all $n-1$ leaves are out-neighbors of the center. Moreover, $\alpha (n-1)$ is equal to the upper bound appearing in Theorem~\ref{upr} when $m=n-1$. \qed \section{The lower bound of $\rho_\alpha(T)$}\label{s4} The following theorem was proved in \cite{nprs:17}. \begin{Theorem}\label{tnprs} (\cite{nprs:17}) If $T$ is a tree of order $n$ and $\alpha\in [0, 1]$, then $$\rho_\alpha(T)\geq \rho_\alpha(P_n).$$ Equality holds if and only if $T=P_n$. \qed \end{Theorem} \noindent {\bf Proof of Theorem~\ref{lwr}.} Let $T$ be a mixed tree of order $n$ and size $m$. Then $T$ has $2n-m-1$ components, and there exists a component of order at least $k=\lceil \frac{n}{2n-m-1}\rceil$. If $m=2n-2$ then $k=n$ and $\rho_\alpha(T)\geq \rho_\alpha(P_k)$ by Theorem~\ref{tnprs}, where equality holds when $T=P_n$. For $m<2n-2$, let $C_1$ be a component of $T$ with maximum order $t$. Then $t\geq k\geq 2$ and $A_\alpha(T)[C_1]\geq A_\alpha(C_1)$. Hence by Lemma~\ref{perron}, Lemma~\ref{char} and Theorem~\ref{tnprs}, $$\rho_\alpha(T)\geq \rho(A_\alpha(T)[C_1])\geq\rho(A_\alpha(C_1))= \rho_\alpha(P_t)\geq \rho_\alpha(P_k).$$ \qed \bigskip {\bf Acknowledgements.} This research is supported by the Ministry of Science and Technology of Taiwan under the project MOST 109-2115-M-009 -007 -MY2.
\section{Introduction} The noninteracting nature of photons makes them efficient for long-distance communication but inefficient for information processing. The interaction among photons is the cornerstone of their physical implementation in quantum information processing and quantum logic gates \cite{OBrien2007}. Quantum nonlinear optics involving single photons is a recent and active topic of importance for fundamental physics and applications \cite{Chang2014}, e.g., for photonic switches, optical modulation and manipulation, the generation of single photons on demand, nonlinear spectroscopy, memory devices and transistors, with impact on the physical and biological sciences \cite{Phillips2001,Miller2010,Chen2013,Reiserer2015}. On the other hand, in classical nonlinear optics light propagating in a medium can modify the optical properties of the material, e.g., by producing an intensity-dependent refractive index, which requires powerful lasers. Therefore, conventional nonlinear optics is found to be negligible at the level of individual photons \cite{Boyd2008,Agrawal2013}, and hence efficient new mechanisms are needed. Remarkable advances in the search for the realization of strong interactions among single photons have accumulated over the last decades. Cavity quantum electrodynamics (QED) experiments have been among the first to achieve effective photon-photon interactions by enhancing the light-matter coupling, localizing atoms inside high-finesse cavities \cite{Turchette1995,Birnbaum2005,Haroche2006,Reiserer2015}. Cooling and trapping single atoms within a cavity is a complex task; hence cavity QED experiments have been successfully extended to solid-state systems, including quantum dots in semiconductors \cite{Michler2000,Pelton2002,Fushman2008,Lodahl2015} and nitrogen-vacancy centers in diamond \cite{Faraon2011}. Indeed, in a recent successful experiment a deterministic photon-photon quantum gate has been realized for a single atom in an optical resonator \cite{Hacker2016,Welte2018}. Nevertheless, the discreteness of the cavity spectrum gives rise to output photons with narrow bands, in addition to dephasing and decay of the excited electronic states, which limit the quantum nonlinear optical performance. In order to overcome the limitations imposed by confining the photons in a high-quality optical cavity, the search has turned to strong atom-photon coupling in cavity-free environments \cite{Tey2008,Hammerer2010,Volz2014,Prasad2020}. For example, one can use Rydberg atoms in a dense medium \cite{Gorshkov2011,Peyronel2012,Firstenberg2013,Firstenberg2016,Thompson2017}, exploiting the Rydberg blockade phenomenon \cite{Lukin2001} within the electromagnetically induced transparency (EIT) scheme \cite{Harris1990,Fleischhauer2005}. In such systems the significant enhancement of photon-photon interactions is mainly due to the achievement of slow light in an EIT environment with an extremely narrow transparency band \cite{Petrosyan2011}. Slowing and stopping light has been observed in cold and ultracold bosonic gases \cite{Phillips2001,Hau2008,Pritchard2010}. The interaction between photons has been exploited to demonstrate a photon–photon quantum gate, extending the potential of Rydberg systems as a platform for quantum communication and quantum networking \cite{Tiarks2019}.
In parallel, solid-state set-ups of optical fibers \cite{Zhu2007,Thevenaz2008,Douglas2015,Goban2015} and photonic crystals \cite{Russell2006,Baba2008,Eichenfield2009} have received significant interest, as they can be easily integrated into all-optical on-chip platforms. In particular optical fibers can realize tunable delays of optical signals with the possibility of achieving fast and slow light in a comparatively wide bandwidth \cite{Okawachi2005,Song2005,Herraez2006}. The most efficient nonlinear process inside optical fibers is Stimulated Brillouin Scattering (SBS), that is the scattering of optical photons by long lived acoustic phonons commonly induced by electrostriction \cite{Kim2015}. Recent progress in the fabrication of nanoscale waveguides, in which the wavelength of the light becomes larger than the waveguide dimension, achieved a breakthrough in SBS \cite{Pant2011,Shin2013,Eggleton2013}. In this regime the coupling of photons and phonons is significantly enhanced due to radiation pressure dominating over electrostriction \cite{Rakich2012,VanLaer2015a,Zoubi2016} with significant implications for the field of continuum quantum optomechanics \cite{Rakich2016,Zoubi2018,Zoubi2019,Zoubi2020}. We have explored the possibility of achieving a significant nonlinear phase shift among photons propagating in nanoscale waveguides. The interaction among photons is mediated by vibrational modes and induced through SBS, where an effective photon-photon interaction Hamiltonian is derived \cite{Zoubi2017}. Moreover, we have introduced a configuration for slowing down photons by several orders of magnitude via SBS involving sound waves and pump fields. We extracted the conditions for maintaining vanishing amplitude gain or loss for slowly propagating photons while keeping the influence of thermal phonons to the minimum. In the present paper we search for the possibility of the formation of exotic photon molecules. Such issue has been addressed before in the context of an atomic ensemble with Rydberg blockade where attractive photon-photon interactions can be achieved \cite{Firstenberg2013,Guerreiro2014,Maghrebi2015}. Here we aim to examine the possibility of the formation of two-photon bound states that is induced through the exchange of phonons. To this end we exploit our previous results where effective photon-photon coupling have been derived and that found to be attractive or repulsive \cite{Zoubi2016,Zoubi2017}. We use the tool of contour Green's functions and we derive a hierarchy of equations that can be truncated by the appeal to the T-matrix approximation \cite{Kadanoff1962,Abrikosov1963,Fetter1971,Mahan2000,Stefanucci2013}. The T-matrix approximation allows us to treat the scattering of particles in many-body systems, that is in a medium of interacting particles. The breakdown of the T-matrix approximation, which appears as a singularity in the solution, indicates the formation of bound states. We concentrate in the case of slow photons propagating in one-dimensional nanoscale wires, and we look for the critical temperature at which photon molecules form. The interactions among two counter-propagating photons give rise to an accumulated quantum nonlinear phase, and we show how such phase can be exploited in order to implement photon molecules as quantum logic gates. After presenting the Hamiltonian of interacting photons in section 2, we introduce the contour Green's functions in the complex plane, where the functions allow considering quantum and ensemble averages on equal footing. 
In section 3 we derive the equations of motion and solve them by applying the T-matrix approximation. We treat the one-dimensional case, in which a complex pole appears in the T-matrix, representing the appearance of photon molecules. The photon bound states are discussed in section 4, and the equation of motion of the state amplitude is solved to yield a quantum nonlinear phase that is shown to be useful for achieving quantum logic gates. The conclusion is given in section 5. In the appendix we analyze the properties of two-point correlators in Keldysh space. \section{Interacting photons} We consider optical photons propagating in nanoscale waveguides (see Fig.~1), where the photons can propagate to the left or to the right with effective group velocity $v_e$. In our previous work we introduced a configuration for controlling the group velocity and achieving relatively slow photons by exploiting the coupling between photons and acoustic phonons, assisted by additional pump fields \cite{Zoubi2017}. The photons are shown to propagate without gain or loss along the waveguide, with a linear dispersion $\omega_k=\omega_0+v_ek$ in the appropriate region of wavenumber $k$, where $\omega_0$ appears due to the transverse nanoscale confinement with dimensions in the range of a hundred nanometers (as seen in Fig.~2). Slow photons are necessary for achieving observable nonlinear quantum optical phenomena in waveguides several centimeters long \cite{Zoubi2016}. The photons are found to interact by exchanging optical phonons (vibrational modes); this is the mechanism for achieving photon bound states, which is the main concern of the present paper. The effective photon-photon coupling, $v$, is tunable and can be positive or negative, indicating a repulsive or an attractive interaction among the photons \cite{Zoubi2017}. In typical nanoscale waveguides one can achieve a photon-photon coupling per meter of about $1$~MHz and an effective group velocity of $10^5$~m/s, as we have shown in \cite{Zoubi2017}. \begin{figure} \includegraphics[width=0.4\linewidth]{Fig1} \caption{The nanoscale wire of radius $a$ of several hundred nanometers and length of several centimeters. The fiber is made of a dielectric material with refractive index $n$ larger than that of the surrounding air.} \label{PhotPhonDis} \end{figure} The total Hamiltonian is given by $\hat{H}=\hat{H}_0+\hat{H}_I$. The free-part Hamiltonian in real space is \begin{equation} \hat{H}_0=\int d{\bf x}d{\bf x}' \hat\psi^{\dagger}({\bf x})\langle {\bf x}|\hat{h}|{\bf x}'\rangle\hat\psi({\bf x}'), \end{equation} where $\langle {\bf x}|\hat{h}|{\bf x}'\rangle=\delta({\bf x}-{\bf x}')h({\bf x},-i{\bf \nabla})$, and the interaction-part Hamiltonian is \begin{equation} \hat{H}_I=\frac{1}{2}\int d{\bf x}d{\bf x}'~v({\bf x},{\bf x}')~\hat\psi^{\dagger}({\bf x})\hat\psi^{\dagger}({\bf x}')\hat\psi({\bf x}')\hat\psi({\bf x}), \end{equation} where the interaction potential obeys $v({\bf x},{\bf x}')=v({\bf x}',{\bf x})$. Here $\hat\psi({\bf x})$ and $\hat\psi^{\dagger}({\bf x})$ are the field annihilation and creation operators, respectively. For bosons the operators obey the commutation relations \begin{equation} \left[\hat\psi({\bf x}), \hat\psi({\bf y})\right]=\left[\hat\psi^{\dagger}({\bf x}), \hat\psi^{\dagger}({\bf y})\right]=0,\ \ \ \left[\hat\psi({\bf x}), \hat\psi^{\dagger}({\bf y})\right]=\delta({\bf x}-{\bf y}).
\end{equation} For linear dispersion we get $\langle {\bf x}|\hat{h}|{\bf x}'\rangle=\delta({\bf x}-{\bf x}')(\omega_0-iv_e{\bf \nabla})$, and the local interaction potential is $v({\bf x},{\bf x}')=v\delta({\bf x}-{\bf x}')$. \begin{figure} \includegraphics[width=0.4\linewidth]{Fig2} \caption{The photon dispersion $\omega_k$ as a function of the wavenumber $k$, which includes two branches of photons propagating to the left and to the right along the nanoscale waveguide. Here $\omega_0$ is a minimum frequency that appears due to the transverse confinement.} \label{PhotPhonDis} \end{figure} \subsection{The Contour Green's Functions} The many-particle system of interacting photons can be treated by using the tool of Green's functions \cite{Kadanoff1962,Abrikosov1963,Fetter1971,Mahan2000,Stefanucci2013}. We adopt the unified framework of the contour formalism, which allows us to treat time-dependent problems and statistical averages at finite temperature in thermal equilibrium. The contour n-particle Green's function is defined by \cite{Stefanucci2013} \begin{equation}\label{nGreen} G_n(1,\cdots,n;1',\cdots,n')=\frac{1}{i^n}\frac{ \text{Tr} \left[ {\cal T} \left\{ e^{-i\int_{\gamma}d\bar{z}\hat{H}(\bar{z}) } \hat\psi_H(1)\cdots\hat\psi_H(n)\hat\psi_H^{\dagger}(n')\cdots\hat\psi_H^{\dagger}(1') \right\} \right] } {\text{Tr} \left[ {\cal T} \left\{ e^{-i\int_{\gamma}d\bar{z}\hat{H}(\bar{z}) } \right\}\right] }. \end{equation} The integral is along the contour $\gamma$ that appears in figure (3) in the complex plane, where a general point on the contour is denoted by $z$. The horizontal part contains the forward branch $\gamma_-$ from time $t_0$ up to time $t$, and the backward branch $\gamma_+$ from $t$ back to $t_0$, along the real time axis. Then, a point on branch $\gamma_-$ at time $t'$ is denoted by $z'=t'_-$ and on branch $\gamma_+$ by $z'=t'_+$. The horizontal part is extended to infinity without affecting the result and is called the Schwinger-Keldysh contour. The vertical part $\gamma^M$ of the contour starts at $z_a=t_0$ and ends at $z_b=t_0-i\beta$ along the imaginary axis, with the constraint $z_b-z_a=-i\beta$, where at temperature $T$ we have $\beta=\frac{1}{k_BT}$. The field operators $\hat\psi_H(j)$ and $\hat\psi_H^{\dagger}(j)$ are in the contour Heisenberg picture, where we use the short notation $j\equiv({\bf x}_j,z_j)$. For an operator $\hat{O}(z)$ with explicit time dependence, in the contour Heisenberg picture we have $\hat{O}_H(z)=\hat{U}(z_\text{i},z)\hat{O}(z)\hat{U}(z,z_\text{i})$, where $z_\text{i}$ is the initial point of the contour (and $z_\text{f}$ is the final point). The contour evolution operator, for $z_2$ later than $z_1$, is defined by \begin{equation} \hat{U}(z_2,z_1)={\cal T}\left\{e^{-i\int_{z_1}^{z_2}dz'\hat{H}(z')}\right\}, \end{equation} where the symbol ${\cal T}$ stands for contour time ordering. The result also holds for a time-independent operator $\hat{O}$, but we keep the contour argument of the operator in order to specify its position on the contour, especially under the contour time ordering ${\cal T}$. The n-particle Green's functions obey the boundary condition \begin{equation} G_n(1,\cdots,{\bf x}_k,z_{\text{i}},\cdots,n;1',\cdots,n')=G_n(1,\cdots,{\bf x}_k,z_{\text{f}},\cdots,n;1',\cdots,n'), \end{equation} known as the Kubo-Martin-Schwinger relation. \begin{figure} \includegraphics[width=0.5\linewidth]{Fig3} \caption{The contour $\gamma\equiv\gamma_-\oplus\gamma_+\oplus\gamma^M$ in the complex plane is presented.
The horizontal part along the real axis contains two branches, the forward branch $\gamma_-$ from $t_0$ up to $t$, and the backward branch $\gamma_+$ from $t$ back to $t_0$, where the contour is extended to infinity. The vertical branch $\gamma_M$ is along the imaginary axis from $t_0$ to $t_0-i\beta$.} \label{PhotPhonDis} \end{figure} On the horizontal part of the contour, if $z$ lies on the $\gamma_-$ and $\gamma_+$ branches, in the n-particle Green's function Eq.(\ref{nGreen}) we get the time dependent quantum average. For a physical observable we have $\hat{O}(z'=t'_{\pm})\equiv\hat{O}(t')$, e.g. for the Hamiltonian we have $\hat{H}(z'=t'_{\pm})\equiv\hat{H}(t')$. For the field operators in the Schrodinger picture we have $\hat\psi({\bf x},z'=t_{\pm})\equiv \hat\psi({\bf x})$ and $\hat\psi^{\dagger}({\bf x},z'=t_{\pm})\equiv \hat\psi^{\dagger}({\bf x})$, and in the Heisenberg picture we have $\hat\psi_H({\bf x},z)\rightarrow\hat\psi_H({\bf x},t)$ and $\hat\psi_H^{\dagger}({\bf x},z)\rightarrow\hat\psi_H^{\dagger}({\bf x},t)$. Moreover, concerning our previous Hamiltonian we have $h(z=t_{\pm})=h(t)$ and $v({\bf x},{\bf x}',z=t_{\pm})=v({\bf x},{\bf x}',t)$. If $z$ lies on $\gamma_M$, the vertical part of the contour, in the n-particle Green's function Eq.(\ref{nGreen}) we get the ensemble average at thermal equilibrium. Now $\hat{O}(z)$ is the same at every point, and we use $\hat{O}^M\equiv\hat{O}(z\in\gamma^M)$, where we choose $\hat{O}^M=\hat{O}(t_0)$. For the Hamiltonian we have $\hat{H}^M=\hat{H}(z\in\gamma^M)$, where $\hat{H}^M=\hat{H}(t_0)-\mu \hat{N}$, with the number operator $\hat{N}=\int d{\bf x}~\hat\psi^{\dagger}({\bf x})\hat\psi ({\bf x})$, and $\mu$ is the chemical potential. The field operators on the contour $\gamma_M$ are constant, then we have $\hat\psi({\bf x},z\in\gamma^M)\equiv\hat\psi({\bf x})$ and $\hat\psi^{\dagger}({\bf x},z\in\gamma^M)\equiv\hat\psi^{\dagger}({\bf x})$. In our case of the above Hamiltonian we have $h(z\in\gamma^M)=h^M$, where $h^M=h-\mu$, and $v({\bf x},{\bf x}',z\in\gamma^M)=v^M({\bf x},{\bf x}')$. The one-particle Green's function, $G(1;1')\equiv G_1(1;1')$, equation of motion reads \begin{equation} \left[i\frac{d}{dz_1}-h(1)\right]G(1;1')=\delta(1;1')+ i\int d2~v(1;2)G_2(1,2;1',2^+), \end{equation} and the two-particle Green's function equation of motion is \begin{eqnarray} \left[i\frac{d}{dz_1}-h(1)\right]G_2(1,2;1',2')&=&\delta(1;1')G(2;2')\pm\delta(1;2')G(2;1') \\ \nonumber &+&i\int d3~v(1;3)G_3(1,2,3;1',2',3^+), \end{eqnarray} and so on for higher order Green's functions. We get a system of coupled differential equations that is known as the Martin-Schwinger hierarchy for the Green's functions. The delta function is defined by $\delta(j;k)\equiv \delta(z_j;z_k)\delta({\bf x}_j-{\bf x}_k)$, where $\delta(z,z')$ is zero everywhere except in $z=z'$. On the contour $\gamma$ the $\delta$-function is zero if $z$ and $z'$ lie on different branches, namely $\delta(t_{\pm},t_{\mp})=0$. Due to the orientation of the contour we have $\delta(t_-,t'_-)=\delta(t-t')$ and $\delta(t_+,t'_+)=-\delta(t-t')$ where $\delta(t,t')$ is the real axis $\delta$-function. On the vertical branch we get $\delta(t_0-i\tau,t_0-i\tau')=i\delta(\tau-\tau')$. We have for the diagonal Hamiltonian $\langle {\bf x}_j|\hat{h}(z_j)|{\bf x}_k\rangle=\delta({\bf x}_j-{\bf x}_k)h(j)$, with $h(j)=h({\bf x}_j,-i{\bf \nabla}_j,z_j)$. Note that $h(z)=h^M=h-\mu$ on the vertical part for $z=t_0-i\tau$, and $h(z)=h$ on the horizontal branches for $z=t_{\pm}$. 
We introduce $v(j;k)\equiv\delta(z_j;z_k)v({\bf x}_j,{\bf x}_k,z_j)$. Explicitly we get $v({\bf x},z;{\bf x}',z')=\delta(z,z')v({\bf x},{\bf x}',t)$ on the horizontal branches of $\gamma$ with $z=t_{\pm}$, and $v({\bf x},z;{\bf x}',z')=\delta(z,z')v^M({\bf x},{\bf x}')$ on the vertical part of $\gamma$. Furthermore, in the notation $j^+\equiv({\bf x}_j,z_j^+)$ the $z_j^+$ is infinitesimally later than $z_j$ (and in $j^-\equiv({\bf x}_j,z_j^-)$ the $z_j^-$ is infinitesimally earlier than $z_j$). The analytical properties of two-point correlators that belong to the Keldysh space are presented in the appendix. \section{The T-Matrix approximation} We obtain an infinite hierarchy of equations in which the equation of motion for $G_n$ is related to $G_{n+1}$ and $G_{n-1}$. The hierarchy can be truncated by appealing to appropriate approximations. We apply the T-matrix approximation, which holds for short-range interactions in the limit of low particle density. The two-particle Green's function can be written as \begin{equation} G_2(1,2;1',2')=G(1;1')G(2;2')\pm G(1;2')G(2;1')+\Gamma(1,2;1',2'), \end{equation} where the first two terms give the Hartree-Fock approximation, and the last term includes the $\Gamma$ correlation function. To the lowest order in the interaction, the solution for the correlation function yields \begin{equation} \Gamma(1,2;1',2')\approx i\int d3d4~G_0(1;3)G_0(2;4)v(3;4)G_2(3,4;1',2'), \end{equation} where the one-particle non-interacting Green's function, $G_0$, obeys $\left[i\frac{d}{dz_1}-h(1)\right]G_0(1;1')=\delta(1;1')$. The $G_2$ can be written as \begin{equation} G_2(1,2;1',2')=\int d3d4~S(1,2;3,4)\left[G(3;1')G(4;2')\pm G(3;2')G(4;1')\right], \end{equation} where we define the $S$ function by \begin{equation} S(1,2;3,4)=\delta(1;3)\delta(2;4)+i\int d5d6~G_0(1;5)G_0(2;6)v(5;6)S(5,6;3,4). \end{equation} The $T_0$ matrix is defined by $T_0(1,2;1',2')\equiv v(1;2)S(1,2;1',2')$. Multiplying $G_2$ by $v$ we get \begin{equation} v(1;2)G_2(1,2;1',2')=\int d3d4~T_0(1,2;3,4)\left[G(3;1')G(4;2')\pm G(3;2')G(4;1')\right]. \end{equation} Using the above definition of $S$, we obtain \begin{equation} T_0(1,2;1',2')=\delta(1;1')\delta(2;2')v(1';2')+i\int d3d4~T_0(1,2;3,4)G_0(3;1')G_0(4;2')v(1';2'). \end{equation} The T-matrix is a tool for studying the scattering of particles in quantum mechanics. Here we use it in order to study the scattering of particles in many-body problems, namely quantum scattering among particles in a medium of interacting particles \cite{Kadanoff1962}. The T-matrix obeys the relation $T(1,2;1',2')=\delta(z_1,z_2)\delta(z'_1,z'_2)T({\bf x}_1,{\bf x}_2,z_1;{\bf x}'_1,{\bf x}'_2,z'_1)$, where we dropped the subscript $0$ from the T-matrix, and using the interaction property $v(j;k)=\delta(z_j,z_k)v({\bf x}_j,{\bf x}_k)$, we get \begin{eqnarray} &&T({\bf x}_1,{\bf x}_2,z_1;{\bf x}'_1,{\bf x}'_2,z'_1)=\delta(z_1,z'_1)\delta({\bf x}_1-{\bf x}'_1)\delta({\bf x}_2-{\bf x}'_2)v({\bf x}_1,{\bf x}_2) \\ \nonumber &+&\int d{\bf x}_3d{\bf x}_4\int dz_3~T({\bf x}_1,{\bf x}_2,z_1;{\bf x}_3,{\bf x}_4,z_3){\cal G}_2({\bf x}_3,{\bf x}_4,z_3;{\bf x}'_1,{\bf x}'_2,z'_1)v({\bf x}'_1;{\bf x}'_2), \end{eqnarray} where ${\cal G}_2({\bf x}_3,{\bf x}_4,z_3;{\bf x}'_1,{\bf x}'_2,z'_1)=iG({\bf x}_3,z_3;{\bf x}'_1,z'_1)G({\bf x}_4,z_3;{\bf x}'_2,z'_1)$. Note that the one-particle Green's function here is the non-interacting one, $G_0$.
At thermal equilibrium the functions depend only on the time difference, and the Fourier transform yields \begin{eqnarray} &&T({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)=\delta({\bf x}_1-{\bf x}'_1)\delta({\bf x}_2-{\bf x}'_2)v({\bf x}_1,{\bf x}_2) \\ \nonumber &+&\int d{\bf x}_3d{\bf x}_4~T({\bf x}_1,{\bf x}_2;{\bf x}_3,{\bf x}_4;\omega){\cal G}_2({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;\omega)v({\bf x}'_1;{\bf x}'_2). \end{eqnarray} The T-matrix belongs to the Keldysh space and contains a singular part (as presented in the appendix). We now treat the retarded and advanced Keldysh components of the T-matrix. We have \begin{equation} T^{R/A}({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)=\delta({\bf x}_1-{\bf x}'_1)\delta({\bf x}_2-{\bf x}'_2)v({\bf x}_1,{\bf x}_2)+\int \frac{d\omega'}{2\pi}\frac{{\cal B}({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega')}{\omega-\omega'\pm i\eta}, \end{equation} where ${\cal B}({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)=i\left[T^>({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)-T^<({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)\right]$. Moreover, we get the fluctuation-dissipation theorem $T^>({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)=e^{\beta(\omega-2\mu)}T^<({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)$. We obtain $T^<({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)=-if(\omega-2\mu){\cal B}({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)$ and $T^>({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)=-i\bar{f}(\omega-2\mu){\cal B}({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\omega)$ with $f(\epsilon)=\frac{1}{e^{\beta\epsilon}-1}$ and $\bar{f}(\epsilon)=1+f(\epsilon)$. In a similar way, for ${\cal G}_2$ we get \begin{equation} {\cal G}_2^{R/A}({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;\omega)=i\int \frac{d\omega'}{2\pi}\frac{{\cal G}_2^>({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;\omega')-{\cal G}_2^<({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;\omega')}{\omega-\omega'\pm i\eta}. \end{equation} The retarded ${\cal G}_2$ function is defined by \begin{equation} {\cal G}_2^R({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;t_3-t'_1)=\theta(t_3-t'_1)\left\{{\cal G}_2^>({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;t_3-t'_1)-{\cal G}_2^<({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;t_3-t'_1)\right\}, \end{equation} where, since ${\cal G}_2^{\lessgtr}({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;t_3-t'_1)=iG^{\lessgtr}({\bf x}_3;{\bf x}'_1;t_3-t'_1)G^{\lessgtr}({\bf x}_4;{\bf x}'_2;t_3-t'_1)$, the Fourier transform gives \begin{equation} {\cal G}_2^{R}({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;\zeta)=i^2\int\frac{d\omega'}{2\pi}\frac{d\omega''}{2\pi}\frac{\left\{G^>({\bf x}_3;{\bf x}'_1;\omega')G^>({\bf x}_4;{\bf x}'_2;\omega'')-G^<({\bf x}_3;{\bf x}'_1;\omega')G^<({\bf x}_4;{\bf x}'_2;\omega'')\right\}}{\zeta-\omega'-\omega''}, \end{equation} where $\zeta=\omega+i\eta$. The retarded T-matrix obeys \begin{eqnarray} &&T^R({\bf x}_1,{\bf x}_2;{\bf x}'_1,{\bf x}'_2;\zeta)=\delta({\bf x}_1-{\bf x}'_1)\delta({\bf x}_2-{\bf x}'_2)v({\bf x}_1,{\bf x}_2) \\ \nonumber &+&\int d{\bf x}_3d{\bf x}_4~T^R({\bf x}_1,{\bf x}_2;{\bf x}_3,{\bf x}_4;\zeta){\cal G}_2^R({\bf x}_3,{\bf x}_4;{\bf x}'_1,{\bf x}'_2;\zeta)v({\bf x}'_1;{\bf x}'_2). \end{eqnarray} The retarded T-matrix is of particular interest as it is directly connected to the scattering amplitude.
\subsection{Momentum-Space Representation} As a first step, due to translational symmetry, we change variables and use the center of mass and relative positions, ${\bf X}=\frac{{\bf x}_1+{\bf x}_2}{2}$ and ${\bf x}={\bf x}_1-{\bf x}_2$, to get \begin{eqnarray} &&T^R({\bf x},{\bf X};{\bf x}',{\bf X}';\zeta)=\delta({\bf X}-{\bf X}')\delta({\bf x}-{\bf x}')v({\bf x}') \\ \nonumber &+&\int d\bar{\bf x}d\bar{\bf X}~T^R({\bf x},{\bf X};\bar{\bf x},\bar{\bf X};\zeta){\cal G}_2^R(\bar{\bf x},\bar{\bf X};{\bf x}',{\bf X}';\zeta)v({\bf x}'). \end{eqnarray} Now we define the center of mass and relative momentum by ${\bf P}={\bf p}_1+{\bf p}_2$ and ${\bf p}=\frac{{\bf p}_1-{\bf p}_2}{2}$, respectively. The Fourier transform and its inverse are defined by \begin{equation} {\cal O}({\bf x})=\int\frac{d{\bf p}}{(2\pi)^3}e^{i{\bf p}\cdot{\bf x}}{\cal O}({\bf p}),\ \ \ {\cal O}({\bf p})=\int d{\bf x}e^{-i{\bf p}\cdot{\bf x}}{\cal O}({\bf x}), \end{equation} and we use the identity $\delta({\bf x})=\int\frac{d{\bf p}}{(2\pi)^3}e^{ i{\bf p}\cdot{\bf x}}$ and its inverse $(2\pi)^3\delta({\bf p})=\int d{\bf x}e^{ -i{\bf p}\cdot{\bf x}}$. The Fourier transform of the T-matrix equation gives \begin{equation} T^R({\bf p};{\bf p}';{\bf P},\zeta)=v({\bf p}-{\bf p}')+\int \frac{d\bar{\bf p}}{(2\pi)^3}\frac{d\bar{\bf p}'}{(2\pi)^3}~T^R({\bf p};\bar{\bf p};{\bf P},\zeta){\cal G}_2^R(\bar{\bf p};\bar{\bf p}';{\bf P},\zeta)v(\bar{\bf p}'-{\bf p}'). \end{equation} Here ${\bf p}$ is the initial momentum of one particle in the center of mass system, and ${\bf p}'$ is the final momentum of the scattered particle. The center of mass momentum is conserved: the initial center of mass momentum ${\bf P}$ equals the final one ${\bf P}'$, that is, ${\bf P}={\bf P}'$. The momentum-space coupling potential is \begin{equation} v({\bf p}-{\bf p}')=\int d{\bf x}'e^{-i({\bf p}-{\bf p}')\cdot{\bf x}'}v({\bf x}'), \end{equation} and the ${\cal G}_2^R$ is ${\cal G}_2^R(\bar{\bf p},\bar{\bf p}';{\bf P},\zeta)=\delta(\bar{\bf p}-\bar{\bf p}')\Upsilon(\bar{\bf p};{\bf P},\zeta)$, where \begin{eqnarray} \Upsilon(\bar{\bf p};{\bf P},\zeta)&=&i^2\int\frac{d\omega'}{2\pi}\frac{d\omega''}{2\pi}\left\{\frac{G^>(\frac{\bf P}{2}+\bar{\bf p};\omega')G^>(\frac{\bf P}{2}-\bar{\bf p};\omega'')}{\zeta-\omega'-\omega''}\right. \\ \nonumber &-&\left.\frac{G^<(\frac{\bf P}{2}+\bar{\bf p};\omega')G^<(\frac{\bf P}{2}-\bar{\bf p};\omega'')}{\zeta-\omega'-\omega''}\right\}. \end{eqnarray} We have \begin{equation} T^R({\bf p};{\bf p}';{\bf P},\zeta)=v({\bf p}-{\bf p}')+\int \frac{d\bar{\bf p}}{(2\pi)^3}~T^R({\bf p};\bar{\bf p};{\bf P},\zeta)\Upsilon(\bar{\bf p};{\bf P},\zeta)v(\bar{\bf p}-{\bf p}'). \end{equation} We assume a local interaction, $v({\bf x})=v\delta({\bf x})$, whose Fourier transform gives $v({\bf p}-{\bf p}')=v$. Now, in the center of mass frame, the scattering is elastic, where only the particle direction changes and $|{\bf p}|=|{\bf p}'|$. The T-matrix then depends only on the center of mass momentum, $T^R({\bf p};{\bf p}';{\bf P},\zeta)\equiv T^R({\bf P},\zeta)$, and we obtain \begin{equation} T^R({\bf P},\zeta)=\frac{v}{1-v\int \frac{d\bar{\bf p}}{(2\pi)^3}~\Upsilon(\bar{\bf p};{\bf P},\zeta)}. \end{equation} Let us assume a small center of mass momentum. We can consider ${\bf P}\approx 0$, where we have counter-propagating particles such that the total momentum is small; hence we have $T^R(\zeta)=\frac{v}{1-v\int \frac{d\bar{\bf p}}{(2\pi)^3}~\Upsilon(\bar{\bf p};\zeta)}$.
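The closed form just obtained is simply the resummation of the repeated (ladder) scattering series $v+v\Pi v+v\Pi v\Pi v+\cdots$ with $\Pi=\int \frac{d\bar{\bf p}}{(2\pi)^3}\Upsilon(\bar{\bf p};\zeta)$. A minimal numerical sketch of this resummation follows; the numbers below are arbitrary illustrative values and are not taken from the waveguide parameters.

\begin{verbatim}
import numpy as np

# toy check: iterating T = v + v*Pi*T reproduces the closed form v / (1 - v*Pi)
v  = -0.3                      # coupling constant (arbitrary illustrative value)
Pi = 0.8 + 0.2j                # stands for the pair-bubble integral of Upsilon (arbitrary)

T = v
for _ in range(200):           # ladder series: v + v*Pi*v + v*Pi*v*Pi*v + ...
    T = v + v * Pi * T

print(T, v / (1.0 - v * Pi))   # the two agree once the series converges (|v*Pi| < 1)
\end{verbatim}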
The lesser and greater one-particle Green's functions are $G^<(\bar{\bf p};\omega)=- if(\omega){\cal A}(\bar{\bf p};\omega)$ and $G^>(\bar{\bf p};\omega)=- i\bar{f}(\omega){\cal A}(\bar{\bf p};\omega)$, respectively, where $f(\omega)=\frac{1}{e^{\beta(\omega-\mu)}-1}$ and $\bar{f}(\omega)=1+ f(\omega)=e^{\beta(\omega-\mu)}f(\omega)$. For the non-interacting spectral function we have ${\cal A}({\bf p};\omega)=2\pi\delta(\omega-\epsilon({\bf p}))$, where $\epsilon({\bf p})$ is the free-particle dispersion. Using the dispersion symmetry $\epsilon({\bf p})=\epsilon(-{\bf p})$, we get $\Upsilon(\bar{\bf p};\zeta)=\frac{1+ 2f(\epsilon(\bar{\bf p}))}{\zeta-2\epsilon(\bar{\bf p})}$, and we obtain $1+ 2f(\epsilon(\bar{\bf p}))=\coth\left(\beta\frac{\epsilon(\bar{\bf p})-\mu}{2}\right)$. Finally we reach \begin{equation} T^R(\zeta)=\frac{v}{1-v\int \frac{d{\bf p}}{(2\pi)^3}~\frac{\coth\left(\beta\frac{\epsilon({\bf p})-\mu}{2}\right)}{\zeta-2\epsilon({\bf p})}}. \end{equation} \subsection{One-Dimensional Attractive Photons} We now consider interacting photons propagating in a one-dimensional nanoscale waveguide. The coupling parameter is negative for an attractive interaction, so we make the change $v\rightarrow -v$; then, in terms of the wavenumber, \begin{equation} T^R(\zeta)=\frac{-v}{1+v\int \frac{dk}{2\pi}~\frac{\coth\left(\beta\frac{\epsilon(k)-\mu}{2}\right)}{\zeta-2\epsilon(k)}}. \end{equation} The linear dispersion is $\epsilon(k)=\epsilon_0+v_ek$, where $\epsilon_0$ is the minimum energy that appears due to transverse confinement, and $v_e$ is the photon effective group velocity. The interaction is limited to an energy band around the chemical potential where $|\epsilon(k)-\mu|<\Delta$. The bandwidth is taken to be of the order of the phonon frequency $\Omega$. We change to the variable $\epsilon=\epsilon(k)-\mu$, where $d\epsilon=v_edk$, and we get \begin{equation} T^R(\zeta)=\frac{-v}{1+\frac{v}{2\pi v_e}\int_{-\Delta}^{+\Delta} d\epsilon~\frac{\coth\left(\beta\epsilon/2\right)}{\zeta-2\mu-2\epsilon}}. \end{equation} We evaluate the integral at the imaginary value $\zeta-2\mu=i\eta$, where in the limit $\eta\rightarrow 0$ we obtain \begin{equation} T^R=\frac{-v}{1-g\int_{0}^{\Lambda}dx~\frac{\coth x}{x}}, \end{equation} with $\Lambda=\beta\Delta/2$ and $g=\frac{v}{2\pi v_e}$, after making the change of variable $x=\beta\epsilon/2$. The T-matrix measures the probability amplitude for adding a pair of particles into the system and afterward removing a pair. A complex pole in the upper half plane indicates that if a pair of particles with opposite momenta are added at a certain time, the probability amplitude for removing such a pair increases exponentially in time, and the T-matrix approximation breaks down. The appearance of complex poles signals the formation of bound states (photon molecules). In the limit of high temperature $T\rightarrow\infty$, where $\beta\rightarrow 0$ and $x\rightarrow 0$, we have $1<g\int_{0}^{\Lambda}dx\frac{\coth x}{x}$, as $\coth x\rightarrow \infty$. Hence the T-matrix contains no complex poles. On the other hand, in the limit of low temperature $T\rightarrow 0$, where $\beta\rightarrow \infty$ and $x\rightarrow \infty$, we have $1\ge g\int_{0}^{\Lambda}dx\frac{\coth x}{x}$, as $\coth x\rightarrow 1$. Hence at low temperature complex poles can appear in the T-matrix. The critical temperature for the formation of bound states can be estimated from the equality $1=g\int_{0}^{\Lambda}dx\frac{\coth x}{x}$, at which the denominator of the T-matrix vanishes.
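This condition can also be located numerically. The sketch below (our own illustration) evaluates the denominator with an explicit infrared cutoff $x_{\rm min}$, since $\coth x/x$ diverges as $x\rightarrow 0$; the choice $x_{\rm min}=1$ mimics the low-temperature estimate carried out next, and the representative numbers $v/v_e=4\pi$ and $\Delta=\hbar\Omega$ with $\Omega=40$~GHz are the ones quoted below.

\begin{verbatim}
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23   # SI units
Omega  = 4.0e10        # phonon frequency ~ 40 GHz (representative value quoted in the text)
Delta  = hbar * Omega  # interaction bandwidth
g      = 2.0           # g = v/(2*pi*v_e) for v/v_e = 4*pi
x_min  = 1.0           # infrared cutoff (our assumption; the integrand diverges at x -> 0)

def denominator(T):
    """1 - g * int_{x_min}^{Lambda} coth(x)/x dx, with Lambda = Delta/(2 kB T)."""
    Lam = Delta / (2.0 * kB * T)
    if Lam <= x_min:
        return 1.0
    x = np.linspace(x_min, Lam, 20001)
    f = 1.0 / (np.tanh(x) * x)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal rule
    return 1.0 - g * integral

# bisection for the temperature at which the denominator of the T-matrix vanishes
lo, hi = 1e-3, 1.0       # bracketing temperatures in Kelvin
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if denominator(mid) > 0.0:   # denominator still positive: T above T_c
        hi = mid
    else:
        lo = mid
print("T_c ~ %.2f K" % mid)      # of order 0.1 K, consistent with the estimate below
\end{verbatim}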
We are in the limit $\beta\Delta\gg 1$, and the integral is dominated by the region where $\coth x\approx 1$. As we are in the low-temperature limit, one can integrate over the region of $x$ between $1$ and $\Lambda$, where $1\approx g\int_{1}^{\Lambda}dx\frac{1}{x}$. Then we get $1=g\ln \Lambda$, so that we can write $\frac{2\pi v_e}{v}=\ln \frac{\Delta}{2k_BT_c}$, which leads to the result \begin{equation} k_BT_c=\frac{\Delta}{2}e^{-\frac{2\pi v_e}{v}}. \end{equation} We choose the numbers such that $v/v_e=4\pi$, and the bandwidth is fixed by the vibrational mode frequency of about $\Omega=40$~GHz, with $\Delta=\hbar\Omega$ \cite{Zoubi2017}. Then we have $k_BT_c=\frac{\hbar}{\sqrt{e}}\times 20~\text{GHz}$, and the critical temperature is $T_c\approx 0.1$~K. \section{Photon Molecules} After demonstrating the possibility of the formation of photon molecules in the previous section, we present here the photon bound states and show how to implement them for quantum logic gates. We start with the three-dimensional case and later concentrate on the one-dimensional waveguide case. We consider two counter-propagating photons, $(a)$ and $(b)$, that form a molecule of wavevector ${\bf K}$, characterized by a normalized wavefunction $\chi({\bf x}_a-{\bf x}_b)$. The photon molecule wavefunction is defined by \begin{equation} \phi_{\bf K}({\bf x}_a,{\bf x}_b)=\frac{1}{\sqrt{V}}~e^{i{\bf K}\cdot({\bf x}_a+{\bf x}_b)/2}\chi({\bf x}_a-{\bf x}_b), \end{equation} where $V=L^3$ is the system volume. Using translational symmetry, the Fourier transform reads $\tilde{\chi}_{\bf k}=\frac{1}{\sqrt{V}}\int d^3x~e^{-i{\bf k}\cdot{\bf x}}\chi({\bf x})$, with the inverse transform $\chi({\bf x})=\frac{1}{\sqrt{V}}\sum_{\bf k}\tilde{\chi}_{\bf k}~e^{i{\bf k}\cdot{\bf x}}$, and we get \begin{equation} \phi_{\bf K}({\bf x}_a,{\bf x}_b)=\frac{1}{V}\sum_{\bf k}\tilde{\chi}_{\bf k}~e^{i\left(\frac{\bf K}{2}+{\bf k}\right)\cdot{\bf x}_a}e^{i\left(\frac{\bf K}{2}-{\bf k}\right)\cdot{\bf x}_b}. \end{equation} We use periodic boundary conditions, so the wavenumber ${\bf k}=(k_x,k_y,k_z)$ takes the values $k_i=2\pi n_i/L$ with $i=x,y,z$, where the $n_i$ are integers. The normalization condition is given by $\int d^3x\left|\chi({\bf x})\right|^2=\sum_{\bf k}\left|\tilde{\chi}_{\bf k}\right|^2=1$. The ket eigenstate is given by $|\phi_{\bf K}(a,b)\rangle=\sum_{\bf k}\tilde{\chi}_{\bf k}\left|\left.a,\left(\frac{\bf K}{2}+{\bf k}\right);b,\left(\frac{\bf K}{2}-{\bf k}\right)\right.\right\rangle$. The ket can be created from the vacuum using the second-quantization operators of Fock space, by $|\phi_{\bf K}\rangle=\sum_{\bf k}\tilde{\chi}_{\bf k}~\hat{a}_{\frac{\bf K}{2}+{\bf k}}^{\dagger}\hat{a}_{\frac{\bf K}{2}-{\bf k}}^{\dagger}|\text{vac}\rangle$. We define the pair creation operator of a molecule having a total wavenumber ${\bf K}$ by $\hat{A}_{\bf K}^{\dagger}=\sum_{\bf k}\tilde{\chi}_{\bf k}~\hat{a}_{\frac{\bf K}{2}+{\bf k}}^{\dagger}\hat{a}_{\frac{\bf K}{2}-{\bf k}}^{\dagger}$. The creation operator is related to the field operator by $\hat{a}^{\dagger}_{\bf k}=\frac{1}{\sqrt{V}}\int d^3x~e^{i{\bf k}\cdot{\bf x}}\hat{\psi}^{\dagger}({\bf x})$, which leads to \begin{equation} \hat{A}_{{\bf K}}^{\dagger}=\frac{1}{\sqrt{V}}\int d^3X~e^{i{\bf K}\cdot{\bf X}}\int d^3x~{\chi}({\bf x})~\hat{\psi}^{\dagger}({\bf X}+{\bf x}/2)\hat{\psi}^{\dagger}({\bf X}-{\bf x}/2).
\end{equation} The pair field operator can be defined as $\hat{\Psi}^{\dagger}({\bf X})=\frac{1}{\sqrt{V}}\sum_{\bf K}e^{-i{\bf K}\cdot{\bf X}}\hat{A}_{{\bf K}}^{\dagger}$. We obtain, using the result $\sum_{\bf K}e^{i{\bf K}\cdot\left({\bf X'}-{\bf X}\right)}=V\delta({\bf X}-{\bf X'})$, the pair field creation operator \begin{equation} \hat{\Psi}^{\dagger}({\bf X})=\int d^3x~{\chi}({\bf x})~\hat{\psi}^{\dagger}({\bf X}+{\bf x}/2)\hat{\psi}^{\dagger}({\bf X}-{\bf x}/2). \end{equation} \subsection{Quantum Nonlinear Phase} The wavefunction of a photon molecule built of a pair of counter-propagating photons can accumulate a quantum nonlinear phase of the order of $\pi$ under appropriate conditions. The phase is shown to be useful for the implementation of the photon molecule as a quantum logic gate. We consider two counter-propagating photons, ($a$) and ($b$), of minimum frequency $\omega_0$ and effective group velocity $v_e$, inside one-dimensional nanoscale wires. The free-photon real-space Hamiltonian is \begin{equation} \hat{H}_0=\int dx\left\{\omega_0\ \hat{\psi}_a^{\dagger}(x)\hat{\psi}_a(x)+iv_e\frac{\partial\hat{\psi}_a^{\dagger}(x)}{\partial x}\hat{\psi}_a(x)+\omega_0\ \hat{\psi}_b^{\dagger}(x)\hat{\psi}_b(x)-iv_e\frac{\partial\hat{\psi}_b^{\dagger}(x)}{\partial x}\hat{\psi}_b(x)\right\}, \end{equation} and the photon-photon interaction Hamiltonian with coupling parameter $v$ is given by \begin{equation} \hat{H}_I=\frac{v}{2}\int dxdx'\ \delta(x-x')\ \hat{\psi}_a^{\dagger}(x')\hat{\psi}_b^{\dagger}(x)\hat{\psi}_a(x')\hat{\psi}_b(x). \end{equation} The photon-pair bound state is given by \begin{equation} |\Psi(t)\rangle=\int d(x_1-x_2)\ \phi(x_1-x_2,t)\ \hat{\psi}_a^{\dagger}(x_1)\hat{\psi}_b^{\dagger}(x_2)|\text{vac}\rangle. \end{equation} The state obeys the Schr\"{o}dinger equation $i\frac{\partial}{\partial t}|\Psi(t)\rangle=\hat{H}|\Psi(t)\rangle$, and we get \begin{equation} i\left\{\frac{\partial}{\partial t}+v_e\left(\frac{\partial}{\partial x_1}-\frac{\partial}{\partial x_2}\right)\right\}\phi(x_1-x_2,t)=\left\{2\omega_0+\frac{v}{2}\delta(x_1-x_2)\right\}\phi(x_1-x_2,t). \end{equation} Moving into the rotating frame by using $\phi(x_1-x_2,t)=\tilde{\phi}(x_1-x_2,t)e^{-2i\omega_0t}$ gives \begin{equation} i\left\{\frac{\partial}{\partial t}+v_e\left(\frac{\partial}{\partial x_1}-\frac{\partial}{\partial x_2}\right)\right\}\tilde{\phi}(x_1-x_2,t)=\frac{v}{2}\delta(x_1-x_2)\tilde{\phi}(x_1-x_2,t). \end{equation} In terms of the center-of-mass and relative positions, $X=\frac{x_1+x_2}{2}$ and $x=x_1-x_2$, where \begin{equation} \frac{\partial}{\partial X}=\frac{\partial}{\partial x_1}+\frac{\partial}{\partial x_2},\ \ \ 2\frac{\partial}{\partial x}=\frac{\partial}{\partial x_1}-\frac{\partial}{\partial x_2}, \end{equation} we get \begin{equation} i\left\{\frac{\partial}{\partial t}+2v_e\frac{\partial}{\partial x}\right\}\tilde{\phi}(x,t)=\frac{v}{2}\delta(x)\tilde{\phi}(x,t).
\end{equation} We apply the change of variables $\eta=x-2v_et$ and $\xi=x$, where \begin{equation} \frac{\partial}{\partial x}=\frac{\partial}{\partial\eta}+\frac{\partial}{\partial\xi},\ \ \ \frac{\partial}{\partial t}=-2v_e\frac{\partial}{\partial\eta}, \end{equation} and we obtain $\frac{\partial}{\partial t}+2v_e\frac{\partial}{\partial x}=2v_e\frac{\partial}{\partial\xi}$, so the equation of motion becomes \begin{equation} \frac{\partial}{\partial\xi}\tilde{\phi}(\eta,\xi)=-i\frac{v}{4v_e}\delta(\xi)\tilde{\phi}(\eta,\xi), \end{equation} and the solution reads \begin{equation} \tilde{\phi}(\eta,\xi)=\tilde{\phi}_{\text{in}}e^{-i\frac{v}{4v_e}\int d\xi\delta(\xi)}=\tilde{\phi}_{\text{in}}e^{-i\frac{v}{4v_e}}. \end{equation} \begin{table}[ht] \caption{Z-Controlled Gate} \centering \begin{tabular}{c c} \hline\hline Input State$\ \ \ $ & $\ \ \ $Output State \\ [0.5ex] \hline $|0,0\rangle$ & $\ |0,0\rangle$ \\ $|1,0\rangle$ & $\ |1,0\rangle$ \\ $|0,1\rangle$ & $\ |0,1\rangle$ \\ $|1,1\rangle$ & $-|1,1\rangle$ \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table} Photon molecules can serve as a tool for the implementation of quantum logic gates. We show how to achieve a Z-controlled (controlled-Z) gate for the case $\vartheta=\pi$, where the nonlinear quantum phase is $\vartheta=\frac{v}{4v_e}$; this value is obtained for the previous choice $v/v_e=4\pi$. If the input has zero photons in the two channels, that is $|0,0\rangle$, the output is the same state $|0,0\rangle$. If one channel includes one photon and the second is empty, that is $|0,1\rangle$ or $|1,0\rangle$, then the output is again the same state, $|0,1\rangle$ or $|1,0\rangle$. For the case of two counter-propagating photons the input state is $|1,1\rangle$, and the output state acquires a phase, becoming $-|1,1\rangle$, as presented in Table~\ref{table:nonlin}. The gate is universal, and in combination with photon Hadamard gates one can achieve all required quantum logic gates. \section{Conclusions} Quantum nonlinear optics has attracted great interest in recent years owing to its importance both for fundamental physics and for applications in quantum information processing. Several proposals have been put forward recently for the realization of quantum nonlinear optics, mainly for achieving strongly interacting photons, each proposal with its own advantages and disadvantages. Nanoscale structures are solid-state components and can therefore be easily integrated into all-optical platforms, which makes them very attractive. The many-body physics of photons can be treated using different techniques that rest on approximations. In the present paper we adopted the method of contour Green's functions, which permits one to extract the physical properties of the system. We reached a hierarchy of equations of motion for the Green's functions, where the equation for the one-particle Green's function depends on the equation for the two-particle Green's function, and so on for higher orders. The system cannot be solved exactly, and we applied the T-matrix approximation, which allows the truncation of the equations and can be solved for the scattering problem in a medium of interacting particles. In the paper we considered interacting slow photons in nanoscale wires, where we used our previously derived Hamiltonian for the effective photon-photon interaction that is mediated by phonons. We calculated the T-matrix and found the complex pole at which the approximation breaks down.
The singularity in the T-matrix is the signature of the formation of photon bound states and provides the critical temperature at which such phenomena can appear. The photon bound state is represented by a ket state that is defined through two-particle creation operators. The equation of motion for the state amplitude is solved and results in a nonlinear quantum phase shift that can be of the order of $\pi$ in the appropriate region of physical parameters. The photon molecules are shown to act as quantum logic gates by exploiting the nonlinear phase shift. The present work can be extended to further interesting effects in the many-body physics of photons in nanostructures. The contour Green's function is a powerful tool that allows us to study further interesting phenomena, e.g., Bose--Einstein condensation and superfluidity of photons. Moreover, interacting photons in nanoscale structures provide an ideal environment for studying non-equilibrium behavior, and the contour Green's function is the optimal tool for achieving this aim. The implementation of interacting photons for quantum information processing is an important step toward introducing further nanophotonic quantum information components. Our setup allows us to achieve quantum information processing and communication using hybridized components involving interacting photons. The scheme presented here can be extended to a wide range of photonic quantum computing devices. \section*{Appendix} We introduce here some analytical properties of the general two-point correlator that belongs to Keldysh space. The two-point correlator is defined by ${\cal C}(z,z')=\text{Tr}\left[\hat{\rho}{\cal T}\left\{\hat{O}_1(z)\hat{O}_2(z')\right\}\right]$ for the two operators $\hat{O}_1(z)$ and $\hat{O}_2(z')$, where the density matrix is given by $\hat\rho=\frac{e^{-\beta \hat{H}^M}}{Z}$, with the partition function $Z=\text{Tr}\left\{e^{-\beta \hat{H}^M}\right\}$. In Keldysh space we can write the general form \begin{equation} {\cal C}(z,z')={\cal C}^{\delta}\delta(z,z')+\theta(z,z'){\cal C}^>(z,z')+\theta(z',z){\cal C}^<(z,z'), \end{equation} where ${\cal C}^{\delta}$ is the singular part, and ${\cal C}^>(z,z')=\text{Tr}\left[\hat{\rho}\hat{O}_1(z)\hat{O}_2(z')\right]$, with ${\cal C}^<(z,z')=\text{Tr}\left[\hat{\rho}\hat{O}_2(z')\hat{O}_1(z)\right]$. On the backward and forward branches, as $\hat{O}_i(t_+)=\hat{O}_i(t_-)$ for $(i=1,2)$, we have ${\cal C}^{\lessgtr}(t_+,z')={\cal C}^{\lessgtr}(t_-,z')$, and ${\cal C}^{\lessgtr}(z,t'_+)={\cal C}^{\lessgtr}(z,t'_-)$. Also we have ${\cal C}^{\delta}(t_+)={\cal C}^{\delta}(t_-)\equiv {\cal C}^{\delta}(t)$. In general, we have $\theta(z,z')=1$ for $z$ later than $z'$ on the contour, and zero otherwise. Moreover, we have $\delta(z,z')=\frac{d}{dz}\theta(z,z')=-\frac{d}{dz'}\theta(z,z')$. We now define different Keldysh components that are functions of real variables. When both arguments are on the horizontal branches, the greater and lesser Keldysh components are ${\cal C}^>(t,t')\equiv {\cal C}(t_+,t'_-)$, and ${\cal C}^<(t,t')\equiv {\cal C}(t_-,t'_+)$. Namely, on these branches the value of the contour function ${\cal C}^{\lessgtr}(z,z')$ is the real-time function ${\cal C}^{\lessgtr}(t,t')$.
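We note in passing (an added remark) that for the frequently encountered case $\hat{O}_2=\hat{O}_1^{\dagger}$, hermiticity of $\hat\rho$ implies the symmetry
\begin{equation*}
{\cal C}^{\gtrless}(t,t')=\big[{\cal C}^{\gtrless}(t',t)\big]^{*},
\end{equation*}
which follows directly from the definitions of ${\cal C}^{>}$ and ${\cal C}^{<}$ given above.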
Furthermore, we define the retarded and advanced Keldysh components with real-time arguments \begin{eqnarray} {\cal C}^R(t,t')&\equiv&{\cal C}^{\delta}\delta(t-t')+\theta(t-t')\left[{\cal C}^>(t,t')-{\cal C}^<(t,t')\right], \\ \nonumber {\cal C}^A(t,t')&\equiv&{\cal C}^{\delta}\delta(t-t')-\theta(t'-t)\left[{\cal C}^>(t,t')-{\cal C}^<(t,t')\right]. \end{eqnarray} The retarded component vanishes for $t<t'$, while the advanced component vanishes for $t>t'$. Dropping the singular part and using the Fourier transform ${\cal O}(t_1-t_2)=\int\frac{d\omega}{2\pi}e^{-i\omega(t_1-t_2)}{\cal O}(\omega)$ together with the Heaviside identity $\theta(t_1-t_2)=i\int\frac{d\omega}{2\pi}\frac{e^{-i\omega(t_1-t_2)}}{\omega+i\eta}$, we obtain \begin{equation} {\cal C}^{R/A}(\omega)=\int\frac{d\omega'}{2\pi}\frac{\hat{\cal A}(\omega')}{\omega-\omega'\pm i\eta}, \end{equation} where we defined the spectral function $\hat{\cal A}(\omega)=i\left[{\cal C}^>(\omega)-{\cal C}^<(\omega)\right]$ or $\hat{\cal A}(\omega)=i\left[{\cal C}^R(\omega)-{\cal C}^A(\omega)\right]$. We get the fluctuation-dissipation theorem ${\cal C}^<(\omega)=- i f(\omega-\mu)\hat{\cal A}(\omega)$ and ${\cal C}^>(\omega)=-i\bar{f}(\omega-\mu)\hat{\cal A}(\omega)$, where for bosons $f(\omega)=\frac{1}{e^{\beta\omega}-1}$ and $\bar{f}(\omega)=1+ f(\omega)=e^{\beta\omega}f(\omega)$, with the relation ${\cal C}^>(\omega)=e^{\beta(\omega-\mu)}{\cal C}^<(\omega)$. \section*{Acknowledgment} The work was supported by the Council for Higher Education in Israel via the Maa'of Fellowship.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} For a classical Hamiltonian system with $n$ degrees of freedom, the existence of $n$ integrals of motion in involution is required to make it integrable in the Liouville sense. These integrals must be well-defined functions in the phase space and functionally independent. On the other hand, a superintegrable system possesses $k$ additional integrals of motion, with $k=n-1$ being the maximum possible number. The concept of superintegrability can be defined in both classical and quantum mechanics, and it has been studied extensively for a very long period of time. The outcome of this long period of research activity has far-reaching consequences from both mathematical and physical points of view. There exist several exhaustive review articles in the literature which describe the history and current status of this topic \cite{Millerebook, MillerPostWinternitz:2013}. Starting with a spherically symmetric standard Hamiltonian (i.e., the potential $V=V(r)$ being velocity and spin independent), there exist only two superintegrable systems, namely the Kepler--Coulomb and the harmonic oscillator. Actually, these two potentials are exactly the ones which appear in Bertrand's celebrated theorem \cite{Bertrand1873,Goldstein}. Superintegrability of the Kepler--Coulomb problem is due to the existence of the conserved Laplace--Runge--Lenz vector \cite{Goldstein, Laplace, Lenz, Runge}, whilst in the case of the harmonic oscillator it is a consequence of the existence of the quadrupole Jauch--Fradkin tensor \cite{Fradkin, Jauch}. The systematic investigation of superintegrability was initiated by Pavel Winternitz and his collaborators in 1965 \cite{Fris}. They first considered quadratic superintegrability in Euclidean spaces, and the subject has subsequently been developed in many directions by several authors. For example, its close relation with multiseparability was studied in detail in the references \cite{Evans.b, Evans.a, Fris, Kalnins.a, Makarov, Miller.a}; the search for superintegrable systems in $2$- and $3$-dimensional spaces of constant and nonconstant curvature was carried out in the works \cite{Grosche1, Grosche2, Grosche3, Kalnins.a, Kalnins.d, Kalnins.c,Kalnins.b, Kalnins.e,Kalnins.f, Miller.b}; and their generalizations to $n$ dimensions were analyzed in the papers \cite{Kalnins.i, Kalnins.h, Rodriguez}. Another important research direction in this field is to consider Hamiltonians with magnetic field and/or spin. Superintegrability with magnetic field was first explored in the articles \cite{Berube, Dorizzi} and more recently developed in the articles \cite{BKSnobl, MarSnoblW1, MarSnoblW2}. The systematic investigation of integrability and superintegrability for systems involving particles with spin was initiated in the reference \cite{wy1}, and subsequently all the rotationally invariant superintegrable systems in {$E_3$} were classified in the articles \cite{wy3, wy2, YTW}. On the other hand, spin-dependent superintegrable systems were studied in the works \cite{Nikitin1, Nikitin2} for matrix potentials simulating charged or neutral fermions with non-trivial dipole moment in the presence of an electric field. Still another interesting direction is to go beyond quadratic superintegrability, i.e., the gene\-ral theory of higher-order superintegrability. Initial pioneering works were the articles of Drach \cite{Drach1,Drach2}, where $10$ potentials allowing third-order integrals of motion were announced.
However, much later it was shown that $7$ of these potentials are actually reducible: the third-order integral is the Poisson commutator of two second-order integrals \cite{Ranada, Tsiganov:2000}. Once again, the systematic investigation of higher-order superintegrability, in particular the third-order one, was initiated by Pavel Winternitz and his collaborators in the articles \cite{g, gW, Marquette.b, Marquette.a, Popper, TPW2010}. Around the same time, higher-order symmetry operators were calculated for the Schr\"{o}dinger ope\-ra\-tor, and the determining equations for the corresponding integrals of motion appeared in~\cite{Nikitin3}. Never\-the\-less, it was soon realized that the analysis becomes very complicated, and new ways of approaching the problem of higher-order superintegrability had to be considered. After the publication of the seminal paper ``{An infinite family of solvable and integrable quantum systems on a plane}'' \cite{TTWquantum}, the direction of research shifted decisively to {higher-order} integrability/superintegrability \cite{PostWinternitz:2010, PostWinternitz:2015, TTWclassical}. Moreover, in order to make them more easily tractable, new techniques and methods have been implemented in the study of higher-order {integrable} and superintegrable systems \cite{Chanu1, Chanu2}. From our point of view, one of the main issues in higher-order superintegrability is the classification of the superintegrable potentials. In the case of 2D separable potentials in Cartesian coordinates, an $N$th-order superintegrable system appeared for the first time in~\cite{Thompson}, where it was also stressed that the potential obeys nonlinear equations, which makes the general problem much more complicated. Recently, in 2018, by means of a systematic study, an infinite two-parameter family of superintegrable potentials embracing those found in~\cite{GungorKNN:2014, Thompson} was presented in the paper \cite{Grigoriev} by Grigoriev and Tsiganov. Their key element for constructing polynomial integrals of motion is the addition theorems for the action-angle variables, especially the Chebyshev theorem applied to integrals of differential binomials (see also~\cite{Tsiganov:2008B,Tsiganov:2008}). Such an elegant approach has the advantage that it uses the action-angle variables, which play a~fundamental role in classical mechanics. However, unlike the present direct approach, some systems can be missed and the gene\-ra\-li\-zation to the quantum case is not straightforward. The explicit list of all $N$th-order 2D (polynomial) superintegrable potentials separating in Cartesian coordinates is far from complete. Throughout much recent research activity in the field of higher-order superintegrability, it has become clear that three types of potentials can occur, namely the standard, the doubly exotic and the singly exotic potentials. Standard ones are solutions of a linear compatibility condition for the determining equations that govern the existence of a higher-order polynomial integral of motion. For doubly exotic potentials this linear compatibility condition is satisfied trivially (it~is identically zero), and the potentials satisfy non-linear equations. These classes of potentials, appearing in classical and quantum superintegrable systems, have been studied both in Cartesian and in polar coordinates \cite{AW, AMJVPW2015,AMJVPWIY2018, EWY, MSW}.
The aim of this work is to establish in detail general properties of $N$th-order superintegrable classical systems that allow separation of variables in Cartesian coordinates. It can be considered the classical counterpart of the general study of quantum superintegrable systems treated in~\cite{AMGen}. However, unlike the latter, in this work we also study the algebra of the integrals of motion and provide exhaustive results for the case $N=5$. In particular, it contains explicitly the classical analogues of all the quantum doubly exotic potentials obtained in~\cite{AW}. We~emphasize that for doubly exotic potentials, unlike the doubly standard ones, the limit from the quantum to the corresponding classical system (i.e., $\hbar \rightarrow 0$) is singular for all the cases studied in the present work. Thus, the corresponding quantum and classical solutions are not connected at~all. The Painlev\'e property characterizing the relevant determining equations in the quantum systems is completely lost in the classical case. In addition, a formulation of the inverse problem in superintegrability is briefly discussed. In the present article we focus on 2D classical Hamiltonian systems that are separable in Cartesian variables $(x,y)$ and also admit an extra polynomial integral of order $N>2$. The generic Hamiltonian is given by \begin{gather} {\mathcal H} = {\mathcal H}_1(x) + {\mathcal H}_2(y) \equiv \frac{1}{2}\big(p_1^2 + p_2^2\big) + V_1(x) + V_2(y), \label{Hcart} \end{gather} where $p_i$, $i=1,2$, are the canonical momenta conjugate to $x$ and $y$, respectively. It describes a~two-dimensional particle with unit mass $m=1$ moving in the potential $V(x,y)=V_1(x)+V_2(y)$. Thus, the phase space is four-dimensional. These systems are trivially second-order integrable because in addition to the Hamiltonian (\ref{Hcart}) they admit, for any $V_1(x)$ and $V_2(y)$, another 2nd-order symmetry of the form \begin{gather} {\mathcal X} = {\mathcal H}_1(x) - {\mathcal H}_2(y) = \frac{1}{2}\big(p_1^2 - p_2^2\big) + V_1(x) - V_2(y), \label{X} \end{gather} which Poisson commutes (i.e., $\{{\mathcal H},{\mathcal X}\}_{\rm PB}=0$) with the Hamiltonian (\ref{Hcart}). The existence of an $N$th-order third integral $\mathcal Y$ makes the system $N$th-order superintegrable (more integrals of motion than degrees of freedom). In this case, the system is maximally superintegrable. Notice that ${\mathcal H}$ (\ref{Hcart}) is ${\mathcal{S}}_2$-invariant under the permutation $x \Leftrightarrow y $ whilst the integral $\mathcal X$ is anti-invariant. From a physical point of view we are looking for 2D potentials $V(x,y)=V_1(x)+V_2(y)$ for which all the bounded trajectories are closed and periodic. It is worth mentioning that (\ref{Hcart}) can also be interpreted as the Hamiltonian of the relative motion of a two-body problem on the plane with translational invariance. In this case, $(x,y)$ are nothing but the Cartesian coordinates of the relative vector ${\bf r}={\bf r}_1-{\bf r}_2\equiv (x,y)$ between the two bodies. \begin{figure}[h]\centering \includegraphics[scale=0.3]{System2D_n.pdf} \put(-145,100){\makebox(0,0)[lb]{\color{yellow}\small$V(x,y)=V_1(x)+V_2(y)$}} \put(-169,74){\makebox(0,0)[lb]{\color{yellow}\small$m$}} \put(-222,51){\makebox(0,0)[lb]{\color{blue}\small$y$}} \put(-142,40){\makebox(0,0)[lb]{\color{blue}\small$x$}} \caption{The Hamiltonian (\ref{Hcart}) describes a particle with unit mass $m=1$ moving in a two-dimensional potential $V(x,y)=V_1(x)+V_2(y)$.
} \end{figure} The outline of the paper is as follows. In Section \ref{integralN}, for an arbitrary potential $V(x,y)$ not necessarily separable in a coordinate system we revisit the so-called determining equations governing the existence of a general $N$th-order polynomial integral of motion~${\mathcal Y}_N$. In particular, the dominant $N$th-order terms in~${\mathcal Y}_N$ lie in the enveloping algebra of the Euclidean Lie algebra~$e(2)$. From the next leading terms in~${\mathcal Y}_N$, a linear compatibility condition (LCC) can be obtained for the potential~$V$ only. The case of a separable potential in Cartesian coordinates is then analyzed in Sections~\ref{PSCC}--\ref{coefj4}, where we show and describe a~\textit{well} of determining equations and derive the first non-linear compatibility condition for the potential alone. The general form of the potentials is determined by solving these compatibility conditions. Afterwards, the surviving determining equations become linear and can be solved. In Section~\ref{families}, based on the LCC, we~introduce the doubly exotic potentials. A general formula for the corresponding integral~${\mathcal Y}_N$ is given. In Section~\ref{ODEvsAE} we discuss the role of the algebra of the integrals of motion in the search for superintegrable potentials, and a formulation of the inverse problem in superintegrability is commented on. Section~\ref{N3N4} is devoted to the known examples with $N=3,4$. Finally, in Sections~\ref{Ne5} and~\ref{Rne5} we consider the case $N=5$ and derive in detail all possible doubly exotic potentials. For conclusions see Section~\ref{conclu}. \section[Superintegrability: existence of an Nth-order polynomial integral] {Superintegrability: existence of an $\boldsymbol{N}$th-order polynomial\\ integral}\label{integralN} In the present article we are considering Hamiltonian systems separable in Cartesian coordinates, which are hence second-order integrable by construction. To further search for superintegrability, we need to give the conditions for the existence of an additional integral of motion which is a polynomial of order $N>2$ in the variables $p_1$, $p_2$. Although the general ideas for the existence of an $N$th-order integral are given in~\cite{Hietarinta1987, PostWinternitz:2015}, here we would like to summarize those results for the sake of completeness. \subsection{General form} The most general form of an $N$th-order polynomial integral ${\mathcal Y}_N$ is given by \begin{gather}\label{YNQSd} {\mathcal Y}_N = \sum_{\ell=0}^{[\frac{N}{2}]}\sum_{j=0}^{N-2\ell} f_{j,2\ell}\, p_1^j p_2^{N-j-2\ell}, \end{gather} see \cite{Hietarinta1987, PostWinternitz:2015}, where ${f}_{j,2\ell}={f}_{j,2\ell}(x,y,V)$ are assumed to be real functions which depend on the coordinates $x$ and $y$ and the potential $V(x,y)$. The integral ${\mathcal Y}_N$ (\ref{YNQSd}) can be conveniently rewritten as follows \begin{gather}\label{Yq} {\mathcal Y}_N = W_N + \text{lower order terms}, \end{gather} where the leading term $W_{N}$ in (\ref{Yq}) \begin{gather}\label{YNq} W_{N} = \sum_{0\leq m+n\leq N} A_{N-m-n,m,n} L_z^{N-m-n} p_1^m p_2^n, \end{gather} plays a fundamental role since it governs the existence or non-existence of the integral ${\mathcal Y}_N$; here $A_{N-m-n,m,n}$ are $\frac{(N+1)(N+2)}{2}$ real parameters and $L_z=x p_2-y p_1$ is the $z$-component of angular momentum. {If the quantity~${\mathcal Y}_N$ Poisson commutes with the Hamiltonian (\ref{Hcart}) then the system becomes $N$th-order ($N>1$) \emph{superintegrable}.
In fact, for 2D systems it would correspond to maximal superintegrability.} \subsection{The determining equations} The Poisson bracket of ${\mathcal Y}_N$ (\ref{YNQSd}) with the Hamiltonian $\mathcal H$ (\ref{Hcart}) gives a polynomial, in $p_1$ and $p_2$, of degree $(N+1)$. Explicitly, we have \begin{gather*} \{{\mathcal H},{\mathcal Y}_N\}_{\rm PB} = \sum_{n_1+n_2=0}^{N+1} M_{n_1,n_2}p_1^{n_1}p_2^{n_2}, \end{gather*} where the coefficients $ {M}_{n_1,n_2}= {M}_{n_1,n_2}(x,y;f_{j,2\ell},V,N)$ depend on the variables $x$, $y$, the functions $f_{j,2\ell}$ appearing in the integral ${\mathcal Y}_N$, and the potential $V(x,y)$ we are looking for; they also carry an $N$-dependence. Superintegrability requires \begin{gather}\label{Zxy} {M}_{n_1,n_2} = 0, \qquad n_1+n_2=0,1,2,\dots,(N+1), \end{gather} ($\{{\mathcal H},{\mathcal Y}_N \}=0$). For an arbitrary potential $V(x,y)$, not necessarily separable, the system (\ref{Zxy}) is equivalent to the following set of \textit{determining equations} (DE): \begin{gather}\label{quant deteq} (\partial_x{ f}_{j-1,2\ell} + \partial_y{f}_{j,2\ell}) - \big[(j+1){f}_{j+1, 2\ell-2} \big]\partial_x V - \big[(N-2\ell+2-j){f}_{j, 2\ell-2} \big]\partial_y V = 0, \end{gather} $\ell=0,1,2,\dots,\big[\frac{N}{2}\big]$, $j=0,1,2,\dots,(N-2\ell)$. In (\ref{quant deteq}), the real functions ${f}_{j, \ell} \equiv 0$ identically for $\ell<0$ and $j < 0$ as well as for $j > N - 2\ell$ (further details can be found in~\cite{Hietarinta1987, PostWinternitz:2015}). The DE correspond to the vanishing of all the coefficients, in the Poisson bracket $\{{\mathcal H},{\mathcal Y}_N\}_{\rm PB}$, multiplying the momentum terms of order $n_1+n_2=k=N+1,N-1,N-3,\dots,(N+1-2\ell)$. In particular, for odd $N$ the coefficient multiplying the zero-order term is simply $ f_{1,N-1}V_1' + f_{0,N-1}V_2' = 0,$ obtained from~(\ref{quant deteq}) by making the replacement $\ell \rightarrow \ell +1$. The DE govern the existence of the integral ${\mathcal Y}_N$. In general, the system (\ref{quant deteq}) is overdetermined. If the potential $V(x,y)$ is not known a priori, then it must be calculated from the compatibility conditions of the DE. The structure of the DE (\ref{quant deteq}) can be summarized as follows: \begin{itemize}\itemsep=0pt \item The set of DE (\ref{quant deteq}) can be seen as a \textit{well} of recursive equations. The coefficients $f_{j,2\ell}$ in~${\mathcal Y}_N$ depend on the preceding $f_{j,2k}$, $0\leq k<\ell$. \item The bottom level of equations (\ref{quant deteq}) corresponds to $\ell=0$. The associated DE do not depend on $V$, thus allowing exact solvability. Indeed, they define the coefficient-functions~$f_{j,0}$, $j=0,1,2,\dots,N$. The explicit expression for $f_{j,0}$ is given by \begin{gather*} f_{j,0} = \sum_{n=0}^{N-j} \sum_{m=0}^{j}\binom{ N-n-m}{j-m}A_{N-n-m,m,n}x^{N-j-n}(-y)^{j-m}, \end{gather*} see~\cite{Hietarinta1987, PostWinternitz:2015}. Accordingly, the leading part~(\ref{YNq}) of ${\mathcal Y}_N$ is a polynomial of order $N$ in the enveloping algebra of the Euclidean Lie algebra $e(2)$ with basis $\{p_1, p_2, L_z\}$. \item The 2nd level of DE (\ref{quant deteq}) occurs at $\ell=1$. They provide a \emph{linear compatibility condition} (LCC) for the potential $V$ only. For an arbitrary potential, this linear PDE can be written in the compact form \cite{Hietarinta1987,PostWinternitz:2015} \begin{gather}\label{LCC} \sum_{j=0}^{N-1}{(-1)}^j\partial_x^{N-1-j}\partial_y^{j}\big[ (j+1)f_{j+1,0}\partial_x V + (N-j)f_{j,0}\partial_yV \big] = 0.
\end{gather} The above equation is a necessary but not a sufficient condition for $\{{\mathcal H},{\mathcal Y}_N \}=0$. Also, in the quantum case the LCC remains identical to (\ref{LCC}). However, the corresponding DE do acquire $\hbar$-dependent terms. \item Beginning from $\ell=2$, the DE (\ref{quant deteq}) will lead to \emph{nonlinear compatibility conditions} (NLCC) for the potential $V$ alone. We~should recall here that in the quantum case these NLCC, unlike the LCC, do depend non-trivially on $\hbar$. Hence, the classical and quantum cases can greatly differ and must be treated separately. \end{itemize} \section{Superintegrable potentials separable in Cartesian coordinates}\label{PSCC} \subsection{The linear compatibility condition} In the case of a separable potential the LCC (\ref{LCC}) leads to the ordinary differential equations for $V_1(x)$ \begin{gather}\label{QX} \sum_{j=0}^{N-1}(j+1)! \sum_{n=0}^{N-j-1}\binom{N-1-n}{j} A_{N-1-n,1,n}\bigg(\frac{\rm d}{{\rm d}x}\bigg)^{N-j+1} \big[x^{N-j-n-1} V_1'(x)\big] = 0, \end{gather} and \begin{gather}\label{QX2} \sum_{j=0}^{N-1}(j\!+1)(j\!+1)! (-1)^{2j+1} \!\sum_{n=0}^{N-j-1}\!\!\binom{N\!-n}{j\!+1} A_{N-n,0,n}\bigg(\frac{\rm d}{{\rm d}x}\bigg)^{N-j+1}\!\! \big[x^{N-j-n-1} V_1'(x)\big]\! = 0. \end{gather} For superintegrability, $\{{\mathcal H},{\mathcal Y}_N \}=0$, these two linear equations (\ref{QX}) and~(\ref{QX2}) must be simultaneously satisfied. Similarly, for $V_2(y)$ there exist two ODEs which can be obtained from (\ref{QX}) and~(\ref{QX2}) using the symmetry $x\leftrightarrow y$, respectively. \section{The first nonlinear compatibility condition}\label{coefj4} In the case of an arbitrary odd {$N \geq 3$} polynomial integral of motion ${\mathcal Y}_N$, following the derivation presented in~\cite{AMGen} we describe the procedure to construct the first NLCC in detail. In general, this equation, obtained from the DE (\ref{quant deteq}) with $\ell=2$, provides the form of the doubly exotic potentials, see below. As a first step, one solves the DE (\ref{quant deteq}) with $\ell=1$. These equations define all the coefficient-functions ${f}_{j,2}$ appearing in the integral (\ref{YNQSd}). Secondly, from the DE (\ref{quant deteq}) with $\ell=2$ we compute the ($N-3$) functions ${f}_{j,4}$ except those with $j=\frac{N-5}{2}$ and $j=\frac{N-3}{2}$. Eventually, we arrive at the equations \begin{gather}\label{NLCCNODD} \partial_y{f}_{\frac{N-5}{2},4} = \tilde F_\frac{N-5}{2},\qquad \partial_x{ f}_{\frac{N-5}{2},4} + \partial_y{ f}_{\frac{N-3}{2},4} = \tilde F_\frac{N-3}{2},\qquad \partial_x{ f}_{\frac{N-3}{2},4} = \tilde F_\frac{N-1}{2}, \end{gather} where the $\tilde F$'s, by construction, are real functions that depend solely on the potential $V$ (and its derivatives). Finally, from (\ref{NLCCNODD}) follows the equation \begin{gather*} \partial^2_{x}\tilde F_\frac{N-5}{2} + \partial^2_{y}\tilde F_\frac{N-1}{2} - \partial^2_{x,y}\tilde F_\frac{N-3}{2} \equiv 0, \end{gather*} which gives the aforementioned NLCC for the potential $V$. In the case of arbitrary even {$N \geq 4$} the steps are quite similar, see details in~\cite{AMGen}. From (\ref{quant deteq}), it follows that further NLCC occur for each value of $\ell=3,4,\dots,\big[\frac{N}{2} \big]$. Nevertheless, these additional equations will simply restrict the general solution of the potential $V$ found from the previous NLCC with $\ell=2$.
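To make the construction concrete, consider the lowest odd case $N=5$ (treated in detail in Section~\ref{Ne5}): the chain (\ref{NLCCNODD}) then involves only the two functions $f_{0,4}$ and $f_{1,4}$,
\begin{gather*}
\partial_y f_{0,4}=\tilde F_{0},\qquad \partial_x f_{0,4}+\partial_y f_{1,4}=\tilde F_{1},\qquad \partial_x f_{1,4}=\tilde F_{2},
\end{gather*}
and the resulting compatibility condition reads $\partial_x^2\tilde F_{0}+\partial_y^2\tilde F_{2}-\partial_x\partial_y\tilde F_{1}=0$, a nonlinear equation for $V$ once the $\tilde F$'s are expressed through the potential.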
Summarizing, for a separable potential $V=V_1(x)+V_2(y)$ the set of DE with $\ell=0$ is given by a system of ODEs which do not depend on $V$; they specify the coefficient-functions $f_{j,0}$ ($j=0,1,2,\dots,N$) appearing in (\ref{YNQSd}). Then, the next level of DE with $\ell=1$ provides an LCC for the potential alone and also determines the functions $f_{j,2}$ ($j=0,1,2,\dots,N-2$). At all further levels $\ell\geqslant 2$ the DE and their compatibility conditions are nonlinear ODEs for $V$. These compatibility conditions are instrumental in specifying the general form of the potential $V$. \section{Doubly exotic potentials}\label{families} Hereafter, we will restrict ourselves to the case of doubly exotic potentials. These potentials satisfy the LCC (\ref{LCC}) trivially. In particular, the two linear ODEs (\ref{QX}) and~(\ref{QX2}) vanish identically for any $V_1(x)$. Hence, this LCC does not impose any constraint on $V_1(x)$ or on $V_2(y)$. This situation occurs when the number of coefficients $A_{N-m-n,m,n}$ that figure in the LCC is smaller than the number appearing in the integral ${\mathcal Y}_N$ (\ref{YNQSd}). In this case, we simply set the coefficients $A_{N-m-n,m,n}$ that appear in the LCC equal to zero; thus, it vanishes identically, but the integral ${\mathcal Y}_N$ is still of order $N$. In general, based on the LCC (\ref{LCC}) one can classify the $N$th-order superintegrable systems into three major classes: doubly exotic potentials, singly exotic potentials and standard potentials (see~\cite{AMGen}). This general classification is summarized in Table~\ref{FiC}. \begin{table}[h] \caption{Classification of $N$th-order superintegrable classical systems ($N>2$) separating in Cartesian coordinates. For a fixed value of $N$, there exist three generic types of potentials: doubly standard, doubly exotic and singly exotic potentials.}\label{FiC}\vspace{4pt}\centering {\small\begin{tabular}{c|c|c|c} \hline Potential\rule{0pt}{12pt} & Doubly standard & Doubly exotic & Singly exotic \\ $V=V_1(x)+V_2(y)$ &potentials &potentials &potentials \\[2pt] \hline Classical\rule{0pt}{12pt} &Both functions &Both $V_1(x)$, $V_2(y)$ &The $x$-component \\ superintegrable &$V_1(x)$, $V_2(y)$ satisfy &obey an NLCC, & $V_1(x)$ satisfies \\ systems & non-trivially &a non-linear ODE &a linear/non-linear \\ &the LCC, &which does not pass &ODE whilst the $y$-component \\ &a linear ODE&the Painlev\'{e} test. &$V_2(y)$ obeys \\ &&The LCC is identically zero.&a non-linear/linear ODE \\[2pt] \hline \end{tabular}} \end{table} From this point of view, Cases 1--3 of Proposition 1 presented in~\cite{Grigoriev} are doubly exotic potentials for $n_1,n_2>1$ whilst Cases 4--5 at $n>1$ cannot be doubly standard ones. Moreover, the aforementioned Cases 1--3 are nothing but particular solutions of the present direct approach. It is worth mentioning that we solely consider potentials where neither the $x$-part $V_1(x)$ nor the $y$-part $V_2(y)$ is a constant function. \subsection[Integral \protect{$Y\_N$} for doubly exotic potentials] {Integral $\boldsymbol{{\mathcal Y}_N}$ for doubly exotic potentials} In the present work we will focus on doubly exotic potentials. In this case, the corresponding $N$th-order terms of the integral ${\mathcal Y}_{N}$ (\ref{YNQSd}) are given by \begin{align} W_N = {}&A_{0,N,0}p_1^N + A_{0,0,N}p_2^N + A_{N-4,2,2} L_z^{N-4} p_1^2 p_2^2\nonumber \\ & + \sum_{4 < m+n < N;\,| m-n| < N-4} A_{N-m-n,m,n}L_z^{N-m-n} p_1^mp_2^n \nonumber \\ &+ \sum_{0 \leq m+n = N;\,| m-n| \leq N-4} A_{0,m,n} p_1^mp_2^n.
\label{PNQ} \end{align} Therefore, $W_N$ (\ref{PNQ}) carries $\frac{1}{2}\big(10-5N+N^2\big)$ fewer constants $A_{N-m-n,m,n}$ than the generic term~(\ref{YNQSd}). For large $N$, the number of constants $A_{N-m-n,m,n}$ in $W_N$ grows quadratically with~$N$. Notice that $L_z$ occurs in $W_N$ starting from $N=5$. For $N\geq5$, it can contain the powers $L_z, L_z^2, L_z^3,\dots,L_z^{N-4}$ only. Let us give the most general (leading) term $W_N$ of the integral ${\mathcal Y}_{N,{\rm doubly\ exotic}}$ for $N=3,4,5$ explicitly:
\begin{gather*}
W_3 = A_{030}p_1^3 + A_{003}p_2^3, \\
W_4 = A_{040}p_1^4 + A_{004}p_2^4 + A_{022}p_1^2 p_2^2, \\
W_5 = A_{050}p_1^5 + A_{005}p_2^5 + A_{032}p_1^3p_2^2 + A_{023}p_1^2p_2^3 + A_{122}L_zp_1^2 p_2^2.
\end{gather*}
\section[ODEs versus algebraic equations. Algebras of integrals of motion] {ODEs versus algebraic equations. Algebras of integrals \\of motion}\label{ODEvsAE} In the search for $N$th-order superintegrable potentials one faces the problem of solving an overdetermined system of ODEs, some of which are non-linear. Moreover, the number of equations involved increases with $N$. Therefore, the direct approach of solving all the DE (\ref{quant deteq}) is far from being an efficient method. In order to simplify it, in the present work we propose to combine two basic elements, namely the non-linear compatibility conditions and the use of the algebra of the integrals (see below). As a result, in some cases the ODEs are reduced to pure algebraic equations. From $\mathcal X$ and ${\mathcal Y}_N$, we introduce the quantity \begin{gather}\label{PIC} C \equiv \{{\mathcal Y}_N,{\mathcal X}\}_{\rm PB}, \end{gather} which is a polynomial function in $p_1$ and $p_2$ of degree ($N+1$). If ${\mathcal Y}_N$ is an integral of motion, then by construction $C$ (\ref{PIC}) is also conserved. The closure of the algebra generated by the integrals of motion $({\mathcal H},{\mathcal X},{\mathcal Y}_N,C)$ is guaranteed by the property of maximal superintegrability. The main question we aim to explore is the appearance and utility of a \emph{closed polynomial algebra}. It is important to mention that the study of the algebraic structure of the integrals of motion has proven to be fruitful in the classification of higher-order superintegrable classical and quantum systems \cite{Daskol,IanM}. Also, the explicit results obtained in Section \ref{Rne5} suggest exploring the inverse problem: namely, we take two polynomial functions ${\mathcal A}$ and ${\mathcal B}$ in the momentum variables $(p_1,p_2)$ and construct the new object ${\mathcal C}=\{{\mathcal A},{\mathcal B}\}_{\rm PB}$. Now, let us assume that the algebra generated by $({\mathcal A},{\mathcal B},{\mathcal C})$ is a closed polynomial algebra with polynomial coefficients in ${\mathcal H} $. The question is: \textit{under what conditions do these closure relations imply that ${\mathcal A}$ and ${\mathcal B}$ are integrals, i.e., that they Poisson commute with $\mathcal H$}? \section[Lowest order cases N=3 and N=4: doubly exotic potentials] {Lowest order cases $\boldsymbol{N=3}$ and $\boldsymbol{N=4}$: \\doubly exotic potentials}\label{N3N4} \subsection[Case N=3] {Case $\boldsymbol{N=3}$} The general integral (\ref{YNQSd}) at $N=3$ is given by \begin{gather}\label{Y3ggene} {\mathcal Y}_3 = f_{3,0}p_1^3 + f_{0,0}p_2^3 + {f}_{1,0} p_1p_2^2 + {f}_{2,0} p_2p_1^2 + {f}_{1,2} p_1 + {f}_{0,2} p_2.
\end{gather} The first set of DE (\ref{quant deteq}) with $\ell=0$ corresponds to the vanishing of all the coefficients, in the Poisson bracket $\{{\mathcal H},{\mathcal Y}_3\}_{\rm PB}$, multiplying the highest momentum terms of order $4$. They can be solved directly to give the functions $f_{j,0}$. For doubly exotic potentials, they read \begin{gather*} f_{30} = A_{030}, \qquad f_{20} = 0, \qquad f_{10} = 0, \qquad f_{00} = A_{003}. \end{gather*} Thus, (\ref{Y3ggene}) reduces to \begin{gather}\label{IYEn3} {\mathcal Y}_3 = A_{030}p_1^3 + A_{003}p_2^3 + {f}_{1,2} p_1 + {f}_{0,2} p_2. \end{gather} The next set of DE is obtained by setting $\ell=1$ in (\ref{quant deteq}). They correspond to the vanishing of all the coefficients, in the Poisson bracket $\{{\mathcal H},{\mathcal Y}_3\}_{\rm PB}$, multiplying the (next-to-leading) momentum terms of order $2$. These DE take the form \begin{gather} f_{1,2}^{(1,0)} = 3A_{030}V_1', \qquad f_{1,2}^{(0,1)} + f_{0,2}^{(1,0)} = 0, \qquad f_{0,2}^{(0,1)} = 3A_{003}V_2'. \label{N3f} \end{gather} The compatibility condition of the above system (\ref{N3f}) does not provide further information on the potential functions. However, the first and third equations can be solved immediately; they define the functions $f_{0,2}$ and $f_{1,2}$, \begin{gather} f_{1,2} = 3A_{030}V_1 + u_2(y), \qquad f_{0,2} = 3A_{003}V_2 + u_1(x), \label{N3fs} \end{gather} where $u_1(x)$ and $u_2(y)$ are arbitrary functions of $x$ and $y$, respectively. Substituting (\ref{N3fs}) into the second equation in (\ref{N3f}) we obtain the equation $u_1' + u_2' = 0$. Therefore, \begin{gather*} u_1 = \alpha_1 + \beta x, \qquad u_2 = \alpha_2 - \beta y, \end{gather*} where $\alpha_1$, $\alpha_2$ are constants of integration whilst $\beta$ is a separation constant. Finally, the last determining equation corresponds to the vanishing of the coefficient, in the Poisson bracket $\{{\mathcal H},{\mathcal Y}_3\}_{\rm PB}$, of order zero in the momentum variables. Explicitly, it takes the form \begin{gather} \label{N3DM0} 3 A_{030} V_1 V_1' + 3 A_{003} V_2 V_2' + \alpha_2 V_1' - \beta y V_1' + \beta x V_2' + \alpha_1 V_2' = 0. \end{gather} Non-trivial solutions of (\ref{N3DM0}) correspond to separation of variables, namely $\beta=0$. In this case, (\ref{N3DM0}) leads to the following uncoupled equations \begin{gather*} 3 A_{030} V_1 V_1' + \alpha_2 V_1' = \lambda, \qquad 3 A_{003} V_2 V_2' + \alpha_1 V_2' = -\lambda, \end{gather*} where $\lambda \neq 0$ {(otherwise the functions $V_{1,2}$ are just constants)} is the corresponding separation constant. Eventually, we arrive at the solutions \begin{gather} V_1 = \sqrt{\frac{2\lambda}{3A_{030}}}\sqrt{x},\qquad V_2 = \sqrt{\frac{-2\lambda}{3A_{003}}}\sqrt{y}. \label{N3SOL} \end{gather} In general, the Poisson bracket $C = \{{\mathcal Y}_3,{\mathcal X} \}_{\rm PB}$ between ${\mathcal Y}_N$ (\ref{YNQSd}) with $N=3$ and $\mathcal X$ (\ref{X}) is a~polynomial in the variables $p_1$ and $p_2$ of degree four. However, for the potential functions (\ref{N3SOL}), we obtain $C \propto \lambda $. Hence, in this case the algebra of the three integrals of motion ($C,{\mathcal Y}_3,{\mathcal X}$) takes the form \begin{gather*} \{ C,{\mathcal X}\}_{\rm PB} = 0, \qquad \{ C,{\mathcal Y}_3 \}_{\rm PB} = 0.
\end{gather*} \begin{figure}[h]\centering\vspace{2ex} \includegraphics[scale=0.4]{N3_n.pdf} \put(-170,135){\makebox(0,0)[lb]{\small$V=V_1(x)+V_2(y)$}} \put(-236,30){\makebox(0,0)[lb]{\small$0$}} \put(-245,46){\makebox(0,0)[lb]{\small$0.5$}} \put(-246,63){\makebox(0,0)[lb]{\small$1.0$}} \put(-247,80){\makebox(0,0)[lb]{\small$1.5$}} \put(-248,98){\makebox(0,0)[lb]{\small$2.0$}} \put(-228,22){\makebox(0,0)[lb]{\small$0$}} \put(-205,16){\makebox(0,0)[lb]{\small$0.2$}} \put(-175,11){\makebox(0,0)[lb]{\small$0.4$}} \put(-155,0){\makebox(0,0)[lb]{\small$x$}} \put(-143,5){\makebox(0,0)[lb]{\small$0.6$}} \put(-110,-1){\makebox(0,0)[lb]{\small$0.8$}} \put(-74,-8){\makebox(0,0)[lb]{\small$1.0$}} \put(-57,-2){\makebox(0,0)[lb]{\small$0$}} \put(-45,10){\makebox(0,0)[lb]{\small$0.2$}} \put(-32,22){\makebox(0,0)[lb]{\small$0.4$}} \put(-4,23){\makebox(0,0)[lb]{\small$y$}} \put(-21,32){\makebox(0,0)[lb]{\small$0.6$}} \put(-10,42){\makebox(0,0)[lb]{\small$0.8$}} \put(-2,52){\makebox(0,0)[lb]{\small$1.0$}} \caption{The doubly exotic potential (\ref{N3SOL}) corresponding to $N=3$. It admits the third-order integ\-ral~${\mathcal Y}_3$~(\ref{IYEn3}). The values $A_{030}=-A_{003}=\frac{2\lambda}{3}$ were used.} \end{figure} It is worth mentioning that the family of superintegrable potentials with \[ {\mathcal Y}_N = A_{0N0}p_1^N + A_{00N}p_2^N + {\rm lower\ order\ terms}, \] has been analyzed in~\cite{GungorKNN:2014} by means of {H}eisenberg-type higher order symmetries. For this family all three conserved quantities $({\mathcal H},{\mathcal X},{\mathcal Y}_N)$ admit separation of variables in Cartesian coordinates. However, such an approach does not allow us to obtain all the doubly exotic potentials with non-separable integrals ${\mathcal Y}_N$. \subsection[Case N=4] {Case $\boldsymbol{N=4}$} In this case $N=4$, for a doubly exotic potential the most general expression of the fourth-order integral ${\mathcal Y}_4$ reads \begin{gather*} {\mathcal Y}_4 = A_{040}p_1^4 + A_{004}p_2^4 + A_{022}p_1^2 p_2^2 + {\rm lower\ order\ terms}, \end{gather*} where $A_{040}$, $A_{004}$ and $A_{022}$ are real constants. It can immediately be rewritten as follows \begin{gather}\label{Y4DEinte} {\mathcal Y}_4 = A_{040}{( {\mathcal H}+{\mathcal X} )}^2 + A_{004}{( {\mathcal H}-{\mathcal X} )}^2 + A_{022}{( {\mathcal H}+{\mathcal X} )} {( {\mathcal H}-{\mathcal X} )} + {\rm lower\ order\ terms}. \end{gather} Now, without losing generality, one can always add to (\ref{Y4DEinte}) any arbitrary function of the second order trivial integrals ${\mathcal H}$ and $\mathcal X$. This implies that no \emph{bona fide} doubly exotic potentials, with a~non-trivial fourth order integral, exist. \section[Case N=5: doubly exotic potentials] {Case $\boldsymbol{N=5}$: doubly exotic potentials}\label{Ne5} We can write the most general $5$th-order polynomial integral ${\mathcal Y}_5$ in the form \begin{gather}\label{Y5QSd} {\mathcal Y}_5 = \sum_{\ell=0}^{2}\sum_{j=0}^{5-2\ell} f_{j,2\ell}\, p_1^j p_2^{5-j-2\ell}. \end{gather} \subsection{Determining equations} Putting $\ell=0$ in (\ref{quant deteq}) corresponds to the vanishing of all the coefficients, in the Poisson bracket $\{{\mathcal H},{\mathcal Y}_5\}_{\rm PB}$, multiplying the highest momentum terms of order $6$. 
They can be solved directly to give the functions $f_{j,0}$ \begin{gather*} f_{50} = A_{050}, \qquad f_{40} = 0, \qquad f_{30} = A_{032} - y A_{122}, \\ f_{20} = A_{023} + x A_{122}, \qquad f_{10} = 0, \qquad f_{00} = A_{005}, \end{gather*} where the condition that the LCC (\ref{LCC}) is satisfied trivially was imposed, namely we consider doubly exotic potentials only. It implies that the existence or non-existence of fifth-order doubly exotic potentials is governed by the 5 parameters $A_{050}$, $A_{005}$, $A_{032}$, $A_{023}$, $A_{122}$ only. The next set of~DE is obtained by setting $\ell=1$: \allowdisplaybreaks \begin{gather} f_{0,2}{}^{(0,1)} = 5 A_{005} V_2',\nonumber \\ f_{1,2}{}^{(0,1)} + f_{0,2}{}^{(1,0)} = 2 ( A_{023} + x A_{122} ) V_1',\nonumber \\ f_{1,2}{}^{(1,0)} + f_{2,2}{}^{(0,1)} = 3 ( A_{032} - y A_{122} ) V_1' + 3 ( A_{023} + x A_{122} ) V_2',\nonumber \\ f_{3,2}{}^{(0,1)} + f_{2,2}{}^{(1,0)} = 2 ( A_{032} - y A_{122} ) V_2',\nonumber \\ f_{3,2}{}^{(1,0)} = 5 A_{050} V_1'. \label{D1} \end{gather} Now, the three DE (\ref{quant deteq}) with $\ell=2$ are given by \begin{gather} f_{1,4}{}^{(1,0)} = 3 f_{3,2} V_1' + f_{2,2} V_2',\nonumber \\ f_{1,4}{}^{(0,1)} + f_{0,4}{}^{(1,0)} = 2 (f_{2,2} V_1' + f_{1,2} V_2' ),\nonumber \\ f_{0,4}{}^{(0,1)} = 3 f_{0,2} V_2' + f_{1,2} V_1'. \label{D2} \end{gather} Next, following the discussion of Section \ref{coefj4} we obtain from (\ref{D1}) the functions $f_{3,2}$, $f_{2,2}$, $f_{1,2}$, $f_{0,2}$ in terms of $V$ (see below). Afterwards, the r.h.s.\ in (\ref{D2}) would depend (non-linearly) on $V$ and its derivatives alone. Consequently, (\ref{D2}) leads to the first NLCC in the form \begin{gather}\label{D3} \partial^2_x f_{0,4}^{(0,1)} + \partial^2_y f_{1,4}^{(1,0)} - \partial_x \partial_y \big(f_{0,4}^{(1,0)} + f_{1,4}^{(0,1)}\big) = 0. \end{gather} Finally, the last determining equation with {$\ell=2$} reads \begin{gather*} f_{1,4}V_1' + f_{0,4}V_2' = 0. \end{gather*} \subsection{The (first) NLCC} The DE with $\ell=1$ (\ref{D1}) define the four functions $f_{0,2}$, $f_{1,2}$, $f_{2,2}$ and $f_{3,2}$ appearing in the integral~${\mathcal Y}_5$ (\ref{Y5QSd}) in front of the cubic terms ($p_1^{i} p_2^{j}$ with $i+j=3$). Explicitly, \begin{gather} f_{0,2} = 2 x A_{122} T_1'(x) + 2 A_{023} T_1'(x) + A_{122} T_1(x) + 5 A_{005} T_2'(y) + \alpha_1 - \beta_4 x^3 + \sigma_3 x^2 + \alpha_2 x,\nonumber \\ f_{1,2} = y ( -3 A_{122} T_1'(x) - \alpha _2+3 \beta _4 x^2-2 \sigma _3 x ) + 3 A_{032} T_1'(x) + \nu _1 + \nu _3 x^2 - \sigma _2 x,\nonumber \\ f_{2,2} = x ( 3 A_{122} T_2'(y)-\beta _2-3 \beta _4 y^2-2 \nu _3 y ) + 3 A_{023} T_2'(y) + \sigma _1 + \sigma _3 y^2 + \sigma_2 y,\nonumber \\ f_{3,2} = -2 y A_{122} T_2'(y) + 2 A_{032} T_2'(y) - A_{122} T_2(y) + 5 A_{050} T_1'(x)\nonumber \\ \hphantom{f_{3,2} =} {} + \beta _1 + \beta _4 y^3 + \nu _3 y^2 + \beta _2 y, \label{fsN5de} \end{gather} where \begin{gather*} V_1(x) \equiv T_1'(x), \qquad V_2(y) \equiv T_2'(y).
\end{gather*} Next, substituting (\ref{D2}) and (\ref{fsN5de}) into (\ref{D3}) we obtain the following non-linear compatibility condition (NLCC) \begin{gather} \text{NLCC} = T_1{}^{(4)} (3 T_1' (A_{032}-y A_{122})+\nu _1+3 \beta _4 x^2 y+x (-\sigma _2+\nu _3 x-2 \sigma _3 y)-\alpha _2 y )\nonumber \\ \hphantom{\text{NLCC} =} {} + T_2{}^{(4)} (3 T_2' (x A_{122}+A_{023})+\sigma _1-3 \beta _4 x y^2+y (\sigma _2+\sigma _3 y -2 \nu _3 x )-\beta _2 x)\nonumber \\ \hphantom{\text{NLCC} =} {}+ T_1{}^{(3)}(-9 y A_{122} T_1''+9 A_{032} T_1''-4 \sigma _2+8\nu _3 x+24 \beta _4 x y-8 \sigma _3 y)\nonumber \\ \hphantom{\text{NLCC} =} {} + T_2{}^{(3)} (9 x A_{122} T_2''+9 A_{023} T_2''+4 \sigma _2-8 \nu _3 x-24 \beta _4 x y+8 \sigma _3 y)\nonumber \\ \hphantom{\text{NLCC} =} {} + 12\nu _3 T_1'' + 12 \sigma _3 T_2'' - 36 \beta _4 x T_2'' + 36 \beta _4 y T_1'' = 0, \label{Enlcc} \end{gather} where $\alpha_1$, $\beta'$s, $\nu'$s and $\sigma'$s are constants to be determined. We~have the freedom to replace $T_{1(2)}$ by $T_{1(2)}+c $ for some real constant $c$ to simplify the expressions. Also we can shift the variables~$x$ or $y$. {Notice that the constants $A_{050}$ and $A_{005}$ do not appear in (\ref{Enlcc}).} In terms of the parameters $A_{5-m-n,m,n}$ that define the existence or non-existence of the integral ${\mathcal Y}_5$, we identify two cases for which the above NLCC (\ref{Enlcc}) admits separation of variables in Cartesian coordinates, namely: \begin{itemize}\itemsep=0pt \item[$(i)$] $A_{122} \neq 0$, $A_{023}=A_{032}=0 $, \item[$(ii)$] $A_{023}^2 + A_{032}^2 \neq 0$, $A_{122}=0$, \end{itemize} with $A_{050}$ and $A_{005}$ arbitrary. These two cases are ${\mathcal{S}}_2$-invariant under the permutation $x \Leftrightarrow y$ (thus, $p_1 \Leftrightarrow p_2$). Let us recall that the Hamiltonian $\mathcal H$ and the integral $\mathcal X$ are ${\mathcal{S}}_2$-invariant and ${\mathcal{S}}_2$-antiinvariant, respectively. Moreover, if \begin{itemize}\itemsep=0pt \item [$(iii)$] $A_{122}= A_{023}= A_{032} = 0,$ \end{itemize} with $A^2_{050}+A^2_{005}\neq 0$, the NLCC degenerates into a linear equation which must be identically zero for doubly exotic potentials. In such a case the NLCC does not provide any information on the potential. As a result of calculations, the cases $(i)$, $(ii)$ and $(iii)$ are the only generic ones that satisfy all the DE. \section{Results}\label{Rne5} \subsection{Superintegrable potentials} Below, {adopting the notation introduced in~\cite{AW}} we present the full list of doubly exotic fifth-order ($N=5$) superintegrable potentials: \medskip {\it Case} $(i)$. $\bullet$ The system $Q_1$: $A_{122}\neq 0$, $A_{032}=A_{023}=A_{050}=A_{005}=0$. This system corresponds to $A_{122}=1$, all other parameters $A_{ijk}=0$. In this case, by solving all the DE (\ref{D1})--(\ref{D3}) we eventually arrive to the first-order nonlinear ODEs \begin{gather} (T_1'){}^2 - 2 \beta_4 x^2 T_1' - 4 \beta _4 T_1 x + \beta _4^2 x^4 = 0,\nonumber \\ (T_2'){}^2 - 2 \beta _4 y^2 T_2' - 4 \beta_4 T_2 y + \beta_4^2 y^4 = 0, \label{VQ1} \end{gather} $\beta_4 \neq 0$ is a real constant. 
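As an elementary consistency check (obtained here by direct substitution), the cubic functions $T_1=\beta_4x^3$ and $T_1=\tfrac{\beta_4}{9}x^3$ indeed satisfy the first equation in (\ref{VQ1}); for instance,
\begin{gather*}
\big(3\beta_4x^2\big)^2-2\beta_4x^2\big(3\beta_4x^2\big)-4\beta_4x\big(\beta_4x^3\big)+\beta_4^2x^4=(9-6-4+1)\,\beta_4^2x^4=0,
\end{gather*}
and similarly for $T_2(y)$ by the $x\Leftrightarrow y$ symmetry. These are precisely the particular solutions quoted below for $\delta=0$.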
The corresponding fifth-order integral of motion is given by \begin{gather} {\mathcal Y}_5^{(Q_1)} = p_1^2 p_2^3 x - p_1^3 p_2^2 y + p_1^3 \big({-}2 y T_2'-T_2+\beta _4 y^3\big) + p_2^3 \big(2 x T_1'+T_1-\beta _4 x^3\big)\nonumber \\ \hphantom{{\mathcal Y}_5^{(Q_1)} = } {}+ p_1^2 p_2 x \big(3 T_2'-3 \beta _4 y^2\big) + p_1 p_2^2 y \big(3 \beta _4 x^2-3 T_1'\big) \nonumber \\ \hphantom{{\mathcal Y}_5^{(Q_1)} = } {}+p_1 \bigg({-}\frac{3}{2} \beta _4 x^2 y^2 T_2'' + 3 \beta _4 y^3 T_1' + \frac{3}{2} x^2 T_2' T_2'' - 6 y T_1' T_2' - 3 T_2 T_1'\bigg) \nonumber \\ \hphantom{{\mathcal Y}_5^{(Q_1)} = } {}+ p_2 \bigg(\frac{3}{2} \beta _4 x^2 y^2 T_1'' - 3 \beta _4 x^3 T_2' - \frac{3}{2} y^2 T_1' T_1'' + 6 x T_1' T_2' + 3 T_1 T_2'\bigg). \label{Y5Q1} \end{gather} From $\mathcal X$ and ${\mathcal Y}_5^{(Q_1)}$, we build the quantity\vspace{-1ex} \begin{gather*} C \equiv \big\{{\mathcal Y}_5^{(Q_1)},{\mathcal X}\big\}_{\rm PB}, \end{gather*} which is a polynomial function in $p_1$ and $p_2$ of sixth degree. By construction, it is an integral when (\ref{VQ1}) are satisfied. Now, if we demand that the three elements $\big(\mathcal X$, ${\mathcal Y}_5^{(Q_1)},C\big)$ generate a~\emph{closed polynomial algebra} we eventually arrive at a nonlinear first-order differential equation for $T_1(x)$ and similarly for $T_2(y)$. Therefore, from these equations and (\ref{VQ1}) we can eliminate the first-derivative $T_1'$, $T_2'$ terms and obtain an algebraic equation for both $T_1(x)$ and $T_2(y)$. The solutions of such algebraic equations turn out to be the general solutions of (\ref{VQ1}). Explicitly, these algebraic equations take the form \begin{gather} 3\beta _4 T_1^2 + 8 \beta _4^{3/2} x^{3/2} T_1^{3/2} + 6 \beta _4^2 x^3 T_1 - \beta _4^3 x^6 - \delta = 0,\nonumber \\ 3 \beta _4 T_2^2 + 8 \beta _4^{3/2} y^{3/2} T_2^{3/2} + 6 \beta _4^2 y^3 T_2 - \beta _4^3 y^6 - \delta = 0, \label{AEQ1} \end{gather} where $\delta$ is an arbitrary constant. In the case $\delta=0$, we immediately obtain the particular solutions \begin{equation*} T_1(x) = \beta_4x^3,\ \frac{\beta_4}{9}x^3, \qquad \text{and} \qquad T_2(y) = \beta_4y^3,\ \frac{\beta_4}{9}y^3, \end{equation*} which correspond to a well-known lower-order superintegrable system. \begin{figure}[h]\centering \includegraphics[scale=0.4]{Q1_n.pdf} \put(-150,170){\makebox(0,0)[lb]{\small$V_1(x)$}} \put(-149,146){\makebox(0,0)[lb]{\small$3$}} \put(-149,109){\makebox(0,0)[lb]{\small$2$}} \put(-148.3,73){\makebox(0,0)[lb]{\small$1$}} \put(-156,3){\makebox(0,0)[lb]{\small$-1$}} \put(-7,39){\makebox(0,0)[lb]{\small$x$}} \put(-282,30){\makebox(0,0)[lb]{\small$-1.5$}} \put(-239,30){\makebox(0,0)[lb]{\small$-1.0$}} \put(-197,30){\makebox(0,0)[lb]{\small$-0.5$}} \put(-107,31){\makebox(0,0)[lb]{\small$0.5$}} \put(-65,31){\makebox(0,0)[lb]{\small$1.0$}} \put(-23,31){\makebox(0,0)[lb]{\small$1.5$}} \caption{Case $N=5$: the $x$-component $V_1(x)$ of the doubly exotic potential $V(x,y)=V_1(x)+V_2(y)$ of type $Q_1$. It corresponds to the fifth-order integral ${\mathcal Y}_5$ (\ref{Y5Q1}). From the algebraic equations (\ref{AEQ1}) we obtain the four solutions $V_{1,i}(x) =T'_{1,i}$, $i=1,2,3,4$, displayed above. In the case $Q_1$ the $y$-component~$V_2(y)$ is of the same form with four similar solutions $V_{2,i}(y)$. The values $\beta_4=\delta=1$ were used.
} \end{figure} \begin{figure}[h]\centering \vspace{2ex} \includegraphics[scale=0.4]{Q1Bound_n.pdf} \put(-170,186){\makebox(0,0)[lb]{\small$V=V_1(x)+V_2(y)$}} \put(-264,58){\makebox(0,0)[lb]{\small$-1.25$}} \put(-260,68.5){\makebox(0,0)[lb]{\small$-1.0$}} \put(-267,77.5){\makebox(0,0)[lb]{\small$-0.75$}} \put(-263,87.5){\makebox(0,0)[lb]{\small$-0.5$}} \put(-269,97){\makebox(0,0)[lb]{\small$-0.25$}} \put(-251,108){\makebox(0,0)[lb]{\small$0$}} \put(-265,117){\makebox(0,0)[lb]{\small$0.25$}} \put(-236,51){\makebox(0,0)[lb]{\small$0$}} \put(-215,40){\makebox(0,0)[lb]{\small$0.2$}} \put(-188,29){\makebox(0,0)[lb]{\small$0.4$}} \put(-200,20){\makebox(0,0)[lb]{\small$x$}} \put(-159,18){\makebox(0,0)[lb]{\small$0.6$}} \put(-128,6){\makebox(0,0)[lb]{\small$0.8$}} \put(-96,-6){\makebox(0,0)[lb]{\small$1.0$}} \put(-80,0){\makebox(0,0)[lb]{\small$0$}} \put(-62,20){\makebox(0,0)[lb]{\small$0.2$}} \put(-47,38){\makebox(0,0)[lb]{\small$0.4$}} \put(-17,40){\makebox(0,0)[lb]{\small$y$}} \put(-31,56){\makebox(0,0)[lb]{\small$0.6$}} \put(-18,71){\makebox(0,0)[lb]{\small$0.8$}} \put(-7,84){\makebox(0,0)[lb]{\small$1.0$}} \caption{A doubly exotic potential $Q_1$ corresponding to $N=5$. It admits the fifth-order integ\-ral~${\mathcal Y}_5$~(\ref{Y5Q1}). It also possesses bounded trajectories which by construction are closed and periodic. The values $\beta_4=\delta=1$ were used.} \end{figure} The algebra generated by the integrals takes the form \begin{gather*} \{C,{\mathcal X}\}_{\rm PB} = - 24\beta_4{\mathcal Y}_5^{(Q_1)}, \\ \big\{C,{\mathcal Y}_5^{(Q_1)} \big\}_{\rm PB} = 12{\mathcal X}\big({\mathcal H}^2-{\mathcal X}^2\big)^2 - 48\delta{\mathcal X}{\mathcal H}. \end{gather*} In the corresponding quantum system analyzed in~\cite{AW}, the case $(i)$ splits into two subclasses of integrals ${\mathcal Y}_5$ that solely differ in their lower-order $\hbar$-dependent terms. Consequently, two systems called $Q_1$ and $Q_2$ occur. However, in the classical limit $\hbar \rightarrow 0$ the two systems $Q_1$ and~$Q_2$ coincide. Next, within case $(ii)$ the classical systems $Q_3$ ($A_{023}A_{050}A_{005}\neq 0$, $A_{122}=A_{032}=0$) and $Q_4$ ($A_{023}A_{005}\neq 0$, $A_{122}=A_{050}=A_{032}=0$) are \emph{not} superintegrable (like in the quantum case). \medskip {\it Case} $(ii)$. $\bullet$ The system $Q_5$: $A_{023}\neq 0$, $A_{122}=A_{032}=A_{005}=0$, $A_{050}$ arbitrary. This system corresponds to $A_{023}=1$ and arbitrary $A_{050}$, all other $A_{ijk}=0$. Again, by solving all the DE (\ref{D1})--(\ref{D3}) we arrive at the first-order nonlinear ODE for $T_1$ \begin{gather}\label{VQ5} 5 A_{050} (T_1')^3 - 12 \tau^2 x T_1' + 3\beta _1 (T_1')^2 - 12 \tau^2 T_1 + \mu = 0, \end{gather} where $\tau \neq 0$, $\beta_1$ and $\mu$ are real constants, whereas \begin{gather*} V_2 \equiv T_2' = \pm2\tau\sqrt{-y}. \end{gather*} The corresponding highest-order integral of motion reads \begin{gather} {\mathcal Y}_5^{(Q_5)} = A_{050}p_1^5 + p_1^2p_2^3 + p_1^3 (5 A_{050} T_1'+\beta_1) + p_1 \bigg(\frac{15}{2} A_{050} (T_1'){}^2+3\beta_1 T_1'-6 x \tau^2\bigg)\nonumber \\ \hphantom{{\mathcal Y}_5^{(Q_5)} =} {} + 6\tau\sqrt{-y} p_2 p_1^2 + 2T_1' p_2^3 + 12\tau\sqrt{-y} T_1' p_2. \label{Y5Q5} \end{gather} Clearly, the case $A_{032}=1$ and arbitrary $A_{005}$ (all other $A_{ijk}=0$) also leads to a superintegrable potential. It can simply be obtained by replacing $A_{050} \rightarrow A_{005}$ and making the permutation $x \Leftrightarrow y$ ($V_1 \Leftrightarrow V_2$) in (\ref{VQ5}) and (\ref{Y5Q5}).
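We note in passing (a remark added for orientation) that setting $A_{050}=0$ in (\ref{VQ5}) and (\ref{Y5Q5}) reduces this case to
\begin{gather*}
3\beta _1 (T_1')^2 - 12 \tau^2 x T_1' - 12 \tau^2 T_1 + \mu = 0,
\end{gather*}
together with the corresponding reduction of ${\mathcal Y}_5^{(Q_5)}$; this is precisely the particular case listed below as the system $Q_7$.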
\begin{figure}[h!]\centering \includegraphics[scale=0.4]{Q5_n.pdf} \put(-150,171){\makebox(0,0)[lb]{\small$V_1(x)$}} \put(-150,148){\makebox(0,0)[lb]{\small$4$}} \put(-150,119){\makebox(0,0)[lb]{\small$2$}} \put(-158,61){\makebox(0,0)[lb]{\small$-2$}} \put(-158,32){\makebox(0,0)[lb]{\small$-4$}} \put(-158,3){\makebox(0,0)[lb]{\small$-6$}} \put(-5,91){\makebox(0,0)[lb]{\small$x$}} \put(-280,82){\makebox(0,0)[lb]{\small$-10$}} \put(-213,82){\makebox(0,0)[lb]{\small$-5$}} \put(-83,82){\makebox(0,0)[lb]{\small$5$}} \put(-23,82){\makebox(0,0)[lb]{\small$10$}} \caption{Case $N=5$: the $x$-component $V_1(x)$ of the doubly exotic potential $V(x,y)=V_1(x)+V_2(y)$ of type $Q_5$. It admits the fifth-order integral ${\mathcal Y}_5$ (\ref{Y5Q5}). From the equation (\ref{VQ5}) we obtain the three numerical solutions $V_{1,i}(x) =T'_{1,i}$, $i=1,2,3$, displayed above. The values $A_{050}=\frac{1}{5}$, $\tau=\frac{1}{\sqrt{12}}$, $\beta_1=1$ and $\mu=-3$ were used.} \end{figure} {\samepage In this case, the algebra generated by the integrals $C=\big\{{\mathcal Y}_5^{(Q_5)},{\mathcal X}\big\}_{\rm PB}$, ${\mathcal Y}_5^{(Q_5)}$ and $\mathcal X$ takes the form \begin{gather*} \{C,{\mathcal X}\}_{\rm PB} = 0, \\ \big\{ C,{\mathcal Y}_5^{(Q_5)} \big\}_{\rm PB} = -144\tau^4{({\mathcal H}+{\mathcal X})}, \end{gather*} and it does not provide further information about the solutions of (\ref{VQ5}). } $\bullet$ The system $Q_6$: $A_{023}A_{032}\neq 0$, $A_{122}=A_{050}=A_{005}=0$. This system corresponds to $A_{023} A_{032}\neq 0$, all other $A_{ijk}=0$. \begin{gather} 3 A_{032} (T_1'){}^2 - \sigma _2 x T_1' - \sigma _2 T_1 = 0,\nonumber \\ 3 A_{023} (T_2'){}^2 + \sigma _2 y T_2' + \sigma _2 T_2 = 0, \label{VQ6} \end{gather} $\sigma _2 \neq 0$ is a real constant. \begin{gather} {\mathcal Y}_5^{(Q_6)} = A_{032}p_2^2 p_1^3 + A_{023}p_2^3 p_1^2 + 2 p_2^3 A_{023} T_1' + 2 p_1^3 A_{032} T_2' + p_1 p_2^2 \bigg(3 A_{032} T_1'-\frac{\sigma _2 x}{2}\bigg)\nonumber \\ \hphantom{{\mathcal Y}_5^{(Q_6)} =} {}+ p_1^2 p_2 \bigg(3 A_{023} T_2'+\frac{\sigma _2 y}{2}\bigg) + p_1 \bigg(3 x A_{023} T_2' T_2''+6 A_{032} T_1' T_2'+\frac{1}{2} \sigma _2 x y T_2''\bigg)\nonumber \\ \hphantom{{\mathcal Y}_5^{(Q_6)} =} {}+ p_2 \bigg(3 y A_{032} T_1' T_1''+6 A_{023} T_1' T_2'-\frac{1}{2} \sigma _2 x y T_1''\bigg). \label{Y5Q6} \end{gather} \begin{figure}[h]\centering \vspace{2ex} \includegraphics[scale=0.4]{Q6Bound_n.pdf} \put(-165,131){\makebox(0,0)[lb]{\small$V=V_1(x)+V_2(y)$}} \put(-252,14){\makebox(0,0)[lb]{\small$-0.4$}} \put(-253,30){\makebox(0,0)[lb]{\small$-0.3$}} \put(-253,46){\makebox(0,0)[lb]{\small$-0.2$}} \put(-253,63){\makebox(0,0)[lb]{\small$-0.1$}} \put(-239,80){\makebox(0,0)[lb]{\small$0$}} \put(-247,98){\makebox(0,0)[lb]{\small$0.1$}} \put(-231,6){\makebox(0,0)[lb]{\small$-5$}} \put(-177,3){\makebox(0,0)[lb]{\small$-4$}} \put(-147,-9){\makebox(0,0)[lb]{\small$x$}} \put(-118,0){\makebox(0,0)[lb]{\small$-3$}} \put(-57,-3){\makebox(0,0)[lb]{\small$-2$}} \put(-10,3){\makebox(0,0)[lb]{\small$-5$}} \put(-8,19){\makebox(0,0)[lb]{\small$-4$}} \put(-6,32){\makebox(0,0)[lb]{\small$-3$}} \put(-5,45){\makebox(0,0)[lb]{\small$-2$}} \put(10,28){\makebox(0,0)[lb]{\small$y$}} \caption{A doubly exotic potential $Q_6$ corresponding to $N=5$. It admits the fifth-order integral~${\mathcal Y}_5$~(\ref{Y5Q6}). It also possesses bounded trajectories which by construction are closed and periodic. 
The values $A_{032}=-A_{023}=\frac{1}{12}$, $\sigma_1=1$ were used.} \end{figure} In this case, the algebra generated by the integrals $C=\big\{{\mathcal Y}_5^{(Q_6)},{\mathcal X}\big\}_{\rm PB}$, ${\mathcal Y}_5^{(Q_6)}$ and $\mathcal X$ takes the form \begin{gather*} \{ C,{\mathcal X}\}_{\rm PB} = 0, \\ \big\{C,{\mathcal Y}_5^{(Q_6)} \big\}_{\rm PB} = 2{ \mathcal X}\sigma_2^2\big({\mathcal H}^2 - {\mathcal X}^2\big)^2, \end{gather*} and it does not provide further information about the solutions of (\ref{VQ6}). However, it is easy to check that \begin{gather*} T_1(x) = \frac{W^2(x) - \sigma_2 x^2}{12 A_{032}}, \qquad T_2(y) = -\frac{W^2(y) - \sigma_2 y^2}{12 A_{023}}, \end{gather*} satisfy (\ref{VQ6}), where $W=W(z)$ is given by the following third order polynomial equation \begin{gather*} \big(W - z \sqrt{\sigma_2}\big)\big(W + 2 z \sqrt{\sigma_2}\big)^2 + \tau = 0, \end{gather*} here $\tau \neq 0$ is an integration constant. $\bullet$ The system $Q_7$. This system is a particular case of system $Q_5$. It corresponds to the situation where $A_{023}=1$ and all other $A_{ijk}=0$. \begin{gather* 3\beta _1 (T_1'){}^2 - 12 \tau^2 x T_1' - 12 \tau^2 T_1 + \mu = 0,\qquad V_2 \equiv T_2' = \pm2 \tau \sqrt{-y}, \end{gather*} $\tau \neq 0$, $\beta_1$ and $\mu$ are real constants. \begin{gather*} {\mathcal Y}_5^{(Q_7)} = p_1^2 p_2^3 + p_1^3\beta_1 + p_1 \big(3\beta_1 T_1'-6 x \tau^2\big) + 6\tau\sqrt{-y} p_2 p_1^2+ 2 T_1'p_2^3 + 12\tau\sqrt{-y} T_1'p_2. \end{gather*} \pagebreak {\it Case} $(iii)$. $\bullet$ The system $Q_8$: $A_{050}^2+A^2_{005}\neq 0$, $A_{122}=A_{023}=A_{032}=0$. This system corresponds to the case $A_{050}^2+A^2_{005}\neq 0$ and all other $A_{ijk}=0$. \begin{gather} A_{050} T_1'{}^3 + \beta _1 T_1'{}^2 + \theta _1 T_1' - \Lambda x + \kappa _1 = 0,\nonumber \\ A_{005} T_2'{}^3 + \alpha _1 T_2'{}^2 + \phi_1 T_2' + \Lambda y + \omega _1 = 0, \label{VQ8} \end{gather} $\Lambda \neq 0$, $\alpha _1,\beta_1,\theta _1,\kappa _1,\phi_1$ and $\omega _1$ are real constants. \begin{gather*} {\mathcal Y}_5^{(Q_8)} = A_{050}p_1^5 + A_{005}p_2^5 + p_1^3 \bigg(5 A_{050} T_1'+\frac{5 \beta _1}{3}\bigg) + p_2^3 \bigg(5 A_{005} T_2'+\frac{5 \alpha _1}{3}\bigg) \\ \hphantom{{\mathcal Y}_5^{(Q_8)} = } {}+ p_1 \bigg(\frac{15}{2} A_{050} T_1'{}^2+5 \beta _1 T_1'+\frac{5 \theta _1}{2}\bigg) + p_2 \bigg(\frac{15}{2} A_{005} T_2'{}^2+5 \alpha _1 T_2'+\frac{5 \phi _1}{2}\bigg). \end{gather*} In this case, for the solutions of (\ref{VQ8}) the Poisson bracket $C = \big\{{\mathcal Y}_5^{(Q_8)},{ \mathcal X} \big\}_{\rm PB} \propto \lambda$. Thus, the algebra of the integrals of motion takes the form \begin{equation*} \{C,{ \mathcal X}\}_{\rm PB} = 0, \qquad \big\{ C,{\mathcal Y}_5^{(Q_8)}\big\}_{\rm PB} = 0. \end{equation*} Again, the system $Q_8$ was found in~\cite{GungorKNN:2014} by means of Heisenberg-type higher order symmetries. All three conserved quantities $\big({\mathcal H},{\mathcal X},{\mathcal Y}_5^{(Q_8)}\big)$ admit separation of variables in Cartesian coordinates. The particular case with $\beta_1=\theta_1=\kappa_1=\alpha_1=\phi_1 =\omega_2=0$, thus $V_1 \propto x^{\frac{1}{3}}$ and $V_2 \propto y^{\frac{1}{3}}$, was studied in~\cite{Grigoriev} using action-angle variables. $\bullet$ The system $Q_9$. This system is a particular case of system $Q_8$. It corresponds to the situation where $A_{050}=1$ and all other $A_{ijk}=0$. 
\begin{gather*} A_{050} T_1'{}^3 + \beta _1 T_1'{}^2 + \theta _1 T_1' - \Lambda x + \kappa _1 = 0,\nonumber \\ \alpha _1 T_2'{}^2 + \phi_1 T_2' + \Lambda y + \omega _1 = 0, \label{VQ9} \end{gather*} $\Lambda \neq 0$, $\beta_1$, $\theta_1$, $\kappa_1$, $\phi_1$, $\omega _1$ and $\alpha_1^2+\phi_1^2 \neq0$ are real constants. \begin{gather*} {\mathcal Y}_5^{(Q_9)} = A_{050}p_1^5 + p_1^3 \bigg(5 A_{050} T_1'+\frac{5 \beta _1}{3}\bigg) + \frac{5 \alpha _1}{3}p_2^3 + p_1 \bigg(\frac{15}{2} A_{050} T_1'{}^2+5 \beta _1 T_1'+\frac{5 \theta _1}{2}\bigg) \\ \hphantom{{\mathcal Y}_5^{(Q_9)} =} {}+ p_2 \bigg(5 \alpha _1 T_2'+\frac{5 \phi _1}{2}\bigg). \end{gather*} \section{Conclusions}\label{conclu} We considered $N$th-order superintegrable classical systems in a two-dimensional Euclidean space separating in Cartesian coordinates. They are characterized by three polynomial (in momentum variables) integrals of motion (${\mathcal H},{\mathcal X},{\mathcal Y}_N$). Let us summarize the main results reported in this paper: \begin{enumerate}\itemsep=0pt \item Higher-order ($N>2$) superintegrable classical systems \[ {\mathcal H} = \frac{1}{2}\big(p_1^2 + p_2^2\big) + V_1(x) + V_2(y), \] can be classified into three classes: doubly standard, singly exotic and doubly exotic potentials. This classification is based on the nature of the equation that defines the most general form of the potential functions $V_1(x)$ and $V_2(x)$. For doubly standard potentials this equation is a linear compatibility condition necessary for the existence of the $N$th order integral of motion ${\mathcal Y}_N$ (in general a PDE of order $N$ in two variables) whilst in the case of doubly exotic potentials it is given by a nonlinear compatibility condition. \item From the equation $\{{\mathcal Y}_N,{\mathcal H}\}_{\rm PB}=0$, we show in a systematic manner how to find and successively solve a “well” of NLCC separately for $V_1(x)$ and $V_2(y)$. It was also indicated that requiring the integrals of motion (${C}=\{{\mathcal X},{\mathcal Y}_N\}_{\rm PB},{\mathcal X},{\mathcal Y}_N$) to span a closed polynomial algebra may help to simplify (reduce the order) of the DE and eventually to find the explicit solutions $V_1(x)$ and $V_2(y)$. \item All fifth-order ($N=5$) superintegrable doubly exotic potentials were derived explicitly by solving the set of DE. {The DE lead to first order non-linear ODEs that define the functions $V_1$ and $V_2$, respectively. Unlike the quantum case, these equations do not have the Painlev\'e property. This was verified either by finding their general solutions explicitly or by applying a standard test to them~\cite{Baldwin}}. Interestingly, at $N=5$ doubly exotic confining potentials appear for the first time. At $N=4$ no doubly exotic potentials occur at all. \item The present study suggests to explore the inverse problem, namely we take two polynomial functions $({\mathcal A}$ and ${\mathcal B})$ in momentum variables ($p_1,p_2$) and construct the new object ${\mathcal C}=\{{\mathcal A},{\mathcal B}\}_{\rm PB}$. If the algebra generated by $({\mathcal A},{\mathcal B},{\mathcal C})$ is a closed polynomial algebra with polynomial coefficients in ${\mathcal H} $, then \textit{under what conditions this closure relations imply that~${\mathcal A}$ and~${\mathcal B}$ are integrals, i.e., they Poisson commute with~$\mathcal H$}? 
\end{enumerate} Finally, a direct computation for the next two cases $N=6,7$ leads us to the following conjecture: \begin{Conjecture*} There exists an infinite family of $N$th-order superintegrable systems with an inte\-gral \begin{gather*} {\mathcal Y}_{N}^{(\rm Doubly\ exotic)} = L^{(N-4)}_zp_1^{2}p_2^{2} + {\rm(lower\ order\ terms)}, \qquad N\geq 5, \\ \big\{{\mathcal Y}_{N}^{(\rm Doubly\ exotic)},{\mathcal H}\big\}_{\rm PB} = 0. \end{gather*} The associated potential $V$ can be written as follows \begin{gather*} V = V_1(x) + V_2(y) = \mathcal{G}'(x;N) + \mathcal{G}'(y;N), \end{gather*} here $\mathcal{G}=\mathcal{G}(u;N)$ obeys a nonlinear first-order ODE of the form \begin{gather} {\mathcal{G}}'\big[6u^{N-4}{\mathcal{G}}' +4{(N-5)}u^{N-5}{\mathcal{G}} + F_{1}(u) + \sigma u^{N-2} \big] \nonumber \\ \qquad {}+ {\mathcal{G}}\big[ 2{(N-5)}u^{N-6} {\mathcal{G}} +F_{2}(u) -2\sigma u^{N-3} \big] + F_{3}(u) + bu^N = 0. \label{F2N} \end{gather} The three functions $F_{q}$'s in \eqref{F2N} are polynomials in the variable $u$ of degree at most $(N-1)$, and $\sigma$, $b$ are real parameters as well. The equation~\eqref{F2N} is in complete agreement with the limit $\hbar \rightarrow 0$ of its quantum analogue treated in~{\rm \cite{AMGen}}. In future work, we plan to establish in detail under what conditions the closed algebra of the integrals of motion is polynomial, and how to use it as a new systematic tool to solve the determining equations in a simpler and more efficient manner. \end{Conjecture*} \subsection*{Acknowledgments} \.{I}Y and AMER during a sabbatical leave and a postdoctoral academic stay at the Centre de Recherches Math\'ematiques, Universit\'e de Montr\'eal, respectively, were introduced to the subject of higher-order superintegrability by Pavel Winternitz. His enormous influence is present in this study as it does in the whole subject. It is with admiration and great affection that we dedicate this paper to his memory. We~thank the anonymous referees and the editor for their valuable comments and constructive suggestions on the manuscript. \pdfbookmark[1]{References}{ref}
\section{Introduction} Datasets form the backbone of machine learning research (MLR). They are deeply integrated into work practices of machine learning researchers, serving as resources for training and testing machine learning models. Datasets also play a central role in the organization of MLR as a scientific field. Benchmark datasets provide stable points of comparison and coordinate scientists around shared research problems. Improved performance on benchmarks is considered a key signal for collective progress. Such performance is thus an important form of scientific capital, sought after by individual researchers and used to evaluate and rank their contributions. Datasets exemplify machine learning tasks, typically through a collection of input and output pairs \citep{Schlangen2020}. When they institutionalize benchmark datasets, task communities implicitly endorse these data as meaningful abstractions of a task or problem domain. The institutionalization of benchmarks influences the behavior of both researchers and end-users \citep{Dotan2020}. Because advancement on established benchmarks is viewed as an indicator of progress, researchers are encouraged to make design choices that maximize performance on benchmarks, as this increases the legitimacy of their work. Institutionalization signals to industry adopters that models can be expected to perform in the real world as they do on the benchmark datasets. Close alignment of benchmark datasets with ``real world'' tasks is thus critical to accurate measurement of collective scientific progress and to safe, ethical, and effective deployment of models in the wild. Given their central role in the social and scientific organization of MLR, benchmark datasets have also become a central object of critical inquiry in recent years \citep{paulladaDataItsDis2020}. Dataset audits have revealed concerning biases that have direct implications for algorithmic bias and harms \citep{gendershades, Shankar2017NoCW, zhao-etal-2018-gender, dixon2018measuring}. Problematic categorical schemas have been identified in popular image datasets, including poorly-formulated categories and the inclusion of derogatory and offensive labels \citep{excavatingai, Prabhu2020LargeID}. Research into the disciplinary norms of dataset development has revealed troubling practices around dataset development and dissemination, like unstandardized documentation and maintenance practices \citep{datasheets, geiger2020, cv-dataset-politics}. There is also growing concern about the limitations of existing datasets and standard metrics for evaluating model behavior in real-world settings and assessing scientific progress in a problem domain \citep{geirhos2020shortcut, underspecifiction}. Despite the increase in critical attention to benchmark datasets, surprisingly little attention has been paid to patterns of dataset use and reuse across the field as a whole. In this paper, we dig into these dynamics. 
We study how dataset usage patterns differ across machine learning subcommunities and across time (from 2015-2020) in the Papers With Code (PWC) corpus.\footnote{\url{https://paperswithcode.com}} More specifically, we study machine learning subcommunities that have formed around different machine learning tasks (e.g., \textit{Sentiment Analysis} and \textit{Facial Recognition}) and examine: (i) the extent to which research within task communities is concentrated or distributed across different benchmark datasets; (ii) patterns of dataset creation and adoption between different task communities; and (iii) the institutional origins of the most dominant datasets. We find increasing concentration on fewer and fewer datasets within most task communities. Consistent with this finding, the majority of papers within most tasks use datasets that were originally created for \emph{other} tasks, instead of ones explicitly created for their own task---even though most tasks have created more datasets than they have imported. Lastly, we find that these dominant datasets have been introduced by researchers at just a handful of elite institutions. The remainder of this paper is organized as follows. First, we motivate our research questions by underscoring the critical importance of benchmarks in coordinating machine learning research. Second, we describe our analyses on the PWC corpus, a catalog of datasets and their usage jointly curated by the machine learning community (manually) and by Facebook AI Research (algorithmically). We then present our findings and discuss their implications for scientific validity, the ethical usage of MLR, and inequity within the field. We close by offering recommendations for possible reform efforts for the field. \section{Related Work: Scientific, Social, and Ethical Importance of Datasets} \label{sec:motivation} Following Schlangen \citep{Schlangen2020}, we understand machine learning benchmarks as community resources against which models are evaluated and compared. Benchmarks typically formalize a particular task through a dataset and an associated quantitative metric of evaluation. The practice was originally introduced to MLR after the ``AI Winter'' of the 1980s by government funders, who sought to more accurately assess the value received on grants \citep{church_2018,church_hestness_2019}. Today, benchmarking is the dominant paradigm for scientific evaluation in MLR, and the field collectively views upward trends on benchmarks as noisy but meaningful indicators of scientific progress \citep{Schlangen2020,Dotan2020, grover}. Over time, MLR has evolved strong norms to facilitate widespread benchmarking, including the development of open-access datasets, formal competitions and challenges, and accompanying ``black-box'' software that allows researchers to test their algorithms on benchmark datasets with minimal effort. The establishment of benchmark datasets as shared evaluative resources across the MLR community has unique advantages for coordinating scientists around common goals.
First, barriers to participation in MLR are reduced, since well-resourced institutions can shoulder the costs of dataset curation and annotation.\footnote{However, machine learning model development still remains a resource-intensive activity \cite{amodei2018ai}.} Second, by reducing otherwise complex comparisons to a single agreed-upon measure, the scientific community can easily align on the value of research contributions and assess whether progress is being made on a particular task \citep{sim2003theory,sim2003se}. Finally, a complete commitment to benchmarking has allowed MLR to relax reliance on slower institutions for evaluating progress like peer-review, qualitative or heuristic evaluation, or theoretical integration. Together, these advantages have contributed to MLR's unprecedented transformation into a ``rapid discovery science'' in the past decade \citep{collins1994social}. While there are clear advantages to benchmarking as a methodology for comparing algorithms and measuring progress, there are growing concerns about benchmarking cultures in MLR that tend to valorize state-of-the-art (SOTA) results on established benchmark datasets over other forms of quantitative or qualitative analysis. The necessity of SOTA results on well-established benchmarks for publication has been identified as a barrier to the development of new ideas \citep{geoffbenchmarks}. There have been growing calls for more rigorous and comprehensive empirical analysis of models beyond standard top-line metrics: reporting model size, energy consumption, fairness metrics, and more \citep{Sculley2018WinnersCO, Schwartz2019, dodge2019, Ethayarajh2020}. The standard benchmarking paradigm also contributes to issues with underspecification in ML pipelines; a given level of performance on a held-out benchmark test set doesn't guarantee that a model has learned the appropriate causal structure of a problem \citep{underspecifiction}. In short, while community alignment on benchmarks and metrics can enable rapid algorithmic advancement, excessive focus on single metrics at the expense of more comprehensive forms of rigorous evaluation can lead the community astray and risk the development of models that generalize poorly to the real world. The MLR community has begun to reflect on the utility of established benchmarks and their suitability for evaluative purposes. For example, the Fashion-MNIST dataset was introduced because the original MNIST dataset came to be perceived as over-utilized and too easy \citep{fashion-mnist}; the utility of ImageNet\,---\,one of the most influential ML benchmarks in existence\,---\,as a meaningful measure of progress has been a focus of critical examination in the past few years \citep{Beyer2020AreWD, tsipras2020imagenet}. SOTA-chasing concerns are also compounded by the great capacity ML algorithms have to be ``right for the wrong reason'' \citep{heinzerling2019cleverhans}, enabling SOTA results that rely on ``shortcuts'' rather than learning the causal structure dictated by the task \citep{geirhos2020shortcut}. Bender et al. suggest the NLP community may have been ``led down the garden path'' by over-focusing on ``beating'' benchmark tasks with models that can easily manipulate linguistic form without any real capacity for language understanding \citep{stochasticparrots}. 
Recent dataset audits have also revealed that established benchmark datasets tend to reflect very narrow\,---\,typically white, male, Western\,---\,slices of the world \citep{gendershades, Shankar2017NoCW, zhao-etal-2018-gender, dixon2018measuring, Prabhu2020LargeID}. Thus, over-concentration of research on a small number of datasets and metrics can distort perceptions of progress within the field and have serious ethical implications for communities impacted by deployed models. Despite these discussions, little empirical work has considered whether over-concentration of research on a small number of datasets is a systemic issue across MLR. This prompts our first research question: \textbf{RQ1: How concentrated are machine learning task communities on specific datasets, and has this changed over time?} There are also growing concerns regarding the gap between benchmark datasets and the problem domains in which they are used to evaluate progress. For example, Scheuerman et al. found that computer vision datasets tend to be developed in a manner that is decontextualized from a particular task or application area \citep{cv-dataset-politics}. Supposedly ``general purpose’’ benchmarks are often valued within the field, though the precise bounds of what makes a dataset suitable for general evaluative purposes remains unclear \citep{grover}. These observations prompt our second research question: \textbf{RQ2: How frequently do machine learning researchers borrow datasets from other tasks instead of using ones created explicitly for that task?} Despite widespread recognition that datasets are critical to the advancement of the field, careful dataset development is often undervalued and disincentivized, especially relative to algorithmic contributions \citep{cv-dataset-politics, Sambasivan2021}. Given the high value the MLR community places on SOTA performance on established benchmarks, researchers are also incentivized to reuse recognizable benchmarks to legitimize their contributions. Dataset development is time- and labor-intensive, making large-scale dataset development potentially inaccessible to lower-resourced institutions. These observations prompt our final research question: \textbf{RQ3: What institutions are responsible for the major ML benchmarks in circulation?} Our paper makes two distinct contributions to the literature. First, it provides a concise, multi-dimensional discussion of the pros and cons of benchmarking as an evaluation paradigm in MLR, drawing on earlier work as well as insights from the sociology of science. Second, and more substantially, it provides the first field-level, quantitative analysis of benchmarking practice in MLR. \section{Data} \label{sec:data} Our primary data source is Papers With Code (PWC), an open source repository for machine learning papers, datasets, and evaluation tables created by researchers at Facebook AI Research. PWC is largely community-contributed\,---\,anyone can add a benchmarking result or a task, provided the benchmarking result is publicly available in a pre-print repository, conference proceeding, or journal. Once tasks and datasets are introduced by humans, PWC scrapes arXiv using keyword searches to find other examples of the task or uses of the dataset. We downloaded the complete PWC dataset on 06/16/2021 (licensed under CC BY-SA 4.0). In this study, we focus primarily on the ``Datasets'' archive, as well as papers utilizing those datasets. 
Each dataset in the archive is associated with metadata such as the modality of the dataset (e.g., texts, images, video, graphs), the date the dataset was introduced, and the paper title that introduced the dataset (if relevant). We found 4,384 datasets on the site and scraped 60,647 papers that PWC associates with those datasets using a PWC internal API (see Figure \ref{fig:datadist} for a truncated histogram of usage across datasets). In PWC papers, benchmarks and datasets are associated with tasks (e.g., \textit{Object Recognition}, \textit{Machine Translation}). Because we are interested in the dynamics of dataset usage (both within and across task communities), our first two analyses are restricted to dataset usages published in papers annotated with tasks. We call the task for which the dataset was originally designed the ``origin task.'' We call the task of the paper using the dataset the ``destination task.'' For example, ImageNet \citep{imagenet_cvpr09} was introduced as a benchmark for \textit{Object Recognition} and \textit{Object Localization} (origin tasks), but is now regularly utilized as a benchmark for \textit{Image Generation} (destination task) among many others. PWC includes a taxonomy of tasks and subtasks. The graph is cyclic, making it hard to disentangle dataset transfer between broad tasks and finer-grained tasks. For each dataset transfer, we record the transfer between the origin task and the destination task. We also record the transfer between the origin's parents and the destination's parents. This approach allows us to accurately capture transfer dynamics between larger tasks (e.g., \textit{Image Classification} and \textit{Image Generation}), and between finer-grained tasks (e.g., \textit{Image-to-Image Translation} and \textit{Image Inpainting}, which are both children of \textit{Image Generation}). We took three additional steps to pre-process the data. First, we only consider datasets that are used by others at least once. Second, because we found dataset usages in PWC to be noisy (i.e., a paper would be associated with a dataset if the corresponding dataset name appeared multiple times in the paper), we dropped dataset usages where the dataset-using paper shared no tasks in common with the dataset itself.\footnote{Datasets in PWC are labeled with all tasks they are used for, not just the origin tasks. We focus on datasets introduced in papers so that we can identify tasks associated with the paper as origin tasks.} Third, we found 640 papers that introduced a dataset but were not associated with a task. Two authors manually annotated the top 90 most widely-used dataset papers with origin tasks (see GitHub for justifications and appendix for details). We dropped the remaining 550 dataset papers (accounting for only 10.2\% of total usages). \textbf{Datasets for Analysis 1 and 2 (RQ1, RQ2):} To minimize double-counting of dataset usages across parent tasks and child subtasks, we chose to focus exclusively on parent tasks in PWC. The outcome measures we use in these analyses (Gini, Adoption Proportion, and Creation Proportion) are biased in small samples, so we used only parent tasks above the median size of 34 papers (see GitHub for the list of tasks). Because these tasks were larger, we also felt that parent tasks tended to be more widely-recognized as coherent task communities. Table \ref{tab:stats} presents descriptive statistics for the data used in each analysis.
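To make the transfer bookkeeping described above concrete, the following sketch shows how a single dataset usage can be turned into origin$\to$destination records at both the fine-grained and the parent-task level, and how usages with no shared task are filtered. This is an illustration with hypothetical field and function names, not the released analysis code.
\begin{verbatim}
# Illustrative sketch only; names are ours, not the PWC schema.
def transfers_for_usage(paper_tasks, origin_tasks, parents):
    """Return origin->destination pairs for one dataset usage.

    paper_tasks  : set of tasks labelling the dataset-using paper
    origin_tasks : set of tasks labelling the paper that introduced the dataset
    parents      : dict mapping each task to the set of its parent tasks
    """
    pairs = set()
    for origin in origin_tasks:
        for dest in paper_tasks:
            pairs.add((origin, dest))                 # fine-grained transfer
            for po in parents.get(origin, set()):
                for pd in parents.get(dest, set()):
                    pairs.add((po, pd))               # parent-level transfer
    return pairs

def is_valid_usage(paper_tasks, dataset_tasks):
    # a usage is kept only if the paper and the dataset share at least one task
    return bool(paper_tasks & dataset_tasks)
\end{verbatim}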
Analysis 1 explores dataset usage within tasks, so it includes datasets that are introduced in papers as well as those that are not (e.g., introduced on a website or competition). Analysis 2 explores the transfer of datasets between origin and destination tasks. This dataset is smaller because we can only determine the origin task for a dataset if it is introduced in a paper (Table \ref{tab:stats}). In the appendix, we describe robustness checks that remove some of these cleaning steps; these choices minimally affect our results. \textbf{Dataset for Analysis 3 (RQ3):} To study the distribution of widely-utilized datasets across institutions, we linked all dataset-introducing papers to the Microsoft Academic Graph (MAG) \citep{wang2020microsoft}. Analyses were performed on dataset usages for which the dataset's last author affiliation was annotated in MAG (Table \ref{tab:stats}). We again imposed the restriction that usages must share a labeled task with the dataset, but again found it had minimal effects on the results (see appendix). \begin{table}[ht] \centering \begin{tabular}{lllll} \hline Analysis &\# Datasets & \# Usages & \# Tasks & \# Papers \\ \hline 1 & 2,063 & 49,008 & 133 & 26,691 \\ 2 & 960 & 33,034 & 133 & 20,747 \\ 3 & 1,933 & 43,140 & N/A & 26,535 \\ \hline \end{tabular} \caption{Descriptive statistics for data used in the three analyses. Note that the number of dataset usages is larger than the number of papers because many papers use multiple datasets.} \label{tab:stats} \end{table} The datasets, a datasheet \citep{datasheets}, and code for curation/analysis can be found at \url{https://github.com/kochbj/Reduced_Reused_Recycled}. \section{Methods and Findings} \subsection{Analysis 1 (RQ1): Concentration in Task Communities on Datasets} \label{Analysis1} \subsubsection{Methods} \label{Methods1} To measure how concentrated task communities are on certain datasets (RQ1), we calculated the Gini coefficient of the distribution of observed dataset usages within each task. Gini is a continuous measure of dispersion in frequency distributions. It is frequently used in social science to study inequality \citep{McDonald2008}.\footnote{To give some indication of the range of Gini, the country with the lowest Gini for income inequality according to the World Bank \href{https://data.worldbank.org/indicator/SI.POV.GINI}{[linked here]} is Slovenia with a Gini of 24.6 (scaled 0 to 100). The country with the highest Gini inequality is South Africa at 63. The U.S. has a Gini of 41.4.} The Gini score varies between 0 and 1, with 0 indicating that the papers within a task use all datasets in equal proportions, and 1 indicating that only a single dataset is used across all dataset-using papers. Gini is calculated as the average absolute difference in the usage of all pairs of datasets used in the task, divided by the average usage of datasets. 
Formally, if $x_i$ is the number of usages of dataset $i$ out of all $n$ datasets used in the task, then the Gini coefficient of dataset usage is, \begin{equation} \small {\displaystyle G={\frac {\displaystyle {\sum _{i=1}^{n}\sum _{j=1}^{n}\left|x_{i}-x_{j}\right|}}{\displaystyle {2\sum _{i=1}^{n}\sum _{j=1}^{n}x_{j}}}}={\frac {\displaystyle {\sum _{i=1}^{n}\sum _{j=1}^{n}\left|x_{i}-x_{j}\right|}}{\displaystyle {2n\sum _{j=1}^{n}x_{j}}}}={\frac {\displaystyle {\sum _{i=1}^{n}\sum _{j=1}^{n}\left|x_{i}-x_{j}\right|}}{\displaystyle {2n^{2}{\bar {x}}}}}} \end{equation}\footnote{Notation from \href{https://en.wikipedia.org/wiki/Gini_coefficient}{Wikipedia}, which provides an excellent exposition.} Because Gini can be biased in small samples \citep{deltas2003small}, we use the sample-corrected Gini, $G_s=\frac{n}{n-1}G$, and excluded tasks (or task-years when disaggregating by time) with fewer than 10 papers. \textbf{Regression Model 1:} In addition to descriptive statistics, we built a regression model to assess the extent to which observed trends in Gini from year-to-year could be attributable to confounding variables like task size, task age, or other task-specific traits at that time. Our outcome is $G_s$ in each task year from 2015-2020 (Figure \ref{fig:pwcdist} shows PWC coverage is limited for papers published before 2015). Our predictors of interest are: \setlist{nolistsep} \begin{enumerate}[noitemsep] \item \textbf{Year} (since we are interested in trends in concentration over time) \item \textbf{CV, NLP, Methods}\footnote{Example ``Methods" tasks in PWC include Transfer Learning, Domain Adaptation, and AutoML.} (three dummy variables indicating whether the task belongs to the Computer Vision, Natural Language Processing, or Methodology categories in PWC). \end{enumerate} To absorb additional variation, we also included the following control covariates: \begin{enumerate}[noitemsep] \item \textbf{Task size} in number of dataset-using/introducing papers for that task in that year \item \textbf{Task age} (because younger tasks may have higher Gini coefficients) \item \textbf{Random intercepts for each task} (because we have repeated observations over time) \end{enumerate} We use beta regression to model Gini because the beta distribution is very flexible, between 0 and 1, and commonly used for this purpose \citep{McDonald2008}. However, we apply the smoothing transformation in \citep{smithson2006better} to deal with the occasional task-year where the Gini is 0. We use a model with the following interactions: \begin{small} \begin{align*} \mathrm{Beta(G_s)} &= \alpha+\beta_1 \mathrm{Year} +\beta_2 \mathrm{Task Size} + \beta_3 \text{TaskAge}\ +\\ &\qquad \beta_4\text{CV}\ + \beta_5\mathrm{NLP}\ + \beta_6\mathrm{Methods} +\beta_7\mathrm{Year*Task Size}\ +\\ &\qquad \beta_8 \mathrm{CV*Year}\ + \beta_9 \mathrm{NLP*Year} + \beta_{10} \mathrm{Methods*Year} \end{align*} \end{small} This model was chosen among a set of nested models with two- and three-way interactions because it had the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC).
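For reference, the sample-corrected Gini defined above reduces to a few lines of code. The following is a minimal sketch of that definition (ours, not the released analysis code):
\begin{verbatim}
import numpy as np

def gini_corrected(usages):
    """Sample-corrected Gini G_s of a vector of per-dataset usage counts."""
    x = np.asarray(usages, dtype=float)
    n = x.size                    # tasks with a single dataset are excluded upstream
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()   # over all n^2 pairs
    g = mean_abs_diff / (2.0 * x.mean())
    return n / (n - 1.0) * g

# gini_corrected([10, 10, 10]) == 0.0   (perfectly even usage)
# gini_corrected([30, 5, 3, 1, 1]) ~= 0.78   (usage concentrated on one dataset)
\end{verbatim}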
See the appendix for model selection criteria and Table \ref{tab:fit} for fit statistics. \subsubsection{Findings} Controlling for task size, task age, and task-specific effects, Model 1 finds significant evidence for increasing concentration in task communities for the full dataset over time, predicting a marginal increase in Gini of 0.113 from 2015-2020 (Figure \ref{fig:gini} top green; Table \ref{tab:coeff}). This trend is also visible in the overall distributions of Gini coefficients over this period (Figure \ref{fig:gini} bottom). By 2020, the median Gini coefficient for a task was 0.60. There are no statistically significant differences between Computer Vision and Methodology tasks compared to the full sample (Figure \ref{fig:gini} top, Figure \ref{fig:gini_by_modality}), but Model 1 suggests that increases in concentration are attenuated for Natural Language Processing task communities (Figure \ref{fig:gini} top orange). We note that this is the only result that varies somewhat with our model specification; while the rate of increasing concentration in NLP tasks is always significantly lower than the rest of the dataset, the sign and slope of this change does vary somewhat across models. We discuss this point in the appendix. \subsection{Analysis 2 (RQ2): Changes in Rates of Adoption and Creation of Datasets Over Time} \subsubsection{Methods} \label{Methods2} We created two proportions to better understand patterns of dataset usage and creation within tasks as outcomes: {\footnotesize \begin{align*} \begin{split} \text{ Adoption\ Proportion}{} &= \frac{\text{\# of Papers Using Datasets from Other Tasks}}{\text{\# of Papers Using Datasets from Other Tasks}+\text{\# of Papers Using Datasets from this Task}} \end{split}\\\\ \begin{split} \text{ Creation\ Proportion}{} &= \frac{\text{\# of Datasets Created Within this Task}}{\text{\# of Datasets Created within this Task}+\text{\# of Datasets Imported from Other Tasks}} \end{split} \end{align*} } \textbf{Aggregated Descriptive Analyses:} We first computed these proportions for each of the 133 parent tasks aggregated across all years, and subsetted these by the ``Computer Vision,'' ``Natural Language Processing,'' and ``Methodology'' categories. \textbf{Regression Models 2A \& 2B:} Because we chose to formulate our outcomes as fractions of discrete events, logistic regression is the most theoretically appropriate model for these data. We used a mixed effects logistic regression to model these outcomes with the same predictors as Model 1. \subsubsection{Findings} The top row of Figure \ref{fig:adoptionviolins} shows a wide variance in adoption proportions in both the full sample and the subcategories. Within the full sample, more than half of task communities use adopted datasets at least 57.8\% of the time. However, this number varies dramatically across the three PWC subcategories. In more than half of Computer Vision communities, authors adopt at least 71.9\% of their datasets from a different task. The equivalent statistic in Methodology tasks is 74.1\%. Conversely, half of Natural Language Processing communities adopt datasets less than 27.4\% of the time. In the bottom row of Figure \ref{fig:adoptionviolins}, we see a largely inverted trend. Of all unique datasets used in a task community, 62.5\% are created specifically for that task in more than half of tasks. Within Computer Vision and Methods tasks, the median is lower at 53.3\% and 52.6\%, with similar distributions across tasks. 
Most strikingly, 76.0\% of datasets are created specifically for the task in more than half of NLP communities, with a much tighter variance. We were unable to recover convincing evidence for trends in adoption or creation proportions either way (Regression Models 2A \& 2B) because of a lack of data (results not shown). Disaggregating tasks over time creates a significant number of task-years with no events, and these metrics are undefined in those circumstances. \begin{minipage}[t]{0.45\textwidth} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{GiniTrends.png} \caption{\textbf{Top: Predicted concentration on datasets across task communities over time.} Gini predicted by Model 1 holding task size/age at means. Green plots show the estimated effects of the full dataset, other colors are fixed effects for categories. 95\% confidence intervals shown. \textbf{Bottom: Distributions of concentrations.} Higher Gini indicates greater concentration on fewer datasets. We observe significant spread of Gini across tasks, with the median increasing over time.} \label{fig:gini} \end{figure} \end{minipage} \hspace{5mm} \begin{minipage}[t]{0.49\textwidth} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{ViolinRatios.png} \caption{\textbf{Adoption (Top) and Creation (Bottom) Proportions for PWC Parent Tasks}. Full dataset in green, tasks in the Computer Vision category in purple, Natural Language Processing tasks in orange, and Methods tasks in red,. Red dot and line in boxplot indicate median. Width of violins indicates distribution of tasks.} \label{fig:adoptionviolins} \end{figure} \end{minipage} \\ \vspace{-3mm} \subsection{Analysis 3 (RQ3): Concentration in Dataset-Introducing Institutions Over Time} \subsubsection{Methods} \label{Methods3} To look at trends in Gini inequality across institutions and datasets over time for the larger set of dataset-using papers, we calculated the Gini coefficient $G_s$ in each year for dataset usages both by dataset and by institution. We regressed this Gini on year, as well as residuals capturing variance in the size of PWC that is not correlated with time (see appendix), using a standard beta regression. We also mapped dataset-introducing institutions using the longitude and latitude coordinates provided for the last author's institution on Microsoft Academic. \subsubsection{Findings} Overall, we find that widely-used datasets are introduced by only a handful of elite institutions (Figure \ref{fig:affiliations} left). In fact, over 50\% of dataset usages in PWC as of June 2021 can be attributed to just twelve institutions. Moreover, this concentration on elite institutions as measured through Gini has increased to over 0.80 in recent years (Figure \ref{fig:affiliations} right red). This trend is also observed in Gini concentration on datasets in PWC more generally (Figure \ref{fig:affiliations} right black). \begin{figure}[H] \centering \includegraphics[width=\textwidth]{MapCombined.png} \caption{\textbf{Increases in concentration of dataset usages on institutions and datasets (non-task specific) over time.} \textbf{Left:} Map of dataset usages per institution as of June 2021. Dot size indicates number of usages. Blue dots indicates for-profit institutions and orange dots indicate not-for-profit. Institutions accounting for 50\%+ of usages labeled. \textbf{Right:} Gini coefficient for concentration of dataset usages across the whole PWC dataset over time for both institutions and datasets. 
Ribbons indicate 95\% CI; dot size indicates number of usages that year.} \label{fig:affiliations} \end{figure} \section{Discussion} In this paper, we find that task communities are heavily concentrated on a limited number of datasets, and that this concentration has been increasing over time (see Figure \ref{fig:gini}). Moreover, a significant portion of the datasets being used for benchmarking purposes within these communities were originally developed for a different task (see Figure \ref{fig:adoptionviolins}). This result is striking given the fact that communities \textit{are} creating new datasets\,---\,in most cases more than the unique number that have been imported from other tasks\,---\,but the newly created datasets are being used at lower rates. When examining PWC without disaggregating by task category, we find that there is increasing inequality in dataset usage globally, and that more than 50\% of all dataset usages in our sample of 43,140 corresponded to datasets introduced by twelve elite, primarily Western, institutions. NLP tasks differ somewhat from PWC as a whole: the broader trend of increasing concentration on a few datasets is moderated in NLP communities, new datasets are created at higher rates, and outside datasets are used at lower rates. One possible explanation for these findings is that NLP task communities in our dataset tend to be bigger than other task communities (median size of 76 dataset usages compared to 55). While we find very modest evidence of correlations between task size and adoption or creation proportions overall (Kendall's $\tau =-.008$, $p=.89$; $\tau =.014$, $p=.81$ respectively), these correlations are stronger within NLP tasks (Kendall's $\tau =-.10$, $p=.45$; $\tau =.09$, $p=.50$ respectively). It is possible that larger NLP communities are more coherent and thus generate and use their own datasets at higher rates than other task communities. Another possibility is that NLP datasets are easier to curate because the data are more accessible, easier to label, or smaller. The resolution of this puzzle is beyond the scope of this paper, but the distinct nature of NLP datasets provides an interesting direction for future work. For our broader findings, there are valid reasons to expect widespread adoption and concentration on key datasets. First, a certain degree of research focus on a particular benchmark is both necessary and healthy to establish the validity and utility of the benchmark (or in some cases, to contest these properties) and to gain community alignment around the benchmark as a meaningful measure of progress. Second, the curation of large-scale datasets is not just costly in terms of resources, but may require unique or privileged data (e.g., anonymized medical records, self-driving car logs) accessible to only a few elite academic and corporate institutions. Nevertheless, the extent of concentration we observe poses questions relating to the scientific rigor and ecological validity of machine learning research and underscores benchmarking as a potential driver for inequality in the field. In the remainder of this section we discuss our findings in relation to these two broad themes and outline recommendations that can be enacted at an individual and institutional level. We close by discussing limitations of this analysis and outlining directions for future work. 
\begin{figure}[b] \captionsetup{font={footnotesize}} \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{ImageGenerationTasks.png} \subcaption{Origin tasks of datasets used by \textit{Image Generation} community.} \label{fig:imagegen_task} \end{subfigure} \hspace{5mm} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{ImageGenerationDatasets.png} \subcaption{Datasets used by \textit{Image Generation} community.} \label{fig:imagegen_dataset} \end{subfigure}\hspace{5mm} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{FaceRecognitionDatasets.png} \subcaption{Datasets used by \textit{Face Recognition} community.} \label{fig:facerec} \end{subfigure} \caption{\textbf{Top datasets used across \textit{Image Generation} and \textit{Face Recognition} task communities}: (a) Origin task communities of top \textit{Image Generation} datasets. Only 7.49\% of \textit{Image Generation} papers in PWC evaluate on datasets developed for \textit{Image Generation}. (b) Names of top \textit{Image Generation} datasets. Only one of the top datasets, FFHQ \citep{ffhq}, was developed for the task. (c) The small number of datasets in usage within the high stakes domain of \textit{Face Recognition}. Two of the datasets, MegaFace \citep{kemelmacher2016megaface} and MS-Celeb-1M \citep {msceleb} (in white), have been recently retracted, the latter due to serious ethical violations \citep{exposing_msceleb}.} \end{figure} \label{fig:piechart} \subsection{Scientific Rigor and Ecological Validity of MLR} The heavy concentration of research on a small number of datasets for each task community is a fairly unsurprising result given the value placed on SOTA performance in established benchmark datasets\,---\,a valuation that incentivizes individual researchers to concentrate on maximizing performance gains on well-established benchmarks. However, as discussed in Section \ref{sec:motivation}, over-concentration of research efforts on established benchmark datasets risks distorting measures of progress. Moreover, as the rate of technology transfer has accelerated, benchmarks have been increasingly used by industry practitioners to assess the suitability and robustness of different algorithms for live deployment in production settings. This transition has transformed epistemic concerns about overfitting datasets into ethical ones. For example, critical research on datasets for facial recognition, analysis, and classification has repeatedly highlighted the lack of diversity in standard benchmark datasets used to evaluate progress \citep{gendershades}, even as the technologies are applied in law enforcement contexts that adversely affect underrepresented populations \citep{Garvie2016}. Figure \ref{fig:facerec} shows the top datasets in usage within the \textit{Face Recognition} community. Here, we see a significant amount of high stakes research being concentrated on a small number of datasets, many of which contain significant racial and gender biases \citep{gendershades, Wang2019RacialFI}. An in-depth examination of bias within the top benchmarks datasets in use within different task communities is outside the scope of this work. However, the systemic nature of bias concerns in ML datasets compounds the epistemic concerns associated with highly concentrated research. Our findings also indicate that datasets regularly transfer between different task communities. 
On the most extreme end, the majority of the benchmark datasets in circulation for some task communities were created for other tasks. For example, Figure 4 plots the dataset usages of \textit{Image Generation} papers on PWC broken down by origin task (Figure \ref{fig:imagegen_task}) and dataset name (Figure \ref{fig:imagegen_dataset}). We observe that only one of the datasets heavily used in the \textit{Image Generation} community was designed specifically for this task. The widespread practice of adopting established datasets to train and evaluate models in new problem domains isn't inherently a problem. However, this practice does raise potential concerns regarding the extent to which datasets are appropriately aligned with a given problem space. Moreover, given the widespread prevalence of systematic biases in the most prominent ML datasets, adopting existing datasets, rather than investing in careful curation of new datasets, risks further entrenching existing biases. Our findings on creation and adoption rates are quite nuanced. The extent to which high adoption rates raise significant concerns about ecological validity is yet to be determined. Furthermore, it is worth distinguishing between at least two forms of dataset adoption that seem to be conflated in the PWC data. On the one hand, we observe how datasets that have been developed for one task become \textit{adapted} in some form for a new task through, for example, the addition of new annotations. On the other hand, we observe some datasets being \textit{imported} whole cloth from one task community to another. Each of these forms of dataset adoption raises potentially unique concerns regarding the validity of the benchmark in a given context. That said, our results add empirical support to the growing body of scholarship calling for dataset development and use to be rooted in context \citep{paulladaDataItsDis2020, cv-dataset-politics}, which is particularly important for application-oriented tasks. This paper complements and supports the growing calls to include forms of qualitative and quantitative evaluations beyond top-line benchmark metrics \citep{Sculley2018WinnersCO, Schwartz2019, dodge2019, Ethayarajh2020}. Given the observed high concentration of research on a small number of benchmark datasets, we believe diversifying forms of evaluation is especially important to avoid overfitting to existing datasets and misrepresenting progress in the field. \subsection{Social Stratification in MLR} The extent of concentration we observe underscores that benchmarking is also a vehicle for inequality in science. The \textit{prima facie} scientific validity granted by SOTA benchmarking is generically confounded with the social credibility researchers obtain by showing they can compete on a widely recognized dataset, even if a more context-specific benchmark might be more technically appropriate. We posit that these dynamics create a ``Matthew Effect'' (i.e. ``the rich get richer and the poor get poorer'') where successful benchmarks, and the elite institutions that introduce them, gain outsized stature within the field \citep{merton1973}. Insofar as benchmarks shape the types of questions that get asked and the algorithms that get produced, current benchmarking practices offer a mechanism through which a small number of elite corporate, government, and academic institutions shape the research agenda and values of the field (Figure \ref{fig:affiliations} left).
Empirical support for this claim is beyond the scope of this paper, but there is work within the sociology of science and technology showing that government and corporate institutions tend to support research that serves (at least in part) their own interests, e.g., \citep{oreskes2021science}. There is nothing \textit{a priori} scientifically invalid about powerful institutions being interested in datasets or research agendas that would benefit them. However, issues arise when the values of these institutions are not aligned with those of other ML stakeholders (i.e., academics, civil society). For example, Dotan and Milli argue that deep learning’s reliance on large datasets has forced MLR to confront decisions about the extent to which it is willing to violate privacy to acquire/curate data \cite{Dotan2020}. Corporate and government institutions have objectives that may come into conflict with privacy (e.g., surveillance), and their weighting of these priorities is likely to be different from those held by academics or AI's broader societal stakeholders. Returning to the Facial Recognition example in Figure 4c, four of the eight datasets (33.69\% of total usages) were exclusively funded by corporations, the US military, or the Chinese government (MS-Celeb-1M, CASIA-Webface, IJB-A, VggFace2). MS-Celeb-1M was ultimately withdrawn because of controversy surrounding the value of privacy for different stakeholders \citep{exposing_msceleb}. The recently introduced NeurIPS Dataset and Benchmark Track is a clear example of an intervention that shifts incentive structures within the MLR community by rewarding dataset development and other forms of data work. We believe these sorts of interventions can play a critical role in incentivizing careful dataset development that is meaningfully aligned with problem domains. However, our finding that a small number of well-resourced institutions are responsible for most benchmarks in circulation today has implications for data-oriented interventions in the field. Our research suggests that simply calling for ML researchers to develop more datasets, and shifting incentive structures so that dataset development is valued and rewarded, may not be enough to diversify dataset usage and the perspectives that are ultimately shaping and setting MLR research agendas. In addition to incentivizing dataset development, we advocate for equity-oriented policy interventions that prioritize significant funding for people in less-resourced institutions to create high-quality datasets. This would diversify\,---\,from a social and cultural perspective\,---\,the benchmark datasets being used to evaluate modern ML methods. \subsection{Limitations and Future Work} \label{sec:limitations} Because our findings rely on a unique community-curated resource, our results are contingent on the structure and coverage of PWC. Sensitivity analyses suggest that while PWC's coverage of ML publications is not perfect and exhibits some recency bias, the omitted papers tend to be low impact. Moreover, the crowdsourced taxonomy of parent-child task relations in PWC may be subjective and/or noisy, especially for small or new tasks.\footnote{The full list of parent tasks and parent/child relations is available in the GitHub.} To increase our confidence in task annotations, we focused our analyses on larger, higher-level task communities and considered dataset usages invalid if they did not share a task label with the dataset. 
Lastly, we find that the concentration trends in Regression 1 are largely robust to model specification and our choice of Gini as an outcome. See the appendix for details on design choices and sensitivity analyses. Finally, we emphasize that our findings are highly nuanced. We report trends that our analyses revealed, but refrain from imposing normative judgements on many of these trends. For example, the high rates of adoption raise potential concerns and point to an important future area of examination. The mere fact that datasets travel between task communities is not necessarily problematic, and indeed the widespread sharing of datasets has been central to methodological advancements in the field. We hope this work will offer a foundation for future empirical work examining the details of dataset transfer and the context-specific implications of our findings. \vspace{-2mm} \section{Conclusion} \vspace{-2mm} Benchmark datasets play a powerful role in the social organization of the field of machine learning. In this work, we empirically examine patterns of creation, adoption, and usage within and across MLR task communities. We find that benchmarking practices are heavily concentrated on a small number of datasets for each task community and heavily concentrated on datasets originating from a small number of well-resourced institutions across the field as a whole. We also find that many benchmark datasets flow between multiple task communities and are leveraged to evaluate progress on tasks for which the data was not explicitly designed. We hope this analysis will inform community-wide initiatives to shift patterns of dataset development and use so as to enable more rigorous, ethical, and socially informed research.
\begin{ack} We thank the reviewers for their helpful comments. The authors have no competing interests to disclose. BK was supported by DDRIG and GRFP grants from the US National Science Foundation. JGF was supported by an Infosys Membership in the School of Social Science at the Institute for Advanced Study. \end{ack} \medskip
\section*{Introduction} Layered transition metal dichalcogenides (TMDs) are among the most studied two-dimensional (2D) materials in the last decade. Their atomically-thin structure and physical properties have attracted attention not only because of their interesting fundamental physics but also due to their potential applications for ultra-thin technological devices \cite{Yu2013, Nalwa2020, Sarkar2020, Pradeepa2020, Jia2019, Xu2016, Lin2015, Wang2012}. Similar to graphene, these materials can be mechanically exfoliated to obtain single layers. Of special interest are MoS$_2$, MoSe$_2$, WS$_2$ and WSe$_2$, TMDs that have been widely studied mostly because they suffer a transition from an indirect to a direct bandgap semiconductor when the monolayer thickness is achieved \cite{Mak2010,Splendiani2010}. As a consequence, the photoluminescence (PL) of the monolayer of these materials is much more intense when compared to that of the bulk material \cite{Splendiani2010}. Also, owing to their two-dimensional nature, TMD monolayers have their PL spectra dominated by excitonic effects. For monolayer MoS$_2$, a characteristic PL spectrum can usually be decomposed into three main peaks related to the recombination of different excitons, the so-called A and B excitons, and charged excitons, the trions \cite{Jadczak2017}. The large spin-orbit splitting (SOS) at the top of the valence band is responsible for the existence of the two exciton states, A and B \cite{Zhu2011,Splendiani2010}, while the third PL peak routinely observed in the PL spectrum of monolayer MoS$_2$ corresponds to charged A-excitons, or trions, which are tightly bound and are observed even at room temperature \cite{Mak2013a}. In the monolayer limit, the properties of all TMDs are highly affected by the substrate on which they are deposited \cite{Sun2017, Buscema2014}. One interesting substrate for these monolayer materials is GaAs, a prototypical semiconductor which has been extensively studied and employed for electronics and optoelectronics applications that take advantage of its direct gap (1.42 eV at room temperature) and relatively high electron mobility (up to 8000 cm$^2$ V$^{-1}$ s$^{-1}$ at room temperature) \cite{Sze2006}. The combination of the optical and electronic properties of TMDs and GaAs as a substrate has already shown promising results for implementation of solar cells \cite{Lin2015}, with a power conversion efficiency of up to 9.03$\%$, and photodetectors \cite{Xu2016,Sarkar2020,Jia2019,Zhang2017}, with a detectivity of up to 1.9 $\times$ 10$^{14}$ Jones. The success of these proof-of-concept studies urges the need to investigate in detail the properties of MoS$_2$/GaAs heterojunctions, in order to further improve device quality \cite{Padma2019}. Particularly, the band alignment between the two materials is still not well established although it is of major importance for applications involving these 2D/3D semiconductor architectures. Here, we present a study of the effect of GaAs substrates on monolayer MoS$_2$ by analyzing the changes in the photoluminescence spectra of monolayer MoS$_2$ on GaAs substrates with different doping levels. We used three types of commercially-available GaAs substrates that we identify hereon as i-GaAs for intrinsic GaAs (semi-insulating), p-GaAs for Zn-doped p-type GaAs and n-GaAs for Si-doped n-type GaAs. The doping concentrations are $\sim$10$^{18}$ cm$^{-3}$ for both n-GaAs and p-GaAs. 
As a reference, we have control samples on two substrates, SiO$_2$/Si and n-GaAs, with the transferred MoS$_2$ monolayer isolated from the substrates by a bulk hexagonal boron nitride (hBN) flake. We propose a type-I band alignment, with a charge transfer between GaAs and the MoS$_2$ monolayer which depends on the GaAs doping. This band alignment model is supported by Scanning Kelvin Probe Microscopy (SKPM) measurements in the heterostructures. Monolayers of MoS$_2$ (ML-MoS$_2$) were mechanically exfoliated and transferred to the substrates through the all-dry viscoelastic stamp method \cite{Castellanos-Gomez2014}. Similar processes were used to exfoliate and transfer the hBN bulk to the Si/SiO$_2$ and n-GaAs substrates. To confirm the single layer character of the MoS$_2$ flakes we used Raman spectroscopy to monitor the separation in frequency of the well-known $A_{1g}$ and $E^{1}_{2g}$ Raman modes \cite{Lee-Heinz-Hone-ACSNano2010,Li-Zhang-Advanced-Funct-Materials2012}, see Supplementary Information (SI). The samples were studied in two sets. The first set was composed of a control sample of ML-MoS$_2$ on hBN/SiO$_2$/Si substrate (MoS$_2$/hBN/SiO$_2$) and three samples of ML-MoS$_2$ on GaAs with different doping: MoS$_2$/i-GaAs, MoS$_2$/p-GaAs and MoS$_2$/n-GaAs. The second set is composed of two samples, one ML-MoS$_2$ on n-GaAs and one ML-MoS$_2$ control sample on hBN/n-GaAs substrate (MoS$_2$/hBN/n-GaAs). The second set of samples allowed us to verify the reproducibility of the results obtained for ML-MoS$_2$ as well as to produce a control sample with a dielectric environment that allows better comparisons of SKPM measurements made on different samples (see SI). We start our considerations by the PL measurements, which were accomplished with the same experimental conditions for all the samples. We are cautious with the laser exposure and spectra acquisition to minimize changes in the PL caused by photodoping effects\cite{Cao2021} and to allow the comparison of PL spectra from different samples (details are provided in the SI). The ML-MoS$_2$ spectra were obtained after removing the background photoluminescence from the GaAs substrate when applicable (see SI). In Figure \ref{Fig1:Samples and PL}a we present the ML-MoS$_2$ emission for the first set of samples. The intensity of the emission from ML-MoS$_2$ is approximately the same (within experimental resolution) for all MoS$_2$/x-GaAs (x=p, n, i) samples. However, their PL signals are around 10 times less intense than that of the ML-MoS$_2$ from the MoS$_2$/hBN/SiO$_2$ control sample. This observation suggests an important quenching mechanism for the ML-MoS$_2$ photoluminescence in the MoS$_2$/x-GaAs 2D/3D heterostructures, which is independent of the substrate doping level. We suggest two main paths for the reduction of PL from MoS$_2$ on GaAs: exciton dissociation through the junction and exciton transfer from MoS$_2$ to GaAs. The first process will contribute more if ML-MoS$_2$/x-GaAs form a type II heterojunction and the latter will be more important in a type I heterojunction. Therefore, we will try to elucidate the band alignment of the heterojunctions with other observations and the discussion that follows. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Fig1_PL_and_samples.pdf} \caption{Photoluminescence spectra of ML-MoS$_2$ from the first set of samples (a) and from the second set of samples (b). Insets: Representation of the studied samples on x-GaAs substrate (a) and on hBN/n-GaAs substrate (b). 
} \label{Fig1:Samples and PL} \end{figure} The results shown in Figure \ref{Fig1:Samples and PL}a are consistent with measurements on a second set of samples: MoS$_2$/hBN/n-GaAs and MoS$_2$/n-GaAs. We observe a 10:1 relation between the PL of the sample containing the hBN spacer to the one without this spacer (Figure \ref{Fig1:Samples and PL}b). This confirms that the hBN bulk layer worked well to isolate the ML-MoS$_2$ from the n-GaAs substrate, preventing exciton dissociation/transfer. From now, we are going to consider just the control sample of the second set, as it presents a comparable dielectric environment with the first set of samples. \begin{figure} \centering \includegraphics[scale=0.12]{Fig2_PL_fit_Voigt.pdf} \caption{Peak decomposition of the photoluminescence spectra of ML-MoS$_2$ on different substrates. The peaks are contributions from: localized states (L), trions (T), A excitons (A), and B exciton (B) emissions. The solid lines are experimental data and the dashed lines represent the sum of the component peaks.} \label{Fig2:PL_fitts} \end{figure} To further understand the interaction between MoS$_2$ and GaAs in the heterostructures we decompose the PL spectra into peaks corresponding to the radiative recombination of different exciton species on ML-MoS$_2$. In Figure \ref{Fig2:PL_fitts} we present the PL spectra and their constituent peaks for all ML-MoS$_2$ on GaAs from the first set of samples and for the MoS$_2$/hBN/n-GaAs control sample. Four peaks with a Voigt lineshape were identified in the fitted spectra, the A and B exciton peaks, the trion peak (T), and a fourth peak (L), which has been previously assigned to the recombination of excitons bound to localized states \cite{Tongay2013, Saigal2016, Jadczak2017, Greben2020}. The presence of a peak from recombination of trions, which are charged excitons, allows us to infer the existence of free charge, or an excess charge density, in ML-MoS$_2$. Exfoliated ML-MoS$_2$ are usually found to be intrinsically n-type \cite{Zhang2020, Singh2019}, having excess electrons in its conduction band. Thus, by comparing the integrated PL intensities of the trion peak, I$_T$, and of the A-exciton peak, I$_A$ (see Table \ref{Tab1:PL_T_A_ratio}) we can quantify the excess charge density comparatively among the samples and identify the relationship between the doping level of the substrate and the excess charge density on ML-MoS$_2$. A higher value of the ratio I$_T$/I$_A$ indicates higher excess charge density, as was observed for monolayers under electric gating \cite{Mak2013a, Greben2020, Pradeepa2020}. Based on I$_T$/I$_A$ values (table \ref{Tab1:PL_T_A_ratio}) we can say that the excess charge density on ML-MoS$_2$ in our samples increases, depending on the substrate, in the following order: p-GaAs, i-GaAs, hBN/n-GaAs and n-GaAs. By assumption, the ML-MoS$_2$ in the control sample does not exchange charge with the substrate, therefore its I$_T$/I$_A$ is a measure of the isolated ML-MoS$_2$ excess electron density. The high contribution of trions in the control sample PL spectrum corroborates this assumption since it agrees with the already mentioned intrinsic n-type nature of exfoliated ML-MoS$_2$ samples, mostly related to sulfur vacancies \cite{Zhang2020,Singh2019}. 
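The I$_T$/I$_A$ values reported below follow directly from this peak decomposition. As a minimal illustration of the procedure (a sketch, not the actual fitting code used for our data), the following example fits four Voigt components to a synthetic spectrum and takes the ratio of the fitted trion and A-exciton areas; all peak energies, widths, amplitudes and the noise level are illustrative assumptions.
\begin{verbatim}
# Sketch: decompose a PL spectrum into four Voigt peaks (L, T, A, B) and
# compute the trion-to-A-exciton intensity ratio I_T/I_A from the areas.
# All numbers below are illustrative, not measured values.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def four_voigts(E, *p):
    # p holds (area, center, sigma, gamma) for each of the four peaks
    y = np.zeros_like(E)
    for area, center, sigma, gamma in np.reshape(p, (4, 4)):
        y += area * voigt_profile(E - center, sigma, gamma)
    return y

E = np.linspace(1.70, 2.05, 500)              # photon energy (eV)
p0 = [0.2, 1.78, 0.02, 0.02,                  # L  (localized states)
      1.0, 1.84, 0.02, 0.02,                  # T  (trion)
      0.9, 1.88, 0.02, 0.02,                  # A  exciton
      0.3, 2.00, 0.03, 0.03]                  # B  exciton
spectrum = four_voigts(E, *p0) + 0.01 * np.random.randn(E.size)

popt, _ = curve_fit(four_voigts, E, spectrum, p0=p0)
I_T, I_A = popt[4], popt[8]                   # fitted integrated intensities
print("I_T/I_A =", round(I_T / I_A, 2))
\end{verbatim}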
Comparing the I$_T$/I$_A$ of the MoS$_2$/x-GaAs samples with the control sample we can infer that the n-GaAs substrate is the only one that transfers electrons to the monolayer, while inversely the i-GaAs and p-GaAs substrates receive electrons transferred from the MoS$_2$ monolayer. The excess charge density on ML-MoS$_2$ is controlled by its Fermi level position. We expect that when the ML-MoS$_2$ and the substrate enter into contact they exchange charge carriers until their Fermi levels align, achieving an equilibrium state. This process may change the surface potential of GaAs causing some band bending but its Fermi level position is fixed by the bulk far from the surface. For ML-MoS$_2$, however, charge exchange will change its Fermi level position. Thus, the relations between I$_T$/I$_A$ among the samples give us a hint about the Fermi level change in ML-MoS$_2$ when it comes into contact with each substrate. Then, from a band alignment point of view, we may say that the Fermi level of ML-MoS$_2$, before contacting the substrate, is positioned somewhere between the Fermi level of the intrinsic and n-doped GaAs substrates. Nevertheless, from the I$_T$/I$_A$ connections alone, we cannot determine the band alignments for the different heterojunctions. \begin{table}[h!] \begin{center} \begin{tabular}{lccc} \hline \hline Sample & I$_A$ & I$_T$ & I$_T$/I$_A$ \\ \hline MoS$_2$/n-GaAs & 19.66 & 26.25 & 1.33 \\ MoS$_2$/i-GaAs & 27.86 & 19.12 & 0.69\\ MoS$_2$/p-GaAs & 37.13 & 16.31 & 0.44\\ MoS$_2$/hBN/n-GaAs & 666.82 & 745.26 & 1.12\\ \hline \hline \end{tabular} \caption{Integrated photoluminescence intensities of the A exciton, I$_A$, and the trion, I$_T$, emission peaks of ML-MoS$_2$ in each sample, in arbitrary units, and their ratio, I$_T$/I$_A$.} \label{Tab1:PL_T_A_ratio} \end{center} \end{table} In order to elucidate the band offsets of the three ML-MoS$_2$/x-GaAs heterojunctions, we used Scanning Kelvin Probe Microscopy (SKPM), which measures the contact potential difference (CPD) between the cantilever tip of an atomic force microscope and the surface of the sample \cite{Nonnenmacher1991, Melitz2011}. In the biased tip configuration, which we used for the SKPM measurements, by measuring the CPD and knowing the work function of the tip, $\phi_{tip}$, it is possible to determine the surface work function of the sample, $\phi_{samp}$, through the relation $e \cdot CPD=\phi_{tip}-\phi_{samp}$, where $e$ is the electron charge. We performed the experiments under standard ambient conditions, which can affect the precision of the specific values. Nevertheless, all uncertainties affect all samples equally, and we can confidently extract relationships between the surface work functions of the different materials in each sample measured. \begin{figure}[b!] \centering \includegraphics[width=\linewidth]{Fig3_SKPM_warm_rev1-comp.pdf} \caption{Contact potential difference maps obtained by SKPM of the studied heterojunctions. ML-MoS$_2$ (substrate) analyzed areas are delimited by black (white) dashed lines. The type of substrate is indicated in each map in a region of the image that corresponds to the substrate.} \label{Fig3:SKPM} \end{figure} \begin{figure*}[ht!] \includegraphics[width=\linewidth]{Fig4_Band_offset-Rev.pdf} \caption{\label{Fig4:Bandoffset}Schematic band offsets of ML-MoS$_2$ and x-GaAs before contact (a), and after ML-MoS$_2$/x-GaAs heterojunction formation (b-d). 
E$_{vac}$, CB, FL, VB, $\chi$, and $\Phi$ represents the vacuum level, the bottom of the conduction band, the Fermi level, the top of the valence band, the electron affinity, and the work function respectively.} \end{figure*} To extract the CPD at each material we used the mean value of homogeneous areas of the monolayers, shown in Figure \ref{Fig3:SKPM} by dashed black lines, and the clean areas at each x-GaAs substrate, shown by dashed white lines in the figure. Optical images and sample details are shown in the SI. Therefore, it is possible to determine the difference between the work functions of the ML-MoS$_2$ and its corresponding substrate by the negative of the value of the CPD contrast, or $\Delta\phi_{MoS_2-GaAs}= \phi_{MoS_2}-\phi_{GaAs}\ = e\ (V_{GaAs} - V_{MoS_2})$ (see Table \ref{Tab2:CPD_WF}). We observe that the obtained difference is positive for all samples, which indicates that the work function of MoS$_2$ is larger than the work function at the surface of GaAs in all samples. To relate the work function of a material with its conduction and valence band edges we need to know the electron affinity $\chi$ and band gap E$_g$ of the material. The GaAs parameters are well established in literature: $\chi _{GaAs}=4.07~$eV and $E_{g,GaAs}=1.42~$eV \cite{Sze2006}. For ML-MoS$_2$, reports in the literature have a range of $\chi _{MoS_2}=$ 3.74 - 4.1 eV \cite{Keyshar2017, Guo2016} and the bandgap will suffer modulations owing to the dielectric screening from the environment, which in our samples should imply a value of $E_{gMoS_2} \sim 2.2~$eV considering the dielectric constant of GaAs as $\kappa _{GaAs}$= 12.88 \cite{Ryou2016}. To propose a band alignment for our heterojunctions we will consider $\chi _{MoS_2}=4.0~$eV, which was the value used in other works on MoS$_2$/GaAs \cite{Lin2015, Xu2016, Padma2019} and the electronic bandgap. \begin{table}[h!] \begin{center} \begin{tabular}{lcc} \hline \hline Sample & $\Delta\phi_{MoS_2-GaAs}$ (eV) \\ \hline MoS$_2$/n-GaAs & 0.23 $\pm$ 0.04 \\ MoS$_2$/i-GaAs & 0.05 $\pm$ 0.05 \\ MoS$_2$/p-GaAs & 0.22 $\pm$ 0.03 \\ MoS$_2$/hBN/n-GaAs & 0.14 $\pm$ 0.01 \\ \hline \hline \end{tabular} \caption{Work function difference between ML-MoS$_2$ and GaAs extracted from the SKPM maps shown on Figure \ref{Fig3:SKPM}.} \label{Tab2:CPD_WF} \end{center} \end{table} Since the position of the conduction band can be described as $E_c=\phi - \chi$, with respect to the Fermi level, we approximate the difference in the conduction band edge between the MoS$_2$ layer and the x-GaAs substrate by $\Delta E_c=E_{c,MoS_2}-E_{c,GaAs}=\Delta\phi_{MoS_2-GaAs}+\Delta\chi_{GaAs-MoS_2}$. As both quantities are positive, the conduction band edge of MoS$_2$ is always at a higher energy than that of GaAs, with their Fermi levels aligned. Figure \ref{Fig4:Bandoffset} presents schematically the band offsets we propose for the ML-MoS$_2$/x-GaAs heterojunctions based on our analysis of the PL and SKPM results. In Figure \ref{Fig4:Bandoffset}a we present the band edges and Fermi levels of each material before contact. Fermi levels are represented by the yellow dotted lines and, in GaAs, are labeled n, i, and p for the type of substrate doping. As inferred from the PL I$_T$/I$_A$ analysis, we position the Fermi level of MoS$_2$ between those of i-GaAs and n-GaAs. Heterojunction band alignments after contact are shown in Figures \ref{Fig4:Bandoffset}b, \ref{Fig4:Bandoffset}c and \ref{Fig4:Bandoffset}d. 
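To make the band-offset estimate concrete, the following minimal sketch reproduces the arithmetic described above, using the $\Delta\phi_{MoS_2-GaAs}$ values of Table \ref{Tab2:CPD_WF} together with the literature parameters quoted in the text ($\chi_{GaAs}=4.07$ eV, $\chi_{MoS_2}=4.0$ eV, $E_{g,GaAs}=1.42$ eV and $E_{g,MoS_2}\sim2.2$ eV); the classification at the end simply checks whether the GaAs gap is nested inside the MoS$_2$ gap (type I) or not.
\begin{verbatim}
# Sketch of the band-offset estimate: E_c = phi - chi with aligned Fermi
# levels, so dEc = dphi + (chi_GaAs - chi_MoS2) and dEv = dEc - dEg.
chi_GaAs, chi_MoS2 = 4.07, 4.00          # electron affinities (eV)
Eg_GaAs, Eg_MoS2 = 1.42, 2.20            # band gaps (eV)
delta_phi = {"n-GaAs": 0.23, "i-GaAs": 0.05, "p-GaAs": 0.22}  # Table 2 (eV)

for substrate, dphi in delta_phi.items():
    dEc = dphi + (chi_GaAs - chi_MoS2)   # conduction-band offset (MoS2 - GaAs)
    dEv = dEc - (Eg_MoS2 - Eg_GaAs)      # valence-band offset (MoS2 - GaAs)
    kind = "type I" if (dEc > 0 and dEv < 0) else "type II"
    print(substrate, ": dEc = %+.2f eV, dEv = %+.2f eV ->" % (dEc, dEv), kind)
\end{verbatim}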
According to our proposal, ML-MoS$_2$ and GaAs form type I heterojunctions for all GaAs doping levels studied. The SKPM data does not give a quantitative, exact value of the conduction band offset in the heterojunction (see SI for more details on the technique). Nevertheless, there is a clear indication that the steps in conduction band at the junction are of comparable magnitude for all three types of GaAs substrates. After establishing the conduction band step at the junction, the position of the Fermi level is set by the doping of the GaAs substrate, according to the assumption that the Fermi level is pinned down by the bulk of the material. This determines the position of the Fermi level in the MoS$_2$ side of the junction. As the estimated differences in work function obtained from SKPM are between the surface work functions, the band alignments we present in Figure \ref{Fig4:Bandoffset} assume that the surface work function of GaAs is the same as its bulk work function, or that the GaAs bands are flat. We prefer not to speculate on the curvature of the bands inside GaAs because our experiments do not provide sufficient evidence to support it. This means that, although at the interface the band positions we proposed should be correct, the curvature of the GaAs bands may change as one moves from the surface to the bulk, which means that the Fermi level positioning should also be reexamined. Therefore, we propose the band alignments in Figure \ref{Fig4:Bandoffset} as a first approximation, to contribute to the discussion and analysis of the surface and charge dynamics in these 2D/3D heterostructures and we expect to instigate other works aiming to elucidate the shape of the bands inside GaAs on these types of junctions, since band bending can affect the operation of devices based on them. Most of the work done on MoS$_2$/GaAs junctions so far employ n-doped GaAs \cite{Xu2016, Lin2015, Padma2019, Sarkar2020, Jia2019}. Nearly all of these works propose a type II band alignment for MoS$_2$/n-GaAs. That is not in complete disagreement with our proposal, since the transition to type II alignments for the MoS$_2$/n-GaAs junctions would only imply that the conduction band step is larger than the one we estimated, which is based in comparisons of the experimental data for the three types of MoS$_2$/x-GaAs junction and the control sample. Furthermore, it is worth pointing out that the devices studied in these other works were built with MoS$_2$ produced by chemical vapour deposition (CVD) \cite{Xu2016, Lin2015, Padma2019}, thermal decomposition \cite{Jia2019} or solution processing \cite{Sarkar2020, Zhang2017}, while we used exfoliated ML-MoS$_2$. That could be relevant since it is well known that the defects, and thus doping, of MoS$_2$ monolayers obtained through each method can be quite different. Our proposed type I band alignments for all the studied heterojunctions implies that the mechanism behind the quenching of the ML-MoS$_2$ photoluminescence in the heterojunctions should be the transfer of excitons from MoS$_2$ to GaAs and not exciton dissociation through the junction. Additionally, the different Fermi level positions in ML-MoS$_2$ on different substrates allows us to explain the variations in the relative intensity of the emission from the trion and the A exciton that were observed. In conclusion, we presented the photoluminescence spectra of monolayers of MoS$_2$ on commercial GaAs substrates with different doping levels. 
The results revealed an important reduction of the PL intensity of the monolayers when compared with a control sample. In addition, the spectra presented a dependence of the ratio of the trion to exciton emission intensities on the doping level of the substrate. This behavior indicates different amounts of excess charge in the single layers, related to a charge exchange process with their substrates. Scanning Kelvin probe microscopy measurements provided an estimation of the difference in work function between the materials in the heterojunctions and allowed us to propose a type I band alignment for all MoS$_2$/x-GaAs heterojunctions we studied. Our proposal is consistent with the analysis of the photoluminescence measurements and suggests exciton migration as the main mechanism behind the PL intensity reduction. The results reported here contribute to the understanding of the charge transfer processes in 2D/3D semiconductor heterojunctions, which are of central importance for the implementation of the next generation of electronic and optoelectronic devices. \section*{Supplementary Material} See Supplementary Information for additional experimental details regarding the fabrication, characterization and spectral processing of the heterojunctions. \begin{acknowledgments} An important part of the work reported here was done at the LCPNano laboratory at UFMG. We thank Freddie Hendriks for the calculations of thin-film interference effects shown in the SI. This work was financially supported by the Brazilian funding agencies CNPq, FAPEMIG and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES). \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{Conflict of interest} The authors have no conflicts to disclose. \section*{References}
\section{Introduction} The interaction between light and matter is an important subject in the field of quantum optics. Studying light from a quantum perspective leads to some interesting phenomena that differ from classical ones. One of the most famous phenomena is induced transparency (such as electromagnetically/optomechanically induced transparency)~\cite{001,002,003,004,005,006,007,008,009,010,011,012,013,014,015,016}, as well as induced absorption~\cite{003,014} and induced amplification~\cite{015,016,017,018,019,020}, which have been widely studied in recent decades. In addition, signal amplification, whose aim is to increase the signal-to-noise ratio, is crucial in the fields of quantum information and quantum optics. It is known that optical amplification usually results from population inversion under the action of a pumping field and stimulated radiation. It can directly amplify optical signals without converting them into electrical ones, so that the system is transparent to the format and rate of the signals, making the whole optical fiber communication system simpler and more flexible~\cite{015,016}. There are many mechanisms for light amplification, such as adding an external drive or changing the detuning conditions. Through strong photon tunneling, a double-cavity optomechanical system (OMS) not only shows optomechanically induced absorption, optomechanically induced amplification, and simple normal mode splitting (NMS); by adjusting the photon-tunneling strength, the transition from optomechanically induced absorption to optomechanically induced amplification can be further realized. In this article, we build our mechanism by adding an active cavity. The added-gain scheme is widely used in quantum information and quantum communication because it is convenient and easy to adjust, and it may be very useful for optical and microwave amplifiers~\cite{021}. Parity-time ($\mathcal{PT}$) symmetry, under which a non-Hermitian Hamiltonian can have a real spectrum, was first proposed by Bender in 1998 and has attracted wide attention~\cite{022,023,024,025,026,027,028,029,030}. $\mathcal{PT}$-symmetry requires a strict balance between loss and gain. However, this balance condition may be too difficult to achieve in realistic implementations, especially when tiny disturbances are inevitable. A $\mathcal{PT}$-symmetric-like system, which does not require this strict balance, can still follow the predictions of $\mathcal{PT}$-symmetry in many cases and has thus attracted considerable attention~\cite{031,032}. At the exceptional point, where the system undergoes the transition from the $\mathcal{PT}$-symmetric-like phase to the broken $\mathcal{PT}$-symmetric-like phase, pairs of eigenvalues collide and become complex; this behavior has been manifested in various physical systems, such as photonics, electronics, acoustics, and phononics. OMIA in a $\mathcal{PT}$-symmetric OMS has been achieved in whispering-gallery-mode microtoroidal cavities. Common $\mathcal{PT}$-symmetric systems consist of two cavities, but there are also examples of single cavities achieving effective gain by introducing external drives or other means~\cite{027}. In the past few years, cavity magnonics, a new interdisciplinary field, has attracted much attention. It mainly explores the interaction between confined electromagnetic fields and magnons, especially in yttrium iron garnet (YIG)~\cite{033,034,035,036,037,038,039,040,041}.
The reason is that the Kittel mode within YIG has a low damping rate and exhibits strong magnonic nonlinearities~\cite{039}. In addition, the high spin density of magnons allows strong coupling between magnons and photons, giving rise to quasiparticles, i.e., cavity-magnon polaritons. Strong coupling between magnons and cavity photons can be observed at both cryogenic and room temperatures. Consequently, a large number of quantum-information-related problems have been studied on this platform, including the coupling of magnons with superconducting qubits, the observation of bistability~\cite{042,043}, cavity spintronics, level attraction of cavity magnon polaritons, and magnon dark modes. Other interesting phenomena, including magnon-induced transparency (MIT), magnomechanically induced transparency (MMIT), and magnetically controlled slow light, have also been studied~\cite{044}. In this paper, we utilize a cavity-magnomechanical system, which consists of a YIG sphere placed inside a three-dimensional microwave cavity that is connected to an auxiliary cavity, to realize microwave amplification. By discussing the absorption and transmission properties, we obtain amplification in the context of a $\mathcal{PT}$-symmetric-like cavity magnomechanical system. The remaining parts are organized as follows. In Sec.~\ref{s2}, we introduce the model of our proposal. In Sec.~\ref{s3}, we explore magnomechanically induced amplification in the $\mathcal{PT}$-symmetric-like cavity magnomechanical system and the associated slow light propagation. In Sec.~\ref{s4}, we present the conclusion of our work. \begin{figure} \centering \includegraphics[width=1\linewidth,height=0.18\textheight]{1.eps} \hspace{0in}% \caption{Schematic of the setup studied in this paper. A cavity magnomechanical system consists of one ferromagnetic yttrium iron garnet (YIG) sphere placed inside a passive microwave cavity, which is connected to an auxiliary cavity. A bias magnetic field is applied in the $z$ direction on the sphere to excite the magnon mode, which is strongly coupled with the cavity field. In the YIG sphere, the bias magnetic field activates the magnetostrictive interaction. The magnomechanical coupling strength of the magnon depends on the diameter of the sphere and the direction of the external bias field~\cite{045}. We assume that the magnomechanical interaction in the YIG is directly enhanced by a microwave drive (in the $y$ direction) applied to its magnon mode. Cavity, phonon, and magnon modes are labeled $a_i, b, m$ ($i=1,2$).} \label{fig1} \end{figure} \section{Model and Hamiltonian}\label{s2} We use a hybrid cavity magnomechanical system that consists of one high-quality YIG sphere placed inside a microwave cavity which is connected to another empty cavity, as shown in Fig.~\ref{fig1}. The YIG sphere is $250\mu{m}$ in diameter and contains ferric ions ${\rm Fe}^{3+}$ of density $\rho=4.22\times10^{27}m^{-3}$. This corresponds to a total spin $S=5/2\rho{V}_m=7.07\times10^{14}$, where $V_m$ is the volume of the YIG sphere and $S$ is the collective spin operator, which satisfies the algebra $\left[S_{\alpha},S_{\beta} \right]=i\varepsilon^{\alpha\beta\gamma}S_{\gamma}$. A uniform bias magnetic field (along the $z$ direction) is applied on the sphere, exciting the magnon mode that is then coupled to the first cavity field via the magnetic-dipole interaction. In addition, the excitation of the magnon mode (i.e.
Kittel mode) inside the sphere leads to a variable magnetization that results in the deformation of its lattice structure. The magnetostrictive force causes vibrations of the YIG, resulting in magnon-phonon interaction within YIG spheres~\cite{045}. It is noted that the single-magnon magnomechanical coupling strength depended on sphere diameter and direction of the external bias field is very weak. In this case, magnomechanical interaction of YIG can be enhanced by directly driving its magnon mode via an external microwave field. Furthermore, the first cavity is not only coupled to the second cavity, but also driven by a weak probe field. With consideration of the situation, the Hamiltonian for the whole system reads~\cite{044,046} \begin{equation}\label{e001} \begin{aligned} \mathcal{H}/\hbar&=\omega_m\hat{m}^{\dag}\hat{m}+\omega_{a_1}\hat{a}^{\dag}_1\hat{a}_1+\omega_{a_2}\hat{a}^{\dag}_2\hat{a}_2+\omega_b\hat{b}^{\dag}\hat{b}\\ &+g_1(\hat{m}^{\dag}\hat{a}_1+\hat{m}\hat{a}^{\dag}_1)+g_2\hat{m}^{\dag}\hat{m}(\hat{b}+\hat{b}^{\dag})\\ &+J(\hat{a}^{\dag}_1\hat{a}_2+\hat{a}^{\dag}_2\hat{a}_1)+i\Omega(\hat{m}^{\dag}e^{-i\omega_{pu}t}-\hat{m}e^{i\omega_{pu}t})\\ &+i\varepsilon_{pr}(\hat{a}^{\dag}_1e^{-i\omega_{pr}t}-\hat{a}_1e^{i\omega_{pr}t}),\\ \end{aligned} \end{equation} where $ \hat{a}_j^{\dag} $($j=1,2$), $ \hat{m}^{\dag} $and $ \hat{b}^{\dag} $ ($ \hat{a}_j $ , $\hat{ m} $ and $ \hat{ b} $) are the creation (annihilation) operators of the $ j $th cavity, magnon and phonon, respectively. They all satisfy the standard commutation relations for bosons. $ \omega_{a_j} $, $\omega_m $, $ \omega_b $ represent the resonance frequencies for the $ j $th cavity, magnon and phonon, respectively. $g_1 (J)$ denotes the coupling strength between the first cavity mode and magnon (the second cavity), and $g_2$ is the coupling constant between magnon and phonon. It is noted that the frequency $\omega_m$ is determined by the gyromagnetic ratio $\gamma$ and external bias magnetic field $H$ i.e., $\omega_m=\gamma H$ with $\gamma/2\pi=28\rm GHz$. In addition, $\Omega=\sqrt{5}/4\gamma \sqrt{N}B_0$ is the Rabi frequency, which is dependent of the coupling strength of the driving field with amplitude $B_0$ and frequency $\omega_{pu}$. And $\omega_{pr}$ is the probe field frequency having amplitude $\varepsilon_{pr}=\sqrt{2P_p\kappa_{1}/\hbar\omega_{pr}}$. It should be noted that we have ignored the nonlinear term $K\hat{m}^{\dag}\hat{m}^{\dag}\hat{m}\hat{m}$ in Eq.(1) that may arise due to strongly driven magnon mode~\cite{043} so as to ${K\left| \left\langle m\right\rangle \right| ^3}{\ll }\Omega$. With the rotating wave approximation, we can rewrite the whole Hamiltonian as \begin{equation}\label{e002} \begin{aligned} \mathcal{H}/\hbar&=\Delta_{m}\hat{m}^{\dag}\hat{m}+\Delta_{a_1}\hat{a}^{\dag}_1\hat{a}_1+\Delta_{a_2}\hat{a}^{\dag}_2\hat{a}_2+\omega_b\hat{b}^{\dag}\hat{b}\\ &+g_1(\hat{m}^{\dag}\hat{a}_1+\hat{m}\hat{a}^{\dag}_1)+g_2\hat{m}^{\dag}\hat{m}(\hat{b}+\hat{b}^{\dag})\\ &+J(\hat{a}^{\dag}_1\hat{a}_2+\hat{a}^{\dag}_2\hat{a}_1)+i\Omega(\hat{m}^{\dag}-\hat{m})\\ &+i\varepsilon_{pr}(\hat{a}^{\dag}_1e^{-i\delta t}-\hat{a}_1e^{i\delta t}),\\ \end{aligned} \end{equation} with $\Delta_{a_j}=\omega_{a_j}-\omega_{pu}$($j=1,2$), $\Delta_m=\omega_m-\omega_{pu}$, and $\delta=\omega_{pr}-\omega_{pu}$. 
In order to obtain the evolution of $a_j(t), m(t)$ and $b(t)$, we use quantum Heisenberg-Langevin equations, which can be expressed by \begin{equation}\label{e003} \begin{aligned} \dot{\hat{a_1}}&=-i\Delta_{a_1}\hat{a}_1-ig_1\hat{ m}-\kappa_{1}\hat{a}_1+\varepsilon_{pr}e^{-i\delta t}\\ &+\sqrt{2\kappa_1}{\hat{a}_1}^{in}(t)-iJ\hat{a}_2,\\ \dot{\hat{a_2}}&=-i\Delta_{a_2}\hat{a}_2-\kappa_{2}\hat{a}_2+\sqrt{2\kappa_2}{\hat{a}_2}^{in}(t)-iJ\hat{a}_1,\\ \dot{\hat{m}}&=-i\Delta_m\hat{m}-ig_1\hat{a}_1-\kappa_m\hat{m}-ig_{2}\hat{m}(\hat{ b}+\hat{b}^{\dag})\\ &+\sqrt{2\kappa_m}\hat{m}^{in}(t)+\Omega,\\ \dot{\hat{b}}&=-i\omega_b\hat{b}-ig_2\hat{m}^{\dag}\hat{m}-\kappa_{b}\hat{ b}+\sqrt{2\kappa_b}\hat{b}^{in}(t) \end{aligned} \end{equation} where $\kappa_1(\kappa_2),\kappa_b$ and $\kappa_m$ are the decay rates of the cavities, phonon and magnon modes, respectively. ${{\hat{a}_1}^{in}(t)}$, ${{\hat{a}_2}^{in}(t)}$, ${\hat{b}^{in}(t)}$ and ${\hat{m}^{in}(t)}$ are the vacuum input noise operators which have zero mean values and satisfies ${\left\langle \hat{q}^{in}\right\rangle}=0 (q=a_1, a_2, m, b) $. The magnon mode $m$ is strongly driven by a microwave field that causes a large steady-state amplitude corresponds to $ \arrowvert\langle m_s \rangle\arrowvert \gg 1 $. Moreover, owing to the magnon coupled to the cavity mode through the beam-splitter-type interaction, the two cavity fields also exhibit large amplitudes $ \arrowvert\langle a_{js} \rangle\arrowvert\gg 1 $. Then we can linearize the quantum Langevin equations around the steady-state values and take only the first-order terms in the fluctuating operator:${\left\langle\hat{O}\right\rangle}=O_s+{\hat{O}_+}{e^{-i\delta t}}+ {\hat{O}_-}{e^{i\delta t}}$~\cite{043}, where $\hat{O}=a_1, a_2, b, m.$ the steady-state solutions are given by \begin{equation}\label{e004} \begin{aligned} a_{1s}&=\frac{-(ig_1 m_s +iJa_{2s})}{i\Delta_{a_1}+\kappa_1}, a_{2s}=\frac{-iJa_{1s}}{i\Delta_{a_2}+\kappa_2},\\ b_s&=\frac{-ig_{2}\left| m_s\right|^2 }{i\omega_b+\kappa_b},\\ m_s&=\frac{-ig_1a_{1s}+\Omega}{i\widetilde{\Delta}_m+\kappa_m},\\ \widetilde{\Delta}_m&=\Delta_m+g_2(b_s+{b_s}^*)\\ \end{aligned} \end{equation} In order to achieve our motivation of signal amplification, we neglect off resonance terms to let $\hat{O}_-=0$, but $\hat{O}_+$ safisfying the relations \begin{equation}\label{e006} \begin{aligned} (i\lambda-\kappa_{1})\hat{{a}_1}_+-ig_1\hat{m}_+-iJ\hat{a_2}_++\varepsilon_{pr}&=0,\\ (i\lambda-\kappa_{2})\hat{{a}_2}_+-iJ\hat{a_1}_+&=0,\\ (i\lambda-\kappa_{m})\hat{m}_+-ig_1\hat{{a}_1}_+-iG\hat{b}_+&=0,\\ (i\lambda-\kappa_{b})\hat{b}_+-iG^\ast\hat{m}_+&=0,\\ \end{aligned} \end{equation} where we have set $G=g_2m_s$, $\lambda=\delta-\omega_b$, ${\omega_a}_i\gg\kappa_i$ $(i=1,2)$, and $\Delta_{a_1}=\Delta_{a_2}=\widetilde{\Delta}_m=\omega_b$. In this case, we can easily obtain \begin{equation}\label{e007} \begin{aligned} \hat{{a}_1}_+&=\frac{\varepsilon_{pr}}{\kappa_{1}-i\lambda+\frac{J^2}{\kappa_{2}-i\lambda}+\frac{{g_1}^2}{\kappa_{m}-i\lambda+\frac{\left|G \right|^2} {\kappa_{b}-i\lambda}}}.\\ \end{aligned} \end{equation} By use of the input-output relation for the cavity field $\varepsilon_{out}=\varepsilon_{in}-2\kappa_{1}\left\langle a_{1+} \right\rangle $ and setting $\varepsilon_{in}=0$, the amplitude of the output field can be written as \begin{equation}\label{e008} \begin{aligned} \varepsilon'_{out}&=\dfrac{\varepsilon_{out}}{\varepsilon_{pr}}=\frac{2\kappa_{1}\hat{{a}_1}_+}{\varepsilon_{pr}}. 
\end{aligned} \end{equation} The real and imaginary parts of the output field are $\rm Re$ $[\varepsilon'_{out}]=\kappa_{1}(\hat{a}_{1+}+\hat{a}_{1+}^{*})/\varepsilon_{pr}$ and $\rm Im$ $[\varepsilon'_{out}]=\kappa_{1}(\hat{{a}}_{1+}-\hat{{{a}}}^*_{1+})/\varepsilon_{pr}$. These factors describe the absorption and dispersion of the system, respectively. \begin{figure} \centering \includegraphics[width=0.48\linewidth,height=0.16\textheight]{2a.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{2b.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{2c.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{2d.eps} \hspace{0in}% \caption{The transmission spectrum $\left| t_p\right| ^2$ of the probe field as a function of $\delta/\omega_b$ when only the interaction between the two cavities is nonzero: (a) $J/2\pi=0.6\rm MHz$, (b) $J/2\pi=0.8\rm MHz$, (c) $J/2\pi=2.0\rm MHz$ and (d) $J/2\pi=6\rm MHz$. } \label{fig2} \end{figure} \section{Induced amplification and slow light propagation in $\mathcal{PT}$-symmetric-like magnomechanical systems} \label{s3} For the numerical calculation, we use parameters chosen from a recent experiment on a hybrid magnomechanical system: $\omega_{a_1}/2\pi=\omega_{a_2}/2\pi=10\rm {GHz}$, $\omega_b/2\pi=10\rm {MHz}$, $\kappa_b/2\pi=100\rm Hz$, $\omega_m/2\pi=10\rm GHz$, $\kappa_{1}/2\pi=2.0\rm MHz$, $\kappa_m/2\pi=0.1\rm MHz$, $g_1/2\pi=1.0\rm MHz$, $G/2\pi=3.5\rm MHz$, $\Delta_{a_1}=\Delta_{a_2}=\widetilde{\Delta}_m=\omega_b$, and $\omega_d/2\pi=10\rm GHz$~\cite{026,033,034}. First, we consider the transmission rate $\left| t_p\right| ^2$ as a function of the probe detuning $\delta/\omega_b$ in the context of the parity-time-($\mathcal{PT}$-)symmetric-like magnomechanical system. From Eq.~(\ref{e008}), the rescaled transmission corresponding to the probe field can be expressed as \begin{equation}\label{e009} \begin{aligned} t_p&=1-{\frac{2\kappa_{1}\hat{{a}_1}_+}{\varepsilon_{pr}}}.\\ \end{aligned} \end{equation} We first depict the transmission spectrum of the probe field against the scaled detuning $\delta/\omega_b$ for different values of $J$ in Fig.~\ref{fig2}, where the phonon-magnon and photon-magnon coupling rates are set to zero, i.e., ${G}={g_1}=0$. From Fig.~\ref{fig2}(a), we can observe that the transmission peak near $\delta=\omega_b$, which is associated with the coupling between the two cavities, can be much larger than 1. The reason is that the gain cavity can scatter photons into the dissipative cavity. From Fig.~\ref{fig2}(a)-(d), the transmission coefficient decreases as the coupling strength between the two cavities increases. In Fig.~\ref{fig2}(c)-(d) we obtain a downward dip flanked by two peaks, and the amplification region simultaneously becomes wider as $J$ gets larger. This means that we can adjust the transmission coefficient and the size of the amplification region by changing the coupling between the two cavities when the system is a double-cavity $\mathcal{PT}$-symmetric-like system and the cavity contains no magnon.
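A minimal numerical sketch of Eqs.~(\ref{e007}) and (\ref{e009}) is given below; it evaluates $\left| t_p\right| ^2$ versus $\delta/\omega_b$ with the parameters listed above. The decay rate of the auxiliary (gain) cavity, $\kappa_2$, is not specified in the parameter list, so the negative (gain) value used here is an illustrative assumption, as is the particular choice of $J$.
\begin{verbatim}
# Sketch: probe transmission |t_p|^2 from Eqs. (7) and (9).
# kappa_2 < 0 models the assumed gain of the auxiliary cavity.
import numpy as np

twopi = 2 * np.pi
omega_b = twopi * 10e6            # phonon frequency (Hz)
kappa_1 = twopi * 2.0e6           # passive-cavity decay rate
kappa_2 = -twopi * 1.6e6          # assumed gain rate of the auxiliary cavity
kappa_m = twopi * 0.1e6           # magnon decay rate
kappa_b = twopi * 100.0           # phonon decay rate
g1 = twopi * 1.0e6                # cavity-magnon coupling
G = twopi * 3.5e6                 # effective magnomechanical coupling
J = twopi * 0.8e6                 # cavity-cavity coupling (illustrative)

delta = np.linspace(0.9, 1.1, 2001) * omega_b
lam = delta - omega_b
a1 = 1.0 / (kappa_1 - 1j * lam
            + J**2 / (kappa_2 - 1j * lam)
            + g1**2 / (kappa_m - 1j * lam + abs(G)**2 / (kappa_b - 1j * lam)))
t_p = 1 - 2 * kappa_1 * a1        # Eq. (9), with a1 = a1_+/epsilon_pr
print("peak |t_p|^2 =", np.max(np.abs(t_p)**2))
\end{verbatim}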
\begin{figure} \centering \includegraphics[width=0.48\linewidth,height=0.16\textheight]{3a.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{3b.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{3c.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{3d.eps} \hspace{0in}% \caption{The transmission spectrum $\left| t_p\right| ^2$ of the probe field as a function of $\delta/\omega_b$ when only the coupling between magnon and phonon is absent, i.e., $G=0$, with $J/2\pi=3.0\rm MHz$: (a) $g_1/2\pi=1.0\rm MHz$, (b) $g_1/2\pi=1.2\rm MHz$, (c) $g_1/2\pi=1.5\rm MHz$ and (d) $g_1/2\pi=2.0\rm MHz$. } \label{fig3} \end{figure} Next, we introduce one more coupling constant and set only the magnon-phonon coupling to zero, $G=0$. Compared with Fig.~\ref{fig2}(c)-(d), another peak appears near $\delta=\omega_b$ in Fig.~\ref{fig3}, which is caused by the magnon-photon coupling. This is because the magnon can scatter the photons of the active field into the probe field via an indirect interaction. From Fig.~\ref{fig3}(b)-(d), the middle peak becomes taller as $g_1$ increases. Hence the light amplification caused by the magnon-photon interaction improves as $g_1$ increases. As the middle peak grows, the heights of the two side peaks stay the same; that is, the light amplification caused by the coupling between the two cavities is not affected by $g_1$. However, the amplification effect is not ideal. \begin{figure} \centering \includegraphics[width=0.48\linewidth,height=0.16\textheight]{4a.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{4b.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{4c.eps} \hspace{0in}% \includegraphics[width=0.48\linewidth,height=0.16\textheight]{4d.eps} \hspace{0in}% \caption{The transmission spectrum $\left| t_p\right| ^2$ of the probe field as a function of $\delta/\omega_b$ when $G/2\pi=2.0\rm MHz$ and $g_1/2\pi=6.0\rm MHz$: (a) $J/2\pi=0.64\rm MHz$, (b) $J/2\pi=0.8\rm MHz$, (c) $J/2\pi=2\rm MHz$ and (d) $J/2\pi=4\rm MHz$. } \label{fig4} \end{figure} We show the transmission spectrum when the three coupling constants are nonzero simultaneously and the magnon-photon coupling is larger than the magnon-phonon coupling, $g_1>G$, in Fig.~\ref{fig4}(a)-(d). We obtain only one amplification peak when the coupling between the two cavities is $J/2\pi=0.64\rm MHz$; another upward peak appears as $J$ increases, and the two peaks have the same height. In this regime the amplification induced by the magnon-phonon and magnon-photon interactions is superior, because the magnon and phonon can also scatter the photons of the active cavity field into the probe field. In addition, since the excited states of the cavity field are pumped into higher energy levels, where they stay long enough, they can also be amplified by stimulated radiation. The amplification region again becomes wider as $J$ gets larger. These results provide an effective way to realize continuous optical amplification and have practical significance for signal enhancement in quantum information processing based on cavity magnonic systems.
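The slow- and fast-light behavior discussed below follows from the phase of the output field. As a minimal numerical sketch (again with an assumed gain rate for the auxiliary cavity, and using the coupling values quoted later for the group-delay plot, with $G/2\pi=2\rm MHz$ read as an assumption), the group delay $\tau_g=\partial\phi_t/\partial\omega_{pr}$ can be evaluated from the same transmission expression by numerical differentiation:
\begin{verbatim}
# Sketch: group delay tau_g = d(arg[eps_out])/d(omega_pr), Eqs. (10)-(11).
import numpy as np

twopi = 2 * np.pi
omega_b = twopi * 10e6
kappa_1, kappa_2 = twopi * 2.0e6, -twopi * 1.6e6   # kappa_2 < 0: assumed gain
kappa_m, kappa_b = twopi * 0.1e6, twopi * 100.0
g1, G, J = twopi * 6.1e6, twopi * 2.0e6, twopi * 6.3e6

delta = np.linspace(0.5, 1.5, 20001) * omega_b
lam = delta - omega_b
a1 = 1.0 / (kappa_1 - 1j * lam + J**2 / (kappa_2 - 1j * lam)
            + g1**2 / (kappa_m - 1j * lam + abs(G)**2 / (kappa_b - 1j * lam)))
eps_out = 2 * kappa_1 * a1                  # output field / epsilon_pr, Eq. (8)
phi_t = np.unwrap(np.angle(eps_out))        # Eq. (10)
tau_g = np.gradient(phi_t, delta)           # Eq. (11): d(delta) = d(omega_pr)
print("max |tau_g| = %.2e s" % np.max(np.abs(tau_g)))
\end{verbatim}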
\begin{figure} \centering \includegraphics[width=0.8\linewidth,height=0.2\textheight]{5a.eps} \hspace{0in}% \includegraphics[width=0.8\linewidth,height=0.2\textheight]{5b.eps} \hspace{0in}% \caption{The transmission spectrum $\left| t_p\right| ^2$ of the probe field as a function of $\delta/\omega_b$ when the three coupling constants are nonzero: (a) $\widetilde{\Delta}_m=0.5\omega_b$, (b) $\widetilde{\Delta}_m=1.5\omega_b$.} \label{fig5} \end{figure} Finally, we plot the transmission spectrum of the probe field against the scaled detuning $\delta/\omega_b$ for different values of $\widetilde{\Delta}_m$. In Fig.~\ref{fig5}(a)-(b), the obvious displacement of the two peaks shows that, in addition to changing the amount of amplification and the size of the amplification region by adjusting the coupling strengths, we can also flexibly change the location of the amplification region by adjusting $\widetilde{\Delta}_m$. Moreover, the phase $\phi_t$ of the output field can be given as \begin{equation}\label{e010} \begin{aligned} \phi_t&=\arg[\varepsilon_{out}]\\ \end{aligned} \end{equation} \begin{figure} \centering \includegraphics[width=0.8\linewidth,height=0.20\textheight]{6.eps} \hspace{0in}% \caption{The group delay $\tau_g$ as a function of $\delta/\omega_b$ when $G=2\rm MHz$, $J/2\pi=6.3\rm MHz$, $g_1/2\pi=6.1\rm MHz$.} \label{fig6} \end{figure} The rapid phase dispersion of the output field causes a group delay, which can be expressed as \begin{equation}\label{e011} \begin{aligned} \tau_g=\frac{\partial\phi_t}{\partial\omega_{pr}}\\ \end{aligned} \end{equation} Fig.~\ref{fig6} shows the group delay $\tau_g$ as a function of the detuning $\delta/\omega_b$ when the three coupling constants are present. We observe double peaks and double dips: the peaks correspond to a positive group delay, i.e., slow light propagation, while the dips correspond to a negative group delay, i.e., fast light propagation. We can realize a group delay of $3.5\times10^{-5}s$, and a tunable switch from slow to fast light can be achieved by adjusting the gain of the active cavity or the coupling constants. \section{Conclusion}\label{s4} In conclusion, we studied the transmission of the probe field in a $\mathcal{PT}$-symmetric-like hybrid magnomechanical system in the microwave regime under a strong control field, and realized ideal induced amplification when the three coupling constants are nonzero simultaneously, because the gain cavity, the magnon and the phonon can all scatter photons into the dissipative cavity. Therefore, our results not only provide rich scientific insight in terms of new physics but also potentially have important long-term technological implications, including the development of on-chip optical systems that support states of light that are immune to backscatter, are robust against perturbations, and feature guaranteed unidirectional transmission. We also achieved a group delay of $3.5\times{10}^{-5}$ seconds. Slowing down the energy velocity of light allows photons to interact with matter long enough to enhance some nonlinear effects. Moreover, because the dispersion curve is relatively flat, a small change in frequency causes a large change in photon momentum, so the device can be made into a more sensitive optical switch. Finally, the slow light effect, by reducing the energy velocity of light, can be useful for photon storage and quantum optical chips. \begin{center} {\bf{ACKNOWLEDGMENTS}} \end{center} This work is supported by the National Natural Science Foundation of China (Grant No.
62165014) and the Fujian Natural Science Foundation (Grant No. 2021J01185).
\section{Introduction} A number of new and upcoming applications require ultra-high data rates that are beyond the capabilities of mmWave-based 5G communication systems. In order to meet these requirements, higher frequencies such as the THz band (0.1-10 THz) are being investigated because of the availability of considerable amounts of unused spectrum in these bands \cite{Tataria_6G,5764977,huq2019terahertz,rappaport2019wireless}. Therefore the THz band, especially the frequencies between 0.1-0.5 THz, has been explored by a number of studies, e.g., \cite{Kurner2014,6898846,khalid2019statistical,ju2021subterahertz}. The recent decision of the Federal Communication Commission (FCC), the US spectrum regulator, to provide experimental licenses in this band has fostered additional research interest, and this band is widely expected to be an important part of 6G wireless systems \cite{tataria20216g}. It is important to know the characteristics of a wireless channel before the design of a communication system that is to operate in it can proceed. Channel sounding measurements and their statistical analysis are an essential first step towards the understanding of a channel and consequently towards the design and deployment of a wireless system \cite{molisch2012wireless}. Since channel characteristics are highly dependent on the operating frequency range as well as the environment and the scenarios a wireless channel operates in, channel sounding campaigns need to be performed in the key scenarios of interest. Existing channel measurements in the THz bands are mostly limited to short-distance indoor channels, see \cite{priebe2010measurement,6898846,6574880,khalid2019statistical,abbasi2020channel,xing2021millimeter}, usually as a result of measurement setup constraints; see also \cite{han2021terahertz} and references therein. However, recently there has been some progress on longer distances and outdoor scenarios as well. These include the first long-distance (100 m) double-directional channel measurements for the 140 GHz band, which were reported in 2019 \cite{abbasi2019double,abbasi2020double} by our group, as well as our recent works \cite{abbasi2021ultra,abbasi2021double, Abbasi2021THz} where we target device-to-device (D2D) scenarios, where both Tx and Rx are at about 1.6 m height. Another recent series of papers \cite{xing2021propagation,ju2021subterahertz,9558848} also reported channel measurements, path loss and statistical modeling at 140 GHz over longer channel lengths in an urban scenario; in those measurements the Tx is placed at 4 m above the ground (i.e., typical lamppost height). Our current paper aims to provide analysis for a scenario where the Tx is significantly higher, at 11.5 m, which is comparable to the height of a typical microcell base station height. This paper presents the results of an extensive measurement campaign in this environment, with sufficient points to allow a meaningful statistical evaluation. To the best of our knowledge, such a detailed channel measurement campaign for cases where Tx is elevated more than 10 m above the ground has not been reported before in the THz band. The results of this paper are based on ultra-wideband double-directional channel measurements for a 1 GHz bandwidth between 145-146 GHz\footnote{Some authors prefer to use the term "THz" to identify the frequency range $>300$ GHz while using "high mmWave", "sub-THz" or `low-THz' for frequencies between 100-300 GHz. Other authors use the term "THz" for both these cases. 
Since the latter is the most widely used terminology, we will employ it in this paper as well}, conducted at 26 different transmitter (Tx) - receiver (Rx) location pairs. 13 of these represent line-of-sight (LoS) scenarios with direct Tx-Rx distances ranging from nearly 20 m to 83 m, while the other 13 are non-line-of-sight (NLoS) cases with direct Tx-Rx distances also in approximately the same range. Based on the nearly 110,000 directional impulse responses we collected from these measurements, we model the path loss, shadowing, delay spread, angular spread and multipath (MPC) power distribution for both LoS and NLoS cases. Our detailed analysis includes results both for the maximum-power-beam direction (max-dir) and the omni-directional characteristics as well as the distance dependence of the key parameters, and their relevant confidence intervals for the various model fits. The remainder of this paper is organized as follows. In Section II, we describe the channel sounding setup and the measurement locations. Key parameters of interest and their processing is described in Section III. The results of the measurements and modeling are presented in Section IV. We finally conclude the manuscript in Section V. \section{Measurement equipment and site} \subsection{Testbed description} \begin{figure}[t!] \centering \includegraphics[width=12cm]{setup.png} \caption{Channel sounding setup.} \label{fig:setup} \end{figure} \begin{table}[t!] \centering \caption{Setup parameters.} \label{table:parameters} \begin{tabular}{|l|l|l|} \hline \textbf{Parameter} & \textbf{Symbol} & \textbf{Value} \\ \hline\hline \textit{Measurement points} & $N$ & 1001 \\ \textit{Tx height} & $h_{Tx}$ & 11.5 m \\ \textit{Rx height} & $h_{Rx}$ & 1.7 m \\ \textit{Start frequency} & $ f_{start} $ & 145 GHz \\ \textit{Stop frequency} & $ f_{stop} $ & 146 GHz \\ \textit{Bandwidth} & $BW$ & 1 GHz \\ \textit{IF bandwidth} & $IF_{BW}$ & 10 KHz \\ \textit{THz IF} & $ f_{THz IF} $ & 279 MHz \\ \textit{Antenna 3 dB beamwidth} & $\theta_{3dB}$ & 13$^{\circ}$ \\ \textit{Tx Az rotation range} & $\phi_{Tx}$ & [-60$^{\circ}$,60$^{\circ}$] \\ \textit{Tx Az rotation resolution} & $\Delta \phi_{Tx}$ & 10$^{\circ}$ \\ \textit{Rx Az rotation range} & $\phi_{Rx}$ & [0$^{\circ}$,360$^{\circ}$] \\ \textit{Rx Az rotation resolution} & $\Delta \phi_{Rx}$ & 10$^{\circ}$ \\ \textit{Tx El rotation range} & $\tilde{\theta}_{Tx}$ & [-13$^{\circ}$,13$^{\circ}$] \\ \textit{Tx El rotation resolution} & $\Delta \tilde{\theta}_{Tx}$ & 13$^{\circ}$ \\ \textit{Rx El rotation range} & $\tilde{\theta}_{Rx}$ & [-13$^{\circ}$,13$^{\circ}$] \\ \textit{Rx El rotation resolution} & $\Delta \tilde{\theta}_{Rx}$ & 13$^{\circ}$ \\ \hline \end{tabular} \end{table} For this measurement campaign, a frequency-domain channel sounder was used (see in Fig. \ref{fig:setup}), similar to \cite{Abbasi2021THz}. It is based on a Vector Network Analyzer (VNA), PNAX N5247A from Keysight, which has a frequency range from 10 MHz to 67 GHz. Frequency extenders, WR-5.1 VNAX manufactured by Virginia Diodes, were used to increase the VNA's frequency range to the 140-220 GHz band, which encompasses the band of interest to us. The extenders were used with the "high sensitivity" waveguide option to improve the received Signal to Noise Ratio (SNR). The antennas (along with the extenders) are mounted on a rotating positioning system. A key aspect of this setup is the use of a RF-over-fiber (RFoF) link, which was originally introduced in \cite{abbasi2020double}. 
The RFoF allows us to measure over longer distances than the typical 5-10 m range of similar systems without the link. For further details of the system please see \cite{Abbasi2021THz}. Table \ref{table:parameters} shows the configuration parameters for the sounder. The IF bandwidth of the VNA was selected such that there is a compromise between the dynamic range and the measurement duration, such that the duration of a measurement sweep is lower than the mechanical movement of the horn, and therefore has only a minor impact on the total measurement time. Each sweep of the VNA contains 1001 frequency points over the 1 GHz bandwidth, therefore allowing a maximum excess delay of 1 $\mu s$ without suffering the effects of aliasing. In other words, the maximum measurable excess runlength for multipaths is 300 m, a reasonable distance considering the scenarios and the frequency band being sounded. Given that the measurements take a significant amount of time, they were conducted at night while ensuring the scenario remains static/quasi static. The measurement locations were selected to be typical of a "microcellular" scenario. The Tx for the current measurements is set at a height of 11.5 m above the ground while the Rx is placed 1.7 m high from the ground. These parameters have been selected following the 3GPP UMi Street Canyon model, (3GPP TR 38.901 version 14.0.0 Release 14 suggests $h_{Tx}=10m$ and $1.5m \leq h_{Rx} \leq 22.5m$). Additionally, to extract the double-directional characteristics of the channel, the frequency sweeps of the VNA were repeated with sets of different orientations of the antennas. The positioners were oriented to ensure that the azimuth angle zero at both ends (Tx and Rx) corresponded to the LoS direction, irrespective of whether an unblocked optical LoS connection between Tx and Rx actually exists or not. We anticipated that multiple elevation scans are required to properly analyze the scenario, due to the different heights of the link ends, therefore, three elevation cuts are scanned on both the Tx and Rx. The Tx azimuth will scan a $120^\circ$ sector from $-60^\circ$ to $60^\circ$ with $10^\circ$ of azimuthal resolution, meanwhile, the Rx will carry out a complete azimuth scan, from $0^\circ$ to $360^\circ$ in steps of $10^\circ$, similar to Tx. In elevation, Tx and Rx are aligned so that when both antennas are facing ($\tilde{\theta}_{Tx}=\tilde{\theta}_{Rx}=0^\circ$), they are in the same elevation cut. After that, both ends will make additional scans $13^\circ$ above and $13^\circ$ below the "alignment", giving a total of 9 elevation scans per Tx-Rx location (3 elevation scans at the Tx and 3 for the Rx). The measurements were performed on different days, due to the long measurement time per point. For each day a calibration of the VNA, as well an over-the-air calibration (OTA) with the Tx and Rx at a LoS location was performed. Additional details of the setup are described in \cite{abbasi2020double,abbasi2021double,abbasi2021ultra} \footnote{It is important to mention that $\tilde{\theta} = 0^\circ$ is not equivalent to $\theta = 90^\circ$ in elevation, i.e. it is not the horizontal. $\tilde{\theta} = 0^\circ$ is different on each point in an absolute elevation reference.}.\\ Finally, the frequency domain-sounder provides a high phase stability which allows to conduct Fourier analysis and High Resolution Parameter Extraction (HRPE). 
Although HRPE can provide more accurate results, the current paper only uses Fourier analysis; HRPE analysis will be discussed in future work. \subsection{Measurement locations} A very important step in the measurement campaign is the selection of suitable locations so that we can realistically measure samples of LoS and NLoS scenarios. For this purpose we selected an area inside the University Park Campus of the University of Southern California (USC) in Los Angeles California, USA, that is located in the center of the city and is characterized as an urban environment. Fig. \ref{fig:Micro_sce} shows the scenario and locations of the Tx and Rx locations. As can be seen, the measurement campaign is divided into 6 routes with LoS or NLoS points each corresponding to a unique Tx location. For all 6 Tx locations, the positioner was placed on the edge of the Downey Way Parking Structure (PSA) building on the third floor. \begin{figure}[ht] \centering \includegraphics[width=1\columnwidth]{Microcellular.png} \caption{Microcellular campaign measurement scenario.} \vspace*{0mm} \label{fig:Micro_sce} \end{figure} Route One contains 6 LoS points aligned on the walkway of the Andrus Gerontology Center (GER) on the McClintock side of the building, covering a distance range from 33.5 to 81.7 m (see Fig. \ref{fig:LOS TX1-RX1}). Ronald Tutor Hall (RTH) and the Hughes Aircraft Electrical Engineering Center (EEB) together with the GER building create a "street canyon" for Route One points. It is important to note that the LoS was not obstructed or partially obstructed by foliage or other environmental objects. The three NLoS points were placed under the portico of the GER building (see Fig. \ref{fig:NLOS TX1-RX7}). Apart from the roof of the building, the pillars provide additional obstructions to the LoS. The second route is at the opposite side of PSA on a parking lot surrounded by Ray Irani (RRI) and Michelson Hall (MCB). While photo of Fig. \ref{fig:Micro_sce} shows cars, no cars were present during the measurement. Rx points 10, 11 and 13 were set on a straight line aligned to the Tx and 12 was set 30 meters north of point 11. For Route Three, the Tx is moved 40 meters along PSA parallel to Downey Way. Here, MCB's side corner completely blocks the LoS components for points 14-16. The distances for this route are approximately in the range of 40 to 60 meters. \begin{figure*}[t!] \centering \begin{subfigure}{0.31\textwidth} \centering \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{Tx1Rx1LOS.png} \caption{Tx1-Rx1 LoS; $d=81.7 m$.} \vspace*{0mm} \label{fig:LOS TX1-RX1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{TX1RX7NLOS.png} \caption{Tx1-Rx7 NLoS ; $d=83.2 m$.} \vspace*{0mm} \label{fig:NLOS TX1-RX7} \end{subfigure} \caption{LoS and NLoS measurement points for Route One.} \label{fig:TX1} \end{figure*} Route Four places the Tx in the north west corner of PSA, the three Rx locations are placed in an alley between Technical Theatre Laboratory (TTL) and the Scene Dock Theatre (SCD) buildings at distances ranging from 35 to 65 meters approximately. Route Five places the Tx 15 meters south of the Tx location in Route Four and the four Rx locations were placed in the same alley between SCD and TTL as Route Four. The obstruction for this route is provided by the TTL building and foliage as shown in Fig. 
\ref{fig:Micro_sce} \footnote{Delay domain results for the subset of measurements on Route Four and Five will be presented in \cite{abbasi2022double}. This analysis is significantly different from the statistical analysis of the current work, which is based on a large set of measurements.}. \begin{figure*}[t!] \centering \begin{subfigure}{0.3\textwidth} \centering \hspace{0mm} \includegraphics[width=1\columnwidth]{TX4RX19LOS.png} \caption{Tx4-Rx19 LoS $d=64.6 m$.} \vspace*{0mm} \label{fig:LOS TX4-RX19} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \centering \hspace{0mm} \includegraphics[width=\columnwidth]{TX5RX23NLOS.png} \caption{Tx5-Rx23 NLoS $d=45.5 m$.} \vspace*{0mm} \label{fig:NLOS TX5-RX23} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \centering \hspace{0mm} \includegraphics[width=0.92\columnwidth]{Tx6RX24NLOS.png} \caption{Tx6-Rx24 NLoS $d=20.4 m$} \label{fig:NLOS TX6-RX24} \end{subfigure} \caption{LoS and NLoS sample points for Routes Four, Five and Six.} \label{fig:TX4_5} \end{figure*} Finally, for Route Six, the points are located on the McClintock side of PSA, approximately 10 meters behind the location of the Tx on Route One. The Rx locations were placed on the sidewalk next to McClintock Ave. Similar to the points in Route One, Olin Hall of Engineering (OHE), and RTH building create a "street canyon" environment for this route. The main obstruction of the LoS is provided by the foliage between the Tx and Rx locations. A sample point (Tx6-Rx24) is shown in Fig. \ref{fig:NLOS TX6-RX24}. Table \ref{tab:dist_Tx-Rx} shows a summary of the routes, locations and distances for all the measurement points of the campaign. \begin{table}[ht] \centering \caption{Description of Tx-Rx links and their respective direct distances.} \label{tab:dist_Tx-Rx} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Tx identifier} & \textbf{LoS Rx identifier} & $\mathbf{d_{LoS}}$ \textbf{(m)} & \textbf{NLoS Rx identifier} & $\mathbf{d_{NLoS}}$ \textbf{(m)} \\ \hline \hline \multicolumn{1}{|l|}{\textbf{$Tx_1$}} & 1-6 & 82.5, 64.5, 40.8, 72.3, 49.8, 32.1 & 7-9 & 83.2, 73.6, 46.4 \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_2$}} & 10-13 & 20.4, 33.9, 45.9, 54.3 & - & - \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_3$}} & - & - & 14-16 & 62.6, 53.4, 40.7 \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_4$}} & 17-19 & 36.3, 57.9, 65.7 & - & - \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_5$}} & - & - & 20-23 & 35, 58.5, 66.8, 45.5 \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_6$}} & - & - & 24-26 & 20.8, 30,20 \\ \hline \end{tabular} \end{table} \section{Parameters and processing} \subsection{Data processing} The VNA-based measurement setup explained in Section II produces a collection of frequency scans for each Tx-Rx geographical location. Each measurement can be described as a five-dimensional tensor $H_{meas}(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d)$ where $f$ denotes the frequency points over the 1 GHz bandwidth (145-146 GHz), $\phi_{Tx}$ and $\phi_{Rx}$ denote the azimuth orientation of the Tx and Rx, respectively, $\tilde{\theta}_{Tx}$ and $\tilde{\theta}_{Rx}$ denote elevation orientation of the Tx and Rx, respectively, and $d$ is the Tx-Rx distance. 
Each tensor, $H_{meas}$, has dimensions of $N \times N^{\tilde{\theta}}_{Tx} \times N^{\phi}_{Tx} \times N^{\tilde{\theta}}_{Rx} \times N^{\phi}_{Rx}$, where $N$ is the number of frequency points per sweep (1001), $N^{\phi}_{Tx}$ and $N^{\phi}_{Rx}$ are the numbers of azimuth directions at the Tx (13) and Rx (36), and $N^{\tilde{\theta}}_{Tx}$ and $N^{\tilde{\theta}}_{Rx}$ are the numbers of elevation directions at the Tx (3) and Rx (3), respectively. Before the processing and parameter analysis, we calibrate the measured transfer functions, eliminating the effects of the system and antennas. The OTA calibration $H_{OTA}(f)$ is used to obtain the calibrated directional channel transfer function by dividing the measured channel transfer function by the OTA calibration: $H(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d) =H_{meas}(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d)/H_{OTA}(f)$. The calibrated channel frequency response is used to compute different parameters such as the directional power delay profile (PDP) as \begin{equation} P_{calc}(\tau,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx},d)=|\mathcal{F}_{f}^{-1}\{H(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx},d)\}|^2, \end{equation} where $\mathcal{F}_{f}^{-1}$ is the inverse fast Fourier transform (IFFT) with respect to $f$. To minimize the effects of noise, thresholding and delay gating are applied similar to \cite{gomez-ponce2020,abbasi2021double}, expressed as \begin{equation} P(\tau)=[P_{calc}(\tau): (\tau\leq\tau_{gate}) \land (P_{calc}(\tau)\geq P_{\lambda})], \end{equation} and $P(\tau)=0$ if these conditions are not fulfilled. The value $\tau_{gate}$ is the delay gating threshold, set to avoid using long delay bins or points affected by the "wrap-around" effect of the IFFT. $P_{\lambda}$ is the noise threshold, selected to ignore the power of noise-only delay bins, which could particularly distort the delay spread and angular spread. For the current measurements, $\tau_{gate}$ is set to 933.33 ns (corresponding to 280 m excess runlength) and $P_{\lambda}$ is selected to be 6 dB above the noise floor (average noise power) of the PDP. From the collection of directional PDPs we select the strongest beam, i.e., the beam pair with the highest power (max-dir), as \begin{equation} P_{\rm max}(\tau)=P(\tau,\phi_{\hat{i}},\tilde{\theta}_{\hat{j}},\phi_{\hat{k}},\tilde{\theta}_{\hat{l}},d); (\hat{i},\hat{j},\hat{k},\hat{l}) = \arg\max_{i,j,k,l} \sum_\tau P(\tau,\phi_i,\tilde{\theta}_j,\phi_k,\tilde{\theta}_l,d). \end{equation} Finally, an "omni-directional" PDP is constructed by first summing over the different elevations for each delay bin, and then selecting the azimuth with the strongest contribution. The selection of the strongest azimuth direction per delay bin to reconstruct a PDP is similar to \cite{Hur_omni,abbasi2021ultra}. Overall, this process can be summed up as \begin{equation} P_{\rm omni}(\tau;d)=\max_{\phi_{Tx},\phi_{Rx}} \sum_i \sum_j P(\tau,\phi_{Tx},\tilde{\theta}_{Tx}^i,\phi_{Rx},\tilde{\theta}_{Rx}^j;d), \end{equation} where $i,j\in \{1,2,3\}$ index the elevations ($ \tilde{\theta}_{Tx}^i, \tilde{\theta}_{Rx}^j \in \{-13^\circ,0^\circ,13^\circ\} $) at the Tx and Rx, respectively. Summing the different elevation cuts is meaningful because the spacing of the cuts in the elevation domain was taken as $13^\circ$, which is identical to the full width at half maximum (FWHM) beamwidth.
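As a rough illustration of the processing chain just described, the following NumPy sketch implements the calibration, IFFT, noise thresholding, delay gating, max-dir selection and omni-directional PDP construction; the array layout, the crude noise-floor estimate from the latest delay bins, and all function names are our own simplifications rather than the exact code used for the campaign.
\begin{verbatim}
import numpy as np

def process_pdps(H_meas, H_ota, tau, tau_gate=933.33e-9, noise_margin_db=6.0):
    """Sketch of the PDP processing described above.
    H_meas : complex array, shape (n_el_tx, n_az_tx, n_el_rx, n_az_rx, N_f)
    H_ota  : complex array, shape (N_f,), over-the-air calibration sweep
    tau    : delay axis in seconds, shape (N_f,)
    """
    H = H_meas / H_ota                              # remove system/antenna response
    pdp = np.abs(np.fft.ifft(H, axis=-1)) ** 2      # directional PDPs P_calc(tau, ...)

    # Delay gating and noise thresholding; the noise floor is crudely estimated
    # here from the latest delay bins of all directional PDPs.
    noise_floor = pdp[..., -100:].mean()
    threshold = noise_floor * 10 ** (noise_margin_db / 10)
    pdp = np.where((tau <= tau_gate) & (pdp >= threshold), pdp, 0.0)

    # Max-dir PDP: beam pair with the highest total power.
    total_power = pdp.sum(axis=-1)
    best = np.unravel_index(np.argmax(total_power), total_power.shape)
    pdp_maxdir = pdp[best]

    # Omni PDP: sum over both elevations, then strongest azimuth pair per delay bin.
    pdp_el_sum = pdp.sum(axis=(0, 2))               # (n_az_tx, n_az_rx, N_f)
    pdp_omni = pdp_el_sum.max(axis=(0, 1))
    return pdp_maxdir, pdp_omni
\end{verbatim}
In this form, the max-dir and omni-directional PDPs feed directly into the condensed parameters defined in the next subsection.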
The effective elevation pattern of the sum is thus approximately constant in the range $-13^\circ \le \tilde{\theta}_{Tx} \le 13^\circ$ and has a FWHM of $39^\circ$; the same holds at the Rx. \subsection{Parameter computation} \label{sec:par} Similar to the analysis performed in \cite{Abbasi2021THz}, we use the directional and omni-directional PDPs described in the previous section to compute several condensed parameters in order to characterize the propagation channels. The computations are based on the noise-thresholded and delay-gated PDPs calculated as described above. \subsubsection{Path loss and shadowing} The first parameter to be computed is the path loss. By definition \cite{molisch2012wireless}, it is computed as the sum of the power in each delay bin of the PDP: \begin{equation} PL_i(d)=\sum_\tau P_i(\tau,d), \end{equation} where $i$ can denote the omni-directional (omni) or the strongest-beam (max-dir) case. To model its behavior as a function of distance, we use the classical single-slope "power law", also known as the $\alpha - \beta$ model, such that the path loss in dB is \begin{equation} PL_{\rm dB}(d)=\alpha+10\beta \log_{10}(d)+\epsilon, \end{equation} where $\alpha$ and $\beta$ are the estimated parameters, and $\epsilon$ represents the "shadowing", i.e., the random variation of the data with respect to the mean. It is assumed to follow a zero-mean normal distribution $\epsilon \sim N(0,\sigma)$, where $\sigma$ is the standard deviation of the distribution. To obtain the parameters of the model, approaches such as maximum likelihood estimation (MLE) or ordinary least squares (OLS) can be used \cite{molisch2012wireless,kartunen_PL}. Following common assumptions in the modeling of path loss, the procedure is performed separately for the ensembles of LoS and NLoS measurement points. An analysis carried out in \cite{karttunen2016path} describes the challenges of an uneven density of Tx-Rx distances (in linear and logarithmic scale). This non-uniformity can lead to an increase in the leverage of some points in the regression analysis compared to others. To compensate for this effect, \cite{karttunen2016path} implemented a weighted regression model for path loss modeling. Each weight ($w_i$) is computed according to the density of points along the distance in $\log_{10}$ scale, so $w_i$ is larger for points located in low-density areas and vice versa. While multiple weighting methods are described in that paper, we adopt the approach of "equal weights to N bins over $\log_{10}(d)$" ($w_i \propto \log_{10}(d)$), because this strategy corresponds to a least-squares fit of "dB vs $\log_{10}(d)$". \subsubsection{Delay spread} The rms delay spread (RMSDS) is calculated as the second central moment of the PDP \cite{molisch2012wireless}: \begin{equation} \sigma_\tau=\sqrt{\frac{\int_\tau P_i(\tau)\tau^2 d\tau}{\int_\tau P_i(\tau)d\tau} - \left(\frac{\int_\tau P_i(\tau)\tau d\tau}{\int_\tau P_i(\tau)d\tau}\right)^2}, \end{equation} where $i$ can be "omni" or "max-dir". Noise and delay thresholding are essential for reducing the impact of long-delayed artefacts. Since this parameter is defined for continuous waveforms, we approximate it by increasing the number of samples in the PDPs through oversampling. Additionally, we apply a Hann window to reduce the impact of the sidelobes on the parameter estimation. \subsubsection{Angular spread} \label{sect:AS} The measurement campaign creates a "virtual" MIMO scenario for each location pair, allowing angular analysis.
A way to quantify the dispersion of power over different angular directions is the angular spread. The starting point of its computation is the double-directional angular power spectrum ($DDAPS_{full}$), which describes the concentration of power over the different azimuth and elevation directions at the Tx and Rx. The DDAPS is computed as \begin{equation} DDAPS_{full}(\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d)=\sum_\tau P(\tau,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d). \end{equation} Similar to the delay spread analysis, noise and delay gating are important before the computation of $DDAPS_{full}$ to minimize noise accumulation in directions where no significant MPC is observed. Using $DDAPS_{full}$, we add the contributions of the different elevations at both ends to obtain a DDAPS similar to \cite{Abbasi2021THz}: \begin{equation} DDAPS(\phi_{Tx},\phi_{Rx};d)=\sum_{\tilde{\theta}_{Tx}}\sum_{\tilde{\theta}_{Rx}} DDAPS_{full}(\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d). \end{equation} We combine the different elevations we measured since the limited number of elevation cuts (which was imposed by limits on the measurement duration) is insufficient for a detailed elevation analysis. Moreover, since the direction of the primary propagation is well covered, it is expected that there is less information in the other elevation cuts. Finally, to compute the (azimuthal) angular power spectrum (APS) at the Tx, we integrate over $\phi_{Rx}$, and do the same for the APS at the Rx. Using the APS, we compute the angular spread by applying Fleury's definition \cite{fleury2000first}: \begin{equation} \sigma^\circ=\sqrt{\frac{\sum_\phi \left|e^{j\phi}-\mu_\phi \right|^2 APS_k(\phi)}{\sum_\phi APS_k(\phi)}}, \end{equation} where $k$ can be Tx or Rx, indicating the departure or arrival APS, and $\mu_\phi$ can be computed as \begin{equation} \mu_\phi=\frac{\sum_\phi e^{j\phi} APS_k(\phi)}{\sum_\phi APS_k(\phi)}. \end{equation} It is important to mention that the obtained values are an upper bound for the actual angular spreads of the channel due to the finite horn antenna beamwidth \cite{Abbasi2021THz}. \subsubsection {Power distribution over MPC} In channel analysis, it is important to examine the power distribution of MPCs over the delay domain, specifically the concentration of power in the "strongest" MPC versus the rest of the MPCs in the channel. Thus, we define $\kappa_1$, a parameter computed as follows: \begin{equation} \kappa_1=\frac{P_i(\tilde{\tau}_1)}{\sum_{\tilde{\tau}=\tilde{\tau}_2}^{\tilde{\tau}_N} P_i(\tilde{\tau})}, \end{equation} where $i$ can be "omni" or "max-dir", and $\tilde{\tau}_k$ is the delay bin of the $k$-th local maximum of the PDP $P_i(\tilde{\tau})$, ordered by magnitude, so that $\tilde{\tau}_1$ signifies the location of the largest local maximum. As explained in \cite{abbasi2020channel}, $\kappa_1$ is different from the "Rice factor" because it is not possible to differentiate between closely spaced MPCs; therefore, a local maximum of the PDP is not strictly identical to an MPC. To perform a more accurate Rice factor analysis, HRPE can be used so that MPCs are properly identified; this will be presented in future work.
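A minimal sketch of how $\kappa_1$ can be evaluated from a thresholded PDP is given below; the peak detection relies on SciPy's \texttt{find\_peaks}, which is our choice for illustration and not necessarily the routine used in the original processing.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

def kappa1_db(pdp):
    """kappa_1: power of the strongest local maximum of the PDP divided by the
    summed power of all remaining local maxima, returned in dB."""
    peaks, _ = find_peaks(pdp)                 # indices of the local maxima
    if len(peaks) < 2:
        return np.inf                          # degenerate case: a single peak only
    powers = np.sort(pdp[peaks])[::-1]         # descending peak powers
    return 10 * np.log10(powers[0] / powers[1:].sum())
\end{verbatim}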
As with $\sigma_\tau$, we apply oversampling and a Hann window to avoid sidelobe effects and obtain a better estimate of the parameter.\\ In the next section, a regression analysis is added for the estimation of the parameters $\sigma_\tau$ and $\kappa_1$, similar to \cite{Abbasi2021THz}. With this regression, we observe their behavior with respect to the distance between Tx and Rx. The linear regression model operates on logarithmic quantities and reads $Z=\alpha+\beta \log_{10}(d)$. \section{Measurement results} In this section, the results of the measurement campaign are discussed. \subsection{Power delay profiles} To start the measurement analysis, we first present some sample PDPs, characterizing one LoS and two NLoS location pairs. The LoS measurement was taken at a distance of 82.5 m. Fig. \ref{fig:PDP-LOS} presents the omni-directional and max-dir PDPs. The LoS MPC is clearly observed in both the max-dir and omni-directional PDPs. Apart from the LoS MPC, multiple MPCs with runlengths $\leq 160$ m and powers only up to 30 dB lower than the LoS are visible. These "extra" components are diminished in the max-dir PDP as a result of the spatial filtering effect provided by the antennas. In this particular case, the omni-directional PDP shows several (very weak) MPCs arriving before the LoS MPC. As explained in Section II, the maximum measurable excess delay of the system is $1 \mu s$, which leads to 300 m of maximum runlength. Any MPC with delay $\geq 1 \mu s$ will suffer from aliasing and thus be wrapped around in the delay domain. This effect was corrected for all figures. Additionally, the PDPs shown are oversampled and windowed using a Hann window to diminish the effect of sidelobes and allow low-power MPCs to be observed. \\ \begin{figure} \centering \hspace{7mm} \includegraphics[width=0.5\columnwidth]{los_82.5m.eps} \caption{LoS case with $d=82.5 m$ (Tx1-Rx1).} \label{fig:PDP-LOS} \end{figure} For the NLoS case, we present two location pairs, with Tx-Rx distances of 45.5 and 83 m, respectively. A richer multipath scenario is expected because of the attenuation of the LoS component and the larger number of additional MPCs that arrive at the Rx. In the case of the 45.5 m measurement, we see a concentrated max-dir PDP and a small number of additional MPCs with powers up to 30 dB below the strongest component, similar to a LoS scenario. The scenario for this measurement is shown in Fig. \ref{fig:NLOS TX5-RX23}; as can be seen, the Tx is set on the PSA building and the Rx is located in the alley between TTL and SCD, creating a "street canyon" and concentrating (in the delay domain) the power reaching the Rx, since all components guided by the canyon have fairly similar delays, created by different numbers of reflections on the house walls, which are just a street width apart. We also note that while the first pronounced peak in the PDP is the strongest one, it is {\em not} a quasi-LoS (as often observed at low frequencies), as shown by the fact that its associated delay is {\em longer} than that of the (theoretical) LoS. \begin{figure*}[t!]
\centering \begin{subfigure}{0.45\textwidth} \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{nlos_45.5m.eps} \caption{NLoS case with $d=45.5 m$ (Tx5-Rx23).} \label{fig:PDP-NLOS1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{nlos_83m.eps} \caption{NLoS case with $d=83 m$ (Tx1-Rx7).} \vspace*{0mm} \label{fig:PDP-NLOS2} \end{subfigure} \caption{PDP for two sample NLoS measurement cases.} \label{fig:PDP-NLOS} \end{figure*} The second point is shown in Fig. \ref{fig:NLOS TX1-RX7}; in this case, a larger set of MPCs is observed, especially for the omni-directional PDP, compared to the previous NLoS case. These MPCs are a product of reflections coming from the RTH building. This effect can also be noticed in Fig. \ref{fig:APS-NLOS2}, where we see that the first significant MPC is not the strongest one. This scenario will be discussed in more detail in the next subsection. \subsection{Angular power spectrum} This section discusses the angular power spectrum (APS) of the selected sample LoS and NLoS location pairs. For the LoS case, we observe a large concentration of MPCs in the LoS direction; an additional concentration of MPCs can be observed at $\phi_{Tx}=37^\circ,\phi_{Rx}=35^\circ$. These MPCs correspond to reflections coming off the RTH building. Additionally, we can also observe MPCs at angles close to $\phi_{Tx}=0^\circ,\phi_{Rx}=180^\circ$.\\ \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{los_82.5m_aps.eps} \caption{LoS APS for $d=82.5 m$ (Tx1-Rx1).} \vspace*{0mm} \label{fig:APS-LOS} \end{figure} The NLoS points show a different behavior compared to LoS. In the case of the point Tx5-Rx23, we see a large concentration of MPCs in one main direction, similar to the sample LoS case. However, the center of this concentration is not in the LoS direction, but rather in the direction of the street, with $\phi_{Tx}=-15^\circ,\phi_{Rx}=-27^\circ$. This concentration of MPCs is a product of the "street canyon" effect created by the SCD and the TTL buildings (see Fig. \ref{fig:NLOS TX5-RX23}). An additional concentration of MPCs can be observed at $\phi_{Tx}=-15^\circ,\phi_{Rx}=47^\circ$; in this case, the Tx horn is still facing towards the canyon but the receiver collects a weaker reflection inside it. The last NLoS location pair (Tx1-Rx7), at a distance of $d=83$ m, shows several maxima in the APS (see Fig. \ref{fig:APS-NLOS2}), with the strongest one at $\phi_{Tx}=37^\circ,\phi_{Rx}=28^\circ$. This corresponds to Tx and Rx looking towards the RTH building, and is thus congruent with the scenario observed in Fig. \ref{fig:NLOS TX1-RX7}. As can be seen in the picture, the LoS is blocked by the pillars in front of the receiver, and the right-hand side of the receiver has an opening facing towards McClintock Ave and the OHE, RTH and EEB buildings. Moreover, additional weaker MPCs (approx. 8 dB weaker than the strongest MPCs) are observed at $\phi_{Tx}=-38^\circ,\phi_{Rx}=27^\circ$. These MPCs are also reflections from RTH; however, they reach the receiver through the left-hand-side gap observed between the inner wall of the GER building and the pillar, which means additional attenuation.
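The APS plots discussed here are obtained by collapsing the directional PDPs over delay and elevation; a compact NumPy sketch of this step, together with Fleury's angular-spread computation used for the ensemble results later in this section, is shown below (array layout and function names are ours, and the actual processing may differ in detail).
\begin{verbatim}
import numpy as np

def azimuth_aps_and_spread(pdp, az_tx_deg, az_rx_deg):
    """Collapse directional PDPs into an azimuth-only DDAPS, report the strongest
    Tx/Rx beam pair, and evaluate Fleury's azimuthal angular spread at both ends.
    pdp: (n_el_tx, n_az_tx, n_el_rx, n_az_rx, N_f) thresholded, delay-gated PDPs."""
    ddaps = pdp.sum(axis=-1).sum(axis=(0, 2))              # (n_az_tx, n_az_rx)
    i, k = np.unravel_index(np.argmax(ddaps), ddaps.shape)

    def fleury_spread(aps, az_deg):
        az = np.deg2rad(az_deg)
        mu = np.sum(np.exp(1j * az) * aps) / aps.sum()
        return np.sqrt(np.sum(np.abs(np.exp(1j * az) - mu) ** 2 * aps) / aps.sum())

    spread_tx = fleury_spread(ddaps.sum(axis=1), az_tx_deg)   # marginal APS at the Tx
    spread_rx = fleury_spread(ddaps.sum(axis=0), az_rx_deg)   # marginal APS at the Rx
    return (az_tx_deg[i], az_rx_deg[k]), spread_tx, spread_rx
\end{verbatim}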
\begin{figure*} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{nlos_45.5m_aps.eps} \caption{NLoS APS for $d=45.5 m$ (Tx5-Rx23).} \vspace*{0mm} \label{fig:APS-NLOS1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{nlos_83m_aps.eps} \caption{NLoS APS for $d=83 m$ (Tx1-Rx7).} \vspace*{0mm} \label{fig:APS-NLOS2} \end{subfigure} \caption{Sample NLoS APSes for two cases.} \label{fig:APS-NLOS} \end{figure*} The above discussions not only provide a description of relevant propagation effects, but also support the correctness of the measurements, as the extracted MPCs are in agreement with the geometry of the environment. Further verifications, not shown here for space reasons, were done for other location pairs as well. \subsection{Path loss and shadowing} In this section we start analyzing the ensemble of measurement locations. For the analysis, the points are separated into LoS and NLoS, and their characteristics are analyzed separately. For the LoS case, Fig. \ref{PLOSS-LOS} shows the path loss analysis using the "max-dir" and "omni-directional" PDPs, together with the Friis model. For all points it can be observed that the "max-dir" path loss is larger than or equal to the "omni" path loss ($PL_{max-dir}\geq PL_{omni}$). Both the max-dir and omni-directional PL models are lower than the Friis model. The PL exponent is $\beta=1.88$, lower than that of the free-space model. This is congruent with the scenario, because the LoS points in Routes One and Four are in "street canyon" LoS environments (9 of 13 locations); the "waveguiding" effect therefore produces a path loss lower than in free space. The parameters extracted by the "weighted" regression and by OLS are similar because of the low variation of the points around their linear models; additionally, the shadowing shows the same variance in both cases and only a small difference in the mean value. \begin{figure*}[t!] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{pl_los.eps} \caption{Path loss modeling with $log_{10}(d)$ weighting.} \vspace*{0mm} \label{PLOSS-LOS} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{sha_los.eps} \caption{Shadowing.} \vspace*{0mm} \label{SHA-LOS} \end{subfigure} \caption{Path loss and shadowing models for LoS points.}% \label{fig:los_PL_SHA}% \vspace{-0 mm} \vspace{-5mm} \end{figure*} Fig. \ref{PLOSS-NLOS} shows the regression modeling for the NLoS case. The max-dir points show larger values of PL compared to the omni-directional points, since in this case a significant percentage of the energy is contained in MPCs whose directions are different from the max-dir horn orientations. For a similar reason, the path loss exponents for the max-dir and omni-directional cases are different ($\beta=2.57$ and $\beta=1.76$, respectively). The omni-directional case has a smaller slope because more MPCs from different directions provide energy at large distances. The shadowing oscillates between -15 and 15 dB for the omni and max-dir cases. The observed shadowing standard deviations for both cases are 6.21 and 7.89 for the max-dir and omni-directional cases, respectively. A summary of the estimated regression parameters for the path loss and of the statistical parameters for the shadowing, with their respective 95\% confidence intervals, is shown in Tables \ref{tab:PL} and \ref{tab:sha}. \begin{figure*}[t!]
\centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{PL.eps} \caption{Linear fitting with $log_{10}(d)$ weighting.} \label{PLOSS-NLOS} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{sha.eps} \caption{Shadowing.} \label{SHA-NLOS} \end{subfigure} \caption{Path loss and shadowing models for NLoS points.}% \vspace{-0 mm} \label{fig:nlos_PL_SHA}% \vspace{-5mm} \end{figure*} In the NLoS case, we observed path loss values larger compared to Friis, except for the point (Tx5-Rx23). This point is located in a corridor between SCD and TTL buildings, (see Fig. \ref{fig:NLOS TX5-RX23}). In this case there exists a very strong reflection, and the associated directional pathloss equals Friis, while the omni-directional pathloss is lower due to the existence of additional MPCs; similar to the LoS situation; this is {\em not} unphysical. \begin{table*}[t!] \centering \caption{Path loss parameters with $95\%$ confidence interval.} \label{tab:PL} {% \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Linear model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\alpha_{min,95\%}$} & \multicolumn{1}{c|}{$\alpha_{max,95\%}$} & \multicolumn{1}{c|}{$\beta$} & \multicolumn{1}{c|}{$\beta_{min,95\%}$} & \multicolumn{1}{c|}{$\beta_{max,95\%}$} \\ \hline \hline $PL_{omni}^{LoS}$ & 72.88 & 69.91 & 75.86 & 1.93 & 1.74 & 2.11 \\ \hline $PL_{max-dir}^{LoS}$ & 77.33 & 74.1 & 80.57 & 1.88 & 1.68 & 2.08 \\ \hline $PL_{omni}^{LoS} OLS$ & 75.02 & 70.47 & 79.58 & 1.8 & 1.53 & 2.08 \\ \hline $PL_{max-dir}^{LoS} OLS$ & 77.06 & 71.74 & 82.37 & 1.89 & 1.58 & 2.21 \\ \hline $PL_{omni}^{NLoS}$ & 91.28 & 62.71 & 119.85 & 1.76 & -0.05 & 3.56 \\ \hline $PL_{max-dir}^{NLoS}$ & 84.54 & 49.21 & 119.88 & 2.57 & 0.34 & 4.81 \\ \hline $PL_{omni}^{NLoS} OLS$ & 86.81 & 52.96 & 120.66 & 2.03 & -0.01 & 4.07 \\ \hline $PL_{max-dir}^{NLoS} OLS$ & 82.91 & 39.96 & 125.87 & 2.68 & 0.09 & 5.27 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!] \centering \caption{Shadowing model parameters with $95\%$ confidence interval.} \label{tab:sha} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\epsilon_{omni}^{LoS}$ & 0.09 & -0.35 & 0.52 & 0.72 & 0.52 & 1.19 \\ \hline $\epsilon_{max-dir}^{LoS}$ & -0.01 & -0.5 & 0.48 & 0.8 & 0.58 & 1.33 \\ \hline $\epsilon_{omni}^{LoS} OLS$ & 0 & -0.42 & 0.42 & 0.69 & 0.49 & 1.14 \\ \hline $\epsilon_{max-dir}^{LoS} OLS$ & 0 & -0.49 & 0.49 & 0.8 & 0.58 & 1.33 \\ \hline $\epsilon_{omni}^{NLoS}$ & 0.04 & -3.73 & 3.81 & 6.24 & 4.48 & 10.3 \\ \hline $\epsilon_{max-dir}^{NLoS}$ & 0.18 & -4.59 & 4.94 & 7.89 & 5.66 & 13.02 \\ \hline $\epsilon_{omni}^{NLoS} OLS$ & 0 & -3.76 & 3.76 & 6.21 & 4.46 & 10.26 \\ \hline $\epsilon_{max-dir}^{NLoS} OLS$ & 0 & -4.77 & 4.77 & 7.89 & 5.65 & 13.02 \\ \hline \end{tabular}% } \end{table*} \subsection{RMSDS} The next parameter to evaluate is the RMSDS. 
In the LoS case, we expect lower values for the max-dir PDPs due to the spatial filtering. Similarly, an increase in the RMSDS with increasing distance between the Tx and Rx is expected, due to the larger number of MPCs and the larger differences in their runlengths. Fig. \ref{fig:RMSDS-LOS}a shows the cumulative distribution function of the RMSDS. It is plotted on a logarithmic scale, i.e., in dB, as is common in particular in 3GPP. This representation also allows us to easily see the excellent fit of a lognormal distribution to the measurement results. The variance of the max-dir points is approximately $62\%$ of the value for the omni-directional case. Fig. \ref{fig:RMSDS-LOS}b shows the RMSDS as a function of distance together with the linear regression, showing an increase with distance, as anticipated (and also in agreement with experimental results at lower frequencies). It is also observed that for all measurement points the max-dir values are smaller than the omni-directional ones. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{ds_los.eps} \caption{CDF of delay spread.} \vspace*{0mm} \label{fig:RMSDS-LOS-CDF}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{dsvd_los.eps} \caption{Linear modeling of $\sigma_\tau$ with weighting.} \vspace*{0mm} \label{fig:RMSDS-LOS-LF}% \end{subfigure} \caption{Modeling of delay spread for LoS cases.}% \vspace{-0 mm} \label{fig:RMSDS-LOS}% \vspace{-5mm} \end{figure*} Fig. \ref{fig:RMSDS-NLOS} shows the RMSDS analysis for the NLoS case. It is observed that the linear distance-dependence fits for the omni-directional and max-dir cases have different slopes ($\beta_{omni}^{NLoS}=11.91$, $\beta_{max-dir}^{NLoS}=7.14$). This behavior can be related to the "street-canyon" scenarios of Routes One, Five, and Six. The waveguiding effect allows a concentration of the power and MPCs in a small set of directions, so the max-dir PDPs have a low number of MPCs that are concentrated in a smaller range of delay bins. A special case of the "waveguiding" effect is the point Tx5-Rx23 ($d=45.5$ m), for which the $\sigma_\tau$ values for the omni-directional and max-dir cases are almost equal. A summary of the estimated regression parameters and of the statistical analysis is shown in Tables \ref{tab:linear-model-RMSDS} and \ref{tab:RMSDS_CDF}. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{ds.eps} \caption{CDF} \vspace*{0mm} \label{fig:RMSDS-NLOS-CDF}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{dsvd.eps} \caption{Linear fitting with $log_{10}(d)$ weighting.} \vspace*{0mm} \label{fig:RMSDS-NLOS-LF}% \end{subfigure} \caption{Modeling of delay spread for NLoS points.}% \vspace{-0 mm} \label{fig:RMSDS-NLOS}% \vspace{-5mm} \end{figure*} \begin{table*}[t!]
\centering \caption{Linear model parameters for $\sigma_{\tau}$ with $95\%$ confidence interval.} \label{tab:linear-model-RMSDS} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Linear model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\alpha_{min,95\%}$} & \multicolumn{1}{c|}{$\alpha_{max,95\%}$} & \multicolumn{1}{c|}{$\beta$} & \multicolumn{1}{c|}{$\beta_{min,95\%}$} & \multicolumn{1}{c|}{$\beta_{max,95\%}$} \\ \hline \hline $\sigma_{\tau_{omni}}^{LoS}$ & -108.22 & -122.6 & -93.83 & 17.82 & 8.76 & 26.88 \\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -94.11 & -100.92 & -87.29 & 4.99 & 0.7 & 9.28 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -96.16 & -114.09 & -78.22 & 11.91 & 0.57 & 23.26 \\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -96.71 & -113.8 & -79.63 & 7.14 & -3.67 & 17.95 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!] \centering \caption{Statistical model parameters for $\sigma_{\tau}$ with $95\%$ confidence interval.} \label{tab:RMSDS_CDF} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\sigma_{\tau_{omni}}^{LoS}$ & -78.11 & -80.68 & -75.55 & 4.25 & 3.05 & 7.01 \\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -85.8 & -86.98 & -84.62 & 1.95 & 1.4 & 3.22 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -76.38 & -79.2 & -73.56 & 4.66 & 3.34 & 7.7 \\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -84.96 & -87.58 & -82.34 & 4.33 & 3.11 & 7.15 \\ \hline \end{tabular}% } \end{table*} \subsection{Angular spread} The next parameter to analyze is the angular spread. In this case, the analysis is separated between the Tx and Rx ends. As explained in Section II, the scan ranges for Tx and Rx are different, so our conjecture is that a larger angular spread is observed on the Rx side for both LoS and NLoS cases. Furthermore, the larger number of scattering objects at street level is expected to compound this effect. \\ \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{as_los.eps} \caption{LoS case.} \vspace*{0mm} \label{fig:AS_CDF_LOS}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{AS.eps} \caption{NLoS case.} \vspace*{0mm} \label{fig:AS_CDF_NLOS}% \end{subfigure} \caption{Modeling of $\sigma^\circ$ for all points.}% \vspace{-0 mm} \label{fig:AS-CDF}% \vspace{-5mm} \end{figure*} Fig. \ref{fig:AS-CDF} shows the CDFs for the LoS and NLoS cases. In both cases, the data confirm our hypothesis. For example, in the LoS case the Tx points show a smaller spread compared to the Rx ($\sigma^\circ_{LoS} Tx < \sigma^\circ_{LoS} Rx$). This result is related to the fact that MPCs are reflected in the vicinity of the Rx, and are ``seen'' by the Tx under angles similar to that of the LoS. On the other hand, the NLoS points show a similar spread at both ends (i.e. $\sigma^\circ_{NLoS} Tx \approx \sigma^\circ_{NLoS} Rx$).
A possible cause for this behavior is the waveguiding in the "street canyon" environments, which concentrates the MPCs in a narrower angular range. A summary of the estimated statistical parameters with their $95\%$ confidence intervals is shown in Table \ref{tab:stat-model-AS}. \begin{table*}[t!] \centering \caption{Statistical model parameters for $\sigma^\circ$ with $95\%$ confidence interval.} \label{tab:stat-model-AS} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\sigma^\circ_{LoS} Tx$ & -0.72 & -0.77 & -0.67 & 0.08 & 0.06 & 0.13 \\ \hline $\sigma^\circ_{LoS} Rx$ & -0.51 & -0.62 & -0.4 & 0.18 & 0.13 & 0.3 \\ \hline $\sigma^\circ_{NLoS} Tx$ & -0.49 & -0.6 & -0.38 & 0.18 & 0.13 & 0.3\\ \hline $\sigma^\circ_{NLoS} Rx$ & -0.33 & -0.45 & -0.21 & 0.19 & 0.14 & 0.32\\ \hline \end{tabular}% } \end{table*} \subsection{Power distribution of MPCs} The final parameter estimated is $\kappa_1$. Our hypothesis is that larger values of $\kappa_1$ are observed in the max-dir cases compared to the omni-directional ones. Fig. \ref{fig:k1-LOS} shows the estimated values for the LoS case. As can be observed in Fig. \ref{fig:k1_CDF}, the LoS points for the omni-directional case have a similar spread compared to the max-dir cases, but a significantly smaller mean. Fig. \ref{fig:k1_LS} shows the regression analysis of the power distribution. The observed range oscillates between 4 and 23 dB. As observed in the plot, $\kappa_1$ for the max-dir case grows as the distance increases, while the omni-directional case shows a decreasing trend. The filtering effect of the antenna decreases the number of MPCs received by the Rx. As the distance increases, additional MPCs (coming from reflections) suffer from further attenuation and only those in the LoS direction are boosted by the antenna gain. On the other hand, in the omni-directional case, the value of $\kappa_1$ decreases because, as the distance increases, more MPCs are collected from directions apart from the LoS\footnote{An unusual behavior is observed for point Tx1-Rx6, where $\kappa_1^{omni} > \kappa_1^{max-dir}$, though the difference is small. We conjecture that this is caused by imperfections in the calibration procedure and the generation of omni-directional PDPs from the directional PDPs.}. For a summary of the parameters, see Tables \ref{tab:linear-model-kappa} and \ref{tab:stat-model-kappa}. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1_los.eps} \caption{CDF.} \vspace*{0mm} \label{fig:k1_CDF}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1vd_los.eps} \caption{Linear fitting with $log_{10}(d)$ weighting.} \vspace*{0mm} \label{fig:k1_LS}% \end{subfigure} \caption{Modeling of $\kappa_1$ for LoS points.}% \vspace{-0 mm} \label{fig:k1-LOS}% \vspace{-5mm} \end{figure*} In the NLoS case, we observe values ranging from -10 to 22 dB. This high variability can be related to the multiple points in "street canyon" scenarios (Routes One, Five, and Six). The "street canyon" filters/concentrates the MPCs arriving at the Rx.
Furthermore, $\kappa_1$ is reduced when the distance increases, both for the omni-directional and the max-dir case. Similar to the RMSDS analysis, the points Tx5-Rx23 and Tx6-Rx24 show a different behavior ($\kappa_{1_{omni}}^{NLoS}>\kappa_{1_{max-dir}}^{NLoS}$). This is related to the fact that the angle of the strongest MPC falls between two azimuthal captures, which produces this unusual behavior. More details about the regression analysis and the statistical modeling are shown in Tables \ref{tab:linear-model-kappa} and \ref{tab:stat-model-kappa}. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1.eps} \caption{CDF.} \vspace*{0mm} \label{fig:k1_CDF_NLOS}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1vd.eps} \caption{Linear fitting with $log_{10}(d)$ weighting.} \vspace*{0mm} \label{fig:k1_LS_NLOS}% \end{subfigure} \caption{Modeling of $\kappa_1$ for NLoS points.}% \vspace{-0 mm} \label{fig:k1_NLOS}% \vspace{-5mm} \end{figure*} \begin{table*}[t!] \centering \caption{Linear model parameters for $\kappa_1$ with $95\%$ confidence interval.} \label{tab:linear-model-kappa} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Linear model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\alpha_{min,95\%}$} & \multicolumn{1}{c|}{$\alpha_{max,95\%}$} & \multicolumn{1}{c|}{$\beta$} & \multicolumn{1}{c|}{$\beta_{min,95\%}$} & \multicolumn{1}{c|}{$\beta_{max,95\%}$} \\ \hline \hline $\kappa_{1_{omni}}^{LoS}$ & 25.59 & 7.49 & 43.7 & -8.87 & -20.27 & 2.53\\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 1.32 & -16.25 & 18.89 & 8.13 & -2.94 & 19.19 \\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 38.54 & 8.12 & 68.96 & -23.29 & -42.54 & -4.05\\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 28.95 & 1.39 & 56.52 & -11.35 & -28.78 & 6.09 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!] \centering \caption{Statistical model parameters for $\kappa_1$ with $95\%$ confidence interval.} \label{tab:stat-model-kappa} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\kappa_{1_{omni}}^{LoS}$ & 11.01 & 8.03 & 13.98 & 4.92 & 3.53 & 8.12 \\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 14.72 & 11.87 & 17.57 & 4.72 & 3.38 & 7.78 \\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 0 & -5.14 & 5.13 & 8.5 & 6.1 & 14.03 \\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 10.57 & 6.15 & 14.98 & 7.3 & 5.23 & 12.05 \\ \hline \end{tabular}% } \end{table*} \subsection{Summary of results} In this section, a summary of the estimated parameters for system design or channel simulation is given in Tables \ref{tab:linear-model-summary} and \ref{tab:stat-model-summary}. Table \ref{tab:linear-model-summary} shows the regression analysis (i.e., linear modeling) of the distance dependence of the parameters for both LoS and NLoS cases. Table \ref{tab:stat-model-summary} shows the estimated parameters of the statistical fits/models obtained in this analysis for both LoS and NLoS cases.
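As a simple illustration of how the tabulated models might be used in a link- or system-level simulation, the sketch below draws one LoS omni-directional realization of path loss and RMS delay spread; it uses the distance-independent lognormal delay-spread fit, ignores any cross-correlation between the parameters, and is only meaningful inside the measured distance range.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# LoS, omni-directional parameters taken from the summary tables:
# alpha/beta for the path loss, lognormal parameters for the RMS delay spread.
PL_ALPHA, PL_BETA, SHADOW_STD = 72.88, 1.93, 0.72     # dB, -, dB
DS_MU_DB, DS_SIGMA_DB = -78.11, 4.25                  # 10*log10(seconds), dB

def draw_los_omni_channel(d_m, rng=rng):
    """Draw one (path loss, RMS delay spread) realization at distance d_m in meters
    (only valid inside the measured range of roughly 20-85 m)."""
    shadowing = rng.normal(0.0, SHADOW_STD)
    pl_db = PL_ALPHA + 10 * PL_BETA * np.log10(d_m) + shadowing
    ds_db = rng.normal(DS_MU_DB, DS_SIGMA_DB)         # lognormal delay spread in dB
    return pl_db, 10 ** (ds_db / 10)                  # (dB, seconds)

pl, ds = draw_los_omni_channel(50.0)
print(pl, ds * 1e9)   # path loss in dB, delay spread in ns
\end{verbatim}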
Please note that the presented statistical results are valid for the range of distances we measured over ($\approx 20$ m to $\approx 85$ m). It is important to note that the parameters obtained in the analysis are directly related to the number of points and to the selection of the measurement locations. In other words, this analysis is impacted by the fact that the measurement locations were chosen such that reasonable Rx power could be anticipated. An analysis of outage probability should consider a ``blind'' selection of points, e.g., on a regular grid, that would allow an assessment of the percentage of points that cannot sustain communications at a given sensitivity level. Also other parameters, which might be correlated with the received power, might conceivably be influenced by the selection of the points. The results in this paper should thus be interpreted as ``conditioned on the existence of reasonable Rx power''. Furthermore, while more than 100,000 transfer functions were measured in the current campaign, the number of measured {\em location pairs} is still somewhat limited. Hence, this model is based on a relatively small number of points and is intended as an initial channel model that provides a realistic basis for analysis and system design. A larger number of measurement locations would obviously increase the validity of the analysis. However, the time required to perform the current campaign was quite significant (several months), and it is among the largest double-directional campaigns ever performed in the THz regime (for any type of environment). Future measurements will be added to improve the model further. \begin{table*}[t!] \centering \caption{Linear model parameters summary.} \label{tab:linear-model-summary} {% \begin{tabular}{|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Parameter}} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\beta$} \\ \hline \hline $PL_{omni}^{LoS}$ & 72.88 & 1.93 \\ \hline $PL_{max-dir}^{LoS}$ & 77.33 & 1.88 \\ \hline $PL_{omni}^{LoS} OLS$ & 75.02 & 1.8 \\ \hline $PL_{max-dir}^{LoS} OLS$ & 77.06 & 1.89 \\ \hline $\sigma_{\tau_{omni}}^{LoS}$ & -108.22 & 17.82 \\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -94.11 & 4.99 \\ \hline $\kappa_{1_{omni}}^{LoS}$ & 25.59 & -8.87 \\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 1.32 & 8.13 \\ \hline $PL_{omni}^{NLoS}$ & 91.28 & 1.76 \\ \hline $PL_{max-dir}^{NLoS}$ & 84.54 & 2.57 \\ \hline $PL_{omni}^{NLoS} OLS$ & 86.81 & 2.03 \\ \hline $PL_{max-dir}^{NLoS} OLS$ & 82.91 & 2.68 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -96.16 & 11.91\\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -96.71 & 7.14\\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 38.54 & -23.29\\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 28.95 & -11.35 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!]
\centering \caption{Statistical model parameters summary.} \label{tab:stat-model-summary} {% \begin{tabular}{|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Parameter}} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$}\\ \hline \hline $\epsilon_{omni}^{LoS}$ & 0.09 & 0.72 \\ \hline $\epsilon_{max-dir}^{LoS}$ & -0.01 & 0.8 \\ \hline $\epsilon_{omni}^{LoS} OLS$ & 0 & 0.69\\ \hline $\epsilon_{max-dir}^{LoS} OLS$ & 0 & 0.8\\ \hline $\sigma^{\circ}_{LoS} Tx$ & -0.72 & 0.08\\ \hline $\sigma^{\circ}_{LoS} Rx$ & -0.51 & 0.18\\ \hline $\sigma_{\tau_{omni}}^{LoS}$ & -78.11 & 4.25\\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -85.8 & 1.95 \\ \hline $\kappa_{1_{omni}}^{LoS}$ & 11.01 & 4.92 \\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 14.72 & 4.72 \\ \hline $\epsilon_{omni}^{NLoS}$ & 0.04 & 6.24 \\ \hline $\epsilon_{max-dir}^{NLoS}$ & 0.18 & 6.21\\ \hline $\epsilon_{omni}^{NLoS} OLS$ & 0 & 6.21 \\ \hline $\epsilon_{max-dir}^{NLoS} OLS$ & 0 & 7.89 \\ \hline $\sigma^{\circ}_{NLoS} Tx$ & -0.49 & 0.18 \\ \hline $\sigma^{\circ}_{NLoS} Rx$ & -0.33 & 0.19 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -76.38 & 4.66 \\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -84.96 & 4.33 \\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 0 & 8.5 \\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 10.57 & 7.37\\ \hline \end{tabular}% } \end{table*} \section{Conclusions} In this paper, we presented the results of the first extensive wideband, double-directional THz outdoor channel measurements for microcell scenarios with Tx heights of more than 10 m above the ground. We provide an overview of the measurement methodology and environments, as well as the signal processing to extract parameters characterizing the channels. Most importantly, we provided a parameterized statistical description of our measurement results that can be used to assess THz systems. The key parameters discussed in the current paper include path loss, shadowing, angular spread, delay spread and MPC power distribution. These results are an important step towards drawing some important first conclusions about the implications on system design and deployment in the THz regime. \section*{Acknowledgment} Helpful discussions with Sundeep Rangan, Mark Rodwell and Zihang Cheng are gratefully acknowledged.
\section*{{\Large\selectfont{Appendix}}} \input{ethics} \section{Dataset and Annotations} \subsection{Data Collection and Preprocessing} \label{app:preprocessing} The IT and CL cases come from the Supreme Court of India, Bombay and Kolkata High Courts. For CL cases, we use the cases from the tribunals of NCLAT (National Company Law Appellate Tribunal)\footnote{\url{https://nclat.nic.in/}}, CCI (Competition Commission of India)\footnote{\url{https://www.cci.gov.in/}}, COMPAT (Competition Appellate Tribunal)\footnote{\url{http://compatarchives.nclat.nic.in}}. Since the IT laws are 50 years old and relatively dynamic, we stick to certain sections of the IT domain only, whereas we use all the sections for the CL domain. We restrict ourselves to the IT cases that are based on Section 147, Section 92C and Section 14A only, to limit the subjectivity in the cases. We randomly select 50 cases from each of the IT and CL domains to be annotated. We used regular expressions in Python to remove the auxiliary information in the documents (for example, dates, appellant and respondent names, judge names, etc.) and filter out the main judgment of the document. We use the NLTK\footnote{\url{http://www.nltk.org/}} sentence tokenizer to split the document into sentences. The annotators were asked to annotate these sentences with the rhetorical roles. \subsection{Annotators Details} With the help of law professors, we designed a course project centered around RR annotations for the student annotators. The students \textbf{voluntarily} participated in the annotations as a part of the course project. Moreover, the annotators were curious about learning about AI technologies and further contributing towards their progress. There was no compulsion to take part in the annotation activity. The 6 annotators come from an Indian law university. Three of them specialize in the Income Tax domain and the other three specialize in the Competition Law domain. \subsection{Rhetorical Roles} \label{app:roles} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{images/newstats.pdf} \caption{Distribution of RR labels in IT and CL documents.} \label{fig:CLstats} \end{figure} We provide the definition of each of the Rhetorical Roles in the main paper. Examples for each of the RRs are given in Table \ref{app:tab:rr-examples}. Figure \ref{fig:CLstats} provides the number of sentences for each label in the IT and CL datasets. Note that the label distributions of the two domains are similar, with the exception of the DIS label. \subsection{Secondary and Tertiary Annotation Labels} \label{app:secondary-tertiary} Legal experts pointed out that a single sentence can sometimes represent multiple rhetorical roles (although this is not common). Each expert could also assign secondary and tertiary rhetorical roles to a single sentence to handle such scenarios and motivate future research. On average, annotators assigned a secondary role in 5-7\% of the cases and a tertiary role in 0.5-1\% of the cases. \subsection{Inter-annotator Agreement} \label{app:agreement} Fleiss' Kappa over all (fine-grained) labels is 0.59 for IT and 0.87 for CL, indicating substantial agreement. We provide the inter-annotator agreement (averaged pairwise macro F1 between annotators) over the 13 fine-grained labels in Table \ref{tab:interanno_pairwise_13labels}. Also, we provide the pairwise confusion matrices of annotators $(A_{1}, A_{2})$ and $(A_{2},A_{3})$ for both the IT and CL domains in Figure \ref{fig:confusion-mat-remaining}.
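For completeness, the sketch below shows one way the averaged pairwise macro F1 and Fleiss' Kappa reported here can be computed; it assumes the scikit-learn and statsmodels packages and uses toy label arrays instead of the actual annotations.
\begin{verbatim}
import numpy as np
from sklearn.metrics import f1_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy example: rhetorical-role labels (integer codes) from three annotators
# for the same sentences; real annotations are loaded from the dataset files.
a1 = np.array([0, 1, 1, 2, 3, 3, 4])
a2 = np.array([0, 1, 2, 2, 3, 3, 4])
a3 = np.array([0, 1, 1, 2, 3, 4, 4])

# Pairwise agreement: one annotator as "true" labels, the other as "predicted".
pairwise_f1 = np.mean([
    f1_score(a1, a2, average="macro"),
    f1_score(a2, a3, average="macro"),
    f1_score(a1, a3, average="macro"),
])

# Fleiss' kappa over all three annotators (rows = sentences, columns = raters).
table, _ = aggregate_raters(np.stack([a1, a2, a3], axis=1))
kappa = fleiss_kappa(table)

print(pairwise_f1, kappa)
\end{verbatim}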
\begin{table}[h] \centering \begin{tabular}{|c|c|c|} \hline \textbf{Label} & \textbf{IT} & \textbf{CL} \\ \hline \textbf{ARG-P} & 0.74 & 0.90 \\ \hline \textbf{ARG-R} & 0.73 & 0.97 \\ \hline \textbf{FAC} & 0.77 & 0.88 \\ \hline \textbf{ISS} & 0.75 & 0.75 \\ \hline \textbf{PRE-RU} & 0.67 & 0.86 \\ \hline \textbf{PRE-NR} & 0.58 & 0.80 \\ \hline \textbf{PRE-O} & 0.43 & \_ \\ \hline \textbf{STA} & 0.78 & 0.89 \\ \hline \textbf{RLC} & 0.58 & 0.74 \\ \hline \textbf{RPC} & 0.75 & 0.74 \\ \hline \textbf{ROD} & 0.64 & 0.93 \\ \hline \textbf{DIS} & \_ & 0.98 \\ \hline \textbf{NON} & 0.45 & 0.52 \\ \hline \textit{\textbf{F1}} & \textit{0.73} & \textit{0.88} \\ \hline \end{tabular} \caption{Label-wise inter-annotator agreement for all 13 fine-grained labels.} \label{tab:interanno_pairwise_13labels} \end{table} \begin{figure}[] \centering \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{images/IT_cm_1_2.pdf} \caption{Between annotators $A_{1}$ and $A_{2}$ for IT domain} \label{fig:cm1} \end{subfigure} \hfill \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{images/IT_cm_2_3.pdf} \caption{Between annotators $A_{2}$ and $A_{3}$ for IT domain} \label{fig:cm2} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{images/CL_cm_1_2.pdf} \caption{Between annotators $A_{1}$ and $A_{2}$ for CL domain} \label{fig:cm3} \end{subfigure} \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{images/CL_cm_2_3.pdf} \caption{Between annotators $A_{2}$ and $A_{3}$ for CL domain} \label{fig:cm4} \end{subfigure} \hfill \caption{Confusion matrices between annotators for the IT and CL domains.} \label{fig:confusion-mat-remaining} \end{figure} \subsection{Annotation Analysis} \label{app:annotation-analysis} Annotation of judgments, in order to identify and distinguish between the rhetorical roles played by their various parts, is in itself a challenging task even for legal experts. We provide some qualitative examples of sentences and their corresponding rhetorical roles in Table \ref{app:tab:rr-examples}. There are several factors involved in this exercise, which requires the annotator to retrace the judicial decision making and recreate the impact left by the inputs available to the judge, such as certain specific facts of the case, a particular piece of argument advanced by the lawyer representing one of the parties, or a judicial precedent from a higher court deemed applicable to the current case by the lawyer(s), by the judge, or by both. Moreover, the annotator only has access to the current document, which is a secondary account of what actually happened in the court. These limitations certainly make the task of the annotator more difficult, and leave them with no choice other than to make certain educated guesses when it comes to understanding the various nuances, both ostensible and probable, of certain rhetorical roles. It should, however, be noted that such variation need not occur for every rhetorical role, since not all the roles are equally susceptible to it – for instance, the facts of the case as laid down by the judge are more readily and objectively ascertainable by more than one annotator, whereas the boundaries between the issues framed by the judge and those deemed relevant as per the arguments advanced by the lawyers may blur more, especially if the judge happens to agree with one of the lawyers and adopts their argument as part of the judicial reasoning itself.
Similarly, it should also be noted that despite differing in their views of the nature and extent of rhetorical role played by a certain part of the judgment, the annotators may still agree with each other when it comes to identifying and segregating the final ruling made by the judge in that case –this phenomenon of having used two different routes to arrive at the same destination is not uncommon in the reenactment or ex-post-facto analysis of a judicial hearing and decision making process. A cumulative effect of the aforementioned factors can be observed in the results of the annotation. The analysis provided by the three annotators in case of competition law bear close resemblance with each other. On the other hand, in case of income tax law, the analysis provided by Users 1 and 3 bear greater resemblance with each other, compared to the resemblance between Users 1 and 2, or between Users 2 and 3. On a different note, it is also observed that the rhetorical role where the annotators have differed between themselves the most has been the point of Ruling made by the Lower Court, followed by the Ratio. This also ties in with the aforesaid argument that all rhetorical roles are not equally susceptible to the variation caused by the varying levels of success achieved by the different annotators in retracing the judicial thought pattern. \subsection{Annotation Case Studies} \label{app:annotation-case-study} Along with law professors, we analyzed some of the case documents. Please refer to data files for the actual judgment. In the case of CL cases, the best resemblance that has been achieved is in the case of SC\_Competition Commission of India vs Fast Way Transmission Pvt Ltd and Ors 24012018 SC.txt, one would find that the judgment has been written in a manner as to provide specific indicators before every rhetorical role. For instance, before the Ruling by Lower Court starts, reference has been made that this is the opinion given the Competition Commission of India (the lower court in the relevant domain). Similarly, before Arguments made by Petitioner/Respondent, reference has been made that this is the argument made by the lawyer representing the petitioner/respondent. This judgment also provides a nice, consistent flow following the arrangement of the rhetorical roles in order. The relatively smaller size of the judgment also indicates a lower level of complexity (although there need not always be a consistent correlation between the two). On the other hand, if one considers the least resemblance achieved in the competition law domain, in the case of SC\_Excel Crop Care Limited vs Competition Commission of India and Ors 08052017 SC(1).txt, one would find that such specific indicators are usually absent, thus leaving scope for individual discretion and interpretation, the judgment goes back and forth between certain rhetorical roles (Issue, Ruling by Lower Court, Ratio by Present Court, Argument by Petitioner/Respondent, Precedent Relied Upon), and the relatively bigger size also involves additional complexity and analysis, which make room for further nuances as described above. Similarly, if one considers the best resemblance that has been achieved in the income tax domain, in the case of SC\_2014\_17.txt, one would find the case has involved fewer rhetorical roles, cut down on facts (mainly dealing with procedural issues on an appellate stage), and even among the rhetorical roles, it has focused on statutes and provisions thereof and the ratio and ruling. 
This has significantly reduced the possibility of the aforementioned richer jurisprudence, greater range of precedents, and resulting greater degree of subjective interpretation being at play. On the other hand, for the least agreement in the income tax domain, the case SC\_2008\_1597.txt discusses precedents in greater detail (including facts thereof), goes back and forth between certain rhetorical roles instead of maintaining a consistent order, and is not very clear about whether the judge is at times merely reiterating the arguments made by the lawyers or is expressing their own view of such arguments. Collectively, these leave scope for a greater involvement of subjective interpretation of the aforesaid nuances. Yet, on the whole, the elements of subjectivity, personal discretionary interpretation, and arbitrariness have been minimized by the selection of the chosen domains, along with the methodology adopted for annotation, thus leading to the present success attained in identifying rhetorical roles and in using them for prior relevant case identification and judgment prediction. \section{Evaluation Metrics} \label{app:metrics} We use the Macro F1 metric to evaluate the performance of models on the task of rhetorical role labelling. Macro F1 is the mean of the label-wise F1 scores. Given the true positives ($TP$), false positives ($FP$), and false negatives ($FN$), the F1 score for a single label is calculated as: \begin{equation} F1 = \frac{TP}{TP + \left(\frac{FP+FN}{2}\right)} \end{equation} The pairwise inter-annotator agreement F1 between two annotators $A$ and $B$ is calculated by considering the annotations by annotator $A$ as the true labels and the annotations by annotator $B$ as the predicted labels. We also calculate Fleiss Kappa\footnote{https://en.wikipedia.org/wiki/Fleiss\%27\_kappa} to measure the inter-annotator agreement. \section{Model Training Details} \label{app:model} All of our baseline experiments and the training of the label shift prediction models (SBERT-Shift and BERT-SC) were conducted on Google Colab\footnote{https://colab.research.google.com/}, using the default single Tesla P100-PCIE-16GB GPU provided by Colab. The remaining models were trained on a single 11GB GeForce RTX 2080 Ti. We use the SBERT model provided in the sentence-transformers library\footnote{https://pypi.org/project/sentence-transformers/}. We use the Huggingface\footnote{https://huggingface.co/} implementations of the BERT-base and LEGAL-BERT models. Refer to Tables \ref{tab:it_full}, \ref{tab:itcl_full}, and \ref{tab:cl_full} for dataset-wise results and hyperparameters for each model. We also provide the training time and number of parameters of each model in Table \ref{tab:params_train_time}. For SBERT-Shift, we kept the SBERT model fixed and tuned the three linear layers on top. We used the binary cross-entropy loss with the Adam optimizer to tune the model on the LSP task. For BERT-SC, we fine-tuned the pre-trained BERT-base model on the LSP task. We used a maximum sequence length of 256 tokens, a learning rate of $2e-5$, and 5 training epochs. We used the same loss function and optimizer as for SBERT-Shift. \subsection{Reduced Label Set} \label{app:reducedLabels} Due to the complexity of the task of RR prediction, we consider only the seven main labels (FAC, ARG, PRE, ROD, RPC, RLC, and STA). We plan to explore predictive models over the fine-grained labels in future work. A sketch of the label reduction is shown below.
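To make the reduction concrete, the following is a minimal sketch (in Python) of how the 13 fine-grained annotations can be collapsed to the seven main labels. It assumes that ISS is folded into FAC (ISS being a fine-grained Fact label) and that NON and DIS sentences are dropped, as discussed in the next two paragraphs; the helper name \texttt{reduce\_labels} is illustrative and not part of our released code.
\begin{verbatim}
# Sketch: collapse 13 fine-grained RR labels to the 7 main labels.
# Assumption: ISS is folded into FAC; NON and DIS are dropped.
FINE_TO_MAIN = {
    "FAC": "FAC", "ISS": "FAC",
    "ARG-P": "ARG", "ARG-R": "ARG",
    "PRE-RU": "PRE", "PRE-NR": "PRE", "PRE-O": "PRE",
    "STA": "STA", "RLC": "RLC",
    "ROD": "ROD", "RPC": "RPC",
}

def reduce_labels(sentences, fine_labels):
    """Map fine-grained labels to main labels; drop NON/DIS."""
    return [(s, FINE_TO_MAIN[y])
            for s, y in zip(sentences, fine_labels)
            if y in FINE_TO_MAIN]
\end{verbatim}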
\noindent \textbf{NON Label:} We ignore sentences with the NON (None) label (about 4\% for IT and 0.5\% for CL). We believe this is necessary since the inter-annotator agreement for the NON label is low in both the IT and CL domains (F1 score as low as 0.45), implying that even the legal experts themselves do not agree on whether a particular sentence should be labelled NON. \noindent \textbf{Dissent Label:} Analysis of the annotated dataset reveals that the IT domain does not have any instance of the dissent (DIS) label. Only three documents (out of 50) in the CL domain have a few instances of the dissent label. Moreover, these instances appear as a contiguous chunk of sentences at the end of the document. Hence, we discarded the sentences with dissent labels. Furthermore, the law experts told us that the dissent phenomenon is rare; from a practical (application) point of view, these labels can be discarded. \subsection{Single Sentence Classification Baselines} We train single-sentence classification models for the task of rhetorical role labelling. We use the BERT-base-uncased and LEGAL-BERT models and fine-tune them on the sentence classification task. We also try a variant that uses the context sentences (the left and the right sentence) along with the current sentence to make the classification; we call this method BERT-neighbor. We use cross-entropy loss as the criterion and Adam as the optimizer. We use a batch size of 32 with a learning rate of 2e-5 and fine-tune for 5 epochs in all our experiments. Refer to Tables \ref{tab:it_full}, \ref{tab:cl_full}, and \ref{tab:itcl_full} for results and more information about the hyperparameters. \subsection{Sequence Classification Baselines} We experiment with sequence classification baselines such as a CRF with handcrafted features, a BiLSTM with sent2vec embeddings, and different versions of BiLSTM-CRF in which we varied the input embeddings. We experimented with sent2vec embeddings fine-tuned on Supreme Court cases of India (same as in \cite{bhattacharya2019identification}). We also tried sentence embeddings obtained from the BERT-base model. In another experiment, we fine-tuned a pre-trained BERT model on the Masked Language Modelling (MLM) task on the unlabelled documents of the IT and CL domains, and used this model to extract the sentence embeddings for the BiLSTM-CRF model. We used the same implementation of BiLSTM-CRF as \cite{bhattacharya2019identification}, with the Adam optimizer and the NLL loss. Refer to Tables \ref{tab:it_full}, \ref{tab:cl_full}, and \ref{tab:itcl_full} for experiment-wise hyperparameters. \subsection{LSP-BiLSTM-CRF and MTL-BiLSTM-CRF models} In our proposed LSP-BiLSTM-CRF approach, we experiment with two methods of generating shift embeddings, namely BERT-SC and SBERT-Shift. These embeddings are then used as input to train a BiLSTM-CRF with similar training schedules; a sketch of the input construction is given below. Refer to Tables \ref{tab:it_full}, \ref{tab:cl_full}, and \ref{tab:itcl_full} for other hyperparameters.
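For illustration, the sketch below shows one way the shift embeddings can be combined with BERT sentence embeddings to form the per-sentence input of the BiLSTM-CRF, following the representation $e_{i-1,i} \oplus b_{i} \oplus e_{i,i+1}$ described in \S\ref{sec:models}. Boundary sentences reuse a zero vector here, and the function and variable names are illustrative rather than the exact implementation.
\begin{verbatim}
import torch

def build_lsp_inputs(sent_embs, shift_reprs):
    """sent_embs: n BERT sentence embeddings b_i (each of size 768).
    shift_reprs: n-1 shift representations e_{i,i+1} from the LSP model.
    Returns n vectors e_{i-1,i} (+) b_i (+) e_{i,i+1} (size 2304 each);
    a zero vector is used at document boundaries (assumption)."""
    zero = torch.zeros_like(shift_reprs[0])
    rows = []
    for i, b in enumerate(sent_embs):
        left = shift_reprs[i - 1] if i > 0 else zero
        right = shift_reprs[i] if i < len(shift_reprs) else zero
        rows.append(torch.cat([left, b, right], dim=-1))
    return torch.stack(rows)  # input to the BiLSTM-CRF tagger
\end{verbatim}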
\begin{table*}[] \small \centering \begin{tabular}{|l|l|c|} \hline \textbf{Model} & \textbf{\begin{tabular}[c]{@{}l@{}}Hyperparameters(E=Epochs),\\ (LR=Learning rate),\\ (BS=Batch Size),\\ (Dim=Embedding dimension),\\ (E1=Embedding dimension Shift),\\ (E2=Embedding dimension RR),\\ (H=Hidden dimension),\end{tabular}} & \multicolumn{1}{l|}{\textbf{IT (Macro F1)}} \\ \hline BERT & LR=2e-5, BS=32, E=5 & 0.56 \\ \hline BERT-neighbor & LR=2e-5, BS=32, E=5 & 0.53 \\ \hline Legal-BERT & LR=2e-5, BS=32, E=5 & 0.55 \\ \hline CRF(handcrafted) & LR=0.01, BS=40, Dim=172, E=300 & 0.55 \\ \hline BiLSTM(sent2vec) & LR=0.01, BS=40, Dim=200, H=100, E=300 & 0.55 \\ \hline BiLSTM-CRF(handcrafted) & LR=0.01, BS=40, Dim=172, H=86, E=300 & 0.57 \\ \hline BiLSTM-CRF(sent2vec) & LR=0.01, BS=40, Dim=200, H=100, E=300 & 0.59 \\ \hline BiLSTM-CRF(BERT emb) & LR=0.01, BS=40, Dim=768, H=384, E=300 & 0.63 \\ \hline BiLSTM-CRF(MLM emb) & LR=0.01, BS=40, Dim=768, H=384, E=300 & 0.58 \\ \hline LSP(SBERT) & LR=0.005, BS=40, Dim=2304, H=1152, E=300 & 0.64 \\ \hline LSP(BERT-SC) & LR=0.005, BS=40, Dim=2304, H=1152, E=300 & 0.65 \\ \hline MTL(MLM emb) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=40, E1=2304, E2=768 , H=1152(Shift), \\ H=384(RR), E=300\end{tabular} & 0.67 \\ \hline MTL(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=40, E1=2304, E2=768, H=1152(Shift), \\ H=384(RR), E=300\end{tabular} & 0.70 \\ \hline MTL(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=40, E1=2304, E2=2304, H=1152(Shift), \\ H=384(RR), E=300\end{tabular} & 0.68 \\ \hline MTL(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=40, E1=768, E2=768, H=1152(Shift), \\ H=384(RR), E=300\end{tabular} & 0.64 \\ \hline \end{tabular} \caption{Hyperparameters and results on the IT dataset} \label{tab:it_full} \end{table*} For MTL models, we experimented with different encoders $E_{1}$ and $E_{2}$. We experimented with using Shift embeddings (or BERT embeddings of sentences obtained from pre-trained BERT model) from BERT-SC in both the components. However, the best performing model was the one in which we used shift embeddings for the shift component and BERT embeddings for the RR component. We used the NLL loss in both components of the MTL model weighted by the hyperparameter $\lambda$. We use the Adam Optimizer for training. We provide dataset-wise hyperparameters and results in Tables \ref{tab:it_full} , \ref{tab:cl_full} and \ref{tab:itcl_full}. 
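As a reference for the weighting described above, the following is a minimal sketch of the combined MTL objective; \texttt{loss\_shift} and \texttt{loss\_rr} stand for the per-batch NLL losses of the shift and RR components, and the commented training step only illustrates the learning rate from Table \ref{tab:it_full}. All names are illustrative.
\begin{verbatim}
import torch

def mtl_loss(loss_shift, loss_rr, lam=0.6):
    """Combined objective: L = lam * L_shift + (1 - lam) * L_RR."""
    return lam * loss_shift + (1.0 - lam) * loss_rr

# Illustrative training step (model and batch handling assumed):
# optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)
# loss = mtl_loss(shift_nll(batch), rr_nll(batch), lam=0.6)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
\end{verbatim}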
\begin{table*}[] \small \centering \begin{tabular}{|l|l|c|} \hline \textbf{Model} & \textbf{\begin{tabular}[c]{@{}l@{}}Hyperparameters(E=Epochs),\\ (LR=Learning rate),\\ (BS=Batch Size),\\ (Dim=Embedding dimension),\\ (E1=Embedding dimension Shift),\\ (E2=Embedding dimension RR),\\ (H=Hidden dimension),\end{tabular}} & \multicolumn{1}{l|}{\textbf{IT+CL (Macro F1)}} \\ \hline BiLSTM-CRF(sent2vec) & LR=0.01, BS=40, Dim=200, H=100, E=300 & 0.65 \\ \hline BiLSTM-CRF(BERT) & LR=0.01, BS=40, Dim=768, H=384, E=300 & 0.63 \\ \hline LSP-BiLSTM-CRF(BERT-SC) & LR=0.005, BS=20, Dim=2304, H=1152, E=300 & 0.67 \\ \hline MTL-BiLSTM-CRF(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=20, E1=2304, E2=768, \\ H=1152(Shift), H=384(RR), E=300\end{tabular} & 0.70 \\ \hline MTL-BiLSTM-CRF(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=20, E1=2304, E2=2304, \\ H=1152(Shift), H=384(RR), E=300\end{tabular} & 0.68 \\ \hline MTL-BiLSTM-CRF(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=20, E1=768, E2=768, \\ H=1152(Shift), H=384(RR), E=300\end{tabular} & 0.65 \\ \hline \end{tabular} \caption{Hyperparameters and results on the combined (IT+CL) dataset} \label{tab:itcl_full} \end{table*} \begin{table*}[] \small \centering \begin{tabular}{|l|l|c|} \hline \textbf{Model} & \textbf{\begin{tabular}[c]{@{}l@{}}Hyperparameters(E=Epochs),\\ (LR=Learning rate),\\ (BS=Batch Size),\\ (Dim=Embedding dimension),\\ (E1=Embedding dimension Shift),\\ (E2=Embedding dimension RR),\\ (H=Hidden dimension),\end{tabular}} & \multicolumn{1}{l|}{\textbf{CL (Macro F1)}} \\ \hline BERT & LR=2e-5, BS=32, E=5 & 0.52 \\ \hline BERT-neighbor & LR=2e-5, BS=32, E=5 & 0.51 \\ \hline Legal-BERT & LR=2e-5, BS=32, E=5 & 0.53 \\ \hline CRF(handcrafted) & LR=0.01, BS=40, Dim=172, E=300 & 0.52 \\ \hline BiLSTM(sent2vec) & LR=0.01, BS=40, Dim=200, H=100, E=300 & 0.54 \\ \hline BiLSTM-CRF(handcrafted) & LR=0.01, BS=40, Dim=172, H=86, E=300 & 0.56 \\ \hline BiLSTM-CRF(sent2vec) & LR=0.01, BS=40, Dim=200, H=100, E=300 & 0.61 \\ \hline BiLSTM-CRF(BERT emb) & LR=0.01, BS=40, Dim=768, H=384, E=300 & 0.63 \\ \hline BiLSTM-CRF(MLM emb) & LR=0.01, BS=40, Dim=768, H=384, E=300 & 0.60 \\ \hline LSP(SBERT) & LR=0.005, BS=40, Dim=2304, H=1152, E=300 & 0.63 \\ \hline LSP(BERT-SC) & LR=0.005, BS=40, Dim=2304, H=1152, E=300 & 0.68 \\ \hline MTL(MLM emb) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=20, E1=2304, E2=768 , H=1152(Shift), \\ H=384(RR), E=300\end{tabular} & 0.67 \\ \hline MTL(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=20, E1=2304, E2=768, H=1152(Shift),\\ H=384(RR), E=300\end{tabular} & 0.69 \\ \hline MTL(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=20, E1=2304, E2=2304, H=1152(Shift),\\ H=384(RR), E=300\end{tabular} & 0.67 \\ \hline MTL(BERT-SC) & \begin{tabular}[c]{@{}l@{}}LR=0.005, BS=20, E1=768, E2=768, H=1152(Shift),\\ H=384(RR), E=300\end{tabular} & 0.64 \\ \hline \end{tabular} \caption{Hyperparameters and results on the CL dataset} \label{tab:cl_full} \end{table*} \subsection{Hyperparameter $\lambda$} \label{app:sec-lambda} We tuned the hyperparameter $\lambda$ of the MTL loss function upon the validation set. We trained the MTL model with $\lambda \in [0.1, 0.9]$ with strides of 0.1 and show the performance of our method on IT and IT+CL datasets in Figure \ref{app:fig:lambdavariation}. $\lambda= 0.6$ performs the best for the IT domain and also performs competitively on the combined domains. 
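The validation sweep can be summarized by the sketch below, which assumes a hypothetical helper \texttt{train\_and\_eval} that trains the MTL model for a given $\lambda$ and returns the validation Macro F1.
\begin{verbatim}
def sweep_lambda(train_and_eval, grid=None):
    """Grid-search lambda on the validation set.
    train_and_eval: callable lam -> validation Macro F1 (hypothetical)."""
    grid = grid or [round(0.1 * k, 1) for k in range(1, 10)]
    scores = {lam: train_and_eval(lam) for lam in grid}
    best = max(scores, key=scores.get)  # lambda = 0.6 was best for IT
    return best, scores
\end{verbatim}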
\subsection{Model Distillation} For the model distillation experiments, we trained the teacher model on the IT dataset with the same hyperparameters as in Table \ref{tab:it_full}. For the next two iterations of learning a student model, we used 48 unlabelled cases in each iteration. The weighting hyperparameter $\alpha_{U}$ was set to 0.3. In each iteration, the student model was trained with a batch size of 16 and a learning rate of 0.005 for 300 epochs. \begin{table*}[] \centering \begin{tabular}{|l|ll|ll|} \hline Model & \multicolumn{2}{l|}{No. of Parameters} & \multicolumn{2}{l|}{Training Time (min)} \\ \hline & \multicolumn{1}{l|}{IT} & CL & \multicolumn{1}{l|}{IT} & CL \\ \hline BiLSTM(sent2vec) & \multicolumn{1}{l|}{240000} & 240000 & \multicolumn{1}{l|}{15} & 30 \\ \hline BiLSTM-CRF(sent2vec) & \multicolumn{1}{l|}{240000} & 240000 & \multicolumn{1}{l|}{15} & 30 \\ \hline BiLSTM-CRF(BERT emb) & \multicolumn{1}{l|}{3538944} & 3538944 & \multicolumn{1}{l|}{30} & 50 \\ \hline BiLSTM-CRF(MLM emb) & \multicolumn{1}{l|}{3538944} & 3538944 & \multicolumn{1}{l|}{30} & 50 \\ \hline LSP(SBERT) & \multicolumn{1}{l|}{31850496} & 31850496 & \multicolumn{1}{l|}{90} & 250 \\ \hline LSP(BERT-SC) & \multicolumn{1}{l|}{31850496} & 31850496 & \multicolumn{1}{l|}{90} & 250 \\ \hline MTL(MLM emb) & \multicolumn{1}{l|}{35411060} & 35411060 & \multicolumn{1}{l|}{300} & 1200 \\ \hline MTL(BERT-SC) & \multicolumn{1}{l|}{35411060} & 35411060 & \multicolumn{1}{l|}{300} & 1200 \\ \hline \end{tabular} \caption{Approximate number of parameters and computational budget of the models.} \label{tab:params_train_time} \end{table*} \begin{table*}[] \centering \begin{tabular}{|l|l|l|} \hline \textbf{Model} & \textbf{IT+CL docs} & \textbf{F1} \\ \hline BERT-ILDC & Predicted ROD \& RPC using BiLSTM-CRF(sent2vec) & 0.55 \\ \hline BERT-ILDC & Predicted ROD \& RPC using MTL(BERT-SC) & 0.56 \\ \hline \end{tabular} \caption{Judgment prediction results using predicted ROD \& RPC sentences.} \label{app:tab:jp} \end{table*} \begin{table*}[] \centering \begin{tabular}{|l|l|} \hline Label & Sentence \\ \hline Fact & \begin{tabular}[c]{@{}l@{}}It has also been alleged that the copies of the notices were also sent,\\ inter alia, to the principal officer of the said company and also to the ladies\\ as mentioned herein before, who has sold the immovable property\\ in question.\end{tabular} \\ \hline Fact & \begin{tabular}[c]{@{}l@{}}For executing this contract, the assessee entered into various contracts \\-Offshore Supply contract and Offshore Service Contracts.\end{tabular} \\ \hline Ruling By Lower Court & \begin{tabular}[c]{@{}l@{}}But the words inland container depot were introduced in Section 2(12)\\ of the Customs Act, 1962, which defines customs port.\end{tabular} \\ \hline Ruling By Lower Court & \begin{tabular}[c]{@{}l@{}}We may also mention here that the cost of superstructure was \\Rs.
2,22,000 as per the letter of the assessee dated 28-11-66 addressed\\ to the ITO during the course of assessment proceedings.\end{tabular} \\ \hline Argument & \begin{tabular}[c]{@{}l@{}}Such opportunity can only be had by the disclosure of the materials to\\ the court as also to the aggrieved party when a challenge is thrown to the\\ very existence of the conditions precedent for initiation of the action.\end{tabular} \\ \hline Argument & \begin{tabular}[c]{@{}l@{}}In this connection, it was urged on behalf of the assessee(s) that, for the\\ relevant assessment years in question, the Assessing Officer was required\\ to obtain prior approval of the Joint Commissioner of Income Tax before\\ issuance of notice under Section 148 of the Act.\end{tabular} \\ \hline Statute & \begin{tabular}[c]{@{}l@{}}In the meantime, applicant has to pay the additional amount of tax with\\ interest without which the application for settlement would not\\ be maintainable.\end{tabular} \\ \hline Statute & \begin{tabular}[c]{@{}l@{}}On the other hand, interest for defaults in payment of advance tax falls\\ under section 234B, apart from sections 234A and 234C, in section\\ F of Chapter XVII.\end{tabular} \\ \hline Ratio of the Decision & \begin{tabular}[c]{@{}l@{}}The State having received the money without right, and having retained\\ and used it, is bound to make the party good, just as an individual\\ would be under like circumstances.\end{tabular} \\ \hline Ratio of the Decision & \begin{tabular}[c]{@{}l@{}}Therefore, the Department is right in its contention that under the\\ above situation there exists a Service PE in India (MSAS).\end{tabular} \\ \hline Ruling by Present Court & \begin{tabular}[c]{@{}l@{}}For these reasons, we hold that the Tribunal was wrong in reducing the\\ penalty imposed on the assessee below the minimum prescribed\\ under Section 271(1)(iii) of the Income-tax Act, 1961.\end{tabular} \\ \hline Ruling by Present Court & \begin{tabular}[c]{@{}l@{}}Hence, in the cases arising before 1.4.2002, losses pertaining to exempted\\ income cannot be disallowed.\end{tabular} \\ \hline Precedent & \begin{tabular}[c]{@{}l@{}}Yet he none the less remains the owner of the thing, while all the\\ others own nothing more than rights over it.\end{tabular} \\ \hline Precedent & \begin{tabular}[c]{@{}l@{}}I understand the Division Bench decision in Commissioner of\\ Income-tax v. Anwar Ali, only in that context.\end{tabular} \\ \hline None & Leave granted. \\ \hline None & There is one more way of answering this point. \\ \hline Dissent & Therefore a constructive solution has to be found out. \\ \hline Dissent & \begin{tabular}[c]{@{}l@{}}In the light of the Supreme Court decision in the case of CCI vs SAIL\\ (supra) this issue has to be examined.\end{tabular} \\ \hline \end{tabular} \caption{Example sentences for each label.} \label{app:tab:rr-examples} \end{table*} \section{Conclusion} \vspace{-2mm} We introduced a new corpus of legal documents annotated with rhetorical roles. We proposed a new MTL model that uses label shift information for predicting the labels, and we showed the generalizability of the model via domain transfer experiments. Since RRs are tedious to annotate, we also showed the possibility of using model distillation techniques to improve the system. In the future, we plan to explore cross-domain transfer techniques to perform RR identification in legal documents in other Indian languages, and to continue growing the corpus.
We also plan to apply RR models to other legal tasks such as summarization and information extraction. \section*{Acknowledgements} \vspace{-2mm} We would like to thank the anonymous reviewers for their insightful comments. We would like to thank the student research assistants Tridib Mandal, Chirag Mittal, Shefali Deshmukh, Shailja Beria, and Syamantak Sinha from West Bengal National University of Juridical Sciences (WBNUJS) for annotating the documents. This work would not have been possible without their help. \section{Rhetorical Roles Corpus} \label{sec:corpus} \noindent \textbf{Corpus Acquisition:} We focus on Indian legal documents in English; however, the techniques we develop can be generalized to other legal systems. We consider legal judgments from the Supreme Court of India, High Courts, and Tribunal courts crawled from the website of IndianKanoon ({\url{https://indiankanoon.org/}}). We also scrape Competition Law documents from Indian Tribunal court cases (National Company Law Appellate Tribunal (NCLAT), Competition Appellate Tribunal (COMPAT), Competition Commission of India (CCI)). We focus on two domains of the Indian legal system: Competition Law (CL) (also called Anti-Trust Law in the US and Anti-Monopoly Law in China) and Income Tax (IT). CL deals with regulating the conduct of companies, particularly concerning competition. With the help of legal experts, we narrowed down the cases pertinent to CL and IT from the crawled corpus (also see Ethical Considerations in App. \ref{app:ethics}). \noindent \textbf{Choice of CL and IT domains}: India has a common law system where a decision may not be exactly as per the statutes, but the judiciary may come up with its own interpretation and overrule existing precedents. This introduces a degree of subjectivity. One of the biggest problems faced during the task of identifying the rhetorical roles in a judgment is the element of subjectivity involved in the judicial perception and interpretation of different rhetorical roles, ranging from the factual matrix (i.e., the perception of facts, relevant facts, and facts in issue may vary) to statutory applicability and interpretation, to determining the fitness of a particular judicial precedent to the case at hand. In order to overcome this particular obstacle, we focus on specific legal domains (CL and IT) that display a relatively greater degree of consistency and objectivity in terms of judicial reliance on statutory provisions to reach decisions \cite{taxmann2021}. \noindent \textbf{Corpus Statistics:} We randomly selected a set of 50 documents each for CL and IT from the set of acquired documents ($\approx$ 1.6k for IT and $\approx$ 0.8k for CL). These 100 documents were annotated with 13 fine-grained RR labels (vs. 8 by \citet{bhattacharya2019identification}) by a team of legal experts. Our corpus is double the size of the RR corpus of \citet{bhattacharya2019identification}. The CL documents have 13,328 sentences (avg. of 266 per document), and IT has a total of 7,856 sentences (avg. of 157 per document). Label-wise distributions for the IT and CL documents are provided in Appendix \ref{app:roles}. Annotating legal documents with RRs is a tedious as well as challenging task. Nevertheless, this is a growing corpus, and we plan to add more annotated documents. However, given the complexity of annotation, the RR labeling task also points towards model distillation (\S \ref{sec:experiments}) and zero-shot learning-based methods.
\noindent \textbf{Annotation Setup:} The annotation team (legal team) consisted of two law professors from prestigious law schools and six graduate-level law student researchers. Annotating just 100 documents took almost three months. Based on detailed discussions with the legal team, we initially arrived at eight main rhetorical roles (facts, arguments, statutes, dissent, precedent, ruling by lower court, ratio, and ruling by present court) plus one `none' label. During the annotation, the roles were further refined, and the documents were finally annotated with 13 fine-grained labels, since some of the main roles could be sub-divided into more fine-grained classes. The list of RRs is as follows (example sentences for each role are given in Table \ref{app:tab:rr-examples} in Appendix \ref{app:roles}): \begin{itemize}[noitemsep,nolistsep] \item \textbf{Fact (FAC):} These are the facts specific to the case based on which the arguments have been made and the judgment has been issued. In addition to Fact, we also have the fine-grained label \textbf{Issues (ISS)}: the issues which have been framed/accepted by the present court for adjudication. \item \textbf{Argument (ARG)}: The arguments in the case were divided into two fine-grained sub-labels. \textbf{Argument Petitioner (ARG-P):} arguments which have been put forward by the petitioner/appellant in the case before the present court and by the same party in the lower courts (where it may have been petitioner/respondent). \textbf{Argument Respondent (ARG-R):} arguments which have been put forward by the respondent in the case before the present court and by the same party in the lower courts (where it may have been petitioner/respondent). \item \textbf{Statute (STA):} The laws referred to in the case. \item \textbf{Dissent (DIS):} Any dissenting opinion expressed by a judge in the present judgment/decision. \item \textbf{Precedent (PRE):} The precedents in the documents were divided into three finer labels. \textbf{Precedent Relied Upon (PRE-RU):} the precedents which have been relied upon by the present court for adjudication; these may or may not have been raised by the advocates of the parties and amicus curiae. \textbf{Precedent Not Relied Upon (PRE-NR):} the precedents which have not been relied upon by the present court for adjudication; these may have been raised by the advocates of the parties and amicus curiae. \textbf{Precedent Overruled (PRE-O):} any precedents (past cases) on the same issue which have been overruled through the current judgment. \item \textbf{Ruling By Lower Court (RLC):} Decisions of the lower courts which dealt with the same case. \item \textbf{Ratio Of The Decision (ROD):} The principle established by the current judgment/decision which can be used in future cases. It does not include the obiter dicta, which is based on observations applicable to the specific case only. \item \textbf{Ruling By Present Court (RPC):} The decision of the court on the issues which have been framed/accepted by the present court for adjudication. \item \textbf{None (NON):} Any other matter in the judgment which does not fall into any of the above-mentioned categories. \end{itemize} The dataset was annotated by six legal experts (graduate law student researchers): three annotated the 50 CL documents, and the remaining three annotated the 50 IT documents. We used WebAnno \cite{de2016web} as the annotation framework. Each legal expert assigned one of the 13 rhetorical roles to each sentence of a document.
Note that we initially experimented with different levels of granularity (e.g., phrase level, paragraph level), and based on a pilot study, we decided to go for sentence-level annotations, as this maintains the balance (from the perspective of topical coherence) between too short (having no labels) and too long (having too many labels) texts. Legal experts pointed out that a single sentence can sometimes represent multiple rhetorical roles (although this is not common). Each expert could also assign secondary and tertiary rhetorical roles to a single sentence to handle such scenarios (see also App. \ref{app:secondary-tertiary}). As an example, suppose a sentence is a `Fact' but could also be an `Argument' according to the legal expert. In that case, the expert could assign the rhetorical roles `Primary Fact' and `Secondary Argument' to that sentence. We extended this to the tertiary level as well to handle rare cases. Our corpus is different from the existing corpus \cite{bhattacharya2019identification}. Firstly, we use 13 fine-grained RR labels, and our corpus is almost twice as large. Secondly, we focus on different legal sub-domains (IT and CL vs. Supreme Court judgments). Lastly, we perform primary, secondary, and tertiary levels of annotation since, according to legal experts, a sentence might sometimes have multiple RR labels. \begin{table}[t] \small \centering \begin{tabular}{|l|l|l|} \hline \textbf{Label} & \textbf{IT} & \textbf{CL} \\ \hline \textbf{AR} & 0.80 & 0.93 \\ \hline \textbf{FAC} & 0.80 & 0.89 \\ \hline \textbf{PR} & 0.70 & 0.86 \\ \hline \textbf{STA} & 0.78 & 0.89 \\ \hline \textbf{RLC} & 0.58 & 0.74 \\ \hline \textbf{RPC} & 0.78 & 0.79 \\ \hline \textbf{ROD} & 0.67 & 0.93 \\ \hline \textbf{DIS} & \_ & 0.99 \\ \hline \textbf{Macro F1} & 0.73 & 0.88 \\ \hline \end{tabular} \caption{Label-wise inter-annotator agreement (F1 scores). Dissent label instances are absent in IT.} \label{tab:interanno_pairwise_8labels} \end{table} \noindent \textbf{Adjudication and Data Compilation:} Annotating RRs is not a trivial task, and annotators can have disagreements. We followed a majority voting strategy over the primary labels to determine the gold labels. There were a few cases ($\approx$ 5\%) where all three legal experts assigned a different role to the same sentence. We asked the law professors to finalize the primary label in such cases. If the law professors decided to go with a label completely different from the three annotated labels, we went with their verdict. However, such cases were not frequent ($\approx$ 4\% of adjudicated cases). In this paper, for RR prediction, we concentrate on the primary labels and leave explorations of the secondary and tertiary labels for future work. \begin{figure}[t] \centering \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{images/IT_cm_new.pdf} \caption{IT} \label{fig:IT_cm} \end{subfigure} \hfill \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{images/CL_cm_new.pdf} \caption{CL} \label{fig:cm} \end{subfigure} \hfill \caption{Confusion matrices between annotators $A_{1}$ and $A_{3}$. Numbers represent \% agreement. Dissent label instances are absent in IT.} \label{fig:confusion-mat} \end{figure} \noindent \textbf{Inter-annotator Agreements:} The Fleiss kappa \cite{fleiss2013statistical} between the annotators is 0.65 for the IT domain and 0.87 for the CL domain, indicating substantial agreement between the annotators.
Additionally, as done in \citet{bhattacharya2019identification} and \citet{malik-etal-2021-ildc}, we calculate the pair-wise inter-annotator F1 scores. To determine the agreement between the three annotators $A_{1}, A_{2}, A_{3}$ (separately for the IT and CL domains), we calculate the pairwise F1 scores (App. \ref{app:metrics}) between annotators $(A_{1},A_{2}), (A_{2},A_{3})$, and $(A_{3},A_{1})$. We average these pairwise scores for each label and then average across labels. We report the label-wise F1 and Macro F1 in Table \ref{tab:interanno_pairwise_8labels}. The table shows that the agreements differ between domains (0.73 for IT vs. 0.88 for CL). This is mainly due to (as pointed out by the law professors) the presence of more precedents and a greater number of statutory provisions in IT law. These factors combine to produce more subjectivity (relative to CL) when it comes to interpreting and retracing judicial decisions. The confusion matrix between the annotators $(A_{1},A_{3})$ is shown in Figure \ref{fig:confusion-mat} (more details in App. \ref{app:agreement}). \noindent \textbf{Analysis:} Annotation of judgments to identify RRs is a challenging task, even for legal experts. Several factors contribute to this challenge. Annotators need to glean and combine information non-trivially (e.g., the facts and arguments presented, the implicit setting, and the context under which the events described in the case happened) to arrive at the label. Moreover, the annotator only has access to the current document, which is a secondary account of what actually happened in the court. These limitations certainly make the task of the annotator more difficult and leave them with no choice other than to make certain educated guesses when it comes to understanding the various nuances, both ostensible and probable, of certain RRs. It should, however, be noted that such variation need not occur for every RR, since not all the roles are equally susceptible to it. A cumulative effect of the aforementioned factors can be observed in the results of the annotation. The analyses provided by the three annotators in the case of CL bear close resemblance to each other. On the other hand, in the case of IT, the analyses provided by Users 1 and 3 bear a greater resemblance to each other than those of Users 1 and 2, or of Users 2 and 3. On a different note, the rhetorical role on which the annotators have differed the most is the Ruling by the Lower Court, followed by the Ratio. This also ties in with the argument that not all rhetorical roles are equally susceptible to the variation caused by the varying levels of success achieved by the different annotators in retracing the judicial thought pattern (details and case studies in App. \ref{app:annotation-analysis}). \section{Ethical Considerations} \label{app:ethics} The proposed corpus and methods do not have direct ethical consequences to the best of our knowledge. The corpus is created from publicly available data from a public resource: \url{www.indiankanoon.org}. The website allows free downloads, and no copyrights were violated. With the help of the law professors, we designed a course project centered around RR annotation for the student annotators. The students \textbf{voluntarily} participated in the annotation as a part of the course project. Moreover, the annotators were curious to learn about AI technologies and to contribute towards their progress.
There was no compulsion to take part in the annotation activity. The cases were selected randomly to avoid bias towards any entity, situation, or law. Any meta-information related to individuals, organizations, and judges was removed to avoid introducing bias. Regarding the application of the corpus to judgment prediction, we are not the first to attempt this task. For it, we took all the steps (name anonymization and removal of meta-information) outlined in the already published work of \citet{malik-etal-2021-ildc}. The focus of this paper is rhetorical role prediction, and the task of judgment prediction is only a use-case. Moreover, in this paper, we focus mainly on IT and CL cases, where facts and scenarios are more objective and there are fewer biases compared to other types of cases (e.g., criminal and civil cases). As also described by \citet{malik-etal-2021-ildc}, we do not believe that the task could be fully automated, but rather that it could augment the work of a judge or legal practitioner to expedite the legal process in highly populated countries. Legal-NLP is a relatively new area; we have taken all the steps to avoid any direct and foreseeable ethical implications; however, a lot more exploration is required by the research community to understand the implicit ethical implications. For this to happen, resources need to be created, and we are making initial steps and efforts towards it. \section{Experiments, Results and Analysis} \label{sec:experiments} Due to the complexity of the task of RR prediction, and to be comparable with the existing baseline systems, we consider the 7 main labels (FAC, ARG, PRE, ROD, RPC, RLC, and STA) in our experiments. We plan to explore prediction of all 13 fine-grained RR labels in the future. Based on recommendations by legal experts, we ignore sentences with the NON (None) label (about 4\% for IT and 0.5\% for CL) (more details in App. \ref{app:reducedLabels}). Further, the IT domain does not have any instance of the dissent (DIS) label, and CL has only three documents with very few DIS instances. Based on consultations with the law experts, we discarded DIS sentences (more details in App. \ref{app:reducedLabels}). We randomly split (at document level) IT/CL into 80\% train, 10\% validation, and 10\% test sets. In contrast to \citet{bhattacharya2019identification}, we did not perform cross-validation, to allow a better comparison across different models. We also experiment with a combined dataset of IT and CL (IT+CL); the splits are made by combining the individual train/val/test splits of IT and CL. We experimented with a number of baseline models (Tables \ref{tab:RRresultsITCL}, \ref{tab:rrresultscombineddomain}). In particular, we considered a BiLSTM with sent2vec embeddings \cite{bhattacharya2019identification}, and non-contextual (single-sentence) models like BERT \cite{devlin-etal-2019-bert}, LEGAL-BERT \cite{chalkidis-etal-2020-legal}, and BERT-neighbor (we take both the left and right neighboring sentences in addition to the sentence of interest). We also considered sentence-level sequence prediction (contextual) models: a CRF model using the handcrafted features provided by \citet{bhattacharya2019identification}, and different variants of BiLSTM-CRF, with handcrafted features, sent2vec embeddings, BERT embeddings, and MLM embeddings. We fine-tuned BERT with the Masked Language Modeling (MLM) objective on the train set to obtain MLM embeddings ([CLS] embedding) for each of the sentences (App.
\ref{app:model} has hyperparameters, training schedule, and compute settings). We use the Macro F1 metric for evaluation (App. \ref{app:metrics}). We tuned the hyperparameter $\lambda$ of the MTL loss function using the validation set. We trained the MTL model with $\lambda \in [0.1,0.9]$ with strides of $0.1$ (Figure \ref{app:fig:lambdavariation}). $\lambda=0.6$ performs the best for the IT domain and performs competitively on the combined domains. \begin{figure}[t] \centering \includegraphics[width=0.30\textwidth]{images/IT_combined_dev_new.pdf} \caption{Variation of F1 score with $\lambda$ on IT and IT+CL domain} \label{app:fig:lambdavariation} \end{figure} \begin{table}[t]\small \centering \begin{tabular}{|l|c|c|} \hline \textbf{Model} & \textbf{IT (F1)} & \textbf{CL (F1)} \\ \hline BERT & 0.56 & 0.52 \\ \hline BERT-neighbor & 0.53 & 0.51 \\ \hline LEGAL-BERT & 0.55 & 0.53 \\ \hline CRF (Handcrafted) & 0.55 & 0.52 \\ \hline BiLSTM (sent2vec) & 0.55 & 0.54 \\ \hline BiLSTM-CRF (handcraft) & 0.57 & 0.56 \\ \hline BiLSTM-CRF (sent2vec) & 0.59 & 0.61 \\ \hline BiLSTM-CRF (BERT emb) & 0.63 & 0.63 \\ \hline BiLSTM-CRF (MLM emb) & 0.58 & 0.60 \\ \hline \textit{LSP (SBERT)} & \textit{0.64} & \textit{0.63} \\ \hline \textit{LSP (BERT-SC)} $\bullet$ & \textit{0.65} & \textit{0.68} \\ \hline \textit{MTL (MLM emb)} & \textit{0.67 } & \textit{0.67} \\ \hline \textit{MTL (BERT-SC)} $\star$ $\diamond$ & \textit{\textbf{0.70}$\pm${0.02}} & \textit{\textbf{0.69}$\pm$0.01} \\ \hline \end{tabular} \caption{Results of baseline and proposed models on IT and CL. LSP and MTL refer to the LSP-BiLSTM-CRF and MTL-BiLSTM-CRF models respectively. $\bullet$ LSP result is significant with $p \le 0.05$ in comparison to baseline (BiLSTM-CRF(sent2vec)). Similarly, MTL (BERT-SC) has significant result in comparison to baseline ($\diamond$, $p \le 0.05$). MTL (BERT-SC) is significant w.r.t. LSP ($\star$, $p \le 0.05$).} \label{tab:RRresultsITCL} \vspace{-2mm} \end{table} \begin{table}[t]\small \centering \begin{tabular}{|l|c|} \hline \textbf{Model} & \textbf{IT+CL (F1)} \\ \hline BiLSTM-CRF (sent2vec) & 0.65 \\ \hline BiLSTM-CRF (BERT embs) & 0.63 \\ \hline LSP-BiLSTM-CRF (BERT-SC) & 0.67 \\ \hline MTL-BiLSTM-CRF (BERT-SC) & \textbf{0.70}$\pm$0.01 \\ \hline \end{tabular} \caption{Results of baseline and proposed models on combined dataset (IT+CL)} \label{tab:rrresultscombineddomain} \vspace{-4mm} \end{table} \begin{table}[t] \small \centering \begin{tabular}{|l|l|l|} \hline \textbf{Label} & \textbf{IT} & \textbf{CL} \\ \hline \textbf{AR} & 0.67$\pm$0.010 & 0.78$\pm$0.005 \\ \hline \textbf{FAC} & 0.78$\pm$0.020 & 0.75$\pm$0.010 \\ \hline \textbf{PR} & 0.69$\pm$0.005 & 0.62$\pm$0.005 \\ \hline \textbf{STA} & 0.79$\pm$0.020 & 0.82$\pm$0.020 \\ \hline \textbf{RLC} & 0.62$\pm$0.005 & 0.53$\pm$0.005 \\ \hline \textbf{RPC} & 0.70$\pm$0.010 & 0.71$\pm$0.010 \\ \hline \textbf{ROD} & 0.66$\pm$0.005 & 0.65$\pm$0.005 \\ \hline \textbf{Macro F1} & 0.70$\pm$0.020 & 0.69$\pm$0.010 \\ \hline \end{tabular} \caption{Label-wise average (across 6 runs) F1 scores of MTL-BiLSTM-CRF (BERT-SC) model.} \vspace{-5mm} \label{tab:predictedlabelwise} \end{table} \noindent \textbf{Results and Analysis:} Among the baseline models (Table \ref{tab:RRresultsITCL}), we note that LEGAL-BERT performs slightly better on the CL domain but slightly worse on the IT domain when compared to pre-trained BERT. 
This might be attributed to the fact that LEGAL-BERT (trained on EU legal documents, which include European competition law) is not trained on Indian IT law documents. Using BERT embeddings with BiLSTM-CRF provides better results. Both proposed approaches outperform the previous approaches by a substantial margin. The MTL approach (with $\lambda=0.6$) provides the best results on both datasets with an average (over six runs) F1 score of $0.70$ (standard deviation of $0.02$) on the IT domain, an average F1 of $0.69 (\pm 0.01)$ on the CL domain, and an average F1 score of $0.71 (\pm 0.01)$ for the combined domain. The MTL model shows variance across runs; hence we average the results. The other models were reasonably stable across runs. We use the LSP shift component with BERT-SC as the encoder $E_{1}$ and the pre-trained BERT model as the encoder $E_{2}$ in our MTL architecture. We did not use SBERT since it under-performs compared to BERT-SC. We provide the label-wise F1 scores for the MTL model in Table \ref{tab:predictedlabelwise}. Note the high performance on the FAC label and the low performance on the RLC label; this is similar to what we observe for the annotators (Table \ref{tab:interanno_pairwise_8labels}). Also, the MTL model performs better on the AR label in the CL domain than in the IT domain. The opposite trend can be observed for the RLC label. The contribution of the LSP task is evident from the superior performance. We conduct ablation studies of our MTL architecture from multiple aspects. Instead of using shift embeddings from BERT-SC as the encoder $E_{1}$, we use a BERT model fine-tuned on the MLM task on the IT and CL domains. However, we obtain a comparatively lower score (see App. \ref{app:model}). This observation again points towards the significance of LSP for the task of rhetorical role prediction (results with other encoders are in App. \ref{app:model}). The results lead to two interesting observations. Firstly, the MTL model performance on IT cases comes close to the average inter-annotator agreement; in the case of CL, there is a gap. Secondly, for the model, the performance on the IT domain is better than on the CL domain, but for the annotators the opposite trend was observed. We do not know the exact reason for this, but the legal experts pointed out that this is possible because the selected documents might be restricted to specific sections of the IT law and the model learned solely from these documents without any other external knowledge. However, the annotators, having knowledge of the entire IT law, might have looked at them from a broader perspective.
\begin{table}[t] \small \centering \begin{tabular}{|c|c|c|c|} \hline \textbf{\begin{tabular}[c]{@{}c@{}}Train\\ Dataset\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Test\\ Dataset\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}BiLSTM-CRF\\ (sent2vec)\end{tabular}} & \textbf{MTL} \\ \hline $G$\textsubscript{train} & $G$\textsubscript{test} & \begin{tabular}[c]{@{}c@{}}0.55\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{0.59}\\ \end{tabular} \\ \hline $G$\textsubscript{train} & $CL$\textsubscript{test} & \begin{tabular}[c]{@{}c@{}}0.48\\ (12.78\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{0.50}\\ (15.25\%)\end{tabular} \\ \hline $G$\textsubscript{train} & $IT$\textsubscript{test} & \begin{tabular}[c]{@{}c@{}}0.41\\ (25.45\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{0.46}\\ (22.03\%)\end{tabular} \\ \hline $G$\textsubscript{train} & (IT+CL)\textsubscript{test} & \begin{tabular}[c]{@{}c@{}}0.42\\ (23.64\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{0.48}\\ (18.64\%)\end{tabular} \\ \hline (IT+CL)\textsubscript{train} & $G$\textsubscript{test} & \begin{tabular}[c]{@{}c@{}}0.60\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{0.63}\\ \end{tabular} \\ \hline \end{tabular} \caption{Domain transfer experiments to compare the performance of MTL-BiLSTM-CRF with the baseline BiLSTM-CRF. The number in parentheses denotes $\Delta_{G}$: the \% difference between the performance on $G$\textsubscript{test} and on the new domain.} \label{tab:domaintransfer} \vspace{-6mm} \end{table} \noindent \textbf{Domain Transfer:} In order to check the generalization capabilities of the MTL model compared to the baseline model, we conducted domain transfer experiments. We experimented with an RR dataset of $50$ documents (referred to as $G$) by \citet{bhattacharya2019identification}. The $G$ dataset comes from a different legal sub-domain (criminal and civil cases) with very little overlap with IT and CL, so practically it is a different domain. We tried different combinations of train and test datasets of IT, CL, and $G$. The results are in Table \ref{tab:domaintransfer}. We observe that the MTL model generalizes better across domains than the baseline model. Both models perform better on $G$\textsubscript{test} when the combined (IT+CL)\textsubscript{train} set is used, which points towards better generalization. \noindent \textbf{Model Distillation:} RR annotation is a tedious process; however, there is an abundance of unlabelled legal documents. We experimented with semi-supervised techniques to leverage the unlabelled data. In particular, we tried a self-training based approach \cite{xie2020self}. The idea is to learn a teacher model $\theta_{tea}$ on the labelled data $D_{L}$. The teacher model is then used to generate hard labels for the sentences of the unlabelled documents: $\hat{y}_{i} = f_{\theta_{tea}}(\hat{d}_{i})\ \forall \hat{d}_{i} \in D_{U}$. Next, a student model $\theta_{stu}$ is learned on the labelled and pseudo-labelled data, with the loss function for student training given by: $L_{ST} = \frac{1}{|D_{L}|}\sum_{d_{j}\in D_{L}}L(f_{\theta_{stu}}(d_{j}), y_{j}) + \frac{\alpha_{U}}{|D_{U}|}\sum_{\hat{d}_{i}\in D_{U}}L(f_{\theta_{stu}}(\hat{d}_{i}), \hat{y}_{i})$. Here, $\alpha_{U}$ is a weighting hyperparameter between the labelled and unlabelled data (details in App. \ref{app:model}). The process can be iterated, and the final distilled model is used for prediction.
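For concreteness, a per-batch sketch of the student objective is given below; it assumes cross-entropy as the per-sentence loss $L$, with \texttt{pseudo\_u} denoting the teacher's hard labels on the unlabelled sentences. In our actual experiments the component losses follow the MTL setup; the function below is only an illustration.
\begin{verbatim}
import torch
import torch.nn.functional as F

def student_loss(logits_l, labels_l, logits_u, pseudo_u, alpha_u=0.3):
    """Self-training loss: supervised term on labelled sentences plus
    an alpha_u-weighted term on teacher pseudo-labelled sentences."""
    sup = F.cross_entropy(logits_l, labels_l)     # mean over D_L batch
    unsup = F.cross_entropy(logits_u, pseudo_u)   # mean over D_U batch
    return sup + alpha_u * unsup
\end{verbatim}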
The results of model distillation are shown in Table \ref{tab:model-distillation} for two iterations (the teacher model of the current iteration is initialized as the learned student model of the previous iteration; further iterations do not improve the results). The base MTL model was run just once; due to variance, it shows an F1 of 0.68. The results improve for the majority of labels, with an increase of 0.11 in F1 score for the RLC label in the first iteration. Also, the variance of F1 scores across labels decreases. \vspace{-2mm} \subsection{Application of Rhetorical Roles to Judgment Prediction} \label{app:judgment} \vspace{-2mm} To check the applicability of RRs in downstream applications, as a use-case, we experimented with how RRs could contribute towards judgment prediction (ethical concerns are discussed later). We use the legal judgment corpus (ILDC) provided by \citet{malik-etal-2021-ildc} and fine-tune a pre-trained BERT model on the train set of ILDC for the task of judgment prediction on the last 512 tokens of the documents. \citet{malik-etal-2021-ildc} observed that training on the last 512 tokens (also the maximum input size of BERT) of a legal document gives the best results; we use the same setting. We use this trained model directly for predicting the outcome of 84 IT/CL cases. We removed the text corresponding to the final decisions (and extracted the gold decisions) from these documents with the help of legal experts. In the first experiment, we use the last 512 tokens of the IT/CL cases for prediction. To study the effect of RRs, in another experiment, we extract the sentences corresponding to the gold ratio (ROD) and ruling (RPC) RR labels in the IT/CL documents and use these as input to the BERT model. We consider only these two RRs since, by definition, these sentences denote the principles and the decision of the court related to the issues in the proceedings. There were no ROD or RPC labels for some documents (16 out of 100 for both IT and CL); we removed these in both experiments. The results are shown in Table \ref{tab:JP}. Using the gold RRs gives a boost to the F1 score. We also experimented with using predicted RRs, and the performance was comparable to that of the BERT model. To explore how predicted rhetorical roles would perform on the judgment prediction task, we perform the following experiment. We use our best performing model, MTL (BERT-SC), trained on the combined IT+CL domain. In the first step, we obtain the predicted rhetorical roles for each sentence in the documents. Next, we select the sentences labeled as ROD or RPC\footnote{We select only these two labels since, by definition, these sentences provide the necessary cues towards the judgment.}. Third, we use a BERT-base model fine-tuned on the last 512 tokens of each document in the ILDC corpus \cite{malik-etal-2021-ildc} and use it to predict the judgment of the test set documents, given only the predicted ROD and RPC sentences. We compare the results obtained when judgment prediction uses the rhetorical roles predicted by the MTL model and by the BiLSTM-CRF baseline; refer to Appendix Table \ref{app:tab:jp} for the results. Since RR prediction for ROD and RPC is not perfect, improving it would greatly enhance the results, as suggested by Table \ref{tab:JP}. A sketch of this pipeline is given below.
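The sketch illustrates the inference pipeline: predicted ROD/RPC sentences are concatenated and passed to a BERT classifier fine-tuned on ILDC. The checkpoint path is a placeholder, and, for simplicity, the snippet truncates to the first 512 tokens, whereas the actual setup uses the last 512 tokens of a document.
\begin{verbatim}
import torch
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification)

def predict_judgment(sentences, rr_labels,
                     model_dir="bert-ildc-checkpoint"):
    """Classify the case outcome from predicted ROD/RPC sentences.
    model_dir is an illustrative path to a BERT model tuned on ILDC."""
    tok = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir)
    text = " ".join(s for s, y in zip(sentences, rr_labels)
                    if y in {"ROD", "RPC"})
    enc = tok(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return int(logits.argmax(dim=-1))  # predicted outcome class
\end{verbatim}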
\begin{table}[t] \small \centering \begin{tabular}{|l|l|c|} \hline \textbf{Model} & \textbf{IT+CL docs} & \textbf{F1} \\ \hline BERT-ILDC & last 512 tokens & 0.55 \\ \hline BERT-ILDC & Gold ROD \& RPC & \textbf{0.58} \\ \hline \end{tabular} \caption{Judgment prediction using RRs. The improvement from using gold ROD and RPC sentences is statistically significant ($p \le 0.05$).} \label{tab:JP} \vspace{-3mm} \end{table} \begin{table}[t] \small \centering \begin{tabular}{|l|c|c|c|} \hline \textbf{Label} & \textbf{Base MTL} & \textbf{Dist. Iter 1} & \textbf{Dist. Iter 2} \\ \hline \textbf{AR} & 0.62 & 0.70 & \textbf{0.70} \\ \hline \textbf{FAC} & 0.74 & \textbf{0.75} & 0.73 \\ \hline \textbf{PR} & 0.68 & 0.72 & \textbf{0.74} \\ \hline \textbf{STA} & 0.76 & \textbf{0.77} & 0.75 \\ \hline \textbf{RLC} & 0.59 & \textbf{0.70} & 0.70 \\ \hline \textbf{RPC} & 0.67 & 0.63 & \textbf{0.73} \\ \hline \textbf{ROD} & 0.68 & 0.66 & \textbf{0.68} \\ \hline \textbf{Macro F1} & 0.68 & 0.71 & \textbf{0.72}\\ \hline \end{tabular} \caption{Model distillation: F1 scores of the MTL-BiLSTM-CRF (BERT-SC) model after two distillation iterations on the IT domain.} \label{tab:model-distillation} \vspace{-6mm} \end{table} \section{Introduction} \label{sec:intro} The number of legal cases has been growing almost exponentially in populous countries like India. For example, as per India's National Judicial Data Grid, there are about 41 million cases pending in India \cite{njdc-district}. As per some recent estimates by a retired Supreme Court of India judge, it will take about 450 years to clear the backlog of cases \cite{backlogcases2019}. Technology could come to the rescue in dealing with the backlog; for example, if there were a technology (based on NLP techniques) that could help a legal practitioner extract relevant information from legal documents, it could make the legal process more streamlined and efficient. However, legal documents are quite different from the conventional documents used to train NLP systems (e.g., newspaper texts). Legal documents are typically long (tens of pages) \cite{malik-etal-2021-ildc}, unstructured \cite{9679940,10.1007/978-3-030-33220-4_20}, noisy (e.g., grammatical and spelling mistakes due to manual typing in courts) \cite{malik-etal-2021-ildc,kapoor-etal-2022-hldc}, and use a different lexicon (legal jargon). The use of a specialized lexicon and different semantics of words makes pre-trained neural models (e.g., transformer-based models) ineffective \cite{chalkidis-etal-2020-legal}. The legal domain has several sub-domains within it (corresponding to different laws, e.g., criminal law, income tax law). Although some of the fundamental legal principles are common, the overlap between different sub-domains is low; hence, systems developed on one law (e.g., income tax law) may not directly work for another law (e.g., criminal law), so there is a problem of domain shift \cite{bhattacharya2019identification,malik-etal-2021-ildc,kalamkar-EtAl:2022:LREC,kapoor-etal-2022-hldc}. In this paper, we target legal case proceedings in the form of judgment documents. To aid the processing of long legal documents, we propose a method of segmenting a legal document into coherent information units referred to as \textit{Rhetorical Roles} \cite{saravanan2008automatic,bhattacharya2019identification}. We propose a corpus of legal documents annotated with Rhetorical Roles (RRs). RRs could be useful for various legal applications.
Legal documents are fairly long, and dividing them into rhetorical role units can help summarize documents effectively. In the task of legal judgment prediction, for example, using RRs, one could extract the relevant portions of the case that contribute towards the final decision. RRs could also be useful for legal information extraction, e.g., they can help extract cases with similar facts. Similarly, prior cases similar to a given case could be retrieved by comparing different rhetorical role units. In this work, we make the following contributions: \noindent 1. We create a new corpus of legal documents annotated with rhetorical role labels. In contrast to previous work (8 RRs) \cite{bhattacharya2019identification}, we create a more fine-grained set of 13 RRs. Further, we create the corpus from different legal domains (\S \ref{sec:corpus}). \noindent 2. For automatically segmenting the legal documents, we experiment with the task of rhetorical role prediction: given a document, predict the text segments corresponding to the various roles. Using the created corpus, we experiment with various deep text classification and baseline models for the task. We propose a new multi-task learning (MTL) based deep model with document-level rhetorical role shift as an auxiliary task for segmenting the document into rhetorical role units (\S \ref{sec:models}). The proposed model performs better than the existing models for RR prediction. We further show that our method is robust under domain transfer to other legal sub-domains (\S \ref{sec:experiments}). We release the corpus, model implementations, and experiment code: \url{https://github.com/Exploration-Lab/Rhetorical-Roles} \noindent 3. Given that annotating legal documents with RRs is a tedious process, we perform model distillation experiments with the proposed MTL model and attempt to leverage unlabelled data to enhance the performance (\S \ref{sec:experiments}). We also show a use-case for the RR prediction model. \section{Rhetorical Roles Prediction} \label{sec:models} We would like to automate the process of segmenting a legal document; to develop ML models for this, we experiment with the task of rhetorical role prediction. \noindent \textbf{Task Definition:} Given a legal document $D$ containing the sentences $[s_{1}, s_{2}, \ldots, s_{n}]$, the task of rhetorical role prediction is to predict the label (or role) $y_{i}$ for each sentence $s_{i} \in D$. \noindent \textbf{Baseline Models:} For the first set of baseline models, the task is modeled as a single-sentence prediction task, where, given a sentence $s$, the model predicts its rhetorical role; the context is ignored. We consider pre-trained BERT \cite{devlin-etal-2019-bert} and LEGAL-BERT \cite{chalkidis-etal-2020-legal} models for this (a fine-tuning sketch is given below). As another set of baselines, we treat the task as sequence labeling, where the sequence of all the sentences in the document is given as input, and the model has to predict the RR label for each sentence. We use a CRF with hand-crafted features \cite{bhattacharya2019identification} and a BiLSTM network.
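As a reference for the single-sentence baselines, the following is a minimal fine-tuning sketch using the Huggingface transformers library; the learning rate and batch size follow App. \ref{app:model}, the maximum sequence length is an assumption, and the dataset handling is simplified.
\begin{verbatim}
import torch
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification)

LABELS = ["FAC", "ARG", "PRE", "STA", "RLC", "ROD", "RPC"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

def training_step(sentences, label_ids):
    """One step of single-sentence RR classification (context ignored)."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(label_ids))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
\end{verbatim}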
\begin{table}[t] \small \centering \begin{tabular}{|c|c|c|} \hline \textbf{Model} & \textbf{Dataset} & \textbf{F1} \\ \hline SBERT-Shift & IT & 0.60 \\ \hline SBERT-Shift & CL & 0.49 \\ \hline SBERT-Shift & IT+CL & 0.47 \\ \hline BERT-SC & IT & 0.66 \\ \hline BERT-SC & CL & 0.64 \\ \hline BERT-SC & IT+CL & 0.64 \\ \hline \end{tabular} \caption{Results for the auxiliary LSP task.} \label{tab:auxillary} \end{table} \noindent \textbf{Label Shift Prediction:} Rhetorical role labels do not change abruptly across sentences in a document, and the text tends to maintain topical coherence. Given the label $y$ for a sentence $s_{i}$ in the document, we hypothesize that the chance of a shift (change) in the label for the next sentence $s_{i+1}$ is low. We manually verified this using the training set and observed that, on average in a document, if the label of sentence $s_{i}$ is $y$, then 88\% of the time the label of the next sentence $s_{i+1}$ is the same as $y$. Note that this holds only for consecutive sentences; in general, the label shift inertia fades as we try to predict beyond the second consecutive sentence. Since we are performing a sequence prediction task, this alone is not a good model for label prediction. Nevertheless, we think that this label shift inertia can provide a signal (via an auxiliary task) to the main sequence prediction model. Based on this observation, we define an auxiliary binary classification task, Label Shift Prediction (LSP), that aims to model the relationship between two sentences $s_{i}$ and $s_{i+1}$ and predict whether the labels $y_{i}$ of $s_{i}$ and $y_{i+1}$ of $s_{i+1}$ are different (a shift occurs) or not. In particular, for each sentence pair $S = \{s_{i}, s_{i+1}\} \in D$, we define the LSP label as $Y=1$ if $y_{i}\neq y_{i+1}$ and $Y=0$ otherwise, where $y_{i}$ is the rhetorical role of sentence $s_{i}$. Note that, for the full model at inference time, the true label of a sentence is not available; hence, predicting a shift in the label makes more sense than performing a binary prediction of whether the next sentence has the same label or not. We model the LSP task via two different models: \noindent \textbf{SBERT-Shift:} We model the label shift via a Siamese network. In particular, we use the pre-trained SBERT model \cite{reimers2019sentence} to encode sentences $s_{i}$ and $s_{i+1}$ to get representations $e_{i}$ and $e_{i+1}$. The combination of these representations ($e_{i}\oplus e_{i+1}\oplus (e_{i}-e_{i+1})$) is passed through a feed-forward network to predict the shift. \noindent \textbf{BERT-SC:} We use the pre-trained BERT model and fine-tune it for the LSP task. We model the input in the form of a sentence semantic coherence task, $[CLS]\oplus s_{i}\oplus [SEP] \oplus s_{i+1} \oplus [SEP]$, to make the final shift prediction. In general, the BERT-SC model performs better than SBERT-Shift (Table \ref{tab:auxillary}). Due to the superior performance of BERT-SC, we include it to provide label shift information to the final MTL model. The aim of our work is to predict RRs, and we use the label shift as auxiliary information even if it may not be predicted correctly at all times. As shown in the results later, this limited information improves the performance. \noindent \textbf{Proposed Models:} We propose two main models for rhetorical role prediction: the LSP-based BiLSTM-CRF (LSP-BiLSTM-CRF) and the MTL model. \noindent \textbf{LSP-BiLSTM-CRF:} The signal from the label shift is used to aid RR prediction in the LSP-BiLSTM-CRF model.
The LSP-BiLSTM-CRF model (Figure \ref{fig:auxRR}) consists of a BiLSTM-CRF model with a specialized input representation. Let the sentence embedding (from pre-trained BERT) corresponding to the $i^{th}$ sentence be $b_{i}$. Let the representation of the label shift (the layer before the softmax layer in the LSP model) between the current sentence and the previous sentence, i.e., for the pair $\{s_{i-1}, s_{i}\}$, be $e_{i-1,i}$. Similarly, for the next pair ($\{s_{i}, s_{i+1}\}$) we get $e_{i,i+1}$. The sentence representation for the $i^{th}$ sentence is given by $e_{i-1,i} \oplus b_{i} \oplus e_{i,i+1}$. This sentence representation goes as input to the BiLSTM-CRF model for RR prediction. \begin{figure}[t] \centering \includegraphics[width=0.30\textwidth]{images/arch1.pdf} \caption{LSP-BiLSTM-CRF model} \label{fig:auxRR} \end{figure} \noindent \textbf{MultiTask Learning (MTL):} We use the framework of multitask learning, where rhetorical role prediction is the main task and label shift prediction is the auxiliary task. Sharing representations between the main and related tasks helps in better generalization on the main task \cite{multiTask-survey-2020}. The intuition is that a label shift would help the rhetorical role component make the correct prediction based on the prospective shift. The MTL model (Figure \ref{fig:MTLarch}) consists of two components: the shift detection component and the rhetorical role prediction component. The shift component predicts if a label shift occurs at the $i^{th}$ position. The output of the BiLSTM layer of the shift component is concatenated with the BiLSTM output of the rhetorical role component. The concatenated output is passed to a CRF layer for the final prediction of the rhetorical role. The loss for the model is given by $L = \lambda L_{shift} + (1-\lambda ) L_{RR}$, where $L_{shift}$ is the loss corresponding to label shift prediction, $L_{RR}$ is the loss corresponding to rhetorical role prediction, and the hyperparameter $\lambda$ balances the importance of each of the tasks. If $\lambda$ is set to zero, we are back to our baseline BiLSTM-CRF model. Since there are two components, we experimented with sending the same encodings of sentences to both the components $(E_{1}=E_{2})$, as well as sending different encodings of the same sentence to both components $(E_{1} \neq E_{2})$. The proposed model is very different from the previously proposed BiLSTM-CRF by \citet{bhattacharya2019identification}, which does not use any multitask learning or label shift information. \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{images/arch2.pdf} \caption{MTL architecture for Rhetorical Role Labelling and Shift Prediction.} \label{fig:MTLarch} \vspace{-6mm} \end{figure} \section{Related Work} \label{sec:related} Legal text processing has been an active area of research in recent times. A number of datasets, applications, and tasks have been proposed, for example, Argument Mining \cite{wyner2010approaches}, Information Extraction and Retrieval \cite{tran2019building}, Event Extraction \cite{lagos2010event}, Prior Case Retrieval \cite{jackson2003information}, Summarization \cite{moens1999abstracting}, and Case Prediction \cite{malik-etal-2021-ildc, chalkidis-etal-2019-neural, strickson2020legal, kapoor-etal-2022-hldc}. Recently, there has been a rapid growth in the development of NLP and ML technologies for the Chinese legal system, inter alia, \citet{chen-etal-2019-charge, hu-etal-2018-shot, jiang-etal-2018-interpretable, ijcai2019-567, ye-etal-2018-interpretable}.
Few works have focused on the creation of annotated corpora and the task of automatic rhetorical role labeling. \citet{venturi2012design} developed a corpus, TEMIS of 504 sentences annotated both syntactically and semantically. The work of \citet{wyner2013case} focuses on the process of annotation and conducting inter-annotator studies. \citet{savelka2018segmenting} conducted document segmentation of U.S. court documents using Conditional Random Fields (CRF) with handcrafted features to segment the documents into functional and issue-specific parts. Automatic labeling of rhetorical roles was first conducted in \citet{saravanan2008automatic}, where CRFs were used to label seven rhetorical roles. \citet{nejadgholi2017semi} developed a method for identification of factual and non-factual sentences using fastText. The automatic ML approaches and rule-based scripts for rhetorical role identification were compared in \citet{walker2019automatic}. \citet{kalamkar-etal-2022-corpus} create a large corpus of RRs and propose transformer based baseline models for RR prediction. Our work comes close to work by \citet{bhattacharya2019identification}, where they use the BiLSTM-CRF model with sent2vec features to label rhetorical roles in Indian Supreme Court documents. In contrast, we develop a multi-task learning (MTL) based model for RR prediction that outperforms the system of \citet{bhattacharya2019identification}.
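To make the MTL objective of Section \ref{sec:models} concrete, the sketch below combines the two task losses as $L = \lambda L_{shift} + (1-\lambda) L_{RR}$ over shared sentence embeddings. It is a simplified illustration: per-sentence cross-entropy stands in for the CRF layers of the full model, and all layer sizes and the value of $\lambda$ are assumptions.

\begin{lstlisting}
# Sketch of the joint MTL objective: two BiLSTM branches over sentence
# embeddings, with the losses mixed by a weight lambda. Cross-entropy is
# used here in place of the CRF layers of the full model.
import torch
import torch.nn as nn

NUM_ROLES, EMB_DIM, HID = 13, 768, 256   # assumed sizes
LAMBDA = 0.3                             # assumed task-balancing weight

class MTLSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.shift_lstm = nn.LSTM(EMB_DIM, HID, batch_first=True,
                                  bidirectional=True)
        self.rr_lstm = nn.LSTM(EMB_DIM, HID, batch_first=True,
                               bidirectional=True)
        self.shift_out = nn.Linear(2 * HID, 2)        # shift / no shift
        self.rr_out = nn.Linear(4 * HID, NUM_ROLES)   # concatenated branches

    def forward(self, sent_emb):                      # (batch, n_sent, EMB_DIM)
        h_shift, _ = self.shift_lstm(sent_emb)
        h_rr, _ = self.rr_lstm(sent_emb)
        rr_logits = self.rr_out(torch.cat([h_rr, h_shift], dim=-1))
        return self.shift_out(h_shift), rr_logits

def mtl_loss(shift_logits, rr_logits, shift_labels, rr_labels):
    ce = nn.CrossEntropyLoss()
    l_shift = ce(shift_logits.flatten(0, 1), shift_labels.flatten())
    l_rr = ce(rr_logits.flatten(0, 1), rr_labels.flatten())
    return LAMBDA * l_shift + (1 - LAMBDA) * l_rr
\end{lstlisting}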
\section{Introduction} COVID-19 has provided the world with an exceptional challenge. The first challenge relates to the methods that countries use to suppress the spread of the virus, and the second is how to re-start economies \cite{chen2020covid}. One of the core methods that could be used to re-start travel within each country is to implement a vaccine passport, which could also be used in future pandemics. Some of the initial approaches to the creation of a vaccine passport have turned to mandatory and centralised approaches using the PKI (Public Key Infrastructure) \cite{dgc}, while others have proposed a non-mandatory and decentralised approach \cite{ouellette2021decentralized}. Within a centralised approach, we normally use a single trust authority to check the signatures on the vaccination passports. This authority will then check the public key of the signer of the passport, and accept it if it is a trusted entity. In this way, trusted health authorities can sign their own passports whenever someone is immunised. Although a centralised approach is relatively easy to implement, it is open to breaches along with being a single point of failure. To alleviate such concerns, this paper proposes a new approach to sharing digital vaccine passports by combining peer-to-peer (P2P) content sharing with Blockchain technology and smart contracts. The InterPlanetary File System (IPFS)~\cite{benet2014ipfs}, a protocol designed to share content in a distributed manner with no single point of failure or inherent trust in the network, is used in our work. A P2P approach ensures scalability and resilience within the network. However, auditability and security challenges remain unmet in P2P protocols. Blockchain and smart contract technologies are promising solutions to manage vaccine certifications while ensuring that the security and privacy of citizen data are maintained. In~\cite{HSJAY:2020}, a Blockchain-based solution, involving self-sovereign identification, decentralised storage and re-encryption proxies, was proposed. The solution used Ethereum smart contracts to run transaction calls and record events containing medical information and COVID test updates. A Blockchain-based approach that prevents the tampering of information such as COVID test results was presented in~\cite{MKSS:2020}. A scalable Blockchain-based platform for data sharing of COVID-19 and vaccination passports was proposed in~\cite{HKGMK:2021}. The evaluation of the designed platform was performed with 27 Blockchain nodes, each of which represents a European member state. A platform for secure COVID-19 passports and digital health certificates, called NovidChain, was built in~\cite{ACKJ:2021}. The platform restricted the propagation of COVID-19 while supporting the privacy concerns of citizens. Although all these approaches provide secure mechanisms for creating vaccine certifications, none of them considers user consent before collecting and processing personal data. Moreover, the data accountability of actors and the right to be forgotten, two main requirements of the General Data Protection Regulation (GDPR), were not included in the reviewed solutions. GDPR is a European legislation for protecting personal data and enables citizens to control how their data is collected and manipulated by processing entities (actors)~\cite{Virvou:2017}. In order to address the aforementioned challenges, this paper proposes a Blockchain-based platform that creates and verifies digital COVID-19 vaccine passports.
The architecture makes use of IPFS for storing and distributing citizens' passport data securely. It also involves a smart contract factory hosted on a Blockchain virtual machine to improve transparency, trust and the data privacy of citizens. An encryption management layer is used to keep anonymised data in a public distributed ledger for the purpose of on-chain analysis. Our platform supports GDPR by implementing smart contracts that: (i) receive and record user consent, (ii) verify operations of actors on passport data for detecting GDPR violations, and (iii) track the realization of the {\it right to be forgotten} in an automatic way. The proposed smart contracts are deployed and executed in Ethereum~\cite{eth:2021}, and their costs and mining time are investigated in Blockchain test networks. The rest of the paper is structured as follows. Section~\ref{sec:arch} describes the architecture of the platform together with a number of protocols realising the architecture's implementation. Section~\ref{expr} assesses the time taken for the creation of vaccine passports via IPFS and the costs required for deploying/ running our implemented smart contracts and their transactions. Finally, Section~\ref{conclu} concludes the paper and identifies directions for future work. \section{Platform Architecture} \label{sec:arch} \begin{figure}[t!] \center \includegraphics[width=0.6\columnwidth]{Arch.pdf} \caption{The architecture of the platform registering vaccine passports} \label{fig:arc} \end{figure} The platform for recording and verifying online COVID-19 vaccine passports, shown in Fig.~\ref{fig:arc}, comprises the following layers: \textbf{InterPlanetary file system (IPFS)} for storing and sharing citizen information. Such information includes \textit{surname(s)}, \textit{forename(s)}, \textit{birth date}, \textit{country of vaccination}, \textit{identification number}, \textit{dose number}, \textit{dates of dosage}, \textit{manufacturer}, \textit{vaccine product} and \textit{vaccine/ prophylaxis}. The information is stored in IPFS through an administrator in a medical centre offering the vaccines. IPFS generates a content identifier (CID), which is a label used to point to each citizen's record/ file. \textbf{Encryption management} creates a hash for each CID generated by IPFS, to protect CIDs from unauthorised access. The layer also keeps CIDs and their hashed versions in local databases. \textbf{Blockchain virtual machine and smart contracts factory} hosts the following smart contracts for storing and monitoring immunity passports through the Blockchain. (i) \textbf{Policy contract} with two functions: \texttt{purpose()} and \texttt{vote()}. \texttt{purpose()} records what operation (i.e., read, write etc.) will be executed by an actor on citizens' data. Each record shows the purpose of data processing by the actor, a third party processing passport data. \texttt{vote()} retrieves the purposes of data processing from the Blockchain and stores the citizen's positive/ negative consent for each retrieved purpose in the Blockchain. The deployer of the contract is an administrator who provides citizens with the deployment address to receive their votes or consents; (ii) \textbf{Log contract} sends the anonymised version of each CID along with its creation time into the Blockchain network. Such records are used as public keys for accessing the passport details. The contract deployer is an administrator identified by a medical centre; (iii) \textbf{Access contract} logs every access to CIDs in a Blockchain.
It logs the operation (i.e., read, write, delete, and so on) executed by an actor on citizens' data within IPFS and submits it to the Blockchain. The contract is initially deployed by the administrator of a medical centre; (iv) \textbf{Verification contract} provides the audit trail of actors processing or accessing citizens' data. The contract involves a function, called \texttt{verify()}, which identifies, as violators, the actors collecting or manipulating vaccine records without obtaining positive consent from citizens. The function is activated by a trusted third party, referred to as an \textit{arbiter}. The entities (i.e., citizens, actors etc.) interacting with the platform execute the transactions/ functions of the proposed contracts to record data in Blockchains. \textbf{Access control and policy management} establishes a role-based mechanism for reading or updating citizens' data. Users, based on their roles, can access CIDs and vaccine passport details, and a copy of such access will be stored in a Blockchain. This layer also determines a set of privacy policies in the form of $\langle ``actor", ``operation", ``purpose"\rangle$, referred to as \textit{data processing purposes}. Through this layer, an administrator communicates with the smart contract factory to store such purposes of data usage in a Blockchain. A \textbf{User interface} implements a front-end decentralized application (DApp) for citizens in order to readily interact with the platform. Technically, it is connected to the contract interface(s) created and hosted on the Ethereum Blockchain virtual machine. The interface also enables citizens to retrieve the purposes of data processing from the Blockchain in a more legible format and give their votes (positive/ negative consent) on the predefined purposes. Citizens' consents are then the inputs for the \texttt{vote()} function involved in the \textit{policy} smart contract. \begin{figure*}[t!] \center \includegraphics[width=0.6\textwidth, frame]{Arch-vaccine.pdf} \caption{Interactions within the platform} \label{fig:arcvac} \end{figure*} Figure~\ref{fig:arcvac} represents the realization of the platform's architecture and shows the interactions within the architecture. Based on their order, these interactions can be classified into four phases: \textit{agreement}, \textit{passport creation}, \textit{access control} and \textit{verification}. The description of these phases, along with the interactions' steps, is provided below. \\ \begin{figure}[t!] \center \includegraphics[width=0.46\textwidth]{vaccine-agreement.pdf} \caption{A protocol for the agreement phase} \label{fig:agree} \end{figure} \noindent{\bf Agreement Phase:} A protocol capturing interactions among citizens, the administrator and the Blockchain for recording purposes of data processing and user consent. The interactions' steps \textit{one} to \textit{four} in Fig.~\ref{fig:arcvac} are realised through this phase. Figure~\ref{fig:agree} shows a protocol in the form of a sequence diagram to describe the realisation. The main entities are the user interface, the administrator and the Blockchain. The administrator, as the data controller, deploys the policy contract (step~1 in Fig.~\ref{fig:arcvac}) and activates the \texttt{purpose()} function to send data usage purposes into the Blockchain (step~2 in Fig.~\ref{fig:arcvac}). Each record contains: (i) \textit{actor}, who will update or collect citizens' passport data, (ii) \textit{operation} that shows what actions (e.g., read, write etc.)
will be carried out by the actor on citizens' data, and (iii) \textit{purpose} that describes what the operation is used for. Once such records have been added to the Blockchain network, the administrator provides the user interface with the deployment address of the policy contract in order to make the records accessible to end users (citizens). By activating the \texttt{vote()} function, citizens can retrieve the data usage purposes from the Blockchain and give their consent before any processing of their personal data (step~3 in Fig.~\ref{fig:arcvac}). The citizens' votes will be kept in the Blockchain as evidence for future verification (step~4 in Fig.~\ref{fig:arcvac}). This phase realises Recitals (32) and (43) of GDPR, under which data subjects (citizens) should give their consent for any operation undertaken by data processors (actors) on their personal data. \begin{figure}[t!] \center \includegraphics[width=0.46\textwidth]{vaccine-passport.pdf} \caption{A protocol for the passport creation phase} \label{fig:pass} \end{figure} \noindent{\bf Passport Creation Phase:} In this phase the vaccine passport details are stored in IPFS and their associated anonymised CIDs are recorded in the Blockchain. It realises steps \textit{five} to \textit{nine} in Fig.~\ref{fig:arcvac} to create the vaccine certifications. Figure~\ref{fig:pass} depicts a sequence diagram for the phase. After the passport administrator collects a citizen's data, the data is sent to IPFS (step~5 in Fig.~\ref{fig:arcvac}). The data is recorded and a CID is automatically generated via IPFS (step~6 in Fig.~\ref{fig:arcvac}), and sent to the encryption management layer (step~7 in Fig.~\ref{fig:arcvac}). Following that, the administrator, by deploying the log contract (step~8 in Fig.~\ref{fig:arcvac}), submits the anonymised version of the CID to the Blockchain (step~9 in Fig.~\ref{fig:arcvac}). \begin{figure}[t!] \center \includegraphics[width=0.45\textwidth]{access-vaccine.pdf} \caption{A protocol for accessing passport data} \label{fig:access} \end{figure} \noindent{\bf Access Control Phase:} This phase monitors and verifies all access requests to citizens' passport data. It realises steps \textit{ten} to \textit{thirteen} in Fig.~\ref{fig:arcvac}. The sequence diagram depicted in Fig.~\ref{fig:access} is a protocol for access control management in the platform. Citizens and trusted parties, through the user interface, are able to send their requests for access to passport data (step~10 in Fig.~\ref{fig:arcvac}). The request also contains the operation (such as update) that will be carried out on the data. Upon receipt of a request, a monitoring agent in the access control management layer is automatically activated to check the authorisation of the requester. In case of authorised access, the agent runs the access smart contract in order to record the \textit{requester ID}, \textit{access time} and \textit{permitted/ executable operation(s)} (e.g., view, update etc.) in the Blockchain (steps~11 and 12 in Fig.~\ref{fig:arcvac}).\footnote{For privacy purposes, a hashed version of the requester ID, which refers to their Blockchain account, is stored on-chain.} Such records are used for future verification. Following that, the hashed/ anonymised version of the CID is decrypted, and the passport data is retrieved from IPFS to be accessible to the requester (step~13 in Fig.~\ref{fig:arcvac}). \begin{figure}[t!]
\center \includegraphics[width=\columnwidth]{verify-phase.pdf} \caption{A protocol for verifying GDPR violations} \label{fig:verify} \end{figure} \noindent{\bf Verification Phase:} In this phase (Fig.~\ref{fig:verify}), violators who access or manipulate citizens' data without their consent are identified. It realises steps \textit{fourteen} to \textit{sixteen} in Fig.~\ref{fig:arcvac}. The arbiter, via the user interface, sends a verification request for reporting violators to citizens or legal officers. After the access control management layer approves the authentication and authorization of the arbiter, the verification contract is deployed and its address is delivered to the arbiter. The deployment address enables the arbiter to access the records already stored by both the \textit{policy} and \textit{access} contracts. Then, the arbiter, by running the \textit{verify} function (step 14 in Fig.~\ref{fig:arcvac}), flags non-compliant operations of actors on citizens' data and reports GDPR violators (step~15 in Fig.~\ref{fig:arcvac}). The \textit{verify} function checks the following items to detect violations: (i) whether the actors stored by the \textit{access} contract conform to those logged via the \textit{policy} contract; (ii) whether the operations of each actor recorded through the \textit{access} contract conform to those recorded via the \textit{policy} contract; (iii) whether the operations of each actor logged by the \textit{access} contract were already confirmed by the data subject (citizen) or not. Assume that $A_c$ is the set of actors who got positive consent from the data subject via the \textit{policy} contract; $A_e$ is the set of actors that executed operations on the citizen's passport data and were recorded by the \textit{access} contract; $O_{a}$ is the set of operations of an actor $a \in A_c$ that received positive consent from the data subject via the \textit{policy} contract; and $\mathcal{O}_{a}$ is the set of operations executed by $a \in A_c$ on passport data and recorded via the \textit{access} contract. Given these assumptions, Algorithm~\ref{verify1} presents the verification of actors implemented as a part of the \textit{verify} function. The set $V$ denotes violators. The inputs of the function are the deployment addresses of both the \textit{policy} and \textit{access} smart contracts. \small \begin{algorithm} \caption{ ~Verifying actors}\label{verify1} \begin{algorithmic}[l] \Function{verify}{} \State $ V \gets\emptyset ;$ \If {$A_e \not \subseteq A_c$} \State $ V \gets V \cup A_e\!\setminus \!A_c;$ \EndIf \For {$\mbox{all}~a \in A_c$} \If {$\mathcal{O}_{a} \not \subseteq O_{a}$} \State $ V \gets V \cup \{a\};$ \EndIf \EndFor \State $\textbf{return}~~V;$ \EndFunction \end{algorithmic} \end{algorithm} \normalsize As seen from the algorithm, a violation is flagged if: (i) an actor processes passport data without the confirmation of the data subject; or (ii) an accepted actor executes an operation already rejected by the citizen. The outputs of the \textit{verify} function will be visible to the arbiter in a legible format (step~16 in Fig.~\ref{fig:arcvac}). \begin{figure}[t!] \center \includegraphics[width=0.45\textwidth]{right-forgot.pdf} \caption{A protocol for erasure of vaccine data} \label{fig:rightForg} \end{figure} \noindent{\bf Right to be Forgotten:} Citizens have the right to obtain from the passport administrator (the data controller) the erasure of their data without undue delay (Art.~17 of GDPR).
In order to meet this GDPR principle through our proposed architecture, a protocol is presented in Fig.~\ref{fig:rightForg}. It provides a basis for the data accountability of administrators or responsible actors, to track whether the citizen's data have been deleted without undue delay. A citizen is able to submit a data erasure request, which is received by the monitoring agent in the access control management layer. The identification and permission of the citizen are then verified by the agent, and the \textit{access} smart contract is deployed to record a copy of the request in a Blockchain. Upon receipt of the deployment address of the contract by the administrator, the CID related to the vaccine passport data is collected and the citizen's personal data is deleted within IPFS. After erasing the data, a confirmation is sent to the citizen. Moreover, a copy of such confirmation, denoting by whom and when the data have been removed, is stored in the Blockchain as evidence for future verification. In order to detect any violation with regard to this right automatically, the \textit{verification} smart contract is extended to include an \texttt{erase\_verify()} function. By retrieving the blocks containing erasure requests/ confirmations created by the \textit{access} contract, the function flags the violator if: (i) the erasure confirmation has not been recorded by the administrator in the Blockchain; or (ii) the time difference between the erasure request and the erasure confirmation logged by the administrator is greater than a deadline already determined through the purposes of data processing in the agreement stage. Both cases are investigated by an arbiter, e.g., if the citizen observes that their data is still available in IPFS after the deadline has passed, a claim can be made by the citizen and submitted to the arbiter. \section{Experimental Results} \label{expr} Our experiments evaluate the creation of vaccine passports using IPFS and the implementation of our proposed smart contracts using Blockchain test networks. To demonstrate the scalability of our proposed solution, the CID generation time in IPFS was evaluated. We chose to measure the time taken to generate 10 to 100 CID values for simulated vaccination passports which may be added to the IPFS network. Our vaccination passport is encoded as a JSON object, and the contents are derived from the fields used by the NHS in real-world vaccination scenarios (described in Section \ref{sec:arch}). A script was used to generate the passport data. Each passport created contains a unique Community Health Index (CHI) number: a 10-digit value used to identify patients in Scotland. The use of a unique CHI number for each passport ensures the IPFS CID generated will also be unique (recall that the IPFS CID is derived from the content of a resource using cryptographic hashing). Each passport object generated is 452 bytes in size. \begin{figure}[h!] \begin{center} \includegraphics[width=0.7\columnwidth]{figures/ipfs_cid_result.pdf} \label{dag01} \caption{IPFS CID generation average time}\label{fig:ipfs_generation_time} \end{center} \end{figure} A private instance of the IPFS network \cite{IPFS2021b} was created as a Docker image for the purposes of this evaluation. A private IPFS network (one which is isolated from the public network) was used to ensure no experimental data was accidentally added to the public P2P network.
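The snippet below sketches how a simulated passport record and a content-derived identifier can be produced. The field values, helper names and the plain SHA-256 digest are illustrative assumptions only; a real IPFS CID is a multihash computed by the IPFS daemon (via \texttt{ipfs add}), not a bare SHA-256 hex digest.

\begin{lstlisting}
# Sketch: build a simulated vaccination passport (JSON) and derive a
# content-based identifier from it. The bare SHA-256 digest is a stand-in
# for the multihash-encoded CID that IPFS itself would assign.
import hashlib
import json
import random

def make_passport(chi: str) -> dict:
    # Field names follow the passport description in the architecture
    # section; the values below are dummy data.
    return {
        "surname": "Smith", "forename": "Alex", "birth_date": "1980-01-01",
        "country_of_vaccination": "GB", "identification_number": chi,
        "dose_number": 2, "dates_of_dosage": ["2021-02-01", "2021-04-12"],
        "manufacturer": "ExampleManufacturer",
        "vaccine_product": "ExampleVaccine",
        "vaccine_prophylaxis": "COVID-19 vaccine",
    }

def content_id(passport: dict) -> str:
    """Hash a canonical JSON encoding, so the identifier depends only on content."""
    payload = json.dumps(passport, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

chi = "".join(random.choices("0123456789", k=10))  # 10-digit CHI-style number
print(content_id(make_passport(chi)))
\end{lstlisting}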
The Linux time \cite{LinuxTime} command was used to monitor the time taken for the IPFS command to generate 10 to 100 CIDs. We monitor the time taken (user+sys time) for the \texttt{ipfs add -r} command to generate CIDs recursively for all content (from 10 to 100 passports). Figure \ref{fig:ipfs_generation_time} shows the CID generation time for 10 to 100 CIDs derived from simulated vaccination passport data. A minimum of 0.37 seconds was observed (for 10 CIDs), while a maximum of 0.59 seconds was observed. On 6 July 2021 there were 387,286 total vaccination doses given to UK citizens, based on figures released by the UK government~\cite{UK2021}. If we assume a vaccination passport entry was to be created for all 387,286 vaccination events, and assume it takes a maximum of 0.59 seconds to generate 100 CIDs, our results show that it would take around \textbf{38 minutes} to generate a CID for all passports, which we believe to be a reasonable amount of time. \subsection{Investigation of Proposed Smart Contracts} A Blockchain-based prototype has been developed and assessed through both the Ganache~\cite{ganache:2021} and Ropsten~\cite{Ropsten:2021} test networks. We implemented our smart contracts on Ethereum using the Solidity language~\cite{solidit:2021}. Ganache is a local test network that provides multiple default gas and ether values, which can be applied as a currency to change Blockchain states when running function calls. Ropsten is a public test network involving a set of miners and gives detailed information relevant to miners. However, it has a gas limit of 4712388 for executing a smart contract. Our proposed smart contracts have been written with a minimum gas usage for each function activation. They were compiled and successfully tested using Remix, a browser-based development environment for Solidity. The contracts \textit{Policy}, \textit{Log}, \textit{Access} and \textit{Verification} were deployed in the aforementioned networks. The amount of gas used for contract deployment was 792065 for \textit{Policy}, 157339 for \textit{Log}, 796253 for \textit{Access}, and 1223998 for \textit{Verification}. These figures represent the computational cost for executing each contract. However, changing the number of actors and their access requests has an impact on the transaction costs and mining time. To this end, the following investigations are provided. \begin{figure}[t!] \center \includegraphics[width=0.7\columnwidth]{chart-gas.pdf} \caption{Number of actors and cost} \label{fig:expgas} \end{figure} \subsubsection{Number of Actors and Transaction Costs} This experiment varies the number of actors from one to ten and evaluates the cost required for activating transactions. The assumption is that the gas price unit is 20 \textit{gwei} and the number of operations (i.e., read, write, delete) carried out by each actor on vaccine data is three. Our proposed smart contracts have been deployed in the Ganache test network. We calculated the average costs in Ether (ETH) after executing the functions five times with different parameters (values). Figure~\ref{fig:expgas} illustrates the results of this experiment.
As seen, the lowest costs in Ether are allocated to the transactions with one actor and the highest values belong to those involving ten actors.\footnote{The actors for the \textit{Log} contract are citizens, whose anonymised CIDs are recorded in the Blockchain.} Furthermore, when the number of actors increases, the verification cost rises more sharply compared to the other contracts. In fact, the \textit{Verification} contract has a high complexity, since it must call both the \textit{Policy} and \textit{Access} smart contracts in order to check the GDPR compliance of actors and report violators. \begin{figure}[t!] \center \includegraphics[width=0.7\columnwidth]{chart-time.pdf} \caption{Number of actors and mining time} \label{fig:exptime} \end{figure} \subsubsection{Number of Actors and Mining Time} This investigation examines the impact of changing the number of actors on the time taken for the mining process under different gas prices. The number of actors is varied from one to five, each of which executes two operations on citizens' data. We consider various gas prices (i.e., 1, 6 and 12 \textit{gwei}). The Ropsten test network was used to get the results of this experiment, as it gives a measurement of the time taken from execution to mining of a block. The \textit{verification} contract was deployed in Ropsten and its \textit{verify} function was activated five times to reach an average mining time. Figure~\ref{fig:exptime} shows the results of this evaluation. Given a fixed gas price, there is a fluctuation in the trend of the chart. As a result, the mining time depends mainly on the willingness of miners to validate/ mine the transactions created by the activation of the \textit{verify} function, and the number of actors does not have an impact on the mining time. However, when the gas price increases, the mining time decreases significantly. It shows that higher gas prices motivate miners to accelerate the mining process and block creation. \section{Conclusion} \label{conclu} This paper describes a Blockchain-based platform for the creation of online COVID-19 vaccine certifications. The platform uses IPFS and smart contracts to safely keep citizens' information and improve the data accountability of third parties processing or accessing this information. The purposes of data processing of actors are automatically sent into a Blockchain, and the platform enables citizens to give a positive/ negative consent for each purpose via a smart contract. The proposed approach meets GDPR requirements, and only non-sensitive data is stored on-chain for the audit trail of actors. Compared to the recent Blockchain-based platforms presented in cloud and IoT ecosystems for keeping and verifying health data~\cite{BaratiRana:2020,BRPT:2020}, our platform benefits from a technical solution for checking data erasure requests, which is a significant right in GDPR. A sample vaccine passport template has been implemented, and the evaluations show that it takes less than one second to generate 100 passport CIDs in IPFS. Furthermore, the proposed smart contracts have been tested in both the Ganache and Ropsten environments. The results demonstrate a noticeable increase in the verification cost when the number of actors increases. Given a fixed gas price used for the execution of smart contract opcodes, the investigation shows that miners normally take an arbitrary time to mine transactions/ blocks.
Future work focuses on the development of the proposed architecture for implementing both the access control and encryption management layers. Moreover, the performance and scalability of the presented platform still require further tests and evaluations in terms of time, CPU usage, memory and so on. The integration of the platform in a cloud environment and the management of CIDs generated by IPFS remain other challenges for future investigation. {\bf Acknowledgment}: This work is partially funded under the EPSRC ``Privacy Aware Cloud Ecosystems" (PACE) project. \bibliographystyle{ACM-Reference-Format} \section{Introduction} COVID-19 has provided the world with an exceptional challenge. The first challenge relates to the methods that countries use to suppress the spread of the virus, and the second is how to re-start economies \cite{chen2020covid}. One of the core methods that could be used to re-start travel within each country is to implement a vaccine passport, which could also be used in future pandemics. Some of the initial approaches to the creation of a vaccine passport have turned to mandatory and centralised approaches using the PKI (Public Key Infrastructure) \cite{dgc}, while others have proposed a non-mandatory and decentralised approach \cite{ouellette2021decentralized}. Within a centralised approach, we normally use a single trust authority to check the signatures on passports. This authority will then check the public key of the signer of the passport, and accept it if it is a trusted entity. In this way, trusted health authorities can sign their own passports whenever someone is immunised. This approach, while relatively easy to implement, leaves the centralised infrastructure open to breaches, along with a major problem around the revocation of signing keys. Blockchain and smart contracts have been combined with platforms creating vaccine certificates, to increase trust in such certificates and the privacy of citizens' data. In~\cite{HSJAY:2020}, a Blockchain-based solution involving self-sovereign identification, decentralised storage and re-encryption proxies was proposed. The approach used Ethereum smart contracts to run transaction calls and record events containing medical information and COVID-19 test updates. A Blockchain-based approach that prevents information tampering, such as with COVID-19 test results, was presented in~\cite{MKSS:2020}. The approach supports monitoring of COVID-19 spread at an earlier stage via passengers' travel history. A scalable, Blockchain-based platform for data sharing of COVID-19 and vaccination passports was proposed in~\cite{HKGMK:2021}. The evaluation of the designed platform was performed with 27 Blockchain nodes, each of which represents a European member state. A platform for secure COVID-19 passports and digital health certificates, called NovidChain, was built in~\cite{ACKJ:2021}. The platform restricted the propagation of COVID-19 while supporting privacy concerns and ensuring citizens' self-sovereignty for accessing their data. Although all these approaches provide secure mechanisms for creating vaccine certificates, none of them considers user consent before processing personal data. Moreover, the data accountability of actors and the right to be forgotten, which are the main requirements of the General Data Protection Regulation (GDPR), were not studied in the reviewed approaches.
GDPR is a European legislation for protecting personal data and enables citizens to control how their data is collected and manipulated by processing entities (actors)~\cite{Virvou:2017,ABRP:2021}. In order to address the aforementioned challenges, this paper presents an architecture for a Blockchain-based platform that creates and verifies digital COVID-19 vaccine passports. The platform makes use of IPFS for storing and distributing citizens' passport data securely. It also involves a smart contract factory to improve transparency, trust and the data privacy of citizens. Our proposed platform supports GDPR by implementing smart contracts that: (i) receive and record user consent, (ii) verify the operations of actors on passport data, and (iii) track the realization of the right to be forgotten. The proposed smart contracts are deployed and tested using the Ethereum virtual machine, and their costs and mining time are investigated in Blockchain test networks. The rest of the paper is structured as follows. Section~\ref{back} briefly describes digital vaccine passports and IPFS. Section~\ref{sec:arch} describes the architecture of the platform together with a number of protocols used in the implementation. Section~\ref{expr} assesses the time taken for the creation of vaccine passports via IPFS and the costs required for deploying our implemented smart contracts and their transactions. Finally, Section~\ref{conclu} concludes the paper and identifies directions for future work. \section{Background \& Context} \label{back} This section provides brief descriptions of digital vaccine passports and the InterPlanetary File System (IPFS). \subsection{Vaccine Passports} Phelan~\cite{phelan2020covid} outlines that those who hold vaccination passports could be exempt from self-isolation when they travel. \subsubsection{Digital Green Certificate} In order to re-start international travel, the EU Commission has defined the Digital Green Certificate (DGC) programme \cite{dgc}, which defines three certificates: \emph{vaccination certificates}, \emph{test certificates} (NAAT/RT-PCR test or a rapid antigen test), and \emph{certificates for persons who have recovered from COVID-19}. This makes use of a centralised system that will receive the digitally signed records. Each of these will be signed by a trusted health care entity, and checked against the public key of that trusted entity. In the best case, key pairs would be issued to every trusted health care professional to sign passports. This would allow fine-grained control of signing and auditing of each signing authority. If there were a breach of a private key, the corresponding public key would be revoked, and it would have a minimal impact. What is likely to happen, however, is that a countrywide health authority will have a single signing key pair. A single breach of the private key will bring down every single DGC signed by that authority, as all of the passports will be marked as untrusted. For cybercriminals, the private key will be a key target, as it will be of significant worth to them on the open market. \subsection{IPFS} IPFS \cite{benet2014ipfs, ipfs} is a distributed, Peer-to-Peer (P2P) content sharing protocol. Traditionally, resources have been shared using a location-based approach (e.g. a URL or file path).
In IPFS, every individual resource which is to be shared on the P2P network is identified and located using an identifier which is derived directly from the content of the resource. IPFS does not centralise the storage of resources. Instead, peers on the network will distribute data individually, and any peer who downloads the content will also become a distributor of that data. Furthermore, the P2P nature of IPFS means we can reduce the latency involved in sourcing data, along with building resilience. IPFS can be used to represent any kind of digital content, including websites, folders, images, documents and even databases. IPFS distributes data on the P2P network by first breaking down resources into blocks of $256 \times 1024$ (262,144) bytes by default \cite{IPFS2021a}. Breaking down resources into blocks allows for deduplication (thus saving space) and storage of content in a distributed manner. Each block of data is content addressable using a Content-Identifier (CID). A Merkle Directed Acyclic Graph (DAG) data structure is used to represent each block of data and dictate the relationship between resources (e.g. parent folder and child files). Lastly, a Distributed Hash-Table (DHT) is used to allow peers to route and locate desired resources on the network (i.e. which peer is storing certain blocks and where they are located). The paragraphs which follow describe CIDs, the Merkle DAG and the DHT in greater detail. \noindent {\bf CID:} CIDs are used in IPFS to achieve the goal of sharing resources using a content-based approach, assigning a unique identifier to each content resource (e.g. a text file or an image) to be shared on the IPFS network. A CID is a \emph{self-describing} identifier \cite{Multiformats2020} which uses a cryptographic hash (SHA-256) to address the content. Since the CID hash is derived directly from the content (e.g. the text ``hello world''), the CID will remain the same regardless of the filename or any other associated metadata. This allows for a degree of assurance that one is downloading the correct content from the IPFS network so long as the CID is known and trusted. \noindent {\bf Merkle DAG:} A Merkle DAG~\cite{merkle1987digital} uses a cryptographic hashing function to represent and derive nodes of data from a root node. It is commonly described as a Merkle \emph{tree} due to the fact that the data structure it represents resembles an upside-down tree (where the top node is the root). A Merkle tree can be used to verify the integrity of a data structure in a scalable manner, since the root node's cryptographic hash can be used to verify the entire data structure represented by this algorithm. A Merkle DAG is an acyclic variation of a Merkle tree with unique properties. Firstly, data structures represented by a Merkle DAG are \emph{directed}, which means there is a forward direction defining the relationship between two nodes (e.g. a parent folder points to a child document). In the context of IPFS, the Merkle DAG is used to represent folders and files shared on the P2P network. \noindent{\bf DHT:} A DHT is a key-value lookup table which maps content hash values (CIDs) to the location of content (i.e. the peers which host the files) in IPFS. The DHT is used for routing and informing peers where resources are located on the P2P network \cite{benet2014ipfs}. Each peer on the IPFS network will store and maintain a list of known peers as new nodes join the network. The DHT algorithm implemented by IPFS is named Kademlia \cite{maymounkov2002kademlia}.
Kademlia uses a unique address in the range of $0$ to $2^{256-1}$ to identify each peer on the IPFS network \cite{IPFS2021} and uses the exclusive OR ($XOR$) function to calculate the distance between each peer (thus allowing nodes to determine their nearest neighbours). This approach allows for the node lookup time to be $O(log(N))$ (logarithmic time) \cite{IPFS2021, maymounkov2002kademlia} therefore ensuring scalability in the IPFS network even with a large number of peers. \section{Systems Architecture} \label{sec:arch} \begin{figure}[t!] \center \includegraphics[width=0.65\columnwidth]{Arch.pdf} \caption{The architecture of platform registering vaccine passports} \label{fig:arc} \end{figure} A conceptual architecture for recording and verifying online COVID-19 vaccine passports is proposed in Fig.~\ref{fig:arc}, and consists of the following layers: \textbf{InterPlanetary file system (IPFS)} is a peer-to-peer network for storing and sharing the information relevant to the citizens who have been vaccinated. In Scotland, such information includes \textit{surname(s)}, \textit{forename(s)}, \textit{DOB}, \textit{country of vaccination}, \textit{identification number}, \textit{dose number}, \textit{dates of dosage}, \textit{manufacturer}, \textit{vaccine product} and \textit{vaccine/prophylaxis}. The information is stored in IPFS through an administrator in a medical centre offering the vaccines. IPFS generates a content identifier (CID), which is a label used to point to each citizen's record/ file. \\ \textbf{Encryption management} anonymises or creates a hash for each CID generated by IPFS. The anonymisation is used for protecting CIDs from unauthorised access. The layer also keeps CIDs and their hashed versions in a local database. \\ \textbf{Blockchain virtual machine and smart contracts factory} hosts the following smart contracts for storing and monitoring immunity passports through a Blockchain. \begin{itemize} \item \textbf{Policy contract} involves two functions, called as \texttt{purpose()} and \texttt{vote()}. The former records what operation (i.e., read, write etc.) will be executed by which actor on citizen's data. Each record shows a purpose of data processing by the actor who is a third party processing passports' data. As an example, an actor can be the provider of a cloud-based service who has a contract with medical centers in order to collect and profile the data for analytic purposes. The latter function retrieves the purposes of data processing from the Blockchain and stores the positive/negative citizen's consent for each retrieved purpose in the Blockchain. The deployer of the contract is administrator that provides citizens with deployment address to receive their votes (positive or negative consents). \item \textbf{Log contract} sends the anonymised version of CID along with its creation time into the Blockchain network. Such records are used as public keys for accessing the passports details. The \emph{contract deployers} are trusted administrators identified by medical centres. \item \textbf{Access contract} logs every access to CIDs in a Blockchain. It logs the operation (i.e., read, write, delete, and so on) which is executed by an actor on citizens' data within IPFS and submits it to the Blockchain. The contract is deployed by the \emph{access control manager} in the system. \item \textbf{Verification contract} provides the audit trail of actors processing or accessing to citizens' data. 
The contract involves a function, called \texttt{verify}, which identifies, as violators, the actors collecting or manipulating vaccine records without obtaining positive consent from citizens. The function is activated by a trusted third party, referred to as an \textit{arbiter}. \end{itemize} The deployers or agents existing in upper or lower layers interact with the proposed contracts to record data in Blockchains. \\ \textbf{Access control and policy management} establishes a role-based mechanism for reading or updating citizens' data. Users, based on their roles, can access CIDs and vaccine passport details, and a copy of such access will be sent into the Blockchain. The layer also determines a set of privacy policies in the form of $\langle ``actor", ``operation", ``purpose"\rangle$, referred to as \textit{data processing purposes}. An administrator in the layer communicates with the smart contract factory to store such purposes of data usage in a Blockchain. \\ \textbf{User interface} implements a user-friendly, front-end decentralized application (DApp) for citizens in order to readily interact with the platform. Technically, it is connected to the contracts' interfaces created and hosted on the Ethereum Blockchain virtual machine. The interface also enables citizens to retrieve the purposes of data processing from the Blockchain in a more legible format and give their votes (positive/ negative consent) on the predefined purposes. In fact, the citizens' consents will be considered as the inputs for the \texttt{vote()} function involved in the \textit{policy} smart contract. There are four phases for realizing the architecture: \textit{agreement}, \textit{passport creation}, \textit{access control} and \textit{verification}. \begin{figure}[t!] \center \includegraphics[width=0.45\textwidth]{vaccine-agreement.pdf} \caption{A protocol for the agreement phase} \label{fig:agree} \end{figure} \subsection{Agreement phase} This phase presents a protocol describing the interactions among citizens, the system administrator and the Blockchain for recording purposes of data processing and citizens' consents. Figure~\ref{fig:agree} shows the protocol in the form of a sequence diagram. As seen, the main entities are the user interface, the system administrator and the Blockchain. The administrator, as the data controller, first deploys the policy contract and activates the purpose function to send data usage purposes into the Blockchain. Precisely, each record contains: (i) \textit{actor}, who will update or collect citizens' passport data, (ii) \textit{operation} that shows what actions (e.g., read, write etc.) will be carried out by the actor on citizens' data, and (iii) \textit{purpose} that describes what the operation is used for. Once such records have been added to the Blockchain network, the administrator provides the user interface with the deployment address of the policy contract in order to make the records accessible to end users (citizens). The user interface, then, by activating the vote function, enables users to retrieve the data usage purposes from the Blockchain and freely give their consent to them before any processing of their personal data. The users' votes will be kept in the Blockchain as evidence for future verification. This phase realises Recitals (32) and (43) of GDPR, under which data subjects (citizens) should give their consent for any operation undertaken by data processors (actors) on their personal data. \begin{figure}[t!]
\center \includegraphics[width=0.45\textwidth]{vaccine-passport.pdf} \caption{A protocol for passport creation phase} \label{fig:pass} \end{figure} \subsection{Passport creation phase} This phase represents the steps in which the vaccine passports details will be stored in IPFS and their associated anonymised CIDs are recorded in Blockchain. Figure~\ref{fig:pass} depicts the protocol of the phase. After collecting citizens data by passport administrator, the data is sent to IPFS. Then, the data is recorded and a CID is automatically generated via IPFS. Once the CID has been received, it is forwarded to the encryption management layer so as to be anonymised. Following that, the passport administrator, by deploying log contract, submits the anonymised version of CID to Blockchain. There are several techniques for data anonymisation~\cite{Sedayao:2012} such as hashing, permutation among others, each of which can be exploited for mapping the CIDs into the anonymised ones. \begin{figure}[t!] \center \includegraphics[width=0.45\textwidth]{access-vaccine.pdf} \caption{A protocol for accessing to passport data} \label{fig:access} \end{figure} \subsection{Access control phase} This phase monitors and verifies all the access requests to passport citizen data. The sequence diagram depicted in Fig.~\ref{fig:access} is a protocol for the access control. Citizens and trusted parties through user interface are able to send their request for access to passports data. The request also contains the operation (such as update and so on) that will be carried out on the data. Upon the receipt of request, the access control management service's agent checks the authorisation of requester. In case of authorised access, the agent deploys the access smart contract in order to record \textit{requester ID}, \textit{access time} and \textit{permitted/ executable operation(s)} (e.g., view, update etc.) in the Blockchain.\footnote{For privacy purposes, a hashed version of requester ID, which refers to their Blockchain account is stored on-chain.} Such records are used for future verification. Following that, the hashed/anonymised version of CID is decrypted, and finally the passport data is retrieved from IPFS to be accessible for the requester. \begin{figure}[t!] \center \includegraphics[width=0.45\textwidth]{verify-phase.pdf} \caption{A protocol for verifying GDPR violations} \label{fig:verify} \end{figure} \subsection{Verification phase} This phase, through the protocol represented in Fig.~\ref{fig:verify}, detects the violators who access or manipulate citizens data without getting their consents. The arbiter via user interface sends their verification request for reporting violators once claimed by citizens or legal offices. After approving the authorization of arbiter by access control management service's agent, the verification contract is deployed and its address is forwarded to the arbiter. The deployment address enables the arbiter to access the records already stored by both \textit{policy} and \textit{access} contracts. Then, the arbiter, by running the verify function, flags any violation on citizens data and identifies violators. 
The verify function checks the following items to detect violations: \begin{enumerate} \item whether the actors stored by the access contract conform to those logged via the policy contract or not; \item whether the operations of each actor recorded through the access contract conform to those recorded via the policy contract or not; \item whether the operations of each actor logged by the access contract were already confirmed by the data subject (citizen) or not. \end{enumerate} Assume that $A_c$ is the set of actors with positive consent from the data subject via the policy contract; $A_e$ is the set of actors that executed operations on the citizen's passport data and were recorded by the access contract; $O_{a}$ is the set of operations of actor $a \in A_c$ that got positive consent from the data subject via the policy contract; and $\mathcal{O}_{a}$ is the set of operations executed by $a \in A_c$ on passport data and recorded via the access contract. Given these assumptions, Algorithm~\ref{verify1} presents the verification of actors implemented as a part of the \textit{verify} function. \small \begin{algorithm} \caption{ ~Verifying actors}\label{verify1} \hspace*{\algorithmicindent} Let $V$ be a set denoting violators \\ \hspace*{\algorithmicindent} \textbf{Input:} policy \& access deployment addresses\\ \hspace*{\algorithmicindent} \textbf{Output:} $V$ \begin{algorithmic}[l] \Function{verify}{} \State $ V \gets\emptyset ;$ \If {$A_e \not \subseteq A_c$} \State $ V \gets V \cup A_e\!\setminus \!A_c;$ \EndIf \For {$\mbox{all}~a \in A_c$} \If {$\mathcal{O}_{a} \not \subseteq O_{a}$} \State $ V \gets V \cup \{a\};$ \EndIf \EndFor \State $\textbf{return}~~V;$ \EndFunction \end{algorithmic} \end{algorithm} \normalsize As seen from the algorithm, a violation is flagged if: (i) an actor processes passport data without the confirmation of the data subject; or (ii) an accepted actor executes an operation already rejected by the data subject. \begin{figure}[t!] \center \includegraphics[width=0.45\textwidth]{right-forgot.pdf} \caption{A protocol for erasure of vaccine data} \label{fig:rightForg} \end{figure} \subsection{Right to be forgotten} Citizens have the right to obtain from the passport administrator (the data controller) the erasure of their data without undue delay (Art.~17 of GDPR). In order to realize this GDPR principle through our proposed architecture, a protocol is presented in Fig.~\ref{fig:rightForg}. It provides a basis for the data accountability of administrators, to track whether the citizen's data have been deleted without undue delay. As represented in the figure, a citizen, through the user interface, is able to submit a data erasure request, which is received by the access control management service's agent. The identification and permission of the citizen are then verified by the agent, and the access smart contract is deployed so as to record a copy of the request in a Blockchain. Upon receipt of the deployment address of the contract by the passport administrator, the CID related to the vaccine passport data is collected and the citizen's personal data is removed within IPFS. After erasing the data, a confirmation is sent to the citizen. Moreover, a copy of such confirmation, denoting by whom and when the data have been removed, is stored in the Blockchain as evidence for future verification. In order to detect any violation of this right, the \textit{verification} smart contract is extended to include an \texttt{erase\_verify()} function.
By retrieving the blocks containing erasure requests/ confirmation and created by the access contract, the function flags the violator if: \begin{itemize} \item the erasure confirmation has not been recorded by the administrator in the Blockchain; or \item the time difference between erasure request and erasure confirmation logged by the administrator is greater than a short deadline already determined through the purposes of data processing in the agreement phase. \end{itemize} Both cases are investigated by the arbiter with regards to a claim received from the citizen who is the owner of the passport. For instance, after the submission of an erasure request, if the citizen observes that their data is still available in IPFS while the deadline had been passed, a claim can be made by the citizen and submitted to the arbiter. The claim should involve the erasure request's time and anonymised CID. \section{Experimental Results} \label{expr} Our experiments cover the evaluations related to the creation of vaccine passports using IPFS and the implementation of our proposed smart contracts using Blockchain test networks. \subsection{IPFS CID Generation Time for Vaccination Passports} To demonstrate the scalability of our proposed solution, the CID generation time in IPFS was evaluated. We chose to measure the time taken to generate 10, 20... up to 100 CID values for simulated vaccination passports which may be added to the IPFS network. Our vaccination passport is encoded as a JSON object, and the contents are derived from the fields used by NHS Scotland in real-world vaccination scenarios (described in Section \ref{sec:arch}). The JSON string for an example passport is shown in Appendix \ref{appendix:passportjson}. A script was used to generate the passport data. Each passport created will generate a unique Community Health Index (CHI) number: a 10-digit value used to identify patients in Scotland. The use of a unique CHI number for each passport ensures the IPFS CID generated will also be unique (recall that the IPFS CID is derived from the content of a resource using cryptographic hashing). Each passport object generated is 452 bytes in size. \begin{figure}[h!] \begin{center} \includegraphics[width=0.75\columnwidth]{figures/ipfs_cid_result.pdf} \label{dag01} \caption{IPFS CID Generation Time}\label{fig:ipfs_generation_time} \end{center} \end{figure} A private instance of the IPFS network \cite{IPFS2021b} was created as a Docker image for the purposes of this evaluation. A private IPFS network (one which is isolated from the public network) was used to ensure no experimental data was accidentally added to the public P2P network. The Linux \textbf{time} \cite{LinuxTime} command was used to monitor the time taken for the IPFS command to generate CIDs for 10, 20, ... 100 CIDs. An example of the $time$ command used in conjunction with IPFS for evaluating time taken for generating ten unique vaccination passport CIDs is as follows\footnote{We attribute this approach to redirecting the time output to \cite{stackoverflow}}: \begin{lstlisting} { time ipfs add -r 10 ; } 2> 10_result.txt \end{lstlisting} In the above code listing, we monitor the time taken (user+sys time) for the \textbf{ipfs add -r 10} command to execute. The \textbf{ipfs add -r} command is used to generate CIDs recursively for all content (i.e. 10 simulated vaccination passport JSON objects in this example) within a folder named \textbf{10}. The same measuring approach is taken for 20 unique passports, 30 and so on. 
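The same measurement can also be scripted end to end. The sketch below loops over the batch folders and times each \texttt{ipfs add -r} invocation; the folder names and the private-network setup are assumed to be in place, and it records wall-clock time, whereas the figures reported below use the user+sys time from the \textbf{time} command.

\begin{lstlisting}
# Sketch: time `ipfs add -r <folder>` for folders holding 10, 20, ..., 100
# simulated passports. Assumes the folders exist and the private IPFS
# network described above is already configured.
import subprocess
import time

def time_ipfs_add(folder: str) -> float:
    start = time.perf_counter()
    subprocess.run(["ipfs", "add", "-r", folder],
                   check=True, capture_output=True)
    return time.perf_counter() - start

for n in range(10, 101, 10):
    elapsed = time_ipfs_add(str(n))   # folder "10" holds 10 passports, etc.
    print(f"{n} passports: {elapsed:.2f} s")
\end{lstlisting}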
Figure \ref{fig:ipfs_generation_time} shows the generation time for 10, 20, ... 100 CIDs derived from simulated vaccination passport data. For 10 CIDs, the time varied between 0.37s and 0.59s. On 6 June 2021 there were 387,286 total vaccinations given to UK citizens, based on figures released by the UK government \cite{UK2021}. If we assume a vaccination passport entry was to be created for all 387,286 vaccination events, and assume it takes a maximum of 0.59s to generate 100 CIDs, our results show that it would take around \textbf{38 minutes} to generate a CID for all passports ($387{,}286/100 \times 0.59\,\mbox{s} \approx 2{,}285\,\mbox{s}$). This demonstrates that CID generation can scale to a large population size. \subsection{Investigation of proposed smart contracts} A prototype has been developed using both the Ganache~\cite{ganache:2021} and Ropsten~\cite{Ropsten:2021} test networks. We implemented our smart contracts on Ethereum using the Solidity language. Ganache is a local test network that provides multiple default gas and ether values, which can be applied as a currency to change Blockchain states when running function calls. Ropsten is a public test network involving a set of miners and gives detailed information relevant to miners. However, it has a gas limit of 4712388 for executing a smart contract. Our proposed smart contracts have been written to minimise the gas usage of each function activation. They were compiled and successfully tested using Remix, a browser-based development environment for Solidity. The contracts \textit{Policy}, \textit{Log}, \textit{Access} and \textit{Verification} were deployed in the aforementioned networks. The amount of gas used for contract deployment was 792065 for \textit{Policy}, 157339 for \textit{Log}, 796253 for \textit{Access}, and 1223998 for \textit{Verification}. These figures represent the computational cost of deploying each contract. However, changing the number of actors and their access requests has an impact on the transaction costs and mining time. \begin{figure}[t!] \center \includegraphics[width=0.7\columnwidth]{chart-gas.pdf} \caption{The relationship between number of actors and cost} \label{fig:expgas} \end{figure} \subsubsection{Number of actors and transaction costs} This experiment varies the number of actors from one to ten and evaluates the cost required to activate transactions. The assumption is that the gas price unit is 20 \textit{gwei} and the number of operations (i.e., read, write, delete) carried out by each actor on vaccine data is three. Our proposed smart contracts have been deployed in the Ganache test network. We calculated the average costs in Ether (ETH) after executing the functions five times with different parameters (values). Figure~\ref{fig:expgas} illustrates the results of this experiment. As seen, the lowest costs in Ether are associated with the transactions involving one actor and the highest values belong to those involving ten actors.\footnote{The actors for the \textit{Log} contract are citizens, whose anonymised CIDs are recorded in the Blockchain.} Furthermore, when the number of actors increases, the verification cost rises more sharply than that of the other contracts. In fact, the \textit{Verification} contract has a high complexity, since it must call both the \textit{Policy} and \textit{Access} smart contracts in order to check the GDPR compliance of actors and report violators. \begin{figure}[t!]
\center \includegraphics[width=0.7\columnwidth]{chart-time.pdf} \caption{The relationship between number of actors and mining time} \label{fig:exptime} \end{figure} \subsubsection{Number of actors and mining time} This investigation examines the impact of changing the number of actors on the time taken for the mining process under different gas prices. The number of actors is varied from one to five, each of which executes two operations on citizens' data. We consider several gas prices (i.e., 1, 6 and 12 \textit{gwei}). The Ropsten test network was used for this experiment, as it gives a measurement of the time taken from execution to mining of a block. The \textit{verification} contract was deployed in Ropsten and its \textit{verify} function was activated five times to obtain an average mining time. Figure~\ref{fig:exptime} shows the results of this evaluation. Given a fixed gas price, the trend of the chart fluctuates. As a result, the mining time depends largely on the interest of miners in validating/mining the transactions created from the activation of the \textit{verify} function, and the number of actors does not have an impact on mining time. However, when the gas price increases, the mining time decreases significantly. This shows that higher gas prices motivate miners to accelerate the mining process and block creation. \section{Conclusion} \label{conclu} The design of a Blockchain-based platform for the creation of online COVID-19 vaccine certificates is proposed. The platform uses IPFS and smart contracts to protect the privacy of citizens' information and supports data accountability for third-party access to and processing of this information. The purposes of data processing (carried out by actors) are automatically recorded in a Blockchain, and the platform enables citizens to give a vote (positive/negative consent) for each purpose via a smart contract. The proposed approach meets GDPR requirements, and only non-sensitive data is stored within the Blockchain for auditing purposes. Compared to other Blockchain-based platforms for cloud and IoT ecosystems for keeping and verifying patient data~\cite{BaratiRana:2020,BRPT:2020}, our platform provides a technical solution for checking data erasure requests, which is a significant user right in GDPR (referred to as the ``right to be forgotten"). A vaccine passport template has been implemented as a prototype, and our evaluations show that it takes less than one second to generate 100 passport CIDs in IPFS. The created smart contracts have been tested in both Ganache (on a local machine) and Ropsten (a global Blockchain network) environments, and the results show that transaction costs increase noticeably when the number of actors increases (as expected). The results of these experiments can be used to support capacity planning of a vaccine certificate network. Given a fixed gas price used for the execution of smart contract opcodes, the investigation demonstrates that miners can take an arbitrary time for mining blocks. Future work will focus on the implementation of both the access control and encryption management layers of the designed architecture. The development of the proposed platform in a cloud environment and the management of CIDs generated by IPFS remain other aspects for future investigation.
\\ \noindent {\bf Acknowledgment:} This work has been carried out in the GLASS (SinGLe Sign-on eGovernAnce paradigm based on a distributed file exchange network for Security, transparency, cost effectiveness and truSt) project \cite{glass2021}. \bibliographystyle{IEEEtran} \subsection{Consent models} Liang et al \cite{liang2017integrating} used Hyperledger Fabric to share consent information, as illustrated in Figure \ref{jain}. With this, a user could share healthcare information with insurance companies in order to get an insurance quote. On a data sharing event, an event record is created as a tuple of \{recordhash, owner, receiver, time, location, expiry-date, signature\} and submitted to the blockchain network in order to convert health records into transactions. Every action on a health record is then recorded and is thus accountable. The project uses the Hyperledger Fabric membership service component and the channel scheme: \begin{itemize} \item Membership service provider (CA). The CA focuses on membership enrollment. Each participating node is issued with enrollment certificates and transaction certificates for the network and creates the access control list during channel establishment. \item Chaincode. This is the code responsible for the deploy, invoke and query transactions, and it isolates each of the participating parties in the network. \end{itemize} The License accoUntability and CompliancE (LUCE) data sharing platform is built on the Ethereum platform \cite{havelange2019luce}. It allows citizens to rectify and erase data in line with the General Data Protection Regulation's (GDPR) rights. LUCE tracks data in terms of licensing terms and has three core roles: Data Providers, Data Requesters, and Supervisory Authorities (Figure \ref{luce1}). Figure \ref{luce2} provides an outline of the architecture, which covers the following states: \begin{itemize} \item Sharing document: publish. \item Accessing dataset: query, request, accessing terms, accepting licensing terms, and download token. \item Monitoring compliance: access token, access data, reports compliance, access token, replication and checking compliance. \item GDPR compliance: rights to access; rights to erase and right to rectification; and supervisory authority. \end{itemize} \begin{figure}[h!] \includegraphics[width=0.95\linewidth]{figures/luce1.png} \caption{LUCE roles \cite{havelange2019luce}} \label{luce1} \end{figure} \begin{figure}[h!] \includegraphics[width=0.95\linewidth]{figures/luce2.png} \caption{LUCE architecture \cite{havelange2019luce}} \label{luce2} \end{figure} Jaiman et al \cite{jaiman2020consent} created a blockchain-based data-sharing consent model for health data, in which smart contracts represent a citizen's consent over their health data. It uses two ontologies defined within the Ethereum blockchain (Figure \ref{jain3}): \begin{itemize} \item Data Use Ontology (DUO). This defines citizen consent. \item Automatable Discovery and Access Matrix (ADA-M). This defines the formatting of queries from data requesters. \end{itemize} \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/jai.png} \caption{Hyperledger to share consent information \cite{liang2017integrating}} \label{jain} \end{center} \end{figure} \begin{figure}[h!]
\begin{center} \includegraphics[width=0.95\linewidth]{figures/jaima3.gif} \label{jain3} \caption{Ethereum data-sharing consent model \cite{jaiman2020consent}} \end{center} \end{figure} \subsubsection{Right to be forgotten}\label{sect:5.1} Politou et al \cite{politou2020delegated} define the Right to be Forgotten (RtbF) as a requirement of GDPR, and thus for data to be erased on request. Unfortunately, enforcing this across the entire IPFS network is not actually feasible. They implemented an anonymous protocol for delegated content erasure requests within IPFS, in which erasure is only allowed by the original content provider or their delegates. Figure \ref{erase} provides an overview of the system. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/erase.png} \label{erase} \caption{Erasure in IPFS \cite{politou2020delegated}} \end{center} \end{figure} \subsection{InterPlanetary File System} The IPFS (InterPlanetary File System) implements a distributed infrastructure using P2P methods, where there is no centralised server. As with Torrent networks, it is defined as being censorship-resistant \cite{henningsen2020mapping}. Benet \cite{benet2014ipfs} outlines that IPFS can be likened to the Web where we use content-addressed hyperlinks, but where a single BitTorrent swarm exchanges objects within one Git repository. IPFS breaks files up into blocks or chunks and uses a Merkle DAG (Directed Acyclic Graph) to define the version control of files, together with a distributed hash table. Within a traditional blockchain infrastructure, we store transactions sequentially, and it can take some time to create a consensus through the building of blocks. With a DAG, each of the transactions becomes a block, which thus speeds up the consensus mechanism. Sergio Demian Lerner \cite{lerner2015dagcoin} outlined that in a DAG there are no fixed blocks, and that each transaction brings with it its own proof of work. Within this, he defined the usage of a fast cache for the most recent transactions, where older transactions cannot be used as a reference. \subsubsection{Architecture} Chen et al \cite{chen2017improved}, as shown in Figure \ref{chen}, define four core layers for storage (Layer 4), routing (Layer 3), virtual chain (Layer 2), and blockchain (Layer 1). Within the blockchain layer, it is possible to build a new blockchain or use Bitcoin's blockchain. A significant and prominent distributed database technology that builds on blockchain technology is Blockstack \cite{ali2017blockstack}. Blockstack operates by default using the Gaia distributed database, which is able to store its data in a decentralised manner in the users' web browsers instead of on a centralised web server, thus enhancing privacy. Blockstack recently released Stacks 2.0, which is built on top of the Bitcoin blockchain in order to utilise smart contracts and decentralised applications \cite{ali2020stacks}. In the virtual chain layer, the transactions are processed and verified, and then sent to the blockchain layer to be stored. Each transaction must have been signed by the private key of a sender, and these signatures are verified with the corresponding public key. Typically, transactions either bind a node's IP address to its associated account (such as defined by its public key) or declare the files that the node is associated with. Files can either be \textbf{long-term immutable} or \textbf{occasionally mutable}, and are broken into blocks to be bartered in the BitSwap protocol.
Each of the blocks is then identified with a \textbf{content identifier (CID)}. Within the \textbf{Bitswap protocol}, nodes distribute \textit{want-lists} to their peers; a want-list contains the list of CIDs for blocks that the node wants to receive. Each node remembers which blocks its peers want. Whenever a node receives a block, it checks its list to see if one of its connected peers wanted the received block. The BitSwap protocol thus involves a node holding two lists: the blocks it wants and the blocks it has. Nodes thus barter between themselves. Within a BitTorrent exchange, the data is exchanged in a single torrent. The routing layer extracts the information from Layer 2 and maps the routing address of an account to the files or blocks associated with that account. The storage layer (Layer 4) is where the data itself is stored (mutable storage and immutable storage). In \cite{chen2017improved}, the authors make improvements to IPFS by adding a zig-zag file storage structure in order to provide a triple replication scheme (for frequently used data) and an erasure-code storage scheme (for infrequently used data). The authors state that the method can differentiate between hot data and cold data. Hot data is stored near the computation, where there is fast access to the data, whereas cold data can be stored within cloud storage. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/chen01.gif} \caption{IPFS Architecture \cite{chen2017improved}} \label{chen} \end{center} \end{figure} \subsubsection{IoT integration} Muralidharan et al \cite{muralidharan2019interplanetary} implemented an IoT network using IPFS where nodes and data are addressed using unique cryptographic hashes. Overall routing is implemented using Distributed Hash Tables (DHTs), which can be used to find and publish data for peers. The proposed architecture is shown in Figure \ref{mural1}. These DHTs can store data within the network, and can thus reduce the latency in accessing data. A Merkle DAG along with Git versioning keeps the infrastructure up-to-date. For security, IPFS uses secure file sharing and encrypted communication. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/mural1.gif} \caption{Proposed IoT-IPFS architecture \cite{muralidharan2019interplanetary}} \label{mural1} \end{center} \end{figure} Nizamuddin et al \cite{nizamuddin2019decentralized} implemented an IPFS infrastructure using Ethereum. They have documented their Solidity smart contracts on GitHub \cite{gitNizamuddin} and tested them under Remix, an Ethereum IDE (Integrated Development Environment) which allows for the development of smart contracts \cite{remix2021}. Figure \ref{niz02} outlines the implementation of the smart contract, which defines Developers (D) and Approvers (A). These are identified by Ethereum addresses, and the creator of a document creates one smart contract for each document. Developers are then responsible for uploading the document (off-chain) to the IPFS file system. The system requires a two-thirds majority of new developers/approvers to be approved by existing approvers (Figure \ref{naz03}).
\begin{figure} \begin{center} \includegraphics[width=0.95\linewidth]{figures/naz03.jpg} \caption{Document approval sequence \cite{nizamuddin2019decentralized}} \label{naz03} \end{center} \end{figure} \subsubsection{Performance} Henningsen et al \cite{henningsen2020mapping} measured the performance of IPFS, which uses a Kademlia-based distributed hash table (DHT). They observed an average of 44,474 nodes, of which 52.19\% resided behind a NAT. They found that the infrastructure was robust and resistant against Sybil attacks. Unfortunately, they identified weaknesses related to performance and the privacy of queries. \begin{figure} \begin{center} \includegraphics[width=0.95\linewidth]{figures/ipfs02.jpg} \caption{Ethereum smart contract-based system for controlled document sharing \cite{nizamuddin2019decentralized}} \label{niz02} \end{center} \end{figure} Naz et al \cite{naz2019secure} implement a data sharing model (Figure \ref{naz01}) with a number of entities: \begin{itemize} \item Owner. This relates to the entity who is sharing the data, such as a government entity. \item Customer. This relates to an entity which can download files from an IPFS server using reconstructed hashes. \item Workers. These help customers to decrypt content, authenticate new customers through signatures, and query smart contracts for customer data requests. \item Arbitrator. This entity resolves disputes between buyers and sellers for requested content. \end{itemize} With Naz's model \cite{naz2019secure}, an owner creates metadata for a file they wish to share, such as the filename, file type, file size, and a description. This information, and a complete copy of the file data, is then added to the IPFS. An example is (\cite{naz2019secure}): \begin{lstlisting}[language=javascript] //upload the plain file meta ipfs.files.add(buf, function (err, meta_result) { if(err) { console.log(err); return res.sendStatus(500); } console.log(meta_result); res.json({ "meta_hash": meta_result[0].hash, "file_hash": fileMeta.hash, "address": recipient_addr, "email": recipient_email }); }); \end{lstlisting} \begin{figure} \begin{center} \includegraphics[width=0.95\linewidth]{figures/naz01.png} \caption{Data sharing on IPFS \cite{naz2019secure}} \label{naz01} \end{center} \end{figure} Once the file is loaded onto the IPFS, the owner receives the hashes of the data back and then contacts trusted worker nodes. These worker nodes have their key-pairs stored within smart contracts, and are responsible for decrypting content. The file hashes are split into $n$ shares using the Shamir Secret Sharing (SSS) method and encrypted using $n$ random keys. These shares are then stored - along with security information - on a blockchain. It is important to encrypt these hashes, as an adversary could rebuild the file based on the hashes. Only valid customers who have paid for access can then rebuild the files. A secret $S$ is split into shares \{$S_1$, ... ,$S_n$\}, where there are $n$ shares and a threshold of $k$; overall, $k$ shares are required to rebuild the secret. These shares are stored and encrypted in a smart contract and can only be decrypted and rebuilt by verified workers (who are found by their public key by the owner). Ali et al \cite{ali2017iot} used a side-chain method to preserve network privacy, where a validator node runs the side chain (Figure \ref{side1}). Within the network, each IoT device has public and private keys, which it uses to encrypt data for the validator. The validator then adds data onto a side chain.
A smart contract then records that the only permitted communication is between the device and the validator. It also stores the public key and the IPFS hash of the data stored for a device, and the public key and access rights of requesters from the consortium. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/side.png} \caption{Side-chain method \cite{ali2017iot}} \label{side1} \end{center} \end{figure} Kumar et al \cite{kumar2019implementation} outlined a way to implement IPFS networks in order to reduce the transaction size of blocks in the blockchain and to provide content-addressed access to transactions. With this, miners store transactions on the IPFS distributed storage system. They then get back the IPFS hash of each transaction and store it in a block on the blockchain. Their experimental setup can be seen in Figure~\ref{kumar}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/p246-kumar-fig-1-source-large.gif} \caption{Working model of blockchain and IPFS \cite{kumar2019implementation}} \label{kumar} \end{center} \end{figure} \subsubsection{Applications} Sun et al \cite{sun2020blockchain} used ciphertext policy attribute-based encryption (CP-ABE) and IPFS to securely store and share electronic medical records. CP-ABE controls the access to encrypted files, and IPFS then stores these in a distributed form. Figure \ref{fig:sun2020blockchain} provides an overview of the system, which is made up of: Blockchain, IPFS, the medical system, and the data user (DU). In this case, CP-ABE is used to generate an encryption key based on a policy which is made up of a number of attributes. A workflow is \cite{sun2020blockchain}: \begin{itemize} \item Initially, a public-private key pair is created for Bob (the patient) based on his attributes. Bob then goes to the hospital, and doctor Alice diagnoses a condition. \item Alice encrypts the diagnosis ($R$) with Bob's public key, producing the ciphertext ($CT$), and signs the diagnosis ($CT' = (CT, sig_R)$). She then uploads it to IPFS and generates an index for keywords. IPFS returns a hash address (HASHID) for the file ($h$). \item On receipt of the hash address, Alice encrypts $h$ with a random number, and hashes the medical record and its index with SHA256. The hash value ($h_R$) and the encrypted hash ($h'$) are then stored on the blockchain by broadcasting the transaction. $h_R$ is the hash value of the record, and $h'$ is the encrypted hash address. \item A transaction ID (TRANSID) is then returned from the blockchain. \end{itemize} To recall the record: \begin{itemize} \item Bob sends an access request with keywords to the hospital. If he has rights of access, a search token is returned ($ST_w = (ID, h, \gamma)$). \item Bob verifies the hash address $h$ contained in the search token $ST_w$, and downloads the encrypted medical record ($CT'$) from IPFS using the hash address $h$. \item Bob decrypts the ciphertext with his private key and obtains the file. \end{itemize} The advantage of this scheme is that Bob needs to be authorized to gain access to private data. The weaknesses of the system include its inability to revoke attributes for access and expired users \cite{sun2020blockchain}. \begin{figure}[h!]
\centering \includegraphics[width=0.9\linewidth]{figures/yao1-2982964-large.gif} \caption{Health care storage in IPFS \cite{sun2020blockchain}} \label{fig:sun2020blockchain} \end{figure} Taborda et al \cite{taborda2020decentralized} have created a Web platform to store information on hotels in an image repository. This uses IPFS and blockchain in order to improve security and access control. Hao et al \cite{hao2018safe} define a data storage system which uses IPFS to store video, images, and real-time monitoring data reported from sensors in agricultural product tracking (Figure \ref{arg}). Nizamuddin \cite{nizamuddin2018ipfs} defines an IPFS/smart contract solution to prove the originality and authenticity of published work and digital content. They use Ethereum smart contracts in order to govern, manage, and provide traceability and visibility of the history of digital content from its original version to the current version. In this work, they create a use case of an online book publication (Figure \ref{book}). Figure \ref{book2} outlines the message sequence diagram for validation. Vishwavidyapeetham et al \cite{vishwavidyapeetham2018blockchain} apply IPFS to note keeping within research projects. They use a traditional encryption method to create secure documents for record keeping and Ethereum smart contracts to track the encrypted files (Figure \ref{pi}). The authors of \cite{patsakis2019hydras} define Resource Identifier Generation Algorithms, which extend Domain Generation Algorithms (DGAs) - often used by cybercriminals for bot management and communication. Their system extends beyond DNS to use IPFS. Overall, it hosts malicious content and explores ways that a botmaster deploys and controls bots. Figure \ref{bot} provides an overview of the system. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/arg.png} \caption{Agricultural products tracking \cite{taborda2020decentralized}} \label{arg} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/book.png} \caption{Book tracking system \cite{nizamuddin2018ipfs}} \label{book} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/book2.png} \caption{Smart contract tracking system \cite{nizamuddin2018ipfs}} \label{book2} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/pi.png} \caption{Record keeping system on IPFS \cite{vishwavidyapeetham2018blockchain}} \label{pi} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/bot.png} \caption{Hydras and IPFS: a decentralised playground for malware \cite{patsakis2019hydras}} \label{bot} \end{center} \end{figure} Karapapas et al \cite{karapapas2020ransomware} note that decentralized systems provide opportunities for cybercriminals to perform illegal activities. A scenario for this is defined in Figure \ref{ransom}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/ransom.png} \caption{Ransomware on IPFS \cite{karapapas2020ransomware}} \label{ransom} \end{center} \end{figure} Overall, Nizamuddin et al \cite{nizamuddin2019decentralized} define a number of drivers for a business model using cryptocurrencies. With its integrated version control system, the infrastructure will enable tamper-proof version control.
\section{GLASS Architecture} GLASS thus aims to create new models for digital governance which support the integration of targeted, engaging and effective policies for the citizen. This focuses on a range of key technologies, including Distributed Ledgers, Big Data Analytics, Machine Learning, Artificial Intelligence, AIBots and NetApps technologies, and aims for a digital-by-default design that supports interoperable and cross-border integration. The overall architecture is defined in Figure \ref{glass02}. \begin{figure*} \begin{center} \includegraphics[width=0.95\linewidth]{figures/glass02.png} \caption{GLASS Architecture \cite{glass2021}} \label{glass02} \end{center} \end{figure*}
\section{Introduction} Micro-Expressions (MiEs) are transient facial expressions that typically last for $0.04$ to $0.2$ seconds~\cite{matsumoto2008culture,ekman1997face}. Unlike conventional facial expressions (or Macro-Expressions, MaEs) that last for longer than $0.2$ seconds, MiEs are involuntary. They are difficult to fake and thus more capable of revealing people's genuine emotions. MiE recognition underpins various valuable applications such as lie detection, criminal justice, and psychological consultation. The difficulty in collecting and labeling MiEs poses huge challenges in building MiE recognition datasets~\cite{ben2021video}. First, collecting \emph{involuntary} MiEs is strenuous, even in a controlled environment~\cite{ben2021video}. Unlike MaEs, which participants can easily ``perform'', MiEs are too vague and subtle to precisely interpret. Second, correctly labeling MiEs is difficult. It usually requires domain knowledge from psychology experts, and oftentimes even experts cannot guarantee a high accuracy of annotations. As a consequence, the scale of existing MiE recognition datasets is severely limited: they typically consist of a few hundred samples from dozens of identities (refer Fig.~\ref{fig:compare_datasets} for an illustrative summary). The shortage of training data compromises the development of MiE recognition algorithms. \begin{figure}[t] \begin{minipage}[c]{0.4\textwidth} \includegraphics[width=\textwidth]{./figures/scatter.pdf} \end{minipage} \hfill \begin{minipage}[c]{0.56\textwidth} \caption{We present a large-scale synthetic MiE training dataset, {MiE-X}, created by the proposed protocol. It is two orders of magnitude larger than existing real MiE recognition datasets in terms of the number of MiE samples and the number of identities. Compared with existing real-world MiE datasets, MiE-X allows the MiE classifier~\cite{Liu2018A} to achieve consistently higher accuracy evaluated on the real-world MiE dataset CompMiE~\cite{see2019megc}.} \label{fig:compare_datasets} \end{minipage} \end{figure} In this work, we aim to address the data shortage issue by proposing a useful protocol for \emph{synthesizing} MiEs. This protocol has three steps. First, we conveniently obtain a large number of faces from existing face datasets. Second, we compute sensible Action Unit (AU) coefficients. Third, we employ a conditional generative model to ``add'' MiEs onto these faces. Conditional facial expression generation is a well-studied problem, and we adopt an off-the-shelf algorithm, GANimation~\cite{pumarola2018ganimation}, which employs AU coefficients as the generative conditions. At the core of this synthesis protocol, our contribution lies in finding three types of AUs helpful in the second step. The \textbf{first} type, intuitively, are AUs extracted from real-world, annotated MiE datasets. Specifically, we extract AU coefficients of annotated MiE samples and use these AU coefficients as conditions to transfer the corresponding MiEs to faces of other identities. The \textbf{second} type are AUs extracted from {early-stage MaEs}. The formation of macro-expressions consists of a process of facial muscle movements, and we find that early stages of these movements usually share similar AU values with those of MiEs. The \textbf{third} type are AU combinations given by expert knowledge. For example, human observations suggest that AU12 (\texttt{{Lip Corner Puller}}) is often activated when the subject is ``happy'', so we set AU12 to be slightly greater than $0$ when synthesizing a ``happy'' MiE.
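As a concrete illustration of the third type, the sketch below constructs an expert-knowledge AU pair for a ``happy'' MiE. This is a minimal Python sketch under stated assumptions: the vector dimensionality, the position of AU12 in the vector, and the sampled intensity range are illustrative and do not correspond to our exact implementation.
\begin{lstlisting}[language=Python]
# Minimal sketch (illustrative): an expert-knowledge AU pair for a "happy" MiE.
import numpy as np

NUM_AUS = 17      # length of the AU coefficient vector z (assumed here)
AU12_INDEX = 8    # hypothetical position of AU12 (Lip Corner Puller)

def expert_happy_aus(rng):
    z_onset = np.zeros(NUM_AUS)                    # onset: neutral face
    z_apex = np.zeros(NUM_AUS)
    z_apex[AU12_INDEX] = rng.uniform(0.05, 0.15)   # AU12 slightly greater than 0
    return z_onset, z_apex

rng = np.random.default_rng(0)
z_onset, z_apex = expert_happy_aus(rng)            # conditions for the generator
\end{lstlisting}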
In this regard, this work is an early attempt to explore the underlying \emph{computational} mechanism of micro-expressions, and it would be of value to the community in facilitating the understanding of micro-expressions and the design of learning algorithms. Using the proposed three types of AUs, our protocol allows us to create a large-scale synthetic dataset, \textbf{MiE-X}, to improve the accuracy of data-driven MiE recognition algorithms. As shown in Fig.~\ref{fig:compare_datasets}, MiE-X is two orders of magnitude larger than existing real-world datasets. Notably, despite being synthetic, MiE-X can be effectively used to train MiE recognition models. When the target application has the same label space as MiE-X, we can directly use MiE-X to train a recognition model, achieving results competitive with those trained on real-world data. Otherwise, MiE-X can be used for pre-training, and its pre-training quality outperforms ImageNet \cite{deng2009imagenet}. Our experiments show that MiE-X consistently improves the accuracy of frame-based MiE recognition methods and a state-of-the-art video-based method. \begin{itemize} \item We introduce a large-scale MiE training dataset created by a useful protocol for training MiE recognition models. The database will be released. \item We identify three types of AUs that allow for synthesizing trainable MiEs in the protocol. They are: AUs extracted from real MiEs, mined from early stages of MaEs, and provided by human experts of facial expressions. \item Our experiments reveal interesting properties of MiEs: they generalize across identities, are close to early-stage MaEs, and can be manually defined. \end{itemize} \section{Related Work}\label{sec:relatedwork} \textbf{Facial micro-expression recognition.}~Many MiE recognition systems use handcrafted features, such as 3DHOG \cite{polikovsky2009facial}, FDM~\cite{xu2017microexpression} and LBP-TOP \cite{zhao2007dynamic} descriptors. They describe facial texture patterns. Variants and extensions of LBP-TOP have also been proposed \cite{wang2014lbp,huang2015facial,huang2016spontaneous}. Afterwards, deep learning based solutions were proposed \cite{patel2016selective,kim2016micro,hao2017deep,peng2017dual,khor2018enriched,liong2018off}. Patel \emph{et al.}~\cite{patel2016selective} use the VGG model pretrained on ImageNet \cite{deng2009imagenet} and perform fine-tuning for MiE recognition. In ELRCN \cite{khor2018enriched}, the network input is enriched by the concatenation of the RGB image, optical flow and derivatives of optical flow~\cite{shreve2011macro}. To reduce the computation cost and prevent overfitting, it is common to use representative frames as model input. For example, Peng \emph{et al.}~\cite{peng2018macro} and Li \emph{et al.}~\cite{li2018can} select the onset frame, apex frame and offset frame in each micro-expression video. Branches~\cite{Liu2018A} uses the onset and apex frames as model input. Following this practice, we focus on synthesising representative frames for MiEs. \textbf{Deep learning from synthetic data.} Deep learning using synthetic data has drawn recent attention. Many works use graphics engines to generate virtual data and corresponding ground truths. Richter \emph{et al.} \cite{richter2016playing} use a 3D game engine to simulate training images with pixel-level label maps for semantic segmentation. In~\cite{sakaridis2018semantic}, prior human knowledge is used to constrain the distribution of synthetic target data.
Tremblay \emph{et al.}~\cite{tremblay2018training} randomize the parameters of the simulator to force the model to handle large variations in object detection. Learning-based approaches~\cite{kar2019meta,ruiz2018learning,yao2020simulating} try to find the best parameter ranges in simulators so that the domain gap between generated content and the real-world data is minimized. Another line of work uses generative adversarial networks (GANs) to generate images for learning. For example, the label smoothing regularization technique is adopted for generated images \cite{zheng2017unlabeled}. Camstyle~\cite{zhong2018camera} trains camera-to-camera person appearance translation to generate new training data. CYCADA~\cite{hoffman2017cycada} reconstructs images and introduces a semantic segmentation loss on these generated images to maintain consistent semantics. \textbf{Action Units (AUs) in facial analysis.} Action Units are defined according to the Facial Action Coding System (FACS)~\cite{eckman1978facial}, which categorizes the fundamental facial muscle movements by their appearance on the face. Correlations between Action Units and emotions are widely discussed in the literature~\cite{du2014compound,ekman1997face,polikovsky2013facial}. This work uses such correlations: we look for and validate effective AUs as generative conditions to synthesize realistic and trainable MiEs. \section{Preliminaries}\label{sec:preliminary} MiE recognition aims to classify the emotion category of a given face video clip. In practice, the video clips should first be processed by a \emph{spotting} algorithm to determine the onset (starting time), apex (time of the highest expression intensity) and offset (ending time) frames. In this work, we assume all data have been processed by spotting algorithms \cite{ben2021video,see2019megc} and focus on the recognition task. Emotion labels in existing datasets are usually different, ranging from 3 to 8 categories. In this work, we use a unified and balanced label space to synthesize MiE-X. Specifically, during synthesis, we choose the most basic categories (\texttt{positive}, \texttt{negative}, \texttt{surprise}, as defined in MEGC) and merge other emotion labels into these three categories. If the label space in the target dataset is different from MiE-X, we need to fine-tune the model further. In the following sections, when mentioning {action units (AUs)}, we by default refer to the AU coefficient vector $\mathbf{z} \in [0,1]^d$. Each dimension in vector $\mathbf{z}$ indicates the intensity of a specific action unit. There are usually $d=17$ dimensions \cite{pumarola2018ganimation,baltrusaitis2018openface}. \section{Synthesizing Micro-Expressions}\label{sec:Synthesis} \begin{figure}[t] \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=\textwidth]{./figures/Framwork_v4.pdf} \end{minipage} \hfill \begin{minipage}[c]{0.49\textwidth} \caption{ Overview of the proposed protocol for synthesizing our MiE recognition dataset. We generate MiE samples (a triplet containing the onset frame $\mathbf{x_o}$, the apex frame $\mathbf{x_a}$ and the emotion label $y$) with a pretrained GANimation~\cite{pumarola2018ganimation} model \emph{G}, faces in the wild, and AU vectors ($\mathbf{z}_o$, $\mathbf{z}_a$) introduced in Section~\ref{sec:AUs}.
} \label{figure:overview} \end{minipage} \end{figure} \subsection{The Proposed Protocol} Given a face image, an emotion label $y\in\{\texttt{positive}, \texttt{negative}, \texttt{surprise}\}$, and an onset-apex AU pair $(\mathbf{z_o}, \mathbf{z_a})$, our protocol uses GANimation \cite{pumarola2018ganimation} to generate an MiE sample consisting of two representative frames (refer Fig.~\ref{figure:overview}). First, we randomly select an ``in-the-wild'' face image $\mathbf{x}$ from a large pool of identities (we use the EmotionNet~\cite{fabian2016emotionet} dataset) as the template face upon which we add MiEs. Then, we find an onset AU $\mathbf{z_o}$, an apex AU $\mathbf{z_a}$, and the corresponding emotion label $y$. A triplet $(\mathbf{z_o}, \mathbf{z_a}, y)$ can be computed from three different sources, which are elaborated in Section~\ref{sec:AUs}. Finally, a conditional generative model $G$ is employed to transfer the onset and apex AUs to the template face $\mathbf{x}$, producing an onset frame $\mathbf{x_o} = G(\mathbf{x}, \mathbf{z_o})$ and an apex frame $\mathbf{x_a} = G(\mathbf{x}, \mathbf{z_a})$, whose emotion label is $y$ (same as the label of $\mathbf{x}$). Here, we adopt GANimation~\cite{pumarola2018ganimation} as $G$, which is identity-preserving and only changes facial muscle movements. Training details of GANimation are provided in the supp. materials. Please note that the protocol uses existing techniques and that we do not claim it as our main finding. Also note that we do not synthesize entire video sequences of MiEs, but only the onset (the beginning) and apex (most intensive) frames. The motivation is three-fold. First, a full MiE clip may contain up to 50 frames, so a dataset of full MiEs can be 25 times as large as a dataset of representative frames (2 frames per MiE). Second, recent literature on MiE recognition (\textit{e}.\textit{g}., \cite{peng2018macro,li2018can,Liu2018A}) indicates that using representative frames suffices to obtain very competitive accuracy. Last, synthesizing video sequences in a realistic way is much more challenging than synthesizing static frames, requiring smooth motions and consistency over time. We leave video-level MiE generation to future work. \subsection{Major Finding: Action Units That Constitute Trainable MiEs}\label{sec:AUs} In the protocol, our major contribution lies in finding three sources of AUs that are most helpful to define the onset and apex AUs, described below. \textbf{AUs extracted from real MiEs.} An intuitive source of MiE AUs is, of course, real-world MiE data. Assume we have a real-world MiE dataset with $M$ MiE videos, where each video is annotated with the onset and apex frames. For each video, we extract the onset and apex AUs and record the emotion label, forming a set of AUs $\mathcal{Z}^{\text{MiE}} = \{(\mathbf{z_o}^{(m)}, \mathbf{z_a}^{(m)})\}_{m=1}^{M}$ and labels $\mathcal{Y}^{\text{MiE}} = \{y^{(m)}\}_{m=1}^{M}$. Here, AU coefficients are extracted with the OpenFace toolkit~\cite{baltrusaitis2018openface}. When synthesizing MiEs with a certain emotion category based on $\mathbf{z}^{\text{MiE}}$, we randomly draw a pair of AUs from $\mathcal{Z}^{\text{MiE}}$ that has the desired emotion label. \emph{Discussion.} Despite being a valuable source of MiE AUs, existing real-world MiE data are severely limited in size, so $\mathcal{Z}^{\text{MiE}}$ is far from sufficient. If we had more MiE data, it would be interesting to further study whether our method can synthesize a better dataset.
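To make the sampling and generation steps explicit, the following is a minimal Python sketch of how one MiE sample could be produced from $\mathcal{Z}^{\text{MiE}}$. The function names (\texttt{ganimation}) and data structures are placeholders for illustration, not our actual implementation.
\begin{lstlisting}[language=Python]
# Minimal sketch (illustrative): synthesizing one MiE sample from real-MiE AUs.
import random

def synthesize_sample(face_pool, au_set, label_set, y, ganimation):
    # face_pool: list of template face images (e.g., drawn from EmotionNet)
    # au_set:    list of (z_onset, z_apex) pairs extracted from real MiE videos
    # label_set: emotion label for each AU pair in au_set
    # y:         desired emotion label (positive / negative / surprise)
    # ganimation: pretrained conditional generator G(x, z) -> frame
    x = random.choice(face_pool)                           # template face
    candidates = [i for i, lab in enumerate(label_set) if lab == y]
    z_onset, z_apex = au_set[random.choice(candidates)]    # AU pair with label y
    x_onset = ganimation(x, z_onset)                       # x_o = G(x, z_o)
    x_apex = ganimation(x, z_apex)                         # x_a = G(x, z_a)
    return x_onset, x_apex, y                              # one MiE-X sample
\end{lstlisting}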
At this point, to include more MiE samples in our synthetic training set, we find another two AU sources, described below. \textbf{AUs extracted from early stages of real MaEs.} Abundant MaE videos exist in the community, and they have a similar set of emotion labels to MiE datasets. These MaE videos usually start from a neutral expression, leak subtle muscle movements in early frames, and present obvious expressions later. In our preliminary experiments, we observe that AUs extracted from early frames of MaE videos have similar values to those of MiE clips. This suggests that MiEs and the \emph{early stages} of MaEs have similar intensities in muscle movements, rendering the latter a potential source to simulate MiEs. In leveraging MaE videos as an AU source, we regard the first frame of an MaE clip, which usually has a neutral expression, as our onset frame. The selection of the apex frame is more challenging. However, we empirically observe that existing MaE clips usually present MiE-like AU intensities in the first half of the video. Therefore, we use two hyperparameters to find the apex frame approximately. Suppose an MaE clip has $n$ frames. An apex frame is randomly drawn from the frame indices between $\lfloor\alpha \times n\rfloor$ and $\lfloor\beta \times n\rfloor$, where $\lfloor \cdot\rfloor$ rounds a number down to the nearest integer. The selections of $\alpha$ and $\beta$ are briefly discussed in Section~\ref{sec:exp:analysis}. \emph{Discussion.} Different MaE datasets may differ in the frame indices of the onset and apex frames, so in practice we need to do a rapid scan to roughly determine them. This process is usually quick and, importantly, reliable, because 1) a certain dataset usually follows a stable pattern in terms of the onset and apex positions, and 2) onset and apex states usually last for a while. As such, while this procedure requires a bit of manual work, it is still very valuable considering the gain it brings (large-scale MiE data). \begin{figure*}[t] \begin{center} \includegraphics[width=\textwidth]{./figures/compute_three_AUs.pdf} \caption{Examples of how to compute $\mathbf{z}^{\text{MiE}}$, $\mathbf{z}^{\text{MaE}}$ and $\mathbf{z}^{\text{exp}}$. \textbf{(a)} We compute $\mathbf{z}^{\text{MiE}}$ from representative frames (\textit{i}.\textit{e}., the onset frame and the apex frame) of real-world MiE videos. \textbf{(b)} Early frames in real-world macro-expression videos are used to obtain $\mathbf{z}^{\text{MaE}}$. The hyperparameters for choosing the frame indices are selected in Section~\ref{sec:exp:analysis}. \textbf{(c)} We specify an emotion type (\textit{e}.\textit{g}., sad) and then obtain the AU distribution from the Expert Mapping table~\cite{du2014compound}, which determines the activated AU entries. Then we assign the activated AU entries with intensity values (red bars) and the others with $0$. The hyperparameters constraining the intensity values are examined in Section~\ref{sec:exp:analysis}.} \label{figure: compute_three_AUs} \end{center} \end{figure*} \textbf{AUs defined by expert knowledge.} Studies reveal strong relationships between AUs and emotions~\cite{du2014compound,ekman1997face,polikovsky2013facial}. Some explicitly summarize the posterior probability of each AU entry being activated for each emotion label: $P(z_i>0|y)$, where $z_i$ indicates the $i$-th entry of the AU vector $\mathbf{z}$. The posterior probabilities, for simplicity, are usually modeled with a Bernoulli distribution~\cite{du2014compound}, \textit{i}.\textit{e}., $P(z_i>0|y) = p$ and $P(z_i=0|y) = 1 - p$.
We find the AU distribution summarized by experts to be another effective source of AUs for synthesizing trainable MiEs. We use the expert knowledge mainly to find the apex AUs, where we resort to a mapping table \cite{du2014compound} that describes the aforementioned posterior probabilities. Given an emotion label, when generating the apex AUs $\mathbf{z}_{a}^{\text{exp}}$, we first decide which entries in $\mathbf{z}_{a}^{\text{exp}}$ should be activated ($>0$) by drawing samples from the Bernoulli distribution. We then determine the intensities of the activated entries by randomly sampling from a uniform distribution with a fixed interval $[\mu, \nu]$. The selection of the hyperparameters $\mu, \nu$ is briefly discussed in Section~\ref{sec:exp:analysis}. On the other hand, for the onset AUs $\mathbf{z^{\text{exp}}_o}$, we set them to zero vectors, which means that no action unit is activated, thus representing a neutral face. Examples of how to compute the above three types of AUs are provided in Fig.~\ref{figure: compute_three_AUs}. \emph{Discussion.} We use three basic expression categories (\texttt{positive}, \texttt{negative}, \texttt{surprise}) when synthesizing MiE-X, because these three classes form the largest common intersection between the label sets from the three sources. If we could have a more fine-grained label space, it would be interesting to further explore how the label space affects the training quality of MiE-X. \begin{figure*}[t] \centering \begin{center} \includegraphics[width=\textwidth]{./figures/generated_examples.pdf} \caption{Examples of MiE apex frames from \textbf{(a)} synthetic (MiE-X) and \textbf{(b)} real-world (the SMIC dataset \cite{li2013spontaneous}) micro-expression data. In (a), we show three columns of synthesized MiE apex frames corresponding to the three types of Action Units (AUs), \textit{i}.\textit{e}., $\mathbf{z}^{\text{MiE}}$, $\mathbf{z}^{\text{MaE}}$, $\mathbf{z}^{\text{exp}}$ described in Section \ref{sec:AUs}. Both real-world data and synthetic data are shown under the class labels \texttt{positive}, \texttt{negative}, and \texttt{surprise}.} \label{figure: Different_data} \end{center} \end{figure*} \subsection{The MiE-X dataset} \label{method:MIEX} With the above three types of AUs and a large pool of in-the-wild faces, we are eventually able to synthesize a large-scale MiE recognition dataset, coined MiE-X. MiE-X contains 5,000 identities, each with 9 MiE samples\footnote{For each ID and each of the three classes \texttt{positive}, \texttt{negative}, and \texttt{surprise}, we generate three MiE samples corresponding to the three types of AUs. Each sample has an onset and an apex frame, totaling 9 MiE samples and 18 frames per ID.}, resulting in 45,000 samples in total. To our knowledge, MiE-X is the first large-scale MiE dataset and is more than two orders of magnitude larger than existing real-world MiE datasets. Visualization of the generated apex frames in MiE-X is provided in Fig.~\ref{figure: Different_data}; comparisons with existing MiE datasets are illustrated in Fig.~\ref{fig:compare_datasets}. The strength of MiE-X as training data comes from its diversity in identities and MiE patterns\footnote{We also acknowledge GANimation, which provides us with realistic facial images.}. For instance, it contains 5,000 human identities, encouraging models to learn identity-invariant expression features. At the same time, the three sources of AUs are complementary, provide a wide range of AU values, and sometimes have random AU perturbations.
MiE-X alleviates overfitting risks and allows algorithms to consistently improve their accuracy. \section{Experiment}\label{sec:exp} \subsection{Experimental setups}\label{sec:exp:setup} \textbf{Baseline classifiers.}\label{sec:exp:classifier} Two image-based MiE recognition methods are mainly evaluated in this paper: \textbf{Branches}~\cite{Liu2018A} and \textbf{ApexME}~\cite{li2018can}. Both are trained for 80 epochs. More details are provided in the supplementary materials. \textbf{Real-world datasets.} We report experimental results on commonly used real-world datasets: \textbf{CompMiE~\cite{see2019megc}}, \textbf{MMEW} and \textbf{SAMM}. CompMiE is proposed by the MiE recognition challenge MEGC2019~\cite{see2019megc}, which merges three existing real MiE datasets into one. The three component datasets are CASME II~\cite{yan2014casme}, SAMM~\cite{davison2016samm,davison2018objective}, and SMIC~\cite{li2013spontaneous}. CompMiE has the same label space (Section~\ref{sec:AUs}) as MiE-X and consists of 442 samples from 68 subjects in total. MMEW and SAMM have 234 and 72 samples, respectively, and their label spaces are different from that of MiE-X\footnote{Label space of MMEW: \texttt{happiness}, \texttt{surprise}, \texttt{anger}, \texttt{disgust}, \texttt{fear}, \texttt{sadness}; Label space of SAMM: \texttt{happiness}, \texttt{surprise}, \texttt{anger}, \texttt{disgust}, \texttt{fear}.}. \textbf{CK+}~\cite{lucey2010extended} is a commonly used real-world MaE dataset containing 327 videos. Its label space is also merged into the same one as CompMiE. When generating MiE-X (see Section~\ref{sec:Synthesis}), we extract $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ from CompMiE and CK+, respectively. \textbf{Evaluation protocols.} We use subject-wise $k$-fold cross-validation, commonly performed in the community \cite{ben2021video,li2018can,khor2018enriched}. Specifically, when real-world data are used in testing, we split them into $k$ subsets. Each time, we use $k-1$ subsets for training and the remaining subset for testing. The average accuracy over the $k$ tests is reported. For CompMiE, $k=3$; for MMEW and SAMM, $k=5$. To evaluate the effectiveness of MiE-X, we replace the real training sets (\textit{i}.\textit{e}., $k-1$ subsets) with MiE-X when MiE-X is used for direct deployment. Note that, for each fold, MiE-X samples whose AUs (\textit{i}.\textit{e}., $\mathbf{z}^{\text{MiE}}$) are computed from real MiE samples in the test subset will not be used in training. If MiE-X is used for pre-training, where a fine-tuning stage is required, the $k-1$ subsets will be used for fine-tuning. Other real-world datasets (\textit{e}.\textit{g}., MMEW, SMIC) are also used for pre-training to form comparisons with MiE-X\footnote{We discard those samples in real-world datasets that overlap with the test subset.}. Experiments are categorized as follows. \begin{itemize} \item {Pre-training with MiE-X (or other competing datasets) and fine-tuning on the target training set.} We adopt this setting especially when the source domain has a different label space from the target domain. \item {Training (or fine-tuning) with MiE-X (or other competing datasets) followed by direct model deployment.} If the target domain and the training dataset share the same label space, models obtained from the training set can be directly used for inference on the target test set. \end{itemize} \textbf{Metrics.} We mainly use the unweighted F1-score (UF1) and unweighted average recall (UAR) \cite{see2019megc}.
UF1 and UAR indicate the average F1-score and recall, respectively, over all classes. We also report the conventional recognition rate on the MMEW~\cite{ben2021video} and SAMM~\cite{davison2018objective} datasets to compare with the state of the art. By default, we run each experiment ($k$-fold cross-validation) 3 times and report the mean and standard variance of the results in the last epoch. Moreover, we provide the best accuracy among all epochs for reference (Table~\ref{table:SOTA}). \subsection{Effectiveness of the Synthetic Database}\label{sec:exp:main_results} \begingroup \setlength{\tabcolsep}{3.9pt} \begin{table}[t] \centering \footnotesize \caption{Effectiveness of MiE-X in model (pre-)training. Models are pre-trained using MiE-X or other real-world datasets and then fine-tuned on real-world training data \textit{i}.\textit{e}., CompMiE, or the combination of CompMiE and CK+~\cite{lucey2010extended}. UF1 (\%) and UAR (\%) are reported on the CompMiE dataset after three-fold cross-validation. ApexME~\cite{li2018can} and Branches~\cite{Liu2018A} are used as baselines. We observe consistent accuracy improvement when models are pre-trained with MiE-X. In addition, when directly deploying the MiE-X pretrained model, the accuracy is also competitive.} \begin{tabular}{c|cc|cc|cc} \toprule \multicolumn{1}{c|}{{Pre-training}} & \multicolumn{2}{c|}{{Fine-tuning}} & \multicolumn{2}{c|}{ApexME~\cite{li2018can}} & \multicolumn{2}{c}{Branches~\cite{Liu2018A}}\\ \cline{2-7} MiE data & CompMiE & CK+ & UF1 & UAR & UF1 &UAR \\ \midrule - & \cmark & & 41.8 $\pm$ 0.7 & 41.9 $\pm$ 0.7 & 43.6 $\pm$ 0.5 & 44.6 $\pm$ 0.6 \\ - & \cmark & \cmark & 45.0 $\pm$ 0.5 & 45.5 $\pm$ 1.0 & 45.2 $\pm$ 0.5 & 47.0 $\pm$ 0.6 \\ SMIC~\cite{li2013spontaneous} & \cmark & & 45.0 $\pm$ 1.7 & 44.8 $\pm$ 1.9 & 42.8 $\pm$ 0.8 & 41.4 $\pm$ 0.9 \\ CASME~\cite{yan2014casme} & \cmark & & 44.0 $\pm$ 1.2 & 45.1 $\pm$ 0.5 & 40.7 $\pm$ 0.9 & 41.4 $\pm$ 0.9 \\ SAMM~\cite{davison2016samm} & \cmark & & 43.7 $\pm$ 0.7 & 42.8 $\pm$ 0.5 & 42.3 $\pm$ 1.4 & 42.9 $\pm$ 1.7 \\ MMEW~\cite{ben2021video} & \cmark & & 43.3 $\pm$ 0.8 & 44.4 $\pm$ 1.2 & 43.3 $\pm$ 1.3 & 44.1 $\pm$ 1.5 \\ \cline{1-7} MiE-X & & & 45.2 $\pm$ 0.5 & 46.3 $\pm$ 0.5 & 47.7 $\pm$ 0.5 & 48.9 $\pm$ 0.8 \\ MiE-X & \cmark & & 46.9 $\pm$ 0.9 & \textbf{48.3} $\pm$ 0.9 & 50.7 $\pm$ 0.9 & 52.1 $\pm$ 1.4 \\ MiE-X & \cmark & \cmark & \textbf{47.0} $\pm$ 0.8 & 48.2 $\pm$ 0.4 & \textbf{52.3} $\pm$ 0.7 & \textbf{52.3} $\pm$ 0.4 \\ \bottomrule \end{tabular} \label{table:Fine_Tuning} \end{table} \endgroup \textbf{Effectiveness of MiE-X in training models for direct deployment.} MiE-X has the same label space with CompMiE. So models trained with MiE-X can be directly evaluated on the CompMiE. In Table \ref{table:Fine_Tuning}, ApexME and Branches trained with MiE-X alone produce an UF1 of 45.2\% and 47.7\%, respectively, which outperforms the training set composed of CompMiE and CK+. \textbf{Effectiveness of MiE-X in model pre-training.} First, when using MiE-X for model pre-training, we observe consistent improvement over not using it (Table~\ref{table:Fine_Tuning}). For example, when we perform fine-tuning on CompMiE using the ApexME method, pre-training with MiE-X brings 5.1\% and 7.1\% improvement in UF1 and UAR, respectively, over not using MIE-X. Second, we compare MiE-X with existing datasets (\textit{i}.\textit{e}., SMIC, CASME, SAMM, and MMEW) of their effectiveness as a pre-training set, on which we train the baseline MiE classifiers (\textit{i}.\textit{e}., ApexME, Branches). 
We do three-fold cross-validation on CompMiE. For each fold, we use the dataset (e.g., SMIC) we would like to evaluate as the pre-training data. Samples are removed from the training set if they also appear in the test subset of CompMiE in the current fold. Then we fine-tune the model on the training subset of CompMiE. Results are shown in both Table~\ref{table:Fine_Tuning} and Fig.~\ref{fig:compare_datasets}. We observe that the model pre-trained on MiE-X significantly outperforms those pre-trained on other datasets. For instance, when we pre-train Branches on MiE-X, the final fine-tuning results on CompMiE in UF1 and UAR are 7.4\% and 8.0\% higher than using MMEW as the pre-training data. This phenomenon validates the effectiveness of our dataset and the proposed synthesis procedure. \begingroup \setlength{\tabcolsep}{6.4pt} \begin{table}[t] \centering \footnotesize \caption{Comparison with the state-of-the-art MiE recognition methods on MMEW and SAMM datasets. We re-implement ApexME, Branches and DTSCNN, which are pretrained with either ImageNet or MiE-X (grey). We report the mean recognition accuracy (\%) and standard variance. $\dagger$ donates vide-based methods. ``Last'' means test result in the last epoch, and ``Best'' refers to the best accuracy among all epochs.} { \begin{tabular}{l|cccc} \toprule \multirow{2}{*}{Methods} & \multicolumn{2}{c}{MMEW} & \multicolumn{2}{c}{SAMM}\\ \cline{2-5} & Last & Best & Last & Best \\ \midrule FDM~\cite{xu2017microexpression} & \quad34.6 & $-$ & 34.1 & $-$\\ LBP-TOP~\cite{zhao2007dynamic} & \quad 38.9 & $-$ & 37.0 & $-$\\ DCP-TOP~\cite{ben2018learning} & \quad42.5 & $-$ & 36.8 & $-$\\ ApexME~\cite{li2018can} & \quad 48.5 $\pm$ 0.6 & 58.3 $\pm$ 0.9 & 41.3 $\pm$ 0.6 & 54.9 $\pm$ 0.7 \\ \rowcolor{mygray} ApexME \textbf{+ MiE-X} & \quad 55.9 $\pm$ 2.0 & 61.4 $\pm$ 0.8 & 46.4 $\pm$ 0.7 & 60.3 $\pm$ 1.1 \\ Branches~\cite{Liu2018A} & \quad 50.1 $\pm$ 0.6 & 58.3 $\pm$ 0.6 & 44.5 $\pm$ 0.7 & 53.3 $\pm$ 0.5 \\ \rowcolor{mygray} Branches \textbf{+ MiE-X} & \quad 56.8 $\pm$ 1.1 & 61.5 $\pm$ 1.0 & 48.7 $\pm$ 1.0 & 56.3 $\pm$ 0.8 \\ \midrule TLCNN$^\dagger$~\cite{wang2018micro} & $-$ &\quad 69.4 & $-$ & 73.5 \\ DTSCNN$^\dagger$~\cite{peng2017dual} & 60.9 $\pm$ 1.3 & \quad 71.1 $\pm$ 1.1 & 51.6 $\pm$ 1.8 & 60.6 $\pm$ 1.1 \\ \rowcolor{mygray} DTSCNN$^\dagger$ \textbf{+ MiE-X} & 63.1 $\pm$ 1.0 & \quad 74.3 $\pm$ 0.5 & 55.5 $\pm$ 1.4 & 73.9 $\pm$ 0.9 \\ \bottomrule \end{tabular} } \centering \label{table:SOTA} \end{table} \endgroup \begin{figure*}[t] \centering \begin{center} \includegraphics[width=\textwidth]{./figures/ablation_ingredients.pdf} \caption{Comparing training effectiveness of real-world data and various synthetic datasets sourced from different combinations of AUs. We compare UF1 \textbf{(a)} and UAR \textbf{(b)} on CompMiE. ``n.s.'' means the difference is {not statistically significant} ($i.e., p$-value $>$ 0.05). $*$ denotes {statistically significant} ($i.e., 0.01 < p$-value $< 0.05$). $**$ and $***$ mean {statistically very significant} ($i.e., 0.001 < p$-value $< 0.01$) and {statistically extremely significant} ($i.e., p$-value $< 0.001$), respectively. We observe decreased accuracy if we remove any of the three types of AUs. When all the three types are used for database creation, both UF1 and UAR exceed results obtained by training on real-world data, with very high statistical confidence. 
\label{figure: ingredients}} \end{center} \end{figure*}
\textbf{Positioning within the state of the art.} We follow a recent survey~\cite{ben2021video} and compare with the state of the art on two datasets, MMEW~\cite{ben2021video} and SAMM~\cite{davison2018objective}, all under 5-fold cross validation. Results are summarized in Table~\ref{table:SOTA}. We re-implemented three baselines (ApexME, Branches and DTSCNN), pretrained on either ImageNet or MiE-X. To pretrain the video-based method DTSCNN, we use a simple variant of MiE-X where each sample has multiple frames. Specifically, when computing $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$, we extract AUs for all the frames between the onset and apex frames. All these extracted AUs are used for frame generation. For $\mathbf{z}^{\text{exp}}$, we linearly interpolate 8 AU vectors between the onset and apex AU vectors, thus generating 10 frames per sample. Table~\ref{table:SOTA} clearly shows that MiE-X pre-training improves the accuracy of all the three methods. Importantly, when MiE-X is used for pre-training, MiE recognition accuracy is very competitive: DTSCNN achieves accuracies (best epoch) of 74.3 $\pm$ 0.5\% and 73.9 $\pm$ 0.9\% on MMEW and SAMM, respectively.
\subsection{Further Analysis} \label{sec:exp:analysis} All experiments in this section are performed on the Branches baseline \cite{Liu2018A}.
\textbf{Comparisons of various AU combinations.} Fig.~\ref{figure: ingredients} evaluates various AU combinations on CompMiE. We have the following observations. \textbf{First}, none of the three types of AUs is dispensable. We observe that the best recognition accuracy is obtained when all three types of AUs are used, which outperforms training with CompMiE+CK+ by 1.7\% and 2.0\% in UF1 and UAR, respectively. Importantly, if we remove any single type of AUs, the UF1 and UAR scores decrease. For example, when removing $\mathbf{z}^{\text{MiE}}$, $\mathbf{z}^{\text{MaE}}$, $\mathbf{z}^{\text{exp}}$ one at a time, the decrease in UF1 score is 1.6\%, 1.0\% and 1.6\%, respectively. \textbf{Second}, using two types of AUs outperforms using only a single type with statistical significance. For example, when using $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$, UF1 is higher than using $\mathbf{z}^{\text{MaE}}$ alone by 2.15\%. In fact, the three AU types come from distinct and trustworthy sources, allowing them to be complementary and effective. This also explains why all three AU types are better than any combination of two. \textbf{Third}, when using a single type of AUs, we find that $\mathbf{z}^{\text{MaE}}$ or $\mathbf{z}^{\text{exp}}$ produces much higher UF1 and UAR than $\mathbf{z}^{\text{MiE}}$. Their superiority could be explained by their diversity. Compared with $\mathbf{z}^{\text{MiE}}$, MiEs generated from $\mathbf{z}^{\text{MaE}}$ and $\mathbf{z}^{\text{exp}}$ are much more diverse. Specifically, when constructing $\mathbf{z}^{\text{MaE}}$, the index of the apex frame is randomly drawn from the range between $\lfloor\alpha \times n\rfloor$ and $\lfloor\beta \times n\rfloor$. Similarly, the randomness of AU intensities is also introduced by the hyperparameters $\mu$ and $\nu$ when generating $\mathbf{z}^{\text{exp}}$. In contrast, the index of the apex frame is fixed when constructing $\mathbf{z}^{\text{MiE}}$. \textbf{Lastly}, we compare results that employ two real-world training datasets. The first is CompMiE, described in Section~\ref{sec:exp:setup}, and the second is a combination of CompMiE and CK+.
It is shown that CompMiE + CK+ outperforms CompMiE by an obvious margin, suggesting that \emph{early stages of MaEs highly correlate with MiEs}. These results motivated us to mine effective AUs ($\mathbf{z}^{\text{MaE}}$) from MaEs.
\begin{figure}[t] \centering \begin{center} \includegraphics[width=1\textwidth]{./figures/diverisity_scale.pdf}
\caption{\textbf{(a)-(c)}: Impact of the number of AU triplets \textbf{(a)}, IDs \textbf{(b)} and MiE samples \textbf{(c)}. In \textbf{(a)-(b)}, we use $\mathbf{z}^{\text{MaE}}$ and $\mathbf{z}^{\text{exp}}$ for database synthesis, while in \textbf{(c)} all three types of AUs are used. We employ the Branches method \cite{Liu2018A}. When we gradually increase the numbers, the three-fold cross validation accuracy (UF1, \%) on CompMiE first improves and then remains stable in all the three subfigures. \label{fig:numbers_ID_AU_img}} \end{center} \end{figure}
\textbf{Impact of the number of AUs, IDs and MiE samples in MiE-X.} For MiE-X, the IDs, AUs and MiE samples are all important, and we now investigate how their quantities influence MiE recognition accuracy by creating MiE-X variants with different numbers of IDs, AU triplets and samples. Here, please note that the diversity is closely related to the number of distinct IDs/AUs/samples, so sometimes we use number and diversity interchangeably. When studying AU and ID diversity, we set the AU combination to be $\mathbf{z}^{\text{MaE}} + \mathbf{z}^{\text{exp}}$ because their diversity can be easily changed by specifying the number of sampling times from the uniform distributions (refer to Section~\ref{sec:AUs}). When investigating the number of MiE samples, we use all three types of AUs.
To evaluate the influence of \textbf{AU} diversity, we set the number of MiE samples and IDs to 30,000 and 5,000 (6 samples per ID), respectively, in all the dataset variants. The AU diversity can be customized by allowing multiple identities to share the same AU triplet. Specifically, the number of AU triplets is set to 4,000, 6,000, 10,000, and 30,000. From the experimental results in Fig.~\ref{fig:numbers_ID_AU_img} (a), we observe that the effectiveness of synthetic data generally increases when AU diversity is improved. For example, the UF1 score increases by 1.8\% when the number of distinct AU triplets increases from 4,000 to 10,000. When the number of AUs is greater than 10,000, the curve reaches saturation.
To study the diversity of \textbf{IDs}, we fix the number of MiE samples and AU triplets in MiE-X to be 30,000. We set the ID number as 700, 1,000, 1,700, and 5,000, achieved by randomly selecting face images from the EmotionNet~\cite{fabian2016emotionet} dataset\footnote{Note that each image in EmotionNet usually denotes a different identity.}. In this experiment, each ID generates more than 6 MiE samples using AU triplets randomly drawn from the pool of 30,000. Results in Fig.~\ref{fig:numbers_ID_AU_img} (b) show that more IDs lead to a higher recognition accuracy. For example, the UF1 of the synthetic dataset increases from 44.4\% to 45.8\% when the number of IDs increases from 700 to 1,700. When the number of IDs exceeds 1,700, the curve becomes stable.
To study the impact of the number of \textbf{MiE samples}, we fix the number of AU triplets to 9,000 and the number of IDs to 1,000. We then gradually increase the generated samples from 9,000 to 54,000 by reusing more AU triplets on each ID. Experimental results are shown in Fig.~\ref{fig:numbers_ID_AU_img} (c).
We find that the effectiveness of the synthetic training set generally increases when more samples are included and that the curve becomes flat when the number of samples is greater than 36k. For example, the UF1 is improved by 1.0\% when the number of samples increases from 9k to 36k. When the number of samples increases from 36k to 54k, there is a slight UF1 improvement of 0.2\%. This observation is expected because when the numbers of IDs and AUs are fixed, the total information contained in the dataset is constrained. From the above experiments, we conclude that MiE-X benefits from more AUs, IDs and samples within a certain range.
\begin{wraptable}{r}{0.5\textwidth} \centering \setlength{\tabcolsep}{1mm}{ \small \caption{ Performance comparison between training with and without side faces. Evaluation is on the CompMiE dataset. } \label{tab:pose} \resizebox{0.4\textwidth}{!}{ \begin{tabular}{|c|c|c|} \hline & \emph{w/} side & \emph{w/o} side \\ \hline UF1 (\%) & 47.7 $\pm$ 0.5 & 47.4 $\pm$ 0.8 \\ \hline \end{tabular} }} \end{wraptable}
\textbf{Impact of face poses.} We use 5,000 IDs with frontal faces to synthesize a training set variant which is compared with MiE-X composed of faces of various poses. To find the frontal faces, we manually select 10 frontal faces in the EmotionNet dataset as queries and, for each, search for 500 faces with similar facial landmarks detected by a pretrained MTCNN landmark detector \cite{7553523}. Table~\ref{tab:pose} summarizes the results on CompMiE, where we do not observe an obvious difference between the two training sets. This can possibly be explained by the fact that real-world MiE datasets mostly contain frontal faces collected in laboratory environments. Therefore, pose variance in MiE-X may not significantly influence performance on existing test sets. Nevertheless, we speculate that using various poses to generate MiE-X would benefit MiE recognition in uncontrolled environments.
\textbf{Analysis of other hyperparameters.} Due to the lack of validation data in real-world MiE datasets, we mostly used prior knowledge and intuition to choose the hyperparameters. Specifically, we chose $\alpha = 0.3$, $\beta = 0.5$ and $\mu = 0.1$, $\nu = 0.3$ in experiments. Here, we briefly analyze these two sets of hyperparameters involved in the AU computation on CompMiE using cross-validation. $[\alpha,\beta]$ is the interval from which the apex frames for computing $\mathbf{z}^{\text{MaE}}$ are randomly selected. Specifically, we analyze three options: ($\alpha = 0.1$, $\beta = 0.3$), ($\alpha = 0.3$, $\beta = 0.5$) and ($\alpha = 0.5$, $\beta = 0.7$). The number of identities is 5,000. Recognition accuracy of the three options is given in Fig.~\ref{fig:exp:hyperparameter} (a), where $\alpha = 0.3$, $\beta = 0.5$ produces the highest UF1 score. This result is in accordance with our intuition: the first $30\%$ to $50\%$ of the frames of an MaE would be more similar to an MiE. $[\mu$, $\nu]$ is the interval from which the intensities of expert-defined AUs are uniformly sampled. Similarly, we analyze three options, \textit{i}.\textit{e}., ($\mu = 0.1$, $\nu = 0.3$), ($\mu = 0.3$, $\nu = 0.5$) and ($\mu = 0.5$, $\nu = 0.7$). This is inspired by observing AU coefficients of real MiEs: the intensity of each action unit is not large, \textit{i}.\textit{e}., $< 0.7$ in most cases, because micro-expressions have {subtle} facial muscle movements. Results are shown in Fig.~\ref{fig:exp:hyperparameter} (b): the intensity range $[0.1, 0.3]$ is superior.
Because the highest value of an MaE AU is $1.0$, the value of $[\mu$,$\nu]$ delivers another intuitive message: {facial AU intensities of MiEs are around $10\%$ to $30\%$ of those of MaEs.}
\subsection{Understanding of MiEs: A Discussion}
\textbf{MiEs generalize across faces.} AUs extracted from real MiEs provide the closest resemblance to true MiEs and are thus indispensable. These AUs $\mathbf{z}^\text{MiE}$ are generalizable because they can be transplanted to faces of different identities. The fact that a higher number of face identities generally leads to a higher accuracy indicates the benefit of adding AUs $\mathbf{z}^\text{MiE}$ to sufficiently many faces to improve MiE recognition towards identity invariance.
\begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[width=0.5\textwidth]{./figures/hyper_para_UF1.pdf} \caption{Impact of hyperparameters in computing $\mathbf{z}^{\text{MaE}}$ and $\mathbf{z}^{\text{exp}}$. UF1 (\%) on the CompMiE dataset is reported in each sub-figure. \textbf{(a):} MiE-X is composed of $\mathbf{z}^{\text{MaE}}$ only. Three groups of $\alpha$ and $\beta$ values are tested. \textbf{(b):} MiE-X is made from $\mathbf{z}^{\text{exp}}$ only. Three groups of $\mu$ and $\nu$ are investigated. $*$ and $**$ have the same meaning as in Fig.~\ref{figure: ingredients}. } \label{fig:exp:hyperparameter} \end{wrapfigure}
\textbf{Early-stage MaEs resemble real MiEs.} To our knowledge, we make a very early attempt to leverage MaEs for MiE generation. Although the two types of facial expressions differ significantly in their magnitude of facial movement, we find that AUs in the initial stages of MaEs are effective approximations to those in MiEs.
\textbf{Expert knowledge is transferable to MiEs.} While AUs annotated by experts are used to describe MaEs, we find that expert AUs with reduced magnitudes are effective in synthesizing MiEs. We therefore infer from a computer vision viewpoint that MiEs are related to normal expressions but with lower intensity. Moreover, by examining the complementary nature of the three types of AUs, we infer that expert knowledge adds some useful computational cues, which do not appear in MaEs and real MiEs but can be defined by human experts. Nevertheless, our work is limited in that the psychological aspects of MiEs are not considered, which will be studied in the future through cross-disciplinary collaborations.
\section{Conclusion} This paper addresses the lack of training data in MiE recognition. An important contribution is the introduction of a large-scale synthetic dataset, MiE-X, with standard emotion labels to improve MiE model training. In the synthesis protocol, we feed faces in the wild, desired emotion labels and AU triplets (our focus) to a generation model. Specifically, sourced from real MiEs, early-stage MaEs, and expert knowledge, three types of AUs are identified as useful and complementary to endorse an effective protocol. This understanding of the role of AUs in effective MiE synthesis is another contribution of this work. Experiments on real-world MiE datasets indicate that MiE-X is a very useful training set: models (pre-)trained with MiE-X consistently outperform those (pre-)trained on real-world MiE data. In addition, this paper reveals some interesting computational properties of MiEs, which would be of value for further investigation.
\clearpage \input{eccv2022submission.bbl} \bibliographystyle{splncs04} \end{document}
\section{More Analysis on $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$.
} Fig.~\ref{fig:bar_au_comparision} visualizes the averaged values of Action Unit (AU) vectors of $\mathbf{z}^{\text{MiE}}$ (extracted from \textit{micro-expressions}) and $\mathbf{z}^{\text{MaE}}$ (extracted from \textit{early-stage macro-expressions}), respectively, under different micro-expression categories. The set of AU numbers is the same as that used in GANimation~\cite{pumarola2018ganimation}~\footnote{17 representative AU numbers are utilized (\textit{i.e.,} AU1, AU2, AU4, AU5, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, and AU45).}.
\begin{center} \centering \includegraphics[width=\textwidth]{figures/bar_au_comparision.pdf} \captionof{figure}{Averaged values of 17 Action Units calculated on $\mathbf{z}^{\text{MiE}}$ (extracted from micro-expressions) and $\mathbf{z}^{\text{MaE}}$ (extracted from early-stage macro-expressions), respectively, under different emotion categories (\textit{i}.\textit{e}., \texttt{negative}, \texttt{positive} and \texttt{surprise}). For each category, we have two observations: 1) $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ share similar trends across different AUs. For example, both $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ have high values of AU7 for Negative. This indicates that they describe the same AUs for each kind of expression. 2) $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ have different values, which means they are complementary to each other.} \label{fig:bar_au_comparision} \end{center}
There are two major findings. First, $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ show similar activation patterns when they have the same emotion labels. For example, when the emotion category is positive, both types of AUs have relatively large values in AU7, AU12 and AU14. Second, $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ show very different average values for some AU numbers. For instance, their values in AU10, AU14, and AU17 are dissimilar for all the three categories. This suggests $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ complement each other to some extent. This might explain why using synthetic data generated from both $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ for MiE recognition learning achieves higher accuracy than only using one of them (refer to Fig. 5 of the main paper).
\begin{wraptable}{r}{0.48\textwidth} \centering \setlength{\tabcolsep}{1mm}{ \small \caption{ Comparing two MiE-X variants with different ID sources. We report the three-fold cross-validation results on the CompMiE dataset. EmotionNet and CelebA are compared. The Branches method \cite{Liu2018A} is used. } \begin{tabular}{|c|c|c|} \hline ID Source & EmotionNet & CelebA \\ \hline UF1 (\%) & 47.7 $\pm$ 0.5 & 47.2 $\pm$ 0.7 \\ \hline \end{tabular} } \label{tab:change_ID_source} \end{wraptable}
\section{Synthesizing MiE-X with A Different ID Source} In the main paper, we use the EmotionNet~\cite{fabian2016emotionet} dataset to sample face IDs. To analyze the generalization ability of the proposed MiE generation protocol on different face datasets, we use CelebA~\cite{liu2015faceattributes} to replace EmotionNet for ID sampling while keeping the numbers of AU triplets and IDs unchanged. The generated MiE-X variant whose IDs come from CelebA is evaluated on the CompMiE dataset by three-fold cross validation. Comparative results are shown in Table \ref{tab:change_ID_source}.
From the results, we do not observe a significant difference in the UF1 score after we change the ID source. The results also support one of the major findings: MiEs generalize across faces.
\section{Synthesizing MiE-X Using Different Datasets as AU Sources}
\begin{table} \centering \setlength{\tabcolsep}{2mm}{ \small \caption{ Performance comparison between MiE-X variants with different AU sources. We report the three-fold cross-validation results (UF1, \%) on the CompMiE dataset. The Branches method \cite{Liu2018A} is used. } \label{tab:change_AU_source} \resizebox{0.98\textwidth}{!}{ \begin{tabular}{|c|cc|cc|cc|} \hline AU & \multicolumn{2}{c|}{$\mathbf{z}^{\text{MiE}}$ source} & \multicolumn{2}{c|}{$\mathbf{z}^{\text{MaE}}$ source} & \multicolumn{2}{c|}{$\mathbf{z}^{\text{MiE}}$ + $\mathbf{z}^{\text{MaE}}$ source} \\ \cline{2-7} Source & CompMiE & MMEW & CK+ & Oulu & CompMiE + CK+ & MMEW + Oulu \\ \hline UF1 (\%) & 34.0 $\pm$ 1.1 & 37.2 $\pm$ 1.4 & 43.9 $\pm$ 1.3 & 42.33 $\pm$ 1.4 & 46.1 $\pm$ 1.5 & 47.5 $\pm$ 0.7\\ \hline \end{tabular} }} \end{table}
In the main paper, $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$ are extracted from the CompMiE \cite{see2019megc} dataset and the CK+ \cite{lucey2010extended} dataset, respectively. It is intriguing to investigate the effectiveness of MiE-X when we change its AU sources. Specifically, to compute $\mathbf{z}^{\text{MiE}}$ and $\mathbf{z}^{\text{MaE}}$, we use MMEW~\cite{ben2021video} and Oulu~\cite{zhao2011facial}, respectively. The AU computation and data generation protocols remain the same. We compare MiE-X variants with different AU sources in Table~\ref{tab:change_AU_source}. We observe that our MiE synthesis protocol can still generate competitive MiE datasets with alternative AU sources. For example, the MiE-X variant with AUs from MMEW and Oulu achieves a UF1 score of 47.5\% on CompMiE, while the UF1 of the same model trained on CompMiE is 43.6\% (refer to Table 1 in the main paper).
\section{Implementation Details of GANimation} This paper employs the GANimation method~\cite{pumarola2018ganimation} to synthesize MiEs. It is an image-to-image translation model for facial expression manipulation. Given a face image $\mathbf{x}^\mathbf{z}$ with Action Units (AUs) $\mathbf{z}$ and a target AU set $\mathbf{z'}$, GANimation aims to learn a single mapping function $G : (\mathbf{x}^\mathbf{z}, \mathbf{z'}) \rightarrow \mathbf{x}^\mathbf{z'}$ such that the generated face image not only has the same identity as the original image but also manifests the target AUs. We follow the default settings of GANimation during training. Specifically, we use Adam with a learning rate of 0.0001 and batch size 25. We train for 30 epochs and linearly decay the learning rate to 0 over the last 10 epochs. The weight coefficients for the loss terms are the same as those in the original paper. With a single GTX 2080TI GPU, two days are needed to train this model.
\section{Implementation Details of The Baseline Classifiers} We adopt Branches~\cite{Liu2018A} and Apex~\cite{peng2018macro} as the baseline methods for MiE recognition in the main paper. Branches, which won first place in~\cite{see2019megc}, has two branches that do not share weights. An MiE sample has an onset frame and an apex frame. The two frames are fed into the two branches (using ResNet-18~\cite{he2016deep} as backbones), respectively, and their embeddings after global average pooling are concatenated. The classifier has two fully connected (FC) layers with dimensions 128 and 32, respectively.
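For readers who wish to reproduce this baseline, a minimal PyTorch sketch of the two-branch design is given below. The class name, the three-way output, and the final prediction layer are illustrative assumptions added for completeness rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision.models as models

class Branches(nn.Module):
    """Sketch: onset and apex frames go through separate (non-shared)
    ResNet-18 backbones; the two 512-d global-average-pooled embeddings
    are concatenated and classified by FC layers of size 128 and 32."""
    def __init__(self, num_classes=3):          # 3 emotion classes assumed
        super().__init__()
        self.onset_branch = models.resnet18()
        self.onset_branch.fc = nn.Identity()     # keep the pooled embedding
        self.apex_branch = models.resnet18()
        self.apex_branch.fc = nn.Identity()
        self.classifier = nn.Sequential(
            nn.Linear(2 * 512, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 32),      nn.ReLU(inplace=True),
            nn.Linear(32, num_classes))          # assumed prediction head

    def forward(self, onset, apex):
        feat = torch.cat([self.onset_branch(onset),
                          self.apex_branch(apex)], dim=1)
        return self.classifier(feat)
\end{verbatim}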
We use Adam with a learning rate of 0.0001 and batch size 32. We train for 80 epochs on real-world data and 30 epochs on MiE-X. Compared with Branches, Apex only has one CNN branch and takes the apex frame as the input. It also uses ResNet-18 as the backbone. The training strategy of Apex is the same as that of Branches. With a single GTX 2080TI GPU, around 10 hours are needed to train the baseline classifiers. \input{supplementary.bbl} \bibliographystyle{splncs04} \end{document}
\section{Introduction} \label{sec:intro} In dictionary learning, the ultimate goal is to obtain an overcomplete dictionary to represent the original samples. Similar to subspace learning, the learned dictionary can be further utilized to solve different categories of problems, such as image denoising \cite{peng2014decomposable} and visual classification \cite{wang2020class}. Many classical methods, including D-KSVD \cite{zhang2010discriminative}, LC-KSVD \cite{jiang2013label}, and LEDL~\cite{shao2020label}, introduce discriminative information by adding the one-hot label matrix to the objective function. These label-embedded approaches are powerful in supervised learning, while in semi-supervised and unsupervised learning the lack of labels leads to a significant drop in performance.
Fortunately, the development of Self-Supervised Learning (SSL) provides a novel perspective for solving this challenge. The core idea of SSL is to set a pretext task to generate a universal model for the downstream task. SSL has been demonstrated to effectively address the problem caused by inadequate labeled data in the training process. Building on SSL, we propose a Self-Supervised Dictionary Learning (SSDL) framework. As in most SSL-based methods, the key challenge is to set up an appropriate pretext task.
\begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{Figures/Difference_G_HG_pLHG.pdf} \end{center} \caption{The comparison among the graph, the hypergraph, and the proposed $p$-Laplacian Attention Hypergraph (pLA-Hypergraph). In a graph, each edge connects two vertices. In a hypergraph, a hyperedge is able to connect multiple vertices. In our pLA-Hypergraph, different hyperedges have different weights, which are represented by different thicknesses.} \label{figure: Difference_G_HG_pLHG} \end{figure}
\begin{figure*}[t] \begin{center} \includegraphics[width=0.9\linewidth]{Figures/flowchart.pdf} \end{center} \caption{The Self-Supervised Dictionary Learning framework. There are two steps: $i)$ Employ the pAHL block to generate the pseudo label matrix $\mathbf{F}$ for the unlabeled data. $ii)$ Embed the pseudo labels into the dictionary learning model to obtain the dictionary $\mathbf{D}$. For more details, please refer to Section~\ref{sec:Methodology}.} \label{figure: flowchart} \end{figure*}
This paper proposes a $p$-Laplacian Attention Hypergraph Learning (pAHL) based pretext task to generate a pseudo label matrix and then employ it in the downstream task (e.g., dictionary learning). Hypergraph learning was first proposed by Zhou \emph{et al.}\cite{zhou2007learning} in 2007. It is capable of predicting labels by mining and aggregating high-order relations within the data. A hypergraph is composed of a vertex set and a hyperedge set. Each hyperedge can connect any number of vertices. Compared with a simple graph, which is only able to reflect pair-wise relations among vertices, a hypergraph is more flexible and can mine deeper relations in the data. However, traditional Laplacian-based hypergraph learning has a shortcoming: each hyperedge plays an equally important role in label prediction, which may sometimes lead to losing key information. (As an example, assume that a person's weight is related to both diet habits and genes, but the diet habits obviously contribute more. If we treat these two attributes as equally important when predicting weight, the results will suffer.)
Thus, we follow \cite{ma2018hypergraph} and introduce a $p$-Laplacian regularizer to generate an attention weight for each hyperedge. Note that, when $p=2$, the $p$-Laplacian regularizer is equal to the Laplacian one. We show the differences among the graph, the hypergraph, and the $p$-Laplacian Attention Hypergraph (pLA-Hypergraph) in Figure~\ref{figure: Difference_G_HG_pLHG}. After $p$-Laplacian Attention Hypergraph Learning, we embed the generated pseudo label matrix into a basic dictionary learning model. Figure~\ref{figure: flowchart} shows the flowchart. In summary, the main contributions are as follows:
\begin{itemize}
\item We propose a Self-Supervised Dictionary Learning (SSDL) approach. To the best of our knowledge, it is the first attempt to enhance dictionary learning from a self-supervised perspective. Specifically, we introduce $p$-Laplacian Attention Hypergraph Learning (pAHL) as the pretext task to generate a pseudo label matrix for label-embedded dictionary learning.
\item The proposed pAHL block is a model-agnostic method that can be employed in any standard dictionary learning model to construct an SSDL framework. In this paper, we embed the pAHL block into a basic dictionary learning approach.
\item We utilize the learned dictionary in two human activity recognition tasks. The experimental results demonstrate that our SSDL is powerful, and the proposed pAHL block significantly improves the performance of the dictionary structure.
\end{itemize}
\section{Methodology} \label{sec:Methodology} In this section, we introduce the details of the self-supervised dictionary learning algorithm. First, we introduce the $p$-Laplacian-based attention hypergraph to generate pseudo labels for the unlabeled training data. Then, we embed the pseudo label information into a standard dictionary learning framework. Figure~\ref{figure: flowchart} shows the flowchart, and Algorithm~\ref{Algorithm: SSDL} summarizes the procedure.
\subsection{Pseudo Label Generation via $p$-Laplacian Attention Hypergraph} \label{subsec:p-Laplacian Hypergraph Learning}
\textbf{Hypergraph Construction} A suitable hypergraph structure is beneficial for mining high-order relations among samples. Different from a simple graph, a hypergraph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{W})$ is composed of a vertex set $\mathcal{V}$, a hyperedge set $\mathcal{E}$, and a hyperedge weight matrix $\mathbf{W}$. $\mathbf{W}$ is a diagonal matrix whose elements denote the weights of the corresponding hyperedges. Besides, there exist two degree matrices in hypergraph learning, namely the vertex degree matrix $\mathbf{D}_v$ and the hyperedge degree matrix $\mathbf{D}_e$. We use the incidence matrix $\mathbf{H}\in{\mathbb{R}}^{|\mathcal{V}|\times{|\mathcal{E}|}}$ to represent the connections between hyperedges and vertices, and define its elements as follows: \begin{equation} \scriptsize \begin{split} \mathbf{H}(v,e)= \left\{\begin{array}{cc} {\exp \left(-dis\left(v, v_{c}\right)^{2}\right)} & {\text { if } v \in e} \\ {0} & {\text{ o.w. }} \end{array}\right. \end{split} \label{equation: elements_in_H} \end{equation} where $e$ denotes a hyperedge in $\mathcal{E}$, $v$ denotes a vertex in $\mathcal{V}$, and $dis$ denotes a distance function.
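To make the construction concrete, a small NumPy sketch of Eq.~(\ref{equation: elements_in_H}) is given below. It assumes the common choice, not fixed by the formulation above, that each hyperedge is formed by a sample together with its $k$ nearest neighbours, and that $v_c$ is this central sample.
\begin{verbatim}
import numpy as np

def build_incidence(X, k=8):
    """Sketch of the incidence definition above: one hyperedge per sample,
    containing the sample and its k nearest neighbours; non-zero entries
    are exp(-dis(v, v_c)^2) with v_c the central sample (an assumption)."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    H = np.zeros((n, n))                      # |V| x |E|
    for e in range(n):
        members = np.argsort(dist[e])[:k + 1]     # centre + k neighbours
        H[members, e] = np.exp(-dist[members, e] ** 2)
    return H
\end{verbatim}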
Accordingly, the hyperedge degree $\delta(e)$ and the vertex degree $d(v)$ are formulated as: \begin{equation} \scriptsize \begin{split} \delta({e})= \sum_{{v} \in \mathcal{V}} \mathbf{H}({v},{e}) \end{split} \label{equation: hyperedge_degree} \end{equation} \begin{equation} \scriptsize \begin{split} d({v})= \sum_{{e} \in \mathcal{E}} \mathbf{W}({e} ) \mathbf{H}({v},{e}) \end{split} \label{equation: vertex_degree} \end{equation}\\
\textbf{$p$-Laplacian Attention Hypergraph Learning} Following \cite{gao2012laplacian}, we formulate the normalized hypergraph Laplacian regularizer as: \begin{equation} \scriptsize \begin{split} \mathbf{\Delta}_l=\mathbf{I}_v - \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \mathbf{W} \mathbf{D}_e^{-1} \mathbf{H}^T \mathbf{D}_v^{-\frac{1}{2}} \end{split} \label{equation: Laplacian} \end{equation} where $\mathbf{I}_v \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ denotes the identity matrix. In most hypergraph learning tasks, the elements in $\mathbf{W}$ are set to $1$, representing that different hyperedges contribute equally to node aggregation. In this paper, by contrast, we introduce the $p$-Laplacian to approximate the relations among hyperedges and further aggregate high-order information, which can be formulated as: \begin{equation} \scriptsize \begin{split} \mathbf{\Delta}_{pl}=\mathbf{I}_v - \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \left( \mathbf{I}_e - \mathbf{L}_p \right) \mathbf{D}_e^{-1} \mathbf{H}^T \mathbf{D}_v^{-\frac{1}{2}} \end{split} \label{equation: P-Laplacian} \end{equation} where $\mathbf{I}_e \in \mathbb{R}^{|\mathcal{E}| \times |\mathcal{E}|}$ denotes the identity matrix. $\mathbf{L}_p = \mathcal{Q} \Lambda \mathcal{Q}^T$, where $\mathcal{Q}= \left(q^1,q^2,\cdots,q^M \right)$ denotes the full set of eigenvectors, and $\Lambda= \left(\Lambda^1,\Lambda^2,\cdots,\Lambda^M \right)$ denotes the corresponding eigenvalues. According to \cite{luo2010eigenvectors}, we solve the $p$-Laplacian embedding as: \begin{equation} \scriptsize \begin{split} &\mathop {\arg \min}\limits_{\mathcal{Q}} f_1(\mathcal{Q}) = \sum_{{m} \in \mathcal{M}} \frac{\sum_{i,j \in |\mathcal{V}|} w_{ij} |q_i^m - q_j^m|^p} {\left\|q^m\right\|_p^p}\\ &{\rm{s}}.t.{\kern 4pt} \mathcal{Q}^T\mathcal{Q} = \mathbf{I} \end{split} \label{equation: p-Laplacian embedding} \end{equation} where $w_{ij}$ is the corresponding element of the weight matrix. Here, we use the gradient method to solve Equation~\ref{equation: p-Laplacian embedding} as: \begin{equation} \scriptsize \begin{split} \frac{\partial f_1}{\partial q_i^m} = \frac{1}{\left\|q^m\right\|_p^p} \left [ \sum_j w_{ij} \phi_p \left( q_i^m - q_j^m \right) - \frac{\phi_p \left( q_i^m \right)}{\left\|q^m\right\|_p^p} \right ] \end{split} \label{equation: p-Laplacian gradient1} \end{equation} where $\phi_p$ is defined as $\phi_p\left( x \right) = |x|^{p-1}sig\left( x \right)$, and $sig$ denotes the sign function. To enforce orthogonality, we follow \cite{liu2018p} and update $\mathcal{Q}$ until convergence as: \begin{equation} \scriptsize \begin{split} \mathcal{Q} = \mathcal{Q} - \beta \left( \frac{\partial f_1}{\partial \mathcal{Q}} - \mathcal{Q} \left( \frac{\partial f_1}{\partial \mathcal{Q}} \right)^T \mathcal{Q} \right) \end{split} \label{equation: p-Laplacian Q} \end{equation} where $\beta$ is the step length.
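For illustration, the following NumPy sketch performs this update, following Eqs.~(\ref{equation: p-Laplacian gradient1}) and (\ref{equation: p-Laplacian Q}) as stated. The pairwise weights $w_{ij}$ are assumed to be supplied as a matrix, and the step size and iteration count are illustrative choices.
\begin{verbatim}
import numpy as np

def phi_p(x, p):
    return np.abs(x) ** (p - 1) * np.sign(x)

def p_laplacian_embedding(Wv, M, p=1.8, beta=0.01, iters=200):
    """Sketch of the gradient step and the orthogonality-preserving
    correction; Wv holds the pairwise weights w_ij (an assumption)."""
    n = Wv.shape[0]
    Q = np.linalg.qr(np.random.randn(n, M))[0]   # random orthonormal start
    for _ in range(iters):
        G = np.zeros_like(Q)
        norm_p = (np.abs(Q) ** p).sum(axis=0)    # ||q^m||_p^p per column
        for m in range(M):
            diff = Q[:, m][:, None] - Q[:, m][None, :]
            G[:, m] = ((Wv * phi_p(diff, p)).sum(axis=1)
                       - phi_p(Q[:, m], p) / norm_p[m]) / norm_p[m]
        Q = Q - beta * (G - Q @ G.T @ Q)          # orthogonality correction
    return Q
\end{verbatim}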
Finally, we obtain the corresponding eigenvalues as: \begin{equation} \scriptsize \begin{split} \Lambda^m = \frac{\sum_{i,j \in |\mathcal{V}|} w_{ij} |q_i^m - q_j^m|^p} {\left\|q^m\right\|_p^p} \end{split} \label{equation: p-Laplacian lambda} \end{equation}\\
\textbf{Pseudo Label Generation} Assume that part of the training data is labeled, and define the initial label embedding matrix as $\mathbf{O} \in \mathbb{R}^{C \times N}$, where $C$ denotes the total number of classes. For labeled samples, $\mathbf{O}_{ij}$ is $1$ if the $j$-th sample belongs to the $i$-th class, and it is $0$ otherwise. For unlabeled samples, we set all elements to $0.5$. We formulate the objective function as: \begin{equation} \scriptsize \begin{split} &\mathop {\arg \min}\limits_{\mathbf{F}} f_2(\mathbf{F}) = \text{tr} \left( \Delta_{pl} \mathbf{F}^T \mathbf{F} \right) + \lambda \left\| \mathbf{F} - \mathbf{O} \right\|_F^2\\ \end{split} \label{equation: Pseudo Label Obj} \end{equation} where $\lambda$ is a balancing parameter. According to \cite{zhou2007learning}, we directly obtain the pseudo labels in closed form as: \begin{equation} \scriptsize \begin{split} \mathbf{F} = \mathbf{O}\left(\mathbf{I}_v + \frac{1}{\lambda} \mathbf{\Delta}_{pl} \right) ^{-1} \end{split} \label{equation: Pseudo Label} \end{equation} where $\mathbf{F} \in \mathbb{R}^{C \times N}$ is the predicted pseudo label matrix. Unlike the one-hot ground-truth label matrix, $\mathbf{F}$ is soft.
\subsection{Self-Supervised Dictionary Learning} \label{sec: Self-Supervised Dictionary Learning} The above section shows that the learned pseudo label information only relies on the hypergraph structure. That is to say, the proposed $p$-Laplacian Attention Hypergraph Learning (pAHL) is a model-agnostic approach, which can be embedded into any dictionary learning framework. Here, we introduce the pAHL block into a standard dictionary learning model. The objective function can be formulated as: \begin{equation} \scriptsize \begin{split} &\mathop {\arg \min}\limits_{\mathbf{D}, \mathbf{S},\mathbf{B}} f_3(\mathbf{D},\mathbf{S},\mathbf{B})\\ &=\left\| \mathbf{X} - \mathbf{D}\mathbf{S} \right\|_F^2 + 2\alpha \left\| \mathbf{S} \right\|_{\ell_1} + \gamma {\kern 2pt} \left\| \mathbf{F} - \mathbf{B}\mathbf{S} \right\|_F^2 \\ &{\kern 5pt}{\rm{s}}.t.\left\| {{{\bf{d}}_{ \bullet k}}} \right\|_2^2 \le 1, {\kern 5pt}\left\| {{{\bf{b}}_{ \bullet k}}} \right\|_2^2 \le 1 {\kern 4pt} \left( {k = 1,2, \cdots K} \right)\\ \end{split} \label{equation: Objective_function} \end{equation} where $\mathbf{X}=[{\mathbf{x}}_1,{\mathbf{x}}_2,\dots,{\mathbf{x}}_{N}] \in \mathbb{R}^{dim \times N}$ denotes the training data, ${\mathbf{x}}_i$ ($i = 1, 2, \dots$) denotes the feature embedding of the $i$-th sample, $dim$ denotes the dimension of each sample, and $N$ is the number of training samples. $\mathbf{D} \in \mathbb{R}^{dim \times K}$ represents the to-be-learned dictionary, where $K$ is the dictionary size. $\mathbf{B} \in \mathbb{R}^{C \times K}$ represents the to-be-learned classifier, where $C$ denotes the number of classes. $\mathbf{S} \in \mathbb{R}^{K \times N}$ denotes the sparse codes over the dictionary. $\alpha$ and $\gamma$ are positive scalar constants. We alternately update $\mathbf{S}$, $\mathbf{D}$, and $\mathbf{B}$ until the objective function no longer decreases.
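Before turning to the update rules, note that the pretext step of Eq.~(\ref{equation: Pseudo Label}) amounts to a single linear solve. A small NumPy sketch, assuming $\mathbf{\Delta}_{pl}$ and $\mathbf{O}$ are already available as arrays, reads as follows.
\begin{verbatim}
import numpy as np

def pahl_pseudo_labels(Delta_pl, O, lam=0.1):
    """Sketch of the closed-form pseudo-label step.
    Delta_pl: (N, N) p-Laplacian attention hypergraph regularizer.
    O:        (C, N) initial label matrix (1/0 labeled, 0.5 unlabeled).
    Returns the soft pseudo-label matrix F of shape (C, N)."""
    N = Delta_pl.shape[0]
    A = np.eye(N) + Delta_pl / lam
    # F A = O  <=>  A^T F^T = O^T, solved without forming the inverse
    return np.linalg.solve(A.T, O.T).T
\end{verbatim}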
$\mathbf{S}$ can be solved as: \begin{equation} \scriptsize \begin{split} \mathbf{S}_{kn}= {\frac{\max\left(\mathcal{J}-\alpha,0 \right) + \min\left(\mathcal{J}+\alpha,0 \right)} {\left(\mathbf{D}^T\mathbf{D} +\gamma {\kern 2pt} \mathbf{B}^T\mathbf{B}\right)_{kk}}} \end{split} \label{equation: optimization_UpdateS_Skn_1} \end{equation} where \begin{equation} \scriptsize \begin{split} \mathcal{J}&=\left(\mathbf{D}^T\mathbf{X}+\gamma \mathbf{B}^T \mathbf{F} \right)_{kn} -\sum_{l=1,l\neq k}^{K} \left(\mathbf{D}^T\mathbf{D} +\gamma \mathbf{B}^T\mathbf{B} \right)_{kl}\mathbf{S}_{ln} \end{split} \label{equation: optimization_UpdateS_J} \end{equation} Then we introduce BCD~\cite{liu2014blockwise} to update $\mathbf{B}$ and $\mathbf{D}$ as: \begin{equation} \scriptsize \begin{split} \mathbf{D}_{\bullet k} = \frac{\mathbf{X}\left(\mathbf{S}_{k \bullet} \right)^T -\mathbf{\tilde{D}}^k \mathbf{S} \left(\mathbf{S}_{k \bullet} \right)^T} {\| \mathbf{X}\left(\mathbf{S}_{k \bullet} \right)^T -\mathbf{\tilde{D}}^k \mathbf{S} \left(\mathbf{S}_{k \bullet} \right)^T \|_2} \end{split} \label{equation: optimization_UpdateD} \end{equation} \begin{equation} \scriptsize \begin{split} \mathbf{B}_{\bullet k} = \frac{\mathbf{F}\left(\mathbf{S}_{k \bullet} \right)^T -\mathbf{\tilde{B}}^k \mathbf{S} \left(\mathbf{S}_{k \bullet} \right)^T} {\| \mathbf{F}\left(\mathbf{S}_{k \bullet} \right)^T -\mathbf{\tilde{B}}^k \mathbf{S} \left(\mathbf{S}_{k \bullet} \right)^T \|_2} \end{split} \label{equation: optimization_UpdateB} \end{equation} where $ \mathbf{\tilde{D}}= \left\{ \begin{array}{cc} {\mathbf{D}_{\bullet p}} & {p \neq{k}} \\ {\mathbf{0}} & {p = k} \end{array}\right.$, $ \mathbf{\tilde{B}}= \left\{ \begin{array}{cc} {\mathbf{B}_{\bullet p}} & {p \neq{k}} \\ {\mathbf{0}} & {p = k} \end{array}\right.$, and $\mathbf{0}$ denotes the zero matrix. We summarize the Self-Supervised Dictionary Learning method in Algorithm~\ref{Algorithm: SSDL}.
\begin{algorithm}[t] \DontPrintSemicolon \KwInput{$\mathbf{X} \in \mathbb{R}^{dim \times N}$} \KwOutput{$\mathbf{D} \in \mathbb{R}^{dim \times K}$, $\mathbf{S} \in \mathbb{R}^{K \times N}$} Construct hypergraph $\mathbf{H}$ by \textbf{Equation \ref{equation: elements_in_H}}.\\ \While{i \textless maxiter} { Solve $p$-Laplacian embedding, update eigenvector $\mathcal{Q}$ by \textbf{Equation \ref{equation: p-Laplacian gradient1}, \ref{equation: p-Laplacian Q}}.\\ Update eigenvalue $\Lambda$ by \textbf{Equation \ref{equation: p-Laplacian lambda}}.\\ } Obtain $p$-Laplacian-based attention hypergraph regularizer $\mathbf{\Delta}_{pl}$ by \textbf{Equation \ref{equation: P-Laplacian}}.\\ Generate pseudo label $\mathbf{F}$ by \textbf{Equation \ref{equation: Pseudo Label Obj}, \ref{equation: Pseudo Label}}.\\ \While{j \textless maxiter} { Update sparse codes $\mathbf{S}$ by \textbf{Equation \ref{equation: optimization_UpdateS_Skn_1},\ref{equation: optimization_UpdateS_J}}.\\ Update dictionary $\mathbf{D}$ by \textbf{Equation \ref{equation: optimization_UpdateD}}.\\ Update classifier $\mathbf{B}$ by \textbf{Equation \ref{equation: optimization_UpdateB}}.\\ } \caption{Self-Supervised Dictionary Learning} \label{Algorithm: SSDL} \end{algorithm}
\section{Experiment} \label{sec: experiment} Dictionary learning has been widely applied in many fields. Here we evaluate the learned dictionary in human activity recognition tasks. We use two datasets: the Stanford 40 Actions (Stanford40) \cite{yao2011human} dataset and the UIUC Sports Event (UIUC-SE) \cite{li2007and} dataset. We first introduce the experimental setup.
Then, we compare the proposed SSDL with state-of-the-art methods. Next, we embed the proposed pAHL block into other classical methods to evaluate its model-agnostic ability. After that, we conduct ablation studies to analyze our method. Finally, we discuss the choice of the pretext task.
\subsection{Experimental Setup} \label{sec: exp_set_up} For all the datasets, we employ a standard ResNet to extract $2,048$-dimensional feature embeddings, select $70\%$ of the samples for training and the rest for testing, and only $40\%$ of the training data is labeled. The parameters $p$ and $\lambda$ in the pretext task play a key role in obtaining a suitable pseudo label matrix for dictionary learning. We fix them to $1.8$, $0.1$ for Stanford40, and $2.2$, $0.1$ for UIUC-SE. There is a trick to tune these two parameters; for more details, please refer to Section~\ref{sec: Ablation Studies}. In dictionary learning, we set the dictionary size $K$ to half the number of training samples for the two datasets, and $\alpha=2^{-14}$, $\gamma=2^{-12}$ for the Stanford40 dataset and $\alpha=2^{-12}$, $\gamma=2^{-12}$ for the UIUC-SE dataset. The details are also discussed in Section~\ref{sec: Ablation Studies}.
\begin{table}[t] \caption{Recognition results with $40\%$ label rates.} \label{table: Recognition_results_40} \begin{center} \begin{tabular}{lcc} \toprule \textbf{Methods$\backslash$Datasets} & \textbf{Stanford40} & \textbf{UIUC-SE} \\ \midrule
SRC (TPAMI \cite{wright2009robust}, 2009) & 66.0$\%$ & 88.4$\%$ \\
CRC (ICCV \cite{zhang2011sparse}, 2011) & 70.1$\%$ & 94.2$\%$ \\
NRC (PR \cite{xu2019sparse}, 2019) & 67.7$\%$ & 89.7$\%$ \\
SLRC (TPAMI \cite{deng2018face}, 2018) & 65.3$\%$ & 93.4$\%$ \\
Euler-SRC (AAAI \cite{liu2018euler}, 2018) & 66.9$\%$ & 90.2$\%$ \\
\midrule
ADDL (TNNLS \cite{zhang2018jointly}, 2018) & 74.8$\%$ & 95.7$\%$ \\
FDDL (ICCV \cite{yang2011fisher}, 2011) & 73.3$\%$ & 94.2$\%$ \\
LC-KSVD (TPAMI \cite{jiang2013label}, 2013) & 67.7$\%$ & 89.1$\%$ \\
LC-PDL (IJCAI \cite{zhang2019scalable}, 2019) & 73.3$\%$ & 91.3$\%$ \\
LEDL (NC \cite{shao2020label}, 2020) & 72.9$\%$ & 91.8$\%$ \\
CDLF (SP \cite{wang2020class}, 2020) & 72.7$\%$ & 92.4$\%$ \\
\midrule \textbf{SSDL} & \textbf{75.9$\%$} & \textbf{96.4$\%$} \\ \bottomrule \end{tabular} \end{center} \end{table}
\begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{Figures/pAHL_LEDL_CDLF.pdf} \end{center} \caption{Comparison of pAHL-LEDL and pAHL-CDLF on the Stanford40 dataset with a $40\%$ label rate.} \label{figure: pAHL_LEDL_CDLF} \end{figure}
\begin{figure*}[t]
\subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \begin{center} \includegraphics[width=1\linewidth]{Figures/label_rate.pdf} \end{center} \label{figure: label_rate} \end{minipage}% }%
\subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \begin{center} \includegraphics[width=1\linewidth]{Figures/p_parameter.pdf} \end{center} \label{figure: p_parameter} \end{minipage}% }%
\subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \begin{center} \includegraphics[width=1\linewidth]{Figures/lambda_parameter.pdf} \end{center} \label{figure: lambda_parameter} \end{minipage}% }%
\subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \begin{center} \includegraphics[width=1\linewidth]{Figures/alpha_beta_parameter.pdf} \end{center} \label{figure: alpha_beta_parameter} \end{minipage}% }%
\caption{Ablation studies} \end{figure*}
\subsection{Experimental Results} \label{sec: exp_result} We compare our SSDL with other state-of-the-art methods.
We split these approaches into two categories, which are separated by horizontal lines in Table~\ref{table: Recognition_results_40}: $i)$ traditional machine learning methods (which directly represent the testing samples with the training samples), including SRC \cite{wright2009robust}, CRC \cite{zhang2011sparse}, NRC \cite{xu2019sparse}, SLRC \cite{deng2018face} and Euler-SRC \cite{liu2018euler}; $ii)$ dictionary learning methods, including ADDL \cite{zhang2018jointly}, FDDL \cite{yang2011fisher}, LC-KSVD \cite{jiang2013label}, LC-PDL \cite{zhang2019scalable}, LEDL \cite{shao2020label}, and CDLF \cite{wang2020class}. We show the recognition results with $40\%$ labeled training data in Table~\ref{table: Recognition_results_40} and have the following observations.
From Table~\ref{table: Recognition_results_40}, we can see that our SSDL outperforms all other methods by at least $1.1\%$ and $0.7\%$ on the Stanford40 and UIUC-SE datasets, respectively. Compared with the traditional methods, our SSDL has significant improvements, but more computational resources are needed to train the dictionary. Compared with other state-of-the-art dictionary learning based approaches, SSDL has at least a $0.7\%$ improvement. For the label-embedded dictionary learning methods (LC-KSVD, LC-PDL, LEDL, CDLF), SSDL's recognition accuracies exceed them by at least $2.6\%$. This demonstrates the effectiveness of our method to some extent. However, our SSDL only embeds the pAHL-based pretext task into a basic dictionary learning model. As mentioned in Section~\ref{sec:intro}, the pAHL block is a model-agnostic method that can be embedded into any standard dictionary learning algorithm, such as LC-KSVD, LC-PDL, LEDL, and CDLF. That is to say, we may achieve higher recognition accuracies if we embed our pAHL block into these models. To evaluate this statement, we apply the pAHL block to LEDL and CDLF on the Stanford40 dataset. The results are shown in Figure~\ref{figure: pAHL_LEDL_CDLF}. We can see that the pAHL-embedded LEDL and CDLF outperform their original counterparts and even achieve better performance than SSDL.
\subsection{Ablation Studies} \label{sec: Ablation Studies} The SSDL approach has achieved outstanding performance. It is interesting to identify the factors affecting the experimental results. For this purpose, we design two ablation studies to discuss the proposed SSDL method. $i)$ One of our approach's main contributions is to reduce the dependence on labeled data for dictionary learning. Thus, we design an ablation study on the UIUC-SE dataset to observe the effect of label rates. From Figure~\ref{figure: label_rate}, we can see that, as the label rate decreases, the performance of both methods decreases, but our method degrades much more slowly than the other one. $ii)$ There are mainly four parameters ($p$, $\lambda$, $\alpha$, $\gamma$) that influence the results. All evaluated experiments use a $40\%$ label rate on the UIUC-SE dataset. Here, we first discuss $p$ and $\lambda$ in the pretext task. We adjust $p$ and $\lambda$ to obtain a pseudo label matrix. Usually, we fine-tune the two parameters according to the final results (for example, in our paper, we can adjust the two parameters by the recognition accuracy). Here, we give a trick to determine the two optimal parameters more easily. Specifically, we first use the training data to generate a model with given $p$ and $\lambda$. Then, we employ the trained model to compute the cross-entropy loss on the testing data.
Finally, we adjust the parameters until the minimum loss is achieved. The influence of $p$ and $\lambda$ is shown separately in Figures \ref{figure: p_parameter} and \ref{figure: lambda_parameter}. The y-axis denotes the loss on the testing data. We obtain the minimum loss near $p=2.2$ and $\lambda=0.1$. Since $\alpha$ and $\gamma$ interact with each other, we explore the impact of these two parameters simultaneously. Figure~\ref{figure: alpha_beta_parameter} shows the experimental results. The proposed SSDL approach is not sensitive to these two parameters.
\subsection{Pretext Task} \label{sec: Pretext Task} In our framework, we use the proposed pAHL as the pretext task. In fact, it is also possible to select other methods, such as GL \cite{zhou2003learning}, HL \cite{zhou2007learning}, HL-W \cite{gao20123}, and DHSL \cite{zhang2018dynamic}, to predict the pseudo labels for dictionary learning. We employ the cross-entropy loss to quantify their influence. Results are shown in Table~\ref{table: loss}. We can see that our pAHL achieves better performance than GL, HL, and HL-W, and obtains results similar to DHSL.
\begin{table}[t] \caption{Cross-entropy loss of the pretext task with $40\%$ label rates.} \label{table: loss} \begin{center} \begin{tabular}{lcc} \toprule \textbf{Methods$\backslash$Datasets} & \textbf{Stanford40} & \textbf{UIUC-SE} \\ \midrule GL (NIPS \cite{zhou2003learning}, 2003) & 0.47 & 0.51 \\ HL (NIPS \cite{zhou2007learning}, 2007) & 0.64 & 0.42 \\ HL-W (TIP \cite{gao20123}, 2012) & 0.51 & 0.36 \\ DHSL (IJCAI \cite{zhang2018dynamic}, 2018) & 0.49 & \textbf{0.27} \\ \midrule \textbf{pAHL} & \textbf{0.43} & 0.31 \\ \bottomrule \end{tabular} \end{center} \end{table}
\section{Conclusion} Label-embedded dictionary learning is a typical technology in machine learning. However, since it relies on label information, this category of approaches is only applicable to supervised learning. Inspired by the self-supervised idea, we propose a self-supervised dictionary learning method to expand label-embedded dictionary learning to semi-supervised and unsupervised learning. To the best of our knowledge, this is the first attempt to solve this dictionary learning challenge from the self-supervised perspective. Experimental results have demonstrated the effectiveness of our method.
\section{Acknowledgements} The paper was supported by the National Natural Science Foundation of China (Grant No. 62072468), the Natural Science Foundation of Shandong Province, China (Grant No. ZR2019MF073, ZR2018MF017), the Open Research Fund from Shandong Provincial Key Laboratory of Computer Network (No. SDKLCN-2018-01), Qingdao Science and Technology Project (No. 17-1-1-8-jch), the Fundamental Research Funds for the Central Universities, China University of Petroleum (East China) (Grant No. 20CX05001A), the Major Scientific and Technological Projects of CNPC (No. ZD2019-183-008), and the Creative Research Team of Young Scholars at Universities in Shandong Province (No.2019KJN019). \bibliographystyle{IEEEbib}
\section{Introduction} We consider the problem of certifying the stability of either continuous-time dynamical systems \begin{align} \label{def:dyn-sys} \dot{x} =f(x) \end{align} or discrete-time systems \begin{align} \label{def:dyn-sys-dt} x^+ =f(x), \end{align} where $f:D \rightarrow \mathbb{R}^n$ is a continuous function with an equilibrium point at the origin and $D \subseteq \mathbb{R}^n$. Stability is certified via a Lyapunov function $V(x)$, where it is assumed that $V(x)$ is defined by a parameter vector $p$ and satisfies (w.l.o.g. for the continuous-time case): \begin{equation} \label{Lypcon} \begin{split} V(0) &= 0 \\ V(x) &>0 \;\; \forall x \in D\setminus \{ 0 \}\\ \dot{V}(x) & \leq 0 \;\; \forall x \in D\setminus \{ 0 \}. \end{split} \end{equation} If such a $V(x)$ exists, then the equilibrium of $f(x)$ is stable. Furthermore, if $\dot{V}(x) < 0$, then the equilibrium of $f(x)$ is asymptotically stable \cite[p. 100]{khalil1996}. \\
From a computational point of view, proving (asymptotic) stability for a general dynamical system is undecidable, as the problem (\ref{Lypcon}) can be reduced to the statement of Richardson's Theorem \cite{richardson1969}. Furthermore, a numerical construction may lead to invalid output due to insufficient accuracy or inconsistent floating-point effects \cite{ahmed2020,rump1988algorithms}. As opposed to purely numerical methods, formal verification provides an effective framework for automated program synthesis. That is, we aim for a routine which simultaneously constructs a Lyapunov function and additionally outputs a formal certificate that (\ref{Lypcon}) holds.
{\bf{Contributions and Outline of this work.}}\\ The goal of the present work is to extend the Satisfiability Modulo Theories (SMT) based approach to synthesize Lyapunov functions for nonpolynomial and switching systems. We start with a review of related work in Section II. The necessary preliminaries of SMT and CEGIS are presented in Section III. The main result, i.e., the framework for constructing Lyapunov functions for the considered classes of dynamical systems, is derived in Section IV. In Section V we show the application and the effectiveness of our CEGIS framework on some example systems. Finally, in Section VI, we give an outlook on where the incorporation of the CEGIS framework may produce interesting results.
\section{Related Works} As mentioned in the previous Section, the algorithmic construction of Lyapunov functions is a notoriously difficult and often discussed problem in nonlinear control theory. An overview of computational methods for Lyapunov functions is given in \cite{giesel2015}. Several methods are limited to special classes of systems, such as positive dimensional polynomial systems \cite{ji2013}. A well-established method to synthesize Lyapunov functions is the Sum of Squares Decomposition (SOS) \cite{papachristodoulou2002}. There are also optimization-based methods such as \cite{argeitis2014,forsman1991}. Additionally, machine learning approaches may be used in the context of Lyapunov function construction; McGough \cite{mcgough2010} employs an evolutionary algorithm to construct candidate Lyapunov functions. Another approach is to train a neural network to approximate a Lyapunov function \cite{gruene2020,gaby2021}.
Besides the construction, the formal verification of the Lyapunov function is an interesting and important topic by itself \cite{Althoff2014-pow-sys-form-ver,Chan2016-form-ver-sys,osinenko2017constructive}, since one has to check carefully whether the output of the implemented numerical construction is indeed a Lyapunov function. Thus, the construction of a Lyapunov function can be viewed as a certain program synthesis problem. Moreover, it is possible to synthesize and verify programs simultaneously. One of the main approaches consists of searching for counterexamples for first-order logical formulas and reusing them to support the program construction routines \cite{abate2018,polgreen2020,reynolds2015}. This framework is called Counterexample Guided Inductive Synthesis (CEGIS). An important task for the speed and convergence of the method is finding and selecting the examples \cite{pu2018}. Counterexample based approaches have also been employed in control applications \cite{verdier2020,ravanbakhsh2015}. Ahmed \cite{ahmed2020} introduces a framework to synthesize Lyapunov functions for linear, non-linear (polynomial), and parametric models. The CEGIS approach for the synthesis of a Lyapunov function parameterized by neural networks is discussed in \cite{Dai2020}.
A possible approach to deal with non-polynomial systems is to analytically transform the problem into a polynomial one \cite{papachristodoulou2002}. The SMT solver dReal, used in \cite{kapinski2014}, allows the user to specify a numerical error bound in order to handle such functions \cite{gao2013}. In \cite{han2015}, a non-polynomial system is formulated as an uncertain polynomial system with parameter ranges obtained from the truncated Taylor expansion and a parameterizable remainder. The construction for nonpolynomial systems may often result in a timeout of the CEGIS procedure \cite{ravanbakhsh2015}. Therefore, one needs to be cautious in choosing the required approximation structures.
\section{Preliminaries}
\subsection{Satisfiability modulo theories using Z3} SMT generalizes Boolean satisfiability by adding equality reasoning, arithmetic, fixed-size bit-vectors, arrays, quantifiers, and other useful first-order theories \cite{Moura2008}. In this work, the SMT solver Z3 \cite{Z3} is used, which checks whether a given first-order formula is satisfiable (\texttt{sat}) or not (\texttt{unsat}). As a simple example, consider the following inequality system: \begin{equation} x_1^2+x_2^2 \le 1, \; x_1+x_2 = 1. \end{equation} The implementation using Z3Py, an interface for Z3 in Python, results in: \begin{align*} &\texttt{x1,x2 = Reals('x1 x2') } \\ &\texttt{s = Solver()} \\ &\texttt{s.add(x1**2+x2**2<=1, x1+x2==1)} \\ &\texttt{print(s.check())} \end{align*} In this case Z3 returns \texttt{sat}, since the inequalities are satisfiable. \texttt{print(s.model())} returns a possible solution, here \texttt{[x2 = 1/2, x1 = 1/2]}. If the second formula is changed to \texttt{x1+x2==2}, Z3 would return \texttt{unsat}, since obviously no solution for this particular inequality system exists.
\subsection{Counterexample Guided Inductive Synthesis Using Z3} Counterexample Guided Inductive Synthesis (CEGIS) is an iterative algorithm in which two phases, the verification phase and the learner phase, are performed alternately. In the verification phase, the properties of the candidate program (a Lyapunov function in our case) are checked. If the desired properties are not fulfilled and a counterexample is found, this counterexample is passed to the learner phase.
In the learner phase, a new candidate is generated, which has to be consistent with all previous counterexamples. This procedure repeats until no more counterexamples can be found. The learner-verifier framework is illustrated in \reffig{fig:CEGIS_sheme}.
\begin{figure}[!h] \centering \begin{tikzpicture}[>=latex] \node (learner) {Learner}; \node[ right=1cm of learner] (verifier) {Verifier}; \draw [->] (learner.north) to [out=30,in=150] node[above,sloped] {$V(x)$} (verifier.north); \draw [->] (verifier.south) to [out=210,in=-30] node[below,sloped] {$x_{\text{ce}}$} (learner.south); \end{tikzpicture} \caption{Counterexample based synthesis of a Lyapunov function.} \label{fig:CEGIS_sheme} \end{figure}
In the following, we investigate how CEGIS can be used to construct Lyapunov functions and thus formally prove the stability of the dynamical system.
{\textbf {Input preparation}} -- The candidate Lyapunov function $V(x)$ is parameterized by a parameter vector $p$. To verify the conditions (\ref{Lypcon}), one calculates $\dot{V}(x)$ as \begin{equation} \dot{V}(x) = \nabla V(x) \cdot f(x) = \sum_{i=1}^{n} \dfrac{\partial V(x)}{\partial x_i} f_i(x). \end{equation} An initial parameter vector $p$ is selected randomly and passed to the verifier. \\
{\textbf {Verifier}} -- A counterexample is a state $x$ at which the candidate Lyapunov function $V(x)$ does not satisfy (\ref{Lypcon}). If asymptotic stability is to be proved, then correspondingly $\dot{V}(x) < 0$ must be violated. Consequently, the following system of inequalities must be solved. \begin{equation} V(x)\leq 0 \lor \dot{V}(x)\geq 0, \, x \in D\setminus \{ 0 \} \end{equation} The system of such inequalities can be implemented in Z3 as follows. \begin{align*} &\texttt{verifier = Solver()} \\ &\texttt{verifier.add(Or(V<=0,V\_dot>=0))} \\ &\texttt{verifier.check()} \end{align*} The set $D$ and the separate consideration of the equilibrium are implemented by additional constraints (e.g., \texttt{verifier.add(x<2, x!=0)}). If no solution exists, the output of \texttt{verifier.check()} is \texttt{unsat}. That is, no counterexample exists and the candidate function satisfies (\ref{Lypcon}) and thus is a Lyapunov function for the system $f(x)$. If the system is satisfiable, the output of the verifier is \texttt{sat}, and \texttt{verifier.model()} gives a counterexample $x_{\text{ce}}$ for the candidate function. This counterexample is passed to the learner as an additional inequality constraint. \\
{ \textbf{Learner}} -- The counterexamples from the verifier are added to the existing set of examples $X_{\text{ce}}$. Thus, $m=|X_{\text{ce}}|$ counterexamples result in the following $2m$ inequalities: \begin{equation} V(x_{\text{ce}\,i})>0,\, \dot{V}(x_{\text{ce}\,i})<0, \;i=1,\dots,m. \label{learner} \end{equation} This is implemented and verified in Z3: \begin{align*} &\texttt{learner = Solver()} \\ &\texttt{learner.add(V>0,V\_dot<0) } \\ &\texttt{learner.check()} \end{align*} If the inequality system has no solution, the output of the check is \texttt{unsat} and no Lyapunov function with the given structure exists. If the output of the check is \texttt{sat}, \texttt{learner.model()} gives a new parameter vector $p$ for a new candidate function. It should be noted that, due to the undecidable nature of the problem, the algorithm does not necessarily terminate, and the number of necessary iterations may even depend on the initial parameters, as will be shown later. The algorithm is summarized in \reffig{alg}.
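To make the interplay of the two phases concrete, the following Z3Py listing sketches one possible realization of this loop for a quadratic candidate $V(x) = p_1 x_1^2 + p_2 x_2^2$ and a simple two-dimensional system. It is meant as an illustrative sketch with a fixed iteration bound, not as the exact implementation used for the experiments.
\begin{verbatim}
from z3 import Reals, RealVal, Solver, Or, sat, substitute

x1, x2 = Reals('x1 x2')                      # state variables
p1, p2 = Reals('p1 p2')                      # parameters of the candidate

f1, f2 = x2, -x1 - x2                        # illustrative dynamics
V      = p1*x1**2 + p2*x2**2                 # candidate V(x; p)
V_dot  = 2*p1*x1*f1 + 2*p2*x2*f2             # derivative of V along f

learner = Solver()                           # collects counterexample constraints
pv1, pv2 = RealVal(0), RealVal(0)            # initial parameter guess

for _ in range(100):                         # iteration bound m_max
    # verifier: search for x != 0 with V(x) <= 0 or V_dot(x) >= 0
    Vx  = substitute(V,     (p1, pv1), (p2, pv2))
    Vdx = substitute(V_dot, (p1, pv1), (p2, pv2))
    verifier = Solver()
    verifier.add(Or(x1 != 0, x2 != 0), Or(Vx <= 0, Vdx >= 0))
    if verifier.check() != sat:
        print('Lyapunov function found with p =', pv1, pv2)
        break
    ce = verifier.model()                    # counterexample x_ce
    c1, c2 = ce.eval(x1, True), ce.eval(x2, True)
    # learner: require V > 0 and V_dot < 0 on all counterexamples so far
    learner.add(substitute(V,     (x1, c1), (x2, c2)) > 0,
                substitute(V_dot, (x1, c1), (x2, c2)) < 0)
    if learner.check() != sat:
        print('no Lyapunov function with this ansatz')
        break
    m = learner.model()
    pv1, pv2 = m.eval(p1, True), m.eval(p2, True)
\end{verbatim}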
\begin{figure}[!h] \centering \vspace*{3px} \begin{tikzpicture}[>=latex] \node [](initial) {$V,D$}; \node[state, below=0.5cm of initial] (verifier) { \begin{tabular}{l} \multicolumn{1}{c}{\textbf{Verifier}} \\ calculate $\dot{V}(x)$ \\ solve $V(x)\leq 0 \lor \dot{V}(x)\geq 0$ \\ \phantom{solve }$ x \in D\setminus \{ 0 \}$ \\ \end{tabular} }; \draw [->] (initial.south) to (verifier.north); \node [below=0.75cm of verifier](no_LP) {Lyapunov function found}; \draw [->] (verifier.south) to node[left] {\texttt{unsat}} (no_LP.north); \node[state, below=0.5cm of no_LP] (learner) { \begin{tabular}{l} \multicolumn{1}{c}{\textbf{Learner}} \\ add $x_{\text{ce}}$ to $X_{\text{ce}}$ \\ solve $ V(x_{\text{ce}\,i})>0,\, \dot{V}(x_{\text{ce}\,i})<0$\\ \phantom{solve }$\forall x_{\text{ce}\,i} \in X_{\text{ce}}$ \end{tabular} }; \node [below=1cm of learner](AuxNode) {}; \node [left=-0.5cm of AuxNode](no_LP) {\begin{tabular}{c} no Lyapunov \\ function exists \end{tabular}}; \node [right=+0.5cm of AuxNode](stop) {stop}; \draw [->] ($(no_LP.north |- learner.south)$) -- node[left] {\texttt{unsat}} (no_LP.north); \draw [->] ($(stop.north |- learner.south)$) -- node[right] {$|X_{\text{ce}}|>m_\text{max}$} (stop.north); \draw[->](learner.west) -- ++(-0.2,0) -- node[left] {\begin{tabular}{r} \texttt{sat} \\ $V$ \end{tabular}} ($(learner.west |- verifier.west)+(-0.2,0)$) -- (verifier.west); \draw[<-](learner.east) -- ++(+0.2,0) -- node[right] {\begin{tabular}{l} \texttt{sat} \\ $x_{\text{ce}}$ \end{tabular}} ($(learner.east |- verifier.east)+(+0.2,0)$) -- (verifier.east); \end{tikzpicture} \caption{Algorithm for counterexample based synthesis of a Lyapunov function.} \label{alg} \end{figure} \section{Main result} In this section, we present the framework for the construction of Lyapunov functions for certain chosen classes of dynamical systems. \subsection{Constructions for Nonpolynomial Systems} In order to handle nonpolynomial dynamical systems in Z3, we may try to approximate these by some polynomial series and bound the estimation error. We include the resulting error bound in the inequality system in order to verify the original problem. For instance, consider the exponential function, which can be approximated by the following Taylor series: \begin{equation} \exp (x) := \sum_{n=0}^{N} \dfrac{x^n}{n!} + \sum_{n=N+1}^{\infty} \dfrac{x^n}{n!}. \end{equation} For a constrained state space $|x|\leq1+\frac{N}{2}$, the remainder term $\left | \varepsilon(x) \right |$ is bounded by \begin{equation} \label{cons_exp} \left | \sum_{n=N+1}^{\infty} \dfrac{x^n}{n!} \right | = \left | \varepsilon(x) \right | \leq 2\dfrac{|x|^{N+1}}{(N+1)!}. \end{equation} The order of the Taylor series $N$ must satisfy at least $|x|\leq1+\frac{N}{2} \; \forall x\in D$ such that (\ref{cons_exp}) is valid. The exponential function is replaced with the corresponding Taylor series and the additional variable $\varepsilon$: \begin{equation} \label{Eq:exp_series} \exp (x) := \sum_{n=0}^{N} \dfrac{x^n}{n!} + \varepsilon(x). \end{equation} The constraint (\ref{cons_exp}) is added to the verifier's inequality system. The search space for the counterexamples now also includes all possible values for $\varepsilon_\mathrm{ce}$. Thus, the resulting counterexamples $x_{\text{ce}\,i}$ with the approximation error $\varepsilon_\mathrm{ce\,i}$ are then inserted into the learner's inequality system. Since the deviation of the approximation is bounded, the Lyapunov function is also verified for the original system.
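As an illustration, the following Z3Py sketch shows how such a replacement can enter the verifier's inequality system for the scalar dynamics $\dot{x} = x^2 - \exp(x) + 1$ considered later in the Examples section; the order $N=3$ and the candidate parameter $p=-1$ follow that example, while the variable names and code structure are our own illustrative choices.
\begin{verbatim}
from z3 import Reals, Solver, Or
from math import factorial

N = 3
x, eps = Reals('x eps')
p = -1                      # candidate parameter, V = p*x^2

# Truncated Taylor polynomial of exp(x) up to order N.
exp_taylor = 1 + sum(x**n / factorial(n) for n in range(1, N + 1))

V = p * x**2
# dV/dt for xdot = x^2 - exp(x) + 1, with exp replaced by series + eps.
V_dot = 2 * p * x * (x**2 + 1 - exp_taylor - eps)

verifier = Solver()
verifier.add(Or(V <= 0, V_dot >= 0))     # negated Lyapunov conditions
verifier.add(x >= -2, x <= 2, x != 0)    # domain D without the equilibrium

# Remainder bound |eps| <= 2|x|^(N+1)/(N+1)!; N+1 is even here, so x^(N+1) >= 0.
bound = 2 * x**(N + 1) / factorial(N + 1)
verifier.add(eps <= bound, -bound <= eps)

print(verifier.check())   # sat: model() yields a counterexample (x_ce, eps_ce)
\end{verbatim}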
In practice, it may be necessary to increase the order $N$ of the polynomial series and thus tighten the approximation error. \subsection{Switching Systems} \label{sec:switch} Consider a state-dependent switching system \begin{equation} \dot{x} = \begin{cases} f_1(x) & \text{if\ } x\in D_1 \\ f_2(x) & \text{if\ } x\in D_2 \\ \cdots \end{cases}, \end{equation} where $D_i$ are nonoverlapping subsets of $D \subseteq \ensuremath{\mathbb{R}}^n$. For the stability analysis, it can be exploited that the properties of each subsystem $f_i$ are of concern only in the region $D_i$ where this subsystem is active. $V(x)$ has to satisfy (\ref{Lypcon}) in all subsets independently to verify the stability of the whole switched system \cite{wang2010}. To handle such switching systems, the algorithm must be adjusted. For each region $D_i$, (\ref{Lypcon}) must be verified, where $\dot{V}(x)$ is calculated separately in each case. Only if the candidate function can be verified for every subset is a single Lyapunov function for the whole switching system found. \subsection{Discrete-Time Systems} \label{sec:dis} We adapt the CEGIS framework for reasoning on discrete-time systems. Consider an $n$-dimensional discrete-time autonomous dynamical system \begin{equation} x_{k+1} =f(x_k) \end{equation} with $x=0$ an equilibrium and $x \in D \subseteq \ensuremath{\mathbb{R}}^n$. If there exists a function $V(x)$ satisfying \begin{equation*}\label{Lypdis} \begin{split} V(0) &= 0 \\ V(x) &>0 \;\; \forall x \in D\setminus \{ 0 \}\\ V(f(x)) -V(x) &\leq 0 \;\; \forall x \in D\setminus \{ 0 \}, \end{split} \end{equation*} then the equilibrium $x=0$ is stable. Furthermore, if $V(f(x)) -V(x) < 0$, the equilibrium $x=0$ is asymptotically stable. Accordingly, in the algorithm the calculation of $\dot{V}(x)$ is replaced by $V(f(x)) -V(x)$. Additionally, for discrete-time systems a Lyapunov function involving absolute values is also possible. The absolute value can be represented as an \texttt{If}-condition in the solver Z3. \section{Examples} In this section we illustrate the presented approach on several examples of dynamical systems and discuss the performance and limitations of the CEGIS procedure. \subsection{Simple numerical system} Consider the system \begin{equation} \begin{split} \dot{x_1} =& x_2\\ \dot{x_2} =& -x_1 - x_2 \end{split} \end{equation} with $D=\ensuremath{\mathbb{R}}^2$. Assume a quadratic Lyapunov function $V(x) = p_1 x_1^2 + p_2 x_2^2$; then \begin{equation} \dot{V}(x) = \nabla V \cdot f(x) = 2p_1 x_1 x_2 + 2p_2 x_2 (-x_1- x_2). \end{equation} Choosing an initial parameter vector $p=[0,0]$, the verifier returns the counterexample $x_{\text{ce}} = [1,-1]$. The learner then determines a candidate Lyapunov function $V(x) = \frac{1}{4} x_1^2 + \frac{1}{4} x_2^2$, which is verified by Z3. If other initial parameters are chosen, the performance of the algorithm changes as well: with an initial $p=[-1,1/2]$, two iterations are needed to find the Lyapunov function $V(x) = \frac{1}{2} x_1^2 + \frac{1}{2} x_2^2$. With $p=[-1,1]$, no Lyapunov function can be found even after 100 iterations.
\begin{comment} $p=[0,0]$ :1 iteration $\rightarrow$ 0.18 s $\rightarrow$ $V(x) = \frac{1}{4} x_1^2 + \frac{1}{4} x_2^2$ \\ $p=[-1,1/2]$ :2 iteration $\rightarrow$ 0.27 s $\rightarrow$ $V(x) = \frac{1}{2} x_1^2 + \frac{1}{2} x_2^2$ \\ $p=[-1,1]$ :no result after 100 iteration $\rightarrow$ 161.99 s \\ $p=[-1,1]$ with only last 10 counterexamples: 13 iterations $\rightarrow$ 3.48 s $\rightarrow$ $V(x) = \frac{1}{2} x_1^2 + \frac{1}{2} x_2^2$ \end{comment} \subsection{Continuous Nonpolynomial Scalar System} Consider the system \begin{equation} \dot{x} = f(x) = x^2 - \exp(x) + 1 \end{equation} with $D=\{x: x \in [-2,2]\}$. We assume a quadratic candidate Lyapunov function $V(x)= px^2$ and calculate \begin{equation} \dot{V}(x) = \nabla V \cdot f(x) = 2px(x^2 - \exp(x) + 1). \end{equation} The exponential function is replaced by its truncated Taylor series and an approximation error $\varepsilon$: \begin{align} \dot{V}(x) = 2px (x^2+1-\sum_{n=0}^{N} \dfrac{x^n}{n!} - \varepsilon(x)). \end{align} The approximation error $\varepsilon$ is bounded as shown in (\ref{cons_exp}). For this bound to be valid for all $x\in [-2,2]$, the order of the Taylor series must be at least $N=2$. For $N=2$ and an initial $p=-1$, the algorithm finds the counterexamples $x_{\text{ce}} = 1, \varepsilon_{\text{ce}}=0$ and $x_{\text{ce}} = \frac{3}{2}, \varepsilon_{\text{ce}}=1$. There is no solution for the resulting inequality system (\ref{learner}), and the learner returns \texttt{unsat}. Hence, there is no quadratic Lyapunov function for this approximation. However, this does not mean that none exists for the original system. If the order of the Taylor series is increased to $N=3$ and an initial $p=-1$ is chosen, the algorithm returns the Lyapunov function $V=x^2$. Because of the constraints on the error, this is also a Lyapunov function for the original problem. \subsection{Multidimensional System with Trigonometric Function} Consider the system \begin{equation} \begin{split} \dot{x_1} &= -x_1^3 + x_2 \\ \dot{x_2} &= -\sin (x_1) - x_2 \end{split} \end{equation} with $D=\{x_1: x_1 \in [-3,3]\}$. Assume a quadratic candidate Lyapunov function $V(x)= p_1 x_1^2 + p_2 x_2^2$; then \begin{equation} \begin{split} \dot{V}(x) = \nabla V \cdot f(x) = &2p_1 x_1 (-x_1^3 + x_2) \\ &+ 2p_2 x_2 (-\sin (x_1) - x_2). \end{split} \end{equation} The sine function is replaced by the following Taylor series and an approximation error $\varepsilon$: \begin{equation} \sin(x) := \sum_{n=0}^{N} (-1)^n \dfrac{x^{2n+1}}{(2n+1)!} + \varepsilon(x). \end{equation} The approximation error is bounded as follows: \begin{equation} \left | \varepsilon(x) \right | \leq \dfrac{|x|^{N+1}}{(N+1)!}. \end{equation} With an initial $p=[-1,0.5]$ and $N=3$, the algorithm returns the Lyapunov function $V = \frac{1}{2} x_1^2 + \frac{1}{2} x_2^2$. \subsection{Switched System} Consider a piecewise defined system \begin{equation} \begin{array}{lll} \dot{x_1}=-x_2; & \dot{x_2}=2x_1 & \text{if }x_1x_2 \leq 0 \\ \dot{x_1}=-2x_2; & \dot{x_2}=x_1 & \text{if }x_1x_2 > 0 \end{array} \end{equation} with $D=\ensuremath{\mathbb{R}}^2$. The algorithm is adapted as described in Section \ref{sec:switch}, and the Lyapunov function $V = \frac{1}{2} x_1^2 + \frac{1}{4} x_2^2$ is found. Note that $V$ has to satisfy the respective conditions only in the region where each subsystem is active.
If the switching rule is changed to \begin{equation} \begin{array}{lll} \dot{x_1}=-x_2; & \dot{x_2}=2x_1 & \text{if }x_1x_2 > 0 \\ \dot{x_1}=-2x_2; & \dot{x_2}=x_1 & \text{if }x_1x_2 \leq 0 \end{array} \end{equation} then the Learner returns \texttt{unsat} and no Lyapunov function with the given structure exists. Indeed, the system is unstable as can be shown analytically. \subsection{Discrete-Time System} Consider a discrete-time nonlinear system \begin{equation} \begin{split} x_{1,k+1} &= \dfrac{1}{2}x_{1,k} - \dfrac{1}{4}\arctan(x_{2,k}) \\ x_{2,k+1} &= -\dfrac{1}{4}x_{1,k} + \dfrac{3}{4}x_{2,k} \end{split} \end{equation} with $D=\{x_1: x_1 \in [-1,1]\}$. To handle the $\arctan $ it is approximated with a Taylor series with $N=5$. \begin{equation} \label{Eq:arctan_series} \arctan(x) := \sum_{n=0}^{N} (-1)^n \dfrac{x^{2n+1}}{(2n+1)} + \varepsilon(x) \end{equation} The Remainder is bounded as follows \cite{medina2006}. \begin{equation} \left | \varepsilon(x) \right | \leq \dfrac{|x|^{2N+3}}{2(2N+3)} \end{equation} Assume a candidate Lyapunov function $V(x)= p_1 |x_1| + p_2 |x_2|$. A Lyapunov function $V(x)= \frac{1}{2} |x_1| + \frac{1}{2} |x_2|$ is verified. \\ \section{Conclusions and Outlook} We extended the CEGIS framework to certify the stability of a given continuous/discrete-time nonpolynomial system. We illustrated the effectiveness of the approach on several examples of dynamical systems. By synthesizing barrier certificates not only stability but also other temporal properties can be investigated, by formulating a control policy. In addition to stability analysis of dynamical systems, the CEGIS approach can also be used for automatic controller design by synthesizing a control law or a control Lyapunov function \cite{han2015,ravanbakhsh2015,verdier2020}. Moreover important barrier certificates and reachability propositions can be addressed formally using Z3 and CEGIS framework \cite{jagtap2021,anand2019}. Such a framework is especially useful for approaches where the controller needs to be automatically adjusted online, as in an adaptive control approach. In particular, an interesting application of CEGIS-framework arises in the stability analysis of adaptive dynamic programming schemes, where one has to guarantee that the desired critic parameters force the positive definiteness of the so-called Q-function approximations at each time step of the scheme \cite{beckenbach2018,beckenbach2019_overviewQMPC}. Using CEGIS one may construct the required Q-functions and verify the conditions of the LaSalle-Yoshizawa theorem \cite[Theorem 18]{krstic1995} simultaneously. An expansion of the approach is to consider the framework of symbolic regression \cite{mcgough2010} by trying to find suitable parametric structures for $V$ and thus support the automation idea. \bibliographystyle{IEEEtran}
\section{Introduction} The `cosmic web' is a term used to evoke the structure of the Universe on the very largest of scales. In this model, dense clusters and galaxy groups are connected by diffuse filaments, forming a web-like structure, and are interspersed with large, empty voids. Galaxy surveys have provided a strong empirical basis for this model \citep[e.g.][]{Baugh2004}, whilst cosmological simulations have shown it to be a consequence of gravitational instabilities acting upon small density perturbations in the early Universe \citep[e.g.][]{Cen1999,Dave2001}. These same simulations, however, have predicted something more: that up to 40\% of the baryonic content of the Universe resides along these filaments and around the periphery of clusters and galaxy groups, existing in a diffuse, highly ionised plasma, the so-called `warm-hot intergalactic medium' (WHIM). To date, the WHIM has proven difficult to detect; however, a number of recent works in this area have made increasingly convincing claims of detection \citep[see, e.g.:][]{Eckert2015,Nicastro2018,Tanimura2019,Tanimura2020,Macquart2020}. This sparse, weakly magnetised WHIM is also predicted to have an associated radio signature, the `synchrotron cosmic web' \citep[see:][]{Brown2011,Vazza2015,Vazza2019}. As part of ongoing large-scale structure formation, cosmological simulations predict strong accretion shocks---having Mach numbers in the range $\mathcal{M} \sim 10 - 100$---from in-falling matter along filaments and around the outskirts of clusters. These shocks should be capable of accelerating the electrons from within the WHIM to high energies by way of diffusive shock acceleration, and these high-energy electrons, in turn, are expected to radiate this energy as synchrotron emission as they interact with weak intercluster magnetic fields. In this way, the cosmic web is expected to have a synchrotron radio signature that traces out accretion shocks along its boundaries. The detection and confirmation of this radio emission would allow us to validate models of the large-scale structure of the Universe, as well as giving us insight into the poorly understood intercluster magnetic environments at the sites of these shocks. This synchrotron cosmic web, however, is predicted to be extremely faint and has proven especially difficult to detect. Large-scale magnetohydrodynamic simulations by \citet{Vazza2019}, for example, point to a large population of radio-relic-like shocks well below the level of direct detection of any current or future radio telescopes. Only a small fraction of the very brightest knots in these shocks rise to the level of direct detection, and these are located principally around the most massive galaxy clusters \citep{Hodgson2021a}. \citet{Vacca2018} did in fact claim direct detection of numerous large-scale synchrotron sources associated with the cosmic web, but follow-up observations by \citet{Hodgson2020} rebuffed these claims. More recently, \citet{Govoni2019} claimed the detection of a radio `ridge' extending between the clusters Abell 399 and Abell 401, suggesting a diffuse, energetic and magnetised plasma extending between the merging clusters. Whilst this detection goes some way to validating our models, in this particular case the energy is provided by the merging dynamics of the clusters and is qualitatively different to the more general mechanisms of the synchrotron cosmic web.
Other attempts to detect the synchrotron cosmic web have turned to statistical detection techniques to reveal faint emission sources buried beneath the noise of our current observations. Foremost among these methods is the cross-correlation technique. This method involves constructing a `best guess' kernel of the probable locations of cosmic web emission, and performing a radial cross-correlation of this kernel with the radio sky. A positive correlation at \SI{0}{\degree} offset is, in theory, indicative of a detection. In this way, this method hopes to reduce the noise by effectively integrating over a large enough area of the sky. Both \citet{Brown2017} and \citet{Vernstrom2017} used cross-correlation methods, and both were unable to make a definitive detection. In the former study, no positive correlation was detected. In the latter, \citet{Vernstrom2017} did indeed report a correlation; however, the association of other sources such as active galactic nuclei (AGNs), star-forming galaxies (SFGs), and other cluster emission with their correlation kernel meant they were unable to attribute the peak at \SI{0}{\degree} to the cosmic web alone. Recently, however, \citet{Vernstrom2021} (herein: V2021) have reported a definitive detection of the synchrotron cosmic web using an alternative statistical method known as stacking. Their method attempted to measure the mean intercluster radio emission between pairs of close-proximity luminous red galaxies (LRGs). LRGs are known to have a strong association with the centres of clusters and galaxy groups \citep{Hoessel1980,Schneider1983,Postman1995}. Close-proximity pairs of LRGs are therefore likely to indicate close-proximity overdense regions of our Universe, and in turn we expect some fraction of these to be connected by a filament. Thus, V2021 stacked hundreds of thousands of low-frequency radio images of such pairs, which were rotated and rescaled so as to align all pairs to a common grid, before being averaged so as to find the mean image. After subtracting out a model for the LRG and cluster contribution, they reported finding excess emission with $> 5 \sigma$ significance along the length of the intercluster region. Moreover, this excess was detected by two independent instruments---in the Galactic and Extragalactic All-sky MWA\footnote{Murchison Widefield Array \citep[MWA;][]{Tingay2013}} survey \citep{Wayth2015,HurleyWalker2017} and by the Owens Valley Radio Observatory Long Wavelength Array \citep[OVRO-LWA;][]{Eastwood2018}---and across four frequencies ranging from \SIrange[range-phrase=--,range-units=single]{73}{154}{\mega \hertz}. A null test, formed by stacking physically distant LRG pairs for which we do not expect a connecting filament to exist, returned no excess emission. After excluding multiple alternative explanations, V2021 suggested the most likely explanation for this excess intercluster signal was the cosmic web itself. The result reported in V2021 is convincing, but it is also surprising. Previous intercluster magnetic field estimates provided upper limits on the order of just a few nG \citep[e.g.][]{Pshirkov2016,OSullivan2019,Vernstrom2019}. However, the reported excess emission supports intercluster magnetic field strengths averaging \SIrange[range-phrase=--,range-units=single]{30}{60}{\nano \gauss}, and moreover these estimates are strictly a lower limit as some significant fraction of stacked pairs will not in fact be connected by a filament.
More recent follow-up work by \citet{Hodgson2021b}, which stacked a simulated radio sky---including cosmic web emission---from the FIlaments and Galactic RadiO \citep[FIGARO;][]{Hodgson2021a} simulation, failed to reproduce excess intercluster emission. With perfect knowledge of their simulated sky, this work stacked the known locations of dark matter halos rather than LRGs. They reported excess emission being detected on the immediate interior of halo pairs, associated with asymmetric accretion shocks onto clusters and galaxy groups, but no detectable emission along the true intercluster region. This work also explored the role of other contaminating sources, such as AGN, SFG, and radio halo populations, as well as the effect of sidelobes from the dirty interferometric beam, finding none of these to be significant. The discrepancy between V2021 and these simulated results, which build on our current best simulations of the cosmic web, remains difficult to explain. Given the importance of the result of V2021, in the present study we attempt to reproduce and corroborate their result. We do so using the upgraded MWA Phase II instrument \citep{Wayth2018}, observing at \SI{118}{\mega \hertz}, and take advantage of improvements to calibration and imaging pipelines that have appeared since the original GLEAM survey. We image 14 fields spanning the same LRG pairs as used in V2021, which we then stack using independent stacking and modelling pipelines. Our aim is to closely adhere to the methodology used in V2021 whilst seeking to measure the excess intercluster emission more accurately, thanks to the improved noise characteristics of our observations. Throughout this paper, we assume a $\Lambda$CDM cosmological model, with density parameters $\Omega_{\text{BM}} = 0.0478$ (baryonic matter), $\Omega_{\text{DM}} = 0.2602$ (dark matter), and $\Omega_{\Lambda} = 0.692$, and the Hubble constant $H_0 =$ \SI{67.8}{\kilo \metre \per \second \per \mega \parsec}. All stated errors indicate one standard deviation. \section{Luminous red galaxy pairs} Only a few thousand clusters are currently catalogued with robust X-ray or Sunyaev–Zeldovich measurements \citep[e.g.][]{MCXC2011,PlanckCollab2016}. This number is much smaller than the expected number of clusters and galaxy groups \citep[e.g.][]{Wen2018}, and is also too few to be useful for our present purposes as we do not expect the faint intercluster emission to become detectable above our field noise after stacking so few images. Instead, as with V2021, we turn to using LRGs as a proxy for such overdense regions. LRGs are massive, especially luminous early-type galaxies, and are closely associated with overdense regions of the Universe. This association, however, comes with some caveats as explored in detail in \citet{Hoshino2015}. To summarise briefly here, in the first instance not all massive clusters have an LRG as their central galaxy: for the most massive clusters the probability of this association peaks at 95\%; however, this association drops off steeply for lower mass systems, reaching just 70\% for clusters of mass $M_{200} =$ \SI{E14}{\solarmass}. An additional error is introduced by bright LRGs that are located in clusters but do not align with the cluster centre, and this `miscentred' fraction is substantial at \SIrange[range-phrase=--,range-units=single]{20}{30}{\percent}. Nonetheless, as with V2021, these caveats are acceptable given the vastly greater number of potential clusters that LRGs allow us to identify.
It does, however, mean that any excess emission attributed to intercluster regions are strictly lower limits. As with V2021, we use the LRG catalogue from \citet{Lopes2007}. This catalogue incorporates approximately $1.4$ million LRGs extracted from the fifth data release of the Sloan Digital Sky Survey \citep{York2000}, out to a redshift of $z < 0.70$. \citet{Lopes2007} have used an empirical based method to calculate spectroscopic redshifts for this population from just three bands (\textit{gri}), with an estimated error $\sigma = 0.027$ for $z < 0.55$, and $\sigma = 0.040$ up to $z = 0.70$. In V2021, a list of LRG pairs was calculated that met the following set of conditions. First, the separation between the pairs was less than \SI{15}{\mega \parsec}. The metric used was the comoving distance, and this condition also included a lower bound of \SI{1}{\mega \parsec} (T. Vernstrom, personal communication). Additionally, the pairs were required to have an angular separation on the celestial sphere in the range \SI{20}{\arcminute}~$< \theta <$~\SI{180}{\arcminute}. We find 1,078,730 such valid pairs, of which 601,435 ultimately overlap with our fields, and label this catalogue `Max \SI{15}{\mega \parsec}'. In \autoref{fig:lrgmap} we show the location of the LRG pair population on the celestial sphere, with those in red overlapping with our fields. V2021 reported finding just 390,808 pairs that satisfied these conditions. As it turns out, this reduced catalogue was the result of a bug in their code (T. Vernstrom, personal communication). Therefore, to compare like for like, we additionally include this abridged catalogue as `LRG-V2021'. We also provide two additional catalogues of LRG pairs with differing selection criteria. In the first, we reduce the maximum spatial separation to \SI{10}{\mega \parsec} (`Max \SI{10}{\mega \parsec}'), of which there are 270,458 (153,433 overlapping) entries. The motivation for this catalogue arises from our expectation that intercluster emission should be brighter for cluster pairs in closer proximity to each other where they are more likely to be interacting, possibly triggering pre- or post-merger shocks known to produce synchrotron emission. We also include a final catalogue with modified angular constraints, such that the minimum and maximum angular separations are shifted to \SI{15}{\arcminute} and \SI{60}{\arcminute}, respectively (`Max \SI{60}{\arcminute}'). This catalogue contains 824,773 (436,899 overlapping) entries, and is motivated by concerns about resolving out large-scale angular structures, which is an aspect we discuss later in \autoref{sec:resolvingout}. \begin{figure} \centering \includegraphics[width=\linewidth,clip,trim={0 0cm 0 0cm}]{lrgmap.png} \caption{LRG pair distribution on the sky. 
The red points indicate pairs used in our stacks, whilst the grey points are those pairs either outside our fields or within an exclusion zone.} \label{fig:lrgmap} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{lrg-pairs.pdf} \caption{The LRG pair distributions by redshift (left), angular separation (centre), and spatial separation (right), for each of the LRG pair catalogues used in our stacks.} \label{fig:lrgpairs} \end{figure*} \begin{table*} \centering \begin{tabular}{rccccccc} \toprule & \textbf{Spatial criteria} & \textbf{Angular criteria} & \textbf{N} & $\mathbf{\expval{z}}$ & $\mathbf{\expval{\Delta r}}$ & $\mathbf{\expval{\Delta \theta}}$\\ & [Mpc] & [arcminute] & & & [Mpc] & [arcminute] \\ \midrule \textbf{Max 15 Mpc} & $1 < r < 15$ & $20 < \theta < 180$ & 601,435 (1,078,730) & 0.18 & 11.5 & 67.8 \\ \textbf{Max 10 Mpc} & $1 < r < 10$ & $20 < \theta < 180$ & 153,433 (270,458) & 0.098 & 7.6 & 65.8 \\ \textbf{Max 60$'$} & $1 < r < 15$ & $15 < \theta < 60$ & 436,899 (824,773) & 0.32 & 11.4 & 28.5 \\ \textbf{LRG-V2021} & $1 < r < 15$* & $20 < \theta < 180$* & 219,684 (390,808) & 0.14 & 10.3 & 83.8 \\ \bottomrule \end{tabular} \caption{LRG pair statistics comparison between each of the LRG pair catalogues. We show the spatial and angular selection criteria for each catalogue, the number of LRG pairs that overlap with our fields (and the total pairs), their mean redshift, their mean spatial separation, and their mean angular separation, respectively. Spatial distances use a comoving metric. *Due to an error in the work of V2021, the LRG-V2021 catalogue is an incomplete catalogue that nonetheless adheres to these ranges.} \label{tab:lrgpairs} \end{table*} In \autoref{fig:lrgpairs}, we show the redshift, angular separation and spatial separation distributions of each of these LRG catalogues, for those LRG pairs that overlap with our fields and are included in our stacks. Note the double peak structure present in the redshift distribution of the Max 15 Mpc and Max \SI{60}{\arcminute} catalogues: this is a function of the underlying distribution of the LRG catalogue, which exhibits a small peak around $z = 0.08$ and a much larger peak around $z = 0.5$, combined with the effect, at increasing redshifts, of the dual constraints of the minimum angular separation and the maximum spatial separation. In the case of the \SI{15}{\mega \parsec} criterion, we find a mean redshift of $\expval{z} = 0.185$, a mean separation of $\expval{r} =$~\SI{11.6}{\mega \parsec}, and a mean angular separation of $\expval{\theta} =$~\SI{67}{\arcminute}. The mean values for the other LRG pair catalogues are provided in \autoref{tab:lrgpairs}. \section{Observations \& data processing} \subsection{Data selection} The original GLEAM survey, which was used in V2021, was observed using the MWA Phase I \citep{Tingay2013}. This consisted of 128 tiles positioned to give a maximum baseline of approximately \SI{3}{\kilo \metre} when observing at zenith, and a large number of baselines under \SI{100}{\metre}; when observing near the horizon these baselines are significantly foreshortened. The upgrade to the MWA Phase II \citep{Wayth2018} in late 2017 was primarily a reconfiguration of the tile positions: the same 128 tiles were positioned to give an increased maximum baseline of almost \SI{6}{\kilo \metre} as well as a much smoother distribution of baselines.
The effect of these changes was to give Phase II almost twice the resolution as well as a better behaved dirty beam with reduced sidelobes, whilst otherwise leaving the point source sensitivity unchanged. Sidelobe confusion is a major source of noise in Phase I observations, whereas in Phase II observations the higher resolution allows much deeper cleaning, which has flow on effects to further reduce image noise even when accounting for the resolution difference. In the observations used in this study, we take advantage of these improved characteristics of the MWA Phase II. We have drawn our observations from those made in preparation for the upcoming GLEAM-X survey \citep{HurleyWalker2017b}. GLEAM-X has observed the sky at frequencies ranging from \SIrange[range-phrase=--,range-units=single]{72}{231}{\mega \hertz} in short duration `snapshots' of approximately 2 minutes. These observation runs are typically observed at a fixed pointing in a `drift scan' mode, where the celestial sphere is allowed to freely rotate through the primary beam. We have identified 14 fields to image that best span the LRG population. These fields are centred at declinations of $\delta = \{+2^\circ, +18^\circ\}$ and spanning the right ascension range of $120^\circ \leq \alpha \leq 240^\circ$ at intervals of \SI{20}{\degree}. From the archive of GLEAM-X observations, we filter for snapshots observed at \SI{118.5}{\mega \hertz} and where their pointing centres are located near the centre of these 14 fields, with a tolerance $\alpha \pm 5^\circ$ and $\delta \pm 3^\circ$. There are 512 observations that match this criteria, made during runs in February--March 2018, May--June 2018, January--February 2019, and March 2019. After calibration, imaging, and quality control checks, however, this number is reduced to 291 snapshots, constituting approximately 10 hours of observations. In \autoref{tab:fields} we tabulate these 14 fields and some of their properties. \begin{table*} \centering \begin{tabular}{ccccccp{0.35\linewidth}} \toprule \textbf{ID} & \textbf{RA} & \textbf{Dec} & \textbf{Snapshot}s & \textbf{Noise} & \textbf{Model deviation} & \textbf{Notes} \\ & [deg] & [deg] & & [\SI{}{\milli \jansky \per \beam}] & $\mu$ | $\sigma$ [dex] & \\ \midrule 1 & 120 & 3 & 19 & 8.2 & 0.000 / 0.025 & \\ 2 & 140 & 3 & 35 & 5.8 & 0.001 / 0.021 & Hydra A present in field, peaking at $\sim$\SI{260}{\jansky \per \beam}. \\ 3 & 160 & 3 & 28 & 7.1 & 0.000 / 0.022 & Affected by sidelobes from Virgo A\\ 4 & 180 & 3 & 35 & 6.6 & -0.002 / 0.028 & Virgo A present in field peaking at $\sim$\SI{526}{\jansky \per \beam}, and second bright source present 3C 273 peaking at $\sim$\SI{105}{\jansky \per \beam}.\\ 5 & 200 & 3 & 31 & 8.8 & 0.000 / 0.022 & Large-scale sidelobe pattern present from Centaurus A which is positioned south out-of-field in sidelobe of the primary beam. Virgo A also present in field.\\ 6 & 220 & 3 & 15 & 8.4 & 0.001 / 0.026 & \\ 7 & 240 & 3 & 11 & 10.0 & -0.001 / 0.022 & Hercules A present in field peaking at $\sim$\SI{377}{\jansky \per \beam}\\ 8 & 120 & 18 & 11 & 8.0 & -0.002 / 0.027 & \\ 9 & 140 & 18 & 17 & 6.2 & 0.000 / 0.026 & Large-scale sidelobe pattern from Virgo A in south, out-of-field. 
\\ 10 & 160 & 18 & 20 & 6.4 & -0.001 / 0.030 & \\ 11 & 180 & 18 & 24 & 6.8 & -0.001 / 0.035 & Both Virgo A and 3C 273 present in field.\\ 12 & 200 & 18 & 19 & 9.5 & -0.004 / 0.037 & Virgo A present, as well as large-scale sidelobe pattern from Centaurus A, positioned south out-of-field.\\ 13 & 220 & 18 & 11 & 11.1 & -0.003 / 0.036 & \\ 14 & 240 & 18 & 7 & 14.6 & 0.001 / 0.031 & \\ \bottomrule \end{tabular} \caption{A summary of the 14 fields imaged, observed by the MWA Phase II instrument at \SI{118}{\mega \hertz}. The fields span the right ascension range \SI{120}{\degree} to \SI{240}{\degree} in \SI{20}{\degree} increments, at declinations of \SI{3}{\degree} and \SI{18}{\degree}. We indicate the number of \SI{112}{\second} duration snapshots used in each field mosaic, and the resulting noise at the centre of the field. The model deviation describes the ratio of the measured flux density of sources after performing source finding, in comparison to the original calibration sky model; the $\mu$ term describes the mean values of these ratios, whilst $\sigma$ shows the standard deviation of these ratios.} \label{tab:fields} \end{table*} \subsection{Data processing} All observations are centred at \SI{118.5}{\mega \hertz}, of \SI{112}{\second} duration, spanning a bandwidth of \SI{30.72}{\mega \hertz}, and correlated at a resolution of \SI{10}{\kilo \hertz} and \SI{0.5}{\second}. This is further averaged to \SI{40}{\kilo \hertz} and \SI{4}{\second} prior to calibration and imaging to ease data storage and processing requirements. All subsequent data processing occurs on a per-snapshot basis until final mosaicing. Calibration is performed using an in-field radio sky model. This sky model has been constructed in preparation for the GLEAM-X survey, and is principally based on the GLEAM sky catalogue. It does, however, include a number of additional sources, including better models for the so-called `A-team' of extremely bright radio sources, such as Hydra A, Virgo A, Hercules A, and Centaurus A, all of which populate our fields. The GLEAM sky catalogue is known to have an error of \SI{8.0(5)}{\percent} up to declination \SI{18.5}{\degree}, and an uncertainty of \SI{11(2)}{\percent} for more Northern declinations. We calibrate on all sources from this sky model that are within a \SI{20}{\degree} radius of our field centre, and having a primary beam-attenuated apparent flux density of at least \SI{700}{\milli \jansky}. These sources are predicted into the visibilities using the full embedded element primary beam model \citep{Sokolowski2017}. Calibration is performed using the updated MWA calibrate tool \footnote{See \url{https://github.com/torrance/MWAjl/}.}, which finds a full Jones matrix solution for each antenna, independently for each pair of channels, and with the solution interval set to the duration of the snapshot. Baselines longer than approximately \SI{3.7}{\kilo \metre} are excluded from consideration during calibration, since these baselines are increasingly sensitive to angular scales of a higher resolution than the original GLEAM catalogue. Calibration solutions are visually inspected, and any antennae which have failed to well converge are flagged at this time. No self-calibration is performed, as in practice we have found this to be unnecessary. Imaging is performed using \texttt{wsclean} \citep{Offringa2014}. 
We weight baselines using the Briggs formulation, with a robustness factor of +1; additionally baselines smaller than $15 \lambda$, which are sensitive to emission on angular scales larger than \SI{3.8}{\degree}, are excluded to avoid any kind of large-scale contamination from Galactic emission. Cleaning is then performed down to a threshold that depends on two factors: cleaning continues until first the `auto-mask' threshold is reached, which is set at a factor of 3 times the residual map noise, and then cleaning continues only on those pixels previously cleaned down to the `auto-threshold' limit, which we set as the estimated residual map noise. Typical values for the residual map noise of these individual snapshots is around \SIrange[range-phrase=--,range-units=single]{15}{20}{\milli \jansky}. During imaging, we split the \SI{30.72}{\mega \hertz} band into four equally sized channels to account for the typical flux density changes of sources over this frequency range due both to intrinsic properties and beam attenuation. We do, however, perform joint-channel cleaning, where clean peaks are chosen based on a full-bandwidth mean map, and the peak value is estimated using a linear fit across each output channel. Note that whilst \texttt{wsclean} does have multiscale clean functionality, we have chosen not to use this, so that any faint, extended emission sources \textit{remain} in the residual maps after cleaning. We image and clean instrumental polarizations (e.g.\@ XX, XY, YX, YY) independently, which is important since sources at this low elevation become strongly polarised as a result of the primary beam. These instrumental polarization images are later combined based on the primary beam model to produce Stokes I images. Finally, after imaging is completed, we keep both the restored and residual Stokes I images for each snapshot for later processing: the restored map is used to verify and correct field calibration, whilst the residual map provides us with a point-source subtracted map to be ultimately used in stacking, without the need for complex wavelet subtraction techniques as used in V2021. As a first order effect of ionospheric electron density variations, we observe direction-dependent shifts in the apparent position of radio sources, and these effects become increasingly strong at the low frequencies observed by the MWA. Without resolving this positional error, we not only risk introducing astrometric errors, but additionally sources in the final mosaic can appear blurred and point sources have a peak to integrated flux density ratio that is less than unity. To resolve this, typical MWA workflows make image based corrections to `warp' the image, and align the apparent position of sources with their position in the sky model (for example, see \citealp{HurleyWalker2018}). We follow this method by first source finding on the restored image using \texttt{Aegean} \citep{Hancock2018}, and cross-matching these sources with our sky model. We include only those sources that are isolated by at least \SI{1}{\arcminute} radius from any other sky model source to avoid any ambiguous matches, and as a quality control check we require at least 200 cross-matches in a snapshot or else it is discarded. Then by measuring the angular offset of apparent position to that of the sky model, we interpolate across these deviations and thus warp the image to correct for this effect. 
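The following Python sketch illustrates this warping step; the interface, the pixel-space offset convention, and the use of radial basis function interpolation are illustrative assumptions rather than the exact implementation used in our pipeline.
\begin{verbatim}
import numpy as np
from scipy.interpolate import Rbf
from scipy.ndimage import map_coordinates

def dewarp(image, src_xy, offset_xy):
    """Correct ionospheric position shifts by warping the image.

    image     : 2D snapshot array
    src_xy    : (N, 2) pixel positions of cross-matched sources
    offset_xy : (N, 2) offsets (sky model minus apparent), in pixels
    """
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]

    # Interpolate the sparse offset measurements across the full image.
    dx = Rbf(src_xy[:, 0], src_xy[:, 1], offset_xy[:, 0],
             function='linear', smooth=1)(xx, yy)
    dy = Rbf(src_xy[:, 0], src_xy[:, 1], offset_xy[:, 1],
             function='linear', smooth=1)(xx, yy)

    # Output pixel (y, x) takes its value from (y - dy, x - dx),
    # moving each source onto its sky model position.
    return map_coordinates(image, [yy - dy, xx - dx], order=1, mode='nearest')
\end{verbatim}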
In an effort to match the sensitivity to extended emission of the MWA Phase II instrument to that of its Phase I counterpart, we proceed by convolving both the restored and residual images. At \SI{118.5}{\mega \hertz}, a typical dirty beam size at Briggs +1 weighting has major and minor axes of approximately \SI{2.3 x 1.8}{\arcminute}, whilst this size varies significantly by declination due to the foreshortening effect of the array at low elevations. We use \texttt{miriad} \citep{Sault1995} to convolve each snapshot to a circularised resolution of \SI{3}{\arcminute}, defined at zenith. We discuss the effects of this convolution step, and our sensitivity to extended emission, in \autoref{sec:resolvingout}. Our snapshots are now ready to be stacked and mosaicked. To do this, we must first ensure all images are on the same projection; presently, each is in a slant orthographic (`SIN') projection with the projection origin at that snapshot's zenith. To minimise reprojection errors, which can be significant, we choose to reproject each snapshot onto the mean projection shared amongst the snapshots for a particular field, leaving the SIN projection origin approximately at the MWA zenith. We additionally mask the region within \SI{15}{\degree} of the horizon for each snapshot, as these low-elevation observations are subject to significant errors. With these steps completed, we perform the weighted mean of all snapshots, with the weight based on the estimated local map noise $\sigma$, as $\nicefrac{1}{\sigma^2}$.\footnote{The local noise map is calculated using the median absolute deviation from the median (MADM) applied to a residual image that has \textit{not} been primary beam corrected. We choose to use this image for our noise estimation as it has had bright sources removed and, prior to beam correction, the noise does not vary spatially, thus allowing for the easy calculation of a global value. We then apply a beam correction to this constant noise map so as to obtain an estimate of the local noise map, which varies spatially as a function of the primary beam.} A final quality check is included during this mosaicing step, whereby any snapshot with a map noise in excess of \SI{35}{\milli \jansky \per \beam} (increased to \SI{45}{\milli \jansky \per \beam} for field 14) is discarded. In this way, we create mosaics of the residuals, the restored images, as well as the estimated noise. Finally, we verify our calibration by source finding on the final mosaic and comparing the measured flux density to the sky model flux density. As reported by \citet{HurleyWalker2017}, we observe a declination-dependent flux density error. In \autoref{fig:deccorrection}, we present the kind of diagnostic used to check the flux density values for each field. For example, the top panel of this figure shows this error across field 10, showing that the measured to model integrated flux density ratio increases from approximately unity to as high as 1.3 times at declination \SI{+30}{\degree}. To correct for this effect, we model this error as a simple linear function of declination, as depicted by the dashed black line, and scale the image accordingly. The centre panel in \autoref{fig:deccorrection} shows the effect of this correction: the mean flux density ratio is reduced from 0.044 dex to -0.001 dex; the spread of ratios is reduced from a standard deviation of 0.0436 dex to 0.030 dex; and in this particular instance, the apparently bimodal distribution is corrected to appear much more normally distributed.
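A minimal sketch of this declination-dependent correction is given below; the function name and interface are illustrative only, and it assumes the fitted ratio is divided out pixel-by-pixel.
\begin{verbatim}
import numpy as np

def declination_flux_correction(image, dec_map, src_dec, flux_ratio):
    """Fit and remove a linear declination-dependent flux scale error.

    image      : 2D mosaic (Jy/beam)
    dec_map    : per-pixel declination (degrees), same shape as image
    src_dec    : declinations of cross-matched sources (degrees)
    flux_ratio : measured / model flux density ratios of those sources
    """
    # Linear fit of the measured-to-model ratio as a function of declination.
    slope, intercept = np.polyfit(src_dec, flux_ratio, deg=1)

    # Divide out the fitted ratio at every pixel's declination.
    return image / (slope * dec_map + intercept)
\end{verbatim}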
As shown in \autoref{tab:fields}, the mean flux density ratio across all fields is very nearly 0 {dex} after this correction, whilst the standard deviation lies in the range \SIrange[range-phrase=--,range-units=single]{0.021}{0.037}{dex}. These values are well within the stated errors of the GLEAM sky model. As a final sanity check, we also show in the lower panel of \autoref{fig:deccorrection} the ratio of peak to integrated flux density, where we can observe a good clustering of values around unity, suggesting that our ionospheric corrections are satisfactory. In \autoref{fig:field10}, we present a zoom of field 10, showing both the restored and residual images. The brightest source in this field is \SI{34.6}{\jansky \per \beam}. In the residual map, the estimated noise is just \SI{7.5}{\milli \jansky \per \beam}, with a maximum value of \SI{41}{\milli \jansky \per \beam}. The central residual map noise in the other fields is listed in \autoref{tab:fields}. Prior to stacking, we convert the residual maps from flux density (in units \SI{}{\jansky \per \beam}) to temperature (units \SI{}{\kelvin}), taking into account the spatially varying restoring beam dimensions. In this way, the stacked images are brought to have consistent units despite variable beam sizes, and this is consistent with the method employed in V2021. \begin{figure} \centering \vspace{-1.2cm} \includegraphics[width=\linewidth,clip,trim={0.4cm 0.4cm 0.2cm 0.2cm}]{deccorrection.pdf} \caption{Example calibration diagnostics for field 10, showing the declination correction. The 731 measured sources are compared to the calibration model. Prior to correction, their ratio has mean 0.044 dex and standard deviation 0.044 dex; after correction the mean becomes -0.001 dex and standard deviation 0.030 dex. \textit{Top:} The measured to model flux density ratio, as a function of declination. The dashed line indicates the fit which is later used as an image-based correction. \textit{Centre:} The distribution of measured to model flux density ratios for all 731 sources prior to (blue) and after (red) correction. Note in this case, the simple declination correction resolves the initial bimodal distribution. \textit{Bottom:} The measured ratio of peak to integrated flux density, showing peak and integrated flux density of point sources are very nearly identical.} \label{fig:deccorrection} \end{figure} \begin{figure*} \centering \vspace{-1.6cm} \includegraphics[width=\linewidth,clip,trim={1.8cm 1.5cm 2.5cm 0cm}]{field10.pdf} \caption{The central region of field 10, centred at RA \SI{160}{\degree}, Dec \SI{18}{\degree}, with \SI{3}{\arcminute} resolution. \textit{Left:} The full mosaic, with all clean components restored into the image. The peak flux density in this image is \SI{3.8}{\jansky \per \beam}, whilst elsewhere in the field it is as high as \SI{34.6}{\jansky \per \beam}. \textit{Right:} The residual image, with all clean components subtracted out. This inset has a mean noise of \SI{7.5}{\milli \jansky \per \beam}, whilst the peak value is \SI{41}{\milli \jansky \per \beam}.} \label{fig:field10} \end{figure*} \subsection{Exclusion zones} \label{sec:exclusionzones} We introduce a number of exclusion zones to our fields to improve the quality of our stacked images. In the first instance, during stacking we truncate around the edges of all fields where the beam power reaches less than 10\% of its peak value. This excludes regions of high noise from being included in our stacks. 
Then, we visually inspect each field and identify areas to exclude based on two criteria. First, we check for extremely bright sources and draw exclusion zones around them, since their residuals tend to be areas of high noise. In the case of Virgo A, this exclusion zone is sizable in some fields as a result of small calibration errors throwing flux some distance away from the source. Secondly, we search for extended sources that remain in the residuals. Many of these are extended AGN sources that have been cleaned to the level of the noise in individual snapshots but which reappear above the noise once we mosaic, and appear as extended islands of emission typically a few beams in width. These visually inspected regions are collated and nulled in the image prior to stacking.\footnote{For the sake of reproducibility, these exclusion regions are included in the associated data release as DS9 region files.} \section{Stacking and model subtraction} Having created deep, well-calibrated images of our 14 fields with point sources subtracted, we can now turn to stacking the LRG pairs. We stack LRG pairs in an effort to drive down the uncorrelated noise in our images, and meanwhile reveal any correlated mean emission that might bridge the LRG pairs. In this section, we detail both the construction of these stacked images and the process used to construct the LRG models that we ultimately subtract in an effort to detect any excess cosmic web emission. \subsection{Stacking} We have implemented our stacking methodology similarly to V2021. We first identify a maximum scaling size, which is at least the maximum pixel distance of any single LRG pair across all fields. All LRG pairs are subsequently strictly up-scaled to this size. We iterate through each LRG pair, once for each field. If the LRG pair is located within the field and does not overlap with an exclusion zone, we proceed to stack this pair. To do this, we identify the pixel coordinates of the pair within the field projection, and calculate both the pixel distance between these coordinates and the angle between their connecting line and the horizontal. We rotate and scale the pixel coordinates of the entire field such that this pixel distance becomes the maximum pixel distance and the connecting line is rotated to the horizontal. Finally, we linearly interpolate these values onto a rectangular grid whose centre is the point equidistant from the two LRGs. This final map is now ready to be stacked alongside all other LRG pairs. LRG pairs are weighted by a function of the estimated noise of the field. This estimated noise map is scaled and rotated identically to the field itself. When it comes time to stack, we weight each LRG pair by the inverse square of this map. Note that this noise map is spatially varying and, especially near the edges of the field where the underlying noise is rapidly changing, it is possible for the weighting used for a single LRG pair to vary across the length of the pair. We also track the sum of these weights, and in the final step divide the LRG pair stack by the weight stack to arrive at the weighted mean stack. See \autoref{sec:weighting} for more detail on the stack weighting. We construct a coordinate system on the final stacked images that places one LRG at $x = -1$, the other at $x = +1$, and the midpoint at the origin. The $y$ direction is scaled identically, and we will herein refer to this as the normalised coordinate system.
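To make the regridding step explicit, the following Python sketch rotates and rescales a single LRG pair onto the normalised grid described above; the function name \texttt{regrid\_pair}, the output grid size, and the use of bilinear interpolation are illustrative choices rather than the exact pipeline implementation.
\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def regrid_pair(field, weights, xy_a, xy_b, out_size=256, half_width=2.0):
    """Rotate and rescale one LRG pair onto the normalised grid.

    field    : 2D field image (here in K)
    weights  : matching inverse-variance weight map, e.g. 1 / noise**2
    xy_a/b   : pixel coordinates of the two LRGs
    """
    centre = 0.5 * (np.asarray(xy_a) + np.asarray(xy_b))
    delta = np.asarray(xy_b) - np.asarray(xy_a)
    half_sep = 0.5 * np.hypot(*delta)        # field pixels per normalised unit
    angle = np.arctan2(delta[1], delta[0])   # position angle of the pair axis

    # Build the normalised output grid (LRGs at x = -1 and x = +1), then
    # rotate and scale it back into field pixel coordinates.
    u = np.linspace(-half_width, half_width, out_size)
    uu, vv = np.meshgrid(u, u)
    xx = centre[0] + half_sep * (uu * np.cos(angle) - vv * np.sin(angle))
    yy = centre[1] + half_sep * (uu * np.sin(angle) + vv * np.cos(angle))

    # Bilinear interpolation; map_coordinates expects [row, column] order.
    cutout = map_coordinates(field, [yy, xx], order=1)
    weight = map_coordinates(weights, [yy, xx], order=1)
    return cutout, weight

# The weighted mean stack accumulates sum(weight * cutout) and sum(weight)
# over all pairs, dividing the former by the latter at the end.
\end{verbatim}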
\begin{figure} \centering \vspace{-0.5cm} \includegraphics[width=\linewidth,clip,trim={0 0 0 0}]{projectionerror.pdf} \caption{The maximum error associated with treating a SIN projection as a simple Cartesian grid, obtained at the maximum declination \SI{+32}{\degree}. \textit{Top:} The maximum transverse error along a constant-declination \SI{180}{\arcminute} line as a result of geodesics being curved in pixel space. \textit{Bottom:} The maximum longitudinal error along a constant-hour-angle \SI{180}{\arcminute} line, as a result of non-uniform pixel sizes.} \label{fig:projectionerror} \end{figure} At no point during stacking do we reproject the maps: they are rotated and scaled in pixel coordinates only. An alternative would have been to reproject each pair onto a common projection, but as we have noted earlier, our experience is that such reprojection introduces scaling of the flux density values. Using pixel coordinates on an underlying SIN projection, however, has its own downsides: geodesics on the sky are not, in general, straight lines in pixel coordinates, and the angular distance per pixel is not constant. In a SIN projection, these effects are most pronounced at the highest declinations where the field deviates most significantly from a Cartesian grid. They are, however, much smaller than the resolution element of the MWA. For example, in \autoref{fig:projectionerror}, we consider the worst case scenario of an LRG pair at the maximum separation of \SI{180}{\arcminute}, and at the most northern declination of \SI{+32}{\degree}. The upper panel shows the transverse error that results from geodesics not being straight lines in pixel space, which peaks at 0.003, whilst the lower panel shows the longitudinal error due to non-uniform pixel sizes, which peaks at 0.01. The majority of our LRG pairs have a significantly smaller angular separation, and these errors are markedly smaller in these cases. These errors are small enough that we deem the simplicity and flux correctness of stacking in pixel space to be preferable. \subsection{Model subtraction} \label{sec:modelling} \begin{figure*} \centering \includegraphics[width=\linewidth,clip,trim={2.9cm 0.5cm 2.7cm 0}]{fullstack-maxdist15-temp-examplemodel.pdf} \caption{An example showing the LRG model construction and subtraction from the stacked image, with all coordinates in the normalised coordinate system such that the LRG peaks are at $x = \{-1, 1\}$ and the $y$ direction scales identically. \textit{Left:} The original mean stacked image, with the dashed arcs indicating the exterior sweep over which each radially-averaged one-dimensional model is constructed. The LRG peaks rise to just over \SI{4}{\kelvin}; we have set the colour scale limits on these images to make the noise, at \SI{24.6}{\milli \kelvin}, visible. \textit{Centre:} The model sum map, produced by interpolating the one-dimensional model for each LRG peak onto the two-dimensional map. \textit{Right:} The residual image, after subtracting the model from the original mean stack.} \label{fig:examplemodel} \end{figure*} Model construction is implemented identically to V2021. It is assumed that emission about each LRG peak, either due to radio emission from the LRG itself or nearby cluster emission, should be radially symmetric. Any cosmic web emission spanning the LRG pair will appear as an excess against this model.
Thus, we construct our model based on the \SI{180}{\degree} sweep \textit{exterior} to the LRG pair and we radially average this to form a one-dimensional profile as a function of radial distance. The implementation of this involves binning pixels based on their radial distance, with the bin width set as 1 pixel, before each bin is then averaged. We can then create a function that linearly interpolates over these bins, allowing us to produce a full two dimensional model independently for each LRG as a function of radial distance. Note by creating a model for each LRG peak independently, we are assuming the contribution from each peak is negligible for radial distances $r > 2$. Finally, we sum the LRG model contribution for each peak to produce the final model. We show an example of this process in \autoref{fig:examplemodel}. In the left panel we show the original mean stacked image. The LRG peaks rise to just over \SI{4}{\kelvin}, however we have set the colour scale limits on these images to make the noise, at \SI{24.6}{\milli \kelvin}, visible. The dashed arcs indicate the exterior sweep over which each radially-averaged one-dimensional model is constructed. These models are then linearly interpolated onto the two-dimensional map and summed, so as to produce the model, shown in the central panel. Finally, we produce the residual stack by subtracting out the model from the mean stack, shown in the rightmost panel. Note the absence of all large-scale structures in the residual, including the LRG peaks themselves as well as the surrounding depressions caused by the MWA dirty beam. We additionally provide the results of a synthetic test of our stacking and modelling processes in \autoref{sec:synthetic}. \subsection{Noise characteristics} \begin{figure} \centering \includegraphics[width=\linewidth]{fullstack-maxdist15-noise.pdf} \caption{An example of the noise characteristics of the residual stack, in this case from the Max \SI{15}{\mega \parsec} stack. \textit{Top:} The pixel distribution of the residual map, showing an approximately normal distribution. The dashed black line shows the Gaussian fit to the distribution, parameterised as $\sigma =$~\SI{24.6}{\milli \kelvin}. \textit{Bottom:} The radially average autocorrelation of the residual stack, showing the autocorrelation as having a half width at half maximum (dotted black line) of 0.074.} \label{fig:noise} \end{figure} To determine the significance of any excess signal in the residual stack, it is necessary to characterise the noise of our images. The original fourteen fields consist of real radio emission on top of a background of Gaussian noise. During stacking, the noise in these fields goes down proportionally to the inverse square root of the number of stacks. The presence of real emission peaks in the residual field maps does not affect this, since these peaks are uncorrelated from stack to stack. In the upper panel of \autoref{fig:noise}, we show an example of the pixel distribution of one of our residual stacks, showing that it very nearly approximates a normal distribution, as indicated by the dashed black line, with $\sigma =$~\SI{24.6}{\milli \kelvin}. All our stacks exhibit this kind of normal distribution of pixel values, and so we will characterise them by reference to the standard deviation of their residual maps. The noise, however, is spatially correlated. In the original fields prior to stacking, this spatial correlation is on the scale of dirty beam. 
In the stacked images, however, this is not the case, since during stacking we rescale each LRG pair. To characterise the effective resolution of the stacked image we perform a radially averaged two-dimensional auto-correlation of the residual stack, and we present an example of this in the lower panel of \autoref{fig:noise}. We observe in this plot both an extended peak in this function, showing the spatial correlation of pixels persists in the stacked images, and also a slight depression showing the cumulative sidelobes of the stacked, dirty beams. We characterise the effective resolution by measuring the full width at half maximum (FWHM). In this case, the half width at half maximum of the autocorrelation is 0.074, corresponding to a FWHM value of the residual map of 0.105. These two metrics---the standard deviation and the effective resolution---allow us to understand the significance of any potential signal in our stacks. Specifically, peaks of excess emission that deviate significantly from the measured map noise, or extended emission on scales greater than the effective resolution, are tell-tale markers that we are encountering signal that deviates from otherwise stochastic noise. \section{Results and Discussion} \subsection{Stacking results} \begin{figure*} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth,clip,trim={2cm 1cm 0cm 2cm}]{PhaseII/fullstack-maxdist15-beforeafter.pdf} \caption{\textit{Top left:} The original mean stack image, with overlays indicating the region over which the transverse mean (dashed orange horizontal lines) and longitudinal mean (dashed green vertical lines) are calculated. \textit{Top right:} The mean stacked image with the colour scale reduced to $\pm 5 \sigma$ to emphasise the noise. \textit{Bottom left:} The model image, on the same colour scale. \textit{Bottom right:} The residual stack after model subtraction, with the colour scale set to $\pm 5 \sigma$.} \label{fig:max15a} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.8\linewidth,clip,trim={0 0 0 -0.5cm}]{PhaseII/fullstack-maxdist15-modelled.pdf} \caption{\textit{One:} The one-dimensional profile along $y = 0$ for both the stacked image (blue) and the model (red). \textit{Two:} The one-dimensional profile along $y = 0$ of the residual stack, renormalised to the estimated map noise. \textit{Three:} The transverse mean along the region $-0.2 < y < 0.2$ of the residual stack, renormalised to the estimated map noise. \textit{Four:} The longitudinal mean along the region $-0.95 < x < 0.95$ of the residual stack, renormalised to the estimated map noise. The black rule in the top left shows the FHWM of the effective resolution.} \label{fig:max15b} \end{subfigure} \caption{The Max \SI{15}{\mega \parsec} stack, with mean LRG peaks of \SI{4292}{\milli \kelvin}, residual noise of \SI{25}{\milli \kelvin}, and effective resolution of 0.11.} \label{fig:max15} \end{figure*} \begin{figure*} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth,clip,trim={2cm 1cm 0cm 2cm}]{PhaseII/fullstack-maxdist10-beforeafter.pdf} \caption{\textit{Top left:} The original mean stack image, with overlays indicating the region over which the transverse mean (dashed orange horizontal lines) and longitudinal mean (dashed green vertical lines) are calculated. \textit{Top right:} The mean stacked image with the colour scale reduced to $\pm 5 \sigma$ to emphasise the noise. 
\textit{Bottom left:} The model image, on the same colour scale. \textit{Bottom right:} The residual stack after model subtraction, with the colour scale set to $\pm 5 \sigma$.} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.8\linewidth,clip,trim={0 0 0 -0.5cm}]{PhaseII/fullstack-maxdist10-modelled.pdf} \caption{\textit{One:} The one-dimensional profile along $y = 0$ for both the stacked image (blue) and the model (red). \textit{Two:} The one-dimensional profile along $y = 0$ of the residual stack, renormalised to the estimated map noise. \textit{Three:} The transverse mean along the region $-0.2 < y < 0.2$ of the residual stack, renormalised to the estimated map noise. \textit{Four:} The longitudinal mean along the region $-0.95 < x < 0.95$ of the residual stack, renormalised to the estimated map noise. The black rule in the top left shows the FHWM of the effective resolution.} \end{subfigure} \caption{The Max \SI{10}{\mega \parsec} stack, with mean LRG peaks of \SI{4699}{\milli \kelvin}, residual noise of \SI{51}{\milli \kelvin}, and effective resolution of 0.12.} \label{fig:max10} \end{figure*} \begin{figure*} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth,clip,trim={2cm 1cm 0cm 2cm}]{PhaseII/fullstack-maxdist15-ang-15-60-beforeafter.pdf} \caption{\textit{Top left:} The original mean stack image, with overlays indicating the region over which the transverse mean (dashed orange horizontal lines) and longitudinal mean (dashed green vertical lines) are calculated. \textit{Top right:} The mean stacked image with the colour scale reduced to $\pm 5 \sigma$ to emphasise the noise. \textit{Bottom left:} The model image, on the same colour scale. \textit{Bottom right:} The residual stack after model subtraction, with the colour scale set to $\pm 5 \sigma$.} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.8\linewidth,clip,trim={0 0 0 -0.5cm}]{PhaseII/fullstack-maxdist15-ang-15-60-modelled.pdf} \caption{\textit{One:} The one-dimensional profile along $y = 0$ for both the stacked image (blue) and the model (red). \textit{Two:} The one-dimensional profile along $y = 0$ of the residual stack, renormalised to the estimated map noise. \textit{Three:} The transverse mean along the region $-0.2 < y < 0.2$ of the residual stack, renormalised to the estimated map noise. \textit{Four:} The longitudinal mean along the region $-0.95 < x < 0.95$ of the residual stack, renormalised to the estimated map noise. The black rule in the top left shows the FHWM of the effective resolution.} \end{subfigure} \caption{The Max \SI{60}{\arcminute} stack, with mean LRG peaks of \SI{3769}{\milli \kelvin}, residual noise of \SI{30}{\milli \kelvin}, and effective resolution of 0.26.} \label{fig:max60} \end{figure*} \begin{figure*} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth,clip,trim={2cm 1cm 0cm 2cm}]{PhaseII/fullstack-V2021-beforeafter.pdf} \caption{\textit{Top left:} The original mean stack image, with overlays indicating the region over which the transverse mean (dashed orange horizontal lines) and longitudinal mean (dashed green vertical lines) are calculated. \textit{Top right:} The mean stacked image with the colour scale reduced to $\pm 5 \sigma$ to emphasise the noise. \textit{Bottom left:} The model image, on the same colour scale. 
\textit{Bottom right:} The residual stack after model subtraction, with the colour scale set to $\pm 5 \sigma$.} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.8\linewidth,clip,trim={0 0 0 -0.5cm}]{PhaseII/fullstack-V2021-modelled.pdf} \caption{\textit{One:} The one-dimensional profile along $y = 0$ for both the stacked image (blue) and the model (red). \textit{Two:} The one-dimensional profile along $y = 0$ of the residual stack, renormalised to the estimated map noise. \textit{Three:} The transverse mean along the region $-0.2 < y < 0.2$ of the residual stack, renormalised to the estimated map noise. \textit{Four:} The longitudinal mean along the region $-0.95 < x < 0.95$ of the residual stack, renormalised to the estimated map noise. The black rule in the top left shows the FWHM of the effective resolution.} \end{subfigure} \caption{The LRG-V2021 stack, with mean LRG peaks of \SI{4540}{\milli \kelvin}, residual noise of \SI{42}{\milli \kelvin}, and effective resolution of 0.09.} \label{fig:v2021} \end{figure*} In \autoref{fig:max15}, we present the stacked results for the Max \SI{15}{\mega \parsec} catalogue. This catalogue consists of 601,435 LRG pairs, allowing our stack to reach a noise of \SI{25}{\milli \kelvin}, more than twice as deep as the \SI{118.5}{\mega \hertz} stack in V2021. As can be seen in the upper left panel of \autoref{fig:max15a}, the peaks at $x = \pm 1$ are the dominant features, and have a mean value of \SI{4292}{\milli \kelvin}. The upper right panel in \autoref{fig:max15a} shows the same stacked image, only with the colour scale adjusted down so as to emphasise the noise. We now note the shallow depressions around each of the peaks, which are attributable to the dirty beam's negative sidelobes. The LRG model is shown in the bottom left panel, and the bottom right panel in \autoref{fig:max15a} shows the residual stack, after model subtraction. The model construction methodology is surprisingly effective, leaving no trace of the sharp peaks at $x = \pm 1$ and also removing the sidelobe depressions. There is no readily apparent excess emission in the residual image. In the top panel of \autoref{fig:max15b}, we compare the one-dimensional slice through $y = 0$ of both the mean stacked image (blue) and model (red). The stacked image and model are so similar that the stacked profile is almost entirely hidden behind the model. Note that the widths of the peaks are narrower than observed in V2021: the peaks here have a FWHM value of 0.11, and whilst this value is not given in V2021, their peaks appear visually much wider. These peak widths will be in part a function of the instrumental dirty beam; however, this is not sufficient to explain the discrepancy, and we discuss this more in \autoref{sec:peakwidth}. In the second panel of \autoref{fig:max15b}, we show the one-dimensional $y = 0$ slice through the residual image, where we have renormalised the scale to the estimated map noise. There are no peaks in this residual exceeding $3 \sigma$. In the third panel, we display the mean value in the range $y = \pm 0.2$ as a function of $x$, and renormalise based on the estimated map noise. The aim of this transverse mean is to bring out faint, wide signals that might be present along the intercluster stacks. For this LRG catalogue, we observe no peaks exceeding $3 \sigma$. Finally, in the lower panel we display the longitudinal mean in the range $-0.95 < x < 0.95$, as a function of $y$.
For a faint signal that spans the length of the intercluster stack, we would expect this plot to show a peak at $y = 0$, however we observe no statistically significant signal. We conclude there is no statistically significant excess emission along the bridge for the Max \SI{15}{\mega \parsec} stack. The stacked results for the Max \SI{10}{\mega \parsec} catalogue are shown in \autoref{fig:max10}. With just a quarter of the LRG pairs as the larger Max \SI{15}{\mega \parsec} catalogue, the estimated noise of this stack is higher at \SI{51}{\milli \kelvin}, just a slight improvement on the stated noise in the \SI{118.5}{\mega \hertz} stack in V2021. The peaks at $x = \pm 1$ are higher than the previous stacks, at \SI{4699}{\milli \kelvin}, which is a result of the catalogue sampling from a more local redshift space, whilst their widths have a similar FWHM of 0.12. Once again, however, the residual image and one-dimensional slices show no indication of statistically significant excess emission along the bridge. Likewise, the stacked results for the Max \SI{60}{\arcminute} and LRG-V2021 catalogues, in \autoref{fig:max60} and \autoref{fig:v2021} respectively, also show no evidence of excess emission. The Max \SI{60}{\arcminute} stack has a noise of \SI{30}{\milli \kelvin} and a large effective resolution of 0.26 that is a result of reduced lower angular threshold and corresponding variation in scaling during stacking. Similarly, the peak width has increased to 0.23. This LRG catalogue also samples significantly deeper in redshift space than the others, with the result that the LRG peaks are diminished in comparison, with a mean value of \SI{3769}{\milli \kelvin}. One small $\sim2.85\sigma$ peak is visible in the one-dimensional profile at $x = -0.73$, however its width matches the effective resolution, and similar peaks throughout the residual image suggest it is consistent with the noise. Meanwhile, the LRG-V2021 stack has a noise of \SI{41}{\milli \kelvin}, approximately \SI{30}{\percent} lower than the equivalent \SI{118.5}{\mega \hertz} stack in V2021. It has a peak in the longitudinal profile at $y = 0.14$ that reaches a significance of $3.04 \sigma$, but otherwise shows no evidence of intercluster signal and certainly not the kind of large-scale, clearly evident excess emission as shown in V2021. The analysis of each of our LRG catalogue stacks leaves us unable to corroborate the detection of V2021. \subsection{Sensitivity to extended emission} \label{sec:resolvingout} \begin{figure*} \centering \includegraphics[width=\linewidth,clip,trim={4cm 0 4cm 0}]{psfs.pdf} \caption{A comparison of dirty beams used in V2021 and the present study, measured at \SI{118.5}{\mega \hertz} and pointing $\alpha =$~\SI{180}{\degree} $\delta =$~\SI{18}{\degree}. {White dashed contours trace a response of zero, so as to better show the negative sidelobe regions.} \textit{Left:} The Phase I dirty beam with baseline weighting Briggs -1, as used in GLEAM, having a resolution of \SI{3.74 x 2.56}{\arcminute}. \textit{Centre:} The Phase II dirty beam with baseline weighting Briggs +1, as used in the current study, and having a resolution of \SI{3.2 x 1.9}{\arcminute}. \textit{Right:} The Phase II dirty beam, after convolution, having a resolution of \SI{4.2 x 3.1}{\arcminute}.} \label{fig:psfs} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{sensitivity.pdf} \caption{The sensitivity of Phase I, II, and Phase II (convolved) to extended emission. 
The plot shows the response at the centre of simulated circular Gaussians of varying sizes, with the simulated sources having a constant peak surface brightness of \SI{1}{\jansky \per \square \deg}. For large, extended emission sources, there exists a threshold angular scale above which the central response begins to drop, as these sources become increasingly `resolved out'. On the other hand, for very small angular sizes, the simulated source becomes smaller than the dirty beam (i.e.\@ is unresolved) whilst maintaining the same peak surface brightness; the total flux of the source thus rapidly drops to zero as does the instrumental response.} \label{fig:sensitivity} \end{figure} The chief distinction between the MWA Phase I and Phase II instruments is the location of the antennas, and in turn, each instrument's respective dirty beam. As noted previously, the point source sensitivity is unchanged. However, these modified baselines may make the instrument less sensitive to extended emission, potentially even resolving out large-scale emission such as the cosmic web, and this may be a factor in our non-detection. In \autoref{fig:psfs} we show the dirty beams of the Phase I and Phase II instrument, as well as the effective dirty beam of the Phase II instrument after our convolution to \SI{3}{\arcminute} (at zenith) resolution. These dirty beams have been generated from archival \SI{118.5}{\mega \hertz} MWA observations at the centre of field 11 ($\alpha =$~\SI{180}{\degree}, $\delta =$~\SI{18}{\degree}) to best model the effect of the low elevation pointings on the dirty beam. The Phase I dirty beam is produced with a Briggs -1 baseline weighting scheme such that it matches the original GLEAM imaging parameters, and has a resolution of approximately \SI{3.74 x 2.56}{\arcminute}. Note the sizeable negative sidelobes around the beam, owing to a dense core of short baselines. The Phase II dirty beam is produced with the same baseline weighting as used in the present work, Briggs +1, as well as its lower baseline length threshold of $15 \lambda$, and has a resolution of approximately \SI{3.2 x 1.9}{\arcminute}. After convolution, this grows to a resolution of \SI{4.2 x 3.1}{\arcminute} at the centre of the field. \citet{Hodgson2020} developed an empirical method to measure an instrument's sensitivity to large-scale emission, which we draw on here. Often angular sensitivity is estimated solely based on the angular size of the fringe patterns of the shortest baselines in an array, however, this does not take into account the imaging parameters, baseline weightings, and most importantly, the cumulative effect of the instrument's baselines in determining angular sensitivity. Instead, the method we use here proceeds by simulating a range of extended emission sources---in our present case circular Gaussian sources---directly into the visibilities of an observation, and then producing a dirty image of the source. Given a surface flux density of \SI{1}{\jansky \per \square \deg} at the Gaussian peak, we can understand the instrument's response by measuring the flux density at the centre of the Gaussian in the dirty image. If we iterate through many such circular Gaussian sources of increasing size, we will identify a threshold angular scale at which point the central response will begin to reduce, above which scales the dirty image of the Gaussian will start to `hollow out' in the centre and become increasingly dark. 
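A rough, image-domain sketch of this measurement is given below; it is illustrative only, since the actual procedure, following \citet{Hodgson2020}, simulates the Gaussian sources directly into the visibilities before imaging, whereas here we simply convolve a model sky with the dirty beam, and the function and variable names are not those of our pipeline.
\begin{verbatim}
import numpy as np

def central_response(dirty_beam, pixscale_deg, fwhm_deg):
    """Approximate dirty-image value (Jy/beam) at the centre of a
    circular Gaussian of peak surface brightness 1 Jy/deg^2, obtained
    by convolving the model sky with the dirty beam in the image
    domain."""
    n = dirty_beam.shape[0]
    y, x = np.mgrid[0:n, 0:n] - n // 2
    sigma_pix = fwhm_deg / pixscale_deg / 2.355        # FWHM -> sigma
    sky = np.exp(-0.5 * (x**2 + y**2) / sigma_pix**2)  # Jy/deg^2
    # Convert surface brightness to Jy per pixel, then evaluate the
    # convolution with the dirty beam at the source centre.
    return np.sum(sky * dirty_beam) * pixscale_deg**2

# responses = [central_response(psf, 0.01, f)
#              for f in np.linspace(2/60, 3.0, 60)]    # FWHM up to 180'
\end{verbatim}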
In this way we can identify the relative sensitivity of the instrument over a range of angular scales as well as the angular scale at which emission begins to resolve out. In \autoref{fig:sensitivity} we show the results of this exercise, where we have measured the central response to circular Gaussians having a FWHM up to \SI{180}{\arcminute} in extent. It is immediately apparent that the larger beam size of the Phase I instrument makes it more sensitive than Phase II to large-scale emission features, as we'd expect. Moreover, the Phase I instrument does not begin to resolve out structure on these spatial scales; in fact, it continues to gain sensitivity over this range. The sensitivity of the Phase II instrument, on the other hand, begins to decline slowly on angular scales larger than \SI{30}{\arcminute}, and then more rapidly on scales larger than approximately \SI{50}{\arcminute}. The effect of convolving the Phase II dirty beam is dramatic, amplifying its sensitivity to extended sources by more than a factor of two. Crucially, it also makes the instrument more sensitive than Phase I. It does not, however, delay the angular scale at which the instrument begins to resolve out structure, although it remains more sensitive than Phase I to extended emission out to approximately \SI{130}{\arcminute}. When considering whether these differences in sensitivity to extended emission can account for our non-detection of the synchrotron cosmic web, we need to understand the typical angular scales we would expect. In the first instance, the majority of LRG pairs in each catalogue are separated by less than \SI{60}{\arcminute}, with the exception of the LRG-V2021 catalogue which has a median separation of \SI{79}{\arcminute}. We should expect our observations to be at least as sensitive as V2021 for those LRG pairs with separations less than \SI{60}{\arcminute}, and specifically with regard to the Max \SI{60}{\arcminute} stack, there is no risk of resolving out structure across the entirety of its LRG pair catalogue. Secondly, we do not expect the emission spanning the intercluster region to be as wide as it is long: whilst our selection criteria allow for these bridges to span up to \SI{180}{\arcminute}, we should expect the width of the bridge to be significantly narrower. Any MWA baseline fringes aligned approximately along the narrower width will not be at risk of resolving out the emission, and this will reduce the overall effect. Finally, as simulations by \citet{Vazza2019} have shown, and as further showcased in \citet{Hodgson2021a}, the morphology of the cosmic web is expected to consist of radio relic-like accretion shocks. These typically appear as long extended arcs of emission, usually with a well-defined edge along the shock itself, with many such shocks spanning the length of the intercluster region. Crucially, these kinds of emission mechanisms do not form a broad, continuous bridge of emission that we might risk resolving out; rather, they are punctuated and individually consist of sharply defined edges that interferometers are well-suited to detect. For these reasons, we do not believe we are adversely affected by the higher-resolution MWA Phase II instrument. \subsection{The expected peak widths} \label{sec:peakwidth} \begin{figure} \centering \includegraphics[width=\linewidth]{psfstack.pdf} \caption{One dimensional profiles of stacked dirty beams for Phase I (green) and Phase II (convolved; blue), in comparison to the Max 15 Mpc stacked model profile (red).
The stacked dirty beams approximate a minimum peak profile for purely unresolved LRG sources, and the similarity to the Max 15 Mpc stacked model profile suggests this profile is dominated principally by unresolved sources.} \label{fig:psfstack} \end{figure} As noted, a key difference between our results and those of V2021 is that the width of the peaks at $x = \{-1, 1\}$ of our stacks is much narrower. In this section, we want to understand the expected minimum size of these peaks. This condition of minimum peak width occurs when the angular scale of the LRG emission (or other spatially correlated emission) is much smaller than the instrumental dirty beam, that is, when the LRG emission is unresolved and approximately `point-like'. In this case, the instrumental response is simply the dirty beam itself. We can then model the expected minimum-width LRG peak profile by stacking the dirty beam in the following way: for each LRG pair, we calculate the angular distance between the pair and find a scaling factor to upscale onto the maximum angular separation, being \SI{3}{\degree}; we use this scaling factor to stretch the one-dimensional profile of the dirty beam; and then sum this alongside other similarly scaled profiles. We build in two additional assumptions in this simple model: first, for each pair we create a one-dimensional profile of the dirty beam at a uniformly random angle through the two-dimensional peak response, which assumes that the orientations of LRG pairs on the sky are approximately uniform; second, that each LRG has an equal contribution to the sum. A key limitation of this exercise is the use of a single dirty beam, as shown previously in \autoref{fig:psfs}; these dirty beams have been generated for a fixed position on the sky, and at these low elevations the dirty beam is especially sensitive to the foreshortening effects of declination changes. Nonetheless, this exercise will give us a good approximation of the minimum peak sizes. We show the results of this exercise in \autoref{fig:psfstack} for both the Phase I (green) and Phase II (convolved; blue) dirty beams calculated across the Max 15 Mpc catalogue, as well as the model profile of the left peak of the Max 15 Mpc stack (red), shown previously in \autoref{fig:max15}. The FWHM of the Phase II (convolved) peak is 0.12, which compares to the actual model peak width of 0.11. The similarity in both the peak shape and width between this exercise and the actual model suggests the peaks in our stacks are dominated principally by unresolved sources. In comparison, the peak widths of the stacks in V2021 appear significantly wider. In \autoref{fig:psfstack}, we also show the results of the same exercise for the Phase I dirty beam, showing a remarkably similar peak width to our own. That the peaks of V2021 are markedly wider would suggest that a significant proportion of sources in their stacks appear as resolved at Phase I resolution. Moreover, the lack of a `stepped peak', caused by the addition of a dominant unresolved population and a fainter resolved population, would suggest that the resolved population actually dominates in the V2021 stacks. This is a fundamental discrepancy with our own results, for which we do not currently have an explanation.
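A minimal sketch of the dirty-beam stacking exercise described above is given below; it assumes the dirty beam is available as a square pixel grid with a known pixel scale, gives equal weight to every pair, and ignores the declination dependence of the beam discussed above. The function and variable names are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def stacked_beam_profile(dirty_beam, pixscale_deg, separations_deg, x):
    """Sum of one-dimensional dirty-beam cuts, one per LRG pair, each
    taken through the beam peak at a random position angle and rescaled
    so that one unit of the stacked coordinate x corresponds to half of
    that pair's angular separation."""
    centre = dirty_beam.shape[0] // 2
    profile = np.zeros_like(x, dtype=float)
    for sep in separations_deg:
        theta = np.random.uniform(0.0, np.pi)      # random cut orientation
        r_pix = x * (sep / 2.0) / pixscale_deg     # offset from peak, pixels
        rows = centre + r_pix * np.sin(theta)
        cols = centre + r_pix * np.cos(theta)
        profile += map_coordinates(dirty_beam, [rows, cols], order=1)
    return profile / len(separations_deg)          # equal weight per pair

# x = np.linspace(-0.5, 0.5, 201)   # stacked coordinate about one LRG peak
# minimum_width = stacked_beam_profile(psf, 0.01, pair_separations, x)
\end{verbatim}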
\subsection{The effect of CLEANing} \label{sec:cleaning} \begin{figure} \centering \includegraphics[width=\linewidth,clip,trim={0.9cm 0cm 1cm 1cm}]{maskedfield.pdf} \caption{An example of masking the restored fields using a threshold of \SI{250}{\milli \jansky \per \beam}, with masked sources depicted here as white. Note the presence of a low to medium brightness population of radio sources still clearly visible.} \label{fig:maskedfield} \end{figure} \begin{figure*} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth,clip,trim={2cm 1cm 0cm 2cm}]{PhaseII/fullstack-maxdist15-masked250mJy-beforeafter.pdf} \caption{\textit{Top left:} The original mean stack image, with overlays indicating the region over which the transverse mean (dashed orange horizontal lines) and longitudinal mean (dashed green vertical lines) are calculated. \textit{Top right:} The mean stacked image with the colour scale reduced to $\pm 5 \sigma$ to emphasise the noise. \textit{Bottom left:} The model image, on the same colour scale. \textit{Bottom right:} The residual stack after model subtraction, with the colour scale set to $\pm 5 \sigma$.} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.8\linewidth,clip,trim={0 0 0 -0.5cm}]{PhaseII/fullstack-maxdist15-masked250mJy-modelled.pdf} \caption{\textit{One:} The one-dimensional profile along $y = 0$ for both the stacked image (blue) and the model (red). \textit{Two:} The one-dimensional profile along $y = 0$ of the residual stack, renormalised to the estimated map noise. \textit{Three:} The transverse mean along the region $-0.2 < y < 0.2$ of the residual stack, renormalised to the estimated map noise. \textit{Four:} The longitudinal mean along the region $-0.95 < x < 0.95$ of the residual stack, renormalised to the estimated map noise. The black rule in the top left shows the FHWM of the effective resolution.} \end{subfigure} \caption{The Max \SI{15}{\mega \parsec} stack after masking fields at a threshold of \SI{250}{\milli \jansky \per \beam}, with mean LRG peaks of \SI{8776}{\milli \kelvin} above the background, residual noise of \SI{64}{\milli \kelvin}, and effective resolution of 0.12.} \label{fig:masked} \end{figure*} One point of difference between V2021 and the present study is the technique used to subtract bright point sources. V2021 used a wavelet decomposition technique, whereby image features on small angular scales were identified by imaging a limited range of wavelet scales. These small-scale image maps were searched for all pixels having values greater than $5 \sigma$ of the map noise, which were then subtracted from the original maps. This technique subtracted out the brightest pixels of point sources but left a residual ring around the sources at values lower than $5 \sigma$. Only compact, point-like sources were subtracted from the images, thus leaving extended sources; it's unclear what kind of additional filtering was applied to extended sources such as AGN lobes as this is not documented. This differs with the present technique of using residuals after cleaning. Our cleaning process uses \texttt{wsclean} and its auto-mask and auto-threshold functionality. This worked by cleaning peaks of emission that are brighter than $3 \sigma$, and when this was exhausted, cleaning was allowed to continue within a mask defined by the existing clean components down to a level of $1 \sigma$. 
Recall that multiscale cleaning was disabled, and so this process removed \textit{peak} emission that was greater than $3 \sigma$; large, diffuse extended emission that did not peak above this threshold remained in the image. In practice, typical snapshot noise was approximately \SI{20}{\milli \jansky \per \beam}, meaning that peak emission fainter than approximately \SI{60}{\milli \jansky \per \beam} was left in the images. Compare this to the $5 \sigma$ threshold used in V2021, which corresponds to approximately \SI{175}{\milli \jansky \per \beam}. Thus there is significantly more emission remaining in the images of V2021. For these point source subtraction differences to contribute to the detection in V2021, the excess emission would have to arise from a population of especially bright sources that are visible in our own mosaics at levels greater than \SI{60}{\milli \jansky \per \beam} and which we have partially cleaned. \citet{Hodgson2020} showed that the luminosity of accretion shocks around the periphery of dark matter halos throughout their simulated cosmic web approximated a power law as a function of dark matter halo mass; in their \SI{15 x 15}{\degree} simulated field, there existed a few bright points of cosmic web emission, with the brightest at \SI{64}{\milli \jansky \per \beam} (using a Phase I MWA beam). Note, however, that these sources were located around the periphery of bright clusters, not in the true intercluster region, that they were morphologically akin to radio relics, and were likely stationary accretion shocks around massive clusters. Only a handful of such bright, outlier emission sources were predicted as part of the simulation. To investigate this further, as an exercise we have re-run the stacking process using the restored field images, rather than the residuals. To mitigate the effects of bright point source emission, we have masked bright sources, but note that we have extended the threshold for this masking out to \SI{250}{\milli \jansky \per \beam}. The motivation for this much higher threshold is to capture emission that is present in the original V2021 images, but which we have removed by our deeper cleaning. \autoref{fig:maskedfield} shows an example of one of these masked fields, where we can clearly see a large population of sub-\SI{250}{\milli \jansky} sources still present. We show the stacked results of this exercise in \autoref{fig:masked}. Firstly, note that the mean residual value is much greater than zero. This results from the significant number of emission sources present in the image when masking to only a threshold of \SI{250}{\milli \jansky \per \beam}, and this non-zero background represents a kind of mean, stacked background temperature. Despite this, the peaks at $x = \{-1, 1\}$ have almost doubled against this background temperature, when compared with the Max 15 Mpc stack in \autoref{fig:max15}, showing that there is a considerable number of LRG sources (or sources otherwise correlated with the LRG population) with a peak brightness greater than approximately \SI{60}{\milli \jansky \per \beam}. As a side-effect of the number of sources remaining in the image, however, the noise has also increased compared to the Max 15 Mpc stack, by a factor of just over 2.5, to \SI{62}{\milli \kelvin}. Note also the absence of the negative sidelobes about the LRG peaks.
The extra emission of the LRG peaks compared to the original Max 15 Mpc stacks is the result of restored emission that has been convolved with an elliptical Gaussian fitted to the dirty beam, and this additional component will not have sidelobes; these brighter Gaussian sources in the stacks, combined with the overall higher noise, have washed out the subtle sidelobes of the fainter, uncleaned sources. Turning now to the detection of excess emission, we can observe in \autoref{fig:masked} that there is a peak of emission in the residual image centred at $(x, y) = (-0.57, 0.035)$, slightly off the $y$-axis, and peaking at $4.58 \sigma$ significance. This peak corresponds to the peak in the one-dimensional profile also at $x = -0.57$. The width of the peak is slightly extended beyond the FWHM typical of the rest of the residual image. A second, smaller peak is also evident in the residuals at $(x, y) = (-0.07, 0.13)$ with $4.2 \sigma$ significance, and is also visible in the transverse mean. Combined, these two peaks contribute to a peak in the longitudinal mean, at $y = 0.04$ with $3.2\sigma$ significance. Note also the presence of a $4.1 \sigma$ peak outside and to the left of the stacked intercluster region, at $(x, y) = (-1.85, 0.17)$. Are these emission peaks in the stacked residuals of \autoref{fig:masked} evidence of the cosmic web? We can immediately note that these emission peaks have not reproduced the broad, excess emission of the kind in V2021 that filled the intercluster bridge; instead these are much more localised peaks. We can also note the asymmetry of the left peak at $x = -0.57$, which is not reproduced on the right: this would suggest that this is not a generalised feature of the intercluster region. Additionally, the $4.1 \sigma$ peak to the left of the intercluster region cannot, by its location, be attributable to intercluster cosmic web emission. To investigate further, we have jackknife sampled the Max 15 Mpc catalogue, excluding a randomly selected 10\% of the catalogue, and stacked each of the ten sub-catalogues. With 90\% of the original catalogue in each stack, the noise is very similar, varying between \SIrange[range-phrase=\ to\ ,range-units=single]{63}{65}{\milli \kelvin}. We find that the peak at $x = -0.6 \pm 0.1$ is present in each stack, with at least a significance of $3.1 \sigma$, with the exception of one of the sub-catalogues, where it is entirely consistent with the noise, and peaking at most at $2.4 \sigma$. Similarly, the peaks at $x = -0.07$ and $x = -1.85$ are also each absent in one of the sub-catalogues. This exercise suggests that these peaks are not generalised features shared across the sample, but the effect of bright outlier emission left in the original fields. This exercise suggests the absence of the broad excess emission feature found in V2021 in our own stacks is not a side-effect of cleaning. \subsection{Stacking the original GLEAM survey} \begin{figure*} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth,clip,trim={2cm 1cm 0cm 2cm}]{GLEAM/fullstack-maxdist15-beforeafter.pdf} \caption{\textit{Top left:} The original mean stack image, with overlays indicating the region over which the transverse mean (dashed orange horizontal lines) and longitudinal mean (dashed green vertical lines) are calculated. \textit{Top right:} The mean stacked image with the colour scale reduced to $\pm 5 \sigma$ to emphasise the noise. \textit{Bottom left:} The model image, on the same colour scale. 
\textit{Bottom right:} The residual stack after model subtraction, with the colour scale set to $\pm 5 \sigma$.} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.8\linewidth,clip,trim={0 0 0 -0.5cm}]{GLEAM/fullstack-maxdist15-modelled.pdf} \caption{\textit{One:} The one-dimensional profile along $y = 0$ for both the stacked image (blue) and the model (red). \textit{Two:} The one-dimensional profile along $y = 0$ of the residual stack, renormalised to the estimated map noise. \textit{Three:} The transverse mean along the region $-0.2 < y < 0.2$ of the residual stack, renormalised to the estimated map noise. \textit{Four:} The longitudinal mean along the region $-0.95 < x < 0.95$ of the residual stack, renormalised to the estimated map noise. The black rule in the top left shows the FHWM of the effective resolution.} \end{subfigure} \caption{Original GLEAM survey images at \SI{118.5}{\mega \hertz}, stacked using the Max \SI{15}{\mega \parsec} LRG catalogue, displaying mean LRG peaks of \SI{4600}{\milli \kelvin}, residual noise of \SI{87}{\milli \kelvin}, and effective resolution of 0.16.} \label{fig:GLEAMstacks} \end{figure*} We have every expectation that we should be able to detect the excess emission in our Phase II observations, given the low noise characteristics of our fields, our sensitivity to large-scale angular structures, and the additional LRG pairs that we have used in our stacks. It is still possible, however, that there is some aspect of these new observations or our image processing pipelines that has obscured or removed the synchrotron cosmic web. And so these concerns have led us to return to the original GLEAM survey data, and attempt to reproduce the results of V2021 by stacking an identical data set at \SI{118.5}{\mega \hertz}. To stack the GLEAM survey data, we first start with the full zenith equal area (ZEA) projection images at \SI{118.5}{\mega \hertz}, which cover the right ascension regions spanned by our LRG pairs. Unlike V2021, we leave these images in their original projection. We mask bright points by selecting all emission regions with values greater than $5 \sigma$ of the local noise. To do this, we measure the spatially variable background---which is primarily the result of Galactic emission---as well as the noise using the Background and Noise Estimation tool \citep[BANE;][]{Hancock2018}. The mask is then created by subtracting out the background emission from the full projection, dividing by the noise image, and then masking all regions that exceed $5 \sigma$ of the local noise value. This process is substantially simpler than the wavelet subtraction method used in V2021, and the inclusion of the background subtraction step mitigates their stated concerns about masking. We additionally `grow' all masked regions by 2 pixels, which we have found to be sufficient to avoid visually obvious rings of faint emission around the masks. Note that even after growing the masks slightly, this process leaves surrounding negative sidelobes about the masked regions, and this results in the remaining non-masked region having an overall negative mean. As with the previous masked stacks in \autoref{sec:cleaning}, this will affect the `zero point' of the final stacks. As previously, exclusion zones are identified around exceptionally bright sources, along sidelobe artefacts, and in one additional region where the background estimation had not been adequate due to a sharp change in the background brightness. 
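In code, the masking step just described reduces to a few array operations on the BANE background and noise maps; the sketch below is illustrative only (the names are not those of our actual pipeline), and the temperature conversion and stacking that follow are described next.
\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_dilation

def mask_bright_sources(image, background, rms, threshold=5.0, grow=2):
    """Blank all emission exceeding `threshold` times the local noise
    after background subtraction, then grow each masked region by
    `grow` pixels to suppress faint rings around the masks."""
    snr = (image - background) / rms
    mask = binary_dilation(snr > threshold, iterations=grow)
    return np.where(mask, np.nan, image), mask

# masked_map, mask = mask_bright_sources(zea_image, bane_bkg, bane_rms)
\end{verbatim}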
Finally, the map is converted to temperature using the associated dirty beam map that describes the major and minor axis variation of the beam. We then proceed to stack all LRG pairs from the Max \SI{15}{\mega \parsec} catalogue that overlap with the images, of which there are 645,950 unique pairs. All stacks are weighted by the inverse square of the local noise. In \autoref{fig:GLEAMstacks}, we show the results of the GLEAM stacking with the Max \SI{15}{\mega \parsec} catalogue. This residual image is noticeably different from the one presented in V2021. In the first instance, there is no obvious, large-scale region of excess emission. In V2021, this excess region spanned the length of the intercluster region, and surprisingly was wider than it was long. The residual image is also highly uniform, again differing from V2021 where all the residual images, including the null tests, displayed a distinctive large-scale pattern. Curiously, the noise in this image is at \SI{87}{\milli \kelvin}, which is higher than reported in V2021 even though we stack a much larger number of LRG pairs.\footnote{Note that the stated value in V2021 was calculated assuming the average noise in the original images reduced by a factor of $1 / \sqrt{N}$ when stacking $N$ LRG pairs, rather than being measured directly.} The one-dimensional profiles similarly display little evidence of excess emission in the residual, with the exception of a peak that reaches $3 \sigma$ in the integrated profile, at $x \approx -0.5$, and which has a width very slightly wider than the effective resolution. There are at least three other peaks of similar magnitude and size throughout the residual image that cannot be attributed to intercluster cosmic web emission by reason of their location in the map, and so we must conclude that this peak is unexceptional. Ultimately, we are unable to reproduce the broad and extended excess emission signal found in V2021, even when using the same data set, raising the question of whether these differences in results are due to the stacking procedure. In \autoref{sec:rosat}, we perform a similar stacking procedure on the ROSAT broad X-ray data, as was performed in V2021. In this case, however, we detect a strong $12 \sigma$ signal for the Max \SI{15}{\mega \parsec} catalogue. This confirms the detection of V2021 for this data set, provides us with confidence that our stacking and model subtraction processes will detect excess emission when it is present, and suggests that the discrepancy in results arises elsewhere in the analysis. \section{Conclusion} We have attempted to reproduce the detection of excess emission spanning LRG pairs in low frequency radio data, as reported by V2021, and which they attributed to synchrotron emission along filaments spanning pairs of close-proximity clusters and galaxy groups. To reproduce their work, we have adhered very closely to their methodology: using the same LRG catalogue and selection criteria for pairs, stacking radio images at \SI{118.5}{\mega \hertz}, and modelling the LRG and cluster contribution in the same way as V2021. We differ from V2021 primarily in that we use the upgraded MWA Phase II array, which has almost twice the resolution of the Phase I instrument used in V2021, and in that our calibration, imaging and point-source subtraction pipelines utilised improved workflows that have been developed since the original GLEAM survey. We have not been able to reproduce their result.
Indeed, we have not been able to reproduce their result across a number of LRG pair catalogues, including the original abridged catalogue used in V2021, as well as a much larger catalogue that uses the full range of LRG pairs that meet the original selection criteria of V2021. We reach noise levels in our final stacks consistently lower than those of V2021, and more than twice as deep when using the full range of available LRG pairs. At these noise levels, their reported filamentary temperature should appear as approximately an $8 \sigma$ detection. Our residual stacks, however, are consistent with noise. Our biggest concern with using MWA Phase II is the potential that we resolve out large, extended structures. However, we have shown that we are at least as sensitive to extended sources as Phase I out to $\sim$\SI{125}{\arcminute} thanks to our extra convolution step, and that even for extended emission up to the maximum separation of \SI{180}{\arcminute}, the likely shape and structure of this emission will reduce the effects of resolving out structure. Moreover, we have provided results of an additional LRG pair catalogue, with sources separated by \SIrange[range-phrase=\ to\ ]{15}{60}{\arcminute}, that mitigates these concerns; the stacking results of this catalogue reach noise levels lower than those of V2021 and yet still do not reproduce their observed excess emission. In addition, we have returned to the original GLEAM survey data where we have performed stacking using the expanded Max \SI{15}{\mega \parsec} LRG pair catalogue. Whilst we do find an isolated peak at just above $3 \sigma$ significance, we find this to be an unremarkable feature of the residuals and certainly not the broad, extended excess emission found in V2021. This non-detection is in spite of clearly reproducing the excess emission after stacking the ROSAT broad X-ray data, giving us good confidence in our stacking and modelling processes. If our results hold true, we have provided in this work the strongest limits on synchrotron emission from intercluster filaments. However, the discrepancy with the work of V2021 is concerning and begs explanation. Whilst our Phase II results alone left open the possibility that this discrepancy arose due to a real, intrinsic property of the emission, our inability to reproduce the results additionally with GLEAM points to a much more likely possibility: that an error has been made in these detection attempts either by V2021 or ourselves. To this end, we are making publicly available the images of our fields, our stacking and modelling code, and the stacked images themselves, in the hope that if we have indeed erred, it can be quickly identified. Given the significance of the V2021 result, and the surprising implications for our understanding of cosmic magnetism, there is a pressing need to reproduce their detection. \section*{Acknowledgements} This scientific work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre which is supported by the Western Australian and Australian Governments. This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.
We acknowledge the use of NASA's SkyView facility (\url{http://skyview.gsfc.nasa.gov}) located at NASA Goddard Space Flight Center. \bibliographystyle{pasa-mnras}
\section{Introduction} In this paper we present and analyze a high-order time discontinuous Galerkin finite element method for the time integration of second order differential problems, such as those stemming from, e.g., elastic wave propagation phenomena. Classical approaches for the time integration of second order differential systems employ implicit and explicit finite differences, leap-frog, Runge-Kutta or Newmark schemes, see e.g. \cite{Ve07,Bu08,QuSaSa07} for a detailed review. In computational seismology, explicit time integration schemes are nowadays preferred to implicit ones, due to their low computational cost and ease of implementation. Indeed, although unconditionally stable, implicit methods are typically computationally expensive. The main drawback of explicit methods is that they are conditionally stable and the choice of the time step imposed by the Courant-Friedrichs-Lewy (CFL) condition can sometimes be a great limitation. To overcome this limitation one can employ local time stepping (LTS) algorithms \cite{GrMi13,DiGr09,CoFoJo03,Dumbser2007arbitrary}, for which the CFL condition is imposed element-wise, leading to an optimal choice of the time step. The main drawback of this approach is the additional synchronization process that one needs to take into account for a correct propagation of the wave field from one element to the other. In this work, we present an implicit time integration method based on a discontinuous Galerkin (DG) approach. DG methods \cite{ReedHill73,Lesaint74} were originally developed to approximate hyperbolic problems \textit{in space} \cite{ReedHill73}, and were then generalized to elliptic and parabolic equations \cite{wheeler1978elliptic,arnold1982interior,HoScSu00,CockKarnShu00, riviere2008discontinuous,HestWar,DiPiEr}. We refer the reader to \cite{riviere2003discontinuous,Grote06} for the application of DG methods to scalar wave equations and to \cite{Dumbser2007arbitrary,WiSt2010,antonietti2012non, ferroni2016dispersion,antonietti2016stability,AnMa2018, Antonietti_etal2018,mazzieri2013speed,AnMaMi20,DeGl15} for the elastodynamics problem. The DG approach has also been used to approximate initial-value problems, where the DG paradigm shows some advantages with respect to other implicit schemes, such as Johnson's method, see e.g. \cite{JOHNSON1993,ADJERID2011}. Indeed, since the information follows the positive direction of time, the solution in the time slab $[t_n,t_{n+1}]$ depends only on the solution at the time instant $t_n^-$. Employing DG methods in both space and time dimensions leads to fully discontinuous space-time formulations, such as those in \cite{Delfour81,Vegt2006,WeGeSc2001,AnMaMi20}. More generally, space-time methods have been widely employed for hyperbolic problems. Indeed, high order approximations in both space and time are simple to obtain, achieving spectral convergence of the space-time error through $p$-refinement. In addition, stability can be achieved with local CFL conditions, as in \cite{MoRi05}, increasing computational efficiency. Space-time methods can be divided according to which type of space-time partition they employ. In structured techniques \cite{CangianiGeorgoulisHouston_2014,Tezduyar06}, the space-time grid is the Cartesian product of a spatial mesh and a time partition. Examples of applications to second order hyperbolic problems can be found in \cite{StZa17,ErWi19,BaMoPeSc20}. Unstructured techniques \cite{Hughes88,Idesman07} employ grids generated by considering time as an additional dimension.
See \cite{Yin00,AbPeHa06,DoFiWi16} for examples of applications to first order hyperbolic problems. Unstructured methods may have better properties; however, they suffer from the difficulty of generating the mesh, especially for three-dimensional problems. Among unstructured methods, we mention Trefftz techniques \cite{KrMo16,BaGeLi17,BaCaDiSh18}, in which the numerical solution is sought in the Trefftz space, and the tent-pitching paradigm \cite{GoScWi17}, in which the space-time elements are progressively built on top of each other in order to guarantee the stability of the numerical scheme. Recently, a combination of Trefftz and tent-pitching techniques has been proposed in \cite{MoPe18,PeScStWi20}, with application to first order hyperbolic problems. Finally, a typical approach for second order differential equations consists in reformulating them as a system of first order hyperbolic equations. The velocity is thus treated as an additional unknown, which doubles the dimension of the final linear system, cf. \cite{Delfour81,Hughes88,FRENCH1993,JOHNSON1993,ThHe2005}. The motivation for this work is to overcome the limitations of the space-time DG method presented in \cite{AnMaMi20} for elastodynamics problems. This method integrates the second order (in time) differential problem stemming from the spatial discretization. The resulting stiffness matrix is ill-conditioned, making the use of iterative solvers quite difficult. Hence, direct methods are used, which require storing the stiffness matrix and greatly reduce the range of problems that can be tackled by that method. Here, we propose to change the way the time integration is obtained, resulting in a well-conditioned system matrix and making iterative methods employable and complex 3D problems solvable. In this work, we present a high order discontinuous Galerkin method for the time integration of systems of second-order differential equations stemming from the space discretization of the visco-elastodynamics problem. The differential (in time) problem is first reformulated as a first order system; then, by imposing only weak continuity of tractions across time slabs, we derive a discontinuous Galerkin method. We show the well-posedness of the proposed method through the definition of a suitable energy norm, and we prove stability and \emph{a priori} error estimates. The obtained scheme is implicit, unconditionally stable and super-optimal in terms of accuracy with respect to the integration time step. In addition, the solution strategy adopted for the associated algebraic linear system reduces the complexity and computational cost of the solution, making three-dimensional problems (in space) affordable. The paper is organized as follows. In Section \ref{Sc:Method} we formulate the problem, present its numerical discretization and show that it is well-posed. The stability and convergence properties of the method are discussed in Section \ref{Sc:Convergence}, where we present \textit{a priori} estimates in a suitable norm. In Section \ref{Sc:AlgebraicFormulation}, the equations are rewritten as the corresponding algebraic linear system and a suitable solution strategy is shown. Finally, in Section \ref{Sc:NumericalResults}, the method is validated through several numerical experiments both in two and three dimensions.
Throughout the paper, we denote by $||\aa||$ the Euclidean norm of a vector $\aa \in \mathbb{R}^d$, $d\ge 1$ and by $||A||_{\infty} = \max_{i=1,\dots,m}\sum_{j=1}^n |a_{ij}|$, the $\ell^{\infty}$-norm of a matrix $A\in\mathbb{R}^{m\times n}$, $m,n\ge1$. For a given $I\subset\mathbb{R}$ and $v:I\rightarrow\mathbb{R}$ we denote by $L^p(I)$ and $H^p(I)$, $p\in\mathbb{N}_0$, the classical Lebesgue and Hilbert spaces, respectively, and endow them with the usual norms, see \cite{AdamsFournier2003}. Finally, we indicate the Lebesgue and Hilbert spaces for vector-valued functions as $\bm{L}^p(I) = [L^p(I)]^d$ and $\bm{H}^p(I) = [H^p(I)]^d$, $d\ge1$, respectively. \section{Discontinuous Galerkin approximation of a second-order initial value problem} \label{Sc:Method} For $T>0$, we consider the following model problem \cite{kroopnick}: find $\bm{u}(t) \in\bm{H}^2(0,T]$ such that \begin{equation} \label{Eq:SecondOrderEquation} \begin{cases} P\ddot{\bm{u}}(t) + L\dot{\bm{u}}(t)+K\bm{u}(t) = \bm{f}(t) \qquad \forall\, t \in (0,T], \\ \bm{u}(0) = \hat{\bm{u}}_0, \\ \dot{\bm{u}}(0) = \hat{\bm{u}}_1, \end{cases} \end{equation} where $P,L,K \in \mathbb{R}^{d\times d}$, $d\geq 1$ are symmetric, positive definite matrices, $\hat{\bm{u}}_0, \hat{\bm{u}}_1 \in \mathbb{R}^d$ and $\bm{f} \in \bm{L}^2(0,T]$. Then, we introduce a variable $\bm{w}:(0,T]\rightarrow\mathbb{R}^{d}$ that is the first derivative of $\bm{u}$, i.e. $\bm{w}(t) = \dot{\bm{u}}(t)$, and reformulate problem \eqref{Eq:SecondOrderEquation} as a system of first order differential equations: \begin{equation} \label{Eq:FirstOrderSystem1} \begin{cases} K\dot{\bm{u}}(t) - K\bm{w}(t) = \boldsymbol{0} &\forall\, t\in(0,T], \\ P\dot{\bm{w}}(t) +L\bm{w}(t) + K\bm{u}(t) = \bm{f}(t) &\forall\, t\in(0,T], \\ \bm{u}(0) = \hat{\bm{u}}_0, \\ \bm{w}(0) = \hat{\bm{u}}_1. \end{cases} \end{equation} Note that, since $K$ is a positive definite matrix, the first equation in \eqref{Eq:FirstOrderSystem1} is consistent with the definition of $\bm{w}$. By defining $\bm{z} = [\bm{u},\bm{w}]^T\in\mathbb{R}^{2d}$, $\bm{F}=[\bm{0},\bm{f}]^T\in\mathbb{R}^{2d}$, $\bm{z}_0 = [\hat{\bm{u}}_0,\hat{\bm{u}}_1]^T\in\mathbb{R}^{2d}$ and \begin{equation}\label{def:KA} \widetilde{K} = \begin{bmatrix} K & 0 \\ 0 & P \end{bmatrix}\in\mathbb{R}^{2d\times2d}, \quad A = \begin{bmatrix} 0 & -K \\ K & L \end{bmatrix}\in\mathbb{R}^{2d\times2d}, \end{equation} we can write \eqref{Eq:FirstOrderSystem1} as \begin{equation} \label{Eq:FirstOrderSystem2} \begin{cases} \tilde{K}\dot{\bm{z}}(t) + A\bm{z}(t) = \bm{F}(t) & \forall\, t\in(0,T], \\ \bm{z}(0) = \bm{z}_0. \end{cases} \end{equation} To integrate in time system \eqref{Eq:FirstOrderSystem2}, we first partition the interval $I=(0,T]$ into $N$ time-slabs $I_n = (t_{n-1},t_n]$ having length $\Delta t_n = t_n-t_{n-1}$, for $n=1,\dots,N$ with $t_0 = 0$ and $t_N = T$, as it is shown in Figure \ref{Fig:TimeDomain}. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{time_domain.png} \caption{Example of time domain partition (bottom). Zoom of the time domain partition: values $t_n^+$ and $t_n^-$ are also reported (top).}\label{Fig:TimeDomain} \end{figure} Next, we incrementally build (on $n$) an approximation of the exact solution $\bm{u}$ in each time slab $I_n$. 
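Before doing so, we note that, for instance, in the scalar case $d=1$ of a damped oscillator $\ddot{u}(t) + 2\xi\omega\dot{u}(t) + \omega^2 u(t) = f(t)$, i.e. $P=1$, $L=2\xi\omega$ and $K=\omega^2$ with $\xi,\omega>0$, the matrices in \eqref{def:KA} reduce to \begin{equation*} \widetilde{K} = \begin{bmatrix} \omega^2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad A = \begin{bmatrix} 0 & -\omega^2 \\ \omega^2 & 2\xi\omega \end{bmatrix}, \end{equation*} so that \eqref{Eq:FirstOrderSystem2} is the familiar first-order form of the oscillator, with displacement and velocity as unknowns.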
In the following we will use the notation \begin{equation*} (\bm{u},\bm{v})_I = \int_I \bm{u}(s)\cdot\bm{v}(s)\text{d}s, \quad \langle \bm{u},\bm{v} \rangle_t = \bm{u}(t)\cdot \bm{v}(t), \end{equation*} where $\aa\cdot\bm{b}$ stands for the Euclidean scalar product between two vectors $\aa,\bm{b}\in\mathbb{R}^d$. For (a regular enough) $\bm{v}$, we also denote the jump operator at $t_n$ by \begin{equation*} [\bm{v}]_n = \bm{v}(t_n^+) - \bm{v}(t_n^-) = \bm{v}^+ -\bm{v}^-, \quad \text{for } n\ge 0, \end{equation*} where \begin{equation*} \bm{v}(t_n^\pm) = \lim_{\epsilon\rightarrow 0^\pm}\bm{v}(t_n+\epsilon), \quad \text{for } n\ge 0. \end{equation*} We now focus on the generic interval $I_n$ and assume that the solution on $I_{n-1}$ is known. We multiply equation \eqref{Eq:FirstOrderSystem2} by a (regular enough) test function $\bm{v}(t)\in\mathbb{R}^{2d}$ and integrate in time over $I_n$, obtaining \begin{equation} \label{Eq:Weak1} (\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} = (\bm{F},\bm{v})_{I_n}. \end{equation} Next, since $\bm{u} \in\bm{H}^2(0,T]$ and $\bm{w} = \dot{\bm{u}}$, we have $\bm{z}\in\bm{H}^1(0,T]$. Therefore, we can add to \eqref{Eq:Weak1} the null term $\widetilde{K}[\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+)$, obtaining \begin{equation} \label{Eq:Weak2} (\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} +\widetilde{K}[\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+) = (\bm{F},\bm{v})_{I_n}. \end{equation} Summing over all time slabs, we define the bilinear form $\mathcal{A}:\bm{H}^1(0,T)\times\bm{H}^1(0,T)\rightarrow\mathbb{R}$ as \begin{equation} \label{Eq:BilinearForm} \mathcal{A}(\bm{z},\bm{v}) = \sum_{n=1}^N \Big[ (\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} \Big] + \sum_{n=1}^{N-1} \widetilde{K}[\bm{z}]_n\cdot\bm{v}(t_n^+) + \widetilde{K}\bm{z}(0^+)\cdot\bm{v}(0^+), \end{equation} and the linear functional $\mathcal{F}:\bm{L}^2(0,T)\rightarrow\mathbb{R}$ as \begin{equation} \label{Eq:LinearFunctional} \mathcal{F}(\bm{v}) = \sum_{n=1}^N (\bm{F},\bm{v})_{I_n} + \widetilde{K}\bm{z}_0\cdot\bm{v}(0^+), \end{equation} where we have used that $\bm{z}(0^-) = \bm{z}_0$. Now, we introduce the functional spaces \begin{equation} \label{Eq:PolynomialSpace} V_n^{r_n} = \{ \bm{z}:I_n\rightarrow\mathbb{R}^{2d} \text{ s.t. } \bm{z}\in[\mathcal{P}^{r_n}(I_n)]^{2d} \}, \end{equation} where $\mathcal{P}^{r_n}(I_n)$ is the space of polynomials of maximum degree $r_n$ defined on $I_n$, \begin{equation} \label{Eq:L2Space} \mathcal{V}^{\bm{r}} = \{ \bm{z}\in\bm{L}^2(0,T] \text{ s.t. } \bm{z}|_{I_n} = [\bm{u},\bm{w}]^T\in V_n^{r_n} \}, \end{equation} and \begin{equation} \label{Eq:CGSpace} \mathcal{V}_{CG}^{\bm{r}} = \{ \bm{z}\in[C^0(0,T]]^{2d} \text{ s.t. } \bm{z}|_{I_n} = [\bm{u},\bm{w}]^T\in V_n^{r_n} \text{ and } \dot{\bm{u}} = \bm{w} \}, \end{equation} where $\bm{r} = (r_1,\dots,r_N) \in \mathbb{N}^N$ is the polynomial degree vector. Before presenting the discontinuous Galerkin formulation of problem~\eqref{Eq:FirstOrderSystem2}, we need to introduce, as in \cite{ScWi2010}, the following operator $\mathcal{R}$, which is used only for the purpose of the analysis and does not need to be computed in practice.
\begin{mydef} \label{Def:Reconstruction} We define a reconstruction operator $\mathcal{R}:\mathcal{V}^{\bm{r}}\rightarrow\mathcal{V}^{\bm{r}}_{CG}$ such that \begin{equation} \label{Eq:Reconstruction} \begin{split} (\mathcal{R}'(\bm{z}),\bm{v})_{I_n} &= (\bm{z}',\bm{v})_{I_n} + [\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+) \quad \forall\, \bm{v}\in[\mathcal{P}^{r_n}(I_n)]^{2d}, \\ \mathcal{R}(\bm{z})(t_{n-1}^+) &= \bm{z}(t_{n-1}^-) \quad \forall\, n =1,\dots,N. \end{split} \end{equation} \end{mydef} \noindent Now, we can properly define the functional space \begin{equation} \label{Eq:DGSpace} \begin{split} \mathcal{V}_{DG}^{\bm{r}} = \{& \bm{z}\in\mathcal{V}^{\bm{r}} \text{ s.t. } \exists\, \hat{\bm{z}} = \mathcal{R}(\bm{z}) \in\mathcal{V}_{CG}^{\bm{r}}\}, \end{split} \end{equation} and introduce the DG formulation of \eqref{Eq:FirstOrderSystem2}, which reads as follows. Find $\bm{z}_{DG}\in\mathcal{V}_{DG}^{\bm{r}}$ such that \begin{equation} \label{Eq:WeakProblem} \mathcal{A}(\bm{z}_{DG},\bm{v}) = \mathcal{F}(\bm{v}) \qquad \forall\, \bm{v}\in\mathcal{V}_{DG}^{\bm{r}}. \end{equation} For the forthcoming analysis we introduce the following mesh-dependent energy norm. \begin{myprop} \label{Pr:Norm} The function $|||\cdot|||:\mathcal{V}_{DG}^{\bm{r}}\rightarrow\mathbb{R}^{+}$ defined as \begin{equation} \label{Eq:Norm} |||\bm{z}|||^2 = \sum_{n=1}^N ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1}(\widetilde{K}^{\frac{1}{2}}[\bm{z}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(T^-))^2, \end{equation} with $ \widetilde{L} = \begin{bmatrix} 0 & 0 \\ 0 & L^{\frac{1}{2}} \end{bmatrix}\in\mathbb{R}^{2d\times2d}, $ is a norm on $\mathcal{V}_{DG}^{\bm{r}}$. \end{myprop} \begin{proof} It is clear that homogeneity and subadditivity hold. In addition, it is trivial that if $\bm{z} = 0$ then $|||\bm{z}|||=0$. Conversely, suppose $|||\bm{z}||| = 0$ and observe that \begin{equation*} ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}=||L^{\frac{1}{2}}\bm{w}||_{\bm{L}^2(I_n)}=0 \quad \forall n=1,\dots,N. \end{equation*} Since $L$ is positive definite we have $\bm{w} = \textbf{0} $ on $[0,T]$. Hence, $\bm{w}'=\textbf{0}$ on $[0,T]$. Using this result in \eqref{Eq:Reconstruction} and writing $\bm{v} = [\bm{v}_1,\bm{v}_2]^T$, we get \begin{equation*} (\hat{\bm{w}}',\bm{v}_2)_{I_n} = 0 \quad \forall \bm{v}_2 \in [\mathcal{P}^{r_n}(I_n)]^d \text{ and }\forall n=1,\dots,N. \end{equation*} Therefore $\hat{\bm{w}}'=\textbf{0}$ on $[0,T]$. In addition, from \eqref{Eq:Reconstruction} we get $\textbf{0}=\bm{w}(t_1^-)=\hat{\bm{w}}(t_1^+)$, which, combined with the previous result, gives $\hat{\bm{w}}=\textbf{0}$ on $[0,T]$. Now, since $\hat{\bm{z}}\in \mathcal{V}^{\bm{r}}_{CG}$, we have $\hat{\bm{u}}' = \hat{\bm{w}} = \textbf{0}$ on $[0,T]$. Therefore, using again \eqref{Eq:Reconstruction}, we get \begin{equation*} (\bm{u}',\bm{v}_1)_{I_n} + [\bm{u}]_{n-1}\cdot \bm{v}_1(t_{n-1}^+)= 0 \quad \forall \bm{v}_1 \in [\mathcal{P}^{r_n}(I_n)]^d \text{ and }\forall n=1,\dots,N. \end{equation*} Take $n = N$, then $[\bm{u}]_{N-1}=\textbf{0}$ (from $|||\bm{z}||| = 0$) and therefore $\bm{u}'=\textbf{0}$ on $I_N$. Combining this result with $\bm{u}(T^-)=\textbf{0}$ we get $\bm{u}=\textbf{0}$ on $I_N$, from which we derive $\textbf{0}=\bm{u}(t_{N-1}^+)=\bm{u}(t_{N-1}^-)$. Iterating until $n=2$ we get $\bm{u}=\textbf{0}$ on $I_n$, for any $n=2,\dots,N$.
Moreover \begin{equation*} \textbf{0}=\bm{u}(t_1^+)=\bm{u}(t_1^-)=\hat{\bm{u}}(t_1^+)=\hat{\bm{u}}(t_1^-)=\hat{\bm{u}}(0^+)=\bm{u}(0^-), \end{equation*} since $\hat{\bm{u}}' = \textbf{0}$ on $I_1$. Using again $|||\bm{z}|||=0$ we get $\bm{u}(0^+)=\textbf{0}$, hence $[\bm{u}]_0=\textbf{0}$. Taking $n=1$ we get $\bm{u}=\textbf{0}$ on $I_1$. Thus, $\bm{z}=\textbf{0}$ on $[0,T]$. \end{proof} The following result states the well-posedness of \eqref{Eq:WeakProblem}. \begin{myprop} \label{Pr:WellPosedness} Problem~\eqref{Eq:WeakProblem} admits a unique solution $\bm{z}_{DG} \in \mathcal{V}_{DG}^{\bm{r}}$. \end{myprop} \begin{proof} By taking $\bm{v} = \bm{z}$ we get \begin{equation*} \mathcal{A}(\bm{z},\bm{z}) = \sum_{n=1}^N \left[ (\widetilde{K}\dot{\bm{z}},\bm{z})_{I_n} + (A\bm{z},\bm{z})_{I_n} \right] + \sum_{n=1}^{N-1} \widetilde{K}[\bm{z}]_n\cdot\bm{z}(t_n^+) + (\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2. \end{equation*} Since $\widetilde{K}$ is symmetric, integrating by parts we have that \begin{equation*} (\widetilde{K}\dot{\bm{z}},\bm{z})_{I_n} = \frac{1}{2}\langle \widetilde{K}\bm{z},\bm{z} \rangle_{t_n^-} - \frac{1}{2}\langle \widetilde{K}\bm{z},\bm{z} \rangle_{t_{n-1}^+}. \end{equation*} Then, the second term can be rewritten as \begin{equation*} (A\bm{z},\bm{z})_{I_n} = (-K\bm{w},\bm{u})_{I_n} + (K\bm{u},\bm{w})_{I_n} + (L\bm{w},\bm{w})_{I_n} = ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2, \end{equation*} cf. also \eqref{def:KA}. Therefore \begin{equation*} \mathcal{A}(\bm{z},\bm{z}) = \sum_{n=1}^N ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1} (\widetilde{K}^{\frac{1}{2}}[\bm{z}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(T^-))^2 = |||\bm{z}|||^2. \end{equation*} The result follows from Proposition~\ref{Pr:Norm}, the bilinearity of $\mathcal{A}$ and the linearity of $\mathcal{F}$. \end{proof} \section{Convergence analysis}\label{Sc:Convergence} In this section, we first present an \textit{a-priori} stability bound for the numerical solution of \eqref{Eq:WeakProblem} that can be easily obtained by direct application of the Cauchy-Schwarz inequality. Then, we use the latter to prove an optimal error estimate for the numerical error in the energy norm \eqref{Eq:Norm}. \begin{myprop} Let $\bm{f} \in \bm{L}^2(0,T]$, $\hat{\bm{u}}_0, \hat{\bm{u}}_1 \in \mathbb{R}^d$, and let $\bm{z}_{DG} \in \mathcal{V}_{DG}^{\bm{r}}$ be the solution of \eqref{Eq:WeakProblem}; then it holds \begin{equation} \label{Eq:Stability} |||\bm{z}_{DG}||| \lesssim \Big(\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2+(K^{\frac{1}{2}}\hat{\bm{u}}_0)^2+(P^{\frac{1}{2}}\hat{\bm{u}}_1)^2\Big)^{\frac{1}{2}}. \end{equation} \end{myprop} \begin{proof} From the definition of the norm $|||\cdot|||$ given in \eqref{Eq:Norm} and the arithmetic-geometric inequality we have \begin{equation*} \begin{split} |||\bm{z}_{DG}|||^2 &= \mathcal{A}(\bm{z}_{DG},\bm{z}_{DG}) = \mathcal{F}(\bm{z}_{DG}) = \sum_{n=1}^N (\bm{F},\bm{z}_{DG})_{I_n} + \widetilde{K}\bm{z}_0\cdot\bm{z}_{DG}(0^+) \\ &\lesssim \frac{1}{2}\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}\sum_{n=1}^N ||\widetilde{L}\bm{z}_{DG}||_{\bm{L}^2(I_n)}^2 + (\widetilde{K}^{\frac{1}{2}} \bm{z}_{0})^2 + \frac{1}{4}(\widetilde{K}^{\frac{1}{2}} \bm{z}_{DG}(0^+))^2 \\ &\lesssim \frac{1}{2}\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + (\widetilde{K}^{\frac{1}{2}} \bm{z}_{0})^2 + \frac{1}{2}|||\bm{z}_{DG}|||^2.
\end{split} \end{equation*} Hence, \begin{equation*} |||\bm{z}_{DG}|||^2 \lesssim \sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + (K^{\frac{1}{2}} \hat{\bm{u}}_{0})^2 + (P^{\frac{1}{2}}\hat{\bm{u}}_{1})^2. \end{equation*} \end{proof} Before deriving an a priori estimate for the numerical error we introduce some preliminary results. We refer the interested reader to \cite{ScSc2000} for further details. \begin{mylemma} \label{Le:Projector} Let $I=(-1,1)$ and let $u\in L^2(I)$ be continuous at $t=1$; the projector $\Pi^r u \in \mathcal{P}^r(I)$, $r\in\mathbb{N}_0$, defined by the $r+1$ conditions \begin{equation} \label{Eq:Projector} \Pi^r u (1) = u(1), \qquad (u-\Pi^r u,q)_{I} = 0 \quad\forall\, q\in\mathcal{P}^{r-1}(I), \end{equation} is well posed. Moreover, let $I=(a,b)$, $\Delta t = b-a$, $r\in\mathbb{N}_0$ and $u\in H^{s_0+1}(I)$ for some $s_0\in\mathbb{N}_0$. Then \begin{equation} \label{Eq:ProjectionError} ||u-\Pi^r u||_{L^2(I)}^2 \le C\bigg(\frac{\Delta t}{2}\bigg)^{2(s+1)}\frac{1}{r^2}\frac{(r-s)!}{(r+s)!}||u^{(s+1)}||_{L^2(I)}^2 \end{equation} for any integer $0\le s \le \min(r,s_0)$. The constant $C$ depends on $s_0$ but is independent of $r$ and $\Delta t$. \end{mylemma} Proceeding similarly to \cite{ScSc2000}, we now prove the following preliminary estimate for the derivative of the projection $\Pi^r u$. \begin{mylemma} \label{Le:DerivativeProjectionErrorInf} Let $u\in H^1(I)$ be continuous at $t=1$. Then, it holds \begin{equation} \label{Eq:DerivativeProjectionErrorInf} ||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \le C(r+1)\inf_{q \in \mathcal{P}^r(I)} \Bigg\{||u'-q'||_{L^2(I)}^2 \Bigg\}. \end{equation} \end{mylemma} \begin{proof} Let $u' =\sum_{i=1}^{\infty} u_i L'_i$ be the expansion of $u'$ in terms of derivatives of Legendre polynomials, with coefficients $u_i\in\mathbb{R}$, $i=1,\dots,\infty$. Then (cf. Lemma 3.2 in \cite{ScSc2000}) \begin{equation*} \big(\Pi^r u\big)'=\sum_{i=1}^{r-1} u_i L'_i + \sum_{i=r}^{\infty} u_i L'_r. \end{equation*} Now, for $r\in\mathbb{N}_0$, we denote by $\widehat{P}^r$ the $L^2(I)$-projection onto $\mathcal{P}^r(I)$. Hence, \begin{equation*} u' - \big(\Pi^r u\big)'= \sum_{i=r}^{\infty} u_i L'_i - \sum_{i=r}^{\infty} u_i L'_r = \sum_{i=r+1}^{\infty} u_i L'_i - \sum_{i=r+1}^{\infty} u_i L'_r = u' - \big(\widehat{P}^r u\big)' - \sum_{i=r+1}^{\infty} u_i L'_r. \end{equation*} Recalling that $||L'_r||_{L^2(I)}^2 = r(r+1)$, we have \begin{equation*} ||u' - \big(\Pi^r u\big)'||_{L^2(I)}^2 \le 2||u' - \big(\widehat{P}^r u\big)'||_{L^2(I)}^2 + 2\Bigg|\sum_{i=r+1}^{\infty} u_i\Bigg|^2 r(r+1). \end{equation*} Finally, we use that $ \Bigg|\sum_{i=r+1}^{\infty} u_i\Bigg| \le \frac{C}{r}||u'||_{L^2(I)} $ (cf. Lemma~3.6 in \cite{ScSc2000}) and get \begin{equation} \label{Eq:DerivativeProjectionError} ||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \le C\big\{||u'-\big(\widehat{P}^r u\big)'||_{L^2(I)}^2+(r+1)||u'||_{L^2(I)}^2 \big\}. \end{equation} Now consider an arbitrary $q\in\mathcal{P}^r(I)$ and apply \eqref{Eq:DerivativeProjectionError} to $u-q$. The thesis follows from the reproducing properties of the projectors $\Pi^r$ and $\widehat{P}^r$ on $\mathcal{P}^r(I)$ and from the fact that $||u-\widehat{P}^r u||_{L^2(I)} \le ||u-q||_{L^2(I)} $ for any $q\in\mathcal{P}^r(I)$. \end{proof} By employing Proposition~3.9 in \cite{ScSc2000} and Lemma \ref{Le:DerivativeProjectionErrorInf} we obtain the following result. \begin{mylemma} \label{Le:DerivativeProjectionError} Let $I=(a,b)$, $\Delta t = b-a$, $r\in\mathbb{N}_0$ and $u\in H^{s_0+1}(I)$ for some $s_0\in\mathbb{N}_0$.
Then \begin{equation*} ||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \lesssim \bigg(\frac{\Delta t}{2}\bigg)^{2(s+1)}(r+2)\frac{(r-s)!}{(r+s)!}||u^{(s+1)}||_{L^2(I)}^2 \end{equation*} for any integer $0\le s \le \min(r,s_0)$. The hidden constants depend on $s_0$ but are independent of $r$ and $\Delta t$. \end{mylemma} Finally, we observe that the bilinear form appearing in formulation \eqref{Eq:WeakProblem} is strongly consistent, i.e. \begin{equation} \label{Eq:Consistency} \mathcal{A}(\bm{z}-\bm{z}_{DG},\bm{v}) = 0 \qquad \forall\,\bm{v}\in\mathcal{V}^{\bm{r}}_{DG}. \end{equation} We now state the following convergence result. \begin{myth} \label{Th:ErrorEstimate} Let $\hat{\bm{u}}_{0},\hat{\bm{u}}_{1} \in \mathbb{R}^{d}$. Let $\bm{z}$ be the solution of problem~\eqref{Eq:FirstOrderSystem2} and let $\bm{z}_{DG}\in\mathcal{V}_{DG}^{\bm{r}}$ be its finite element approximation. If $\bm{z}|_{I_n}\in \bm{H}^{s_n}(I_n)$, for any $n=1,\dots,N$ with $s_n\geq2$, then it holds \begin{equation} \label{Eq:ErrorEstimate} |||\bm{z}-\bm{z}_{DG}||| \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{\mu_n+\frac{1}{2}}\Bigg((r_n+2)\frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}\Bigg)^{\frac{1}{2}}||\bm{z}||_{H^{\mu_n+1}(I_n)}, \end{equation} where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$, and the hidden constants depend on the norms of the matrices $L$, $K$ and $A$. \end{myth} \begin{proof} We set $\bm{e} = \bm{z} - \bm{z}_{DG} = (\bm{z} - \Pi_I^r \bm{z}) + (\Pi_I^r \bm{z} - \bm{z}_{DG}) = \bm{e}^{\pi} + \bm{e}^{h}$. Hence we have $|||\bm{e}||| \le |||\bm{e}^{\pi}||| + |||\bm{e}^{h}|||$. Employing the properties of the projector \eqref{Eq:Projector}, estimate \eqref{Eq:ProjectionError} and Lemma~\ref{Le:DerivativeProjectionError}, we can bound $|||\bm{e}^{\pi}|||$ as \begin{equation*} \begin{split} |||\bm{e}^{\pi}|||^2 &= \sum_{n=1}^N ||\widetilde{L}\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{e}^{\pi}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1}(\widetilde{K}^{\frac{1}{2}}[\bm{e}^{\pi}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{e}^{\pi}(T^-))^2 \\ & = \sum_{n=1}^N ||\widetilde{L}\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2} \sum_{n=1}^N \Bigg(-\int_{t_{n-1}}^{t_{n}}\widetilde{K}^{\frac{1}{2}}\dot{\bm{e}}^{\pi}(s)ds\Bigg)^2 \\ & \lesssim \sum_{n=1}^N \Big(||\bm{e}^{\pi}||_{L^2(I_n)}^2 + \Delta t_n ||\dot{\bm{e}}^{\pi}||_{L^2(I_n)}^2 \Big) \\ & \lesssim \sum_{n=1}^N \bigg[\bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+2} \frac{1}{r_n^2} + \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+1} (r_n+2)\bigg] \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||_{H^{\mu_n+1}(I_n)}^2 \\ & \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+1} (r_n+2) \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||_{H^{\mu_n+1}(I_n)}^2, \end{split} \end{equation*} where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$. For the term $|||\bm{e}^{h}|||$ we use \eqref{Eq:Consistency} and integrate by parts to get \begin{equation*} \begin{split} |||\bm{e}^{h}|||^2 &= \mathcal{A}(\bm{e}^h,\bm{e}^h) = -\mathcal{A}(\bm{e}^{\pi},\bm{e}^h) \\ & = -\Big[\sum_{n=1}^N (\widetilde{K}\dot{\bm{e}}^{\pi},\bm{e}^h)_{I_n} + \sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{e}^{\pi}]_n\cdot\bm{e}^h(t_n^+) + \widetilde{K}\bm{e}^{\pi}(0^+)\cdot\bm{e}^h(0^+)\Big] \\ & = -\Big[\sum_{n=1}^N (\widetilde{K}\bm{e}^{\pi},\dot{\bm{e}}^h)_{I_n} + \sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{e}^{h}]_n\cdot\bm{e}^{\pi}(t_n^-) - \widetilde{K}\bm{e}^{\pi}(T^-)\cdot\bm{e}^h (T^-)\Big].
\end{split} \end{equation*} Thanks to \eqref{Eq:Projector}, only the second term of the last equation above does not vanish. Thus, we employ the Cauchy-Schwarz and arithmetic-geometric inequalities to obtain \begin{equation*} |||\bm{e}^{h}|||^2 = -\sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} \lesssim \frac{1}{2} \sum_{n=1}^N ||\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2} \sum_{n=1}^N ||\widetilde{L}\bm{e}^{h}||_{L^2(I_n)}^2 \lesssim \frac{1}{2} \sum_{n=1}^N ||\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2}|||\bm{e}^h|||^2. \end{equation*} Hence, \begin{equation*} |||\bm{e}^{h}|||^2 \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+2} \frac{1}{r_n^2} \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||_{H^{\mu_n+1}(I_n)}^2, \end{equation*} where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$, and the thesis follows. \end{proof} \section{Algebraic formulation} \label{Sc:AlgebraicFormulation} In this section we derive the algebraic formulation stemming from the DG discretization of \eqref{Eq:WeakProblem} on the time slab $I_n$. We consider on $I_n$ a local polynomial degree $r_n$. In practice, since we use discontinuous functions, we can compute the numerical solution one time slab at a time, assuming that the initial conditions stemming from the previous time slab are known. Hence, problem \eqref{Eq:WeakProblem} reduces to: find $\bm{z}\in V_n^{r_n}$ such that \begin{equation} \label{Eq:WeakFormulationReduced} (\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} + \langle\widetilde{K}\bm{z},\bm{v}\rangle_{t_{n-1}^+} = (\bm{F},\bm{v})_{I_n} + \widetilde{K}\bm{z}(t_{n-1}^-)\cdot\bm{v}({t_{n-1}^+}), \quad \forall\,n=1,\dots,N. \end{equation} Introducing a basis $\{\psi^{\ell}(t)\}_{{\ell}=1,\dots,r_n+1}$ for the polynomial space $\mathcal{P}^{r_n}(I_n)$ we define a vectorial basis $\{ \boldsymbol{\Psi}_i^{\ell}(t) \}_{i=1,\dots,2d}^{{\ell}=1,\dots,r_n+1}$ of $V_n^{r_n}$ where \begin{equation*} \{ \boldsymbol{\Psi}_i^{\ell}(t) \}_j = \begin{cases} \psi^{\ell}(t) & {\ell} = 1,\dots,r_n+1, \quad \text{if } i=j, \\ 0 & {\ell} = 1,\dots,r_n+1, \quad \text{if } i\ne j. \end{cases} \end{equation*} Then, we set $D_n=d(r_n+1)$ and write the trial function $\bm{z}_n = \bm{z}_{DG}|_{I_n} \in V_n^{r_n}$ as \begin{equation*} \bm{z}_n(t) = \sum_{j=1}^{2d} \sum_{m=1}^{r_n+1} \alpha_{j}^m \boldsymbol{\Psi}_j^m(t), \end{equation*} where $\alpha_{j}^m\in\mathbb{R}$ for $j=1,\dots,2d$, $m=1,\dots,r_n+1$. Writing \eqref{Eq:WeakFormulationReduced} for any test function $\boldsymbol{\Psi}_i^{\ell}(t)$, $i=1,\dots,2d$, $\ell=1,\dots,r_n+1$, we obtain the linear system \begin{equation} \label{Eq:LinearSystem} M\bm{Z}_n = \bm{G}_n, \end{equation} where $\bm{Z}_n,\bm{G}_n \in \mathbb{R}^{2D_n}$ are the vectors of expansion coefficients of the numerical solution and of the right-hand side on the interval $I_n$ with respect to the chosen basis. Here $M\in\mathbb{R}^{2D_n\times2D_n}$ is the local stiffness matrix defined as \begin{equation} \label{Eq:StiffnessMatrix} M = \widetilde{K} \otimes (N^1+N^3) + A \otimes N^2 = \begin{bmatrix} K \otimes (N^1 + N^3) & -K \otimes N^2 \\ K \otimes N^2 & P \otimes (N^1+N^3) + L \otimes N^2 \end{bmatrix}, \end{equation} where $N^1,N^2,N^3 \in \mathbb{R}^{(r_n+1)\times(r_n+1)}$ are the local time matrices \begin{equation} \label{Eq:TimeMatrices} N_{{\ell}m}^1 = (\dot{\psi}^m,\psi^{\ell})_{I_n}, \qquad N_{{\ell}m}^2 = (\psi^m,\psi^{\ell})_{I_n}, \qquad N_{{\ell}m}^3 = \langle\psi^m,\psi^{\ell}\rangle_{t_{n-1}^+}, \end{equation} for $\ell,m=1,\dots,r_n+1$.
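For illustration purposes only, the local time matrices in \eqref{Eq:TimeMatrices} can be assembled as in the following minimal Python sketch (independent of the SPEED implementation), which assumes a Legendre basis mapped onto $I_n$ and a Gauss--Legendre quadrature rule:
\begin{verbatim}
# Minimal sketch: local time matrices of Eq. (Eq:TimeMatrices) for a Legendre
# basis psi^l, l = 1,...,r+1, mapped to the slab I_n = (t0, t1).
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def local_time_matrices(t0, t1, r, nq=20):
    xg, wg = leggauss(nq)                          # quadrature on (-1, 1)
    tq = 0.5*(t1 - t0)*xg + 0.5*(t1 + t0)          # quadrature nodes on (t0, t1)
    wq = 0.5*(t1 - t0)*wg                          # quadrature weights on (t0, t1)
    s  = lambda t: 2.0*(t - t0)/(t1 - t0) - 1.0    # affine map to (-1, 1)
    psi = [Legendre.basis(l) for l in range(r + 1)]
    P  = np.array([p(s(tq)) for p in psi])                         # psi^l(t_q)
    dP = np.array([p.deriv()(s(tq))*2.0/(t1 - t0) for p in psi])   # (psi^l)'(t_q)
    P0 = np.array([p(-1.0) for p in psi])                          # psi^l(t_{n-1}^+)
    N1 = np.einsum('mq,lq,q->lm', dP, P, wq)   # N1_lm = (dpsi^m, psi^l)_{I_n}
    N2 = np.einsum('mq,lq,q->lm', P,  P, wq)   # N2_lm = (psi^m,  psi^l)_{I_n}
    N3 = np.outer(P0, P0)                      # N3_lm = psi^m psi^l at t_{n-1}^+
    return N1, N2, N3, P0
\end{verbatim}
The particular choice of the basis of $\mathcal{P}^{r_n}(I_n)$ does not affect the resulting DG solution; it only affects the conditioning of the local system \eqref{Eq:LinearSystem}.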
Similarly to \cite{ThHe2005}, we reformulate system \eqref{Eq:LinearSystem} to reduce the computational cost of its resolution phase. We first introduce the vectors $\bm{G}_n^u,\, \bm{G}_n^w,\, \bm{U}_n,\, \bm{W}_n \in \mathbb{R}^{D_n}$ such that \begin{equation*} \bm{G}_n = \big[\bm{G}_n^u, \bm{G}_n^w\big]^T, \qquad \bm{Z}_n = \big[\bm{U}_n, \bm{W}_n\big]^T, \end{equation*} and the matrices \begin{equation} N^4 = (N^1+N^3)^{-1}, \qquad N^5 = N^4N^2, \qquad N^6 = N^2N^4, \qquad N^7 = N^2N^4N^2. \end{equation} Next, we apply a block Gaussian elimination, obtaining \begin{equation*} M = \begin{bmatrix} K \otimes (N^1 + N^3) & -K \otimes N^2 \\ 0 & P \otimes (N^1+N^3) + L \otimes N^2 + K \otimes N^7 \end{bmatrix}, \end{equation*} and \begin{equation*} \bm{G}_n = \begin{bmatrix} \bm{G}_n^u \\ \bm{G}_n^w - (\mathcal{I}_d\otimes N^6) \bm{G}_n^u \end{bmatrix}. \end{equation*} We define the matrix $\widehat{M}_n\in\mathbb{R}^{D_n\times D_n}$ as \begin{equation}\label{Eq:TimeMatrix} \widehat{M}_n = P \otimes (N^1+N^3) + L \otimes N^2 + K \otimes N^7, \end{equation} and the vector $\widehat{\bm{G}}_n\in\mathbb{R}^{D_n}$ as \begin{equation} \widehat{\bm{G}}_n = \bm{G}_n^w - (\mathcal{I}_{d}\otimes N^6) \bm{G}_n^u. \end{equation} Then, we multiply the first block row by $K^{-1}\otimes N^4$ and, exploiting the properties of the Kronecker product, we get \begin{equation*} \begin{bmatrix} \mathcal{I}_{D_n} & -\mathcal{I}_{d} \otimes N^5 \\ 0 & \widehat{M}_n \end{bmatrix} \begin{bmatrix} \bm{U}_n \\ \bm{W}_n \end{bmatrix} = \begin{bmatrix} (K^{-1}\otimes N^4)\bm{G}_n^u \\ \widehat{\bm{G}}_n \end{bmatrix}. \end{equation*} Therefore, we first obtain the velocity $\bm{W}_n$ by solving the linear system \begin{equation}\label{Eq:VelocitySystem} \widehat{M}_n \bm{W}_n = \widehat{\bm{G}}_n, \end{equation} and then we can compute the displacement $\bm{U}_n$ as \begin{equation}\label{Eq:DisplacementUpdate1} \bm{U}_n = (\mathcal{I}_{d} \otimes N^5) \bm{W}_n + (K^{-1}\otimes N^4)\bm{G}_n^u. \end{equation} Finally, since $\big[\bm{G}_n^u\big]_i^{\ell} = K\bm{U}(t_{n-1}^-)\cdot\boldsymbol{\Psi}_i^{\ell}(t_{n-1}^+)$, by defining $\bar{\bm{G}}_n^u\in \mathbb{R}^{D_n}$ as \begin{equation} \big[\bar{\bm{G}}_n^u\big]_i^{\ell} = \bm{U}(t_{n-1}^-)\cdot\boldsymbol{\Psi}_i^{\ell}(t_{n-1}^+), \end{equation} we can rewrite \eqref{Eq:DisplacementUpdate1} as \begin{equation}\label{Eq:AltDisplacementUpdate2} \bm{U}_n = (\mathcal{I}_{d} \otimes N^5) \bm{W}_n + (\mathcal{I}_{d}\otimes N^4)\bar{\bm{G}}_n^u. \end{equation} \section{Numerical results} \label{Sc:NumericalResults} In this section we report a wide set of numerical experiments to validate the theoretical estimates and assess the performance of the DG method proposed in Section \ref{Sc:Method}. We first present a set of verification tests for scalar- and vector-valued problems, then we test our formulation on two- and three-dimensional elastodynamic wave propagation problems, through the open source software SPEED (\url{http://speed.mox.polimi.it/}). \subsection{Scalar problem} \label{Sec:1DConvergence} For a time interval $I=[0,T]$, with $T=10$, we solve the scalar problem \begin{equation} \label{Eq:ScalarProblem} \begin{cases} \dot{u}(t) = w(t) & \forall t\in [0,10],\\ \dot{w}(t) + 5 w(t) + 6u(t) = f(t) & \forall t\in [0,10], \\ u(0) = 2, \\ w(0) = -5, \end{cases} \end{equation} whose exact solution (with $f\equiv 0$) is $ \bm{z}(t) = (w(t),u(t)) = (-3e^{-3t}-2e^{-2t},\,e^{-3t}+e^{-2t})$ for $t\in[0,10]$.
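To illustrate the slab-by-slab procedure on this scalar problem, the following Python sketch (a toy example, not the SPEED implementation) assembles and solves the local system \eqref{Eq:LinearSystem} on each slab, reusing the routine of the previous sketch; it assumes the identification $\widetilde{K}=\mathrm{diag}(6,1)$, $A=\begin{bmatrix}0 & -6\\ 6 & 5\end{bmatrix}$ and $\bm{F}=\textbf{0}$ for \eqref{Eq:ScalarProblem}:
\begin{verbatim}
# Toy slab-by-slab solve of the scalar test problem, reusing
# local_time_matrices() and Legendre from the sketch above.
# Identification assumed: z = [u, w]^T, Ktilde = diag(6, 1),
# A = [[0, -6], [6, 5]], f = 0.
Kt = np.diag([6.0, 1.0])
A  = np.array([[0.0, -6.0], [6.0, 5.0]])
T, N, r = 10.0, 200, 3
dt = T / N
z_minus = np.array([2.0, -5.0])          # z(0^-) = (u(0), w(0))
ones = np.array([Legendre.basis(l)(1.0) for l in range(r + 1)])  # psi^l(t_n^-)
err = 0.0
for n in range(N):
    t0, t1 = n*dt, (n + 1)*dt
    N1, N2, N3, P0 = local_time_matrices(t0, t1, r)
    M = np.kron(Kt, N1 + N3) + np.kron(A, N2)     # local matrix, Eq. (StiffnessMatrix)
    G = np.kron(Kt @ z_minus, P0)                 # right-hand side (f = 0)
    Z = np.linalg.solve(M, G).reshape(2, r + 1)   # rows: u- and w-coefficients
    z_minus = Z @ ones                            # trace at t_n^-, used on the next slab
    u_ex = np.exp(-3*t1) + np.exp(-2*t1)
    w_ex = -3*np.exp(-3*t1) - 2*np.exp(-2*t1)
    err = max(err, np.max(np.abs(z_minus - np.array([u_ex, w_ex]))))
print("max nodal error:", err)
\end{verbatim}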
We partition the time domain $I$ into $N$ time slabs of uniform length $\Delta t$ and we suppose the polynomial degree to be constant for each time-slab, i.e. $r_n = r$, for any $n=1,\dots,N$. We first compute the error $|||\bm{z}_{DG} -\bm{z} |||$ as a function of the time-step $\Delta t$ for several choices of the polynomial degree $r$, as shown in Figure \ref{Fig:ConvergenceTest0D} (left). The obtained results confirm the super-optimal convergence properties of the scheme, cf. \eqref{Eq:ErrorEstimate}. Finally, since $\bm{z} \in C^{\infty}(\mathbb{R})$, from Figure \ref{Fig:ConvergenceTest0D} (right) we can observe that the numerical error decreases exponentially with respect to the polynomial degree $r$. \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{ConvergenceTest0D.png} \includegraphics[width=0.49\textwidth]{ConvergenceTest0D_Degree.png} \caption{Test case of Section~\ref{Sec:1DConvergence}. Left: computed error $|||\bm{z}_{DG}-\bm{z}|||$ as a function of the time-step $\Delta t$, with $r = 2,3,4,5$. Right: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the polynomial degree $r$, using a time step $\Delta t = 0.1$.}\label{Fig:ConvergenceTest0D} \end{figure} \subsection{Application to the visco-elastodynamics system} \label{Sec:AppVE} In the following experiments we employ the proposed DG method to solve the second-order differential system of equations stemming from the spatial discretization of the visco-elastodynamics equation: \begin{equation} \label{Eq:Elastodynamic} \begin{cases} \partial_t \bold{u} - \bold{w} = \textbf{0}, & \text{in } \Omega\times(0,T],\\ \rho\partial_{t}\bold{w} + 2\rho\zeta\bold{w} + \rho \zeta^2\bold{u} - \nabla\cdot\boldsymbol{\sigma}(\bold{u}) = \textbf{f}, & \text{in } \Omega\times(0,T],\\ \end{cases} \end{equation} where $\Omega\subset\mathbb{R}^\mathsf{d}$, $\mathsf{d}=2,3$, is an open bounded polygonal domain. Here, $\rho$ represents the density of the medium, $\zeta$ is a decay factor whose dimension is the inverse of time, $\textbf{f}$ is a given source term (e.g. seismic source) and $\boldsymbol{\sigma}$ is the stress tensor encoding Hooke's law \begin{equation} \boldsymbol{\sigma}(\bold{u})_{ij} = \lambda\bigg(\sum_{k=1}^\mathsf{d} \frac{\partial u_k}{\partial x_k}\bigg)\delta_{ij} + \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right), \quad {\rm for} \; i,j=1,...,\mathsf{d}, \end{equation} where $\lambda$ and $\mu$ are the first and the second Lam\'e parameters, respectively, and $\delta_{ij}$ denotes the Kronecker delta. Problem \eqref{Eq:Elastodynamic} is usually supplemented with boundary conditions for $\bold{u}$ and initial conditions for $\bold{u}$ and $\bold{w}$, which we do not report here for brevity. Finally, we suppose the problem's data are regular enough to guarantee its well-posedness \cite{AntoniettiFerroniMazzieriQuarteroni_2017}. By employing a finite element discretization (either in its continuous or discontinuous variant) for the semi-discrete approximation (in space) of \eqref{Eq:Elastodynamic} we obtain the following system \begin{equation*} \left( \begin{matrix} I & 0 \\ 0 & P \end{matrix} \right)\left( \begin{matrix} \dot{\bm{u}} \\ \dot{\bm{w}} \end{matrix} \right) + \left( \begin{matrix} 0 & -I \\ K & L \end{matrix} \right)\left( \begin{matrix} {\bm{u}} \\ {\bm{w}} \end{matrix} \right) = \left( \begin{matrix} \textbf{0} \\ \bm{f} \end{matrix} \right), \end{equation*} that can be easily rewritten as in \eqref{Eq:FirstOrderSystem1}.
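In the notation of the previous sections, this rewriting simply amounts to multiplying the first block row by $K$: setting $\bm{z} = [\bm{u},\bm{w}]^T$, the semi-discrete problem takes the form $\widetilde{K}\dot{\bm{z}} + A\bm{z} = \bm{F}$ with \begin{equation*} \widetilde{K} = \begin{bmatrix} K & 0 \\ 0 & P \end{bmatrix}, \qquad A = \begin{bmatrix} 0 & -K \\ K & L \end{bmatrix}, \qquad \bm{F} = \begin{bmatrix} \textbf{0} \\ \bm{f} \end{bmatrix}, \end{equation*} cf. also \eqref{def:KA}.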
We remark that the boundary conditions associated with \eqref{Eq:Elastodynamic} are encoded in the matrices and in the right-hand side. For the space discretization of \eqref{Eq:Elastodynamic}, we consider in the following a high order Discontinuous Galerkin method based either on general polygonal meshes (in two dimensions) \cite{AnMa2018} or on unstructured hexahedral meshes (in three dimensions) \cite{mazzieri2013speed}. For the forthcoming experiments we denote by $h$ the granularity of the spatial mesh and by $p$ the order of the polynomials employed for the space approximation. The combination of space and time DG methods yields a high order space-time DG method that we denote by STDG. We remark that the latter has been implemented in the open source software SPEED (\url{http://speed.mox.polimi.it/}). \subsubsection{A two-dimensional test case with space-time polyhedral meshes} \label{Sec:2DConvergence} As a first verification test we consider problem~\eqref{Eq:Elastodynamic} in a two-dimensional setting, i.e. $\Omega = (0,1)^2 \subset \mathbb{R}^2$. We set the mass density $\rho=1$, the Lamé coefficients $\lambda=\mu=1$, $\zeta = 1$ and choose the data $\textbf{f}$ and the initial conditions such that the exact solution of \eqref{Eq:Elastodynamic} is $\textbf{z} = (\bold{u},\bold{w})$ where \begin{equation*} \bold{u} = e^{-t} \begin{bmatrix} -\sin^2(\pi x)\sin(2\pi y) \\ \sin(2\pi x)\sin^2(\pi y) \end{bmatrix}, \qquad \bold{w} = \partial_t\bold{u}. \end{equation*} We consider a polygonal mesh (see Figure~\ref{fig:dgpolyspace-time}) made of 60 elements and set $p=8$. We take $T=0.4$ and divide the temporal interval $(0,T]$ into $N$ time-slabs of uniform length $\Delta t$. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{poly_spacetime.png} \caption{Test case of Section~\ref{Sec:2DConvergence}. Example of space-time polygonal grid used for the verification test.} \label{fig:dgpolyspace-time} \end{figure} In Figure~\ref{Fig:ConvergenceTest2D} (left) we show the energy norm \eqref{Eq:Norm} of the numerical error $|||\bm{z}_{DG}-\bm{z}|||$ computed for several choices of the time polynomial degree $r=1,2,3$ by varying the time step $\Delta t$. We can observe that the error estimate \eqref{Eq:ErrorEstimate} is confirmed by our numerical results. Moreover, from Figure~\ref{Fig:ConvergenceTest2D} (right) we can observe that the numerical error decreases exponentially with respect to the polynomial degree $r$. In the latter case we fix $\Delta t = 0.1$ and use 10 polygonal elements for the space mesh, cf. Figure~\ref{fig:dgpolyspace-time}. \begin{figure}[h!] \includegraphics[width=0.49\textwidth]{ConvergenceTest2D_Time.png} \includegraphics[width=0.49\textwidth]{ConvergenceTest2D_Degree.png} \caption{Test case of Section~\ref{Sec:2DConvergence}. Left: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of time-step $\Delta t$ for $r = 1,2,3$, using a space discretization with a polygonal mesh composed of $60$ elements and $p=8$. Right: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the polynomial degree $r=p$, using a spatial grid composed of 10 elements and a time step $\Delta t = 0.1$. } \label{Fig:ConvergenceTest2D} \end{figure} \subsubsection{A three-dimensional test case with space-time polytopal meshes} \label{Sec:3DConvergence} As a second verification test we consider problem~\eqref{Eq:Elastodynamic} in a three-dimensional setting.
Here, we consider $\Omega = (0,1)^3 \subset \mathbb{R}^3$, $T=10$ and we set the external force $\boldsymbol{f}$ and the initial conditions so that the exact solution of \eqref{Eq:Elastodynamic} is $\textbf{z} = (\bold{u},\bold{w})$ given by \begin{equation} \label{Testcase} \bold{u} = \cos(3\pi t) \begin{bmatrix} \sin(\pi x)^2\sin(2\pi y)\sin(2\pi z) \\ \sin(2\pi x)\sin(\pi y)^2\sin(2\pi z) \\ \sin(2\pi x)\sin(2\pi y)\sin(\pi z)^2 \end{bmatrix}, \quad \bold{w} = \partial_t\bold{u}. \end{equation} We partition $\Omega$ by using a conforming hexahedral mesh of granularity $h$, and we use a uniform time domain partition of step size $\Delta t$ for the time interval $[0,T]$. We choose a polynomial degree $ p \ge 2$ for the space discretization and $ r \ge 1$ for the temporal one. We first set $h=0.125$, corresponding to $512$ elements, fix $p=6$, and let the time step $\Delta t$ vary from $0.4$ to $0.00625$ for $r=1,2,3,4$. The computed energy errors are shown in Figure \ref{Fig:ConvergenceTest3D} (left). We can observe that the numerical results are in agreement with the theoretical ones, cf. Theorem~\ref{Th:ErrorEstimate}. We note that with $r=4$, the error reaches a plateau for $\Delta t \leq 0.025$. However, this effect could be easily overcome by increasing the spatial polynomial degree $p$ and/or by refining the mesh size $h$. Then, we fix a grid size $h=0.25$, a time step $\Delta t=0.1$ and let the polynomial degrees vary together, $p=r=2,3,4,5$. Figure \ref{Fig:ConvergenceTest3D} (right) shows an exponential decay of the error. \begin{figure} \includegraphics[width=0.49\textwidth]{ConvergenceTest3D_Time.png} \includegraphics[width=0.49\textwidth]{ConvergenceTest3D_Degree.png} \caption{Test case of Section~\ref{Sec:3DConvergence}. Left: computed errors $|||\bm{z}_{DG}-\bm{z}|||$ as a function of the time-step $\Delta t$, with $r=1,2,3,4$, $h=0.125$ and $p=6$. Right: computed errors $|||\bm{z}_{DG}-\bm{z}|||$ as a function of the polynomial degree $p=r$, with $\Delta t = 0.1$, $h=0.25$.} \label{Fig:ConvergenceTest3D} \end{figure} \subsubsection{Plane wave propagation} \label{Sec:PlaneWave} The aim of this test is to compare the performance of the proposed method STDG with the space-time DG method (here referred to as STDG$_0$) firstly presented in \cite{Paper_Dg-Time} and then applied to 3D problems in \cite{AnMaMi20}. The difference between STDG$_0$ and STDG is in the way the time approximation is obtained. Indeed, the former integrates the second order in time differential problem, whereas the latter discretizes the first order in time differential system. On the one hand, as pointed out in \cite{AnMaMi20}, the main limitation of the STDG$_0$ method is the ill-conditioning of the resulting stiffness matrix, which makes the use of iterative solvers quite difficult. Hence, for STDG$_0$ direct methods are used, forcing one to store the stiffness matrix and greatly reducing the range of problems affordable by that method. On the other hand, even if the final linear systems stemming from the STDG$_0$ and STDG methods are very similar (in fact they only differ in the definition of the (local) time matrices), for the latter we obtain a well-conditioned system matrix, making iterative methods employable and complex 3D problems solvable. Here, we consider a plane wave propagating along the vertical direction in two (horizontally stratified) heterogeneous domains.
The source plane wave is polarized in the $x$ direction and its time dependency is given by a unit amplitude Ricker wavelet with peak frequency at $2~{\rm Hz}$. We impose a free surface condition on the top surface, absorbing boundary conditions on the bottom surface and homogeneous Dirichlet conditions along the $y$ and $z$ directions on the remaining boundaries. We solve the problem in two domains that differ in dimensions and material properties, referred to as Domain A and Domain B, respectively. Domain A has dimensions $\Omega=(0,100)~{\rm m}\times(0,100)~{\rm m}\times(-500,0)~{\rm m}$, cf. Figure~\ref{Fig:TutorialDomain}, and is partitioned into 3 subdomains corresponding to the different material layers, cf. Table~\ref{Tab:TutorialMaterials}. The subdomains are discretized in space with a uniform cartesian hexahedral grid of size $h = 50~{\rm m}$, which results in 40 elements. Domain B has dimensions $\Omega=(0,100)~{\rm m}\times(0,100)~{\rm m}\times(-1850,0)~{\rm m}$, and has more layers, cf. Figure~\ref{Fig:TorrettaDomain} and Table~\ref{Tab:TorrettaMaterials}. The subdomains are discretized in space with a cartesian hexahedral grid of size $h$ ranging from $15~{\rm m}$ in the top layer to $50~{\rm m}$ in the bottom layer. Hence, the total number of elements is 1225. \begin{figure} \begin{minipage}{\textwidth} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=0.7\textwidth]{TutorialDomain}% \captionof{figure}{Test case of Section~\ref{Sec:PlaneWave}-Domain A. Computational domain $\Omega = \cup_{\ell=1}^{3}\Omega_{\ell}$.} \label{Fig:TutorialDomain} \end{minipage} \hfill \begin{minipage}{0.65\textwidth} \centering \begin{tabular}{|l|r|r|r|r|r|} \hline Layer & Height $[m]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\ \hline \hline $\Omega_1$ & $ 150 $ & $1800$ & $600$ & $300$ & $0.166$ \\ \hline $\Omega_2$ & $ 300 $ & $2200$ & $4000$ & $2000$ & $0.025$ \\ \hline $\Omega_3$ & $ 50 $ & $2200$ & $4000$ & $2000$ & $0.025$ \\ \hline \end{tabular} \captionof{table}{Mechanical properties for test case of Section~\ref{Sec:PlaneWave}-Domain A. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -\mu$.} \label{Tab:TutorialMaterials} \end{minipage} \end{minipage} \end{figure} \begin{figure} \begin{minipage}{\textwidth} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=0.7\textwidth]{TorrettaDomain}% \captionof{figure}{Test case of Section~\ref{Sec:PlaneWave}-Domain B.
Computational domain $\Omega = \cup_{\ell=1}^{11}\Omega_{\ell}$.} \label{Fig:TorrettaDomain} \end{minipage} \hfill \begin{minipage}{0.65\textwidth} \centering \begin{tabular}{|l|r|r|r|r|r|} \hline Layer & Height $[m]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\ \hline \hline $\Omega_1$ & $ 15 $ & $1800$ & $1064$ & $236$ & $0.261$ \\ \hline $\Omega_2$ & $ 15 $ & $1800$ & $1321$ & $294$ & $0.216$ \\ \hline $\Omega_3$ & $ 20 $ & $1800$ & $1494$ & $332$ & $0.190$ \\ \hline $\Omega_4$ & $ 30 $ & $1800$ & $1664$ & $370$ & $0.169$ \\ \hline $\Omega_5$ & $ 40 $ & $1800$ & $1838$ & $408$ & $0.153$ \\ \hline $\Omega_6$ & $60 $ & $1800$ & $2024$ & $450$ & $0.139$ \\ \hline $\Omega_7$ & $ 120 $ & $2050$ & $1988$ & $523$ & $0.120$ \\ \hline $\Omega_8$ & $500 $ & $2050$ & $1920$ & $600$ & $0.105$ \\ \hline $\Omega_9$ & $ 400 $ & $2400$ & $3030$ & $1515$ & $0.041$ \\ \hline $\Omega_{10}$ & $ 600 $ & $2400$ & $4180$ & $2090$ & $0.030$ \\ \hline $\Omega_{11}$ & $ 50 $ & $2450$ & $5100$ & $2850$ & $0.020$ \\ \hline \end{tabular} \captionof{table}{Mechanical properties for test case of Section~\ref{Sec:PlaneWave}-Domain B. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -\mu$.} \label{Tab:TorrettaMaterials} \end{minipage} \end{minipage} \end{figure} In Figure~\ref{Fig:PlanewaveDisplacement} on the left (resp. on the right) we report the computed displacement $\bm{u}$ along the $x-$axis, registered at the point $P=(50, 50, 0)~{\rm m}$ located on the top surface, for Domain A (resp. Domain B). We compare the results with those obtained in \cite{AnMaMi20}, choosing a polynomial degree $p=r=2$ in both space and time variables and a time step $\Delta t = 0.01$. In both cases, we can observe a perfect agreement of the two solutions. \begin{figure} \includegraphics[width=0.49\textwidth]{TutorialDisp.png} \includegraphics[width=0.49\textwidth]{TorrettaDisp.png} \caption{Test case of Section~\ref{Sec:PlaneWave}. Computed displacement $\bm{u}$ along the $x-$axis registered at $P=(50, 50, 0)~{\rm m}$ obtained employing the proposed formulation, i.e. the STDG method, and the method of \cite{AnMaMi20}, i.e. STDG$_0$, for Domain A (left) and Domain B (right). We set the polynomial degree $p=r=2$ in both space and time dimensions and the time step $\Delta t = 0.01$.} \label{Fig:PlanewaveDisplacement} \end{figure} In Table~\ref{Tab:Comparison} we collect the condition number of the system matrix, the number of GMRES iterations and the execution time for the STDG$_0$ and STDG methods applied to a single time integration step, computed using Domain A and Domain B, respectively. From the results we can observe that the proposed STDG method outperforms the STDG$_0$ one in terms of condition number and GMRES iteration counts for the solution of the corresponding linear system. Clearly, for small problems, when the storage of the system matrix and the use of a direct solver are possible, STDG$_0$ remains the most efficient solution. \begin{table}[h!]
\centering \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dom.} & \multirow{2}{*}{$p$} & \multicolumn{2}{c|}{Condition number} & \multicolumn{2}{c|}{\# GMRES it.} & \multicolumn{2}{c|}{Execution time [s]} \\ \cline{3-8} && STDG$_0$ & STDG & STDG$_0$ & STDG & STDG$_0$ & STDG \\ \hline \hline A & 2 & $1.2\cdot10^9$ & $1.3\cdot10^2$ & $1.5\cdot10^4$ & $27$ & $1.1$ & $3.0\cdot10^{-3}$\\ \hline A & 4 & $2.7\cdot10^{10}$ & $2.8\cdot10^3$ & $>10^6$ & $125$ & $>2200$ & $0.3\cdot10^{-1}$\\ \hline B & 2 & $1.3\cdot10^{14}$ & $5.0\cdot10^2$ & $4.2\cdot10^5$ & $56$ & $452.3$ & $6.5\cdot10^{-2}$\\ \hline \end{tabular} \caption{Test case of Section~\ref{Sec:PlaneWave}. Comparison between the proposed formulation \eqref{Eq:WeakProblem} and the method presented in \cite{AnMaMi20} in terms of conditioning and iterative resolution. We set $p=r$ and we fix the relative tolerance for the GMRES convergence at $10^{-12}$. } \label{Tab:Comparison} \end{table} \subsubsection{Layer over a half-space} \label{Sec:LOH1} In this experiment, we test the performance of the STDG method on a benchmark test case for a realistic elastodynamics application, known in the literature as layer over a half-space (LOH), cf. \cite{DaBr01}. We let $\Omega=(-15,15)\times(-15,15) \times(0,17)~{\rm km}$ be composed of two layers with different material properties, cf. Table~\ref{Table:LOH1Materials}. The domain is partitioned employing two conforming meshes of different granularity. The ``fine'' (resp. ``coarse'') grid is composed of $352800$ (resp. $122400$) hexahedral elements, varying from size $86~{\rm m}$ (resp. $167~{\rm m}$), in the top layer, to $250~{\rm m}$ (resp. $500~{\rm m}$) in the bottom half-space, cf. Figure~\ref{Fig:LOH1Domain}. On the top surface we impose a free surface condition, i.e. $\boldsymbol{\sigma} \textbf{n} = \textbf{0}$, whereas on the lateral and bottom surfaces we consider absorbing boundary conditions \cite{stacey1988improved}. \begin{figure} [h!] \centering \includegraphics[width=0.9\textwidth]{LOHDomain}% \captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Computational domain $\Omega = \cup_{\ell=1}^{2}\Omega_{\ell}$ and its partition.} \label{Fig:LOH1Domain} \end{figure} \begin{table}[h!] \centering \begin{tabular}{|l|r|r|r|r|r|} \hline Layer & Height $[km]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\ \hline \hline $\Omega_1$ & $ 1 $ & $2600$ & $4000$ & $2000$ & $0$ \\ \hline $\Omega_2$ & $ 16 $ & $2700$ & $6000$ & $3464$ & $0$ \\ \hline \end{tabular} \caption{Test case of Section~\ref{Sec:LOH1}. Mechanical properties of the medium. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -\mu$.} \label{Table:LOH1Materials} \end{table} The seismic excitation is given by a double couple point source located at the center of the domain expressed by \begin{equation} \label{Eq:LOH1Source} \bm{f}(\bm{x},t) = \nabla \delta (\bm{x}-\bm{x}_S)M_0\bigg(\frac{t}{t_0^2}\bigg)\exp{(-t/t_0)}, \end{equation} where $\bm{x}_S = (0,0,2)~{\rm km}$, $M_0 = 10^8~{\rm Nm}$ is the scalar seismic moment, and $t_0 = 0.1~{\rm s}$ is the smoothness parameter, regulating the frequency content and the amplitude of the source time function. The semi-analytical solution is available in \cite{DaBr01} together with further details on the problem's setup. We employ the STDG method with different choices of polynomial degrees and time integration steps.
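As a side remark, the source time function appearing in \eqref{Eq:LOH1Source}, i.e. $M_0\,(t/t_0^2)\,\exp(-t/t_0)$, attains its maximum at $t=t_0$; a minimal Python sketch to tabulate it (the final time below is an assumption made only for this illustration) is:
\begin{verbatim}
# Source time function of Eq. (Eq:LOH1Source): s(t) = M0 * (t/t0^2) * exp(-t/t0).
import numpy as np
M0, t0 = 1.0e8, 0.1                  # scalar seismic moment [N m], smoothness [s]
t = np.linspace(0.0, 9.0, 9001)      # time axis [s] (assumed final time)
s = M0 * (t / t0**2) * np.exp(-t / t0)
print("peak at t =", t[np.argmax(s)], "s")   # expected: t = t0
\end{verbatim}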
In Figures~\ref{Fig:LOH1ResultsFine41}-\ref{Fig:LOH1ResultsCoarse44-2} we show the velocity wave field computed at the point $(6,8,0)~{\rm km}$ along with the reference solution, in both the time and frequency domains, for the sets of parameters tested. We also report the relative seismogram error \begin{equation} \label{Eq:LOH1Error} E = \frac{\sum_{i=1}^{n_S}(\bm{u}_{\delta}(t_i)-\bm{u}(t_i))^2}{\sum_{i=1}^{n_S}\bm{u}(t_i)^2}, \end{equation} where $n_S$ is the number of samples of the seismogram, and $\bm{u}_{\delta}(t_i)$ and $\bm{u}(t_i)$ are, respectively, the value of the seismogram at the sample $t_i$ and the corresponding reference value. In Table~\ref{Table:LOH1Sensitivity} we report the set of discretization parameters employed, together with the results obtained in terms of accuracy and computational efficiency. \begin{figure} [h!] \centering \includegraphics[width=0.5\textwidth]{LOH_4_1_Fine_Vel.png}% \includegraphics[width=0.5\textwidth]{LOH_4_1_Fine_Freq.png}% \captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``fine'' grid, polynomial degree $p=4$ in space and $r=1$ in time, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.} \label{Fig:LOH1ResultsFine41} \end{figure} \begin{figure} [h!] \centering \includegraphics[width=0.49\textwidth]{LOH_4_2_Fine_Vel.png}% \includegraphics[width=0.49\textwidth]{LOH_4_2_Fine_Freq.png}% \captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``fine'' grid, polynomial degree $p=4$ in space and $r=2$ in time, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.} \label{Fig:LOH1ResultsFine42} \end{figure} \begin{figure} [h!] \centering \includegraphics[width=0.49\textwidth]{LOH_4_4_-3_Coarse_Vel.png}% \includegraphics[width=0.49\textwidth]{LOH_4_4_-3_Coarse_Freq.png}% \captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``coarse'' grid, polynomial degree $p=4$ in space and $r=4$ in time, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.} \label{Fig:LOH1ResultsCoarse44-3} \end{figure} \begin{figure} [h!] \centering \includegraphics[width=0.49\textwidth]{LOH_4_4_-2_Coarse_Vel.png}% \includegraphics[width=0.49\textwidth]{LOH_4_4_-2_Coarse_Freq.png}% \captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``coarse'' grid, polynomial degree $p=4$ in space and $r=4$ in time, and time-step $\Delta t = 5\cdot10^{-2}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.} \label{Fig:LOH1ResultsCoarse44-2} \end{figure} \begin{table}[h!] \centering \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Grid} & \multirow{2}{*}{$p$} & \multirow{2}{*}{$r$} & \multirow{2}{*}{$\Delta t~[{\rm s}]$} & GMRES & Exec. Time & Tot. Exec. & \multirow{2}{*}{Error $E$}\\ &&&&iter.&per iter.
[s] &Time [s]&\\ \hline \hline Fine & $4$ & $1$ & $10^{-3}$ & $6$ & $2.9$ & $3.08\cdot10^{4}$ & $0.015$ \\ \hline Fine & $4$ & $2$ & $10^{-3}$ & $8$ & $5.6$ & $6.59\cdot10^{4}$ & $0.020$ \\ \hline Coarse & $4$ & $4$ & $10^{-3}$ & $12$ & $7.6$ & $8.14\cdot10^{4}$ & $0.229$ \\ \hline Coarse & $4$ & $4$ & $5\cdot10^{-2}$ & $24$ & $27.9$ & $7.22\cdot10^{4}$ & $0.329$ \\ \hline \end{tabular} \caption{Test case of Section~\ref{Sec:LOH1}. Set of discretization parameters employed, and corresponding results in terms of computational efficiency and accuracy. The execution times are computed employing $512$ parallel processes, on the \textit{Marconi100} cluster located at CINECA (Italy).} \label{Table:LOH1Sensitivity} \end{table} By employing the ``fine'' grid we obtain very good results both in terms of accuracy and efficiency. Indeed, the minimum relative error is less than $2\%$, obtained with time polynomial degree $r=1$, see Figure~\ref{Fig:LOH1ResultsFine41}. Choosing $r=2$, as in Figure~\ref{Fig:LOH1ResultsFine42}, the error is larger (by about $40\%$) but the solution is still sufficiently accurate. However, in terms of total execution time, with $r=1$ the algorithm performs better than with $r=2$, cf. Table~\ref{Table:LOH1Sensitivity}, column 7. As shown in Figure~\ref{Fig:LOH1ResultsCoarse44-3}, the ``coarse'' grid produces larger errors and also worsens the computational efficiency, since the number of GMRES iterations for a single time step increases. Increasing the integration time step $\Delta t$, see Figure~\ref{Fig:LOH1ResultsCoarse44-2}, causes an increase of the execution time for a single time step that partly compensates the decrease of the total number of time steps. Consequently, the total execution time reduces, but only by about 12\%. In addition, this choice causes some non-physical oscillations in the coda part of the signal that contribute to increase the relative error. Overall, we can conclude that for this test case the spatial discretization is the most crucial aspect. Refining the mesh produces a great decrease of the relative error and increases the overall efficiency of the method. Concerning time integration, it appears that the method performs well even with low order polynomial degrees, both in terms of computational efficiency and of accuracy. The method achieves its goal of accurately solving this elastodynamics problem, which counts between 119 (``coarse'' grid) and 207 (``fine'' grid) million unknowns. The good properties of the proposed STDG method are once again highlighted by the fact that all the presented results are achieved without any preconditioning of the linear system. \subsection{Seismic wave propagation in the Grenoble valley} \label{Sec:Grenoble} In this last experiment, we apply the STDG method to a real geophysical study \cite{ChSt10}. This application consists of the simulation of the seismic wave propagation generated by a hypothetical earthquake of magnitude $M_w = 6$ in the Grenoble valley, in the French Alps. The Y-shaped Grenoble valley, whose location is represented in Figure~\ref{Fig:GrenobleDomain}, is filled with late Quaternary deposits, a much softer material than the one composing the surrounding mountains. We approximate the mechanical characteristics of the ground by employing three different material layers, whose properties are listed in Table~\ref{Table:GrenobleMaterials}. The alluvial basin layer contains the soft sediments that fill the Grenoble valley and corresponds to the yellow portion of the domain in Figure~\ref{Fig:GrenobleDomain}.
Then, the two bedrock layers approximate the stiff materials composing the surrounding Alps and the first crustal layer. The earthquake generation is simulated through a kinematic fault rupture along a plane whose location is represented in Figure~\ref{Fig:GrenobleDomain}. \begin{figure} [h!] \centering \includegraphics[width=0.9\textwidth]{grenoble_paraview_2}% \caption{Test case of Section~\ref{Sec:Grenoble}. Geophysical domain and its location.} \label{Fig:GrenobleDomain} \end{figure} \begin{table}[h!] \centering \begin{tabular}{|l|r|r|r|r|} \hline Layer & $\rho~[{\rm kg/m^3}]$ & $c_s~[\rm{m/s}]$ & $c_p~[\rm{m/s}]$ & $ \zeta~[\rm {1/s}]$ \\ \hline \hline Alluvial basin & 2140 + 0.125 $z_{d}$ & 300 + 19 $\sqrt{z_{d}}$ & 1450 + 1.2 $z_{d}$ & 0.01 \\ \hline Bedrock $(0-3)$ km & 2720 & 3200 & 5600 & 0 \\ \hline Bedrock $(3-7)$ km & 2770 & 3430 & 5920 & 0 \\ \hline \end{tabular} \caption{Test case of Section~\ref{Sec:Grenoble}. Mechanical properties of the medium. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -\mu$. $z_{d}$ measures the depth of a point calculated from the top surface.} \label{Table:GrenobleMaterials} \end{table} The computational domain $\Omega=(0,50)\times(0,47)\times (-7,3)~{\rm km}$ is discretized with a fully unstructured hexahedral mesh represented in Figure~\ref{Fig:GrenobleDomain}. The mesh, composed of $202983$ elements, is refined in the valley with a mesh size $h=100~{\rm m}$, while it is coarser in the bedrock layers, reaching $h\approx 1~{\rm km}$. \begin{figure} [h!] \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{monitors}% \end{minipage} \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{GrenobleCrossSection}% \end{minipage} \caption{Test case of Section~\ref{Sec:Grenoble}. Left: surface topography in the Grenoble area. The white line indicates the monitor points examined in Figure~\ref{Fig:GrenobleVel}. Right: cross section of the valley in correspondence of the monitor points.} \label{Fig:GrenoblePoints} \end{figure} \begin{figure} [h!] \begin{minipage}{\textwidth} \centering \includegraphics[width=\textwidth]{sismogrammi}% \end{minipage} \caption{Test case of Section~\ref{Sec:Grenoble}. Computed velocity field at the monitored points of Figure~\ref{Fig:GrenoblePoints}, together with the computed peak ground velocity for each monitor point. Comparison between the STDG (black) solution and the SPECFEM (red) solution \cite{Chaljub2010QuantitativeCO}.} \label{Fig:GrenobleVel} \end{figure} On the top surface we impose a free surface condition, i.e. $\boldsymbol{\sigma} \textbf{n} = \textbf{0}$, whereas on the lateral and bottom surfaces we consider absorbing boundary conditions \cite{stacey1988improved}. We employ the STDG method with polynomial degrees $p=3$ for the space discretization and $r=1$ for the time integration, together with a time step $\Delta t = 10^{-3}~{\rm s}$. We focus on a set of monitor points whose location is represented in Figure~\ref{Fig:GrenoblePoints}. In Figure~\ref{Fig:GrenobleVel}, we report the velocity field registered at these points, compared with the one obtained with a different code, namely SPECFEM \cite{Chaljub2010QuantitativeCO}. The results are consistent with the different locations of the points. Indeed, we observe highly perturbed waves at points $1-7$, which are located in the valley, i.e. in the alluvial material.
This is caused by a refraction effect that arises when a wave moves into a soft material from a stiffer one. Moreover, the wave remains trapped inside the layer, bouncing off the stiffer interfaces. The absence of this effect is evident at monitors $8$ and $9$, which are located in the bedrock material. These typical behaviors are clearly visible also in Figure~\ref{Fig:GrenobleSnap}, where the magnitude of the ground velocity is represented at different time instants. Finally, concerning the computational efficiency of the scheme, we report that, with this choice of discretization parameters, we get a linear system with approximately $36$ million degrees of freedom that is solved in $17.5$ hours, employing $512$ parallel processes, on the \textit{Marconi100} cluster located at CINECA (Italy). \begin{figure} [h!] \centering \includegraphics[width=0.49\textwidth]{snapshot5} \includegraphics[width=0.49\textwidth]{snapshot9} \includegraphics[width=0.49\textwidth]{snapshot13} \includegraphics[width=0.49\textwidth]{snapshot17} \caption{Test case of Section~\ref{Sec:Grenoble}. Computed ground velocity at different time instants obtained with polynomial degrees $p=3$ and $r=1$, for space and time, respectively, and $\Delta t = 10^{-3}~s$.} \label{Fig:GrenobleSnap} \end{figure} \section{Conclusions} In this work we have presented and analyzed a new time Discontinuous Galerkin method for the solution of systems of second-order differential equations. We have built an energy norm that arises naturally from the variational formulation of the problem, and that we have employed to prove well-posedness, stability and error bounds. Through a manipulation of the resulting linear system, we have reduced the computational cost of the solution phase, and we have implemented and tested our method in the open-source software SPEED (\url{http://speed.mox.polimi.it/}). Finally, we have verified and validated the proposed numerical algorithm through two- and three-dimensional benchmarks, as well as real geophysical applications. \section{Acknowledgements} This work was partially supported by the ``National Group of Computing Science'' (GNCS-INdAM). P.F. Antonietti has been supported by the PRIN research grant n. 201744KLJL funded by the Ministry of Education, Universities and Research (MIUR).
\section{Introduction} During the last two decades, there has been great progress in the design of quantum devices based on waveguide structures. The latter are comprised of quantum emitters of natural or artificial atoms (qubits) coupled to a one--dimensional optical channel. Numerous applications in quantum simulation, quantum information processing, and communication have already been discussed in the literature~\cite{qed1, qed2, qed3, qed4, QIT1, QIT2, QIT3, QIT4, QIT5}. Despite all efforts, the level of control and the preservation of quantum coherence achieved in these media still fall short of the requirements for a practical realization of operable quantum information devices. This difficulty could be overcome by a better understanding of the nature of the interaction between matter and radiation. Accordingly, intensive studies of light--matter interaction in waveguide structures are necessary. Superconducting quantum metamaterials (SCQMMs) are man-made material units that are both interesting and quite promising for the ultimate fabrication of quantum devices \cite{zag, qmm1, qmm2, qmm3, Rakhmanov2008, Shvetsov2013, Asai2015, Asai2018}. These engineered media are comprised of periodically arranged artificial atoms forming superconducting quantum bits (SCQs) that interact with EM fields inside one--dimensional transmission lines. Owing to the spatial confinement, the tunability of the SCQ parameters, and the ability to tailor the photon dispersion relation by their particular setup, SCQMMs may be conveniently engineered to provide a tunable ``atom''--field interaction that can reach regimes ranging from weak to ultra-strong coupling. This is of particular interest in the case of qubit interaction with quantized radiation fields, when strong qubit--photon coupling leads to effective photon--photon and qubit--qubit correlations. The latter allow for the emergence of novel interference effects with possible practical applications. For example, photons may exhibit a nontrivial dispersion relation with band edges and band gaps. In this way, an SCQMM may be viewed as a photonic crystal \cite{qmm0}. This enriches their potential for practical applications and provides novel means for devising comprehensive studies of practical and fundamental aspects of the artificial atom--field interaction. Investigations of the emergence of atom (or other emitter, qubit in particular)--photon bound states \cite{aphbs1, aphbs2, aphbs3, aphbs4, aphbs5, aphbs6, aphbs7, aphbs0, prohib, aphbs8, aphbs9, aphbs10,aphbs11} are of particular importance due to their consequences for radiation propagation \cite{prohib,ilja, prohib2, prohib3} and for the preservation of quantum coherence and entanglement \cite{ilja,ent1,aphbs11}. For example, the prohibition of the free propagation of radiation could be attributed to the formation of these bound states. Their creation within the continuum can potentially be used for the storage of quantum information \cite{prohib,prohib2,prohib3} and for the construction of photon memory devices \cite{aphbs3}. On the other hand, the recent discovery of topological excitations in SCQMMs implies that, by engineering topologically nontrivial SCQMMs, it would be possible to tackle the unavoidable structural irregularities in SCQMMs. This is because the creation of photon bound states provides the preservation of quantum coherence for times large enough to perform quantum information processing \cite{ilja}.
A further important possible application is the exploitation of qubit--photon bound states as a mechanism for entanglement preservation in quantum information processing \cite{ent1,aphbs11}. In this paper, we study the qubit--photon bound states emerging through the interaction of an EM field propagating through an SCQMM consisting of a massive, two-stripe superconducting resonator filled with a large number ($\mathcal{N}\gg 1$) of Cooper pair box (CPB) or charge qubits. Such an essentially three--dimensional (3D) structure substantially differs from the most common realizations of SCQ-based waveguiding structures \cite{qed1} and SCQMM setups \cite{qmm1}, in which point--like SCQs are built into \textit{coplanar} resonators. In such two--dimensional (2D) architectures, the qubit--photon interaction is described within the Jaynes (Tavis)--Cummings \cite{qed1} or Dicke \cite{aphbs8,aphbs9} model Hamiltonians. A realistic theoretical model for the proposed setup is derived in terms of classical variables in the next section, while its quantization is performed in the third section. It substantially differs from the models used so far in studies of the consequences of ``atom''--light interaction in engineered media \cite{aphbs8, aphbs9}, i.e. the modified Dicke Hamiltonian. In particular, while the pure photon part is practically identical to those encountered so far, the qubit--photon interaction is quite different and contains two terms that may be attributed to attractive and repulsive interactions, whose competition determines the character and existence of qubit--photon bound states. The paper is organized as follows: the description of the model and the classical Hamiltonian are introduced in the second section. The quantization procedure is given in the third section. The two--particle Schr\"odinger equation and its solutions are discussed in Section 4. Results and conclusions are summarized in the fifth section. Details of the mathematical derivation are given in the Appendices. \section{Experimental setup: the proposal} We investigate the non--classical properties of electromagnetic radiation propagating along the SCQMM in the construction visualized in Fig. (\ref{fig1}). It is made of an infinite ($\mathcal{N}\rightarrow \infty$), one--dimensional (1D) periodic SCQ array, with period $\ell$, placed in a transmission line (TL) consisting of two infinite bulk superconductors separated by a distance $d$; we consider $d$ to be of the same order of magnitude as $\ell$ \cite{Rakhmanov2008, Shvetsov2013, Asai2015, Asai2018} (Figs. 1a and b). For simplicity, we take the thickness of the superconducting strips to be $\ell$. Each SCQ is a tiny superconducting island connected to each bank of the TL through a Josephson junction (JJ). The control circuitry for each SCQ (Fig. 1c) consists of a gate voltage source $V_g$ coupled to it through a gate capacitor $C_g$ and allows for local control of the SCQMM by independently altering the state of each SCQ \cite{zag}. The SCQs exploit the nonlinearity of the Josephson effect and the large charging energy resulting from nanofabrication to create artificial mesoscopic two-level systems. \begin{figure}[h] \begin{center} \includegraphics[height=7cm]{fig1} \vskip -0.5 cm \caption{Illustration of the proposed setup of the SCQMM: (a) A chain of Cooper pair box qubits inside the two--stripe transmission line. Each unit cell contains a tiny superconducting island connected with the TL banks through two Josephson junctions; the blue regions indicate the dielectric layers.
The propagating electromagnetic vector potential pulse is also shown schematically, out of scale.\\ (b) The side view of the SCQMM. The magnetic field penetrates through the free space between the islands.\\ (c) A unit cell of the SCQMM showing the control circuitry of the charge qubit, consisting of a gate potential $V_g$ applied to it through the gate capacitor $C_g$.} \label{fig1} \end{center} \end{figure} \subsection{Classical model Hamiltonian} In order to set up the problem, we first derive the classical model and subsequently perform its quantization. We assume that an electromagnetic (EM) wave with vector potential $\vec{A}=A_z (x,t) \hat{z}$ propagates along the superconducting TL. The direction of propagation is parallel to the superconducting electrodes, while $\vec{A}$ is perpendicular to the direction of the EM wave propagation. Let us now derive the model Hamiltonian of the system under consideration. First, we recall that each CPB qubit is comprised of a \textit{double barrier} JJ (DBJJ), i.e. two JJs connected in series. Similar DBJJs have been widely studied and used in different contexts \cite{zag, tinkh, JJrow, JJrow1, zag1}. The Hamiltonian of the DBJJ may be obtained employing a straightforward extension of the Feynman semi--classical approach \cite{feynm}, in which the dynamics of a single JJ is described within a simple two--level model: the wave function of the CP condensate (the Ginzburg--Landau, GL, order parameter) on each side of the JJ is represented as $\varPsi_{p=1,2}=\sqrt{n_p}e^{i \phi_p}$, while the Cooper pair tunneling is accounted for through a phenomenological coupling term. In such a way, the single JJ dynamics may be described employing a nonlinear model Hamiltonian of a single variable: the \textit{Josephson phase}, representing the difference between the phases of the GL order parameter in the two superconductors. \begin{figure}[h!] \begin{center} \hspace{-1cm} \includegraphics[width=0.55\textwidth]{twoJJj} \vspace{-1.5cm} \caption{Simplified illustration of the CPB qubit consisting of two Josephson junctions connected in series.} \label{fig2} \end{center} \end{figure} In the case of the DBJJ, i.e. three superconducting segments separated by two JJs (Fig.~\ref{fig2}), the wave function in each segment (upper, middle, and lower) may be written as $\Psi_p(t)=\sqrt{n_p}e^{i\phi_p(t)},\;\; p=u,m,l$, while the tunneling between them is simply $$H_t=-V\left(\Psi^*_u\Psi_m+\Psi^*_m\Psi_l+c.c.\right),$$ where $V$ represents a phenomenological parameter, the so-called Josephson coupling constant \cite{feynm}. Substituting the CP wave functions given above and, in analogy with \cite{feynm}, assuming that the CP numbers in each segment are almost the same and equal to $n_0$, we find that the Hamiltonian of the double JJ system is the sum of the Hamiltonians of two independent JJs: \begin{eqnarray}\label{single1} H= -E_c\sum_{i=l,u}\frac{\partial^2}{\partial \varphi_{i}^2}-E_{J}\sum_{i=l,u}\cos{\varphi_{i}}. \end{eqnarray} \noindent Here $\varphi_{u(l)}$ denotes the Josephson phase difference at the upper (lower) junction. In our model we restrict ourselves to the \textit{zero voltage} case \cite{Asai2015}, when $\phi_l=\phi_u\equiv 0$, so that these phases read $\varphi_{u}=\phi_u-\phi_m\equiv -\phi_m$ and $\varphi_{l}=\phi_l-\phi_m\equiv -\phi_m$.
The energy parameters $E_c=\frac{2e^2}{C_J}$ and $E_J=\frac{\Phi_0 I_c}{2\pi c}$, $\Phi_0=\frac{h c}{2 e}$, $I_c$, $C_J$, and $c$ are the junction charging energy, the so called Josephson energy, the flux quantum, the critical current, the junction capacitance, and the speed of light, respectively. In the presence of an EM field the Josephson phase differences $\varphi_{i}$ acquire a gauge term and read: \begin{equation}\label{gauge} \varphi_{u,(l)}(t)=-\phi_m\pm \frac{2\pi}{\Phi_0}\int_{1}^{2}\vec A(\vec r)\cdot d\vec l. \end{equation} Generalizing (\ref{single1}) to the whole qubit lattice and accounting for the energy of the EM field inside the SCQMM ($H_{em}= \frac{1}{8\pi}\int\left( E^2_n(\vec r) +B^2_n(\vec r) \right) d^3r$) we derive the total model Hamiltonian: \begin{eqnarray} \label{tot} \nonumber H=&&\sum_n\bigg[\frac{2\hbar^2}{E_c}\dot{\phi}^2_n -2E_J\cos\phi_n\cos\alpha_n+\\ &&\frac{2\hbar^2}{E_c}\dot\alpha^2_n +E_{em}(\alpha_{n+1}-\alpha_n)^2\bigg]. \end{eqnarray} Here we have introduced the dimensionless amplitude of the vector potential $\alpha_n=\frac{2\pi d}{\Phi_0}A_n$. To facilitate practical calculations the charging term has been redefined using $\frac{d \phi_n(t)}{d t}\equiv \dot{\phi_n}=-i\frac{2e^2}{\hbar C_J}\frac{\partial}{\partial \phi_n}$, while the gauge term in the Josephson phase difference has been approximated as $\frac{2\pi}{\Phi_0}\int_{1}^{2}\vec A(\vec r) \cdot d\vec l\approx \frac{2\pi d}{\Phi_0}A_n=\alpha_n$. This approximation holds in the present setup, where it was assumed that the separation $d$ between the superconducting stripes and the period $\ell$, i.e. the center--to--center distance between qubits, are of the same order of magnitude and much smaller than the wavelength of the EM radiation; this fact enables us to neglect the variation of the vector potential within each cell. As a result, the integration in Eq. (\ref{gauge}) is trivial (see for example \cite{Rakhmanov2008, Shvetsov2013, Asai2015}). Finally, in evaluating the EM energy the integration is taken over the entire unit cell. Thus, in accordance with the approximation adopted above, we neglect the spatial variation of the electric and magnetic fields within the unit cell, so that the energy of the EM field in a particular unit cell is approximated as: \begin{eqnarray}\label{emfapp} \nonumber && H_{em}\approx \frac{V}{8\pi}\left( E^2_n+B^2_n\right), \\ && V=\ell^2d \; \; - \; \mathrm{the\; volume\; of\; the\; unit\; cell. } \end{eqnarray} For simplicity, we took the width of the superconducting stripes to be equal to the inter--qubit distance $\ell$. Following \cite{Rakhmanov2008} and \cite{Asai2015} we have neglected the contribution of the electric field, while the part that originates from the magnetic field was accounted for through the discretization procedure introduced in \cite{Rakhmanov2008, Shvetsov2013, Asai2015}: \begin{equation}\label{mf} B(x,t)=\frac{\partial A(x,t)}{\partial x}\rightarrow \frac{A^z_{n+1}-A^z_n}{\ell}. \end{equation} Here $E_{em}=\frac{1}{8\pi \ell d}\bigg(\frac{\Phi_0}{2\pi}\bigg)^2$ is the so called \textit{electromagnetic energy} introduced in \cite{Rakhmanov2008}, determining the speed of ``light'' in the qubit chain, which, in dimensionless units, reads $\beta=\sqrt{E_{em}/E_J}$. Together with the ratio $\gamma=\frac{E_C}{E_J}$, it represents the main quantitative characteristic of CPB qubits, their derivatives (the transmon, for example), and networks made of them. 
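The origin of the $\cos\phi_n\cos\alpha_n$ term in Eq.~(\ref{tot}) can be made explicit by a one--line check: with the gauge--shifted phases $\varphi_{u,(l)}=-\phi_n\pm\alpha_n$ of the two junctions in the $n$--th cell, the elementary identity $$\cos(\phi_n-\alpha_n)+\cos(\phi_n+\alpha_n)=2\cos\phi_n\cos\alpha_n$$ shows that the sum of the two junction energies, $-E_J(\cos\varphi_u+\cos\varphi_l)$, reduces to the single term $-2E_J\cos\phi_n\cos\alpha_n$ appearing in Eq.~(\ref{tot}). 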
\section{Quantization and two--level approximation} The choice between a quantum--mechanical and a (semi)classical description of coupled qubit--EM field systems is still a matter of certain controversies \cite{contro}. Nevertheless, at low temperatures, where dissipation is negligible, a fully quantum treatment is justified. Under these conditions, the quantum state of an island is determined by the number of extra Cooper pairs on it. In addition, EM radiation exhibits quantum features for weak (small amplitude) EM fields, when their modes are populated with just a few photons, one or two, per wavelength \cite{fist2}. At this stage, we must note that the tunneling of a single CP between the banks and the island does not affect the state of the former, which contains a large number of CPs, so that the deficiency or the excess of a single CP has no particular significance. Formally, we quantize our model by introducing the photon creation and annihilation operators in real (direct) space, and the Josephson phase and Cooper pair number operators in the Cooper pair number basis. In such a way, through a few intermediate steps described in Appendix 1, the classical Hamiltonian Eq. (\ref{tot}) can be approximated by a quantum one describing the interaction of a collection of two--level systems and the quantized multimode electromagnetic field. \subsection{EM field} In the quantum regime the electromagnetic field is weak, i.e., the dimensionless amplitude of its vector potential is small and can be treated as a quantum fluctuation, i.e. $\alpha_n\rightarrow\hat \alpha_n\ll 1$. This enables us to expand $\cos\hat\alpha_n\approx 1-\frac{\hat\alpha^2_n}{2}$. Next, we quantize the EM field in two steps: first we define the generalized momentum $P_n=\frac{2\hbar^2}{E_c}\dot{\alpha_n}$ canonically conjugated to $\alpha_n$. Subsequently we treat the photon variables as operators $\alpha_n\rightarrow\hat \alpha_n$, $P_n\rightarrow \hat P_n$ satisfying the commutation relation $[\hat\alpha_n,\hat P_m]=i\hbar \delta_{m,n}$. It holds for the transformation Eq. (\ref{quantiz1}), through which we introduce the photon creation and annihilation operators in real (direct) space: \begin{eqnarray}\label{quantiz1} \hat \alpha_n= \frac{1}{2}\sqrt{\frac{E_C}{\hbar\omega}}\left(a_n+a^{\dagger}_{n}\right),\;\; \hat P_n={i\hbar}\sqrt{\frac{\hbar\omega}{E_c}}\left(a^{\dagger}_{n}-a_n\right). \end{eqnarray} \subsection{Qubit subsystem} Similarly, in the quantization of the CPB qubit subsystem we introduce a pair of canonically conjugated variables (operators): the \textit{phase} $\phi\rightarrow \hat \phi$ and the Cooper pair number operator $\hat N=-i\frac{\partial}{\partial \hat\phi_n}$, $[\phi_n, \hat{N_n}]=i$. Then we rewrite Eq. (\ref{tot}) in the Cooper pair number basis $|N\rangle$, using the correspondence $\hat N=-i\frac{\partial}{\partial \phi_n}$ and noticing that $e^{\pm i\hat \phi_n }|N\rangle = |N \pm 1\rangle$. Next, in the obtained Hamiltonian we exploit the fact that in the \textit{charge and transmon} regimes only a few lowest levels are relevant, and we may restrict ourselves to the reduced state space in which a single island can be unoccupied ($N=0$) or occupied by a single Cooper pair ($N=1$). The resulting Hamiltonian is nondiagonal in the reduced number basis $\{|0\rangle,\; |1\rangle\}$, and in the next step we diagonalize the free qubit part by means of a transition to the energy eigenbasis (${|e\rangle}-\mathrm{excited\; state},\; |g\rangle-\mathrm{ground\; state}$), performing the norm preserving unitary transformation Eq. (\ref{UT}). 
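To see where the level splitting quoted below comes from, note that the free qubit part of the reduced Hamiltonian, $E_C\tau^z_n-E_J\tau^x_n$ [cf. Eq.~(\ref{reduced}) in Appendix 1], is a $2\times 2$ matrix with eigenvalues $$\pm\epsilon=\pm\sqrt{E^2_C+E^2_J},$$ so that the transformation Eq.~(\ref{UT}) simply rotates to the basis of its eigenstates $|g\rangle$ and $|e\rangle$, separated by the energy $\Delta=2\epsilon$. 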
Finally, after neglecting the photon number non-preserving terms, i.e., those $\sim a^2_n$ and $\sim a^{\dagger 2}_n $, we obtain the quantized model Hamiltonian: \begin{eqnarray}\label{qb} \nonumber H=&&\Delta\sum_n|e\rangle_n\langle e|+\\ &&\hbar\omega\sum_n a^{\dagger}_n a_n-J\sum_n a^{\dagger}_n(a_{n+1}+a_{n-1})+\\ \nonumber &&\sum_n\bigg[B(|e\rangle_n\langle g|+|g\rangle_n\langle e|) -A|e\rangle_n\langle e|\bigg]a^{\dagger}_n a_n \end{eqnarray} Here the first term represents the Hamiltonian of the qubit subsystem, with level splitting between the excited and ground state $\Delta=2\epsilon$ ($\epsilon=\sqrt{E^2_J+E^2_C}$). It is represented here in terms of the operator $|e\rangle\langle e |$ to emphasize that the system is initially prepared so that all qubits are excited. Such ``atoms'' are usually called \textit{emitters}. In the pure photon Hamiltonian, the two terms in the second line correspond to a typical bosonic \textit{tight binding} model describing photon hopping between neighboring qubits. The parameters $\omega$ and $J$ stand for the photon frequency and the photon inter--qubit tunneling amplitude, respectively: \begin{eqnarray} \label{param1} \hbar\omega=\sqrt{2E_{em}E_C+\frac{E_CE^2_J}{2\epsilon}}, \qquad J=\frac{E_{em}E_C}{2\hbar\omega}. \end{eqnarray} In the noninteracting case, i.e. for the pure photon and qubit subsystems, the present model is analogous to those appearing frequently in theoretical descriptions of charge and energy transfer in various contexts. A recent application concerns photonic bandgap materials, where it addresses the photon hopping motion in \textit{coupled resonator (cavity) waveguides} \cite{aphbs9,aphbs10}. Quantum metamaterials built of such structures with embedded tunable quantum emitters, i.e., qubits, opened a new perspective for the further development of novel quantum technological devices, and for studies of nonclassical features of light \cite{fist2}. Finally, the last term is related to the qubit -- photon interaction. It possesses two components: the attractive one, measured by the parameter $A$, and the repulsive one, $\sim B$: \begin{eqnarray}\label{param2} A=\frac{E^2_JE_c}{4\hbar\omega \epsilon},\;\; B=\frac{E_JE^2_c}{8\hbar\omega \epsilon}. \end{eqnarray} For convenience we rewrite the interaction Hamiltonian in terms of ``atomic'' (pseudo--spin) operators ($\sigma^{\dagger,-,z}$): \begin{equation}\label{atomic} H_i=\sum_n [B(\sigma^{\dagger}_n+\sigma^{-}_n)-A\sigma^{\dagger}_n\sigma^{-}_n]a^{\dagger}_na_n. \end{equation} The operators in the attractive interaction term may be rearranged as follows: $\sigma^{\dagger}\sigma^-a^{\dagger}a\equiv \sigma^{\dagger}a\sigma^-a^{\dagger}- \sigma^{\dagger}\sigma^-$. Thus, it may be understood to originate on account of the simultaneous excitation ($\sigma^{\dagger}a$) and de--excitation ($\sigma^{-}a^{\dagger}$) of the $n$--th qubit by the absorption and emission of a single photon. On the other hand, the repulsive interaction comes from the photon scattering by qubits, resulting in their excitation ($|e\rangle\langle g|$) and de--excitation ($|g\rangle\langle e|$). These mechanisms differ substantially from those accounted for within the Dicke and Jaynes--Cummings models, which come from the excitation of a qubit (an atom in general) by the absorption of a single photon ($|e\rangle\langle g| a$) and, vice versa, qubit de--excitation by the emission of a single photon. 
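For orientation, the relative weight of the two interaction channels follows directly from Eq.~(\ref{param2}): $$\frac{B}{A}=\frac{E_c}{2E_J}=\frac{\gamma}{2},$$ so the repulsive (photon--scattering) term dominates deep in the charge regime ($\gamma\gg 1$), while the attractive term dominates in the Josephson regime ($\gamma\ll 1$); the balance between the two terms is thus controlled by the single parameter $\gamma$. 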
In coplanar geometry setups \cite{qed1,qed2} the qubit--photon interaction substantially differs from the present one and, in the rotating wave approximation, reads: $$H_{JC}=g\sum_n \left(\sigma^{\dagger}_n a_n+\sigma^{-}_n a^{\dagger}_n\right).$$ Note that here we can not distinguish whether the interaction is attractive or repulsive. This becomes possible only after deriving the eigenvalue equation, the counterpart of equation (\ref{eigen1}) from the next section, on the basis of the sign of the effective interaction parameter. So far, the interaction resulting from the set--up proposed here has not been encountered in the studies of light interacting with either natural or artificial media. Nevertheless, formally very similar models may appear in solids, e.g. magnetic semiconductors \cite{sd}, when a single electron creates micro--ferromagnetic domains by flipping the spins of neighboring ions, while the interaction Hamiltonian is given in terms of the $s$--$d(f)$ exchange, being very similar to (\ref{atomic}). We also point out that, in the present setup, the waveguide is the chain of unit cells (sketched in Fig. 1b), each of which contains a single qubit (``atom'') and may be viewed as an optical resonator. That is, our waveguide is a set of a large number ($\mathcal{N}\gg 1$) of coupled resonators (unit cells) with one ``atom'' per ``cavity'', which implies translational invariance of the system. Nevertheless, most often, the waveguide is a set of ``resonators'' designed independently of the ``atoms''. In these structures the ``atoms'' are arranged arbitrarily, depending on the particular application or research subject. Various settings are possible, and a particular waveguide may be populated by a few ($\mathcal{N}$) ``atoms'', with one or more ``atoms'' per cavity \cite{aphbs3, aphbs4, aphbs5, aphbs6, aphbs7, aphbs0, prohib, aphbs8, aphbs9, aphbs10, aphbs11}. One more distinction must be made in comparison with related systems. In that respect we refer to quantum metamaterials designed of a coplanar, mostly superconducting, resonator waveguide and several embedded qubits \cite{qed1, qed2, qed3, qed4}, where the qubits are linearly\footnote[2]{The interaction is of the first order in the field amplitude and contains only the terms linear in photon operators.} coupled to the resonator modes. \section{Qubit--photon bound states} \subsection{Vector of state and Schr\"odinger equation} The wave function which diagonalizes the Hamiltonian Eq. (\ref{qb}) has the form of a single photon dressed qubit (atom) state: \begin{eqnarray}\label{psi} \nonumber |\Psi\rangle= &&\sum_m u_m a^{\dagger}_m|0\rangle|g\rangle +\sum_{m,n}\Psi_{m,n} \sigma^{\dagger}_n a^{\dagger}_m |0\rangle|g\rangle, \\ &&\sigma^{\dagger}_n=|e\rangle_n\langle g|,\;\; \Psi_{m,n}=\Psi_{n,m}. \end{eqnarray} Here the probability amplitudes satisfy the normalization condition \begin{equation} \label{norm} \sum_m |u_m|^2+\sum_{m,n}|\Psi_{m,n}|^2=1. \end{equation} The first term in the state Eq. (\ref{psi}) corresponds to the case when a single photon is excited at site $m$ with probability amplitude $u_m$, while the qubits remain in their ground state. The second term of the vector of state Eq. (\ref{psi}) corresponds to the synchronized excitation of the $n$--th qubit and a photon at site $m$. The symmetry property $\Psi_{m,n}=\Psi_{n,m}$ reflects the translational invariance of the chain: the solutions must remain invariant when the photon and the qubit excitation exchange positions, i.e., under the simultaneous excitation of the qubit at site $m$ and the photon at the $n$--th site, and vice versa. 
Owing to the orthogonality of the states $\langle g|\langle 0|a_m$ and $\langle g|\langle 0|\sigma^{-}_m a_n$, we may project the Schr\"odinger equation $H|\Psi\rangle=E|\Psi\rangle$ onto $\sigma^{\dagger}_m a^{\dagger}_n|g\rangle|0\rangle$ and $a^{\dagger}_m |g\rangle|0\rangle$. In this way we obtain a system of coupled equations for the amplitudes $\Psi_{m,n}$ and $u_m$: \begin{eqnarray}\label{SE} \nonumber &&(\mathcal{E}-\Delta)\Psi_{m,n} +\frac{J}{2}\bigg(\Psi_{m,n+1}+\Psi_{m,n-1} +\left\lbrace m \rightleftarrows n \right\rbrace \bigg)=\\ &&\nonumber -A\Psi_{m,n}\delta_{m,n}+B u_m\delta_{m,n},\\ && \mathcal{E}u_m+J(u_{m+1}+u_{m-1})=B\Psi_{m,m}. \end{eqnarray} We will solve it by employing the Fourier transform. Owing to the translational invariance we pick: \begin{equation} \label{ft} \Psi_{m,n}=\frac{1}{\sqrt{\mathcal{N}}}e^{i\frac{K(m+n)}{2}\ell}\Phi_{m-n},\;\; u_m=\frac{1}{\sqrt{\mathcal{N}}}\sum_k u_k e^{ikm\ell}. \end{equation} In this way, the second equation in Eq. (\ref{SE}) attains a simple form and may be readily solved for $u_m$, which may then be eliminated from the first one. In the resulting equation we employ the translational invariance and set $m-n=l$; next we perform the Fourier transform $\Phi_l=\frac{1}{\mathcal{N}^{1/2}}\sum_q \Phi_q e^{iql\ell}$. This finally yields: \begin{eqnarray} \label{basic} &&\nonumber [\mathcal{E}-\Delta+2J\cos(K\ell/2) \cos q\ell] \Phi_q=\\ &&\bigg[-A+\frac{B^2}{(\mathcal{E}+ 2J\cos K\ell)} \bigg] \left(\frac{1}{\mathcal{N}}\sum_q\Phi_q\right), \end{eqnarray} where $K$ and $q$ stand for the center of mass and relative qubit--photon quasi--momenta, while $\mathcal{E}=E-\hbar\omega$. On the basis of this equation it is easy to find the relation for the eigenvalues: we first solve it for $\Phi_q$, then multiply both sides by $1/\mathcal{N}$ and sum over $q$. This results in: \begin{equation} \label{eigen0} 1=\frac{1}{\mathcal{N}} \sum_q\frac{1}{(\varepsilon-\delta+ \cos (K\ell/2)\cos q\ell)}\bigg[-a +\frac{b^2}{(\varepsilon + \cos K\ell)} \bigg]. \end{equation} Bound state solutions, if any exist, must lie outside the free state continuum appearing in the absence of the qubit--photon interaction. In that case Eq. (\ref{basic}) has the solution \begin{equation} \varepsilon(q,K)=\delta - \cos q\ell\cos \frac{K\ell}{2}, \end{equation} so that the bound state energy must lie either below the lower energy bound $$ \delta-|\cos(K\ell/2)|,$$ or above the higher one $$ \delta+|\cos(K\ell/2)|.$$ \subsection{Eigenvalue equation} The summation over $q$ may be performed in accordance with the rule: $\frac{1}{\mathcal{N}}\sum_q<....>=\frac{\ell}{2\pi }\int_{-\pi/\ell}^{\pi/\ell}d q<...>$. This, provided that $|\varepsilon-\delta|>1$, yields the self--consistent equations for the energy eigenvalues: \begin{eqnarray}\label{eigen1} \nonumber 1=a'(K) \frac{\mathrm{sgn}{(\varepsilon-\delta)}}{\sqrt{(\varepsilon-\delta)^2-\cos^2 (K\ell/2)}},\\ a'(K)=-a+b^2\frac{1}{\varepsilon+\cos K\ell}, \end{eqnarray} where $a=A/2J$, $b=B/2J$, $\delta=\Delta/2J$ and $\varepsilon=\mathcal{E}/2J$ stand for the normalized coefficients. For further convenience we express Eq. (\ref{eigen1}) in terms of just two parameters, $\beta$ and $\gamma$, which fully characterize the proposed system: \begin{eqnarray} \label{norpar} \nonumber a=\frac{1}{4\beta^2}\frac{1}{\sqrt{1+\gamma^2}},\; b=\frac{\gamma a}{2},\\ \frac{\hbar\omega}{2J}=2+\frac{1}{2\beta^2\sqrt{1+\gamma^2}},\\ \nonumber \delta=2\sqrt{2\frac{(1+\gamma^2)}{\gamma\beta^2}+\frac{\sqrt{1+\gamma^2}}{2\gamma\beta^4}}. 
\end{eqnarray} Eigen--equation (\ref{eigen1}) is a nonlinear (in $\varepsilon(K)$) transcendental equation and cannot be solved analytically. Nevertheless, its nonlinearity implies that it may have multiple solutions. That is, qubit--photon bound states, if any exist, should exhibit a multi--band structure. To facilitate practical calculations, to examine the possible appearance of a multi--band structure of the qubit--photon spectra, and, finally, to compare the present analysis with the related preceding ones \cite{bs1,bs2}, we rewrite Eq. (\ref{eigen1}) in the self-consistent form \begin{equation}\label{selfc} \varepsilon(K) -\delta=\pm \sqrt{a'^2(K)+\cos^2 (K\ell/2)}, \end{equation} in which, on the right hand side, $\varepsilon(K)$ appears implicitly through $a'(K)$ in accordance with Eq. (\ref{eigen1}). This ``solution'' strongly recalls the exact one obtained in the limit $a'(K)\rightarrow a$, appearing frequently in different contexts. Examples are numerous and, despite different physical backgrounds, formally identical solutions may be found in many cases, such as \textit{bound states of two photons, phonons, or excitons} \cite{bs1}. In addition, the problem of the bound state of an impurity atom and its vibrational or magnetic environment \cite{bs1}, within the simplest models, also reduces to these elementary solutions. \subsection{Existence of solutions} Solubility of Eq. (\ref{eigen1}) requires non--negativity of its right hand side; thus, for $\varepsilon - \delta<0$ ($\varepsilon - \delta >0$), eigen--energy solutions exist provided that $a'(K)<0$ ($a'(K)>0$). Accordingly, the signs ($+$ or $-$) in Eq. (\ref{selfc}) stand for $\varepsilon - \delta>0$ and $\varepsilon - \delta<0$, respectively. Also, throughout the paper, we call $a'(K)$ the \textit{effective} qubit--photon interaction strength. The term ``effective'' is used here to emphasize the self--consistency of (\ref{selfc}), and to point to its \textit{formal equivalence} with the exact solutions appearing when $a'(K)\rightarrow a$. To find $\varepsilon(K)$ we have performed numerical calculations, focusing on the case $\varepsilon - \delta<0$, when the effective qubit--photon interaction is attractive. The opposite case was not considered since our numerical calculations have shown that the solutions of the eigenvalue problem then exist only for unrealistic values of the system parameters, for example $\gamma \sim 100$. \subsection{Solutions: analytical considerations} Before presenting the results of our numerical calculations we perform some auxiliary analytic analysis, evaluating explicitly the eigen--energies at the band edges: $\varepsilon(\pm \pi)\equiv \varepsilon(\pi)$. In that limit (\ref{selfc}) becomes: \begin{equation} \label{edge} \varepsilon(\pi) -\delta =\pm a\left(1-\frac{a(\frac{\gamma}{2})^2}{\varepsilon(\pi)-1}\right). \end{equation} The signs $(+)$ or $(-)$ correspond to $\varepsilon -\delta > 0$ and $\varepsilon -\delta < 0$, respectively. The last equation, in both cases, is quadratic in $\varepsilon(\pi)$, implying the appearance of two bands, both for attractive and repulsive effective interaction. \begin{figure*}[t!] \subfigure[]{\includegraphics[width=0.35\textwidth]{b01g02ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b01g1ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b01g5ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b01g10ph}} \caption{Graphical illustration of the energy spectrum ($\varepsilon(K)$) of the system for $\beta =0.1$, and for four different values of $\gamma$. 
The green shaded area corresponds to free states. Blue solid lines correspond to qubit--photon bound states. For comparison we also show the band of bound states in the absence of the repulsive interaction (red dotted lines), and the pure photon dispersion curve $\varepsilon(q)$ (green dotted lines).} \label{fig02} \end{figure*} \begin{figure*}[t!] \centering \subfigure[]{\includegraphics[width=0.35\textwidth]{b02g02ph}} \subfigure[]{\includegraphics[width=0.375\textwidth]{b02g1ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b02g5ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b02g10ph}} \caption{ Energy spectrum ($\varepsilon(K)$) of the system for $\beta =0.2$, and for the same values of $\gamma$ as in the preceding case.} \label{fig03} \end{figure*} \begin{figure*}[t!] \centering \subfigure[]{\includegraphics[width=0.35\textwidth]{b05g02ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b05g1ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b05g5ph}} \subfigure[]{\includegraphics[width=0.35\textwidth]{b05g10ph}} \caption{ The same as in the previous cases, for $\beta=0.5$.} \label{fig04} \end{figure*} Solutions of Eq. (\ref{edge}) are: \begin{eqnarray} \label{attr} \nonumber \varepsilon_{\pm}(\pi)=&&\frac{1+\delta-a}{2}\pm\frac{1-\delta+a}{2} \sqrt{1+\left(\frac{a\gamma}{1-\delta+a} \right)^2 },\\ \nonumber && \mathrm{for\;\; the\;\; attractive\;\; effective \;\;interaction},\\ \nonumber \varepsilon_{\pm}(\pi)&&=\frac{1+\delta+a}{2}\pm\frac{1-\delta-a}{2} \sqrt{1-\left(\frac{a\gamma}{1-\delta-a} \right)^2 },\\ && \mathrm{for\;\; the\; repulsive\;\;one.} \end{eqnarray} In the present context $\delta$ is large compared with the other system parameters. Thus, the ratios in both square roots may be regarded as small quantities. This enables us to expand both square roots in Eq. (\ref{attr}) in terms of the ``small parameter'' $(a\gamma/(1-(\delta\pm a)))^2$, which yields the corresponding asymptotic relations: \begin{eqnarray} \label{asym} \nonumber \varepsilon_{-}\approx&&\delta - a -\frac{(\frac{a\gamma}{2})^2}{1-\delta+a},\\ \nonumber \varepsilon_{+}\approx&& 1+\frac{(\frac{a\gamma}{2})^2}{1-\delta+a},\\ \nonumber&& \mathrm{for\; the\; "attractive" \;effective\; interaction,}\\ \nonumber \varepsilon_{-}\approx&&\delta + a -\frac{(\frac{a\gamma}{2})^2}{\delta+a-1},\\ \varepsilon_{+}\approx&& 1+\frac{(\frac{a\gamma}{2})^2}{1-\delta-a},\\&& \nonumber \mathrm{for \; the\; "repulsive"\; effective \; interaction.} \end{eqnarray} Based on these equations we may estimate under which conditions particular types of solutions exist. For that purpose, we recall the existence conditions of the solutions from \textit{subsection C}. We focus on the repulsive interaction, for which our numerical calculations do not find meaningful solutions for realistic parameter values. According to Eq. (\ref{eigen1}) its solutions exist provided that $a'(K)>0$. Substituting the corresponding asymptotic solution from Eq. (\ref{attr}), the third equation in Eq. (\ref{asym}), into $a'(\pi)$ we obtain the following condition: \begin{equation} \label{conit} a(\frac{\gamma}{2})^2>\varepsilon(\pi)-1\Leftrightarrow \delta+a<1+ a(\frac{\gamma}{2})^2 \;\;\mathrm{for\; \; \varepsilon_+}. \end{equation} On the other hand, the solution $\varepsilon_-$, after subtracting $\delta$ on both sides, attains the form: $$ \varepsilon -\delta=1-\delta+\frac{\frac{\gamma^2}{4}}{\delta+a-1}.$$ Note that neither of these conditions can be satisfied in the present case. 
Namely, the condition for the existence of solutions in the case of repulsive interaction reads $\varepsilon-\delta<0$, which cannot be satisfied in practice due to the large values of $\delta$. In particular, for that purpose $\gamma \gtrsim 100$ would be required. \subsection{Solutions: numerical results} Numerical calculations were performed for values of the system parameters covering both the \textit{charging (large $\gamma$)} and the \textit{Josephson (small $\gamma$)} regimes. Note that there are no particular restrictions on the value of the dimensionless speed of light $\beta$ in the QMM. In particular, in the literature \cite{Rakhmanov2008,Shvetsov2013,Asai2018,scir} $\beta$ was taken to vary from a few tenths up to \emph{1}. Here we restrict ourselves to $\beta \eqslantless 0.5$, since the results for larger values do not exhibit any \emph{substantial qualitative} difference. Thus we used $\beta =0.1,\; 0.2,\; \mathrm{and}\; 0.5$, while, for each $\beta$, we took four values of $\gamma$: $0.2,\; 1, \; 5,\; \mathrm{and}\; 10$. Our results are illustrated in Figs. (\ref{fig02}) -- (\ref{fig04}). The energy spectrum consists of the free state continuum (green shaded area) and two bands of qubit--photon bound states, which are observed for each set of system parameters. The higher energy band (Band 1) lies below the free state continuum and, for large values of $\beta$ ($\beta= 0.5$, as presented in Fig.~\ref{fig04}), is practically indistinguishable from the bound states appearing in the case of pure attractive interaction, corresponding to the \textit{ad hoc} choice $B=0$. For small $\gamma$ Band 1 is well separated from the continuum, approaching it for larger values of $\gamma$. The features of Band 1 change profoundly as $\beta$ decreases. For example, for $\beta=0.1$ (Fig.~\ref{fig02}) the magnitudes of the Band 1 bound state energies and those of the free states, for each $K$, are almost twenty times higher than for $\beta=0.5$. In addition, for small $\gamma$ ($\gamma = 0.2$), Band 1 is practically indistinguishable from the bound states corresponding to pure attractive interaction. As $\gamma$ rises, Band 1 and the solutions for the pure attractive interaction separate, and both gradually tend towards the free state continuum. Qualitatively the same behavior is observed for $\beta = 0.2$, with a somewhat different degree of change. As presented in the lower parts of Figs. (\ref{fig02})--(\ref{fig04}), in parallel with Band 1 a second band (Band 2) appears. This is a novel band lying deeply below Band 1. It emerges from the competition between the attractive and repulsive interactions and lies below the free photon band. Its dependence on the parameters $\beta$ and $\gamma$ exhibits behavior similar to that of Band 1. That is, for large $\beta$, irrespective of the values of $\gamma$, Band 2 and the free photon band are practically identical, due to the complete compensation of the effective attractive and repulsive interactions. That is, the QMM is fully transparent, and there are no bound states. For smaller values of $\beta$ the attractive interaction dominates over the repulsive one and qubit--photon bound states emerge, provided that $\gamma$ is high enough. Nevertheless, the QMM is still transparent, but now for qubit--photon bound states. \section{Concluding Remarks} In this paper, we have studied the energy structure and cooperative qubit--photon excitations of a one--dimensional superconducting quantum metamaterial. The system consists of a large number ($\mathcal{N}\gg 1$) of periodically arranged charge qubits placed inside a massive two--stripe superconducting resonator. 
In such a setup each unit cell of the SCQMM (sketched in Fig. 1b) can be viewed as an electromagnetic resonator, while the system as a whole represents a coupled-resonator (cavity) waveguide with a single ``atom'' per cavity. This setup, upon quantization, exhibits some novel features in comparison to those used so far in the studies of the matter--light interaction. In particular, the system is translationally invariant, since the numbers of ``cavities'' and ``atoms'' match: each cavity contains a single qubit. So far, the studies on the subject were carried out under the condition that the individual ``atoms'' \cite{aphbs3, aphbs4, aphbs5, aphbs6, aphbs7, prohib, aphbs8, aphbs9, aphbs10, aphbs11} or their ensembles \cite{aphbs0} are placed in different resonators, and translational invariance has rarely been accounted for \cite{aphbs11, cohout}. Furthermore, the qubit--photon interaction is substantially different from that utilized in most studies on the subject \cite{aphbs3, aphbs4, aphbs5, aphbs6, aphbs7, aphbs0, aphbs8, prohib, aphbs9, aphbs10}, which were carried out within certain modifications of the celebrated Dicke model \cite{dicke}. The essential difference is that it now has two components, the attractive and the repulsive one, originating from different mechanisms: i) the simultaneous excitation ($\sigma^{\dagger}a$) and de--excitation ($\sigma^{-}a^{\dagger}$) of the $n$--th qubit by the absorption and emission of a single photon, for the attractive one, and ii) the photon scattering by qubits accompanied by their excitation and de--excitation, for the repulsive one. The main consequence of these peculiarities is the emergence of the mixed qubit--photon bound states. In particular, the energy spectrum of the qubit--photon bound states consists of two widely separated bands. The higher energy one lies far above the photon continuum. It is very close to that observed in the simple case of pure attractive interaction and appears for large $\varepsilon$, when $a'\rightarrow a$. Results almost identical to the preceding ones \cite{bs1,bs2} were observed. The lower band, near the band edges, lies within the photon continuum. Based on the recent findings \cite{aphbs8,aphbs9} we expect that these bound states may exert a considerable influence on the photon transport properties. This expectation relies upon the possibility of radiation trapping due to the creation of these bound states \cite{aphbs8,aphbs9}. In the present case, due to the translational invariance of the system, radiation trapping concerns the dressing of qubits by a photon cloud. The formation of bands of such complexes implies their free propagation. Band flattening with changes of the values of the system parameters points to the slowing down of these mixed states. The emergence of flat bands implies the possibility of stopping light, which indicates that the proposed setup could be used for manipulating light and opens up novel means for realizing operable quantum devices. The proposed setup is convenient for the practical realization of such devices with controllable parameters, which could be achieved by applying a constant external magnetic field in parallel with the propagating EM field. In such a way, the vector potential attains an additional constant term, $\alpha_n\rightarrow \alpha_n+\alpha_0$, so that the interaction term in (\ref{tot}), after a straightforward calculation, reads: $$ H_i\approx-2E_J\cos\phi_n\left[ \cos\alpha_0\left( 1-\frac{\alpha^2_n}{2}\right) -\sin\alpha_0 \;\alpha_n\right]. 
$$ By varying the external field it would be possible to change the tunneling energy and to ``flip'' between different regimes. A particularly interesting situation arises for $\alpha_0=\pi/2$, when the interaction term, upon quantization, attains a form identical to that encountered in coplanar arrangements. Finally, let us comment on the generality of our results. We do not expect that the features of the propagating signal, in the proposed geometrical arrangement, will qualitatively depend on the particular choice of the type of qubit \cite{zag}. Thus, for simplicity, and for a certain flexibility in the manipulation of a single qubit, we use here charge qubits, while any other type would give analogous results. \begin{acknowledgments} We thank D. Kapor for fruitful discussions and useful comments on the manuscript. This work was supported by the Ministry of Education, Science, and Technological Development of the Republic of Serbia. Z.I. acknowledges support by the ``Vin\v ca'' Institute -- special grant No. 104-1-2/2020-020, dated 11.01.2021. We also acknowledge the co-financing of this research by the European Union and Greek national funds through the Operational Program Crete 2020-2024, under the call ``Partnerships of Companies with Institutions for Research and Transfer of Knowledge in the Thematic Priorities of RIS3Crete'', with project title ``Analyzing urban dynamics through monitoring the city magnetic environment'' (project KPHP1 - 0029067), and also by the Ministry of Science and Higher Education of the Russian Federation in the framework of the Increase Competitiveness Program of NUST ``MISiS'' (No. K2-2019-010), implemented by a governmental decree dated 16th of March 2013, N 211. N.L. acknowledges support by the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI) (Code No. 203). \end{acknowledgments} \section{Appendix 1: quantization of the model Hamiltonian} \subsection{Quantization of the qubit subsystem} After the expansion $\cos\alpha_n\approx 1-\alpha^2_n/2$ and the transition to the Cooper pair number basis $|N\rangle$, together with the correspondence $\hat N=-i\frac{\partial}{\partial \phi_n}$ and noticing that $e^{\pm i\hat \phi_n }|N\rangle = |N \pm 1\rangle$, we rewrite Hamiltonian Eq. (\ref{tot}) in the charge basis as follows: \begin{eqnarray}\label{CPN} \nonumber H=&&\sum_n 2E_C\hat N^2_n|N\rangle_n\langle N|-\\ \nonumber && E_J\sum_n\bigg(|N\rangle_n\langle N+1|+|N+1\rangle_n\langle N|\bigg)+\\ \nonumber && \frac{E_J}{2}\sum_n\bigg(|N\rangle_n\langle N+1|+|N+1\rangle_n\langle N|\bigg)\alpha^2_n+ \\ && \sum_n\left(\frac{2\hbar^2}{E_c}\dot\alpha^2_n+ E_{em}(\alpha_{n+1}-\alpha_n)^2\right) \end{eqnarray} In the reduced state space, in which a single island can be unoccupied ($N=0$) or occupied by a single Cooper pair ($N=1$), we obtain the reduced Hamiltonian: \begin{eqnarray}\label{reduced} \nonumber&& H=-E_c\mathcal{N}+ \sum_n\bigg[E_c \tau^z_n-E_J\tau^x_n\bigg]+\\ &&\sum_n\left(\frac{2\hbar^2}{E_c}\dot\alpha^2_n+ E_{em}(\alpha_{n+1}-\alpha_n)^2+\frac{E_J}{2}\tau^x_n\alpha^2_n\right). 
\end{eqnarray} where $\tau^x_n=|1\rangle_n\langle 0|+|0\rangle_n\langle 1|$ and $\tau^z_n=|1\rangle_n\langle 1|-|0\rangle_n\langle 0|$, while in deriving the above result we have used the completeness relation $|1\rangle_n\langle 1|+|0\rangle_n\langle 0|\equiv 1$. The qubit component of this Hamiltonian may be diagonalized by means of the norm preserving ($1=|e\rangle_n\langle e|+|g\rangle_n\langle g|$) transformation: \begin{eqnarray}\label{UT} \nonumber \tau^x_n=\cos\eta\left( |e\rangle_n\langle g|+|g\rangle_n\langle e|\right)-\sin\eta \left( |e\rangle_n\langle e|-|g\rangle_n\langle g|\right),\\ \tau^z_n=\cos\eta\left( |e\rangle_n\langle e|-|g\rangle_n\langle g|\right)+\sin\eta \left(|e\rangle_n\langle g|+|g\rangle_n\langle e|\right),\\ \nonumber \tan\eta=\frac{E_J}{E_C},\;\sin\eta=-\frac{E_J}{\sqrt{E^2_c+E^2_J}},\; \cos\eta=\frac{E_C}{\sqrt{E^2_c+E^2_J}} \end{eqnarray} In such a way, up to an irrelevant constant, the above Hamiltonian becomes \begin{widetext} \begin{eqnarray}\label{htl} \nonumber H= &&\sum_n \left\{ 2\epsilon|e\rangle_n\langle e|+\left[\frac{E_JE_c}{2\epsilon}\left(|e\rangle_n\langle g|+|g\rangle_n\langle e|\right)-\frac{E^2_J}{\epsilon}|e\rangle_n\langle e|\right]\alpha^2_n \right\}+ \\ && \sum_n\left(\frac{2\hbar^2}{E_c}\dot\alpha^2_n+ E_{em}(\alpha_{n+1}-\alpha_n)^2+ \frac{E^2_J}{2\epsilon}\alpha^2_n\right). \end{eqnarray} \end{widetext} Here $\epsilon=\sqrt{E^2_c+E^2_J}$, so that $\pm \epsilon$ denote the energies of the eigenstates: the ground (-) and the excited (+) one. \subsection{Quantization of EM field} As usual, we consider $\alpha_n \ll 1$ and expand the corresponding \textit{``cosine''} term in the interaction. First we define the generalized momentum $P_n=\frac{2\hbar^2}{E_c}\dot{\alpha_n}$ canonically conjugated to $\alpha_n$. Now we treat the photon variables as operators $\alpha_n\rightarrow\hat \alpha_n$, $P_n\rightarrow \hat P_n$, requiring that they satisfy the commutation relation $[\alpha_n, P_m]=i\hbar\delta_{m,n}$, which holds for the following transformation: \begin{eqnarray} \label{quantiz2} \nonumber \hat \alpha_n=\frac{1}{2}\sqrt{\frac{E_C}{\hbar\omega}}\left(a_n+a^{\dagger}_{n}\right),\;\; \hat P_n={i\hbar}\sqrt{\frac{\hbar\omega}{E_c}}\left(a^{\dagger}_{n}-a_n\right). \end{eqnarray} Substitution of the above expressions in Eq. (\ref{htl}) yields the following model Hamiltonian: \begin{widetext} \begin{eqnarray}\label{hf} \nonumber H=\nonumber\sum_n \left[2\epsilon|e\rangle_n\langle e|+\hbar\omega a^{\dagger}_n a_n-\frac{E_{em}E_C}{2\hbar\omega}a^{\dagger}_n\left(a_{n+1}+a_{n-1}\right) \right] +\frac{E_JE_C}{8\hbar\omega \epsilon}\sum_n\left[ E_c\left(|e\rangle_n\langle g|+|g\rangle_n\langle e|\right)-2E_J|e\rangle_n\langle e|\right]\left(a^{\dagger}_n+a_n \right)^2 \end{eqnarray} \end{widetext} Finally, neglecting the photon number non-preserving terms $\sim a^2_n$ and $\sim a^{\dagger 2}_n$, we arrive at the model Hamiltonian used in the main text: \begin{widetext} \begin{eqnarray}\label{modH} \nonumber H_s=\Delta\sum_n|e\rangle_n\langle e|+\hbar\omega\sum_n a^{\dagger}_n a_n-J\sum_n a^{\dagger}_n(a_{n+1}+a_{n-1})-\sum_n\left[A|e\rangle_n\langle e|-B(|e\rangle_n\langle g|+|g\rangle_n\langle e|)\right] a^{\dagger}_n a_n \end{eqnarray} \end{widetext} \begin{equation} \hbar\omega=\sqrt{2E_{em}E_C+\frac{E_CE^2_J}{2\epsilon}},\;\; J=\frac{E_{em}E_C}{2\hbar\omega},\;\; A=\frac{E^2_JE_c}{4\hbar\omega \epsilon},\;\; B=\frac{E_JE^2_c}{8\hbar\omega \epsilon}. \end{equation}
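As a simple consistency check (not an additional result), dividing the parameters of the last equation by $2J$ and using $E_C=\gamma E_J$ and $E_{em}=\beta^2 E_J$ reproduces the normalized coefficients quoted in Eq.~(\ref{norpar}) of the main text: $$a=\frac{A}{2J}=\frac{E^2_J}{4\epsilon E_{em}}=\frac{1}{4\beta^2\sqrt{1+\gamma^2}},\qquad b=\frac{B}{2J}=\frac{E_JE_C}{8\epsilon E_{em}}=\frac{\gamma a}{2}.$$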
\section{Introduction} Scattering experiments, whether real or fictitious, offer a simple way to gain understanding of the physical implications of a theory. The small-deflection limit provides a further simplified testing ground where precise analytical results are usually possible. The study of small-angle scattering in general relativity began in the 1980's with the derivation of the first-order \cite{Portilla:1980uz} and second-order \cite{Westpfahl:1985tsl} deflection angle, and has recently seen a resurgence of interest due to connections with quantum scattering methods and the dynamics of bound systems. The new computational firepower thrown at this problem has resulted in spectacular progress (with references too numerous to list here), with the latest results now probing the \textit{fourth} order in the small-angle approximation \cite{Bern:2021dqo,Dlapa:2021npj}. Inspired by this rich, interconnected set of results, we set about to understand gravitational scattering with a new approach using self-force methods \cite{Gralla:2021qaf}. We managed to reproduce the second-order results from the 1980s, but discovered, to our surprise, an overlooked feature of the problem that appears even at the \textit{first} order beyond straight line motion. In addition to computing the energy, momentum, and angular momentum of the particles, we considered the last, overlooked conserved quantity: the mass moment. For a system of point particles, the mass moment is defined by \begin{align}\label{Nmechintro} \bm{N}_{\rm mech} = \sum_I E_I \bm{r}_I - t \sum_I \bm{p}_I, \end{align} where $\bm{r}_I$, $E_I$ and $\bm{p}_I$ are the position, energy and momentum (respectively) of the particles labeled by $I$.\footnote{We set the speed of light to unity ($c=1$) and regard relativistic mass and energy as equivalent. If we had not made this choice, we would divide the first term of \eqref{Nmechintro} by $c^2$, ensuring that mass moment has units of mass times length.} We include the subscript ``mech'' to emphasize that any field contributions have not been included in this formula. The mass moment is the position-weighted energy of the system minus its total momentum times time, and its conservation reflects the uniform motion of the center of energy. It is numerically equal to the total energy times the center of energy at time $t=0$, and it thereby tracks the position of the center of energy at a fiducial time. Although the total value can always be set to zero by a translation, the mass moment is additive (unlike the center of energy) and can therefore be budgeted like the energy, momentum, and angular momentum. That is, we can ask about \textit{exchange} of mass moment between different degrees of freedom, or \textit{radiation} of mass moment away to infinity. From a relativistic point of view, mass moment is inseparable from the angular momentum, since the two mix under boosts and only together form a relativistically invariant object (e.g., \cite{MTW}). In the scattering of two masses $m_1$ and $m_2$, the important mass scales are the initial total energy $E_0$ and the relativistic reduced mass $\mu$, \begin{align}\label{E0} E_0 = \sqrt{m_1^2 + m_2^2 + 2\gamma m_1 m_2}, \qquad \mu = \frac{m_1 m_2}{E_0}, \end{align} where $\gamma$ is the initial relative Lorentz factor. 
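As a quick check of these definitions, note that in the nonrelativistic limit $\gamma \to 1$ they reduce to the familiar quantities $E_0 \to m_1+m_2$ and $\mu \to m_1 m_2/(m_1+m_2)$, i.e., the total rest mass and the ordinary Newtonian reduced mass. 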
In the small-deflection limit, the center of energy-momentum (CEM) frame scattering angle is proportional to \begin{align} \chi = \frac{G E_0}{bv^2} \ll 1, \end{align} where $G$ is Newton's constant, $b$ is the impact parameter, and $v$ is the initial relative velocity. In our study of small-angle gravitational scattering through second order in $\chi$ \cite{Gralla:2021qaf}, we found that the mechanical mass moment changes during the scattering process. At leading order in $\chi$, the change is \begin{align} \Delta \bm{N}_{\rm mech} = 2 \mu b \chi \gamma(1-3v^2) \log \frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} \bm{\hat{z}}, \end{align} where $\bm{\hat{z}}$ is a unit vector pointing from particle 1 to particle 2 at early times. This form makes clear that the effect disappears in the Newtonian limit $v \to 0$, as it must. However, plugging in for $\chi$ shows that the change in mass moment is in fact \textit{independent of the impact parameter}, \begin{align}\label{gravscoot} \Delta \bm{N}_{\rm mech} = 2 \gamma\frac{1-3v^2}{v^2} G m_1 m_2 \log \frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} \bm{\hat{z}}. \end{align} This suggests that the effect is not tied to the details of the small-angle scattering encounter and will exist in similar form for large-angle scattering with $\chi \gtrsim 1$. These results surprised and puzzled us for a number of reasons, not least because we believed we were working in the CEM frame, where the total mass moment should be zero. We suspected that contributions from the gravitational field need to be included, but ran into obstacles related to the fact that field energy is fundamentally gauge-dependent in general relativity. It is also not entirely clear that spacetime can be treated as flat for the purposes of computing conserved quantities at early and late times, since these involve $1/t$ corrections that are formally the same order as gravitational field perturbations to the metric. Work is underway to settle these issues in general relativity \cite{gravscoot}. However, in the mean time, we can consider a simpler, electromagnetic analog where the problems do not arise. This problem is surely even older than the gravitational one, and many aspects of the calculation have undoubtedly been performed before (not least in the recent, mammoth exploration through third perturbative order \cite{Saketh:2021sri}). However, we are unaware of any results on the mass moment, or even any mention of this quantity, in past work on electromagnetic scattering. Consider, then, the small-angle scattering of classical charged particles. The leading deflection angle is proportional to \begin{align} \chi_{\rm EM} = \frac{q_1 q_2}{\mu b v^2} \ll 1,\label{chiEM} \end{align} where $q_1$ and $q_2$ are the particle charges. (There is no explicit coupling constant analogous to $G$ because we work in Gaussian units.) Computing at first order in $\chi_{\rm EM}$, we find a precisely analogous change in mass moment, \begin{align} \Delta \bm{N}_{\rm mech} & = -2 \mu b \gamma^{-2} \chi_{\rm EM} \log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} \bm{\hat{z}} \label{NEM1} \\ & = -\frac{2 q_1q_2}{\gamma^2 v^2}\log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} \bm{\hat{z}}. \label{NEM2} \end{align} Eq.~\eqref{NEM1} demonstrates the relationship to our perturbative calculation, while Eq.~\eqref{NEM2} shows that the change in mechanical mass moment is again independent of the impact parameter. 
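As a simple check of Eq.~\eqref{NEM2}, note that for equal masses ($m_1=m_2$) the logarithm is $\log 1 = 0$, so that $\Delta \bm{N}_{\rm mech}$ vanishes; the effect therefore requires unequal masses, and its direction along $\pm\bm{\hat{z}}$ is set by which particle is heavier and by the sign of $q_1 q_2$. 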
In the electromagnetic setting we can be confident, based on general theorems, that this change is in fact compensated by an equal and opposite change in the electromagnetic contribution to the mass moment. The EM field mass moment is given by \begin{align}\label{NEM} \bm{N}_{\rm EM} = \int \mathcal{E} \bm{x} d^3 \bm{x} - t \int \bm{S} d^3x, \end{align} where $\mathcal{E}$ and $\bm{S}$ are the electromagnetic field energy and momentum densities, respectively. We argue that only the cross-term contributions (proportional to $q_1 q_2$) should be included in the point particle limit and explicitly evaluate these integrals at early and late times. We find that, indeed, the electromagnetic contribution exactly balances the mechanical one, \begin{align} \Delta \bm{N}_{\rm EM} = - \Delta \bm{N}_{\rm mech}. \end{align} That is, there is no change in total mass moment, only an \textit{exchange} between mechanical and electromagnetic degrees of freedom. The exchange is permanent: an electromagnetic scoot. The electromagnetic problem thus provides a neat and tidy story that can be fully understood. The details and outcome of this calculation give insight into electromagnetic phenomena and lessons for seeking analogous understanding in the gravitational case. We discuss these points in detail at the conclusion of the manuscript. While this paper was inspired by gravitational phenomena and is aimed substantially at researchers working in this area, we feel that the electromagnetic results are of interest in their own right. As such, we have endeavored to make the paper accessible to aficionados of electromagnetism who are not necessarily steeped in relativistic notation. We have therefore chosen to use vector notation throughout, eschewing the tensors that are standard in gravitational physics. However, we have retained the use of Gaussian units with the speed of light set equal to one, so that results can be easily compared with electromagnetic calculations in the high-energy and gravitational physics literature. Readers unfamiliar with these units can always restore constants like $4\pi \epsilon_0$ and $c$ via dimensional analysis. This paper is organized as follows. In Sec.~\ref{sec:scattering} we set up the problem and derive the particle trajectories and electromagnetic fields through first order in the interaction. In Sec.~\ref{sec:conserved} we calculate all conserved quantities at leading order at early and late times ($t \to \pm \infty$) and analyze the implications. We pay particular attention to the mass moment and also discuss the center of energy. We conclude with some discussion of the scoot phenomenon. An appendix describes the evaluation of certain integrals that arise in the analysis. \section{small-angle scattering of relativistic charged particles}\label{sec:scattering} We will consider a scattering encounter between two classical charged particles $1$ and $2$ in the approximation of small deflection. To leading order, the particles move in straight lines, and the description is simplest in the frame where one particle is at rest. We will take particle $2$ to be at rest at the origin and denote this frame with a prime. Choosing the motion to be in the $z'$ direction and the transverse separation to be in the $x'$ direction, the leading-order trajectories are simply \begin{align} \bm{r}_1' & = (b,0,v t') \label{r1p} \\ \bm{r}_2' & = (0,0,0), \label{r2p} \end{align} where $b$ and $v$ are constants interpreted as the impact parameter and relative velocity, respectively. 
Without loss of generality we assume that $v$ is positive, \begin{align} v>0, \end{align} so that particle 1 moves in the $+z'$ direction. We determine the corrected motion by integrating the Lorentz force law using the electric and magnetic fields produced by the background trajectories. There are no magnetic forces since $\dot{\bm{r}}_2'=0$ and $\bm{B}_2'=0$ to zeroth order, and the Lorentz force law becomes \begin{align} m_1 \ddot{\bm{r}}_1' & = \frac{q_1}{\gamma} \left( \bm{E}_2' - v^2 (\hat{\bm{z}}'\cdot \bm{E}_2') \hat{\bm{z}}'\right) \label{a1p}\\ m_2 \ddot{\bm{r}}_2' & = q_2 \bm{E}_1',\label{a2p} \end{align} where $\gamma=(1-v^2)^{-1/2}$ is the relative Lorentz factor. The electric fields produced at leading order are \begin{align} \bm{E}_1' & = q_1 \gamma\frac{(x'-b)\bm{\hat{x}}' + y'\bm{\hat{y}'} + (z'-vt')\bm{\hat{z}}'}{[(x'-b)^2+y'^2+\gamma^2(z'- v t')^2]^{3/2}} \label{E1p} \\ \bm{E}_2' & = q_2\frac{x'\bm{\hat{x}}' + y' \bm{\hat{y}}'+ z'\bm{\hat{z}}'}{[x'^2+y'^2+z'^2]^{3/2}}.\label{E2p} \end{align} The corrected motion is determined by integrating the right-hand sides of Eqs.~\eqref{a1p} and \eqref{a2p} using Eqs.~\eqref{E1p} and \eqref{E2p}. We choose the integration constants so that the perturbed velocity of each particle vanishes in the distant past (so that $v$ is interpreted as the initial relative velocity), and so that the particles reach $z'=0$ at $t'=0$. We find \begin{align} x_1' & = b + \frac{q_1 q_2}{b m_1 \gamma v^2}\left( vt' + \sqrt{b^2+v^2 t'^2} \right) \label{x1p} \\ z_1' & = v t' - \frac{q_1 q_2}{m_1 \gamma^3v^2}\textrm{arctanh}\frac{vt'}{\sqrt{v^2t'^2+b^2}} \label{z1p}\\ x_2' & = -\frac{q_1 q_2}{b m_2 v^2}\left( vt' + \sqrt{v^2 t'^2+b^2 \gamma^{-2}} \right)\label{x2p} \\ z_2' & = \frac{q_1 q_2}{m_2 \gamma^2v^2}\textrm{arctanh}\frac{vt'}{\sqrt{v^2t'^2+b^2\gamma^{-2}}}.\label{z2p} \end{align} These equations provide the corrected particle trajectories. The second particle is no longer at rest, but its velocity is asymptotically zero at early times. We may thus interpret the primed frame as the ``initial rest frame'' of particle 2. Note, however, that the position of particle 2 in fact diverges logarithmically at early (and late) times. This is an unavoidable consequence of the long-range nature of the Coulomb force: an inverse-square force integrates up to a logarithmically divergent position. \subsection{Transformation to CEM frame} While the primed frame was convenient for finding the trajectories, it is conceptually less useful since it makes an explicit preference for one particle over the other, when no such preference exists in the problem. A more natural choice is the center of energy-momentum (CEM) frame, defined as the frame with no momentum or mass moment, \begin{align}\label{CEM-def} \bm{p} = \bm{N} = 0. \end{align} In our relativistic scattering problem, these quantities receive contributions from both the particles and the electromagnetic field. At leading order (neglecting the interaction), the particles move in straight lines [Eqs.~\eqref{r1p} and \eqref{r2p}] and there is no contribution from the electromagnetic field. In this case the appropriate transformation consists of a Lorentz boost in the $z$ direction and a translation in the $x$ direction, \begin{align} t & = \frac{m_2+\gamma m_1}{E_0}t' - \frac{\gamma m_1 v}{E_0}z' \label{tCM} \\ z & = \frac{m_2+\gamma m_1}{E_0}z' - \frac{\gamma m_1 v}{E_0}t' \label{zCM} \\ x & = x' - b\frac{m_1(\gamma m_2 + m_1)}{E_0^2}, \label{xCM} \end{align} where $E_0$ was introduced in Eq.~\eqref{E0} above. 
Although this transformation was designed for the leading order motion, it turns out that no modification is necessary for the first perturbative correction. That is, we will see that Eqs.~\eqref{tCM}-\eqref{xCM} still take us to the CEM frame as defined by \eqref{CEM-def}. However, and quite surprisingly, we find that it is essential to take into account the contribution from the electromagnetic field, \textit{even at early and late times, when the particles are infinitely separated}. This fact enables the electromagnetic scoot: a permanent exchange of mass moment between particle and field. \subsection{CEM-frame particle trajectories} The path of particle $1$ in the CEM frame is given by plugging Eqs.~\eqref{x1p} and \eqref{z1p} into Eqs.~\eqref{tCM}--\eqref{xCM} and solving the resulting set of equations for $x_1(t)$ and $z_1(t)$, dropping terms nonlinear in $q_1 q_2$; the same procedure for particle $2$ gives $x_2(t)$ and $z_2(t)$. The results are \begin{align} x_1 &= b_1 + \frac{q_1 q_2}{b m_1 \gamma v^2}\left( v t_1 + \sqrt{b^2+v^2 t_1^2} \right) \label{x1} \\ z_1 &=v_1 \left( t - \frac{q_1 q_2 E_0}{m_1 m_2 \gamma^3 v^3}\textrm{arctanh} \frac{v t_1}{\sqrt{b^2 + v^2 t_1^2}} \right) \label{z1} \\ x_2 &= b_2 - \frac{q_1 q_2}{b m_2 \gamma v^2}\left( v t_2 + \sqrt{b^2+v^2 t_2^2} \right) \label{x2} \\ z_2 &= v_2\left( t - \frac{q_1q_2 E_0}{m_1 m_2 \gamma^3 v^3}\textrm{arctanh}\frac{v t_2}{\sqrt{b^2 + v^2 t_2^2}}\right), \label{z2} \end{align} where we define \begin{align} t_1 & = \frac{\gamma E_0}{m_1 + \gamma m_2}t, \qquad \! \ \ \ \ t_2= \frac{\gamma E_0}{m_2 + \gamma m_1}t \label{tdefs} \\ b_1 & = \frac{m_2(m_2+\gamma m_1)}{E_0^2}b, \quad b_2 = -\frac{m_1(m_1+\gamma m_2)}{E_0^2}b \label{bdefs} \\ v_1 & = \frac{\gamma m_2}{m_1 + \gamma m_2}v, \qquad \ \ \ v_2 = -\frac{\gamma m_1}{m_2 +\gamma m_1}v. \label{vdefs} \end{align} Notice that relabeling the particles ($m_1 \leftrightarrow m_2$ and $q_1 \leftrightarrow q_2$) is equivalent to reversing the sign of $x$ and $z$. In other words, the configuration is invariant under swapping the particles and also rotating by $180^\circ$ within the plane of their motion. This physical symmetry is a special property of the CEM frame. It was taken as the \textit{definition} of the CEM frame in our recent work \cite{Gralla:2021qaf}. \subsection{CEM-frame electromagnetic field} The leading electromagnetic field is determined by the leading, straight-line motion of the charges. Since the sources have constant velocity, their fields are just the boosted Coulomb field, given for $I=1,2$ by \begin{align} \mathbf{E}_I & = \frac{q_I\gamma_I}{R_I^3}\left[ (x-b_I)\bm{\hat{x}} + y \bm{\hat{y}} + (z-v_I t)\bm{\hat{z}} \right] \label{EI} \\ \mathbf{B}_I & = \frac{q_I\gamma_I v_I}{R_I^3}\left[ y \bm{\hat{x}} - (x-b_I) \bm{\hat{y}} \right], \label{BI} \end{align} where $\gamma_I=(1-v_I^2)^{-1/2}$ and the boosted distance function is \begin{align} R_I & = \sqrt{(x-b_I)^2+y^2+\gamma_I^2(z-v_It)^2}. \end{align} Notice that the electric and magnetic fields are invariant under the operations $m_1 \leftrightarrow m_2$, $q_1 \leftrightarrow q_2$, $x \to -x$, $z\to -z$, a special property of the CEM frame [see discussion below Eq.~\eqref{vdefs}]. This completes the first-corrected description of the problem in the CEM frame: the particle trajectories are given in Eqs.~\eqref{x1}--\eqref{z2}, while the electric and magnetic fields are given in Eqs.~\eqref{EI} and \eqref{BI}. 
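As an independent sanity check of the perturbative trajectories, Eqs.~\eqref{x1p}--\eqref{z2p}, one can integrate the leading-order force law \eqref{a1p} numerically and compare the late-time transverse velocity of particle 1 with the value $2q_1q_2/(b m_1 \gamma v)$ obtained by differentiating Eq.~\eqref{x1p}. The following minimal sketch (in Python with NumPy/SciPy; the parameter values, variable names, and tolerances are illustrative choices, not taken from this paper) holds particle 2 fixed at the origin, as appropriate at this order: \begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (Gaussian units, c = 1); not values from the paper.
q1, q2 = 1.0, 1.0
m1, m2 = 1.0, 2.0
b, v = 100.0, 0.5          # large impact parameter => small deflection
gamma = 1.0 / np.sqrt(1.0 - v**2)

def rhs(t, y):
    # y = (x1, z1, vx1, vz1); particle 2 held fixed at the origin,
    # consistent with the leading-order field E_2' of Eq. (E2p).
    x, z, vx, vz = y
    r3 = (x**2 + z**2) ** 1.5
    Ex, Ez = q2 * x / r3, q2 * z / r3
    # Leading-order force law, Eq. (a1p):
    ax = q1 * Ex / (gamma * m1)
    az = q1 * (1.0 - v**2) * Ez / (gamma * m1)
    return [vx, vz, ax, az]

T = 2.0e4                  # start and end far from closest approach
sol = solve_ivp(rhs, [-T, T], [b, -v * T, 0.0, v], rtol=1e-10, atol=1e-12)
print("numerical dv_x:", sol.y[2, -1])
print("analytic  dv_x:", 2 * q1 * q2 / (b * m1 * gamma * v))
\end{verbatim} For a large impact parameter (small $\chi_{\rm EM}$) the two printed numbers agree up to the expected relative corrections of order $\chi_{\rm EM}$, plus a small error from truncating the integration at finite time. 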
\section{Analysis of conserved quantities}\label{sec:conserved} We will now discuss the behavior of the four conserved quantities: energy, momentum, angular momentum, and mass moment. For clarity, let us first imagine the situation where the particles are modeled by smooth, extended bodies. The budget for the system involves mechanical contributions from bodies $1$ and $2$ as well as the contribution from the electromagnetic field, \begin{align} E & = E_1 + E_2 + E_F \\ \bm{p} & = \bm{p}_1 + \bm{p}_2 + \bm{p}_{F} \\ \bm{L} & = \bm{L}_1 + \bm{L}_2 + \bm{L}_{F} \\ \bm{N} & = \bm{N}_1 + \bm{N}_2 + \bm{N}_{F}. \end{align} The form of the body contributions will depend on the particular model for the bodies, but the electromagnetic contribution is always given by \begin{align} E_F & = \frac{1}{8\pi} \int \left( \bm{E}^2+\bm{B}^2 \right) d^3 x \label{EF} \\ \bm{p}_F & = \frac{1}{4\pi}\int \left( \bm{E} \times \bm{B} \right) d^3x \label{pF} \\ \bm{L}_F & = \frac{1}{4 \pi}\int \bm{x} \times (\bm{E} \times \bm{B}) \ \! d^3 x\\ \bm{N}_F & = \frac{1}{8\pi} \int \left( \bm{E}^2+\bm{B}^2 \right) \bm{x} \ \! d^3 x - \bm{p}_F t. \label{NF} \end{align} Now let us consider the point particle limit. The particle conserved quantities take their standard relativistic forms, \begin{align} E_I & = \gamma_I m_I \label{Ep} \\ \bm{p}_I & = \gamma_I m_I \dot{\bm{r}}_I \label{pp} \\ \bm{L}_I & = \bm{r}_I \times \bm{p}_I \label{Lp} \\ \bm{N}_I & = \gamma_I m_I \bm{r}_I - \bm{p}_I t,\label{Np} \end{align} where $I=1,2$ labels the particles. In these expressions the full Lorentz factors $\gamma_I=(1-\dot{\bm{r}}_I^2)^{-1/2}$ must be used, as opposed to the background Lorentz factors appearing in Eqs.~\eqref{EI} and \eqref{BI}. The point particle limit is a significant simplification, since we now have definite expressions for the conserved quantities. The point particle limit will also help with the field integrals, but a naive application brings trouble. Whereas Eqs.~\eqref{EF}-\eqref{NF} are perfectly well-defined for extended bodies, they are divergent for point particles. The electric and magnetic fields grow like inverse distance squared as one approaches the particles, so that the densities of the conserved quantities grow like inverse distance to the fourth power. These singularities are not integrable, and all four conserved quantities are divergent. This issue is generally known as the ``electron self-energy problem,'' although the problem is with the naive point particle limit, not with electrons. To see how to proceed, let us consider the example of the energy, \begin{align} E_{F} = \frac{1}{8\pi} \int \left[(\bm{E}_1 + \bm{E}_2)^2 + (\bm{B}_1 + \bm{B}_2)^2 \right] d^3 x. \end{align} This integral naturally splits into three contributions $E_F = E_{F1}+E_{F2}+E_{F\times}$ given by \begin{align} E_{F1} & = \frac{1}{8\pi} \int (\bm{E}_1^2 + \bm{B}_1^2) d^3 x \\ E_{F2} & = \frac{1}{8\pi} \int (\bm{E}_2^2 + \bm{B}_2^2) d^3 x \\ E_{F\times} & = \frac{1}{4\pi} \int (\bm{E}_1\cdot \bm{E}_2 + \bm{B}_1 \cdot \bm{B}_2) d^3 x.\label{Ecross} \end{align} The integrals for $E_{F1}$ and $E_{F2}$ are infinite on account of the inverse-square divergence of the electric and magnetic fields at the positions of the particles. However, there are several different ways to see that we can, and in fact \textit{must}, drop these terms from the calculation. First, note that the $E_{F1}$ and $E_{F2}$ integrals are proportional to $q_1^2$ and $q_2^2$, respectively, and hence are relevant only to self-force effects. 
The self-force is proportional to the time-derivative of the acceleration and therefore unimportant in the approximation of straight line motion that we adopt; these effects become relevant at one order higher than we pursue here \cite{Saketh:2021sri}. Secondly (but relatedly), the self-energy terms are in fact \textit{already included} in our calculation via their contribution to the particle masses $m_1$ and $m_2$, and it would be inconsistent to include them again in the energy budget. This may be seen directly in the derivation of the Lorentz force law for extended bodies \cite{Gralla:2009md},\footnote{In the classic derivation of the self-force (e.g. \cite{Poisson:1999tv}), the infinite self-energy is combined with a negatively infinite ``bare mass'', with the sum representing the finite particle mass $m$. In a rigorous derivation with extended bodies \cite{Gralla:2009md}, one finds an analogous \textit{finite} mass renormalization, with the observable mass \textit{proven} to be a sum of material and field contributions, each of which is individually finite. This decomposition occurs even in the derivation of the Lorentz force law, irrespective of self-force corrections.} and also manifests in the slow motion limit, where the cross term integral evaluates precisely to the usual interaction energy $U=q_1 q_2/(|\bm{r}_1-\bm{r}_2|)$ [see Eq.~\eqref{EFxtilde} below]. That is, reproducing the usual conservation of total (kinetic plus potential) energy requires dropping the self-energy terms. Finally, in more practical terms, the correction we consider is proportional to $q_1 q_2$ [see Eqs.~\eqref{x1}--\eqref{z2}], and to solve the problem consistently at this order we require only the cross-term contributions from the electromagnetic fields. For all these reasons, the correct budget for the conserved quantities in our problem is \begin{align} E & = E_1 + E_2 + E_{F\times} \label{Etot} \\ \bm{p} & = \bm{p}_1 + \bm{p}_2 + \bm{p}_{F\times} \label{ptot} \\ \bm{L} & = \bm{L}_1 + \bm{L}_2 + \bm{L}_{F\times} \label{Ltot} \\ \bm{N} & = \bm{N}_1 + \bm{N}_2 + \bm{N}_{F\times},\label{Ntot} \end{align} where the particle contributions are given by Eqs.~\eqref{Ep}-\eqref{Np}, while the cross-term field contributions are given by \begin{align} E_{F\times} & = \frac{1}{8\pi} \int \mathcal{E}_\times d^3 x \label{EFx} \\ \bm{p}_{F\times} & = \frac{1}{4 \pi}\int \bm{S}_\times d^3x \label{pFx} \\ \bm{L}_{F\times} & = \frac{1}{4 \pi}\int \bm{x} \times \bm{S}_\times \ \! d^3 x \label{LFx}\\ \bm{N}_{F\times} & = \frac{1}{8\pi} \int \mathcal{E}_\times \bm{x} \ \! d^3 x - \bm{p}_{F\times} t. \label{NFx} \end{align} Here we have introduced the cross-term energy and momentum densities as \begin{align} \mathcal{E}_\times & = \frac{1}{4\pi} \left( \bm{E}_1 \cdot \bm{E}_2 + \bm{B}_1\cdot \bm{B}_2 \right) \label{ecross} \\ \bm{S}_\times & = \frac{1}{4\pi}\left( \bm{E}_1 \times \bm{B}_2 + \bm{E}_2 \times \bm{B}_1 \right). \label{Scross} \end{align} All terms in the conserved quantity budgets \eqref{Etot}-\eqref{Ntot} are now fully specified and mathematically well-defined. \subsection{Initial and final values: mechanical contribution} In order to understand the exchange of conserved quantities between particles and field, we will consider those quantities at early and late times in the scattering problem. 
Expanding the trajectories at $t \to \pm \infty$, we find \begin{align} x_I &= b_I + \Theta(t) \frac{2 q_1 q_2}{\mu b \gamma v^2}v_It + O(1/t^2) \\ z_I &= v_I t \mp v_I \frac{q_1 q_2}{\mu \gamma^3 v^3}\log\frac{2 v |t_I|}{b} + O(1/t^2) \end{align} Here and below, the upper sign corresponds to late times, while the lower sign corresponds to early times. Notice that the $z$ position of both particles is logarithmically divergent at early and late times. As discussed below Eq.~\eqref{z2p}, the divergence originates from the inverse-square nature of the electromagnetic force. The presence of the Heaviside function $\Theta(t)$ in the $x$ position shows the scattering of the particles by a small angle $\delta$: \begin{align} \delta = \lim_{t \to \infty} \left( \frac{x_I- b_I}{z_I}\right) = \frac{2 q_1 q_2}{\mu b \gamma v^2}. \end{align} This result is well known (e.g., \cite{Saketh:2021sri}). Calculating the conserved quantities from Eqs.~\eqref{Ep}-\eqref{Np}, we have \begin{align} E_1 & = \frac{m_1+\gamma m_2}{E_0}\left( m_1 - \frac{m_2}{E_0}\frac{q_1 q_2}{\gamma v |t|}\right) +O(t^{-2})\\ \bm{p}_1 & = \left(\mu \gamma v - \frac{q_1q_2}{\gamma^2 v^2}\frac{(m_1 + \gamma m_2)^2}{E_0^2 |t|}\right)\bm{\hat{z}} \nonumber \\ & \qquad + \Theta(t) \frac{2 q_1 q_2}{bv} \bm{\hat{x}} + O(t^{-2}) \\ \bm{L}_1 & = - \mu b \gamma v \frac{m_2(m_2+\gamma m_1)}{E_0^2} \bm{\hat{y}} + O(t^{-2}) \\ \bm{N}_1 & = \mp \frac{q_1 q_2}{ \gamma^2 v^2}\left(\log \frac{2 \gamma v E_0 |t|}{(m_1 + \gamma m_2)b}-1 \right) \bm{\hat{z}}+O(t^{-2}) \end{align} The values for particle 2 are given by exchanging $1 \leftrightarrow 2$ and sending $x \to -x$ and $z \to -z$ [see discussion below Eq.~\eqref{vdefs}]. The energy, momentum, and angular momentum have well-defined initial and final values (good limits as $t \to -\infty$ and $t \to +\infty$, respectively), but the mass moment inherits the logarithmic divergence of the position. If we instead consider the total mechanical contribution to the conserved quantities, we have \begin{align} E_1 + E_2 & = E_0 - \frac{q_1 q_2}{ v |t|} \frac{m_1^2+m_2^2+2m_1m_2/\gamma}{E_0^2} \label{E12} \\ \bm{p}_1 + \bm{p}_2 & = -\frac{q_1q_2}{|t|}\frac{m_2^2-m_1^2}{E_0^2}\bm{\hat{z}}+O(t^{-2}) \label{p12} \\ \bm{L}_1 + \bm{L}_2 & = - \mu b \gamma v \bm{\hat{y}} + O(t^{-2}) \label{L12} \\ \bm{N}_1 + \bm{N}_2 & = \mp \frac{q_1q_2}{\gamma^2 v^2}\log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} \bm{\hat{z}} + O(t^{-2}). \label{N12} \end{align} The total mechanical mass moment has well-defined initial and final values (limits as $t\to \pm\infty$), given by the lower sign and upper sign (respectively) in Eq.~\eqref{N12}. These values are different, meaning there is a permanent change in mechanical mass moment. That is, if $\Delta$ represents final minus initial, and ``mech'' refers to the total contribution from the particles, we have \begin{align} \Delta E_{\rm mech} = 0, \quad \Delta \bm{p}_{\rm mech} = 0, \quad \Delta \bm{L}_{\rm mech} = 0, \end{align} but \begin{align} \Delta \bm{N}_{\rm mech} = -\frac{2 q_1q_2}{\gamma^2 v^2}\log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} \bm{\hat{z}}. \label{DeltaNmech} \end{align} Since the total mass moment is conserved, there must be an opposing change in the electromagnetic field mass moment. We will now directly compute the field contributions to the conserved quantities in order to see the exchange explicitly. 
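The permanent change \eqref{DeltaNmech} is easy to verify numerically from the explicit trajectories. The following Python sketch (purely illustrative; as in the sketch above it assumes $E_0=\sqrt{m_1^2+m_2^2+2\gamma m_1 m_2}$) evaluates the $z$-component of $\bm{N}_1+\bm{N}_2$ from Eqs.~\eqref{x1}--\eqref{z2} and \eqref{Np} at large positive and negative times and compares the difference with Eq.~\eqref{DeltaNmech}.
\begin{verbatim}
import numpy as np

# Check of Eq. (DeltaNmech): permanent change of the mechanical mass moment.
m1, m2, q1q2, b, v = 1.0, 2.0, 1e-4, 1.0, 0.5      # test parameters, c = 1
gam = 1.0 / np.sqrt(1.0 - v**2)
E0 = np.sqrt(m1**2 + m2**2 + 2.0 * gam * m1 * m2)  # assumed total energy
pref = q1q2 * E0 / (m1 * m2 * gam**3 * v**3)

def N_mech_z(t):
    """z-component of N_1 + N_2 built from Eqs. (x1)-(z2) and (Np),
    using the exact Lorentz factors of the perturbed trajectories."""
    total = 0.0
    for (mA, mB, sgn) in [(m1, m2, +1.0), (m2, m1, -1.0)]:
        c = gam * E0 / (mA + gam * mB)             # t_I = c * t
        tI = c * t
        r = np.sqrt(b**2 + v**2 * tI**2)
        vI = sgn * gam * mB * v / (mA + gam * mB)
        zI = vI * (t - pref * np.arctanh(v * tI / r))
        xdot = sgn * q1q2 * c / (b * mA * gam * v) * (1.0 + v * tI / r)
        zdot = vI * (1.0 - pref * v * c / r)
        gamI = 1.0 / np.sqrt(1.0 - xdot**2 - zdot**2)
        total += gamI * mA * (zI - zdot * t)       # N_I = gam_I m_I r_I - p_I t
    return total

T = 1e4
delta_numeric = N_mech_z(T) - N_mech_z(-T)
delta_formula = -2.0 * q1q2 / (gam**2 * v**2) \
    * np.log((m2 + gam * m1) / (m1 + gam * m2))
print(delta_numeric, delta_formula)  # agree up to higher-order corrections
\end{verbatim}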
\subsection{Initial and final values: field contribution}\label{sec:field-early} We wish to compute the initial and final values of the field contributions to the conserved quantities. At early and late times, the particles are widely separated compared to their impact parameter, and one expects the problem to become effectively one-dimensional. This intuition is confirmed by dimensional analysis, as follows. At a given time $t$, the problem contains two length scales, $b$ and $v t$. The cross-term field contributions can depend only on the ratio after a relevant dimensionful combination has been factored out, \begin{align} E_{F\times} & = \frac{q_1 q_2}{v t} f_E\left(\frac{b}{vt}, m_1, m_2, v\right) \label{EFfunc}\\ \bm{p}_{F\times} & = \frac{q_1 q_2}{v t} f_p\left(\frac{b}{vt}, m_1, m_2, v\right) \label{pFfunc}\\ \bm{L}_{F\times} & = q_1 q_2 f_L\left(\frac{b}{vt}, m_1, m_2, v\right) \label{LFfunc} \\ \bm{N}_{F\times} & = q_1 q_2 f_N\left(\frac{b}{vt}, m_1, m_2, v\right). \label{NFfunc} \end{align} The functions $f$ are just placeholders indicating functional dependence. These equations may also be derived mathematically by making the change of variables $\bm{x}'=\bm{x}/(vt)$ in the cross-term integrals \eqref{EFx}-\eqref{NFx} and noting that $b_I/(v_I t)$ is independent of $I$, \begin{align} \frac{b_I}{v_I t} = \frac{b}{vt}\frac{(m_1+ \gamma m_2)(m_2+ \gamma m_1)}{E_0^2 \gamma}. \end{align} Eqs.~\eqref{EFfunc}-\eqref{NFfunc} show that the leading behavior at large $|t|$ may be computed using the limit $b \to 0$ at fixed $t$. This confirms the intuition that the problem is one-dimensional at early (and late) times and allows us to use the $b=0$ versions of the electric and magnetic fields to compute the cross-term field contributions. These integrals are evaluated in Appendix \ref{sec:field-integrals}. Noting our convention $v>0$, the results are\footnote{The analysis of the appendix did not establish the size of the error terms, and we have filled them in to match the mechanical values in Eqs.~\eqref{E12}--\eqref{N12}.} \begin{align} E_{F\times} & = \frac{q_1 q_2}{v |t|}\frac{m_1^2 + m_2^2 + 2 m_1 m_2/\gamma}{E_0^2} + O(t^{-2}) \label{EFxresult} \\ \bm{p}_{F\times} & = \frac{ q_1 q_2}{|t|}\frac{m_2^2-m_1^2}{E_0^2}\bm{\hat{z}} + O(t^{-2}) \label{pFxresult} \\ \bm{L}_{F\times} & = O(t^{-2}) \label{LFxresult} \\ \bm{N}_{F\times} & = \pm \frac{q_1 q_2}{\gamma^2v^2}\log\frac{m_2 +\gamma m_1}{m_1 + \gamma m_2}\bm{\hat{z}} + O(t^{-2}). \label{NFxresult} \end{align} Denoting these electromagnetic contributions with a subscript ``EM'', we see that the changes in electromagnetic conserved quantities (final minus initial) are \begin{align} \Delta E_{\rm EM} = 0, \quad \Delta \bm{p}_{\rm EM} = 0, \quad \Delta \bm{L}_{\rm EM} = 0, \end{align} and \begin{align} \Delta \bm{N}_{\rm EM} = \frac{2 q_1q_2}{\gamma^2 v^2}\log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} \bm{\hat{z}}. \label{DeltaNEM} \end{align} The total values of conserved quantities remain constant, but there is an exchange of mass moment between mechanical and electromagnetic degrees of freedom [Eqs.~\eqref{DeltaNmech} and \eqref{DeltaNEM}]. \subsection{Full time evolution} Eqs.~\eqref{E12}--\eqref{N12} and \eqref{EFxresult}--\eqref{NFxresult} show that the initial and final values of the conserved quantities are \begin{align} E = E_0, \quad \bm{p} = 0, \quad \bm{L} = - \mu b \gamma v \bm{\hat{y}}, \quad \bm{N} = 0. 
\label{allofthem} \end{align} Since there is no radiation in the problem (the fields fall off like the inverse square of the distance), it is clear that these values of the total (mechanical plus electromagnetic) conserved quantities must hold precisely at all times during the motion.\footnote{Mathematically, one can easily show that there is no radiative flux through past or future null infinity because the relevant flux integrals fall off at least like inverse distance cubed.} It was therefore sufficient to calculate the initial values of the conserved quantities in order to know their values for all time. Eq.~\eqref{allofthem} shows in particular that the conditions $\bm{p} = \bm{N} = 0$ defining the CEM frame [Eq.~\eqref{CEM-def}] indeed hold for our solution to the scattering problem. We were unable to explicitly evaluate the electromagnetic contributions at intermediate times, but from the result \eqref{allofthem} we know these are equal and opposite to the mechanical ones, which may be calculated from the trajectories \eqref{x1}--\eqref{z2}. Fig.~\ref{fig:plot} shows the exchange in mass moment during the scattering process. \begin{figure} \centering \includegraphics[scale=.65]{Nplot3.pdf} \caption{Exchange of mass moment between mechanical and field degrees of freedom in small-angle electromagnetic scattering. We take speed $v=1/2$ and mass ratio $m_2/m_1=2$.} \label{fig:plot} \end{figure} \subsection{Mechanical center of energy}\label{sec:CE} The mass moment is somewhat unfamiliar, and a reader might question why we have considered it at all. Given that the total momentum is zero, why not just consider the center of energy, which is an intuitive relativistic generalization of the center of mass? The simplest answer---already given in the introduction---is that the mass moment is additive, so it makes sense to talk about separate mechanical and electromagnetic contributions. However, one might still wonder about the behavior of the mechanical center of energy. We will find that this behavior is quite misleading! Let us define the mechanical center of energy as \begin{align} \bm{C} = \frac{\sum_I E_I \bm{r}_I}{\sum_I E_I},\label{Cdef} \end{align} where as usual $I=1,2$ labels the particles. Computing this quantity from our results [starting either with the trajectories \eqref{x1}--\eqref{z2} or the conserved quantities \eqref{E12}--\eqref{N12}], one finds \begin{align} \lim_{t \to \pm \infty} \bm{C} = \frac{\mp q_1q_2}{\gamma^2 v^2 E_0}\left( \log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2}+\frac{m_2^2 -m_1^2}{E_0^2} \right) \bm{\hat{z}}. \end{align} The change in mechanical center of energy is thus \begin{align} \Delta \bm{C} = -\frac{2 q_1q_2}{\gamma^2 v^2 E_0}\left( \log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2}+\frac{m_2^2 -m_1^2}{E_0^2} \right) \bm{\hat{z}}.\label{DeltaC} \end{align} This is a perfectly correct result given the definition \eqref{Cdef}, but it is hard to understand. Because the center of energy is not a conserved quantity, there is no way to discuss exchange between particles and field.
And the specific form of \eqref{DeltaC} is quite puzzling because $\Delta \bm{C}$ does not vanish in the non-relativistic limit $v \ll 1$ (the second term survives).\footnote{To take the non-relativistic limit we must ensure that our small parameter $\chi_{\rm EM}$ \eqref{chiEM} remains small, which can be effected by expressing Eq.~\eqref{DeltaC} in terms of $\chi_{\rm EM}$ before letting $v \to 0$.} It is well known that the mechanical center of mass is strictly conserved for non-relativistic two-body dynamics, where it is usually eliminated at the very start by passing to an effective single-particle description. How, then, can the mechanical center of energy fail to be conserved in the non-relativistic limit? The answer is that the mechanical center of energy \eqref{Cdef} does not actually reduce to the mechanical center of mass in the non-relativistic limit appropriate to the scattering problem. No matter how small the velocity, there will be a correction to the particles' net kinetic energy that balances the potential energy $q_1 q_2/|\bm{r}_1-\bm{r}_2|$. This correction falls off only like the inverse distance between the particles and hence contributes to the mechanical center of energy (a distance-weighted average) even in the limit of infinite particle separation. The problematic second term in \eqref{DeltaC} is precisely (twice) this contribution. We see that the mechanical center of energy does not have a very useful non-relativistic limit. By contrast, the mechanical mass moment properly reduces to the mechanical center of mass in the non-relativistic limit, in any frame with no net momentum. \section{Discussion} We conclude with some discussion of the character and implications of these results. Let us begin with the size and direction of the electromagnetic scoot \eqref{NEM2}. The dimensional scale of the effect is set by the charges and initial velocity in the combination $q_1 q_2/v^2$, which has units of mass moment (mass times distance). The size also depends on the dimensionless Lorentz factor $\gamma$ and mass ratio $m_2/m_1$. In particular, the masses influence the scoot only through their ratio, with the absolute mass scale playing no role in setting the size. The direction of the scoot depends on the mass ratio as well as on the \textit{signs} of the charges, i.e., whether the interaction is attractive or repulsive. If the interaction is attractive, the scoot is towards the final position of the heavier one, whereas if the interaction is repulsive, the scoot is towards the final position of the lighter one. This association holds for the gravitational case \eqref{gravscoot} at sufficiently low velocities, ($v < 1/\sqrt{3}$, to be precise) but breaks down above this threshold. The qualitative agreement between the gravitational and electromagnetic results at small velocity is expected given the well-known gravitomagnetic analogy. We have no intuition for the sign reversal in the gravitational scoot at $v=1/\sqrt{3}$, where the whole effect vanishes. This reversal does not occur in electromagnetism. In the introduction we commented that the gravitational and electromagnetic scoots have a universal character in that they are independent of impact parameter. Our study of the electromagnetic case sheds considerable light on this issue. The initial and final values of the mechanical mass moments are associated with logarithmic corrections to the particle position due to the long-range nature of the Coulomb force. 
These corrections will appear at early and late times irrespective of the details of the scattering encounter. Similarly, our calculation of the electromagnetic field contributions relies only on the condition $|vt| \gg b$, which will occur at sufficiently early or late times in any scattering encounter. In other words, our calculations show that \textit{whenever charged particles interact only by electromagnetic forces, there will always be non-zero mechanical and electromagnetic contributions to the mass moment, even in the limit of wide separation}. These contributions are directed tangent to the particle separation, and hence will change in any scattering encounter that changes the orientation of the particles. In particular, if $\bm{\hat{r}}_{12}^{\rm initial}$ is a unit vector pointing from particle $1$ to particle $2$ at early times and $\bm{\hat{r}}^{\rm final}_{12}$ is a unit vector pointing from particle $1$ to particle $2$ at late times, then there will be a change in CEM-frame mechanical mass moment given by \begin{align} \Delta \bm{N}_{\rm mech} = \frac{q_1q_2}{\gamma^2 v^2}\log\frac{m_2 + \gamma m_1}{m_1 + \gamma m_2} (\bm{\hat{r}}^{\rm final}_{12}-\bm{\hat{r}}_{12}^{\rm initial}).\label{DeltaNgen} \end{align} In small-angle scattering we have $\bm{\hat{r}}^{\rm final}_{12} \approx -\bm{\hat{r}}^{\rm initial}_{12}$, reproducing the result of our explicit calculation [Eq.~\eqref{NEM2} or \eqref{DeltaNmech}]. In general scattering (at higher order in perturbation theory, or without any approximation), there may be additional terms due to radiative losses, but the ``conservative'' contribution \eqref{DeltaNgen} will always be present. In this sense the scoot is an unavoidable, and even trivial, consequence of the displacement between mechanical and electromagnetic mass moment that persists at large separation. Why is this non-zero displacement present? Here we can offer only mathematical reasoning. The Coulomb force falls off as $1/d^2$ with increasing particle separation $d$. This means that energies and momenta will receive corrections from the interaction at order $q_1q_2/d$, and the terms $E_I \bm{r}_I$ and $\bm{p}_I t$ present in the mass moment \eqref{Nmechintro} will have finite limits at early and late times, where $d \sim \bm{r} \sim v t$. Similar comments apply to the electromagnetic cross-term energy and momentum. These various contributions to the total mass moment depend on a variety of parameters $(q_1,q_2,m_1,m_2,v)$, and it would be surprising if they were all individually zero at all parameter values. We may set one linear combination to zero by choice of frame (the center of energy frame), but there will still be non-zero contributions from different degrees of freedom of the system. These electromagnetic results provide context for the gravitational problem. They provide encouragement that the gravitational result \eqref{gravscoot} is not an artifact of some peculiar choice of gauge but rather a bona-fide physical effect worthy of further exploration. They also reveal that a proper accounting of the conserved quantities will undoubtedly require consideration of log corrections to particle position as well as gravitational field contributions. These effects are surely an integral part of the initial and final configurations for the gravitational scattering problem, and will be relevant for any rigorous formulation of scattering as a map from past timelike infinity to future timelike infinity.
While the study of spacelike and null infinity in general relativity is rather mature, comparably little is known about timelike infinity, especially when matter is present. We hope that our electromagnetic results will be helpful in establishing a rigorous framework for the general relativistic scattering of massive bodies. \section*{Acknowledgements} We are grateful to Drew Milsom for helpful discussions. This work was supported in part by NSF grant PHY--1752809 to the University of Arizona.
\section{Introduction and Problem Formulation}\vspace*{-0.05in} \noindent In this paper we consider a family of sweeping processes with controlled polyhedral moving sets defined on a Hilbert space $\mathcal{H}$. To describe this family, fix some $x_{0}\in \mathcal{H}$ and, for arbitrary control functions $\left(u,b\right) :\left[ 0,T\right] \rightarrow \mathcal{H}^{m}\times \mathbb{R}^{m}$ satisfying $x_{0}\in C_{\left( u,b\right) }(0)$, define the \emph{moving polyhedral set} \begin{equation} C_{\left( u,b\right) }(t):=\left\{ x\in \mathcal{H}|\left\langle u_{i}(t),x\right\rangle \leq b_{i}(t)\quad \left( \,i=1,\ldots ,m\right) \right\} \quad \left( t\in \left[ 0,T\right] \right).\label{movpoly} \end{equation} This induces the \emph{controlled sweeping process} $\left( \mathcal{S}_{\left( u,b\right) }\right)$ given by \begin{equation} -\dot{x}(t)\in N_{C_{\left( u,b\right) }(t)}\left( x(t)\right) \,\, \mathrm{a.e.\;}\;t\in \left[ 0,T\right] ,\,\, x(0)=x_{0}\in C_{\left( u,b\right) }(0), \label{sweeping} \end{equation} where $N_{C}(x)$ stands for the classical normal cone of convex analysis defined as \begin{equation} N_{C}(x):=\{v\in \mathcal{H}\;\big|\;\left\langle v,y-x\right\rangle \leq 0\;\;\forall y\in C\}\;\mbox{ if }\;x\in C\;\mbox{ and }\;N_{C}(x):=\emptyset \;\mbox{ else}.\label{nc} \end{equation} We emphasize that the differential inclusion in (\ref{sweeping}) comes along with the hidden pointwise \emph{state constraints} $x(t)\in C_{\left( u,b\right) }(t)$ for all $t\in \left[ 0,T\right]$, because otherwise the normal cone is empty by definition. \emph{Uncontrolled} sweeping processes were introduced and initially studied by Moreau \cite{JM1,JM2,JM3} and then were extensively developed in the literature, where the main attention was paid to the existence and uniqueness of solutions and various applications; see, e.g., \cite{Adly,Bro,bt,JV,kunzemonteiro,maka} with their references.\vspace*{0.02in} \emph{Existence and uniqueness} of \emph{class-preserving} solutions $x_{(u,b)}$ to the sweeping dynamics \eqref{sweeping} generated by \emph{control} functions $(u,b)$ in \eqref{movpoly} from various classes in Hilbert spaces is the \emph{first topic} of our paper. Note that the standard approach to this issue (see, e.g., \cite{kunzemonteiro}) consists of checking the Hausdorff Lipschitz continuity of the moving set \eqref{movpoly}. However, this does not make much sense when the moving set is an unbounded polyhedron. The $W^{1,2}$-preserving existence and uniqueness results for moving polyhedra were obtained by Tolstonogov \cite{tolsto,tolsto2,tolsto3} and more recently in \cite{mor1} under certain qualification conditions in Hilbert and finite-dimensional settings; see more discussions in Section~\ref{exsol}. Here we develop a novel approach involving the \emph{truncation} of polyhedra and deriving refined \emph{error bounds}.
This allows us to obtain new class-preserving results showing that Lipschitz continuous (resp.\ absolutely continuous) controls in \eqref{movpoly} uniquely generate Lipschitz continuous (resp.\ absolutely continuous) trajectories of \eqref{sweeping} under an explicit and easily formulated \emph{uniform Slater condition} for moving control polyhedra in separable Hilbert spaces.\vspace*{0.02in} The {\em second topic} of our study addresses \emph{quantitative stability} issues concerning the \emph{H\"olderian} dependence of solutions to \eqref{sweeping} on the corresponding perturbations of controls $(u,b)$ in moving sets as well as of the initial value $x_0$ in separable Hilbert spaces. To the best of our knowledge, such questions have never been posed for the sweeping processes formulated in \eqref{movpoly} and \eqref{sweeping}. Based on the aforementioned truncation techniques and error bounds, we establish efficient results in this direction in the $W^{1,1}$ control-trajectory framework.\vspace*{0.02in} The \emph{third topic} we investigate here concerns an \emph{optimal control} problem for the sweeping process in \eqref{movpoly} and \eqref{sweeping} under an additional pointwise equality constraint on the \emph{$u$-component of controls} and a \emph{geometric endpoint constraint} $x_{(u,b)}\in\Omega$ on trajectories. Optimal control theory for sweeping processes, addressing the main issue of deriving necessary optimality conditions, was started rather recently in \cite{chhm} and has since been extensively developed in subsequent publications (see, e.g., \cite{ao,bk,cm,mor1,cg,colmor,mor2,cmn,pfs,zeidan} and the references therein), which, however, did not concern systems with endpoint constraints. Problems of sweeping optimal control, which are governed by discontinuous differential inclusions with intrinsic pointwise and irregular state constraints, constitute one of the most challenging classes in modern control theory. We develop here the {\em method of discrete approximation}, which allows us to constructively approximate the constrained control sweeping process under consideration by discrete-time sweeping systems with perturbed endpoint constraints so that feasible and optimal solutions to the discrete approximations \emph{strongly converge} to the designated feasible and locally optimal solutions of the original problem under the {\em uniform Slater condition} introduced above. Employing then advanced tools of first-order and second-order variational analysis and generalized differentiation, we derive new \emph{necessary optimality conditions} for discrete approximations, which give us efficient \emph{suboptimality conditions} for a general class of local minimizers in the original problem of sweeping optimal control.\vspace{0.02in} The rest of the paper is organized as follows. Section~\ref{sec:trunc} presents major technical developments on the truncation and error bounds, which are of independent interest while being widely used in deriving the main results of the paper. Section~\ref{exsol} is devoted to establishing the class-preserving existence and uniqueness theorems for the controlled sweeping process. Section~\ref{quantstab} addresses stability issues for sweeping trajectories under control and initial value perturbations.
In Section~\ref{discapp} we formulate an optimal control problem for the sweeping process (${\cal S}_{(u,b)}$) with an endpoint constraint and construct its well-posed discrete approximations, establishing the $W^{1,2}$-strong convergence of feasible and optimal solutions. The final Section~\ref{sec:optim-disc} provides necessary optimality and suboptimality conditions for such control problems via advanced tools of generalized differentiation.\vspace*{-0.15in} \section{Error bounds and truncation of moving sets}\label{sec:trunc}\vspace*{-0.05in} \noindent This section plays a crucial role in describing and justifying our strategy to derive existence and stability results for sweeping processes with controlled polyhedra in both finite-dimensional and infinite-dimensional settings. The by now conventional theory of sweeping processes establishes the existence of Lipschitz continuous solutions of the sweeping dynamics via the Hausdorff Lipschitz continuity of moving sets; see, e.g., Theorem~2 in \cite{kunzemonteiro} and its proof. Unfortunately, this approach does not work for the case of unbounded moving polyhedra. For instance, in the case of moving \emph{halfspaces}, i.e., for $m=1$ in \eqref{movpoly}, the Hausdorff distance is either zero (if the two halfspaces coincide) or infinity otherwise. Hence the only ``moving'' halfspaces satisfying Hausdorff Lipschitz continuity are constant in time, which clearly does not offer any freedom for controlling the process. However, when \emph{truncating} the moving polyhedron with a ball, the Hausdorff Lipschitz continuity may well be achieved. This suggests the following \emph{strategy}, which will be implemented in the paper. \emph{First} we intend to show that Lipschitzian controls lead us to \emph{bounded} continuous solutions of the sweeping process and that the moving polyhedron \emph{truncated} with a ball sufficiently large to contain this solution is Hausdorff Lipschitz continuous, which hence verifies the actual Lipschitz continuity of the solution. The \emph{second step} of our approach is to establish an appropriate \emph{error bound} for the truncation of moving polyhedra. For the reader's convenience, we split this section into several subsections and present numerical examples providing the driving forces for our approach. \vspace*{-0.07in} \subsection{\bf Hausdorff Lipschitz continuity of truncated moving polyhedra} \noindent As discussed above, it is generally hopeless to ensure a Hausdorff Lipschitz estimate for moving polyhedra \eqref{movpoly} in the form \begin{equation} d_{H}\left( C_{\left( u,b\right) }(s),C_{\left( u,b\right) }(t)\right) \leq \widehat{L}\left\vert s-t\right\vert \quad \forall s,t\in \left[ 0,T\right]. \label{hausorig} \end{equation} Our efforts are now directed at establishing a \emph{truncated estimate} of the type \begin{equation} d_{H}\left( C_{\left( u,b\right) }^{r}(s),C_{\left( u,b\right) }^{r}(t)\right) \leq \widehat{L}\left\vert s-t\right\vert \quad \forall s,t\in \left[ 0,T\right], \label{haustrunc} \end{equation} where $r\geq 0$ is appropriately given, and where $C^{r}:=C\cap \mathbb{B} \left( 0,r\right) $. To accomplish this, we proceed in the following two steps.
Our \emph{first step} is to derive the \emph{weakened Hausdorff estimate} given by \begin{equation} d\left( x,C_{\left( u,b\right) }(t)\right) \leq L\left( \left\Vert x\right\Vert \right) \left\vert s-t\right\vert \quad \forall s,t\in \left[ 0,T\right] \,\,\forall x\in C_{\left( u,b\right) }(s) \label{weakhaus0} \end{equation} with some monotonically increasing function $L(\cdot)$. Estimate \eqref{weakhaus0} clearly yields \begin{equation} d\left( x,C_{\left( u,b\right) }(t)\right) \leq \widehat{L}\left\vert s-t\right\vert \quad \forall s,t\in \left[ 0,T\right] \,\,\forall x\in C_{\left( u,b\right) }^{r}(s) \label{weakhaus} \end{equation} with $\widehat{L}:=L\left( r\right)$. In the \emph{second step} we prove the general estimate \begin{equation} d\left( x,C_{\left( u,b\right) }^{r}(t)\right) \leq 3d\left( x,C_{\left( u,b\right) }(t)\right) \quad \forall t\in \left[ 0,T\right] \,\,\forall x\in \mathbb{B}\left( 0,r\right) \label{truncest} \end{equation} for all $r$ sufficiently large. Combining the latter with \eqref{weakhaus} will ensure the desired truncated estimate (\ref{haustrunc}). Details follow.\vspace*{-0.05in} \subsubsection{\bf Limitations of Hoffman's error bound} \noindent The first idea, which comes to our mind for proving (\ref {weakhaus0}), is the use of the classical {\em Hoffman's error bound}; see, e.g., \cite[Theorem 2.200]{bonnshap}. It guarantees in our setting that, for each $t\in \left[ 0,T\right] $, there exists some $\widetilde{L}\left( t\right):=L(t,u(t),b(t))$ ensuring the distance estimate \begin{equation} d\left( x,C_{\left( u,b\right) }(t)\right) \leq \widetilde{L}\left( t\right) \max_{i=1,\ldots ,m}\left[ \left\langle u_{i}(t),x\right\rangle -b_{i}(t)% \right] _{+}\quad \forall x\in \mathcal{H} \label{hoffman} \end{equation}% provided that $C_{\left( u,b\right) }(t)\neq \emptyset $. In particular, for $x\in C_{\left( u,b\right) }(s)$ it follows from $\left\langle u_{i}(s),x\right\rangle \leq b_{i}(s)$ for $i=1,\ldots,m$, that \begin{eqnarray} &&\left[ \left\langle u_{i}(t),x\right\rangle -b_{i}(t)\right] _{+} \label{pluest}\\ &=&\left[\left\langle u_{i}(t),x\right\rangle -\left\langle u_{i}(s),x\right\rangle +\left\langle u_{i}(s),x\right\rangle -b_{i}(s)+b_{i}(s)-b_{i}(t)\right] _{+} \notag \\ &\leq&\left[ \left\langle u_{i}(t),x\right\rangle -\left\langle u_{i}(s),x\right\rangle +b_{i}(s)-b_{i}(t)\right] _{+}\notag \\ &\leq&\left\Vert u_{i}(t)-u_{i}(s)\right\Vert \left\Vert x\right\Vert +\left\vert b_{i}(s)-b_{i}(t)\right\vert \quad \forall i=1,\ldots ,m.\notag \end{eqnarray} When $\left( u,b\right)$ is Lipschitz continuous, this combines with the previous estimate to give us (with $\left\Vert \cdot \right\Vert _{\infty }$ referring to the maximum norm) the inequalities \begin{eqnarray*} d\left( x,C_{\left( u,b\right) }(t)\right) &\leq &\widetilde{L}\left( t\right) \left( \left\Vert u(t)-u(s)\right\Vert _{\infty }\left\Vert x\right\Vert +\left\Vert b(s)-b(t)\right\Vert _{\infty }\right) \\ &\leq &\widetilde{L}\left( t\right) \left( \left\Vert x\right\Vert +1\right) K\left\vert s-t\right\vert \quad \forall x\in C_{\left( u,b\right) }(s), \end{eqnarray*} where $K$ is a Lipschitz constant of $\left( u,b\right)$. Therefore, if the function $\widetilde{L}\left( t\right) $ is bounded from above on $\left[ 0,T\right]$, say by $L^{\ast }$, then the desired estimate (\ref{weakhaus0}) would follow with the function $L\left( \tau \right) :=\left( \tau +1\right) L^{\ast }$, which is clearly monotonically increasing. 
Unfortunately, even for Lipschitzian controls $\left( u,b\right)$, the function $\widetilde{L}\left( t\right) $ may be \emph{unbounded from above} as can be seen from the following example.\vspace*{-0.05in} \begin{example} \label{counter} {\rm In (\ref{movpoly}) put $m:=2$, $\mathcal{H}:=\mathbb{R}^{2}$, $T:=1$ and define the smooth (hence Lipschitz continuous) control pair \begin{equation*} u_{1}\left( t\right) :=\left( 0,1\right) ;\,b_{1}\left( t\right) :=1;\,u_{2}\left( t\right) :=\left( t,-1\right) ;\,b_{2}\left( t\right) :=0. \end{equation*} For $t\in \left( 0,1\right] $, take $x\left( t\right) :=\left( t^{-3},1\right) $ and observe that \begin{equation*} d\left( x\left( t\right) ,C_{\left( u,b\right) }(t)\right) =t^{-3}-t^{-1}\;\mbox{ and }\;\max_{i=1,\ldots ,m}\left[ \left\langle u_{i}(t),x\left( t\right) \right\rangle -b_{i}(t)\right] _{+}=t^{-2}-1. \end{equation*} It thus follows from (\ref{hoffman}) that $\widetilde{L}\left( t\right) \geq t^{-1}$ for all $t\in \left( 0,1\right] $. Therefore, the function $\widetilde{L}\left( t\right) $ is unbounded on $\left[ 0,T\right] $}. \end{example}\vspace*{-0.2in} \begin{remark} \label{specialcases} {\rm There are certain special cases in which Hoffman's error bound leads us to a \emph{bounded} function $\widetilde{L}\left( t\right) $ in \eqref{weakhaus0} on the interval $\left[ 0,T\right]$, even for non-Lipschitzian controls $\left( u,b\right)$. We mention the following: \begin{enumerate} \item In the case of a \emph{moving halfspace} (i.e., $m=1$ and $u(t)\ne 0$ for all $t\in \left[ 0,T\right] $) with a continuous control $u:\left[ 0,T\right] \rightarrow \mathcal{H}$ and an arbitrary control $b:\left[ 0,T\right] \rightarrow \mathbb{R}$, we have that \begin{equation*} d\left( x,C_{\left( u,b\right) }(t)\right) =\left\Vert u(t)\right\Vert ^{-1} \left[ \left\langle u(t),x\right\rangle -b(t)\right] _{+}\leq L^{-1}\left[ \left\langle u(t),x\right\rangle -b(t)\right] _{+} \end{equation*} for all $ t\in \left[0,T\right]$ and all $x\in \mathcal{H}$, where $L:=\inf\limits_{t\in \left[ 0,T\right] }\left\Vert u(t)\right\Vert >0$. \item In the case where variable control functions are situated only on the \emph{right-hand side} of \eqref{movpoly}, i.e., when $u\left( t\right)\equiv u\ne 0$ while $b:\left[ 0,T\right] \rightarrow \mathbb{R}$ is arbitrary, it follows from \cite[Proposition~4.6]{jourzag} that \begin{equation*} d\left( x,C_{\left( u,b\right) }(t)\right) \leq L\max_{i=1,\ldots ,m}\left[ \left\langle u_{i}(t),x\right\rangle -b_{i}(t)\right] _{+}\quad \forall t\in \left[ 0,T\right] \,\,\forall x\in \mathcal{H} \end{equation*} whenever $C_{\left( u,b\right) }(t)\ne\emptyset$ for all $t\in \left[ 0,T\right] $. \end{enumerate}} \end{remark}\vspace*{-0.05in} \noindent Example~\ref{counter} illustrates the drastic impact of fully controlled polyhedral moving sets on Hoffman's error bound starting from dimension two, even for smooth controls. Fortunately, it turns out that---despite the fact that the approach using Hoffman's error bound sketched above is not viable for our purposes---we may find an \emph{alternative path} based on (\ref{weakhaus0}) in order to reach the desired goal.
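For the interested reader, the blow-up of the best constant in Hoffman's error bound in Example~\ref{counter} is also easy to observe numerically. The following Python sketch is a rough illustration only; it computes the metric projection with SciPy's SLSQP routine (any quadratic programming solver would do) and compares the ratio of the distance $d(x(t),C_{(u,b)}(t))$ to the residual on the right-hand side of \eqref{hoffman} with $t^{-1}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def dist_and_residual(t):
    """Distance from x(t) = (t^-3, 1) to C_(u,b)(t) and the Hoffman residual."""
    x = np.array([t**-3, 1.0])
    cons = [{'type': 'ineq', 'fun': lambda y: 1.0 - y[1]},           # <u1,y> <= b1
            {'type': 'ineq', 'fun': lambda y, t=t: y[1] - t * y[0]}] # <u2,y> <= b2
    proj = minimize(lambda y: np.sum((y - x)**2), x0=np.zeros(2),
                    constraints=cons, method='SLSQP')
    dist = np.linalg.norm(proj.x - x)
    residual = max(0.0, x[1] - 1.0, t * x[0] - x[1])
    return dist, residual

for t in [0.5, 0.2, 0.1, 0.05]:
    d, res = dist_and_residual(t)
    print(t, d / res, 1.0 / t)   # the ratio grows like 1/t, so L~(t) >= 1/t
\end{verbatim}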
To support this idea, let us revisit Example~\ref{counter} and observe that the sweeping process generated by the Lipschitzian control in this example does admit a unique Lipschitzian solution for an arbitrary initial point $x_{0}\in C_{\left( u,b\right)}(0)$.\vspace*{-0.05in} \begin{example} \label{countercalc} {\rm Consider the control pair $\left( u,b\right)$ defined in Example~\ref{counter} and fix an arbitrary initial point $x_{0}\in C_{\left( u,b\right)}(0)$. We subdivide the initial polyhedron as $C_{\left( u,b\right) }(0)=\Omega _{1}\cup \Omega _{2}$ with the sets \begin{equation*} \Omega _{1}:=\left\{ x\in C_{\left( u,b\right) }(0)\;\big|\;x_{2}<x_{1}\right\}\;\mbox{ and }\; \Omega _{2}:=\left\{ x\in C_{\left( u,b\right) }(0)\big|\;x_{2}\geq x_{1}\right\}. \end{equation*} If $x_{0}\in \Omega _{2}$, then for an arbitrary time $t\in \left( 0,1\right) $ the boundaries of the two controlled halfspaces have no contact with $x_{0}$. Consequently, $\dot{x}(t)=0$ for all $t\in \left( 0,1\right) $, and hence $ x\left( t\right) =x_{0}$ for all $t\in \left[ 0,1\right] $. In contrast, for $x_{0}\in \Omega _{1}$ we get \begin{align*} x\left( t\right) &=\left\{ \begin{array}{cc} x_{0} & t\in \left[ 0,t_{1}\right] \\ y\left( t\right) & t\in \left( t_{1},t_{2}\right) \\ \left( 1/t,1\right) & t\in \left[ t_{2},1\right] \end{array} \right., \,\, t_{1}=\frac{x_{0,2}}{x_{0,1}},\,\, t_{2}=\left\{ \begin{tabular}{ll} $\frac{1}{\sqrt{\left\Vert x_{0}\right\Vert ^{2}-1}}$ & if $\left\Vert x_{0}\right\Vert \geq \sqrt{2}$ \\ $\infty $ & else \end{tabular} \right.,\\ &\quad y_{1}\left( t\right)=\frac{\left\Vert x_{0}\right\Vert }{% \sqrt{1+t^{2}}},\;\mbox{ and }\;y_{2}\left( t\right)=\frac{\left\Vert x_{0}\right\Vert }{\sqrt{1+t^{2}}}t. \end{align*} Here $t_{1}$ denotes the time when the second halfspace (the moving one) becomes binding for $x_{0}$ for the first time, i.e., when $tx_{0,1}=x_{0,2}$. This gives us the indicated formula for $t_{1}$. For $t<t_{1}$ both halfspaces are nonbinding for $x_{0}$; so $\dot{x}(t)=0$, and hence $x\left( t\right) =x_{0}$ for all $t\in \left[ 0,t_{1}\right]$. For $t\geq t_{1}$ the second halfspace is binding. The first halfspace also becomes binding at a certain time $t_{2}>t_{1}$; so we have $x_{2}\left( t\right) =1$ for all $t\in \lbrack t_{2},1]$. Since the second halfspace keeps binding, it follows that $tx_{1}\left( t\right) =x_{2}\left( t\right) =1$ from where we conclude that $ x_{1}\left( t\right) =1/t$ during this period of time. It remains to determine the trajectory $x\left( t\right) $ for $t\in \left( t_{1},t_{2}\right)$, as well as the switching time $t_{2}$. Since in this interval only the second halfspace is binding, we derive the following relations from the sweeping dynamics: \begin{equation*} -\dot{x}(t)\in N_{C_{\left( u,b\right) }(t)}\left( x(t)\right) =\mathbb{R}% _{+}\left( t,-1\right) \quad \forall t\in \left( t_{1},t_{2}\right) . \end{equation*} Consequently, there exists a function $\lambda \left( t\right) \leq 0$ such that \begin{equation*} \dot{x}_{1}(t)=t\lambda \left( t\right) ;\quad \dot{x}_{2}(t)=-\lambda \left( t\right) \quad \forall t\in \left( t_{1},t_{2}\right). \end{equation*} On the other hand, with the second halfspace being binding, we also have that $tx_{1}\left( t\right) =x_{2}\left(t\right)$ for all $t\in\lbrack t_{1},t_{2})$. 
This tells us therefore that \begin{equation*} \dot{x}_{1}(t)=-t\dot{x}_{2}(t)=-\frac{x_{2}\left( t\right) }{x_{1}\left( t\right) }\dot{x}_{2}(t)\Longleftrightarrow \dot{x}_{1}(t)x_{1}\left( t\right) +\dot{x}_{2}(t)x_{2}\left( t\right) =0\quad \forall t\in \left( t_{1},t_{2}\right) . \end{equation*} The solution to the latter differential equation is given by $ x_{1}^{2}\left( t\right) +x_{2}^{2}\left( t\right) =C$, where the constant $ C $ can be identified from the fact that $x\left( t_{1}\right) =x_{0}$, which yields $C=\left\Vert x_{0}\right\Vert ^{2}$. Along with the equality $tx_{1}\left(t\right) =x_{2}\left( t\right) $, we identify the function $y(t)$ indicated in the formula above. Finally, the switching time $t_{2}$ is determined from the relation $y_{2}\left( t_{2}\right) =1$. Observe that for $\left\Vert x_{0}\right\Vert < \sqrt{2}$ the first halfspace is never binding in the given time interval $ \left[ 0,1\right]$. It is easy to check that the determined solution $x(t)$ is Lipschitz continuous on the entire interval $[0,1]$, and as such it has to be unique due to \cite[Theorem~3]{kunzemonteiro}}. \end{example}\vspace*{-0.2in} \subsubsection{\bf Uniform Slater condition and weakened Hausdorff estimate} \noindent As shown in our subsequent analysis, the reason why the announced result---that Lipschitzian controls yield Lipschitzian solutions of the sweeping process---can be maintained in Example~\ref{counter} despite the fact that an argument via Hoffman's error bound does not apply, consists in the fulfillment of an appropriate \emph{constraint qualification}. Now we introduce this qualification condition, which plays a crucial role not only in establishing existence and stability results presented in what follows, but also in the last two sections of the paper dealing with the verification of the strong convergence of discrete approximations and the derivation of necessary optimality conditions for sweeping optimal control.\vspace*{0.03in} Here is this easily formulated and natural qualification condition.\vspace*{-0.05in} \begin{definition} We say that the moving polyhedron in \eqref{movpoly} generated by the given control pair $(u,b)$ satisfies the {\sc uniform Slater condition} if \begin{equation} \forall t\in \left[ 0,T\right] \,\,\exists x\in \mathcal{H}\;\mbox{ such that}\;\left\langle u_{i}\left( t\right) ,x\right\rangle <b_{i}\left( t\right) \quad \forall i=1,\ldots ,m. \label{unifslater} \end{equation} \end{definition}\vspace*{-0.03in} We emphasize that, unlike the boundedness of $\widetilde L(t)$ in Hoffman's error bound estimate \eqref{hoffman}, this constraint qualification is \emph{essential} for our desired result. Indeed, a simple two-dimensional example taken from \cite[Example~2.3]{mor2} shows that, even for smooth control functions, the sweeping process (\ref{sweeping}) may not admit a solution when (\ref{unifslater}) is violated.
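Before proceeding, we note in passing that the closed-form trajectory constructed in Example~\ref{countercalc} above is easily reproduced by Moreau's catching-up scheme $x_{k+1}:=\Pi_{C_{(u,b)}(t_{k+1})}(x_{k})$ on a uniform partition of $[0,1]$, where $\Pi$ stands for the metric projection. The following Python sketch is a rough numerical illustration only and is not used in any of the subsequent arguments; it implements the scheme with an exact two-dimensional projection and compares the iterates with the explicit solution for the initial point $x_0=(2,0.5)\in\Omega_1$.
\begin{verbatim}
import numpy as np

def project(t, x):
    """Exact projection onto C(t) = {y : y_2 <= 1, t*y_1 - y_2 <= 0} in R^2."""
    if x[1] <= 1.0 and t * x[0] - x[1] <= 0.0:
        return x
    cands = []
    yA = np.array([x[0], 1.0])                    # projection onto {y_2 = 1}
    if t * yA[0] - yA[1] <= 0.0:
        cands.append(yA)
    yB = x - (t * x[0] - x[1]) / (t**2 + 1.0) * np.array([t, -1.0])
    if yB[1] <= 1.0:                              # projection onto {t*y_1 = y_2}
        cands.append(yB)
    cands.append(np.array([1.0 / t, 1.0]))        # vertex of the polyhedron
    return min(cands, key=lambda y: np.linalg.norm(y - x))

def exact(t, x0):
    """Closed-form solution of Example (countercalc), x0 in Omega_1, |x0|>=sqrt(2)."""
    t1, t2 = x0[1] / x0[0], 1.0 / np.sqrt(np.dot(x0, x0) - 1.0)
    if t <= t1:
        return np.array(x0, dtype=float)
    if t < t2:
        return np.linalg.norm(x0) / np.sqrt(1.0 + t**2) * np.array([1.0, t])
    return np.array([1.0 / t, 1.0])

x0, N = np.array([2.0, 0.5]), 2000
x, err = x0.copy(), 0.0
for k in range(1, N + 1):
    t = k / N
    x = project(t, x)                             # catching-up step
    err = max(err, np.linalg.norm(x - exact(t, x0)))
print(err)   # small, and decreases as the partition is refined
\end{verbatim}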
On the other hand, we see below that (\ref{unifslater}) yields the weakened Hausdorff estimate (\ref{weakhaus0}), which is the first step mentioned in the introduction to this section.\vspace*{0.05in} Before deriving (\ref{weakhaus0}) via \eqref{unifslater}, we show that the following seemingly stronger version of (\ref{unifslater}) has been used in the earlier work on the existence of solutions to sweeping processes defined by moving polyhedra \cite[Assumption (H4)]{mor1}: \begin{equation} \exists \varepsilon >0\,\,\forall t\in \left[ 0,T\right] \,\,\exists x\in \mathcal{H}\;\mbox{with}\;\left\langle u_{i}\left( t\right) ,x\right\rangle \leq b_{i}\left( t\right) -\varepsilon \quad \forall i=1,\ldots ,m \label{slater2} \end{equation} It turns out, however, that this ``strong uniform Slater condition" is \emph{equivalent} to the uniform Slater condition formulated in \eqref{unifslater}.\vspace*{-0.05in} \begin{proposition} \label{slaterequiv} Assume that the control $\left( u,b\right)$ in \eqref {movpoly} is continuous. Then conditions \eqref{unifslater} and \eqref {slater2} are equivalent. \end{proposition}\vspace*{-0.15in} \begin{proof} Since (\ref{slater2}) obviously yields (\ref{unifslater}), it remains to verify the opposite implication. Assume that (\ref{slater2}) fails, which tells us that \begin{equation*} \forall n\in \mathbb{N}\,\,\exists t_{n}\in \left[ 0,T\right] \,\,\forall x\in \mathcal{H}\,\,\exists i\in \left\{ 1,\ldots ,m\right\}\;\mbox{with}\;\left\langle u_{i}\left( t_{n}\right) ,x\right\rangle >b_{i}\left( t_{n}\right) -\frac{1}{ n}. \end{equation*} For some subsequence $t_{n_{k}}\in \left[ 0,T\right]$, there exists $\bar{t}% \in \left[ 0,T\right] $ such that $t_{n_{k}}\rightarrow _{k}\bar{t}$. Fix an arbitrary vector $x\in \mathcal{H}$ and then get \begin{equation*} \forall k\in \mathbb{N}\,\,\exists i_{k}\in \left\{ 1,\ldots ,m\right\}\;\mbox{with}\;\left\langle u_{i_{k}}\left( t_{n_{k}}\right) ,x\right\rangle >b_{i_{k}}\left( t_{n_{k}}\right) -\frac{1}{n_{k}}. \end{equation*} Selecting another subsequence, find $i^{\ast }\in \left\{1,\ldots ,m\right\}$ such that $i_{k_{l}}\equiv i^{\ast }$. Therefore, we have the inequalities \begin{equation*} \left\langle u_{i^{\ast }}\left( t_{n_{k_{l}}}\right) ,x\right\rangle >b_{i^{\ast }}\left( t_{n_{k_{l}}}\right) -\frac{1}{n_{k_{l}}}\;\mbox{ for all }\;l\in \mathbb{N}. \end{equation*} Passing there to the limit as $l\rightarrow \infty $ gives us $\left\langle u_{i^{\ast }}\left( \bar{t}\right) ,x\right\rangle \geq b_{i^{\ast }}\left( \bar{t}\right) $. Since $x\in \mathcal{H}$ was chosen arbitrarily, we arrive at \begin{equation*} \exists \bar{t}\in \left[ 0,T\right] \,\,\forall x\in \mathcal{H}\,\,\exists i^{\ast }\in \left\{ 1,\ldots ,m\right\}\;\mbox{with}\;\left\langle u_{i^{\ast }}\left( \bar{t}\right) ,x\right\rangle \geq b_{i^{\ast }}\left( \bar{t}\right), \end{equation*} which contradicts (\ref{unifslater}) and thus completes the proof of the proposition.\qed \end{proof} \noindent Now we turn to the announced proof of the weakened Hausdorff estimate (\ref{weakhaus0}). Given $\delta>0$, define the \emph{$\delta -$moving polyhedron} by \begin{equation} C_{(u,b)}^{\left( \delta \right) }(t):=\big\{ x\in \mathcal{H}\;\big|\;\left\langle u_{i}(t),x\right\rangle \leq b_{i}(t)-\delta \,\, \left(i=1,\ldots ,m\right)\big\} \,\,\left( t\in \left[ 0,T\right] \right) . 
\label{delmov} \end{equation} To proceed, we first present the following crucial technical lemma involving continuous controls $(u,b)\in\mathcal{C}([0,T],\mathcal{H}^{m})\times \mathcal{C}([0,T],\mathbb{ R}^{m})$ in the moving polyhedron \eqref{movpoly} endowed with the maximum norm \begin{equation*} \left\Vert (u,b)\right\Vert _{\infty }:=\max_{t\in \left[ 0,T\right] ,i=1,\ldots ,m}\left\Vert u_{i}(t)\right\Vert +\max_{t\in \left[ 0,T\right] ,i=1,\ldots ,m}\left\vert b_{i}(t)\right\vert. \end{equation*} The associated closed ball in this space centered at $(u,b)$ with radius $r>0$ is denoted by $\mathbb{B}_{\infty }\left( (u,b),r\right)$.\vspace*{-0.05in} \begin{lemma} \label{strongselection} Fix continuous control $(\bar{u}, \bar{b})\in \mathcal{C}([0,T],\mathcal{H}^{m})\times \mathcal{C}([0,T], \mathbb{R}^{m})$ satisfying the uniform Slater condition \eqref{unifslater}. Then there exists $\varepsilon>0$ such that whenever $\gamma\in \left( 0,\varepsilon\right)$ we can find a continuous function $\widehat{x}\in \mathcal{C}([0,T],\mathcal{H})$ for which \begin{equation} \widehat{x}(t)\in C_{(u,b)}^{\left( \gamma \right) }(t)\,\, \forall t\in % \left[ 0,T\right] \,\,\forall (u,b)\in \mathcal{B}:=\mathbb{B}_{\infty }\left( \left( \bar{u},\bar{b}\right),\frac{\varepsilon -\gamma}{3\left( 1+\left\Vert \widehat{x}\right\Vert _{\infty }\right) }\right). \label{select1} \end{equation} Furthermore, we have the estimate \begin{equation} d(x,C_{(u,b)}(t))\leq {\frac{{f_{(u,b)}(t,x)}}{{f_{(u,b)}(t,x)-f_{(u,b)}(t, \widehat{x}(t))}}}\Vert x-\widehat{x}(t))\Vert\,\,\forall t\in \left[ 0,T\right] \label{select2} \end{equation} for all $t\in \left[ 0,T\right]$, all $x\in \mathcal{H}\backslash C_{(u,b)}(t)$, and all $(u,b)\in \mathcal{B}$, where $f_{(u,b)}(t,x):=\max_{i=1,\cdots ,m}\langle u_{i}(t),x\rangle -b_{i}(t)$. Finally, \begin{align} &d(x,C_{(u^{\prime },b^{\prime })}(t))\leq\notag \\ &\Vert x-\widehat{x}(t)\Vert \min \left\{ 1,\gamma ^{-1}\max_{i=1,\cdots ,m}\left[ \langle u_{i}^{\prime }(t)-u_{i}(s),x\rangle +b_{i}(s)-b_{i}^{\prime }(t)\right] _{+}\right\} \label{select3} \end{align} for all $(u,b),(u^{\prime },b^{\prime })\in \mathcal{B}$, all $s,t\in \left[ 0,T\right], \,$\ and all $x\in C_{(u,b)}(s)$. \end{lemma}\vspace*{-0.15in} \begin{proof} As shown in Proposition~\ref{slaterequiv}, the imposed uniform Slater condition (\ref{unifslater}) is equivalent to (\ref{slater2}) for $(u,b):=(\bar{u},\bar{b})$. Using the latter and choosing $\varepsilon >0$ therein, pick an arbitrary number $\gamma \in \left( 0,\varepsilon \right)$ and define \begin{equation*} \delta :=\frac{2\varepsilon +\gamma }{3}\in \left( 0,\varepsilon \right) . \end{equation*} Then condition (\ref{slater2}) tells us that \begin{equation*} \forall t\in \left[ 0,T\right] \,\,\exists x\in \mathcal{H}\;\mbox{ with }\;\left\langle \bar{u}_{i}\left( t\right) ,x\right\rangle \leq \bar{b}_{i}\left( t\right) -\varepsilon <\bar{b}_{i}\left( t\right) -\delta \quad \forall i=1,\ldots,m. \end{equation*} In other words, for each $t\in \left[ 0,T\right]$ the convex set $C_{\left( \bar{u},\bar{b}\right) }^{\left( \delta \right) }(t)$ admits a Slater point. 
This ensures the inclusion \begin{equation*} C_{\left( \bar{u},\bar{b}\right) }^{\left( \delta \right) }(t)\subseteq \mathrm{cl}\,\left\{ x\in \mathcal{H}\;\big|\;\left\langle \bar{u}_{i}\left( t\right) ,x\right\rangle <\bar{b}_{i}\left( t\right) -\delta \right\} \quad \forall t\in \left[ 0,T\right] \end{equation*} which in turn allows us to conclude (by invoking, e.g., \cite[Theorem~3.1.5]{fivemen}) that $C_{(\bar{u},\bar{b})}^{(\delta)}:[0,T]\rightrightarrows \mathcal{H}$ is a lower semicontinuous multifunction. Since the images $C_{(\bar{u},\bar b)}^{(\delta)}(t)$ are closed and convex for all $t\in \left[0,T\right]$, the classical Michael selection theorem ensures the existence of a continuous function $\widehat{x}\in \mathcal{C}([0,T],\mathcal{H})$ with \begin{equation*} \widehat{x}(t)\in C_{\left( \bar{u},\bar{b}\right) }^{\left( \delta \right) }\left( t\right) \quad \forall t\in \left[ 0,T\right]. \end{equation*} Next we fix an arbitrary continuous control $(u,b)\in \mathcal{B}$ and get by the definition of $\delta $ the following inequalities: \begin{eqnarray*} \left\langle u_{i}\left( t\right) ,\widehat{x}(t)\right\rangle -b_{i}\left( t\right) &\leq &\left\langle \bar{u}_{i}\left( t\right) ,\widehat{x}(t)\right\rangle +\left\Vert u_{i}\left( t\right) -\bar{u}_{i}\left( t\right) \right\Vert\cdot\left\Vert \widehat{x}(t)\right\Vert -b_{i}\left( t\right) \\ &\leq &\bar{b}_{i}\left( t\right) -\delta +\left\Vert u_{i}\left( t\right) -\bar{u}_{i}\left( t\right) \right\Vert\cdot\left\Vert \widehat{x}(t)\right\Vert -b_{i}\left( t\right) \\ &\leq &\frac{2}{3}\left( \varepsilon -\gamma \right) -\delta \leq -\gamma \quad \forall t\in \left[ 0,T\right] \,\,\forall i=1,\ldots ,m. \end{eqnarray*} Thus $\widehat{x}\in \mathcal{C}([0,T],\mathcal{H})$ and $\widehat{x}(t)\in C_{\left( u,b\right) }^{\left( \gamma \right) }\left( t\right) $ for all $t\in \left[ 0,T\right]$, which verifies (\ref{select1}). Addressing the second assertion of the lemma, fix arbitrary elements $t\in \left[ 0,T\right] $, $(u,b)\in \mathcal{B}$, and $x\in \mathcal{H}\backslash C_{(u,b)}(t)$. Remembering the construction of ${f_{(u,b)}}$, we have that ${f_{(u,b)}(t,x)>0}$ by $x\in \mathcal{H}\backslash C_{(u,b)}(t)$ and ${f_{(u,b)}(t,\widehat{x}(t))\leq -\gamma <0}$ by the already proved relation (\ref{select1}). Define \begin{equation*} \lambda :={\frac{{f_{(u,b)}(t,x)}}{{f_{(u,b)}(t,x)-f_{(u,b)}(t,\widehat{x}(t))}}\in }\left( 0,1\right). \end{equation*} It follows from the convexity of ${f_{(u,b)}}(t,\cdot )$ that \begin{equation*} f_{(u,b)}(t,(1-\lambda )x+\lambda \widehat{x}(t))\leq (1-\lambda )f_{(u,b)}(t,x)+\lambda f_{(u,b)}(t,\widehat{x}(t))=0, \end{equation*} and so $(1-\lambda )x+\lambda \widehat{x}(t)\in C_{(u,b)}(t)$. This verifies (\ref{select2}), which can be written as \begin{equation*} d(x,C_{(u,b)}(t))\leq \Vert x-((1-\lambda )x+\lambda \widehat{x}(t))\Vert ={\lambda }\Vert x-\widehat{x}(t)\Vert. \end{equation*} It remains to justify the final assertion of the lemma. To proceed, fix arbitrary elements $s,t\in \lbrack 0,T]$, $(u,b),(u^{\prime },b^{\prime })\in \mathcal{B}$, and $x\in C_{(u,b)}(s)$. If $x\in C_{(u^{\prime },b^{\prime })}(t)$, then (\ref{select3}) holds trivially. Supposing now that $x\notin C_{(u^{\prime },b^{\prime })}(t)$ gives us ${f_{(u^{\prime },b^{\prime })}(t,x)>0}$ and ${f_{(u^{\prime },b^{\prime })}(t,\widehat{x}(t))\le -\gamma }$ by (\ref{select1}).
Therefore, (\ref{select2}) yields \begin{align*} &d(x,C_{(u^{\prime },b^{\prime })}(t))\\&\leq{\frac{{f_{(u^{\prime },b^{\prime })}(t,x)}}{{f_{(u^{\prime },b^{\prime })}(t,x)-f_{(u^{\prime },b^{\prime })}(t,\widehat{x}(t))}}}\Vert x-\widehat{x}(t)\Vert \leq {\gamma }^{-1}{f_{(u^{\prime },b^{\prime })}(t,x)}\Vert x-\widehat{x}(t)\Vert \\ & \leq {\gamma }^{-1}\left( {f_{(u^{\prime },b^{\prime })}(t,x)-f_{(u,b)}(s,x)}\right) \Vert x-\widehat{x}(t)\Vert \quad \left( \mbox{because of }x\in C_{(u,b)}(s)\right) \\ & \leq {\gamma }^{-1}\Vert x-\widehat{x}(t)\Vert \max_{i=1,\cdots ,m}\left[ \langle u_{i}^{\prime }(t)-u_{i}(s),x\rangle +b_{i}(s)-b_{i}^{\prime }(t) \right] _{+}. \end{align*} Since $\widehat{x}(t)\in C_{(u^{\prime },b^{\prime })}^{\left( \gamma \right) }(t)\subseteq C_{(u^{\prime },b^{\prime })}(t)$ by (\ref{select1}), we also have that $d(x,C_{(u^{\prime },b^{\prime })}(t))\leq \Vert x-\widehat{x}(t)\Vert $. Combining the above verifies (\ref{select3}) and completes the proof. \qed \end{proof} We are now in a position to derive the weakened Hausdorff estimate (\ref{weakhaus0}).\vspace*{-0.05in} \begin{theorem}\label{slaterest} Let $\left( u,b\right) $ be a Lipschitz continuous control along which the moving polyhedron \eqref{movpoly} satisfies the uniform Slater condition \eqref{unifslater}. Then there exist constants $K_{1},K_{2}\geq 0$ such that the weakened Hausdorff estimate \eqref{weakhaus0} holds with the monotonically increasing function $L:\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ defined by \begin{equation} L\left( r\right):=K_{1}\left( r+1\right) \left( r+K_{2}\right) \quad \left( r\geq 0\right). \label{lrquad} \end{equation} \end{theorem}\vspace*{-0.15in} \begin{proof} We again employ the uniform Slater condition \eqref{unifslater} in the equivalent form \eqref{slater2} by Proposition~\ref{slaterequiv}. Then we get from \eqref{select3} in Lemma~\ref{strongselection}, applied with $(u^{\prime },b^{\prime })=(u,b)$ and $\gamma :=\varepsilon /2$, that \begin{equation*} d(x,C_{(u,b)}(t))\leq \frac{2}{\varepsilon }\Vert x-\widehat{x}(t)\Vert \max_{i=1,\cdots ,m}\left[\langle u_{i}(t)-u_{i}(s),x\rangle +b_{i}(s)-b_{i}(t)\right] _{+} \end{equation*} along a continuous function $\widehat{x}(\cdot)$ for all $s,t\in\left[ 0,T\right]$ and all $x\in C_{(u,b)}(s)$. Define $\varkappa :=\max\limits_{t\in \left[ 0,T\right] }\left\Vert \widehat{x}\left( t\right) \right\Vert \geq 0$ and denote by $K\geq 0$ a Lipschitz constant of the control pair $\left( u,b\right) $. Then we have the estimate \begin{equation*} d(x,C_{(u,b)}(t))\leq \frac{2K}{\varepsilon }\Vert x-\widehat{x}(t)\Vert \left( \left\Vert x\right\Vert +1\right) \left\vert s-t\right\vert \leq \frac{2K}{\varepsilon }\left( \Vert x\Vert +\varkappa \right) \left( \left\Vert x\right\Vert +1\right) \left\vert s-t\right\vert \end{equation*} for all $s,t\in \left[ 0,T\right]$ and all $x\in C_{(u,b)}(s)$. This is exactly (\ref{weakhaus0}) with the monotonically increasing function $L\left( r\right) :=2\varepsilon ^{-1}K\left( r+\varkappa \right) \left( r+1\right) $. \qed \end{proof}\vspace*{-0.2in} \begin{remark} \noindent {\rm The moving polyhedron $C_{\left( u,b\right) }$ defined in Example~\ref{counter} does satisfy the uniform Slater condition. To see this, select the constant solution $x\left( t\right) \equiv \left(0,0.5\right)$ in (\ref{unifslater}). Thus the estimate (\ref{weakhaus0}) can be verified in this example via Theorem~\ref{slaterest}, while the usage of Hoffman's error bound does not lead us to the desired result.
The reason is that Hoffman's error bound---if applicable as in the special cases mentioned in Remark~\ref{specialcases}---would necessarily bring us to an \emph{affine function} $L$ in \eqref{weakhaus0}; see the discussion above in Example~\ref{counter}. Yet, a closer inspection of the example shows that such an affine function $L$ cannot work in this example. Indeed, consider the sequences \begin{equation*} x^{\left( n\right) }:=\left( 2n,0\right) \in C_{\left( u,b\right) }\left( 0\right) ;\quad t_{n}:=n^{-1}\quad \left( n\in \mathbb{N}\right). \end{equation*} Assuming that estimate (\ref{weakhaus0}) holds with an affine function $ L\left( r\right):=ar+b$ and choosing $s:=0$, we arrive at the following contradiction \begin{eqnarray*} n &\leq &\sqrt{1+n^{2}}=d\left( x^{\left( n\right) },C_{\left( u,b\right) }(t_{n})\right) \leq \left( a\left\Vert x^{\left( n\right) }\right\Vert +b\right) t_{n} \\ &=&\left(2an+b\right) n^{-1}\leq 2a+\left\vert b\right\vert \quad \forall n\in \mathbb{N}. \end{eqnarray*} On the other hand, the choice of the \emph{quadratic function} (\ref{lrquad}) by Theorem~\ref{slaterest} allows us to derive the weakened Hausdorff estimate \eqref{weakhaus0} in this example}. \end{remark}\vspace*{-0.2in} \subsubsection{\bf General truncation lemma} \noindent The last subsection of this section accomplishes the \emph{second step} of our approach outlined in the introduction to this section. The following general truncation result clearly implies the desired estimates \eqref{truncest} for truncating polyhedra.\vspace*{-0.05in} \begin{lemma} \label{TL} Let $(X,\Vert \cdot \Vert )$ be a normed space, and let $C$ be a nonempty, closed, and convex subset of $X$. Define the truncating set $C^{r}:=C\cap \mathbb{B}\left(0,r\right) $ for $r>0$. Then we have the estimate \begin{equation} d(x,C^{r})\leq \frac{2r}{r-d(0, C)} d(x,C)\quad \forall x\in \mathbb{B} \left( 0,r\right) \,\,\forall r>d\left( 0,C\right). \label{truncest0} \end{equation} Consequently, it follows that \begin{equation} d(x,C^{r})\leq 3d(x,C)\quad \forall x\in \mathbb{B}\left( 0,r\right) \,\,\forall r> 3d\left( 0,C\right). \label{truncest1} \end{equation} \end{lemma}\vspace*{-0.15in} \begin{proof} Pick arbitrary elements $r>d\left( 0,C\right) $, $x\in \mathbb{B}\left( 0,r\right) $, and $\varepsilon $ with $0<\varepsilon <r-d(0,C)$. If $x\in C$, then $x\in C^{r}$ and (\ref{truncest0}) holds trivially. Assume now that $x\notin C$, and so $d(x,C)>0$. Choose $x_{0},y\in C$ such that \begin{equation} \left\Vert x_{0}\right\Vert \leq \beta :=d\left( 0,C\right) +\varepsilon ,\,\,\left\Vert x-y\right\Vert \leq d(x,C)+\min \left\{ \varepsilon ,d\left( x,C\right) \right\} . \label{tworel} \end{equation} If $\Vert y\Vert \leq r$, then $y\in C^{r}$, and (\ref{truncest0}) follows from the inequality in (\ref{tworel}). Therefore, it remains to examine the case where $\Vert y\Vert >r$. The equality in (\ref{tworel}) combined with $ \varepsilon <r-d(0,C)$ gives us the estimate $\left\Vert x_{0}\right\Vert \leq \beta <r$. Therefore, there exists $\gamma \in \left( 0,1\right)$ such that $\Vert z\Vert =r$ for $z:=(1-\gamma )y+\gamma x_{0}$. The convexity of $C$ readily ensures that $z\in C^{r}$. Then we have \begin{equation*} r\leq (1-\gamma )\Vert y\Vert +\gamma \Vert x_{0}\Vert \text{\quad \mbox{or, equivalently,}\quad }\gamma \left( \Vert y\Vert -\Vert x_{0}\Vert \right) \leq \Vert y\Vert -r. 
\end{equation*} Due to $\Vert y\Vert >r>\beta \geq \left\Vert x_{0}\right\Vert $, the latter implies that \begin{equation*} \Vert z-y\Vert =\gamma \Vert y-x_{0}\Vert \leq \frac{\Vert y\Vert -r}{\Vert y\Vert -\beta }\left( \Vert y\Vert +\beta \right). \end{equation*} Taking into account that $\Vert x\Vert \leq r$ brings us to \begin{equation*} \Vert y\Vert \leq \Vert y-x\Vert +\Vert x\Vert \leq d(x,C)+\varepsilon +r, \end{equation*} and therefore we arrive at the estimate \begin{equation*} \Vert z-y\Vert \leq \frac{\Vert y\Vert +\beta }{\Vert y\Vert -\beta } (d(x,C)+\varepsilon). \end{equation*} Combining all the above leads us to the relationships \begin{equation*} \Vert z-x\Vert \leq \Vert z-y\Vert +\Vert y-x\Vert \leq (1+\frac{\Vert y\Vert +\beta }{\Vert y\Vert -\beta })(d(x,C)+\varepsilon )\leq \left( 2+% \frac{2\beta }{r-\beta }\right) (d(x,C)+\varepsilon ). \end{equation*}% Since $z\in C^{r}$ and $\varepsilon$ was chosen arbitrarily with $ 0<\varepsilon <r-d(0,C)$, we get \begin{equation*} d(x,C_{r})\leq \left( 2+\frac{2d(0, C) }{r-d(0, C) }\right) d(x,C), \end{equation*} which verifies (\ref{truncest0}) and thus completes the proof of the truncation lemma. \qed \end{proof}\vspace*{-0.25in} \section{Existence and uniqueness of sweeping solutions}\label{exsol}\vspace*{-0.05in} \noindent The main goal of this section is establishing two class-preservation \emph{existence and uniqueness} theorems for polyhedral controlled sweeping processes defined in \eqref{movpoly} and \eqref{sweeping} under the uniform Slater condition \eqref{unifslater} in the setting of \emph{separable Hilbert spaces}. Namely, we aim at proving that \emph{Lipschitz continuous} controls $(u,b)$ uniquely generate Lipschitz continuous trajectories of ${\cal S}_{(u,b)}$ and that \emph{absolutely continuous} (of class $W^{1,1}$) controls uniquely generate sweeping trajectories of the same class. Note that results of this type in the $W^{1,2}$ control-trajectory framework we obtained in \cite{tolsto,tolsto2,tolsto3} for various types of sweeping processes under appropriate assumptions in separable Hilbert spaces. Similar preservation results of class $W^{1,2}$ were established in \cite{mor1} in finite dimensions under the strong uniform Slater condition \eqref{slater2} reducing to \eqref{unifslater} as we now know. Observe also that results of this type in class of $W^{1,1}$ were derived in \cite{mor2,colmor} for polyhedral sweeping processes in finite-dimensional spaces under essentially stronger qualification conditions than \eqref{unifslater} used in what follows. Our approach below is strongly based on the truncation procedure and error bound estimates developed in the previous section.\vspace*{0.03in} Here is the first theorem dealing with Lipschitzian controls.\vspace*{-0.08in} \begin{theorem} \label{existlip} Let $\mathcal{H}$ be a separable Hilbert space. Assume that $\left( u,b\right)$ is Lipschitz continuous control and that the moving polyhedron $C_{\left(u,b\right)}$ in \eqref{movpoly} satisfies the uniform Slater condition \eqref{unifslater} along this control pair. Then the sweeping process $\left( \mathcal{S}_{\left( u,b\right) }\right)$ admits a unique Lipschitz continuous solution. \end{theorem}\vspace*{-0.2in} \begin{proof} Theorem~\ref{slaterest} ensures the existence of a monotonically increasing function $L:\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ satisfying the weakened Hausdorff estimate \eqref{weakhaus0}. 
This gives us for each $r>0$ a constant $\widehat{L}_{r}:=L\left( r\right)$ such that (\ref{weakhaus}) holds. Thus for all $r>0$, all $s,t\in \left[ 0,T\right] $, and all $x\in C_{\left( u,b\right) }(s)$ with $\left\Vert x\right\Vert\leq r$ there is $y\in C_{\left( u,b\right) }(t)$ satisfying \begin{equation*} \left\Vert x-y\right\Vert \leq \left( \widehat{L}_{r}+1\right) \left\vert s-t\right\vert. \end{equation*} Indeed, the latter is obvious with the choice of $y:=x$ in the case where $s=t$, and this follows from (\ref{weakhaus}) and from $d\left( x,C_{\left( u,b\right) }(t)\right) <\left( \widehat{L}_{r}+1\right) \left\vert s-t\right\vert $ in the case where $s\neq t$. Since the linear function $s\longmapsto \left( \widehat{L}_{r}+1\right) s$ trivially belongs to $W^{1,2}\left[ 0,T\right]$, it is $r$-\textit{weakly uniformly lower semicontinuous from the right} for $p=2$ in the sense of Tolstonogov \cite[eq. (2.2)]{tolsto}. Therefore, we deduce from \cite[Lemma~2.1 and Lemma~3.1]{tolsto} that the sweeping process $\left( \mathcal{S}_{\left( u,b\right) }\right)$ has a unique solution $x^{\ast }\in W^{1,2}\left( \left[ 0,T\right],\mathcal{H}\right)$. In particular, the trajectory $x^{\ast}(t)$ is absolutely continuous on $[0,T]$. It remains to show that $x^{\ast}(t)$ is Lipschitz continuous on this interval. To proceed, define \begin{equation} \rho :=\max\limits_{t\in \left[ 0,T\right] }\left\Vert x^{\ast }\left( t\right) \right\Vert ;\quad r:=3\rho +1 \label{rhordef} \end{equation} and then fix arbitrary $s,t\in \left[ 0,T\right] $ and \begin{equation*} x\in C_{\left( u,b\right)}^{r}(s):=C_{\left( u,b\right) }(s)\cap \mathbb{B} \left( 0,r\right). \end{equation*} As a solution to $\left( \mathcal{S}_{\left( u,b\right) }\right) $, the function $x^{\ast }(t)$ satisfies the hidden state constraint $x^{\ast }\left( t\right) \in C_{\left( u,b\right) }(t)$. Therefore, we obtain \begin{equation*} r=3\rho +1\geq 3\left\Vert x^{\ast }\left( t\right) \right\Vert +1>3d\left( 0,C_{\left( u,b\right) }(t)\right). \end{equation*} This allows us to invoke the truncation result from Lemma~\ref{TL} to get \begin{equation} d\left( x,C_{\left( u,b\right) }^{r}(t)\right) \leq 3d\left( x,C_{\left( u,b\right) }(t)\right).\label{almost} \end{equation} On the other hand, Theorem~\ref{slaterest} yields (\ref{weakhaus0}) and hence gives us a constant $\widehat{L}$ such that (\ref{weakhaus}) holds for our selected $s,t\in \left[ 0,T\right] $. Combining this with (\ref{almost}), and recalling that $s,t,x$ were chosen arbitrarily, we arrive at the estimate \begin{equation*} d\left( x,C_{\left( u,b\right) }^{r}(t)\right) \leq 3\widehat{L}\left\vert s-t\right\vert \quad \forall s,t\in \left[ 0,T\right] \,\,\forall x\in C_{\left( u,b\right) }^{r}(s). \end{equation*} Interchanging the roles of $s$ and $t$ readily yields the desired Lipschitz Hausdorff estimate (\ref{haustrunc}) of the truncated moving polyhedron with modulus $3\widehat{L}$. Employing the standard existence result from \cite[Theorem~2]{kunzemonteiro}) leads us to deducing from the obtained estimate that the truncated sweeping process $\big(\mathcal{\widetilde{S}}_{\left(u,b\right)}\big)$ defined as \begin{equation} -\dot{x} (t)\in N_{C_{\left( u,b\right)}^{r}(t)}\left( x(t)\right)\;\mathrm{ a.e.\,}\;t\in[0,T],\;\; x(0)=x_{0}\in C_{\left( u,b\right) }^{r}(0) \label{tildes} \end{equation} admits a Lipschitz continuous solution $\widetilde{x}(\cdot)$. 
It follows from the definitions in (\ref{rhordef}) that for all $r>\rho $ we have the inclusions \begin{equation*} x^{\ast }\left( t\right) \in C_{\left( u,b\right) }(t)\cap \mathbb{B}\left( 0,\rho \right) \subseteq C_{\left( u,b\right) }(t)\cap \mathrm{int\,}\mathbb{B}\left( 0,r\right) \subset C_{\left( u,b\right) }^{r}(t)\quad \forall t\in \left[ 0,T\right]. \end{equation*} On the one hand, the resulting inclusion justifies the feasibility of the initial point in $\big(\mathcal{\widetilde{S}}_{\left( u,b\right) }\big)$ due to $x_{0}=x^{\ast }\left( 0\right)$. On the other hand, it tells us that \begin{equation*} N_{C_{\left( u,b\right) }^{r}(t)}\left(x^{\ast }(t)\right) =N_{C_{\left( u,b\right) }(t)}\left(x^{\ast }(t)\right) \quad \forall \mathrm{\,}t\in \left[ 0,T\right]. \end{equation*} Therefore, $x^{\ast }(\cdot)$ being a solution to $\big(\mathcal{S}_{\left( u,b\right) }\big)$ is also a solution to $\big( \mathcal{\widetilde{S}} _{\left( u,b\right) }\big)$. Since $x^{\ast}(t)$ is absolutely continuous on $[0,T]$ as an element of $W^{1,2}\left( \left[ 0,T\right] ,\mathcal{H}\right)$, and since $\left( \mathcal{\widetilde{S}}_{\left( u,b\right) }\right) $ can have at most one absolutely continuous solution by \cite[Theorem~3]{kunzemonteiro}, we conclude that $x^{\ast}(\cdot)=\widetilde{x}(\cdot)$. This ensures that $x^{\ast }(t)$ is Lipschitz continuous on $[0,T]$, since $\widetilde{x}(t)$ is so. Thus we complete the proof.\qed \end{proof}\vspace*{-0.05in} Our next goal in this section is to establish the existence of a unique \emph{absolutely continuous} solution of the sweeping process $\big(\mathcal{S}_{\left(u,b\right)}\big)$ generated by any absolutely continuous control $(u,b)$ in the moving polyhedron \eqref{movpoly} under the same uniform Slater condition. Recall that the norms on the spaces of absolutely continuous functions $W^{1,1}([0,T],\mathcal{H}^{m})$ and $ W^{1,1}([0,T],\mathbb{R}^{m})$ are defined, respectively, by \begin{equation*} \Vert u\Vert _{1,1}:=\sum_{i=1}^{m}\Vert u_{i}(0)\Vert +\sum_{i=1}^{m}\int_{0}^{T}\Vert \dot{u}_{i}(t)\Vert dt,\;\Vert b\Vert _{1,1}:=\sum_{i=1}^{m}|b_{i}(0)|+\sum_{i=1}^{m}\int_{0}^{T}|\dot{b} _{i}(t)|dt. \end{equation*} The norm on the product space $W^{1,1}([0,T],\mathcal{H}^{m})\times W^{1,1}([0,T],\mathbb{R}^{m})$ is $\Vert (u,b)\Vert _{1,1}:=\Vert u\Vert _{1,1}+\Vert b\Vert _{1,1}$, and the induced ball around $ \left( u,b\right) $ with radius $r$ is $\mathbb{B}_{1,1}\left( \left( u,b\right),r\right)$.\vspace*{0.05in} The proof of the following theorem elaborates a reduction idea from \cite{thibault} that allows us to deal with non-Lipschitzian controls of the sweeping dynamics.\vspace*{-0.05in} \begin{theorem} \label{existsobolev} Let $\mathcal{H}$ be a separable Hilbert space. Take $\left( u,b\right)\in W^{1,1}([0,T],\mathcal{H}^{m})\times W^{1,1}([0,T],\mathbb{R}^{m})$ and suppose that the moving polyhedron $C_{\left( u,b\right) }$ in \eqref{movpoly} satisfies the uniform Slater condition \eqref{unifslater}. Then the control pair $(u,b)$ generates a unique solution $x\in W^{1,1}\left( \left[ 0,T\right] ,\mathcal{H}\right)$ of the sweeping process $\big(\mathcal{S}_{\left( u,b\right)}\big) $ in \eqref{sweeping}.
\end{theorem}\vspace*{-0.2in} \begin{proof} It follows from the Newton-Leibniz formula that \begin{equation*} \left\Vert f\left( t\right) -f\left( s\right) \right\Vert \leq \int_{s}^{t}\left\Vert \dot{f}\left( r\right) \right\Vert dr\quad \forall f\in W^{1,1}\left( \left[ 0,T\right] ,\mathcal{H}\right) \end{equation*} whenever $s,t\in\left[ 0,T\right]$ with $s\leq t$. Therefore, for all such $s,t$ we have \begin{align} \sum_{i=1}^{m}\Vert u_{i}(t)-u_{i}(s)\Vert +|b_{i}(t)-b_{i}(s)|&\leq \left\vert \int_{s}^{t}\sum_{i=1}^{m}\Vert \dot{u}_{i}(r)\Vert +|\dot{b} _{i}(r)|dr+t-s\right\vert\notag\\&=|\gamma (t)-\gamma (s)| \label{trafo} \end{align} with the strongly increasing and absolutely continuous function \begin{equation}\label{gamma} \gamma (t):=t+\int_{0}^{t}\sum_{i=1}^{m}\Vert \dot{u}_{i}(r)\Vert +|\dot{b} _{i}(r)|dr \end{equation} For each index $i=1,\ldots ,m$, introduce the pair $\left( u_{i}^{\prime},b_{i}^{\prime }\right) :[0,\gamma (T)]\rightarrow H\times \mathbb{R}$ by \begin{equation*} \left( u_{i}^{\prime },b_{i}^{\prime }\right) (\tau ):=\left( u_{i},b_{i}\right) (\gamma ^{-1}(\tau )),\quad \tau \in \lbrack 0,\gamma (T)]. \end{equation*} Then we readily have the relationship \begin{equation} C_{\left( u^{\prime },b^{\prime }\right) }(\tau )=C_{\left( u,b\right) }(\gamma ^{-1}(\tau )),\quad \tau \in \lbrack 0,\gamma (T)]. \label{ctrafo} \end{equation} Since $\gamma ^{-1}(0)=0$, it follows from \eqref{ctrafo} that $x_{0}\in C_{\left( u,b\right) }(0)=C_{\left( u^{\prime },b^{\prime }\right) }(0)$. Therefore, the sweeping process \begin{equation*} \big( \mathcal{S}_{\left( u^{\prime },b^{\prime }\right) }^{\prime }\big) :\qquad -\dot{x}(\tau )\in N_{C_{\left( u^{\prime },b^{\prime }\right) }(\tau )}\left( x(\tau )\right) \quad \mathrm{a.e.\,}\tau \in \left[ 0,\gamma (T)\right] ,\quad x(0)=x_{0} \end{equation*} is exactly of type $\left( \mathcal{S}_{\left( u,b\right) }\right) $ as in (\ref{sweeping}). Furthermore, (\ref{trafo}) yields \begin{equation*} \Vert u_{i}^{\prime }(\tau _{1})-u_{i}^{\prime }(\tau _{2})\Vert +|b_{i}^{\prime }(\tau _{1})-b_{i}^{\prime }(\tau _{2})|\leq \left\vert \tau _{1}-\tau _{2}\right\vert \quad \forall \tau _{1},\tau _{2}\in \left[ 0,\gamma (T)\right] \,\,\forall i=1,\ldots ,m, \end{equation*} which tells us that the control $\left( u^{\prime },b^{\prime }\right) $ is Lipschitz continuous on the interval $[0,\gamma(T)]$. Observe also that $C_{\left( u^{\prime },b^{\prime }\right) }$ satisfies the uniform Slater condition (\ref{unifslater}) on this interval since $C_{\left( u,b\right) }$ does so on the original interval $\left[ 0,T\right] $). This allows us to invoke Theorem~\ref{existlip}, applied now to the control $\left( u^{\prime },b^{\prime }\right) $, and conclude that the modified sweeping process $\big( \mathcal{S}_{\left( u^{\prime },b^{\prime }\right) }^{\prime }\big) $ admits a unique Lipschitzian solution $y(\cdot)$ with some modulus $K$. For all $t\in \lbrack 0,T]$, set $z\left( t\right) :=y\left( \gamma (t)\right) $, which implies that $\dot{z}\left( t\right) :=\dot{y} \left( \gamma (t)\right) \dot{\gamma}(t)$ for a.e.\ $t\in[0,T]$. Hence \begin{equation*} \left\Vert \dot{z}\left( t\right) \right\Vert \leq \left\Vert \dot{y}\left( \gamma (t)\right) \right\Vert \dot{\gamma}(t)\leq K\dot{\gamma}(t)\quad \mathrm{a.e.\;}\;t\in \left[ 0,T\right] . 
\end{equation*} Since $y(\cdot)$ is a solution to $\big( \mathcal{S}_{\left( u^{\prime },b^{\prime }\right) }^{\prime }\big) $ while $\dot{\gamma}(t)> 0$ for a.e.\ $t\in \left[ 0,T\right] $, we get by using (\ref{ctrafo}) that \begin{align*} -\dot{z}\left( t\right)=-\dot{y}\left( \gamma (t)\right) \dot{\gamma}(t)&\in \dot{\gamma}(t)N_{C_{\left( u^{\prime },b^{\prime }\right) }(\gamma (t))}\left( y(\gamma (t))\right)=N_{C_{\left( u^{\prime },b^{\prime }\right) }(\gamma (t))}\left( z(t)\right)\\ &=N_{C_{\left( u,b\right)}(t)}\left( z(t)\right)\;\mathrm{ a.e.}\;t\in[0,T]. \end{align*} It follows from \eqref{gamma} that $\gamma \in W^{1,1}\left( [0,T],\mathbb{R}\right) $, and so $z\in W^{1,1}([0,T],\mathcal{H})$ as well. Furthermore, we have that $z\left( 0\right) =y\left( \gamma (0)\right) =y(0)=x_{0}$ because $y(\cdot)$ is a solution of $\big( \mathcal{S}_{\left( u^{\prime },b^{\prime }\right) }^{\prime }\big)$. This allows us to conclude that $z(\cdot)$ is a solution of the original sweeping process $\big( \mathcal{S}_{\left( u,b\right)}\big) $ and---being absolutely continuous on $[0,T]$---it is unique by \cite[Theorem~3]{kunzemonteiro}. \qed \end{proof}\vspace*{-0.05in} Finally in this section, we present a consequence of Theorem~\ref{existsobolev} ensuring a result of this type for the $\delta$-moving polyhedron \eqref{delmov}. This result is important for our applications to stability in the next section.\vspace*{-0.05in} \begin{corollary} \label{Cdelta} Let $\mathcal{H}$ be a separable Hilbert space, and let the uniform Slater condition \eqref{unifslater} be satisfied along a given control $\left( \bar{u},\bar{b}\right) \in W^{1,1}([0,T],\mathcal{H}^{m})\times W^{1,1}([0,T],\mathbb{R}^{m})$. Then there exists $\varepsilon >0$ such that for all numbers $\delta \in \lbrack 0,\varepsilon )$ the perturbed sweeping process \begin{equation} -\dot{x}\in N(C_{\left( \bar{u},\bar{b}\right) }^{\left( \delta \right) }(t),x(t))\quad \mathrm{a.e.\;}t\in \left[ 0,T\right] ,\quad x(0)=\widehat{x}\left( 0\right) \in C_{\left( \bar{u},\bar{b}\right)}^{\left( \delta \right) }(0) \label{ydelta} \end{equation} admits a unique absolutely continuous solution. Here $C_{\left( \bar{u},\bar{b} \right) }^{\left( \delta \right) }$ is defined in \eqref{delmov} and $ \widehat{x}(\cdot)$ is the continuous selection $\widehat{x}(t)\in C_{\left( \bar{u},\bar{b}\right) }^{\left( \delta \right) }(t)$ taken from \eqref{select1}. \end{corollary}\vspace*{-0.15in} \begin{proof} As in the proof of Lemma~\ref{strongselection}, choose $\varepsilon >0$ from \eqref{slater2} and pick $\delta \in \lbrack 0,\varepsilon )$. Then $C_{\left( \bar{u},\widetilde{b}\right) }=C_{\left( \bar{u},\bar{b}\right) }^{\left( \delta \right) }$, with $\widetilde{b}$ defined by $\widetilde{b}_{i}:=\bar{b}_{i}-\delta $ as $i=1,\ldots,m$, also satisfies the uniform Slater condition. The result now follows from Theorem~\ref{existsobolev}. \qed \end{proof}\vspace*{-0.25in} \section{Quantitative stability of the perturbed sweeping dynamics}\label{quantstab}\vspace*{-0.05in} \noindent In this section, we investigate the stability of solutions to controlled polyhedral sweeping processes with respect to perturbations of controls and initial values of the sweeping dynamics.
Theorem~\ref{existsobolev} allows us to associate with each absolutely continuous control $\left( u,b\right)$ satisfying (\ref{unifslater}) and with the initial value $ x(0)=x_{0}\in C_{\left( u,b\right) }\left( 0\right)$ the unique absolutely continuous solution $x_{(u,b)}$ of the sweeping process $\big( \mathcal{S}_{\left( u,b\right) }\big)$. In contrast with the previous analysis, where the initial point $x_0$ was fixed, we now compare solutions of $\big( \mathcal{S}_{\left( u,b\right) }\big)$ corresponding not only to different controls but also to different initial points. To emphasize this dependence, let us write $\big(\mathcal{S}_{\left( u,b,x_{0}\right) }\big)$ for the sweeping process $\big(\mathcal{S} _{\left( u,b\right) }\big) $ corresponding to the initial condition $x(0)=x_{0}\in C_{\left( u,b\right) }\left( 0\right) $ and denote its unique solution by $ x_{\left( u,b,x_{0}\right)}$. We begin with the following estimate, which is based on Lemma~\ref{strongselection} and uses the arguments from the proof of Proposition~3 in \cite{HaddadJouraniThibault}.\vspace*{-0.05in} \begin{lemma}\label{LemmaEstim} Assume that $\mathcal{H}$ is a separable Hilbert space, and that the uniform Slater condition \eqref{unifslater} holds for some given control $(\bar{u},\bar{b})\in W^{1,1}([0,T],\mathcal{H} ^{m})\times W^{1,1}([0,T],\mathbb{R}^{m})$. Then there exists $ \varepsilon>0$ such that for all $\delta \in \left( 0,\varepsilon \right) $, for all controls $(u,b)\in \mathbb{B}_{1,1}\left( (\bar{u},\bar{b}),\frac{ \delta }{1+\Vert y_{\delta }\Vert_{\infty }}\right) $, and for all corresponding solutions $x(\cdot)$ to the sweeping processes $\big( \mathcal{S}_{\left( u,b,x_{0}\right) }\big) $ we have the estimate \begin{align} &\Vert \dot{x}(t)\Vert \leq \frac{1}{\delta }\left( \Vert \widehat{x}\Vert _{\infty }+\Vert y_{\delta }\Vert _{\infty }+\alpha _{\delta }\right) \left( 1+\Vert y_{\delta }\Vert _{\infty }+\alpha _{\delta }\right) \sum_{i=1}^{m}\left( \Vert \dot{u}_{i}(t)\Vert +|\dot{b}_{i}(t)|\right)\notag\\ &\mathrm{a.e.}\,\,t\in \lbrack 0,T]. \label{EstTraj2} \end{align} Here $\widehat{x}(\cdot)$ stands for the continuous selection $\widehat{x}(t)\in C_{\left(\bar{u},\bar{b}\right) }^{\left( \delta \right) }(t)$ taken from \eqref{select1}, $y_{\delta}(\cdot)$ refers to the associate unique solution of the perturbed sweeping process \eqref{ydelta} guaranteed by Corollary~{\rm\ref{Cdelta}}, and the constant $\alpha_\delta$ is defined by \begin{equation} \alpha _{\delta }:=\int_{0}^{T}\Vert \dot{y}_{\delta }(t)\Vert dt+\sqrt{ \left( \int_{0}^{T}\Vert \dot{y}_{\delta }(t)\Vert dt\right) ^{2}+\Vert x(0)- \widehat{x}\left( 0\right) \Vert ^{2}}.\label{alphadelta2} \end{equation} \end{lemma}\vspace*{-0.05in} \begin{proof} As in previous proofs, we choose $\varepsilon >0$ from perturbed uniform Slater condition (\ref {slater2}) equivalent to the the assumed one (\ref{unifslater}) by Proposition~\ref{slaterequiv}. Fix an arbitrary $\delta \in \left( 0,\varepsilon \right)$, then fix an arbitrary control pair \begin{equation} (u,b)\in \mathbb{B}_{1,1}\left( (\bar{u},\bar{b}),\frac{\delta }{1+\Vert y_{\delta }\Vert _{\infty }}\right), \label{newradius} \end{equation} and denote by $x(\cdot)$ the corresponding unique solution of the sweeping process $\big( \mathcal{S}_{\left( u,b,x_{0}\right) }\big)$ due to Theorem~\ref{existsobolev}. 
By the absolute continuity of the triple $(u,b,x)$, the derivatives $\dot{x}(t)$, $\dot{u}_{i}(t)$ and $\dot{b}_{i}(t)$ exist for almost all $t\in \left[ 0,T\right] $. Fixing now any such time $t$, we get \begin{align*} x(t-s)&=x(t)-s(\dot{x}(t)+\alpha _{x}(s)),\quad u_{i}(t-s)=u_{i}(t)-s(\dot{u}_{i}(t)+\alpha _{u,i}(s))\\b_{i}(t-s)&=b_{i}(t)-s(\dot{b}_{i}(t)+\alpha _{b,i}(s)), \end{align*} where $\lim_{s\rightarrow 0}\alpha _{x}(s)=0$, $\lim_{s\rightarrow 0}\alpha _{u,i}(s)=0$ and $\lim_{s\rightarrow 0}\alpha _{b,i}(s)=0$. Since $x(t-s)\in C_{\left( u,b\right) }\left( t-s\right) $ for all $s$, we deduce from (\ref{select3}) that \begin{align*} x(t-s)\in C_{(u,b)}(t)+\hskip 9cm&\\ \frac{1}{\delta }\Vert x(t-s)-\widehat{x}(t)\Vert \sum_{i=1}^{m}\left( \Vert u_{i}(t-s)-u_{i}(t)\Vert \cdot \Vert x(t-s)\Vert +|b_{i}(t-s)-b_{i}(t)|\right) \mathbb{B}&, \end{align*} where $\mathbb{B}$ refers as usual to the unit ball in $\mathcal{H}$. Using the convexity of $C_{(u,b)}(t)$ and passing to the limit as $s\downarrow 0$ gives us the inclusion \begin{equation*} -\dot{x}(t)\in T(C_{(u,b)}(t),x(t))+\frac{1}{\delta }\Vert x(t)-\widehat{x} (t)\Vert \sum_{i=1}^{m}\left( \Vert \dot{u}_{i}(t)\Vert \cdot \Vert x(t)\Vert +|\dot{b}_{i}(t)|\right)\mathbb{B}, \end{equation*} where $T(S,u)$ stands for the tangent cone to a convex set $S$ at $u$ in the sense of convex analysis. As $-\dot{x}(t)\in N(C_{(u,b)}(t),x(t))$, we arrive at \begin{equation*} \Vert \dot{x}(t)\Vert ^{2}\leq \Vert \dot{x}(t)\Vert \cdot \frac{1}{\delta } \Vert x(t)-\widehat{x}(t)\Vert \sum_{i=1}^{m}\left( \Vert \dot{u} _{i}(t)\Vert \cdot \Vert x(t)\Vert +|\dot{b}_{i}(t)|\right), \end{equation*} which in turn implies, since $t$ was arbitrarily chosen from a subset of full measure on $[0,T]$, the derivative norm estimate \begin{equation} \Vert \dot{x}(t)\Vert \leq \frac{1}{\delta }\Vert x(t)-\widehat{x}\left( t\right) \Vert \sum_{i=1}^{m}\left( \Vert \dot{u}_{i}(t)\Vert \cdot \Vert x(t)\Vert +|\dot{b}_{i}(t)|\right) \quad \mathrm{a.e.\,}t\in \lbrack 0,T]. \label{EstTraj} \end{equation} To proceed further, let $y_{\delta}(\cdot)$ be the unique absolutely continuous solution to the sweeping process (\ref{ydelta}) according to Corollary~\ref{Cdelta}. Since $\left\langle \bar{u}_{i}\left( t\right) ,y_{\delta }(t)\right\rangle \le\bar{b}_{i}\left( t\right) -\delta $ for all $t\in \lbrack 0,T]$ and all $ i=1,\ldots ,m$, we deduce from (\ref{newradius}) that \begin{eqnarray*} \langle u_{i}(t),y_{\delta }(t)\rangle -b_{i}(t) &\leq &\langle u_{i}(t)-\bar{u}_{i}(t),y_{\delta }(t)\rangle +\bar{b}_{i}(t)-b_{i}(t)-\delta \\ &\leq &\Vert u-\bar{u}\Vert _{\infty }\Vert y_{\delta }\Vert _{\infty }+\Vert b-\bar{b}\Vert _{\infty }-\delta\\& \leq& \Vert u-\bar{u}\Vert _{1,1}\Vert y_{\delta }\Vert _{\infty }+\Vert b-\bar{b}\Vert _{1,1}-\delta \\ &\leq &\Vert (u,b)-(\bar{u},\bar{b})\Vert _{1,1}(1+\Vert y_{\delta }\Vert _{\infty })-\delta \leq 0\quad \forall t\in \lbrack 0,T]. \end{eqnarray*} Therefore, $y_{\delta }(t)\in C_{(u,b)}(t)$ for all $t\in \lbrack 0,T]$.
Remembering that $x(\cdot)$ solves the original sweeping process $\big( \mathcal{S}_{\left( u,b,x_{0}\right) }\big) $, it follows that $-\dot{x}(t)\in N_{C_{\left( u,b\right) }(t)}\left( x(t)\right) $ for a.e.\ $\mathrm{\,} t\in \lbrack 0,T]$, and hence we have \begin{align*} \frac{d}{dt}\frac{1}{2}\Vert x(t)-y_{\delta }(t)\Vert ^{2}& = \langle \dot{x} (t)-\dot{y}_{\delta }(t),x(t)-y_{\delta }(t)\rangle\\& =\langle \dot{x} (t),x(t)-y_{\delta }(t)\rangle +\langle -\dot{y}_{\delta }(t),x(t)-y_{\delta }(t)\rangle \\ & \leq \langle -\dot{y}_{\delta }(t),x(t)-y_{\delta }(t)\rangle \leq \Vert \dot{y}_{\delta }(t)\Vert \cdot \Vert x(t)-y_{\delta }(t)\Vert _{\infty }. \end{align*} This brings us to the estimate \begin{equation*} \frac{\Vert x(t)-y_{\delta }(t)\Vert ^{2}}{2}-\frac{\Vert x(0)-\widehat{x} \left( 0\right) \Vert ^{2}}{2}\leq \Vert x-y_{\delta }\Vert _{\infty }\cdot \int_{0}^{T}\Vert \dot{y}_{\delta }(t)\Vert dt\quad \forall t\in \lbrack 0,T], \end{equation*} which implies on turn that \begin{equation*} \frac{\Vert x-y_{\delta }\Vert _{\infty }^{2}}{2}-\frac{\Vert x(0)-\widehat{x }\left( 0\right) \Vert ^{2}}{2}\leq \Vert x-y_{\delta }\Vert _{\infty }\cdot \int_{0}^{T}\Vert \dot{y}_{\delta }(t)\Vert dt. \end{equation*} Consequently, we arrive at the inequality \begin{equation*} \Vert x-y_{\delta }\Vert _{\infty }^{2}-2\left( \int_{0}^{T}\Vert \dot{y} _{\delta }(t)\Vert dt\right) \Vert x-y_{\delta }\Vert _{\infty }-\Vert x(0)- \widehat{x}\left( 0\right) \Vert ^{2}\leq 0. \end{equation*} Invoking the definition of $\alpha_\delta$ in \eqref{alphadelta2} gives us the estimate \begin{equation} \Vert x-y_{\delta }\Vert _{\infty }\leq \alpha _{\delta }, \label{alphadelta} \end{equation} which being combined with (\ref{EstTraj}) verifies the claimed inequality (\ref{EstTraj2}) and thus completes the proof of the lemma. \qed \end{proof}\vspace*{-0.05in} Now we are ready to establish the main stability result.\vspace*{-0.07in} \begin{theorem} \label{controltostate} Let $\mathcal{H}$ be a separable Hilbert space, and let the uniform Slater condition \eqref{unifslater} hold for a given control pair $(\bar{u},\bar{b})\in W^{1,1}([0,T],\mathcal{H}^{m})\times W^{1,1}([0,T],\mathbb{R}^{m})$. Then there exist a number $\rho >0$ and a continuous function $K:\mathcal{H}\times\mathcal{H}\to\mathbb{R}_+$ such that for all control pairs \begin{equation} (u,b),(u^{\prime },b^{\prime })\in \left[ W^{1,1}([0,T],\mathcal{H}% ^{m})\times W^{1,1}([0,T],\mathbb{R}^{m})\right] \cap \mathbb{B}_{1,1}\left( (\bar{u},\bar{b}),\rho \right) , \label{control ball} \end{equation} for all initial values $x_{0}\in C_{(u,b)}(0)$, $x_{0}^{\prime }\in C_{(u^{\prime },b^{\prime })}(0)$, and the associated solutions $x,x^{\prime }$ to the sweeping processes $\big( \mathcal{S}_{\left( u,b,x_{0}\right) }\big)$ and $\big( \mathcal{S}_{\left( u^{\prime },b^{\prime },x_{0}^{\prime }\right) }\big) $, respectively, we have\vspace*{-0.1in} \begin{equation} \left\Vert x(t)-x^{\prime }(t)\right\Vert ^{2}\leq \left\Vert x_{0}-x_{0}^{\prime }\right\Vert ^{2}+K(x_0,x^\prime_0)\Vert (u-u^{\prime },b-b^{\prime })\Vert _{\infty }\quad \forall t\in \lbrack 0,T]. \label{hoelder} \end{equation} \end{theorem}\vspace*{-0.15in} \begin{proof} As above, we employ the equivalent description (\ref{slater2}) of the uniform Slater condition (\ref{unifslater}) and take $\varepsilon>0$ from Proposition~\ref{slaterequiv}. 
Fixing an arbitrary number $\delta \in \left( 0,\varepsilon\right)$, define the quantity \begin{equation} \rho :=\min \left\{ \frac{\delta }{1+\Vert y_{\delta }\Vert _{\infty }}, \frac{\varepsilon -\delta }{3\left( 1+\left\Vert \widehat{x}\right\Vert _{\infty }\right) }\right\}, \label{rhomindef} \end{equation} where $\widehat{x}(\cdot)$ is the continuous selection $\widehat{x}(t)\in C_{\left( \bar{u},\bar{b}\right) }^{\left( \delta \right) }(t)$ satisfying (\ref{select1}), and where $y_{\delta}(\cdot)$ is the unique absolutely continuous solution to the perturbed sweeping process (\ref{ydelta}) taken from Corollary~\ref{Cdelta}. Select arbitrary controls $(u,b),(u^{\prime },b^{\prime })$ from (\ref{control ball}), arbitrary initial values $x_{0}\in C_{(u,b)}(0)$, $x_{0}^{\prime }\in C_{(u^{\prime },b^{\prime })}(0)$, and the associated solutions $x,x^{\prime } $ to the sweeping processes $\big(\mathcal{S}_{\left( u,b,x_{0}\right) }\big)$ and $\big( \mathcal{S}_{\left( u^{\prime },b^{\prime },x_{0}^{\prime }\right)}\big)$, respectively. Then it follows from (\ref{alphadelta}) that \begin{equation} \Vert x-y_{\delta }\Vert _{\infty }\leq \alpha _{\delta }\;\mbox{ and }\;\Vert x^{\prime }-y_{\delta }\Vert _{\infty }\leq \alpha _{\delta }^{\prime } \label{yaldel} \end{equation} for $\alpha _{\delta }$ defined in (\ref{alphadelta2}) and $\alpha _{\delta }^{\prime }$ defined by the same formula with the initial value $ x\left( 0\right) =x_{0}$ replaced by the initial value $x^{\prime }\left( 0\right) =x_{0}^{\prime }$. Lemma~\ref{LemmaEstim} gives us estimate (\ref{EstTraj2}) for the control $(u,b)$ as well as the corresponding estimate \begin{align} &\Vert \dot{x}^{\prime }(t)\Vert \leq \delta ^{-1}\left( \Vert \widehat{x}% \Vert _{\infty }+\Vert y_{\delta }\Vert _{\infty }+\alpha _{\delta }^{\prime }\right) \left( 1+\Vert y_{\delta }\Vert _{\infty }+\alpha _{\delta }^{\prime }\right) \sum_{i=1}^{m}\left( \Vert \dot{u}_{i}^{\prime }(t)\Vert +|\dot{b}_{i}^{\prime }(t)|\right)\notag\\ &\mathrm{a.e.}\,\,t\in \lbrack 0,T]\label{EstTraj6} \end{align} for the control $(u^{\prime },b^{\prime })$. Denoting now \begin{align} C:=&\left( \alpha _{\delta }+\Vert y_{\delta }\Vert _{\infty }+\Vert { \widehat{x}}\Vert _{\infty }\right) \left( 1+\alpha _{\delta }+\Vert y_{\delta }\Vert _{\infty }\right), \notag\\C^{\prime }:=&\left( \alpha _{\delta }^{\prime }+\Vert y_{\delta }\Vert _{\infty }+\Vert {\widehat{x}} \Vert _{\infty }\right) \left( 1+\alpha _{\delta }^{\prime }+\Vert y_{\delta }\Vert _{\infty }\right) \label{ccbar} \end{align} and integrating (\ref{EstTraj2}) ensure that \begin{equation*} \int_{0}^{t}\Vert \dot{x}(s)\Vert ds\leq \delta ^{-1}C\sum_{i=1}^{m}\int_{0}^{t}\left( \Vert \dot{u}_{i}(s)\Vert +|\dot{b} _{i}(s)|\right) ds\quad \forall \,t\in \lbrack 0,T]. \end{equation*} Therefore, recalling that $(u,b)\in \mathbb{B}_{1,1}\left( (\bar{u},\bar{b} ),\rho \right)$ yields \begin{equation} \int_{0}^{t}\Vert \dot{x}(s)\Vert ds\leq \delta ^{-1}C\left\Vert \left( u,b\right) \right\Vert _{1,1}\leq \delta ^{-1}C\left( \rho +\left\Vert \left( \bar{u},\bar{b}\right) \right\Vert _{1,1}\right) . \label{EstTraj3} \end{equation} Similarly, the integration of (\ref{EstTraj6}) gives us \begin{equation} \int_{0}^{t}\Vert \dot{x}^{\prime }(s)\Vert ds\leq \delta ^{-1}C^{\prime }\left( \rho +\left\Vert \left( \bar{u},\bar{b}\right) \right\Vert _{1,1}\right). \label{EstTraj4} \end{equation} Let now $t\in \lbrack 0,T]$ be from a subset of full measure such that $\dot{ x}(t)$ and $\dot{x}^{\prime }(t)$ exist. 
We clearly have $x(t)\in C_{(u,b)}(t)$ and $x^{\prime }(t)\in C_{(u^{\prime },b^{\prime })}(t)$. Since a ball in the $\Vert \cdot \Vert _{1,1}$-norm is contained in a ball of the same radius in the $\Vert \cdot \Vert _{\infty }$-norm, the construction of $\rho $ in (\ref{rhomindef}) allows us to employ the error bound (\ref{select3}) from Lemma~\ref{strongselection}. This ensures the existence of $x_{1}\in C_{(u^{\prime },b^{\prime })}(t)$ and $x_{1}^{\prime }\in C_{(u,b)}(t)$ with \begin{equation*} \begin{array}{ll} \Vert x(t)-x_{1}\Vert\leq\delta ^{-1}{\Vert x(t)-\widehat{x}(t)\Vert } \displaystyle\max_{i=1,\ldots ,m}\left[ \langle u_{i}^{\prime }(t)-u_{i}(t),x(t)\rangle +b_{i}(t)-b_{i}^{\prime }(t)\right] _{+}\\ \leq\delta ^{-1}{\Vert x(t)-\widehat{x}(t)\Vert }\left( \Vert u(t)-u^{\prime }(t)\Vert \left\Vert x(t)\right\Vert +\left\Vert b(t)-b^{\prime }(t)\right\Vert \right)\\ \leq\delta ^{-1}{\Vert x(t)-\widehat{x}(t)\Vert }(1+\Vert x(t)\Vert )\Vert (u(t)-u^{\prime }(t),b(t)-b^{\prime }(t))\Vert. \end{array} \end{equation*} Similar considerations bring us to the estimate \begin{equation*} \Vert x^{\prime }(t)-x_{1}^{\prime }\Vert \leq \delta ^{-1}{\Vert x}^{\prime }{(t)-\widehat{x}(t)\Vert }(1+\Vert x^{\prime }(t)\Vert )\Vert (u(t)-u^{\prime }(t),b(t)-b^{\prime }(t))\Vert. \end{equation*} Since $x(\cdot)$ and $x^{\prime }(\cdot)$ are absolutely continuous solutions to $\big( \mathcal{S}_{\left(u,b,x_{0}\right)}\big)$ and $\big(\mathcal{S}_{\left( u^{\prime },b^{\prime },x_{0}^{\prime }\right) }\big)$, respectively, we deduce from $-\dot{x} (t)\in N_{C_{\left( u,b\right) }(t)}\left( x(t)\right) $, $-\dot{x} ^{\prime }(t)\in N_{C_{\left( u^{\prime },b^{\prime }\right) }(t)}\left( x^{\prime }(t)\right) $, and the obtained estimates of $\Vert x(t)-x_{1}\Vert$ and $\Vert x^{\prime }(t)-x_{1}^{\prime }\Vert$ that \begin{eqnarray*} &&{\frac{d}{{dt}}}\frac{1}{2}\left\Vert x(t)-x^{\prime }(t)\right\Vert ^{2} \\ &=&\langle \dot{x}(t)-\dot{x}^{\prime }(t),x(t)-x^{\prime }(t)\rangle =\langle \dot{x}(t),x(t)-x^{\prime }(t)\rangle -\langle \dot{x}^{\prime }(t),x(t)-x^{\prime }(t)\rangle \\ &=&\langle \dot{x}(t),x(t)-x_{1}^{\prime }\rangle +\langle \dot{x} (t),x_{1}^{\prime }-x^{\prime }(t)\rangle +\langle \dot{x}^{\prime }(t),x^{\prime }(t)-x_{1}\rangle +\langle \dot{x}^{\prime }(t),x_{1}-x(t)\rangle \\ &\leq &\langle \dot{x}(t),x_{1}^{\prime }-x^{\prime }(t)\rangle +\langle \dot{x}^{\prime }(t),x_{1}-x(t)\rangle\\ &\leq& \Vert \dot{x}(t)\Vert \Vert x_{1}^{\prime }-x^{\prime }(t)\Vert +\Vert \dot{x}^{\prime }(t)\Vert \Vert x_{1}-x(t)\Vert \\ &\leq &\delta ^{-1}\left( \Vert \dot{x}(t)\Vert {\Vert x^{\prime }(t)- \widehat{x}(t)\Vert }(1+\Vert x^{\prime }(t)\Vert )+\Vert \dot{x}^{\prime }(t)\Vert {\Vert x(t)-\widehat{x}(t)\Vert }(1+\Vert x(t)\Vert )\right) \\ &&\cdot \Vert (u(t)-u^{\prime }(t),b(t)-b^{\prime }(t))\Vert. \end{eqnarray*} For all $t\in[0,T]$ define the function \begin{equation*} \chi \left( t\right) :=\delta ^{-1}\left( {\Vert x^{\prime }(t)-\widehat{x} (t)\Vert }(1+\Vert x^{\prime }(t)\Vert )+{\Vert x(t)-\widehat{x}(t)\Vert } (1+\Vert x(t)\Vert )\right). \end{equation*} Then the latter estimate can be rewritten as \begin{equation} {\frac{d}{{dt}}}\frac{1}{2}\left\Vert x(t)-x^{\prime }(t)\right\Vert ^{2}\leq \chi \left( t\right) \left( \Vert \dot{x}(t)\Vert +\Vert \dot{x} ^{\prime }(t)\Vert \right) \Vert (u(t)-u^{\prime }(t),b(t)-b^{\prime }(t))\Vert. \label{chiineq} \end{equation} It follows from (\ref{yaldel}) and (\ref{ccbar}) that $\chi \left( t\right) \leq \delta ^{-1}\left( C+C^{\prime }\right) $.
As $t$ was arbitrarily chosen from a subset of full measure of $\left[ 0,T \right]$, we integrate (\ref{chiineq}) and then employ (\ref{EstTraj3}) and (\ref{EstTraj4}) to get \begin{eqnarray*} &&\left\Vert x(t)-x^{\prime }(t)\right\Vert ^{2}-\left\Vert x(0)-x^{\prime }(0)\right\Vert ^{2}\\ &\leq &\delta ^{-1}\left( C+C^{\prime }\right) \int_{0}^{t}\left( \Vert \dot{x}(s)\Vert +\Vert \dot{x}^{\prime }(s)\Vert \right) \Vert (u(s)-u^{\prime }(s),b(s)-b^{\prime }(s))\Vert ds \\ &\leq &\delta ^{-1}\left( C+C^{\prime }\right) \Vert (u-u^{\prime },b-b^{\prime })\Vert _{\infty }\int_{0}^{t}\left( \Vert \dot{x}(s)\Vert +\Vert \dot{x}^{\prime }(s)\Vert \right) ds \\ &\leq &\underbrace{\delta ^{-2}\left( C+C^{\prime }\right) ^{2}\left( \rho +\left\Vert \left( \bar{u},\bar{b}\right) \right\Vert _{1,1}\right)}_{K(x_0,x^\prime_0)} \Vert (u-u^{\prime},b-b^{\prime })\Vert _{\infty } \end{eqnarray*} for all $t\in \lbrack 0,T]$. By (\ref{ccbar}), $C$ and $C^\prime$ depend continuously on $\alpha_\delta$ and $\alpha^\prime_\delta$, respectively, which in turn depend continuously on $x_0$ and $x^\prime_0$ by (\ref{alphadelta2})). Thus we verify that the obtained continuous function $K(x_0,x\prime_0)$ ensures the claimed estimate \eqref{hoelder}, and we are done with the proof of the theorem. \qed \end{proof}\vspace*{-0.05in} To conclude this section, we present a direct consequence of Theorem~\ref{controltostate} for the case where the initial value $x_0$ in \eqref{sweeping} is fixed. In this case the function $K(\cdot)$ in the estimate \eqref{hoelder} is constant.\vspace*{-0.05in} \begin{corollary} Let $\mathcal{H}$ be a separable Hilbert space, let the uniform Slater condition \eqref{unifslater} hold for a given control $(\bar{u},\bar{b})\in W^{1,1}([0,T],\mathcal{H}^{m})\times W^{1,1}([0,T],\mathbb{R}^{m})$, and let $x_0\in C_{(\bar{u},\bar{b})}$ be an arbitrarily given initial value in \eqref{sweeping}. Then there exist positive numbers $\rho$ and $K$ such that for all controls $(u,b),(u^{\prime },b^{\prime })$ satisfying \eqref{control ball} and the corresponding solutions $x(\cdot)$ and $x^{\prime }(\cdot)$ of the sweeping processes $\big(\mathcal{S}_{\left( u,b,x_{0}\right) }\big)$ and $\big( \mathcal{S}_{\left( u^{\prime },b^{\prime },x_{0}\right) }\big) $ with $x_{0}\in C_{(u,b)}(0)\cap C_{(u^{\prime },b^{\prime })}(0)$, respectively, we have \begin{equation*} \left\Vert x(t)-x^{\prime }(t)\right\Vert ^{2}\leq K\Vert (u-u^{\prime },b-b^{\prime })\Vert _{\infty }\quad \forall t\in \lbrack 0,T]. \end{equation*} \end{corollary}\vspace*{-0.25in} \section{Discrete approximations of controlled sweeping processes}\label{discapp} \setcounter{equation}{0}\vspace*{-0.05in} \noindent The last two sections of the paper are devoted to the study of the following \emph{optimal control problem} for the sweeping process \eqref{sweeping} with controls in polyhedral moving sets \eqref{movpoly} and additional \emph{endpoint constraints} as well as the {\em pointwise equality constraints} on the \emph{$u$-control} functions: \begin{align} \min \left\{ f(u,b)|(u,b)\in W^{1,2}([0,T],\mathbb{R}^{nm}\times \mathbb{R} ^{m}),\,\,\Vert u_{i}(t)\Vert =1\,\,(i=1,\ldots ,m)\right.&\notag\\ \left. 
x_{\left( u,b\right) }(T)\in \Omega \right\},& \tag{P} \end{align} where $\Omega \subseteq\mathbb{R}^{n}$ is a closed subset, $f$ is a cost function (specified later on), and $x_{\left(u,b\right) }$ is the unique trajectory of the polyhedral sweeping process $\big(\mathcal{S}_{(u,b)}\big)$ from \eqref{sweeping} generated by a control pair $(u,b)=(u(\cdot),b(\cdot))$ of the above class on $[0,T]$. Such a control pair $(u,b)$ is called a \emph{feasible solution} to ($P$) if $\|u_i(t)\|=1$ for all $t\in[0,T]$ and $i=1,\ldots,m$, and if $x_{(u,b)}(T)\in\Omega$ for the corresponding trajectory of \eqref{sweeping}. Note that our focus in what follows is on \emph{Lipschitzian} controls in $(P)$, which, by Theorem~\ref{existlip}, uniquely generate Lipschitzian sweeping trajectories under the imposed \emph{uniform Slater condition} \eqref{unifslater}. At the current stage of our developments for $(P)$, we have to restrict ourselves to the case of finite-dimensional state spaces. Our main goal here is to develop the {\em method of discrete approximations} to investigate the sweeping control problem $(P)$ and its discrete counterparts from the viewpoints of both {\em stability} and the derivation of {\em necessary suboptimality} and {\em optimality conditions}. Stability issues address the construction of finite-difference approximations of sweeping differential inclusions such that their feasible solutions strongly approximate a broad class of {\em canonical controls} in the original sweeping process; this notion is introduced in the paper for the {\em first time}. Furthermore, we construct a sequence of discrete-time optimal control problems $(P_k)$ always admitting optimal solutions, which $W^{1,2}$-{\em strongly converge} to a prescribed {\em local minimizer} of the {\em intermediate} class (between weak and strong, including the latter) of the original sweeping control problem $(P)$. This opens the door to deriving {\em necessary optimality conditions} for such minimizers of $(P)$ by using advanced tools of variational analysis and (first-order and second-order) generalized differentiation. Pursuing this approach, we concentrate here on deriving necessary optimality conditions for problems $(P_k)$ with the approximation number $k\in I\!\!N$ being sufficiently large. The obtained necessary optimality conditions for $(P_k)$ serve as constructive {\em suboptimality} conditions for intermediate local minimizers of $(P)$ that are convenient for numerical implementations. This is a \emph{clear advantage} of the method of discrete approximations in comparison with other methods of deriving necessary optimality conditions for continuous-time variational and control problems. In a separate publication, we are going to carry out the limiting procedure of passing from the obtained necessary optimality conditions for $(P_k)$ (i.e., suboptimality conditions for $(P)$) to {\em exact} necessary optimality conditions for intermediate local minimizers of continuous-time sweeping control problems of type $(P)$. The method of discrete approximations was developed in \cite{m95,m06} to establish necessary suboptimality and optimality conditions for {\em Lipschitzian} differential inclusions. Sweeping differential inclusions are highly {\em discontinuous}, and the machinery of Lipschitzian variational analysis is not applicable in the sweeping framework. Further developments of this method in various sweeping control settings can be found in \cite{ao,mor1,cm,cg,mor2,cmn} and the references therein.
However, neither these publications nor those of \cite{bk,pfs,zeidan} exploring other approaches to deriving optimality conditions in different models of sweeping optimal control address additional endpoint constraints $x(T)\in\Omega$ on sweeping trajectories. In this section we focus on the construction of discrete approximations for the constrained sweeping dynamics and local minimizers of $(P)$ while obtaining stability/convergence results, while the next section is devoted to reviewing the required tools of generalized differentiation and their applications to necessary optimality conditions for discrete approximation problems $(P_k)$ giving us suboptimality conditions for intermediate local minimizers of $(P)$.\vspace*{0.03in} Let us start with introducing a new notion of {\em canonical controls} for problem $(P)$ that plays a crucial role in our developments.\vspace*{-0.07in} \begin{definition}\label{canon} We say that a control pair $(u,b)\in W^{1,2}([0,T],\mathbb{R}^{nm}\times\mathbb{R}^m)$ is {\sc canonical} for problem $(P)$ if the following conditions hold:\\ $\bullet$ The functions $u(\cdot)$ and $b(\cdot)$ are Lipschitz continuous on $[0,T]$.\\ $\bullet$ The uniform Slater condition \eqref{unifslater} is satisfied along $(u,b)$.\\ $\bullet$ We have the constraints \begin{equation*} \|u_i(t)\|=1\;\mbox{ for all }\;t\in[0,T]\;\mbox{ and }\;i=1,\ldots,m. \end{equation*} $\bullet$ The derivatives $\dot u(\cdot)$ and $\dot b(\cdot)$ are of bounded variation $(BV)$ on $[0,T]$ together with the derivative of the unique Lipschitz continuous trajectory $x(\cdot)$ of \eqref{sweeping} generated by the control pair $(u,b)$. \end{definition}\vspace*{-0.1in} Observe that the corresponding trajectory to \eqref{sweeping} generated by a canonical control pair {\em may not} satisfy the endpoint constraint $x_{\left(u,b\right)}(T)\in\Omega$, i.e., not every canonical pair is feasible for $(P)$. To proceed with our approach, we construct a sequence of discrete approximations of the sweeping process $(\mathcal{S}_{(u,b)})$ from \eqref{sweeping} over the controlled polyhedron \eqref{movpoly} {\em without any appeal to optimization} as in $(P)$. For each $k\in\mathbb{N}$ define the discrete mesh on $[0,T]$ by \begin{equation} \label{e:DP} \Delta_k:=\left\{0=t^k_0<t^k_1<\ldots<t^k_{\nu(k)-1}<t^k_{\nu(k)}=T\right\} \end{equation} with $h^k_j:=t^k_{j+1}-t^k_j\downarrow 0$, $j=0,\ldots,\nu(k)-1$, as $k\to\infty$. Denote \begin{equation} \label{F} F(u,b,x):=N_{C(u,b)}(x),\;C(u,b):=\big\{x\in\mathbb{R}^n\;\big| \;\left\langle u_i,x\right\rangle\le b_i\;\;(i=1,\ldots,m)\big\}. \end{equation} The following theorem tells us that \emph{any canonical} control pair $(u,b)$ and the corresponding sweeping trajectory $x(\cdot)$ can be \emph{$W^{1,2}$-strongly} approximated by feasible solutions to discrete sweeping processes defined on the partition $\Delta_k$ from \eqref{e:DP} and appropriately extended to the continuous-time interval $[0,T]$.\vspace*{-0.05in} \begin{theorem} \label{da-feas} Let $\left(\bar{u}(\cdot),\bar b(\cdot)\right)$ be a canonical control pair for $(P)$, and let $\bar{x}(\cdot)$ be the corresponding unique solution of the Cauchy problem in \eqref{sweeping}.
Then there exist a mesh $\Delta_k$ in \eqref{e:DP}, a sequence of piecewise linear functions $(\widetilde u^k(\cdot),\widetilde b^k(\cdot),\widetilde x^k(\cdot))$ on $[0,T]$, and a sequence of positive numbers $\delta_k\downarrow 0$ as $k\to\infty$ such that $(\widetilde u^k(0),\widetilde b^k(0),\widetilde x^k(0))=(\bar{u}(0),\bar b(0),x_0)$, \begin{equation}\label{e:a-dc} 1-\delta_k\le\left\|\widetilde u^k_i(t^k_j)\right\|\le 1+\delta_k\;% \mbox{ for all }\;t^k_j\in\Delta_k,\quad i=1,\ldots,m, \end{equation} \begin{equation*} \widetilde x^k(t)=\widetilde x^k(t^k_j)+(t-t^k_j)\widetilde v^k_j,\;\;t^k_j\le t\le t^k_{j+1}\;\;\mbox{with}\;\;-\widetilde v^k_j\in F\big(\widetilde u^k(t^k_j),\widetilde b^k(t^k_j),\widetilde x^k(t^k_j)\big) \end{equation*} for $j=0,\ldots,\nu(k)-1$, $k\in\mathbb{N}$, and the sequence $\{(\widetilde u^k(\cdot),\widetilde b^k(\cdot),\widetilde x^k(\cdot))\}$ converges to $(\bar{u}(\cdot),\bar b(\cdot),\bar{x}(\cdot))$ as $k\to\infty$ in the $W^{1,2}$-norm topology on $[0,T]$. \end{theorem}\vspace*{-0.2in} \begin{proof} As mentioned, the existence of the unique Lipschitz continuous trajectory $\bar{x}(\cdot)$ of the Cauchy problem for the polyhedral sweeping process in \eqref{sweeping} generated by the given canonical control pair $(\bar{u}(\cdot),\bar b(\cdot))$ follows from Theorem~\ref{existlip}. Now we are in a position to deduce the claimed assertions from \cite[Theorem~4.1]{mor1} under the BV assumption on $\dot{\bar{u}}(\cdot)$, $\dot{\bar b}(\cdot)$, and $\dot{\bar{x}}(\cdot)$. Indeed, the qualification condition (H4) from \cite[Theorem~4.1]{mor1} is equivalent to the uniform Slater condition \eqref{unifslater} by our new result obtained in Proposition~\ref{slaterequiv}. Thus the application of \cite[Theorem~4.1]{mor1} gives us all the assertions claimed in this theorem. \qed \end{proof}\vspace*{-0.1in} From now on, we consider for simplicity problem $(P)$, where the cost function is defined in the \emph{Mayer form} via a given terminal state function $\varphi\colon\mathbb{R}^n\to\mathbb{R}$ by \begin{equation*} f(u,b):=\varphi\big(x_{u,b}(T)\big). \end{equation*} If the function $\varphi$ is lower semicontinuous, then problem $(P)$ admits a (global) \emph{optimal solution} in $W^{1,2}([0,T],\mathbb{R}^{nm}\times% \mathbb{R}^m)$ provided that there is a bounded minimizing sequence of feasible solutions; see \cite[Theorem~3.1]{mor1} and its proof. Since our main attention is paid to deriving necessary (sub)optimality conditions in $(P)$, it is natural to define an appropriate notion of \emph{local minimizers}.\vspace*{0.03in} The notion of local minimizers of our study in this paper occupies an \emph{% intermediate} position between the classical notions of \emph{weak} and \emph{strong} minimizers in variational and control problems, while encompassing the latter. 
Following \cite{m95}, where this notion was initiated for Lipschitzian differential inclusions (see also \cite{m06} for more details), we keep the name ``intermediate'' for the version of this notion in the setting of our sweeping control problem $(P)$.\vspace*{-0.1in} \begin{definition} \label{ilm} We say that a feasible control pair $(\bar{u},\bar b)$ for $(P)$ is an {\sc intermediate local minimizer} in this problem if there exists $\varepsilon>0$ such that \begin{equation*} \varphi\big(x_{\bar{u},\bar b}(T)\big)\le\varphi\big(x_{u,b}(T)\big) \end{equation*} for any feasible solution to $(P)$ satisfying the condition \begin{equation} \label{loc} \|(u,b)-(\bar{u},\bar b)\|_{W^{1,2}}+\|x_{u,b}-x_{\bar{u},\bar b}\|_{W^{1,2}}\le\varepsilon. \end{equation} \end{definition} The notion of \emph{strong local minimizer} for $(P)$ is a particular case of Definition~\ref{ilm}, where the norm $\|x_{u,b}-x_{\bar{u},\bar b}\|_{W^{1,2}}$ in \eqref{loc} is replaced with the norm $\|x_{u,b}-x_{\bar{u},\bar b}\|_{\mathcal{C}}$ in the space of continuous functions $\mathcal{C}([0,T],\mathbb{R}^n)$. It is not hard to construct examples showing that there exist intermediate local minimizers to $(P)$ that fail to be strong ones; see \cite{m95,m06,vinter} even for simpler problems.\vspace*{0.05in} Having $F(u,b,x)$ from \eqref{F}, fix a Lipschitz continuous intermediate local minimizer $(\bar{u},\bar b)$ for $(P)$ with the corresponding sweeping trajectory $\bar{x}(\cdot):=x_{\bar{u},\bar b}$ and assume that the uniform Slater condition \eqref{unifslater} holds along $(\bar{u},\bar b)$. Take the mesh $\Delta_k$ from \eqref{e:DP} and identify the points $t^k_j$ with the index $j$ if no confusion arises. Consider now discrete triples $(u^k,b^k,x^k)$ with the components \begin{equation*} (u^k,b^k,x^k):=(u^k_0,u^k_1,\ldots,u^k_{\nu(k)},b^k_0,b^k_1,\ldots,b^k_{\nu(k)},x^k_0,x^k_1,\ldots,x^k_{\nu(k)}) \end{equation*} and form the sequence of \emph{discrete approximation} problems $(P_k)$ by: \begin{align} &\mbox{minimize}\quad\varphi\big(x^k_{\nu(k)}\big)+\label{disc-cost}\\ &\frac{1}{2}\sum_{j=0}^{\nu(k)-1} \int_{t^k_j}^{t^k_{j+1}}\bigg\|\bigg(\dfrac{u^k_{j+1}-u^k_j}{h^k_j},\dfrac{ b^k_{j+1}-b^k_j}{h^k_j},\dfrac{x^k_{j+1} -x^k_j}{h^k_j}\bigg)-\big(\dot{\bar{ u}}(t),\dot{\bar b}(t),\dot{\bar{x}}(t)\big)\bigg\|^2dt\notag \end{align} over the triples $(u^k,b^k,x^k)$ subject to the following constraints: \begin{equation} \label{disc-sw} x^k_{j+1}\in x^k_j-h^k_j F(u^k_j,b^k_j,x^k_j),\;j=0,\ldots,\nu(k)-1, \end{equation} \begin{equation}\label{ini} x^k_0=x_0\in C_{\bar{u},\bar b}(0),\;(u^k_0,b^k_0)=\big(\bar{u}(0),\bar b(0)\big),\;x^k_{\nu(k)}\in\Omega+\xi_kI\!\!B, \end{equation} \begin{equation}\label{u-const} 1-\delta_k\le\left\|u^k_i(t^k_j)\right\|\le 1+\delta_k\;\mbox{ for all } \;t^k_j\in\Delta_k,\quad i=1,\ldots,m, \end{equation} \begin{equation} \label{ic1} \sum_{j=0}^{\nu(k)-1}\int_{t^k_j}^{t^k_{j+1}}\Big\|\big(u^k_j,b^k_j,x^k_j \big)-\big(\bar{u}(t),\bar b(t),\bar{x}(t)\big)\Big\|^2dt\le\dfrac{\varepsilon}{2}, \end{equation} \begin{equation}\label{ic2} \sum_{j=0}^{\nu(k)-1}\int_{t^k_j}^{t^k_{j+1}}\bigg\|\bigg(\dfrac{u^k_{j+1}-u^k_j}{h^k_j},\dfrac{b^k_{j+1}-b^k_j}{h^k_j},\dfrac{x^k_{j+1}-x^k_j }{h^k_j}\bigg)- \big(\dot{\bar{u}}(t),\dot{\bar b}(t),\dot{\bar{x}}(t)\big) \bigg\|^2dt\le\dfrac{\varepsilon}{2}, \end{equation} where $\{\delta_k\}$ in \eqref{u-const} is taken from Theorem~\ref{da-feas} applied to $(\bar{u},\bar b)$ and can be chosen such that both inequalities in \eqref{u-const} are strict, where
$\varepsilon>0$ in \eqref{ic1} and \eqref{ic2} is taken from Definition~\ref{ilm} of the intermediate local minimizer $(\bar{u},\bar b)$ for $(P)$, and where the sequence $\{\xi_k\}$ of the endpoint perturbations in \eqref{ini} is defined by \begin{equation} \label{end-pert} \xi_k:=\|\widetilde x^k(T)-\bar{x}(T)\|\to 0\;\mbox{ as }\;k\to\infty \end{equation} via the sequence $\{\widetilde x^k(\cdot)\}$ approximating $\bar{x}(\cdot)$ in Theorem~\ref{da-feas}.\vspace*{0.03in} The next theorem establishes the existence of optimal solutions to problems $(P_k)$ for all $k\in\mathbb{N}$ and then shows that any sequence of optimal controls $\{(\bar{u}^k,\bar b^k)\}$ to $(P_k)$ constructed for the given {\em canonical intermediate local minimizer} $(\bar{u},\bar b)$ of $(P)$, together with the corresponding sequence of discrete trajectories $\{\bar{x}^k\}$ piecewise linearly extended to the whole interval $[0,T]$, \emph{strongly $W^{1,2}$-converge} as $k\to\infty$ to the prescribed local optimal triple $(\bar{u},\bar b,\bar{x})$ for $(P)$.\vspace*{-0.07in} \begin{theorem}\label{ilm-conver} Let $(\bar{u},\bar b)$ be a canonical intermediate local minimizer for $(P)$ with the corresponding sweeping trajectory $\bar{x}(\cdot)$. The following assertions hold:\\[0.5ex] {\bf(i)} If the cost function $\varphi$ is lower semicontinuous around $\bar{x}(T)$, then each problem $(P_k)$ admits an optimal solution whenever $k\in\mathbb{N}$ is sufficiently large.\newline {\bf(ii)} If in addition $\varphi$ is continuous around $\bar{x}(T)$, then every sequence of optimal solutions $\{(\bar{u}^k,\bar b^k)\}$ to $(P_k)$ and the corresponding sequence of discrete trajectories $\{\bar{x}^k\}$, being piecewise linearly extended to $[0,T]$, converge to $(\bar{u},\bar b,\bar{x})$ as $k\to\infty$ in the norm topology of $W^{1,2}([0,T],\mathbb{R}^{mn}\times\mathbb{R}^m\times\mathbb{R}^n)$. \end{theorem}\vspace*{-0.2in} \begin{proof} To verify (i), observe first that the set of feasible solutions to problem $(P_k)$ is {\em nonempty} for all large $k\in\mathbb{N}$. Namely, we show that the approximating sequence $\{(\widetilde u^k,\widetilde b^k,\widetilde x^k)\}$ from Theorem~\ref{da-feas}, being applied to the given {\em canonical} intermediate local minimizer $(\bar{u},\bar b)$ of the original problem $(P)$, consists of feasible solutions to $(P_k)$ when $k$ is sufficiently large. Indeed, the discrete sweeping inclusions \eqref{disc-sw} with the initial data in \eqref{ini} are clearly satisfied for $\{(\widetilde u^k,\widetilde b^k,\widetilde x^k)\}$ together with the control constraints \eqref{u-const}; the conditions in \eqref{ic1} and \eqref{ic2} also hold for large $k$ by the $W^{1,2}$-convergence of the extended triples $\{(\widetilde u^k(t),\widetilde b^k(t),\widetilde x^k(t))\}$ to $(\bar{u}(t),\bar b(t), \bar{x}(t))$ on $[0,T]$ as $k\to\infty$, and the fulfillment of the endpoint constraint in \eqref{ini} for the approximating trajectories $\widetilde x^k(\cdot)$ follows from $\bar{x}(T)\in\Omega$ and the definition of $\xi_k$ in \eqref{end-pert} by Theorem~\ref{da-feas} applied to the canonical intermediate local minimizer $(\bar{u},\bar b)$. It follows from the construction of $(P_k)$ and the structure of $F$ in \eqref{F} that the set of feasible solutions to $(P_k)$ is closed. Furthermore, the constraints in \eqref{u-const}--\eqref{ic2} ensure the boundedness of the latter set.
Since $\varphi$ is assumed to be lower semicontinuous around $\bar{x}(T)$, the existence of optimal solutions to such $(P_k)$ follows from the classical Weierstrass existence theorem in finite dimensions. Now we verify assertion (ii) of the theorem. Consider an arbitrary sequence $\{(\bar{ u}^k(\cdot),\bar b^k(\cdot),\bar{x}^k(\cdot))\}$ of optimal controls to $ (P_k)$ and the associated trajectories of \eqref{disc-sw} that are piecewise linearly extended to $[0,T]$. We aim at proving \begin{equation} \label{lim-con} \lim_{k\to\infty}\int^T_0\big\|(\dot{\bar{u}}^k(t),\dot{\bar b}^k(t),\dot{\bar{x}} ^k(t)\big)-(\dot{\bar{u}}(t),\dot{\bar b}(t),\dot{\bar{x}}(t)\big)\big\|^2dt=0, \end{equation} which readily yields the claimed convergence in (ii). Supposing on the contrary that \eqref{lim-con} fails gives us a subsequence of $k\to\infty$ (no relabeling) along which the limit in \eqref{lim-con} equals to some $\sigma>0$. Due to \eqref{ic2}, the sequence $ \{(\dot{\bar{u}}^k(t),\dot{\bar b}^k(t),\dot{\bar{x}}^k(t))\}$ is weakly compact in $L^2([0,T],\mathbb{R}^{mn}\times\mathbb{R}^m\times\mathbb{R}^n)$, and hence it contains a subsequence that converges to some triple $ (\vartheta^u(\cdot),\vartheta^b(\cdot),\vartheta^x(\cdot))\in L^2([0,T], \mathbb{R}^{mn}\times\mathbb{R}^m\times\mathbb{R}^n)$ weakly in this space. Employing Mazur's weak closure theorem tells us that there is a sequence of convex combinations of $(\dot{\bar{u}}^k(\cdot),\dot{\bar b}^k(\cdot),\dot{\bar{x}}^k(\cdot))$, which converges to $(\vartheta^u(\cdot),\vartheta^b(\cdot),\vartheta^x(\cdot))$ strongly in $L^2([0,T],\mathbb{R}^{mn}\times\mathbb{R} ^m\times\mathbb{R}^n)$, and hence almost everywhere on $[0,T]$ along a subsequence. Define \begin{equation*} \big(\widehat u(t),\widehat b(t),\widehat x(t)\big):=(\bar{u}(0),\bar b(0),x_0)+\int^t_0\big(\vartheta^u(\tau),\vartheta^b(\tau),\vartheta^x(\tau)% \big)d\tau\;\mbox{ for all }\;t\in[0,T] \end{equation*} and get that $(\dot{\widehat u}(t),\dot{\widehat b}(t),\dot{\widehat x}% (t))=(\vartheta^u(t),\vartheta^b(t),\vartheta^x(t))$ for a.e.\ $t\in[0,T]$. It follows from the construction of $(\widehat u(t),\widehat b(t),\widehat x(t))$ and the passage to the limit as $k\to\infty$ in \eqref{ini}--\eqref{ic2} that $\|\widehat u(t)\|=1$ on $[0,T]$, that $\widehat x(T)\in\Omega$, and that $(\widehat u(t),\widehat b(t),\widehat x(t))$ belongs to the $\varepsilon$-neighborhood of $(\bar{u}% (\cdot),\bar b(\cdot),\bar{x}(\cdot))$ in the norm topology of $% W^{1,2}([0,T],\mathbb{R}^{mn}\times\mathbb{R}^m\times\mathbb{R}^n)$. Let us now check that the limiting triple $(\widehat u(\cdot),\widehat b(\cdot),\widehat x(\cdot))$ satisfies the sweeping inclusion \begin{equation} \label{sweep} -\dot{x}(t)\in N_{C_{(u(t),b(t))}}\big(x(t)\big)\;\mbox{ for a.e. }\;t\in[0,T] \end{equation} over the controlled polyhedron. It follows from \eqref{disc-sw} due to \eqref{movpoly} and \eqref{F} that \begin{equation*} \left\langle\bar{u}^k_i(t_j),\bar{x}^k(t_j)\right\rangle\le\bar b^k_i(t_j)\; \mbox{ for all }\;i=1,\ldots,m,\;\mbox{ all }\;j=0,\;\nu(k)-1,\;\mbox{ and } \;k\in\mathbb{N}. \end{equation*} Passing there to the limit as $k\to\infty$ ensures the conditions \begin{equation} \label{Ck} \left\langle\widehat u_i(t),\widehat x(t)\right\rangle\le\widehat b_i(t)\; \mbox{ for all }\;i=1,\ldots,m\;\mbox{ and }\;t\in[0,T], \end{equation} i.e., $\widehat x(t)\in C_{(\widehat u(t),\widehat b(t))}$ on $[0,T]$. 
To proceed further, we use the construction of $F$ in \eqref{F} and rewrite \eqref{disc-sw} along the optimal triple $(\bar{u}^k,\bar b^k,\bar{x}^k)$ for $(P_k)$ as \begin{equation} \label{disc-sw1} -\frac{\bar{x}^k(t_{j+1})-\bar{x}^k(t_j)}{h^k_j}\in N_{C_{(\bar{u}^k(t_j),\bar b^k(t_j))}}\big(\bar{x}^k(t_j)\big)\quad (j=0,\ldots,\nu(k)-1,\,\,k\in\mathbb{N}). \end{equation} Recalling the piecewise linear extensions $(\bar{u}^k(t),\bar b^k(t),\bar{x}^k(t))$ of the discrete triples $(\bar{u}^k,\bar b^k,\bar{x}^k)$ and their strong $W^{1,2}$-convergence to the triple $(\widehat u(t),\widehat b(t),\widehat x(t))$ satisfying \eqref{Ck} tells us by passing to the limit in \eqref{disc-sw1} as $k\to\infty$ that the sweeping inclusion \eqref{sweep} holds for $(\widehat u(t),\widehat b(t),\widehat x(t))$. The verification of the latter involves the usage of the aforementioned Mazur theorem and the outer semicontinuity (closed-graph) property of the convex normal cone \eqref{nc} with respect to pointwise perturbations of the moving polyhedron $C_{(u,b)}$ in \eqref{sweep}. All the above shows that the limiting triple $(\widehat u,\widehat b,\widehat x)$ is a feasible solution to problem $(P)$ while satisfying the $\varepsilon$-localization condition \eqref{loc}. Passing finally to the limit in $(P_k)$, taking into account the assumed continuity of $\varphi$, and remembering the value $\sigma>0$ of the chosen limiting point of the sequence in \eqref{lim-con}, we get that $\varphi(\widehat x(T))<\varphi(\bar{x}(T))$. This contradicts the imposed local optimality of $(\bar{u},\bar b)$ in $(P)$ and hence completes the proof of the theorem. \qed \end{proof}\vspace*{-0.27in} \section{Optimality conditions via discrete approximations}\label{sec:optim-disc}\setcounter{equation}{0}\vspace*{-0.05in} The results of the previous section show that optimal solutions to the finite-dimensional discrete-time problem $(P_k)$ are approximating {\em suboptimal} solutions to the original sweeping control problem $(P)$ of infinite-dimensional dynamic optimization. Therefore, necessary optimality conditions for solutions to problems $(P_k)$, when $k\in\mathbb{N}$ is sufficiently large, can be viewed as (necessary) {\em suboptimality conditions} for the prescribed intermediate local minimizers of $(P)$. This observation allows us to justify solving the original sweeping control problem by applying appropriate numerical techniques based on necessary optimality conditions for the discrete approximations. Each discrete-time problem $(P_k)$ can be reduced to a nondynamic problem of \emph{mathematical programming} in finite-dimensional spaces. As we see, problems $(P_k)$ contain constraints of special types, the most challenging of which are given by \emph{increasingly many inclusions} in \eqref{disc-sw} that come from the sweeping dynamics. The latter constraints of the \emph{graphical type} require appropriate tools of generalized differentiation to deal with. In particular, Clarke's nonsmooth analysis cannot be applied here, since his normal cone is usually too large for graphical sets associated with velocity mappings in \eqref{sweeping} and \eqref{disc-sw}. In fact, the only (known to us) machinery of generalized differentiation suitable for these purposes is the one introduced by the third author and then developed by many researchers; see, e.g., the books \cite{m06,m18,rw} and the references therein. We now briefly review what is needed in this paper.
Given a set $\Theta\subset\mathbb{R}^n$ locally closed around $\bar{z}\in\Theta$, the (Mordukhovich basic/limiting) {\em normal cone} to $\Theta$ at $\bar{z}$ is defined by \begin{align}\label{nor_con} &N(\bar{z};\Theta)=N_\Theta(\bar{z}):=\\ &\big\{v\in\mathbb{R}^n\;\big|\;\exists\,z_k\to\bar{z},\;w_k\in\Pi(z_k;\Theta),\;\alpha_k\ge 0\;\mbox{ with }\;\alpha_k(z_k-w_k)\to v\big\},\notag \end{align} where $\Pi(z;\Theta):=\{w\in\Theta\;|\;\|z-w\|=d(z,\Theta)\}$ is the Euclidean projector of $z\in\mathbb{R}^n$ onto $\Theta$. While for convex sets $\Theta$ the normal cone \eqref{nor_con} agrees with the classical one \eqref{nc}, in general the set of normals \eqref{nor_con} may be nonconvex even for such simple sets as, e.g., the graph of the absolute value function $|\cdot|$ at $\bar{z}=(0,0)\in\mathbb{R}^2$. Nevertheless, the normal cone \eqref{nor_con} for sets, as well as the coderivatives of set-valued mappings and (first-order and second-order) subdifferentials of extended-real-valued functions generated by \eqref{nor_con}, enjoy \emph{comprehensive calculus rules} that are based on \emph{variational and extremal principles} of variational analysis. Given further a set-valued mapping ${\cal F}\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ with the graph $\mbox{\rm gph}\,{\cal F}:=\{(x,y)\in\mathbb{R}^n\times\mathbb{R}^m\;|\;y\in{\cal F}(x)\}$ locally closed around $(\bar{x},\bar{y})\in\mbox{\rm gph}\,{\cal F}$, the {\em coderivative} of ${\cal F}$ at $(\bar{x},\bar{y})$ is defined by \begin{equation}\label{coderivative} D^*{\cal F}(\bar{x},\bar{y})(u):=\big\{v\in\mathbb{R}^n\;\big|\;(v,-u)\in N\big((\bar{x},\bar{y});\mbox{\rm gph}\,{\cal F}\big)\big\},\quad u\in\mathbb{R}^m. \end{equation} Given finally an extended-real-valued function $f\colon\mathbb{R}^n\to\Bar{\R}:=(-\infty,\infty]$ lower semicontinuous around $\bar{x}$ with $f(\bar{x})<\infty$ and the epigraph $\mbox{\rm epi}\, f:=\{(x,\alpha)\in\mathbb{R}^{n+1}\;|\;\alpha\ge f(x)\}$, the (first-order) {\em subdifferential} of $f$ at $\bar{x}$ can be defined geometrically via the normal cone \eqref{nor_con} as \begin{equation}\label{sub} \partial f(\bar{x}):=\big\{v\in\mathbb{R}^n\;\big|\;(v,-1)\in N\big((\bar{x},f(\bar{x}));\mbox{\rm epi}\, f\big)\big\}, \end{equation} while it admits various analytic descriptions that can be found in the aforementioned books. Observe that the normal cone \eqref{nor_con} is the subdifferential \eqref{sub} of the indicator function $\delta_\Theta(x)$ of $\Theta$, which equals $0$ for $x\in\Theta$ and $\infty$ otherwise. The {\em second-order subdifferential} of $f$ at $\bar{x}$ relative to $\bar{v}\in\partial f(\bar{x})$ is defined as the coderivative of the first-order subdifferential mapping by \begin{equation}\label{2nd} \partial^2f(\bar{x},\bar{v})(d):=\big(D^*\partial f\big)(\bar{x},\bar{v})(d),\quad d\in\mathbb{R}^n. \end{equation} This construction naturally arises in optimal control of sweeping processes of type \eqref{sweeping}, where the right-hand side is described by the normal cone mapping. We look for second-order evaluations of the coderivative in \eqref{2nd} applied to the normal cone mapping $F$ in \eqref{F} generated by the control-dependent convex polyhedron $C(u,b)$ in the sweeping process \eqref{sweeping}. The result needed in this paper follows from \cite[Theorem~4.3]{mor2}, where it was derived by using calculations in \cite{mo} and Robinson's theorem on the calmness property of polyhedral multifunctions \cite{rob}.
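To illustrate the nonconvexity phenomenon mentioned above, a direct computation based on \eqref{nor_con} (included here only for illustration and not used in what follows) shows that, for the graph $\Theta:=\mbox{\rm gph}\,|\cdot|=\{(x,y)\in\mathbb{R}^2\;|\;y=|x|\}$ and $\bar{z}=(0,0)$, we have \begin{equation*} N\big((0,0);\Theta\big)=\big\{(v,w)\in\mathbb{R}^2\;\big|\;w\le-|v|\big\}\cup\big\{(v,w)\in\mathbb{R}^2\;\big|\;w=v\big\}\cup\big\{(v,w)\in\mathbb{R}^2\;\big|\;w=-v\big\}, \end{equation*} where the first cone is generated by points $z_k$ whose Euclidean projection onto $\Theta$ is the origin itself, while the two lines arise as limits of normals generated at nearby graph points $(x,|x|)$ with $x\ne 0$; the resulting union is obviously nonconvex.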
To formulate the required result, consider the matrix $$ A:=[u_{ij}]\,\,(i=1,\ldots,m;j=1,\ldots,n) $$ with the vector columns $u_i$ as well as the transpose matrix $A^T$. As usual, the symbol $^\perp$ indicates the orthogonal complement of a vector in the corresponding space. Having the controlled polyhedron $C(u,b)$ in \eqref{F}, take its {\em active indices} at $(u,b,x)$ with $x\in C(u,b)$ denoted by \begin{equation*} I(u,b,x):=\big\{i\in\{1,\ldots,m\}\;\big|\;\langle u_i,x\rangle=b_i\big\}. \end{equation*} The {\em positive linear independence constraint qualification} (PLICQ) at $(u,b,x)$ is \begin{equation}\label{PLICQ} \bigg[\sum_{i\in I(u,b,x)}\alpha_iu_i=0,\,\alpha_i\ge 0\bigg]\Longrightarrow\big[\alpha_i=0\;\;\mbox{for all}\;\;i\in I(u,b,x)\big]. \end{equation} This condition is significantly weaker than the classical {\em linear independence constraint qualification} (LICQ), which corresponds to \eqref{PLICQ} with $\alpha_i\in\mathbb{R}$ and is not used in this paper. Considering the moving polyhedron as in \eqref{movpoly}, it is not hard to check that our basic uniform Slater condition from \eqref{unifslater} is equivalent to the fulfillment of PLICQ along the feasible triple $(x(t),u(t),b(t))$ for all $t\in[0,T]$; see \cite{mor1} for more discussions on this topic. Given $x\in C(u,b)$ and $v\in N(x;C(u,b))$, define the sets \begin{equation*} Q(p):=\left\{\begin{array}{ll} q_i=0\;\mbox{ for all }\;i\;\mbox{ with either }\;\langle u_i,x\rangle<b_i\;\mbox{ or }\;p_i=0,\;\mbox{ or}\;\langle u_i,y\rangle<0,\\ q_i\ge 0\;\mbox{ for all }\;i\;\mbox{ such that }\;\langle u_i,x\rangle=b_i,\;p_i=0,\;\mbox{ and }\;\langle u_i,y\rangle>0, \end{array} \right. \end{equation*} \begin{equation*} P(y):=\big\{p\in N_{\mathbb{R}^m_-}(Ax-b)\;\big|\;A^Tp=v\big\}\;\mbox{ for }\;y\in\bigcap_{\{i\;|\;p_i>0\}}u_i^\perp, \end{equation*} where the normal cone to the nonpositive orthant $\mathbb{R}^m_-$ is easy to compute. Now we are ready to present the required evaluation of the coderivative of the normal cone mapping $F(u,b,x)$ generated by the controlled polyhedron in \eqref{F}. The following lemma is a slight modification of \cite[Theorem~4.3]{mor2}.\vspace*{-0.1in} \begin{lemma}\label{cod-eval} Taking $F$ and $C(u,b)$ from \eqref{F}, suppose that the active vector columns $\{u_i\;|\;i\in I(u,b,x)\}$ are positively linearly independent for any $(u,b,x)$ with $x\in C(u,b)$. Then for all such $(u,b,x)$, all $v\in N(x;C(u,b))$, and all $y\in\cap_{\{i\;|\;p_i>0\}}u_i^\perp$ we have the coderivative upper estimate \begin{equation}\label{cod_inclusion} D^*F(u,b,x,v)(y)\subset\bigcup\limits_{\begin{subarray}{l}p\in P(y)\\q\in Q(p) \end{subarray}}\left\{\left(\begin{array}{c} A^Tq\\\hline p_{1}y+q_1x\\ \vdots\\ p_my+q_mx\\\hline-q \end{array} \right)\right\}. \end{equation} \end{lemma} Note that imposing the LICQ condition instead of PLICQ ensures that the set $P(y)$ is a singleton and that the inclusion in \eqref{cod_inclusion} holds as equality; see \cite[Theorem~4.3]{mor2}. However, for the purpose of this paper it is sufficient to have the inclusion in \eqref{cod_inclusion} under the less restrictive PLICQ. To proceed further, we need one more auxiliary result giving us necessary optimality conditions for a finite-dimensional nondynamic problem of {\em mathematical programming} with finitely many equality, inequality and inclusion (geometric) constraints.
The next lemma is obtained by combining the necessary optimality conditions from \cite[Theorem~6.5]{m18} for mathematical programs containing one geometric constraint and the intersection rule for limiting normals taken from \cite[Corollary~2.17]{m18}. Arguing in this way, we can derive necessary optimality conditions for mathematical programs described by lower semicontinuous cost and inequality constraint functions as well as continuous functions describing equality constraints. However, we confine ourselves to considering problems with just locally Lipschitzian functions for cost and inequality constraints and smooth functions for equality constraints, since only such functional constraints appear in mathematical programs to which we reduce the discrete-time sweeping control problems $(P_k)$.\vspace*{-0.1in} \begin{lemma}\label{math-prog} Consider the following problem of mathematical programming: \begin{equation} \tag{MP} \left\{\begin{array}{ll}\mbox{minimize }\;f_0(z)\;\mbox{ as }\;z\in\mathbb{R}^d\;\mbox{ subject to}\\ f_i(z)\le 0\;\mbox{ for }\;i=1,\ldots,s,\\ g_j(z)=0\;\mbox{ for }\;j=0,\ldots,r,\\ z\in\Theta_j\;\mbox{ for }\;j=0,\ldots,l, \end{array}\right. \end{equation} where all the functions $f_i$ and $g_j$ are real-valued. Given a local minimizer $\bar{z}$ of $(MP)$, assume that the functions $f_i$ are locally Lipschitzian around $\bar{z}$ for $i=0,\ldots,s$, the functions $g_j$ are continuously differentiable around this point for $j=0,\ldots,r$, and the sets $\Theta_j$ are locally closed around $\bar{z}$ for all $j=0,\ldots,l$. Then there exist nonnegative numbers $\lambda_0,\ldots,\lambda_{s}$, real numbers $\mu_0,\ldots,\mu_r$, and vectors $z^*_j\in\mathbb{R}^d$ for $j=0,\ldots,l$, not all equal to zero simultaneously, such that \begin{equation*} \lambda_i f_i(\bar{z})=0\;\mbox{ for }\;i=1,\ldots,s, \end{equation*} \begin{equation*} z^*_j\in N(\bar{z};\Theta_j)\;\mbox{ for }\;j=0,\ldots,l, \end{equation*} \begin{equation*} -\sum_{j=0}^l z^*_j\in\sum_{i=0}^s\lambda_i\partial f_i(\bar{z})+\sum_{j=0}^{r}\mu_j\nabla g_j(\bar{z}). \end{equation*} \end{lemma} Having Lemma~\ref{cod-eval} and Lemma~\ref{math-prog} in hand, we are now in a position to establish necessary conditions for optimal solutions to problems $(P_k)$ from \eqref{disc-cost}--\eqref{ic2} whenever the approximation number $k\in\mathbb{N}$ is sufficiently large. The obtained relationships involve the given intermediate local minimizer for the sweeping optimal control problem $(P)$ and thus present necessary suboptimality conditions for the original continuous-time problem due to Theorem~\ref{ilm-conver}. For any $x\in\mathbb{R}^n$, $y=(y_1,\ldots,y_m)\in\mathbb{R}^{nm}$ with $y_i\in\mathbb{R}^n$ $(i=1,\ldots,m)$, and $\alpha=(\alpha_1,\ldots,\alpha_m)\in\mathbb{R}^m$ we use the symbols \begin{equation*} {\rm rep}_m(x):=(x,\ldots,x)\in\mathbb{R}^{nm}\;\mbox{ and }\;[\alpha,y]:=(\alpha_1y_1,\ldots,\alpha_my_m)\in\mathbb{R}^{nm}. \end{equation*} \begin{theorem}\label{nc-disc} Let $(\bar{u},\bar b)$ be a canonical intermediate local minimizer of $(P)$ generating the trajectory $\bar{x}=\bar{x}(\cdot)$ of the controlled polyhedral sweeping process \eqref{sweeping} such that the cost function $\varphi$ is locally Lipschitzian around $\bar{x}(T)$.
Fix an optimal triple $(\bar{u}^k,\bar b^k,\bar{x}^k)$ in problem $(P_k)$ with the components \begin{equation*} (\bar{u}^k,\bar b^k,\bar{x}^k):=(\bar{u}^k_0,\bar{u}^k_1,\ldots,\bar{u}^k_{\nu(k)},\bar b^k_0,\bar b^k_1,\ldots,\bar b^k_{\nu(k)},\bar{x}^k_0,\bar{x}^k_1,\ldots,\bar{x}^k_{\nu(k)}) \end{equation*} and choose $k\in\mathbb{N}$ to be sufficiently large. Denote the quantities \begin{align*} \theta_{j}^{uk}&:=\int_{t_j^k}^{t_{j+1}^k}\(\frac{\bar{u}_{j+1}^k-\bar{u}_j^k}{h^k_j}-\dot{\bar{u}}(t)\)dt,\quad \theta_{j}^{bk}:=\int_{t_j^k}^{t_{j+1}^k}\(\frac{\bar b_{j+1}^k-\bar b_j^k}{h^k_j}-\dot{\bar b}(t)\)dt,\\ \theta_{j}^{xk}&:=\int_{t_j^k}^{t_{j+1}^k}\(\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h^k_j}-\dot{\bar{x}}(t)\)dt \end{align*} and define the set $\Omega_k:=\Omega+\xi_kI\!\!B$, where $\xi_k$ is taken from the construction of problem $(P_k)$. Then there exist a multiplier $\lambda^k\ge 0$, an adjoint triple $p_j^k=(p_{j}^{xk},p_{j}^{uk},p_{j}^{bk})\in\mathbb{R}^{n+mn+m}$ $(j=0,\ldots,\nu(k))$, as well as vectors $\eta^k=(\eta_0^k,\ldots,\eta^k_{\nu(k)})\in\mathbb{R}^{m(\nu(k)+1)}_+$, $\alpha^{1k}=\(\alpha_{0}^{1k},\ldots,\alpha_{\nu(k)}^{1k}\)\in\mathbb{R}^{m(\nu(k)+1)}_+$, $\alpha^{2k}=(\alpha_{0}^{2k},\ldots,\alpha_{\nu(k)}^{2k})\in\mathbb{R}^{m(\nu(k)+1)}_{+}$, and $\gamma^k=(\gamma_0^k,\ldots,\gamma^k_{\nu(k)-1})\in\mathbb{R}^{m\nu(k)}$ such that \begin{equation}\label{ntc0} \lambda^k+\left \|\alpha^{1k}-\alpha^{2k}\right \|+\displaystyle\left \|\eta_{\nu(k)}^k\right \|+\sum_{j=0}^{\nu(k)-1}\left \| p_{j}^{xk}\right \|+\left \| p_{0}^{uk}\right \|+\left \| p_{0}^{bk}\right \|\ne 0, \end{equation} \begin{equation}\label{ntc1} \lambda^k+\left \|\alpha^{1k}-\alpha^{2k}\right \|+\left \|\gamma^k\right \|+\left \| p^{uk}_{\nu(k)}\right \|+\left \| p^{bk}_{\nu(k)}\right \|\ne 0, \end{equation} and we have the following conditions:\\[1ex] {\sc $\bullet$ dynamic relationships}, which are satisfied for all indices $j=0,\ldots,\nu(k)-1$ and $i=1,\ldots,m:$ \begin{equation}\label{87} -\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h^k_j} =\sum_{i=1}^{m}\eta_{ij}^k\bar{u}_{ij}^k, \end{equation} \begin{equation}\label{cona} \begin{array}{ll} &\dfrac{p_{j+1}^{uk}-p_{j}^{uk}}{h^k_j}-\dfrac{2}{h^k_j}\[\alpha_{j}^{1k}-\alpha_{j}^{2k},\bar{u}_{j}^k\]\\ &=\[\gamma_{j}^k,\mbox{\rm rep}\,_m(\bar{x}_j^k)\]+\[\eta_j^k,\mbox{\rm rep}\,_m\(-\dfrac{1}{h^k_j}\lambda^k\theta_{j}^{xk}-\lambda^k+p_{j+1}^{xk}\)\], \end{array} \end{equation} \begin{equation}\label{conb} \frac{p_{j+1}^{bk}-p_{j}^{bk}}{h^k_j}=-\gamma_{j}^k, \end{equation} \begin{equation}\label{conx} \frac{p_{j+1}^{xk}-p_{j}^{xk}}{h^k_j}=\sum_{i=1}^{m}\gamma_{ij}^k\bar{u}_{ij}^k, \end{equation} where the components of the vectors $\gamma^k_j$ are such that \begin{equation}\label{congg1} \begin{cases} \gamma^k_{ij}=0\;\;\mbox{if}\;\;\langle\bar{u}^k_{ij},\bar{x}^k_j\rangle<\bar b^k_{ij}\;\;\mbox{or}\;\;\eta^k_{ij}=0,\;\Big\langle\bar{u}_{ij}^k,-\dfrac{1}{h^k_j}\lambda^k\theta_{j}^{xk}+p_{j+1}^{xk}\Big\rangle<0,\\ \gamma^k_{ij}\ge 0\;\;\mbox{if}\;\;\langle\bar{u}^k_{ij},\bar{x}^k_j\rangle=\bar b^k_{ij},\;\eta^k_{ij}=0,\;\Big\langle\bar{u}_{ij}^k,-\dfrac{1}{h^k_j}\lambda^k\theta_{j}^{xk}+p_{j+1}^{xk}\Big\rangle>0,\\ \gamma^k_{ij}\in\mathbb{R} \;\;\mbox{if}\;\;\eta^k_{ij}>0,\;\Big\langle\bar{u}_{ij}^k,-\dfrac{1}{h^k_j}\lambda^k\theta_{j}^{xk}+p_{j+1}^{xk}\Big\rangle=0.
\end{cases} \end{equation} {\sc $\bullet$ complementary slackness conditions}: \begin{equation}\label{71l1} \alpha_{ij}^{1k}\(\left \|\bar{u}_{ij}^k\right \|-(1+\delta_k)\)=0\quad (i=1,\ldots,m,\;\;j=0,\ldots,\nu(k)), \end{equation} \begin{equation}\label{71l2} \alpha_{ij}^{2k}\(\left \|\bar{u}_{ij}^k\right \|-(1-\delta_k)\)=0\quad (i=1,\ldots,m,\;\;j=0,\ldots,\nu(k)), \end{equation} \begin{equation}\label{eta} \[\langle\bar{u}_{ij}^k,\bar{x}_j^k\rangle<\bar b_{ij}^k\]\Longrightarrow \eta_{ij}^k=0\quad (i=1,\ldots,m,\;\;j=0,\ldots,\nu(k)-1), \end{equation} \begin{equation}\label{eta1} \[\langle\bar{u}_{i\nu(k)}^k,\bar{x}_{\nu(k)}^k\rangle<\bar b_{i\nu(k)}^k\]\Longrightarrow \eta_{i\nu(k)}^k=0\quad (i=1,\ldots,m), \end{equation} \begin{equation}\label{96} \eta_{ij}^k>0\Longrightarrow \[\Big\langle\bar{u}_{ij}^k,-\frac{1}{h^k_j}\lambda^k\theta_{j}^{xk}+p_{j+1}^{xk}\Big\rangle=0\]\,\,(i=1,\ldots,m,\;\;j=0,\ldots,\nu(k)-1). \end{equation} {\sc $\bullet$ transversality relationships} at the right end of the trajectory: \begin{equation}\label{nmutx} -p_{\nu(k)}^{xk}\in\lambda^k\partial\varphi(\bar{x}_{\nu(k)}^k)+N\big(\bar{x}^k_{\nu(k)};\Omega_k\big)+\sum_{i=1}^{m}\eta_{i\nu(k)}^k\bar{u}_{i\nu(k)}^k, \end{equation} \begin{equation}\label{nmuta} p_{\nu(k)}^{uk}=-2\[\alpha_{\nu(k)}^{1k}-\alpha_{\nu(k)}^{2k},\bar{u}_{\nu(k)}^k\]-\[\eta_{\nu(k)}^k,\mbox{\rm rep}\,_m(\bar{x}_{\nu(k)}^k)\], \end{equation} \begin{equation}\label{nmutb} p_{i\nu(k)}^{bk}=\eta^k_{i\nu(k)}\ge 0,\;\langle \bar{u}^{k}_{i\nu(k)},\bar{x}^k_{\nu(k)}\rangle<\bar b^k_{i\nu(k)}\Longrightarrow p_{i\nu(k)}^{bk}=0\quad (i=1,\ldots,m). \end{equation} \end{theorem}\vspace*{-0.15in} \begin{proof} To reduce problem $(P_k)$ from \eqref{disc-cost}--\eqref{ic2} for each fixed $k\in\mathbb{N}$ to a mathematical program of type $(MP)$ formulated in Lemma~\ref{math-prog}, we form the multidimensional vector \begin{align*} z:=\left(u_0^k,\ldots,u_{\nu(k)}^k,b_0^k,\ldots,b_{\nu(k)}^k,x_0^k,\ldots,x_{\nu(k)}^k,v_0^k,\ldots,v_{\nu(k)-1}^k,\right.&\\ \left.
w_0^k,\ldots,w_{\nu(k)-1}^k,y_0^k,\ldots,y_{\nu(k)-1}^k\right)& \end{align*} and consider the problem of minimizing the cost function \begin{equation*} f_0(z):=\varphi(x^k_{\nu(k)})+\frac{1}{2}\sum_{j=0}^{\nu(k)-1}\int_{t_j^k}^{t_{j+1}^k}\big\|\big(v_j^k-\dot{\bar{u}}(t),w_j^k-\dot{\bar b}(t),y_j^k-\dot{\bar{x}}(t) \big)\big\|^2dt \end{equation*} subject to the five groups of inequality constraints \begin{equation*} f_1(z):=\sum_{j=0}^{\nu(k)-1}\int_{t^k_j}^{t^k_{j+1}}\left \|\(u^k_j,b^k_j,x^k_j\)-\(\bar{u}(t),\bar b(t),\bar{x}(t)\)\right \|^2dt-\dfrac{\varepsilon}{2}\le 0, \end{equation*} \begin{equation*} f_2(z):=\sum_{j=0}^{\nu(k)-1}\int_{t_j^k}^{t_{j+1}^k}\left \|\big(v_j^k,w_j^k,y_j^k\big)-\big(\dot{\bar{u}}(t),\dot{\bar b}(t),\dot{\bar{x}}(t)\big)\right \|^2dt-\frac{\varepsilon}{2}\le 0, \end{equation*} \begin{equation*} f_{ij}(z):=\|u_{ij}^k\|^2-(1+\delta_k)^2\le 0\;\mbox{ for }\;i=1,\ldots,m,\;\;j=0,\ldots,\nu(k), \end{equation*} \begin{equation*} \widetilde f_{ij}(z):=(1-\delta_k)^2-\|u_{ij}^k\|^2\le 0\;\mbox{ for }\;i=1,\ldots,m,\;\;j=0,\ldots,\nu(k), \end{equation*} \begin{equation*} \widehat f_i(z):=\big\langle u_{i\nu(k)}^k,x_{\nu(k)}^k\big\rangle-b_{i\nu(k)}^k\le 0\;\mbox{ for }\;i=1,\ldots,m, \end{equation*} the three groups of equality constraints \begin{equation*} g^u_j(z):=u_{j+1}^k-u_j^k-h^k_jv_j^k=0\;\;\mbox{ for }\;j=0,\ldots,\nu(k)-1, \end{equation*} \begin{equation*} g^b_j(z):=b_{j+1}^k-b_j^k-h^k_jw_j^k=0\;\mbox{ for }\;j=0,\ldots,\nu(k)-1, \end{equation*} \begin{equation*} g^x_j(z):=x_{j+1}^k-x_j^k-h^k_jy_j^k=0\;\mbox{ for }\;j=0,\ldots,\nu(k)-1, \end{equation*} and the two groups of inclusion constraints \begin{equation*} z\in\Theta_j:=\big\{z\;\big|-y_j^k\in F(u_j^k,b_j^k,x_j^k)\big\}\;\mbox{ for }\;j=0,\ldots,\nu(k)-1, \end{equation*} \begin{equation*} z\in\Theta_{\nu(k)}:=\big\{z\;\big|\;(u^k_0,b^k_0,x^k_0)\;\mbox{ are fixed, }\;x^k_{\nu(k)}\in\Omega_k\big\}, \end{equation*} where those for $j=0,\ldots,\nu(k)-1$ incorporate the constraints $x^k_j\in C(u^k_j,b^k_j)$ for such $j$ due to the construction of $F$ in \eqref{F}. As we see, the formulated nondynamic equivalent of problem $(P_k)$ is written in the mathematical programming form $(MP)$ as in Lemma~\ref{math-prog} with the fulfillment of all the assumptions imposed in the lemma. Thus we can readily apply the conclusions of the lemma by taking into account the particular structure of the functions and sets in the formulated equivalent of $(P_k)$. Applying now the necessary optimality conditions of Lemma~\ref{math-prog} to the optimal solution \begin{align*} \bar{z}:=\bar{z}^k=\(\bar{u}_0^k,\ldots,\bar{u}_{\nu(k)}^k,\bar b_0^k,\ldots,\bar b_{\nu(k)}^k,\bar{x}_0^k,\ldots,\bar{x}_{\nu(k)}^k,\bar{v}_0^k,\ldots,\bar{v}_{\nu(k)-1}^k,\right.&\\ \left.\bar{w}_0^k,\ldots,\bar{w}_{\nu(k)-1}^k, \bar{y}_0^k,\ldots,\bar{y}_{\nu(k)-1}^k\)& \end{align*} of problem $(MP)\equiv(P_k)$, observe by Theorem~\ref{ilm-conver} that the inequality constraints defined by the functions $f_1$ and $f_2$ above are {\em inactive} at $\bar{z}$ for sufficiently large $k$, and thus the corresponding multipliers will not appear in the optimality conditions.
Taking this into account, we find by Lemma~\ref{math-prog} multipliers $\lambda^k\ge 0$, $(\beta^k_1,\ldots,\beta^k_m)\in\mathbb{R}^m_+$, $p^k_j=(p^{uk}_j,p^{bk}_j,p^{xk}_j)\in\mathbb{R}^{mn+m+n}$ for $j=1,\ldots,\nu(k)$, as well as vectors \begin{align*} z^*_j:=\(u^*_{0j},\ldots,u^*_{\nu(k)j},b^*_{0j},\ldots,b^*_{\nu(k)j},x^*_{0j},\ldots,x^*_{\nu(k)j},v^*_{0j},\ldots,v^*_{(\nu(k)-1)j},\right.&\\ \left. w^*_{0j},\ldots,w^*_{(\nu(k)-1)j},y^*_{0j},\ldots,y^*_{(\nu(k)-1)j}\)& \end{align*} for $j=0,\ldots,\nu(k)$, $\alpha^{1k}=(\alpha_{0}^{1k},\ldots,\alpha_{\nu(k)}^{1k})\in\mathbb{R}^{m(\nu(k)+1)}_+$, and $\alpha^{2k}=(\alpha_{0}^{2k},\ldots,\alpha_{\nu(k)}^{2k})\in\mathbb{R}^{m(\nu(k)+1)}_{+}$ such that the complementary slackness conditions \eqref{71l1}, \eqref{71l2}, and \begin{equation}\label{71+} \beta^k_i\big(\big\langle \bar{u}_{i\nu(k)}^k,\bar{x}_{\nu(k)}^k\big\rangle-\bar b_{i\nu(k)}^k\big)=0\;\mbox{ for }\;i=1,\ldots,m \end{equation} hold together with the normal cone inclusions \begin{equation}\label{nor-inc} z^*_j\in N(\bar{z};\Theta_j)\;\mbox{ for }\;j=0,\ldots,\nu(k) \end{equation} and the generalized Lagrangian condition \begin{equation}\label{70} \begin{array}{ll} -\displaystyle\sum_{j=0}^{\nu(k)}z^*_j&\in\lambda^k\partial f_0(\bar{z})+\displaystyle\sum_{i=1}^{m}\beta^k_i\nabla\widehat f_i(\bar{z})+\displaystyle\sum_{j=0}^{\nu(k)-1}\nabla g_j(\bar{z})^T p_{j+1}^{k}\\ &+\displaystyle\sum_{j=0}^{\nu(k)}\sum^m_{i=1}\bigg[\alpha_{ij}^{1k}\nabla f_{ij}(\bar{z})+\alpha_{ij}^{2k}\nabla\widetilde f_{ij}(\bar{z})\bigg], \end{array} \end{equation} where $g_j=(g^u_j,g^b_j,g^x_j)$, and where the dual elements $\lambda^k$, $\beta^k_i$, $p^k_j$, $z^*_j$, $\alpha^{1k}$, and $\alpha^{2k}$ are not all zero simultaneously. Looking at the graphical structure of the geometric constraints $z\in\Theta_j$ for $j=0,\ldots,\nu(k)-1$, we readily deduce from \eqref{nor-inc} that \begin{equation*} \begin{array}{ll} (u^*_{jj},b^*_{jj},x^*_{jj},-y^*_{jj})\in N\Big(\Big(\bar{u}_j^k,\bar b_j^k,\bar{x}_j^k,-\dfrac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h^k_j} \Big);\mbox{\rm gph}\, F\Big)\,\,(j=0,\ldots,\nu(k)-1) \end{array} \end{equation*} with all the other components of $z^*_j$ equal to zero for these indices $j$. It follows from the coderivative definition \eqref{coderivative} that the obtained normal cone inclusion can be equivalently written as \begin{equation}\label{cod-inc} (u^*_{jj},b^*_{jj},x^*_{jj})\in D^*F\Big(\bar{u}_j^k,\bar b_j^k,\bar{x}_j^k,-\dfrac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h^k_j} \Big)(y^*_{jj})\;\mbox{ for }\;j=0,\ldots,\nu(k)-1. \end{equation} Since the mapping $F$ is given in the particular form \eqref{F}, we are able to use the coderivative evaluation in \eqref{cod-inc} provided that PLICQ \eqref{PLICQ} is fulfilled along the discrete optimal solutions for all $k$ sufficiently large. As discussed above, the assumed uniform Slater condition \eqref{unifslater} for the given canonical intermediate local minimizer $(\bar{u},\bar b)$ of $(P)$ yields PLICQ at $(\bar{u},\bar b,\bar{x})$. Since the latter condition is {\em robust} with respect to perturbations of the initial triple and since the discrete optimal solutions strongly converge to $(\bar{u}(\cdot),\bar b(\cdot),\bar{x}(\cdot))$ by Theorem~\ref{ilm-conver}, we are in a position to use Lemma~\ref{cod-eval} in the coderivative inclusion \eqref{cod-inc}. Prior to this, let us calculate the other terms in the generalized Lagrangian condition \eqref{70}. First observe that the summation term in the cost function is smooth.
Therefore, the usage of the subdifferential sum rule from \cite[Proposition~1.30(ii)]{m18} gives the precise calculation \begin{equation*} \partial f_0(\bar{z})=\partial\varphi(\bar{x}^k_{\nu(k)})+\sum_{j=0}^{\nu(k)-1}\big(0,\ldots,0,\theta^{uk}_j,\theta^{bk}_j,\theta^{xk}_j\big), \end{equation*} where the zeros stand for all the components of $\bar{z}$ preceding $\bar{v}^k_j$, and where $\theta^{uk}_j,\theta^{bk}_j,\theta^{xk}_j$ are defined in the formulation of the theorem. Further, with the usage of our notation presented before the formulation of this theorem, we easily get \begin{equation*} \sum_{i=1}^{m}\beta_i^k\nabla \widehat f_i(\bar{z})=\(\sum_{i=1}^{m} \beta_i^k\bar{u}_{i\nu(k)}^k,\[\beta^k,\mbox{\rm rep}\,_m(\bar{x}_{\nu(k)}^k)\],-\beta^k\), \end{equation*} \begin{equation*} \(\sum_{j=0}^{\nu(k)-1}\nabla g_j(\bar{z})^Tp_{j+1}^k\)_{(u_j,b_j,x_j)}=\left\{\begin{array}{llll} -p_{1}^{k}&\textrm{ if }& j=0\\[1ex] p_{j}^{k}-p_{j+1}^{k}&\textrm{ if }& j=1,\ldots,\nu(k)-1\\[1ex] p_{\nu(k)}^{k}&\textrm{ if }& j=\nu(k) \end{array} \right., \end{equation*} \begin{align*} \(\sum_{j=0}^{\nu(k)-1}\nabla g_j(\bar{z})^Tp_{j+1}^k\)_{(v_j,w_j,y_j)}=\( -h^k_0p_{1}^{uk},-h^k_1p_{2}^{uk},\ldots,-h^k_{\nu(k)-1}p_{\nu(k)}^{uk},\right.&\\ \left. -h^k_0p_{1}^{bk},-h^k_1p_{2}^{bk},\ldots, -h^k_{\nu(k)-1}p_{\nu(k)}^{bk},-h^k_0p_{1}^{xk},-h^k_1p_{2}^{xk},\ldots, -h^k_{\nu(k)-1}p_{\nu(k)}^{xk}\),& \end{align*} \begin{align*} \sum_{j=0}^{\nu(k)}\sum^m_{i=1}\alpha_{ij}^{1k}\nabla f_{ij}(\bar{z})=2\[\alpha_{j}^{1k},\bar{u}_{j}^k\],\; \;\sum_{j=0}^{\nu(k)}\sum^m_{i=1}\alpha_{ij}^{2k}\nabla\widetilde f_{ij}(\bar{z})=-2\[\alpha_{j}^{2k},\bar{u}_{j}^k\]&\\ (j=0,\ldots,\nu(k)).& \end{align*} To proceed with \eqref{70}, it remains to express the dual element $z^*_{\nu(k)}\in N(\bar{z};\Theta_{\nu(k)})$ in \eqref{nor-inc} corresponding to the last geometric constraint $\bar{z}\in\Theta_{\nu(k)}$ in terms of the data of $(P_k)$. We directly conclude from the structure of $\Theta_{\nu(k)}$ that the components of $z^*_{\nu(k)}$ corresponding to $(u_0^k,b_0^k,x_0^k)$ are free (i.e., just belong to $\mathbb{R}^{mn}\times\mathbb{R}^m\times\mathbb{R}^n$), that $x^*_{\nu(k)\nu(k)}\in N(\bar{x}^k_{\nu(k)};\Omega_k)$, and that all the other components are equal to zero. The fulfillment of PLICQ along $(\bar{u}^k,\bar b^k,\bar{x}^k)$ for all $k$ sufficiently large allows us to find unique vectors $\eta_j^k\in\mathbb{R}^m_+$ such that \begin{equation*}\label{h:5.39} \sum_{i=1}^m\eta_{ij}^k\bar{u}_{ij}^k=-\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h^k_j}\;\mbox{ for }\;j=0,\ldots,\nu(k)-1. \end{equation*} For the last index $j=\nu(k)$, we put $\eta^k_{\nu(k)}:=\beta^k\in\mathbb{R}^m_+$. Substituting all the above into the Lagrangian inclusion \eqref{70} and taking into account the coderivative upper estimate from Lemma~\ref{cod-eval} gives us the claimed necessary optimality conditions \eqref{87}--\eqref{nmutb}. Finally, the nontriviality conditions in \eqref{ntc0} and \eqref{ntc1} follow directly from \eqref{87}--\eqref{nmutb} and the nontriviality of the dual elements in Lemma~\ref{math-prog} for the mathematical program $(MP)$ equivalent to $(P_k)$. This completes the proof of the theorem. \qed \end{proof}\vspace*{-0.3in} \section*{Acknowledgment}\vspace*{-0.05in} The first author acknowledges support by the FMJH Program Gaspard Monge in optimization and operations research including support to this program by EDF as well as by the Deutsche Forschungsgemeinschaft for their support of project B04 within the CRC/Transregio 154.
The work of the second author was partly supported by the EIPHI Graduate School (contract ANR-17-EURE-0002). Research of the third author was partly supported by the US National Science Foundation under grants DMS-1512846 and DMS-1808978, by the US Air Force Office of Scientific Research grant \#15RT04, and by the Australian Research Council under grant DP-190100555.\vspace*{-0.15in}
\section{Introduction}\label{Sec:Intro} In this paper, we address the classical problem of minimizing the sum of a smooth function $f$ and a nonsmooth function $\phi$, also known under the name \emph{composite optimization}. This setting received much attention throughout the last years due to its inherent practical relevance in, e.g., machine learning, data compression, matrix completion, and image processing, see e.g.\ \cite{BianChen2015,BrucksteinDonohoElad2009,Chartrand2007,DiLorenzoLiuzziRinaldiSchoenSciandrone2012,LiuDaiMa2015,MarjanovicSolo2012}. A standard technique for the solution of composite optimization problems is the proximal gradient method, introduced by Fukushima and Mine \cite{FukushimaMine1981} and popularized e.g.\ by Combettes and Wajs in \cite{CombettesWajs2005}. A particular instance of this method is the celebrated iterative shrinkage/threshold algorithm (ISTA), see, e.g.\ \cite{BeckTeboulle2009}. A summary of existing results for the case where the nonsmooth term $\phi$ is defined by a convex function is given in the monograph by Beck \cite{Beck2017}. The proximal gradient method can also be interpreted as a forward-backward splitting method, see \cite{Bruck1977,Passty1979} for its origins and \cite{BauschkeCombettes2011} for a modern view, and is able to handle problems where the nonsmooth term $\phi$ is given by a merely lower semicontinuous function, see, e.g.\ the seminal works \cite{AttouchBolteSvaiter2013,BolteSabachTeboulle2014}. These references also provide convergence and rate-of-convergence results by using the popular \emph{descent lemma} together with the celebrated \emph{Kurdyka--\L ojasiewicz property}. To the best of our knowledge, however, the majority of available convergence results for proximal gradient methods assume that the smooth term $f$ is continuously differentiable with a globally Lipschitz continuous gradient (or they require local Lipschitzness together with a bounded level set which, again, implies the global Lipschitz continuity on this level set). This requirement, which is the essential ingredient for the classical descent lemma, is often satisfied for standard applications of the proximal gradient method in data science and image processing, where $f$ appears to be a quadratic function. In this paper, we aim to get rid of this global Lipschitz condition. This is motivated by the fact that the algorithmic application we have in mind does not satisfy this Lipschitz property since the smooth term $f$ corresponds to the augmented Lagrangian function of a general nonlinear constrained optimization problem, which rarely has a globally Lipschitz continuous gradient or a bounded level set. The proximal gradient method will be used to solve the resulting subproblems which forces us to generalize the convergence theory up to reasonable assumptions which are likely to hold in our framework. We refer the interested reader to \cite{ChenGuoLuYe2017,GuoDeng2021,JiaKanzowMehlitzWachsmuth2021,DeMarchiJiaKanzowMehlitz2022} where such augmented Lagrangian \emph{proximal} methods are investigated. Numerically, a nonmonotone version of the proximal gradient method is often preferred. Based on ideas by Grippo et al.\ \cite{GrippoLamparielloLucidi1986} in the context of smooth unconstrained optimization problems, Wright et al.\ \cite{WrightNowakFigueiredo2009} developed a nonmonotone proximal gradient method for composite optimization problems known under the name SpaRSA. 
In their paper, the authors assume that the nonsmooth part $\phi$ of the objective function is convex. Almost simultaneously, the authors of \cite{BirginMartinezRaydan2000} presented a nonmonotone projected gradient method for the minimization of a differentiable function over a convex set. Their findings can be interpreted as a special case of the results from \cite{WrightNowakFigueiredo2009} where $\phi$ equals the indicator of a convex set. The ideas from \cite{BirginMartinezRaydan2000,WrightNowakFigueiredo2009} were subsequently generalized in the papers \cite{ChenGuoLuYe2017,ChenLuPong2016} where the proximal gradient method is used as a subproblem solver within an augmented Lagrangian and penalization scheme, respectively. However, the authors did not address the aforementioned problematic lack of Lipschitzness in these papers, which causes their convergence theory to be barely applicable in their algorithmic framework. In \cite{LiLin2015,WangLiu2021}, the authors present nonmonotone extensions of ISTA which can handle merely lower semicontinuous terms in the objective function. Again, for the convergence analysis, global Lipschitzness of the smooth term's derivative is assumed. Due to its practical importance, we therefore aim to provide a convergence theory for the nonmonotone proximal gradient method without using any Lipschitz assumption. In the seminal paper \cite{BauschkeBolteTeboulle2017}, the authors consider the composite optimization problem with both terms being convex, but without a global Lipschitz assumption for the gradient of the smooth part $f$. They get suitable rate-of-convergence results for the iterates generated by a Bregman-type proximal gradient method using only a local Lipschitz condition. In addition, however, they require that there is a constant $ L > 0 $ such that $ L h - f $ is convex, where $ h $ is a convex function which defines the Bregman distance (in our setting, $ h $ equals the squared norm). Some examples indicate that this convexity-type condition is satisfied in many practically relevant situations. Subsequently, this approach was generalized to the nonconvex setting in \cite{BolteSabachTeboulleVaisbourd2018} using, once again, a local Lipschitz assumption only, as well as the slightly stronger assumption (in order to deal with the nonconvexity) that there exist $ L > 0 $ and a convex function $ h $ such that both $ L h - f $ and $ L h + f $ are convex. Note that the constant $ L $ plays a central role in the design of the corresponding proximal-type methods. Particularly, it is used explicitly for the choice of stepsizes. Finally, the very recent paper \cite{CohenHallakTeboulle2022} proves global convergence results under a local Lipschitz assumption (without the additional convexity-type condition), but assumes that the iterates and stepsizes of the underlying proximal gradient method remain bounded. To the best of our knowledge, this is the current state-of-the-art regarding the convergence properties of proximal gradient methods. The aim of this paper is slightly different, since we do not provide rate-of-convergence results, but conditions which guarantee accumulation points to be suitable stationary points of the composite optimization problem. This is the essential feature of the proximal gradient method which, for example, is exploited in \cite{ChenGuoLuYe2017,JiaKanzowMehlitzWachsmuth2021,DeMarchiJiaKanzowMehlitz2022} to develop augmented Lagrangian proximal methods.
We also stress that, in this particular situation, the above assumption that $ L h \pm f $ is convex for some $ L > 0 $ is often violated unless we are dealing with linear constraints only. Our analysis does not require a global Lipschitz assumption and is not based on the crucial descent lemma, contrasting \cite{BauschkeBolteTeboulle2017,BolteSabachTeboulleVaisbourd2018} mentioned above. The results show that we can get stationary accumulation points only under a local Lipschitz assumption and, depending on the properties of $ \phi $, sometimes even without any Lipschitz condition. In any case, a convexity-type condition like $ L h \pm f $ being convex for some constant $ L $ is not required at all. Moreover, the implementation of our proximal gradient method does not need any knowledge of the size of any Lipschitz-type constant. Since the aim of this paper is to get a better understanding of the theoretical convergence properties of both monotone and nonmonotone proximal gradient methods, and since these methods have already been applied numerically to a large variety of problems, we do not include any numerical results in this paper. Let us recall that we are mainly interested in conditions ensuring that accumulation points of sequences produced by the proximal gradient method are stationary. The main contributions of this paper show that this property holds (neglecting a few technical conditions) for the monotone proximal gradient method if either the smooth function $f$ is continuously differentiable and the nonsmooth function $\phi$ is continuous on its domain (e.g., this assumption holds for a constrained optimization problem where $\phi$ corresponds to the indicator function of a nonempty and closed set), or if $f$ is differentiable with a locally Lipschitz continuous derivative and $\phi$ is an arbitrary lower semicontinuous function. Corresponding statements for the nonmonotone proximal gradient method require stronger assumptions, basically the uniform continuity of the objective function on a level set. That, however, is a standard assumption in the literature dealing with nonmonotone stepsize rules. The paper is organized as follows: In \cref{Sec:Prelims}, we give a detailed statement of the composite optimization problem and provide some necessary background material from variational analysis. The convergence properties of the monotone and nonmonotone proximal gradient method are then discussed in \cref{Sec:GenSpecGrad,Sec:GenSpecGradNM}, respectively. We close with some final remarks in \cref{Sec:Final}. \section{Problem Setting and Preliminaries}\label{Sec:Prelims} We consider the \emph{composite} optimization problem \begin{equation}\label{Eq:P}\tag{P} \min_x \ \psi(x):=f(x) + \phi (x), \quad \quad x \in \mathbb X, \end{equation} where $ f\colon \mathbb{X} \to \mathbb{R} $ is continuously differentiable, $ \phi\colon \mathbb{X} \to \overline{\mathbb{R}}:= \mathbb{R}\cup\{\infty\} $ is lower semicontinuous (possibly infinite-valued and nondifferentiable), and $ \mathbb{X} $ denotes a Euclidean space, i.e., a real and finite-dimensional Hilbert space. We assume that the domain $\operatorname{dom}\phi:=\{x\in\mathbb X\,|\,\phi(x)<\infty\}$ of $ \phi $ is nonempty to rule out trivial situations. 
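To illustrate the setting, a prototypical instance of \eqref{Eq:P} (used here only for illustration purposes) is the $\ell_1$-regularized least-squares problem, where, for given data $A\in\mathbb{R}^{p\times n}$, $b\in\mathbb{R}^p$, and a weight $\lambda>0$, one takes $\mathbb X:=\mathbb{R}^n$ and \[ f(x):=\frac{1}{2}\|Ax-b\|^2,\qquad \phi(x):=\lambda\|x\|_1. \] In this particular example, $f$ is a quadratic function with globally Lipschitz continuous gradient and $\phi$ is convex and real-valued; the framework considered here, however, also covers smooth functions $f$ whose gradient is merely locally Lipschitz continuous (or just continuous) as well as extended-valued, nonconvex, and merely lower semicontinuous functions $\phi$, see \cref{Rem:constrained_opt} and \cref{rem:non_Lipschitz_constrained_optimization} below.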
In order to minimize the function $\psi\colon\mathbb X\to\overline\mathbb{R}$ in \eqref{Eq:P}, it seems reasonable to exploit the composite structure of $\psi$, i.e., to rely on the differentiability of $f$ on the one hand, and on some beneficial structural properties of $\phi$ on the other one. This is the idea behind splitting methods. Throughout the paper, the Euclidean space $\mathbb X$ will be equipped with the inner product $\langle\cdot,\cdot\rangle\colon \mathbb X\times\mathbb X\to\mathbb{R}$ and the associated norm $\norm{\cdot}$. For some set $A\subset\mathbb X$ and some point $x\in\mathbb X$, we make use of $A+x=x+A:=\{x+a\,|\,a\in A\}$ for the purpose of simplicity. For some sequence $\{x^k\}\subset\mathbb X$ and $x\in\mathbb X$, $x^k\to_\phi x$ means that $x^k\to x$ and $\phi(x^k)\to\phi(x)$. The continuous linear operator $f'(x)\colon\mathbb X\to\mathbb{R}$ denotes the derivative of $f$ at $x\in\mathbb X$, and we will make use of $\nabla f(x):=f'(x)^*1$ where $f'(x)^*\colon\mathbb{R}\to\mathbb X$ is the adjoint of $f'(x)$. This way, $\nabla f$ is a mapping from $\mathbb X$ to $\mathbb X$. Furthermore, we find $f'(x)d=\langle\nabla f(x),d\rangle$ for each $d\in\mathbb X$. The following concepts are standard in variational analysis, see e.g.\ \cite{Mordukhovich2018,RockafellarWets2009}. Let us fix some point $x\in\operatorname{dom}\phi$. Then \[ \widehat\partial\phi(x) := \left\{ \eta\in\mathbb X\,\middle|\, \liminf\limits_{y\to x}\frac{\phi(y)-\phi(x)-\langle \eta,y-x\rangle}{\norm{y-x}}\geq 0 \right\} \] is called the {\em regular} (or {\em Fr\'{e}chet}) {\em subdifferential} of $\phi$ at $x$. Furthermore, the set \[ \partial\phi(x) := \left\{ \eta\in\mathbb X\,\middle|\, \exists\{x^k\},\{\eta^k\}\subset\mathbb X\colon x^k\to_\phi x,\,\eta^k\to\eta,\,\eta^k\in\widehat{\partial}\phi(x^k)\,\forall k\in\mathbb{N} \right\} \] is well known as the {\em limiting} (or {\em Mordukhovich}) {\em subdifferential} of $\phi$ at $x$. Clearly, we always have $\widehat{\partial}\phi(x)\subset\partial\phi(x)$ by construction. Whenever $\phi$ is convex, equality holds, and both subdifferentials coincide with the subdifferential of convex analysis, i.e., \[ \widehat{\partial}\phi(x) = \partial\phi(x) = \{ \eta\in\mathbb X\,|\,\forall y\in\mathbb X\colon\,\phi(y)\geq\phi(x)+\langle\eta,y-x\rangle \} \] holds in this situation. It can be seen right from the definition that whenever $x^*\in\operatorname{dom}\phi$ is a local minimizer of $\phi$, then $0\in\widehat\partial\phi(x^*)$, which is referred to as Fermat's rule, see \cite[Proposition~1.30(i)]{Mordukhovich2018}. Given $ x \in \operatorname{dom}\phi $, the limiting subdifferential has the important robustness property \begin{equation}\label{Eq:robustness} \left\{ \eta\in\mathbb X\,\middle|\, \exists\{x^k\},\{\eta^k\}\subset\mathbb X\colon\,x^k\to_\phi x,\,\eta^k\to\eta,\,\eta^k\in\partial\phi(x^k)\,\forall k\in\mathbb{N} \right\} \subset \partial\phi(x), \end{equation} see \cite[Proposition~1.20]{Mordukhovich2018}. Clearly, the converse inclusion $\supset$ is also valid by definition of the limiting subdifferential. Note that in situations where $\phi$ is discontinuous at $x$, the requirement $x^k\to_\phi x$ in the definition of the set on the left-hand side in \eqref{Eq:robustness} is strictly necessary. 
In fact, the usual outer semicontinuity in the sense of set-valued mappings, given by \begin{equation}\label{Eq:osc} \left\{ \eta\in\mathbb X\,\middle|\, \exists\{x^k\},\{\eta^k\}\subset\mathbb X\colon\,x^k\to x,\,\eta^k\to\eta,\,\eta^k\in\partial\phi(x^k)\,\forall k\in\mathbb{N} \right\} \subset \partial\phi(x), \end{equation} would be a much stronger condition in this situation and does not hold in general. Whenever $x\in\operatorname{dom}\phi$ is fixed, the sum rule \begin{equation} \label{eq:regular_sum_rule} \widehat{\partial}(f+\phi)(x) = \nabla f(x)+\widehat{\partial}\phi(x) \end{equation} holds, see \cite[Proposition~1.30(ii)]{Mordukhovich2018}. Thus, due to Fermat's rule, whenever $x^*\in\operatorname{dom}\phi$ is a local minimizer of $f+\phi$, we have $0\in \nabla f (x^*)+\widehat{\partial}\phi(x^*)$. This condition is potentially more restrictive than $0\in \nabla f (x^*)+\partial\phi(x^*)$ which, naturally, also serves as a necessary optimality condition for \eqref{Eq:P}. However, the latter is more interesting from an algorithmic point of view as it is well known from the literature on splitting methods comprising nonconvex functions $\phi$. If $\phi$ is convex, there is no difference between those stationarity conditions. Throughout the paper, a point $x^*\in\operatorname{dom}\phi$ satisfying $0\in \nabla f (x^*)+\partial\phi(x^*)$ will be called a {\em Mordukhovich-stationary} ({\em M-stationary} for short) point of \eqref{Eq:P} due to the appearance of the limiting subdifferential. In the literature, the name \emph{limiting critical point} is used as well. We close this section with two special instances of problem \eqref{Eq:P} and comment on the corresponding M-stationary conditions. \begin{remark}\label{Rem:constrained_opt} Consider the constrained optimization problem \[ \min_x \ f(x) \quad \text{subject to} \quad x \in C \] for a continuously differentiable function $ f\colon\mathbb X\to\mathbb{R} $ and a nonempty and closed (not necessarily convex) set $ C \subset \mathbb{X} $. This problem is equivalent to the unconstrained problem \eqref{Eq:P} by setting $ \phi := \delta_C $, where $ \delta_C \colon \mathbb X\to\overline{\mathbb{R}} $ denotes the indicator function of the set $C$, vanishing on $C$ and taking the value $\infty$ on $\mathbb X\setminus C$, which is lower semicontinuous due to the assumptions regarding $ C $. The corresponding M-stationarity condition is given by \[ 0 \in \nabla f (x^*) + \partial \delta_C (x^*) = \nabla f (x^*) + \mathcal N_C (x^*), \] where $\mathcal N_C(x^*) $ denotes the \emph{limiting} (or \emph{Mordukhovich}) \emph{normal cone}, see \cite[Proposition~1.19]{Mordukhovich2018}. \end{remark} \begin{remark}\label{rem:non_Lipschitz_constrained_optimization} Consider the more general constrained optimization problem \[ \min_x \ f(x) + \varphi (x) \quad \text{subject to} \quad x \in C \] with $ f\colon\mathbb X\to\mathbb{R}$ and $ C\subset\mathbb X $ as in \Cref{Rem:constrained_opt}, and $ \varphi\colon\mathbb X\to\overline{\mathbb{R}} $ being another lower semicontinuous function (which might represent a regularization, penalty, or sparsity-promoting term, for example). Setting $ \phi := \varphi + \delta_C $, we obtain once again an optimization problem of the form \eqref{Eq:P}. The corresponding M-stationarity condition is given by \[ 0 \in \nabla f (x^*) + \partial \phi (x^*) = \nabla f (x^*) + \partial ( \varphi + \delta_C ) (x^*). 
\] Unfortunately, the sum rule \[ \partial ( \varphi + \delta_C ) (x^*) \subset \partial \varphi (x^*) + \partial \delta_C (x^*) = \partial \varphi (x^*) + \mathcal N_C(x^*) \] does not hold in general. However, for locally Lipschitz functions $ \varphi $, for example, it applies, see \cite[Theorems~1.22, 2.19]{Mordukhovich2018}. Note that the resulting stationarity condition \[ 0 \in \nabla f (x^*) + \partial \varphi (x^*) + \mathcal N_C (x^*) \] might be slightly weaker than M-stationarity as introduced above. Related discussions can be found in \cite[Section~3]{GuoYe2018}. \end{remark} \section{Monotone Proximal Gradient Method}\label{Sec:GenSpecGrad} We first investigate a monotone version of the proximal gradient method applied to the composite optimization problem \eqref{Eq:P} with $ f $ being continuously differentiable and $ \phi $ being lower semicontinuous. Recall that the corresponding M-stationarity condition is given by \[ 0 \in \nabla f (x) + \partial \phi (x). \] Our aim is to find, at least approximately, an M-stationary point of \eqref{Eq:P}. The following algorithm is the classical proximal gradient method for this class of problems. Since we will also consider a nonmonotone variant of this algorithm in the following section, we call this the monotone proximal gradient method. \begin{algorithm}[Monotone Proximal Gradient Method]\leavevmode \label{Alg:MonotoneProxGrad} \begin{algorithmic}[1] \REQUIRE $ \tau > 1, 0 < \gamma_{\min} \leq \gamma_{\max} < \infty, \delta \in (0,1), x^0 \in \operatorname{dom}\phi $ \STATE Set $k := 0$. \WHILE{A suitable termination criterion is violated at iteration $ k $} \STATE Choose $ \gamma_k^0 \in [ \gamma_{\min}, \gamma_{\max}] $. \STATE\label{step:subproblem_solve_MonotoneProxGrad} For $ i = 0, 1, 2, \ldots $, compute a solution $ x^{k,i} $ of \begin{equation}\label{Eq:Subki} \min_x \ f (x^k) + \langle\nabla f(x^k), x - x^k \rangle + \frac{\gamma_{k,i}}{2} \| x - x^k \|^2 + \phi (x), \quad x \in \mathbb X \end{equation} with $ \gamma_{k,i} := \tau^i \gamma_k^0 $, until the acceptance criterion \begin{equation}\label{Eq:StepCrit} \psi (x^{k,i}) \leq \psi (x^k) - \delta \frac{\gamma_{k,i}}{2} \| x^{k,i} - x^k \|^2 \end{equation} holds. \STATE Denote by $ i_k := i $ the terminal value, and set $ \gamma_k := \gamma_{k,i_k} $ and $ x^{k+1} := x^{k,i_k} $. \STATE Set $ k \leftarrow k + 1 $. \ENDWHILE \RETURN $x^k$ \end{algorithmic} \end{algorithm} The convergence theory requires some technical assumptions. \begin{assumption}\label{Ass:ProxGradMonotone} \begin{enumerate} \item \label{item:psi_bounded} The function $ \psi $ is bounded from below on $ \operatorname{dom}\phi $. \item \label{item:phi_bounded_affine} The function $ \phi $ is bounded from below by an affine function. \end{enumerate} \end{assumption} \Cref{Ass:ProxGradMonotone}~\ref{item:psi_bounded} is a reasonable condition regarding the given composite optimization problem, whereas \Cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine} is essentially a statement relevant for the subproblems from \eqref{Eq:Subki}. In particular, \Cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine} implies that the quadratic objective functions of the subproblems \eqref{Eq:Subki} are, for fixed $ k, i \in\mathbb{N}$, coercive, and therefore these subproblems always attain a solution $ x^{k,i} $ (which, however, may not be unique). The subsequent convergence theory assumes implicitly that \Cref{Alg:MonotoneProxGrad} generates an infinite sequence.
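For illustration purposes only, the following self-contained Python sketch indicates how \Cref{Alg:MonotoneProxGrad} may be realized for the instance $f(x)=\frac{1}{2}\|Ax-b\|^2$ and $\phi(x)=\lambda\|x\|_1$ with data $A$, $b$ and a weight $\lambda>0$, where the subproblem \eqref{Eq:Subki} admits the well-known closed-form solution given by componentwise soft-thresholding of $x^k-\nabla f(x^k)/\gamma_{k,i}$; all data, parameter values, and the simple termination test are illustrative choices and are not part of the analysis below. \begin{verbatim}
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t*||.||_1 (componentwise soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def monotone_prox_gradient(A, b, lam, x0, gamma0=1.0, tau=2.0, delta=0.5,
                           gamma_min=1e-3, gamma_max=1e3,
                           max_iter=500, tol=1e-8):
    # illustrative realization of the monotone proximal gradient method
    # for f(x) = 0.5*||A x - b||^2 and phi(x) = lam*||x||_1
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad_f = lambda x: A.T @ (A @ x - b)
    psi = lambda x: f(x) + lam * np.linalg.norm(x, 1)

    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_f(x)
        gamma = min(max(gamma0, gamma_min), gamma_max)  # choice of gamma_k^0
        while True:  # inner loop: increase gamma until the acceptance test holds
            x_trial = soft_threshold(x - g / gamma, lam / gamma)
            if psi(x_trial) <= psi(x) - delta * 0.5 * gamma * np.linalg.norm(x_trial - x) ** 2:
                break
            gamma *= tau
        if np.linalg.norm(x_trial - x) <= tol:  # simple (heuristic) stopping test
            return x_trial
        x = x_trial
    return x

# small example with randomly generated (purely illustrative) data
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
x_star = monotone_prox_gradient(A, b, lam=0.1, x0=np.zeros(50))
print(np.count_nonzero(np.abs(x_star) > 1e-10), "nonzero entries")
\end{verbatim} Let us emphasize once more that this listing only illustrates the backtracking mechanism from \cref{step:subproblem_solve_MonotoneProxGrad}; in particular, the stopping test based on $\|x^{k+1}-x^k\|$ is a heuristic and is not part of the convergence theory developed below.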
We first establish that the stepsize rule in \cref{step:subproblem_solve_MonotoneProxGrad} of \cref{Alg:MonotoneProxGrad} is always finite. \begin{lemma}\label{Lem:StepsizeFinite} Consider a fixed iteration $ k $ of \cref{Alg:MonotoneProxGrad}, assume that $ x^k $ is not an M-stationary point of \eqref{Eq:P}, and suppose that \Cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine} holds. Then the inner loop in \cref{step:subproblem_solve_MonotoneProxGrad} of \Cref{Alg:MonotoneProxGrad} is finite, i.e., we have $ \gamma_k = \gamma_{k,i_k} $ for some finite index $ i_k \in \{ 0, 1, 2, \ldots \} $. \end{lemma} \begin{proof} Suppose that the inner loop of \cref{Alg:MonotoneProxGrad} does not terminate after a finite number of steps in iteration $k$. Recall that $ x^{k,i} $ is a solution of \eqref{Eq:Subki}. Therefore, we get \begin{equation}\label{Eq:Sub1i} \langle \nabla f(x^k),x^{k,i} - x^k\rangle + \frac{\gamma_{k,i}}{2} \| x^{k,i} - x^k \|^2 + \phi (x^{k,i}) \leq \phi (x^k). \end{equation} Noting that $ \gamma_{k,i} \to \infty $ for $ i \to \infty $ and using \Cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine}, we obtain $ x^{k,i} \to x^k $ for $ i \to \infty $. Taking the limit $ i \to \infty $ therefore yields \begin{equation*} \phi (x^k) \leq \liminf_{i \to \infty} \phi (x^{k,i}) \leq \limsup_{i \to \infty} \phi (x^{k,i}) \leq \phi (x^k), \end{equation*} where the first estimate follows from the lower semicontinuity of $ \phi $ and the final inequality is a consequence of \eqref{Eq:Sub1i}. Therefore, we have \begin{equation}\label{eq:robustsum} \phi (x^{k,i}) \to \phi (x^k) \quad \text{for } i \to \infty . \end{equation} We claim that \begin{equation}\label{Eq:Sub2i} \liminf_{i \to \infty} \gamma_{k,i} \| x^{k,i} - x^k \| > 0. \end{equation} Assume, by contradiction, that there is a subsequence $ i_l \to \infty $ such that \begin{equation}\label{Eq:Sub3i} \liminf_{l \to \infty} \gamma_{k, i_l} \| x^{k, i_l} - x^k \| = 0. \end{equation} Since $ x^{k,i_l} $ is optimal for \eqref{Eq:Subki}, Fermat's rule and the sum rule \eqref{eq:regular_sum_rule} yield \begin{equation}\label{eq:optimality_condition_subproblem} 0 \in \nabla f (x^k) + \gamma_{k, i_l} ( x^{k, i_l} - x^k ) + \widehat\partial \phi (x^{k,i_l}) \end{equation} for all $l\in\mathbb{N}$. Taking the limit $ l \to \infty $ while using \eqref{eq:robustsum} and \eqref{Eq:Sub3i}, we obtain \begin{equation*} 0 \in \nabla f (x^k) + \partial \phi (x^k), \end{equation*} which means that $ x^k $ is already an M-stationary point of \eqref{Eq:P}. This contradiction shows that \eqref{Eq:Sub2i} holds. Hence, there is a constant $ c > 0 $ such that \begin{equation*} \gamma_{k,i} \| x^{k,i} - x^k \| \geq c \end{equation*} holds for all large enough $i\in\mathbb{N}$. In particular, this implies \begin{equation}\label{Eq:Sub4i} ( 1- \delta ) \frac{\gamma_{k,i}}{2} \| x^{k,i} - x^k \|^2 \geq \frac{1- \delta}{2} c \| x^{k,i} - x^k \| \geq o \big( \| x^{k,i} - x^k \| \big) \end{equation} for all sufficiently large $i\in\mathbb{N}$. Furthermore, \eqref{Eq:Sub1i} shows that \begin{equation}\label{Eq:Sub5i} \langle \nabla f (x^k),x^{k,i} - x^k\rangle + \phi \big( x^{k,i} \big) - \phi (x^k) \leq - \frac{\gamma_{k,i}}{2} \| x^{k,i} - x^k \|^2 . 
\end{equation} Using a Taylor expansion of the function $ f $ and exploiting \eqref{Eq:Sub4i}, \eqref{Eq:Sub5i}, we obtain \begin{align*} \psi ( x^{k,i}) - \psi (x^k) & = f ( x^{k,i}) + \phi (x^{k,i}) - f (x^k) - \phi (x^k) \\ & = \langle \nabla f (x^k),x^{k,i}-x^k\rangle + \phi ( x^{k,i} ) - \phi (x^k) + o \big( \| x^{k,i} - x^k \| \big) \\ & \leq - \frac{\gamma_{k,i}}{2} \| x^{k,i} - x^k \|^2 + o \big( \| x^{k,i} - x^k \| \big) \\ & \leq - \delta \frac{\gamma_{k,i}}{2} \| x^{k,i} - x^k \|^2 \end{align*} for all $ i \in \mathbb{N} $ sufficiently large. This, however, means that the acceptance criterion \eqref{Eq:StepCrit} is valid for sufficiently large $i\in\mathbb{N}$, contradicting our assumption. This completes the proof. \end{proof} Let us note that the above proof actually shows that the inner loop from \cref{step:subproblem_solve_MonotoneProxGrad} of \cref{Alg:MonotoneProxGrad} is either finite, or we have $\gamma_{k,i_l}\|x^{k,i_l}-x^k\|\to 0$ along a subsequence $i_l\to\infty$. Rewriting \eqref{eq:optimality_condition_subproblem} by means of \begin{equation}\label{eq:rewritten_optimality_condition_subproblem} \nabla f(x^{k,i_l})-\nabla f(x^k)+\gamma_{k,i_l}(x^k-x^{k,i_l}) \in \nabla f(x^{k,i_l})+\widehat\partial\phi(x^{k,i_l}) \end{equation} and recalling that $\nabla f\colon\mathbb X\to\mathbb X$ is continuous motivates us to also use \[ \|\nabla f(x^{k,i})-\nabla f(x^k)+\gamma_{k,i}(x^k-x^{k,i})\|\leq\tau_\textup{abs} \] for some $\tau_\textup{abs}>0$ as a termination criterion of the inner loop since this encodes, in some sense, approximate M-stationarity of $x^{k,i}$ for \eqref{Eq:P} (note that taking the limit $l\to\infty$ in \eqref{eq:rewritten_optimality_condition_subproblem} would recover the limiting subdifferential of $\phi$ at $x^k$ since we have $x^{k,i_l}\to_\phi x^k$ by \eqref{eq:robustsum}). A critical step for the convergence theory of \Cref{Alg:MonotoneProxGrad} is provided by the following result. \begin{proposition}\label{Prop:xdiff} Let \Cref{Ass:ProxGradMonotone} hold. Then each sequence $ \{ x^k \} $ generated by \Cref{Alg:MonotoneProxGrad} satisfies $ \| x^{k+1} - x^k \| \to 0 $. \end{proposition} \begin{proof} First recall that the sequence $ \{ x^k \} $ is well-defined by \Cref{Lem:StepsizeFinite}. Using the acceptance criterion \eqref{Eq:StepCrit}, we get \begin{equation}\label{Eq:xdiff-1} \psi (x^{k+1}) \leq \psi (x^k) - \delta \frac{\gamma_k}{2} \| x^{k+1} - x^k \|^2 \leq \psi (x^k) \end{equation} for all $ k \in \mathbb{N} $. Hence, the sequence $ \{ \psi (x^k) \} $ is monotonically decreasing. Since $ \psi $ is bounded from below on $ \operatorname{dom}\phi $ by \Cref{Ass:ProxGradMonotone}~\ref{item:psi_bounded} and $ \{ x^k \} \subset \operatorname{dom}\phi $, it follows that this sequence is convergent. Therefore, \eqref{Eq:xdiff-1} implies \begin{equation*} \gamma_k \| x^{k+1} - x^k \|^2 \to 0 \quad \text{for } k \to \infty . \end{equation*} Hence the assertion follows from the fact that, by construction, we have $ \gamma_k \geq \gamma_{\min} > 0 $ for all $ k \in \mathbb{N} $. \end{proof} A refined analysis gives the following result. \begin{proposition}\label{Prop:gammaxdiff} Let \Cref{Ass:ProxGradMonotone} hold, let $\{x^k\}$ be a sequence generated by \cref{Alg:MonotoneProxGrad}, and let $ \{ x^k \}_K $ be a subsequence converging to some point $ x^* $. Then $ \gamma_k \| x^{k+1} - x^k \| \to_K 0 $ holds. \end{proposition} \begin{proof} If the subsequence $ \{ \gamma_k \}_K $ is bounded, the statement follows immediately from \Cref{Prop:xdiff}.
The remaining part of this proof therefore assumes that this subsequence is unbounded. Without loss of generality, we may assume that $ \gamma_k \to_K \infty $ and that the acceptance criterion \eqref{Eq:StepCrit} is violated in the first iteration of the inner loop for each $k\in K$. Then, for $ \hat{\gamma}_k := \gamma_k / \tau $, $k\in K$, we also have $\hat\gamma_k\to_K\infty$, but the corresponding vector $ \hat x^k := x^{k, i_k-1} $ does not satisfy the stepsize condition from \eqref{Eq:StepCrit}, i.e., we have \begin{equation}\label{Prop:gammaxdiff-1} \psi (\hat x^k) > \psi (x^k) - \delta \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 \quad \forall k \in K. \end{equation} On the other hand, since $ \hat{x}^k $ solves the corresponding subproblem \eqref{Eq:Subki} with $\hat{\gamma}_k = \gamma_{k,i_k-1}$, we have \begin{equation}\label{Prop:gammaxdiff-2} \langle \nabla f (x^k), \hat{x}^k - x^k\rangle + \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 + \phi ( \hat{x}^k ) - \phi (x^k) \leq 0 . \end{equation} We claim that this, in particular, implies $ \hat{x}^k \to_K x^* $. In fact, using \eqref{Prop:gammaxdiff-2}, the Cauchy-Schwarz inequality, and the monotonicity of $ \{ \psi (x^k) \} $, we obtain \begin{align*} \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 & \leq \| \nabla f (x^k) \| \| \hat{x}^k - x^k \| + \phi (x^k) - \phi ( \hat{x}^k ) \\ & = \| \nabla f (x^k) \| \| \hat{x}^k - x^k \| + \psi (x^k) - f (x^k) - \phi ( \hat{x}^k ) \\ & \leq \| \nabla f (x^k) \| \| \hat{x}^k - x^k \| + \psi (x^0) - f (x^k) - \phi ( \hat{x}^k ) . \end{align*} Since $ f $ is continuously differentiable and $ - \phi $ is bounded from above by an affine function in view of \Cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine}, this implies $ \| \hat{x}^k - x^k \| \to_K 0 $. In fact, if $ \{ \| \hat{x}^k - x^k \| \}_K $ were unbounded, then the left-hand side would grow more rapidly than the right-hand side, and if $ \{ \| \hat{x}^k - x^k \| \}_K $ were bounded but, at least along a subsequence, bounded away from zero, then the right-hand side would remain bounded, whereas the left-hand side would be unbounded along the corresponding subsequence. Now, by the mean-value theorem, there exists $ \xi^k $ on the line segment connecting $ x^k $ with $ \hat{x}^k $ such that \begin{equation}\label{Prop:gammaxdiff-3} \begin{aligned} \psi ( \hat{x}^k ) - \psi (x^k) &= f ( \hat{x}^k ) + \phi (\hat{x}^k) - f (x^k) - \phi (x^k) \\ &= \langle \nabla f (\xi^k),\hat{x}^k - x^k \rangle + \phi (\hat{x}^k) - \phi (x^k) . \end{aligned} \end{equation} Substituting the expression $ \phi (\hat{x}^k) - \phi (x^k) $ from \eqref{Prop:gammaxdiff-3} into \eqref{Prop:gammaxdiff-2} yields \begin{equation*} \langle \nabla f(x^k) - \nabla f ( \xi^k) ,\hat x^k - x^k \rangle + \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 + \psi ( \hat{x}^k ) - \psi (x^k) \leq 0. \end{equation*} Exploiting \eqref{Prop:gammaxdiff-1}, we therefore obtain \begin{align*} \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 & \leq - \langle \nabla f(x^k) - \nabla f ( \xi^k), \hat x^k - x^k \rangle + \psi (x^k) - \psi ( \hat{x}^k ) \\ & \leq \| \nabla f (x^k) - \nabla f ( \xi^k) \| \| \hat x^k - x^k \| + \delta \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 , \end{align*} which can be rewritten as \begin{equation}\label{Prop:gammaxdiff-4} ( 1 - \delta ) \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \| \leq \| \nabla f (x^k) - \nabla f ( \xi^k) \| \end{equation} (note that $ \hat{x}^k \neq x^k $ in view of \eqref{Prop:gammaxdiff-1}). 
Since $ x^k\to_K x^* $ (by assumption) and $ \hat{x}^k \to_K x^* $ (by the previous part of this proof), we also get $ \xi^k \to_K x^* $. Using $ \delta \in (0,1) $ and the continuous differentiability of $ f $, it follows from \eqref{Prop:gammaxdiff-4} that $ \hat{\gamma}_k \| \hat{x}^k - x^k \| \to_K 0 $. Finally, exploiting the fact that $ x^{k+1} $ and $ \hat{x}^k $ are solutions of the subproblems \eqref{Eq:Subki} with parameters $ \gamma_k $ and $ \hat{\gamma}_k $, respectively, we find \begin{align*} \langle \nabla f(x^k),x^{k+1} - x^k\rangle &+ \frac{\gamma_k}{2} \| x^{k+1} - x^k \|^2 + \phi (x^{k+1}) \\ & \leq \langle \nabla f (x^k), \hat{x}^k - x^k \rangle + \frac{\gamma_k}{2} \| \hat{x}^k - x^k \|^2 + \phi (\hat{x}^k), \\ \langle \nabla f(x^k), \hat{x}^k - x^k \rangle &+ \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 + \phi (\hat{x}^k) \\ & \leq \langle \nabla f(x^k), x^{k+1} - x^k \rangle + \frac{\hat{\gamma}_k}{2} \| x^{k+1} - x^k \|^2 + \phi (x^{k+1}). \end{align*} Adding these two inequalities and noting that $ \gamma_k = \tau \hat{\gamma}_k > \hat{\gamma}_k $ yields $ \| x^{k+1} - x^k \| \leq \| \hat{x}^k - x^k \| $ and, therefore, \begin{equation*} \gamma_k \| x^{k+1}- x^k \| = \tau \hat{\gamma}_k \| x^{k+1} - x^k \| \leq \tau \hat{\gamma}_k \| \hat{x}^k - x^k \| \to_K 0. \end{equation*} This completes the proof. \end{proof} The above technique of proof implies a boundedness result for the sequence $ \{ \gamma_k \} _K$ if $ \nabla f$ satisfies a local Lipschitz property around the associated accumulation point of iterates. This observation is stated explicitly in the following result. \begin{corollary}\label{Cor:LipschitzCase} Let \Cref{Ass:ProxGradMonotone} hold, let $\{x^k\}$ be a sequence generated by \Cref{Alg:MonotoneProxGrad}, let $ \{ x^k \}_K $ be a subsequence converging to some point $ x^* $, and assume that $ \nabla f \colon \mathbb X\to\mathbb X$ is locally Lipschitz continuous around $ x^* $. Then the corresponding subsequence $ \{ \gamma_k \}_K $ is bounded. \end{corollary} \begin{proof} We may argue as in the proof of \cref{Prop:gammaxdiff}. Suppose, on the contrary, that $ \gamma_k \to_K \infty $. For each $k\in K$, define $ \hat{\gamma}_k $ and $ \hat{x}^k $ as in that proof, and let $ L > 0 $ denote the local Lipschitz constant of $ \nabla f $ around $ x^* $. Recall that $ x^k \to_K x^* $ (by assumption) and $ \hat{x}^k \to_K x^* $ (from the proof of \cref{Prop:gammaxdiff}). Exploiting \eqref{Prop:gammaxdiff-4}, we therefore obtain \begin{equation*} ( 1 - \delta ) \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \| \leq L \| x^k - \xi^k \| \leq L \| \hat{x}^k - x^k \| \end{equation*} for all $ k \in K $ sufficiently large, using the fact that $ \xi^k $ is on the line segment between $ x^k $ and $ \hat{x}^k $. Since $ \hat{\gamma}_k \to_K \infty $ and $ \hat{x}^k \neq x^k $, see once again \eqref{Prop:gammaxdiff-1}, this gives a contradiction. Hence, $ \{ \gamma_k \}_K $ stays bounded. \end{proof} The following is the main convergence result for \Cref{Alg:MonotoneProxGrad} which requires a slightly stronger regularity assumption on either $ f $ or $ \phi $. \begin{theorem}\label{Thm:ConvProxGrad} Assume that \Cref{Ass:ProxGradMonotone} holds while either $ \phi $ is continuous on $\operatorname{dom}\phi$ or $ \nabla f\colon\mathbb X\to\mathbb X $ is locally Lipschitz continuous. Then each accumulation point $ x^* $ of a sequence $ \{ x^k \} $ generated by \Cref{Alg:MonotoneProxGrad} is an M-stationary point of \eqref{Eq:P}. 
\end{theorem} \begin{proof} Let $ \{ x^k \}_K $ be a subsequence converging to $ x^* $. In view of \Cref{Prop:xdiff}, it follows that also the subsequence $ \{ x^{k+1} \}_K $ converges to $ x^* $. Furthermore, \Cref{Prop:gammaxdiff} yields $ \gamma_k \| x^{k+1} - x^k \| \to_K 0 $. The minimizing property of $ x^{k+1} $, Fermat's rule, and the sum rule \eqref{eq:regular_sum_rule} imply that \begin{equation}\label{Eq:OptCondSub} 0 \in \nabla f (x^{k}) + \gamma_k (x^{k+1}-x^k) + \widehat\partial \phi (x^{k+1}) \end{equation} holds for each $k\in K$. Hence, if we can show $ \phi (x^{k+1}) \to_K \phi (x^*) $, we can take the limit $ k \to_K \infty $ in \eqref{Eq:OptCondSub} to obtain the desired statement $ 0 \in \nabla f (x^*) + \partial \phi (x^*) $. Due to \eqref{Eq:xdiff-1}, we find $\psi(x^{k+1})\leq \psi(x^0)$ for each $k\in K$. Taking the limit $k\to_K\infty$ while respecting the lower semicontinuity of $\phi$ gives $\psi(x^*)\leq\psi(x^0)$, and due to $x^0\in\operatorname{dom}\phi$, we find $x^*\in\operatorname{dom}\phi$. Thus, the condition $ \phi (x^{k+1}) \to_K \phi (x^*) $ obviously holds if $ \phi $ is continuous on its domain since all iterates $ x^k $ generated by \Cref{Alg:MonotoneProxGrad} as well as $x^*$ belong to $\operatorname{dom}\phi$. Hence, it remains to consider the situation where $ \phi $ is only lower semicontinuous, but $ \nabla f $ is locally Lipschitz continuous. From $ x^{k+1} \to_K x^* $ and the lower semicontinuity of $\phi$, we find \begin{equation*} \phi (x^*) \leq \liminf_{k \in K} \phi (x^{k+1}) \leq \limsup_{k \in K} \phi (x^{k+1}). \end{equation*} It therefore suffices to show that $ \limsup_{k \in K} \phi (x^{k+1}) \leq \phi (x^*) $ holds. Since $ x^{k+1} $ solves the subproblem \eqref{Eq:Subki} with parameter $ \gamma_k $, we obtain \begin{align*} \langle \nabla f (x^k),x^{k+1}-x^k \rangle &+ \frac{\gamma_k}{2} \| x^{k+1} - x^k \|^2 + \phi (x^{k+1} ) \\ & \leq \langle \nabla f(x^k),x^* - x^k \rangle + \frac{\gamma_k}{2} \| x^* - x^k \|^2 + \phi (x^*) \end{align*} for each $k\in K$. We now take the upper limit over $K$ on both sides. Using the continuity of $ \nabla f $, the convergences $ x^{k+1} - x^k \to_K 0 $ as well as $ \gamma_k \| x^{k+1} - x^k \|^2 \to_K 0 $ (see \Cref{Prop:xdiff,Prop:gammaxdiff}), and taking into account that $ \gamma_k \| x^k - x^* \|^2 \to_K 0 $ due to the boundedness of the subsequence $ \{ \gamma_k \}_K $ in this situation, see \cref{Cor:LipschitzCase}, we obtain $ \limsup_{k \in K} \phi (x^{k+1}) \leq \phi (x^*) $. Altogether, we therefore get $ \phi (x^{k+1}) \to_K \phi (x^*) $, and this completes the proof. \end{proof} Note that $ \phi $ being continuous on $ \operatorname{dom} \phi$ is an assumption which holds, e.g., if $ \phi $ is the indicator function of a closed set, see \Cref{Rem:constrained_opt}. Therefore, \Cref{Thm:ConvProxGrad} provides a global convergence result for constrained optimization problems with an arbitrary continuously differentiable objective function over any closed (not necessarily convex) feasible set. Moreover, the previous convergence result also holds for a general lower semicontinuous function $ \phi $ provided that $ \nabla f $ is locally Lipschitz continuous. This includes, for example, sparse optimization problems in $\mathbb X\in\{\mathbb{R}^n,\mathbb{R}^{n\times m}\}$ involving the so-called $ \ell_0$-quasi-norm, which counts the number of nonzero entries of the input vector, as a penalty term or optimization problems in $\mathbb X:=\mathbb{R}^{n\times m}$ comprising rank penalties. 
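In the $\ell_0$ case, the subproblems \eqref{Eq:Subki} can even be solved in closed form by componentwise hard thresholding. The following Python snippet is a minimal sketch of such a subproblem solver, assuming $\phi(x)=\lambda\|x\|_0$ with a weight $\lambda>0$; the function names and the parameter $\lambda$ are purely illustrative and not part of the preceding analysis.
\begin{verbatim}
import numpy as np

def prox_l0(z, lam, gamma):
    # Componentwise hard thresholding: solves
    #   min_x (gamma/2)*||x - z||^2 + lam*||x||_0
    # by keeping z_i whenever (gamma/2)*z_i^2 exceeds lam.
    x = z.copy()
    x[np.abs(z) <= np.sqrt(2.0 * lam / gamma)] = 0.0
    return x

def subproblem_l0(xk, grad_fk, lam, gamma):
    # Minimizer of the subproblem for phi = lam*||.||_0, obtained by
    # hard thresholding of the forward step xk - grad_fk/gamma.
    return prox_l0(xk - grad_fk / gamma, lam, gamma)
\end{verbatim}
Ties, i.e., components with $(\gamma/2)z_i^2=\lambda$, may be resolved either way; both choices yield a global minimizer of the corresponding subproblem.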
Note that we still do not require the global Lipschitz continuity of $ \nabla f $. However, it is an open question whether the previous convergence result also holds for the general setting where $ f $ is only continuously differentiable and $ \phi $ is just lower semicontinuous. \begin{remark}\label{rem:termination_MonotoneGroxGrad} Let $\{x^k\}$ be a sequence generated by \cref{Alg:MonotoneProxGrad}. In iteration $k\in\mathbb{N}$, $x^{k+1}$ satisfies the necessary optimality condition \eqref{Eq:OptCondSub} of the subproblem \eqref{Eq:Subki}. Hence, from the next iteration's point of view, we obtain \[ \gamma_{k-1}(x^{k-1}-x^k)+\nabla f(x^k)-\nabla f(x^{k-1}) \in \nabla f(x^k)+\widehat{\partial}\phi(x^k) \] for each $k\in\mathbb{N}$ with $k\geq 1$. This justifies the evaluation of the termination criterion \begin{equation}\label{eq:termination} \norm{\gamma_{k-1}(x^{k-1}-x^k)+\nabla f(x^k)-\nabla f(x^{k-1})}\leq\tau_\textup{abs} \end{equation} for some $\tau_\textup{abs}>0$ since this means that $x^k$ is, in some sense, approximately M-stationary for \eqref{Eq:P}. Observe that, along a subsequence $\{x^k\}_K$ satisfying $x^{k-1}\to_K x^*$ for some $x^*$, \cref{Prop:xdiff,Prop:gammaxdiff} yield $x^k\to_K x^*$ and $\gamma_{k-1}(x^k-x^{k-1})\to_K0$ under appropriate assumptions, which means that \eqref{eq:termination} is satisfied for large enough $k\in K$ due to continuity of $\nabla f\colon\mathbb X\to\mathbb X$, see the discussion after \cref{Lem:StepsizeFinite} as well. \end{remark} Recall that the existence of accumulation points is guaranteed by the coercivity of the function $ \psi $. A simple criterion for the convergence of the entire sequence $ \{ x^k \} $ is provided by the following remark. \begin{remark}\label{rem:conv_entire_sequence} Let $ \{ x^k \} $ be any sequence generated by \Cref{Alg:MonotoneProxGrad} such that $ x^* $ is an isolated accumulation point of this sequence. Then the entire sequence converges to $ x^* $. This follows immediately from \cite[Lemma~4.10]{MoS1983} and the property of the proximal gradient method stated in \Cref{Prop:xdiff}. The accumulation point $ x^* $ is isolated, in particular, if $ f $ is twice continuously differentiable with $ \nabla^2 f(x^*) $ being positive definite and $ \phi $ is convex. In this situation, $ x^* $ is a strict local minimum of $ \psi $ and therefore the only stationary point of $ \psi $ in a neighborhood of $ x^* $. Since, by \Cref{Thm:ConvProxGrad}, every accumulation point is stationary, it follows that $ x^* $ is necessarily an isolated stationary point in this situation and, thus, convergence of the whole sequence $ \{ x^k \} $ to $ x^* $ follows. \end{remark} \section{Nonmonotone Proximal Gradient Method}\label{Sec:GenSpecGradNM} The method to be presented here is a nonmonotone version of the proximal gradient method from the previous section. The kind of nonmonotonicity used here was introduced by Grippo et al.\ \cite{GrippoLamparielloLucidi1986} for a class of smooth unconstrained optimization problems and then discussed, in the framework of composite optimization problems, by Wright et al.\ \cite{WrightNowakFigueiredo2009} as well as in some subsequent papers. We first state the precise algorithm and investigate its convergence properties. The relation to the existing convergence results is postponed until the end of this section. 
\begin{algorithm}[Nonmonotone Proximal Gradient Method]\leavevmode \label{Alg:NonmonotoneProxGrad} \begin{algorithmic}[1] \REQUIRE $ \tau > 1$, $0 < \gamma_{\min} \leq \gamma_{\max} < \infty$, $m \in \mathbb{N}$, $\delta\in (0,1)$, $x^0 \in \operatorname{dom}\phi $ \STATE Set $k := 0$. \WHILE{A suitable termination criterion is violated at iteration $ k $} \STATE Set $ m_k := \min \{ k, m \} $ and choose $ \gamma_k^0 \in [ \gamma_{\min}, \gamma_{\max}] $. \STATE For $ i = 0, 1, 2, \ldots $, compute a solution $ x^{k,i} $ of \begin{equation}\label{Eq:NonSubki} \min_x \ f (x^k) + \langle \nabla f (x^k),x - x^k\rangle + \frac{\gamma_{k,i}}{2} \| x - x^k \|^2 + \phi (x), \quad x\in\mathbb X \end{equation} with $ \gamma_{k,i} := \tau^i \gamma_k^0 $, until the acceptance criterion \begin{equation}\label{Eq:NonStepCrit} \psi (x^{k,i}) \leq \max_{j=0, 1, \ldots, m_k} \psi (x^{k-j}) - \delta \frac{\gamma_{k,i}}{2} \| x^{k,i} - x^k \|^2 \end{equation} holds. \STATE Denote by $ i_k := i $ the terminal value, and set $ \gamma_k := \gamma_{k,i_k} $ and $ x^{k+1} := x^{k,i_k} $. \STATE Set $ k \leftarrow k + 1 $. \ENDWHILE \RETURN $x^k$ \end{algorithmic} \end{algorithm} The only difference between \Cref{Alg:MonotoneProxGrad} and \Cref{Alg:NonmonotoneProxGrad} is in the stepsize rule. More precisely, \Cref{Alg:NonmonotoneProxGrad} may be viewed as a generalization of \Cref{Alg:MonotoneProxGrad} since the particular choice $ m = 0 $ recovers \Cref{Alg:MonotoneProxGrad}. Numerically, in many examples, the choice $ m > 0 $ leads to better results and is therefore preferred in practice. On the other hand, for $ m > 0 $, we usually get a nonmonotone behavior of the function values $ \{\psi (x^k) \} $ which complicates the theory significantly. In addition, the nonmonotone proximal gradient method also requires stronger assumptions in order to prove a suitable convergence result. In particular, in addition to the requirements from \cref{Ass:ProxGradMonotone}, we need the following additional conditions on the data functions in order to proceed. \begin{assumption}\label{Ass:ProxGradNonmonotone} \begin{enumerate} \item\label{item:uniform_continuity} The function $ \psi $ is uniformly continuous on the sublevel set $\mathcal L_\psi(x^0):=\{x\in \mathbb X\,|\,\psi(x)\leq\psi(x^0)\}$. \item\label{item:continuity_phi} The function $\phi$ is continuous on $\operatorname{dom}\phi$. \end{enumerate} \end{assumption} Note that we always have $\mathcal L_\psi(x^0)\subset\operatorname{dom}\phi$ by the continuity of $f$. Furthermore, whenever $\psi$ is coercive, \cref{Ass:ProxGradNonmonotone}~\ref{item:continuity_phi} already implies \cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity} since $\mathcal L_\psi(x^0)$ would be a compact subset of $\operatorname{dom}\phi$ in this situation, and continuous functions are uniformly continuous on compact sets. Observe that coercivity of $\psi$ is an inherent property in many practically relevant settings. We further note that, in general, \cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity} does not imply \cref{Ass:ProxGradNonmonotone}~\ref{item:continuity_phi}, and the latter is a necessary requirement since, in our convergence theory, we will also evaluate the function $ \phi $ at some points resulting from an auxiliary sequence $ \{ \hat{x}^k \} $ which may not belong to the level set $\mathcal L_\psi(x^0) $. For the convergence theory, we assume implicitly that \cref{Alg:NonmonotoneProxGrad} generates an infinite sequence $\{x^k\}$. 
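For illustration, the following Python sketch of \Cref{Alg:NonmonotoneProxGrad} combines the nonmonotone acceptance test \eqref{Eq:NonStepCrit} with the termination criterion \eqref{eq:termination}; a user-supplied routine \texttt{prox\_phi} is assumed to solve the subproblems \eqref{Eq:NonSubki}. All names, default values, and the tolerance are purely illustrative, and the choice $m=0$ recovers \Cref{Alg:MonotoneProxGrad}.
\begin{verbatim}
import numpy as np

def nonmonotone_prox_grad(grad_f, psi, prox_phi, x0, m=10, tau=2.0,
                          gamma0=1.0, gamma_min=1e-3, gamma_max=1e3,
                          delta=1e-4, tol=1e-8, max_iter=1000):
    # grad_f(x):          gradient of the smooth part f
    # psi(x):             objective value f(x) + phi(x)
    # prox_phi(z, gamma): solves min_x (gamma/2)*||x - z||^2 + phi(x)
    x = np.asarray(x0, dtype=float)
    psi_hist = [psi(x)]                      # psi(x^0), ..., psi(x^k)
    gamma_prev = x_prev = g_prev = None
    for k in range(max_iter):
        g = grad_f(x)
        if k >= 1:                           # termination criterion (eq:termination)
            res = gamma_prev * (x_prev - x) + g - g_prev
            if np.linalg.norm(res) <= tol:
                break
        psi_ref = max(psi_hist[-(min(k, m) + 1):])   # max_{j=0,...,m_k} psi(x^{k-j})
        gamma = np.clip(gamma0, gamma_min, gamma_max)
        while True:                          # inner loop: gamma_{k,i} = tau^i * gamma_k^0
            x_new = prox_phi(x - g / gamma, gamma)    # solves (Eq:NonSubki)
            decrease = 0.5 * delta * gamma * np.dot(x_new - x, x_new - x)
            if psi(x_new) <= psi_ref - decrease:      # acceptance test (Eq:NonStepCrit)
                break
            gamma *= tau
        gamma_prev, x_prev, g_prev = gamma, x, g
        x = x_new
        psi_hist.append(psi(x))
    return x
\end{verbatim}
In a sparse setting, \texttt{prox\_phi} could be the hard-thresholding map sketched in \cref{Sec:GenSpecGrad}; recall, however, that for $m>0$ the convergence theory requires \cref{Ass:ProxGradNonmonotone}, which, as discussed at the end of this section, rules out the $\ell_0$-quasi-norm.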
We first note that the stepsize rule in the inner loop of \cref{Alg:NonmonotoneProxGrad} is always finite. Since \[ \psi(x^k)\leq\max\limits_{j=0,1,\ldots,m_k}\psi(x^{k-j}) \] this observation follows immediately from \Cref{Lem:StepsizeFinite}. Throughout the section, for each $k\in\mathbb{N}$, let $l(k)\in\{k-m_k,\ldots,k\}$ be an index such that \[ \psi(x^{l(k)})=\max\limits_{j=0,1,\ldots,m_k}\psi(x^{k-j}) \] is valid. We already mentioned that $\{\psi(x^k)\}$ may possess a nonmonotone behavior. However, as the following lemma shows, $\{\psi(x^{l(k)})\}$ is monotonically decreasing. \begin{lemma}\label{Lem:decrease_condition_in_nonmonotone_framework} Let \cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine} hold and let $\{x^k\}$ be a sequence generated by \cref{Alg:NonmonotoneProxGrad}. Then $\{\psi(x^{l(k)})\}$ is monotonically decreasing. \end{lemma} \begin{proof} The nonmonotone stepsize rule from \eqref{Eq:NonStepCrit} can be rewritten as \begin{equation}\label{Eq:NA1} \psi(x^{k+1}) \leq \psi(x^{l(k)}) - \delta \frac{\gamma_k}{2} \| x^{k+1} - x^k \|^2. \end{equation} Using $ m_{k+1} \leq m_k + 1 $, we find \begin{align*} \psi ( x^{l(k+1)}) & = \max_{j=0,1, \ldots, m_{k+1}}\psi (x^{k+1-j}) \\ & \leq \max_{j=0,1, \ldots, m_k +1}\psi (x^{k+1-j}) \\ & = \max \left\{ \max_{j = 0, 1, \ldots, m_k}\psi (x^{k-j}), \psi (x^{k+1}) \right\} \\ & = \max \left\{ \psi (x^{l(k)}),\psi (x^{k+1}) \right\} \\ & = \psi (x^{l(k)}), \end{align*} where the last equality follows from \eqref{Eq:NA1}. This shows the claim. \end{proof} As a corollary of the above result, we obtain that the iterates of \cref{Alg:NonmonotoneProxGrad} belong to the level set $\mathcal L_\psi(x^0)$. \begin{corollary}\label{Cor:level_set_condition_nonmonotone_framework} Let \cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine} hold and let $\{x^k\}$ be a sequence generated by \cref{Alg:NonmonotoneProxGrad}. Then $\{x^k\},\{x^{l(k)}\}\subset\mathcal L_\psi(x^0)$ holds. \end{corollary} \begin{proof} Noting that $l(0)=0$ holds by construction, \cref{Lem:decrease_condition_in_nonmonotone_framework} and \eqref{Eq:NA1} yield the estimate $\psi(x^{k+1})\leq\psi(x^{l(k)})\leq\psi(x^{l(0)})=\psi(x^0)$ for each $k\in\mathbb{N}$ which shows the claim. \end{proof} The counterpart of \Cref{Prop:xdiff} is significantly more difficult to prove in the nonmonotone setting. In fact, it is this central result which requires the uniform continuity of the objective function $ \psi $ from \cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity}. Though its proof is essentially the one from \cite{WrightNowakFigueiredo2009}, we present all details since they turn out to be of some importance for the discussion at the end of this section. \begin{proposition}\label{Prop:xdiff_NM} Let \Cref{Ass:ProxGradMonotone} and \cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity} hold. Then each sequence $\{x^k\}$ generated by \cref{Alg:NonmonotoneProxGrad} satisfies $ \| x^{k+1} - x^k \| \to 0 $. \end{proposition} \begin{proof} Since $ \psi $ is bounded from below due to \cref{Ass:ProxGradMonotone}~\ref{item:psi_bounded}, \cref{Lem:decrease_condition_in_nonmonotone_framework} implies \begin{equation}\label{Eq:Limitfk} \lim_{k \to \infty}\psi ( x^{l(k)}) = \psi^* \end{equation} for some finite $\psi^*\in\mathbb{R}$. From \cref{Cor:level_set_condition_nonmonotone_framework}, we find $\{x^{l(k)}\}\subset\mathcal L_\psi(x^0)$. 
Applying \eqref{Eq:NA1} with $k$ replaced by $l(k)-n-1$ for some $n\in\mathbb{N}$ gives $\psi(x^{l(k)-n})\leq\psi(x^{l(l(k)-n-1)})\leq\psi(x^0)$, i.e., $\{x^{l(k)-n}\}\subset\mathcal L_\psi(x^0)$ (here, we assume implicitly that $k$ is large enough such that no negative indices $l(k)-n-1$ occur). More precisely, for $n=0$, we have \begin{equation*} \psi (x^{l(k)}) -\psi (x^{l(l(k)-1)}) \leq - \delta \frac{\gamma_{l(k)-1}}{2} \| x^{l(k)} - x^{l(k)-1} \|^2 \leq 0. \end{equation*} Taking the limit $ k \to \infty $ in the previous inequality and using \eqref{Eq:Limitfk}, we therefore obtain \begin{equation*} \lim_{k \to \infty} \gamma_{l(k)-1} \| x^{l(k)} - x^{l(k)-1} \|^2 = 0 . \end{equation*} Since $ \gamma_k \geq \gamma_{\min} > 0 $ for all $ k \in \mathbb{N} $, we get \begin{equation}\label{Eq:Ind1} \lim_{k \to \infty} d^{l(k)-1} = 0, \end{equation} where $ d^k := x^{k+1} - x^k $ for all $ k \in \mathbb{N} $. Using \eqref{Eq:Limitfk} and \eqref{Eq:Ind1}, it follows that \begin{equation}\label{Eq:Ind2} \psi^* = \lim_{k \to \infty}\psi (x^{l(k)}) = \lim_{k \to \infty} \psi \big( x^{l(k)-1} + d^{l(k)-1} \big) = \lim_{k \to \infty} \psi (x^{l(k)-1}), \end{equation} where the last equality takes into account the uniform continuity of $ \psi $ from \cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity} and \eqref{Eq:Ind1}. We will now prove, by induction, that the limits \begin{equation}\label{Eq:Indj} \lim_{k \to \infty} d^{l(k)-j} = 0,\qquad \lim_{k \to \infty}\psi (x^{l(k)-j}) = \psi^* \end{equation} hold for all $ j \in\mathbb{N}$ with $j\geq 1$. We already know from \eqref{Eq:Ind1} and \eqref{Eq:Ind2} that \eqref{Eq:Indj} holds for $ j = 1 $. Suppose that \eqref{Eq:Indj} holds for some $ j \geq 1 $. We need to show that it holds for $ j+1 $. Using \eqref{Eq:NA1} with $ k $ replaced by $ l(k)-j-1 $, we have \begin{equation*} \psi (x^{l(k)-j}) \leq \psi (x^{l(l(k)-j-1)}) - \delta \frac{\gamma_{l(k)-j-1}}{2} \| d^{l(k)-j-1} \|^2 \end{equation*} (again, we assume implicitly that $ k $ is large enough such that $ l(k)-j-1 $ is nonnegative). Rearranging this expression and using $ \gamma_k \geq \gamma_{\min} $ for all $ k $ yields \begin{equation*} \| d^{l(k)-j-1} \|^2 \leq \frac{2}{\gamma_{\min} \delta} \big(\psi(x^{l(l(k)-j-1)}) -\psi (x^{l(k)-j}) \big). \end{equation*} Taking $ k \to \infty $, using \eqref{Eq:Limitfk}, as well as the induction hypothesis, it follows that \begin{equation}\label{Eq:dstar} \lim_{k \to \infty} d^{l(k)-j-1} = 0, \end{equation} which proves the induction step for the first limit in \eqref{Eq:Indj}. The second limit then follows from \begin{equation*} \lim_{k \to \infty} \psi \big( x^{l(k)-(j+1)} \big) = \lim_{k \to \infty} \psi \big( x^{l(k)-(j+1)} + d^{l(k)-j-1} \big) = \lim_{k \to \infty} \psi \big( x^{l(k)-j} \big) = \psi^*, \end{equation*} where the first equation exploits \eqref{Eq:dstar} together with the uniform continuity of $ \psi $ from \cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity} and $\{x^{l(k)-j}\},\{x^{l(k)-(j+1)}\}\subset\mathcal L_\psi(x^0)$, whereas the final equation is the induction hypothesis. In the last step of our proof, we now show that $ \lim_{k \to \infty} d^k = 0 $ holds. Suppose that this is not true. Then there is a (suitably shifted, for notational simplicity) subsequence $ \{ d^{k-m-1} \}_{k \in K} $ and a constant $ c > 0 $ such that \begin{equation}\label{Eq:Contrad} \| d^{k-m-1} \| \geq c \quad \forall k \in K. 
\end{equation} Now, for each $ k \in K $, the corresponding index $ l(k) $ is one of the indices $ k - m, k - m + 1, \ldots, k $. Hence, we can write $ k - m - 1 = l(k) - j_k $ for some index $ j_k \in \{ 1, 2, \ldots, m+1 \} $. Since there are only finitely many possible indices $ j_k $, we may assume without loss of generality that $ j_k = j $ holds for some fixed index $ j \in \{1,\ldots,m+1\}$. Then \eqref{Eq:Indj} implies \begin{equation*} \lim_{k \to_K \infty} d^{k-m-1} = \lim_{k \to_K \infty} d^{l(k) - j} = 0. \end{equation*} This contradicts \eqref{Eq:Contrad} and therefore completes the proof. \end{proof} \begin{theorem}\label{Thm:ConvProxGradNM} Assume that \Cref{Ass:ProxGradMonotone,Ass:ProxGradNonmonotone} hold and let $\{x^k\}$ be a sequence generated by \cref{Alg:NonmonotoneProxGrad}. Suppose that $x^*$ is an accumulation point of $\{x^k\}$ such that $x^k\to_K x^*$ holds along a subsequence $k\to_K\infty$. Then $ x^* $ is an M-stationary point of \eqref{Eq:P}, and $\gamma_k(x^{k+1}-x^k)\to_K 0$ is valid. \end{theorem} \begin{proof} Since $ \{ x^k \}_K $ is a subsequence converging to $ x^* $, it follows from \cref{Prop:xdiff_NM} that also the subsequence $ \{ x^{k+1} \}_K $ converges to $ x^* $. We note that $x^*\in\operatorname{dom}\phi$ follows from \cref{Cor:level_set_condition_nonmonotone_framework} by closedness of $\mathcal L_\psi(x^0)$. The minimizing property of $ x^{k+1} $ for \eqref{Eq:NonSubki} together with Fermat's rule and the sum rule from \eqref{eq:regular_sum_rule} imply that the necessary optimality condition \eqref{Eq:OptCondSub} holds for each $k\in K$. We claim that the subsequence $ \{ \gamma_k \}_K $ is bounded. Assume, by contradiction, that this is not true. Without loss of generality, let us assume that $ \gamma_k \to_K \infty $ and that the acceptance criterion \eqref{Eq:NonStepCrit} is violated in the first iteration of the inner loop for each $k\in K$. Setting $ \hat{\gamma}_k := \gamma_k / \tau $ for each $k\in K$, $\{\hat\gamma_k\}_K$ also tends to infinity, but the corresponding vectors $ \hat x^k := x^{k, i_k-1} $, $k\in K$, do not satisfy the stepsize condition from \eqref{Eq:NonStepCrit}, i.e., we have \begin{equation}\label{Eq:ContraStepSub} \psi (\hat x^k) > \max_{j=0, 1, \ldots, m_k} \psi (x^{k-j}) - \delta \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 \qquad\forall k\in K. \end{equation} On the other hand, since $ \hat{x}^k = x^{k,i_k-1} $ solves the corresponding subproblem \eqref{Eq:Subki} with $ \hat{\gamma}_k = \gamma_{k, i_k-1} $, we have \begin{equation}\label{Eq:optimality_subproblem_previous_inner_iteration} \langle \nabla f (x^k),\hat{x}^k - x^k\rangle + \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 + \phi ( \hat{x}^k ) \leq \phi (x^k) \end{equation} for each $k\in K$. Due to $ \hat{\gamma}_k \to_K \infty $ and since $ \phi $ is bounded from below by an affine function due to \cref{Ass:ProxGradMonotone}~\ref{item:phi_bounded_affine} while $ \phi $ is continuous on its domain by \cref{Ass:ProxGradNonmonotone}~\ref{item:continuity_phi} (which yields boundedness of the right-hand side of \eqref{Eq:optimality_subproblem_previous_inner_iteration}), this implies $ \hat x^k - x^k \to_K 0 $. Consequently, we have $\hat{x}^k\to_K x^* $ as well. 
Now, if $\hat\gamma_k\|\hat x^k-x^k\|\to_{K'} 0$ holds along a subsequence $k\to_{K'}\infty$ such that $K'\subset K$, then, due to \[ 0\in \nabla f(x^k)+\hat\gamma_k(\hat x^k-x^k)+\widehat\partial\phi(\hat x^k), \] which holds for each $k\in K'$ by means of Fermat's rule and the sum rule \eqref{eq:regular_sum_rule}, we immediately see that $x^*$ is an M-stationary point of \eqref{Eq:P} by taking the limit $k\to_{K'}\infty$ and exploiting the continuity of $\phi$ on $\operatorname{dom}\phi$ from \cref{Ass:ProxGradNonmonotone}~\ref{item:continuity_phi}. Thus, for the remainder of the proof, we may assume that there is a constant $c>0$ such that \begin{equation*} \hat{\gamma}_k \| \hat{x}^k - x^k \| \geq c \end{equation*} holds for each $k\in K$. Further, we then also get \begin{equation*} ( 1- \delta ) \frac{\hat{\gamma}_k}{2} \| \hat{x}^k - x^k \|^2 \geq \frac{1- \delta}{2} c \| \hat{x}^k - x^k \| \geq o \big( \| \hat{x}^k - x^k \| \big) \end{equation*} for all $ k \in K $ sufficiently large. Rearranging \eqref{Eq:optimality_subproblem_previous_inner_iteration} gives us \[ \langle\nabla f(x^k) ,\hat{x}^k - x^k \rangle + \phi (\hat{x}^k) - \phi (x^k) \leq -\frac{\hat{\gamma}_k}{2} \| \hat{x}^k- x^k \|^2 \] for each $k\in K$. From the mean-value theorem, we obtain some $\xi^k$ on the line segment between $\hat x^k$ and $x^k$ such that \begin{align*} &\psi ( \hat{x}^k) - \max_{j= 0, 1, \ldots, m_k} \psi (x^{k-j}) \\ &\qquad \leq \psi ( \hat{x}^k) - \psi (x^k) \\ &\qquad = \langle \nabla f( \xi^k) , \hat{x}^k - x^k \rangle + \phi ( \hat{x}^k ) - \phi (x^k) \\ & \qquad = \langle \nabla f(x^k) , \hat{x}^k - x^k \rangle + \phi ( \hat{x}^k ) - \phi (x^k) + \langle \nabla f ( \xi^k) - \nabla f(x^k) , \hat{x}^k - x^k \rangle \\ & \qquad \leq - \frac{\hat{\gamma}_k}{2} \| \hat{x}^k- x^k \|^2 + o (\| \hat{x}^k - x^k \|) \\ &\qquad \leq - \delta \frac{\hat{\gamma}_k}{2} \| \hat{x}^k- x^k \|^2 \end{align*} for all $ k \in K $ sufficiently large. This contradiction to \eqref{Eq:ContraStepSub} shows that the sequence $ \{ \gamma_k \}_K $ is bounded. Finally, the continuity of $\phi$ from \cref{Ass:ProxGradNonmonotone}~\ref{item:continuity_phi} gives $\phi(x^{k+1})\to_K\phi(x^*)$ due to $x^{k+1}\to_K x^*$. Thus, recalling $x^k\to_K x^*$ and the boundedness of $\{\gamma_k\}_K$, we find $\gamma_k(x^{k+1}-x^k)\to_K 0$, and taking the limit $k\to_K\infty$ in \eqref{Eq:OptCondSub} gives us M-stationarity of $x^*$ for \eqref{Eq:P}. \end{proof} \begin{remark}\label{rem:remarks_regarding_ProxGradNM} \begin{enumerate} \item Note that \cref{Ass:ProxGradMonotone,Ass:ProxGradNonmonotone} do not comprise any Lipschitz conditions on $\nabla f$. \item The results in this section recover the findings from \cite[Section~4]{GuoDeng2021} and \cite[Section~3]{JiaKanzowMehlitzWachsmuth2021} which were obtained in the special situation where $\phi$ is the indicator function associated with a closed set, see \cref{Rem:constrained_opt} as well. \item Based on \cref{Thm:ConvProxGradNM}, \eqref{eq:termination} also provides a reasonable termination criterion for \cref{Alg:NonmonotoneProxGrad}, see \cref{rem:termination_MonotoneGroxGrad} as well. \item In view of \cref{Prop:xdiff_NM}, it follows in the same way as in \cref{rem:conv_entire_sequence} that the entire sequence $ \{ x^k \} $ generated by \cref{Alg:NonmonotoneProxGrad} converges if there exists an isolated accumulation point. 
\end{enumerate} \end{remark} The uniform continuity of $ \psi $, which is demanded in \cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity}, is obviously a much stronger assumption than the one used in the previous section for the monotone proximal gradient method. In particular, this assumption rules out applications where $ \phi $ is given by the $ \ell_0 $-quasi-norm. Nevertheless, the theory still covers the situation where the role of $ \phi $ is played by an $\ell_p$-type penalty function for $p\in(0,1)$ over $\mathbb X\in\{\mathbb{R}^n,\mathbb{R}^{n\times m}\}$, which is known to promote sparse solutions. More precisely, this choice is popular in sparse optimization if the more common $ \ell_1 $-norm does not provide satisfactory sparsity results, and the application of the $ \ell_0 $-quasi-norm seems too difficult, see \cite{BianChen2015,Chartrand2007,ChenGuoLuYe2017,DeMarchiJiaKanzowMehlitz2022,LiuDaiMa2015,MarjanovicSolo2012} for some applications and numerical results based on the $ \ell_p $-quasi-norm or closely related expressions. We would like to note that uniform continuity is a standard assumption in the context of nonmonotone stepsize rules involving acceptance criteria of type \eqref{Eq:NonStepCrit}, see \cite[page 710]{GrippoLamparielloLucidi1986}. We close this section with a discussion on existing convergence results for nonmonotone proximal gradient methods. To the best of our knowledge, the first one can be found in \cite{WrightNowakFigueiredo2009}. The authors prove convergence under the assumptions that $ f $ is differentiable with a globally Lipschitz continuous gradient and that $ \phi $ is real-valued and convex, see \cite[Section~II.G]{WrightNowakFigueiredo2009}. Implicitly, however, they also exploit the uniform continuity of $ \psi = f + \phi $ in their proof of \cite[Lemma~4]{WrightNowakFigueiredo2009}, a result analogous to \Cref{Prop:xdiff_NM}, without stating this assumption explicitly. Taking this into account, our \Cref{Ass:ProxGradNonmonotone}~\ref{item:uniform_continuity} is actually weaker than the requirements used in \cite{WrightNowakFigueiredo2009}, so that the results of this section can be viewed as a generalization of the convergence theory from \cite{WrightNowakFigueiredo2009}. Furthermore, \cite[Section~3.1]{ChenGuoLuYe2017} and \cite[Appendix~A]{ChenLuPong2016} consider a nonmonotone proximal gradient method which is slightly different from \Cref{Alg:NonmonotoneProxGrad} since the acceptance criterion \eqref{Eq:NonStepCrit} is replaced by the simpler condition \[ \psi (x^{k,i}) \leq \max_{j=0, 1, \ldots, m_k} \psi (x^{k-j}) - \frac{\delta}{2} \| x^{k,i} - x^k \|^2. \] In \cite[Theorem~4.1]{ChenLuPong2016}, the authors obtain convergence to M-stationary points whenever $\psi$ is bounded from below as well as uniformly continuous on the level set $\mathcal L_\psi(x^0)$, $f$ possesses a Lipschitzian derivative on some enlargement of $\mathcal L_\psi(x^0)$, and $\phi$ is continuous. Clearly, our convergence analysis of \cref{Alg:NonmonotoneProxGrad} does not exploit any Lipschitzianity of $\nabla f$, so our assumptions are weaker than those used in \cite{ChenLuPong2016}. In \cite[Theorem~3.3]{ChenGuoLuYe2017}, the authors claim that the results from \cite{ChenLuPong2016} even hold when the continuity assumption on $\phi$ is dropped. 
The proof of \cite[Theorem~3.3]{ChenGuoLuYe2017}, however, relies on the outer semicontinuity property \eqref{Eq:osc} of the limiting subdifferential, which does not hold for general discontinuous functions $ \phi $, so this result is not reliable. Finally, let us mention that the two references \cite{LiLin2015,WangLiu2021} also consider nonmonotone (and accelerated) proximal gradient methods. These methods are not directly comparable to our algorithm since they are based on a different kind of nonmonotonicity. In any case, although the analysis in both papers works for merely lower semicontinuous functions $\phi$, the provided convergence theory requires $ \nabla f $ to be globally Lipschitz continuous. \section{Conclusions}\label{Sec:Final} In this paper, we demonstrated how the convergence analysis for monotone and nonmonotone proximal gradient methods can be carried out in the absence of (global) Lipschitz continuity of the derivative associated with the smooth function. Our results thus establish these algorithms as reasonable candidates for subproblem solvers within an augmented Lagrangian framework for the numerical treatment of constrained optimization problems with lower semicontinuous objective functions, see e.g.\ \cite{ChenGuoLuYe2017} where this approach has been suggested but suffers from an incomplete analysis, and \cite{GuoDeng2021,DeMarchiJiaKanzowMehlitz2022,JiaKanzowMehlitzWachsmuth2021} where this approach has been corrected and extended. Let us mention some remaining open problems regarding the investigated proximal gradient methods. First, it might be interesting to find minimum requirements which ensure global convergence of \cref{Alg:MonotoneProxGrad,Alg:NonmonotoneProxGrad}. We already mentioned in \cref{Sec:GenSpecGrad} that it is an open question whether the convergence analysis for \cref{Alg:MonotoneProxGrad} can be generalized to the setting where $f$ is only continuously differentiable while $\phi$ is just lower semicontinuous. Second, we did not investigate whether the \emph{Kurdyka--\L ojasiewicz} property could be efficiently incorporated into the convergence analysis in order to get stronger results even in the absence of strong Lipschitz assumptions on the derivative of $f$. Third, our analysis has shown that \cref{Alg:MonotoneProxGrad,Alg:NonmonotoneProxGrad} compute M-stationary points of \eqref{Eq:P} in general. In the setting of \cref{rem:non_Lipschitz_constrained_optimization}, i.e., where constrained programs with a merely lower semicontinuous objective function are considered, the introduced concept of M-stationarity is, to some extent, \emph{implicit} since it comprises an unknown subdifferential. In general, the latter can be approximated from above in terms of initial problem data only in situations where a qualification condition is valid. The resulting stationarity condition may be referred to as \emph{explicit} M-stationarity. It seems to be a relevant topic of future research to investigate whether \cref{Alg:MonotoneProxGrad,Alg:NonmonotoneProxGrad} can be modified such that they compute explicitly M-stationary points in this rather general setting. Fourth, it might be interesting to investigate whether other types of nonmonotonicity, different from the one used in \cref{Alg:NonmonotoneProxGrad}, can be exploited in order to get rid of the uniform continuity requirement from \cref{Ass:ProxGradNonmonotone}\,\ref{item:uniform_continuity}. 
Finally, we note that there exist several generalizations of proximal gradient methods using, e.g., inertial terms and Bregman distances, see e.g.\ \cite{BauschkeBolteTeboulle2017,BolteSabachTeboulleVaisbourd2018,BotCsetnek2016,BotCsetnekLaszlo2016} and the references therein. The corresponding convergence theory is also based on a global Lipschitz assumption for the gradient of the smooth term or additional convexity assumptions which allow the application of a descent-type lemma. It might be interesting to see whether our technique of proof can be adapted to these generalized proximal gradient methods in order to weaken the postulated assumptions.
\section{Introduction} \label{intro} \begin{figure}[tbp] \includegraphics[width=0.95\columnwidth, clip]{ph_diag_z2_large.eps} \caption{Sketch of the phase diagram of the 3D ${\mathbb Z}_2$ gauge Higgs model (\ref{HiggsH}). The dashed line is the self-dual line, cf. Eq.~(\ref{selfdual}); the thick line corresponds to first-order transitions on the self-dual line, extending over a finite interval. The two lines labelled ``${\mathbb Z}_2$'' are related by duality, cf. Eq.~(\ref{dualitymap}), and correspond to Ising-like continuous transitions. They end at $J = J_{\rm Is} \approx 0.22165$, $\kappa=\infty$ and at $J =0$, $\kappa = \kappa_c \approx 0.76141$. The three lines are conjectured to meet at a multicritical point (MCP) on the self-dual line, at $[\kappa^\star\approx 0.7525,J^\star\approx 0.2258]$. We argue in the paper that the multicritical behavior belongs to the $XY$ universality class. The other endpoint of the first-order transition line should give rise to a critical endpoint (CEP). } \label{phadia} \end{figure} The three-dimensional (3D) ${\mathbb Z}_2$ gauge Higgs model is one of the simplest gauge theories with matter fields, which shows a nontrivial phase diagram characterized by the presence of a topological phase, see, e.g., Refs.~\onlinecite{Wegner-71,BDI-74,BDI-75,FS-79,Kogut-79,JSJ-80,HL-91,GGRT-03,Kitaev-03,VDS-09,TKPS-10,GHMS-11,DKOSV-11,WDP-12,Fradkin-book,Sachdev-19,SSN-21,Grady-21,HSAFG-21}. The model can also be related to the quantum two-dimensional toric model in the presence of external {\em magnetic} fields, by a quantum-to-classical mapping~\cite{Wegner-71,Kitaev-03,TKPS-10}, and to a statistical ensemble of membranes~\cite{HL-91,GHMS-11}. A notable feature of the model~\cite{Wegner-71,FS-79,Kogut-79} is the existence of a duality transformation, which relates the free energy at different points of the phase diagram~\cite{Wegner-71,BDI-75,FS-79,Kogut-79}. A particular line in the phase diagram, which will play an important role in the following, is the self-dual line which is left invariant by the duality transformation. In Fig.~\ref{phadia} we sketch the phase diagram of the model, in the space of the Hamiltonian parameters [they are defined in Eq.~(\ref{HiggsH})]. It presents a topologically ordered deconfined phase, delimited by two continuous Ising transition lines that are related by duality. In the context of two-dimensional quantum systems, such a topologically ordered phase is realized in ${\mathbb Z}_2$ spin liquids~\cite{RC-89,Kivelson-89,RS-91,Wen-91,SF-00,MSF-01}, which is the phase of matter realized by the toric code~\cite{Kitaev-03}. Moreover, the 3D ${\mathbb Z}_2$ gauge Higgs model presents a first-order transition line running along the self-dual line, for a limited range of the Hamiltonian parameters~\cite{TKPS-10,GGRT-03,JSJ-80}. The available numerical results are consistent with the existence of a multicritical point (MCP), where the first-order transition line and the two continuous Ising transition lines meet, see, e.g., Refs.~\onlinecite{TKPS-10,SSN-21}. Assuming the existence of the MCP, an interesting question concerns the nature of the multicritical behavior. This issue has been recently investigated in Ref.~\onlinecite{SSN-21}, which reported apparently puzzling results, leading to estimates of the critical exponents that are substantially consistent with those of the $XY$ universality class. 
This may suggest that the multicritical behavior at the MCP is controlled by the 3D $XY$ fixed point, with an effective enlargement of the symmetry of the multicritical modes to the continuous O(2) group. This scenario was considered unlikely in Ref.~\onlinecite{SSN-21}, because of the unclear relationship between the multicritical $XY$ behavior and the mutual statistics of the condensing quasiparticles~\cite{TKPS-10,VDS-09,GM-12,Burnell-18} along the two distinct Ising transition lines meeting at the MCP. These mutual statistics do not affect critical exponents on the Ising lines, because only one of the two excitations is massless on them, but both excitations must become massless at the MCP. Therefore, it is not clear how their competition can give rise to the effective enlargement of the symmetry at the MCP, as required by the $XY$ universality class. In this paper we investigate the multicritical behavior at the MCP. We argue that the multicritical behavior is controlled by the stable $XY$ fixed point of the 3D multicritical Landau-Ginzburg-Wilson (LGW) field theory with two competing scalar fields associated with the ${\mathbb Z}_2$ transition lines meeting at the MCP~\cite{LF-72,FN-74,NKF-74,PV-02,CPV-03}. Duality properties play a crucial role for the realization of the multicritical $XY$ scenario, which implies an effective enlargement of the symmetry of the multicritical modes to the continuous symmetry group O(2). To provide further support to this scenario, we also report some numerical finite-size scaling (FSS) analyses of data from Monte Carlo (MC) simulations. The paper is organized as follows. In Sec.~\ref{model} we present the 3D lattice ${\mathbb Z}_2$ gauge Higgs model, and summarize the known features of its phase diagram. In Sec.~\ref{multicr} we discuss the multicritical theory appropriate for the MCP, and apply the multicritical LGW field theory to predict a multicritical $XY$ behavior. In Sec.~\ref{numres} we report some numerical results supporting the multicritical $XY$ scenario, obtained by FSS analyses of MC simulations. Finally, in Sec.~\ref{conclu} we draw our conclusions. \section{The ${\mathbb Z}_2$ gauge Higgs model} \label{model} \subsection{Hamiltonian and duality transformations} \label{modelH} We consider a lattice gauge model with ${\mathbb Z}_2$ gauge invariance defined on a cubic 3D lattice with periodic boundary conditions. The fundamental variables are Ising spins $s_{\bm x}=\pm 1$ defined on the lattice sites and Ising spins $\sigma_{{\bm x},\mu}=\pm 1$ defined on the bonds ($\sigma_{{\bm x},\mu}$ is associated with the bond starting from site ${\bm x}$ in the $\mu$ direction, $\mu=1,2,3$). The model is defined by the lattice Hamiltonian~\cite{Wegner-71,FS-79,Kogut-79} \begin{eqnarray} && H = - J \sum_{{\bm x},\mu} s_{\bm x} \, \sigma_{{\bm x},\mu} \, s_{{\bm x}+\hat{\mu}} - \kappa \sum_{{\bm x},\mu>\nu} \Pi_{{\bm x},\mu\nu}\,, \label{HiggsH}\\ &&\Pi_{{\bm x},\mu\nu}= \sigma_{{\bm x},\mu} \,\sigma_{{\bm x}+\hat{\mu},\nu} \,\sigma_{{\bm x}+\hat{\nu},\mu} \,\sigma_{{\bm x},\nu}\,. \label{plaquette} \end{eqnarray} The corresponding partition function and free-energy density are \begin{equation} Z = \sum_{\{s,\sigma\}} e^{-\beta H(J,\kappa)}\,, \qquad F(J,\kappa) = - {T\over L^d} \ln Z\,, \label{partfuncmodel} \end{equation} where $\beta=1/T$ is the inverse temperature, and $L^d$ is the volume of the system. This paper only considers three-dimensional systems, and therefore $d=3$. 
However, when arguments are independent of the space dimension, we keep $d$ generic. In the following, energies are measured in units of $T$, which is equivalent to fixing $\beta=1$ in Eq.~\eqref{partfuncmodel}. The model can be simplified by considering the so-called unitary gauge. Indeed, the site variables $s_{\bm x}$ can be eliminated by redefining $\sigma_{{\bm x},\mu}$ as \begin{equation} s_{\bm x} \, \sigma_{{\bm x},\mu} \, s_{{\bm x}+\hat{\mu}}\,\to\, \sigma_{{\bm x},\mu} \,. \label{gaugeun} \end{equation} Correspondingly, the partition function can be written as \begin{eqnarray} && Z = \sum_{\{\sigma\}} e^{-H_{\rm ug}(J,\kappa)}\,,\label{zhiggs}\\ && H_{\rm ug} = - J \sum_{{\bm x},\mu} \sigma_{{\bm x},\mu} - \kappa \sum_{{\bm x},\mu>\nu} \Pi_{{\bm x},\mu\nu}\,. \label{HiggsHug} \end{eqnarray} An important property of the 3D lattice ${\mathbb Z}_2$ gauge Higgs model is the existence of a duality mapping~\cite{BDI-75} between the Hamiltonian parameters, which leaves the partition function unchanged, modulo a regular function of the parameters. If \begin{eqnarray} \left(J^\prime, \kappa^\prime\right)= \left( -{1\over 2} {\rm ln}\,{\rm tanh}\,\kappa\,, -{1\over 2} {\rm ln}\,{\rm tanh}\, J \right)\,, \label{dualitymap} \end{eqnarray} we have \cite{BDI-75} \begin{equation} F(J^\prime, \kappa^\prime) = F(J, \kappa) - {3\over 2} \ln[\sinh(2J)\sinh(2\kappa)]\,. \label{dualityZ} \end{equation} One can also define a self-dual line, \begin{equation} D(J,\kappa) = J - J^\prime = J + {1\over 2} {\rm ln}\,{\rm tanh}\,\kappa = 0\,, \label{selfdual} \end{equation} where the duality transformation maps the model into itself, i.e. $J^\prime = J$ and $\kappa^\prime = \kappa$. Note that $D(J,\kappa)$ is odd under the duality mapping $(J,\kappa) \to (J^\prime,\kappa^\prime)$, i.e., $D(J,\kappa) = - D(J^\prime,\kappa^\prime)$. \subsection{The phase diagram} \label{phasdiagr} Some features of the phase diagram are well established, see, e.g., Refs.~\onlinecite{FS-79,TKPS-10,SSN-21}. A sketch of the phase diagram is shown in Fig.~\ref{phadia}. For $\kappa\to\infty$ an Ising transition occurs at~\cite{FXL-18} $J_{\rm Is} = 0.221654626(5)$. By duality, in the pure ${\mathbb Z}_2$ gauge model a transition occurs at the corresponding point, $J=0$ and \begin{equation} \kappa_c = -{1\over 2} {\rm ln}\,{\rm tanh}\,J_{\rm Is} = 0.761413292(11)\,. \label{z2gaugecr} \end{equation} Two Ising-like continuous transition lines, related by the duality transformation (\ref{dualitymap}), start from these points~\cite{FS-79} and intersect along the self-dual line~\cite{SSN-21}. Moreover, some numerical studies~\cite{TKPS-10,GGRT-03} have provided evidence of first-order transitions along the self-dual line, in the relatively small interval \begin{equation} 0.688 \lesssim \kappa \lesssim 0.753\,,\qquad 0.258\gtrsim J \gtrsim 0.226\,. \label{foint} \end{equation} Since the first-order transition line is limited to an interval along the self-dual line, there are only two phases, separated by the two continuous transition lines, see Fig.~\ref{phadia}. For small $J$ and large $\kappa$ there is a topological deconfined phase. The remaining part of the phase diagram corresponds to a single phase that extends from the disordered small-$J,\kappa$ region to the whole large-$J$ region. In particular, no phase transition occurs along the line $\kappa=0$, where the model (\ref{HiggsHug}) becomes trivial. 
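As a simple numerical illustration of the duality structure, the following Python snippet (all names are purely illustrative) evaluates the mapping (\ref{dualitymap}), recovers the estimate (\ref{z2gaugecr}) of $\kappa_c$ from $J_{\rm Is}$, and checks that a point of the self-dual line (\ref{selfdual}) is mapped onto itself.
\begin{verbatim}
import numpy as np

def dual(J, kappa):
    # Duality mapping of Eq. (dualitymap): (J, kappa) -> (J', kappa')
    return -0.5 * np.log(np.tanh(kappa)), -0.5 * np.log(np.tanh(J))

J_Is = 0.221654626                      # Ising transition at kappa -> infinity
kappa_c = -0.5 * np.log(np.tanh(J_Is))  # pure Z2 gauge transition, Eq. (z2gaugecr)
print(kappa_c)                          # ~ 0.761413292

kappa = 0.70                            # a point on the self-dual line D(J,kappa)=0
J = -0.5 * np.log(np.tanh(kappa))
print(dual(J, kappa))                   # ~ (J, kappa): the point is self-dual
\end{verbatim}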
A natural conjecture is that the first-order and the two continuous Ising transition lines meet at the same point located along the self-dual line, giving rise to a multicritical point (MCP). Numerical results~\cite{SSN-21,TKPS-10} are consistent with this conjecture. In particular, Ref.~\onlinecite{SSN-21} reported evidence of a critical transition point along the self-dual line---we identify it with the MCP---with critical parameters $\kappa^\star \approx 0.7526$ and $J^\star\approx 0.2257$. The corresponding critical exponents are close to, and substantially consistent with, those associated with the $XY$ universality class~\cite{PV-02,CPV-03,CHPV-06,HV-11}. In spite of these results, Ref.~\onlinecite{SSN-21} considered an $XY$ multicritical behavior unlikely. In this paper, we rediscuss the issue, and give additional theoretical and numerical arguments that support the hypothesis that the MCP belongs to the $XY$ universality class. We finally note that the first-order transition line starting from the MCP ends at $J\approx 0.258$ and $\kappa\approx 0.688$. We expect this endpoint to correspond to a continuous transition, likely belonging to the Ising universality class. \section{Multicritical behavior} \label{multicr} As discussed above, the phase diagram of the lattice ${\mathbb Z}_2$ gauge Higgs model shows a MCP, where a first-order and two continuous transition lines meet (this MCP is usually called bicritical~\cite{LF-72,FN-74,NKF-74}). In the following, we first discuss the expected behavior of the model close to the MCP, on the basis of the renormalization-group (RG) theory. Then, we discuss a LGW field theory characterized by two interacting local real scalar fields~\cite{LF-72,FN-74,NKF-74,CPV-03}, which may describe the multicritical behavior. \begin{figure}[tbp] \includegraphics[width=0.95\columnwidth, clip]{ph_diag_z2_small.eps} \caption{Sketch of the phase diagram close to the MCP. We report the first-order transition line (thick line), the self-dual line (dashed line), the two continuous transition lines (continuous lines), and the line (dotted line) where $u_2 = 0$. The line $u_1 = 0$ coincides with the self-dual line. } \label{phadiabis} \end{figure} \subsection{Multicritical scaling theory} \label{sec3A} At a MCP, the singular part of the free-energy density can be written as \begin{equation} F_{\rm sing}(J,\kappa,L) = L^{-d} {\cal F}(\{u_i L^{y_i}\})\,, \label{freeen} \end{equation} where $u_i$ are the nonlinear scaling fields and the RG exponents $y_i$ are ordered so that \begin{equation} y_1>y_2>y_3>y_4 > \ldots\,. \label{defyi} \end{equation} In the present model, we expect two relevant RG perturbations. Therefore, $y_1$ and $y_2$ are positive, and the corresponding scaling fields $u_1$ and $u_2$ vanish at the MCP. The exponents $y_i$ with $i\ge 3$ are instead negative and control the corrections to the multicritical behavior. All the scaling fields $u_i$ are analytic functions of the model parameters $J$ and $\kappa$. In the infinite-volume limit and neglecting subleading corrections, we can rewrite the singular part of the free energy density as \begin{eqnarray} &&F_{\rm sing}(J,\kappa) = |u_2|^{d/y_2} {\cal F}_\pm (X)\,, \label{freeen2}\\ && X \equiv u_1 |u_2|^{-\phi}\,, \qquad \phi\equiv {y_1/y_2}>1\,, \label{phidef} \end{eqnarray} where the functions ${\cal F}_\pm(X)$ apply to the parameter regions in which $\pm u_2 > 0$, and $\phi$ is the so-called crossover exponent associated with the MCP. 
Close to the MCP, the transition lines follow the equation \begin{equation} X = u_1 |u_2|^{-\phi} = {\rm const}\,, \label{traju1u2} \end{equation} with a different constant for each transition line. Since $\phi > 1$, they are tangent to the line $u_1 = 0$. The duality mapping (\ref{dualitymap}), and in particular Eq.~(\ref{dualityZ}), implies the relation \begin{equation} F_{\rm sing}(J^\prime,\kappa^\prime) = F_{\rm sing}(J,\kappa)\,. \label{fsingdua} \end{equation} Then, if we set \begin{equation} u^\prime_1 = u_1(J^\prime,\kappa^\prime)\,,\quad u^\prime_2 = u_2(J^\prime,\kappa^\prime)\,, \label{u1u2p} \end{equation} using Eq.~(\ref{freeen2}) we obtain the equality \begin{equation} |u_2|^{d/y_2}{\cal F}_\pm (u_1 |u_2|^{-\phi}) = |u_2^\prime|^{d/y_2}{\cal F}_\pm (u_1^\prime |u_2^\prime|^{-\phi})\,. \label{eqSF-freeenergy} \end{equation} Since along the self-dual line $u_1=u_1^\prime=0$, this relation can only be satisfied if $|u_2| = |u_2^\prime|$. If we then expand the scaling function ${\cal F}_\pm(X)$ in powers of $X$, Eq.~(\ref{eqSF-freeenergy}) implies $u_1^m = (u_1^\prime)^m$ for all values of $m$ such that the derivative \begin{equation} {\cal F}_{m} = \left. {\partial^{m} {\cal F}(X)\over \partial X^m} \right \vert_{X=0} \label{deffm} \end{equation} is nonvanishing. This condition can be satisfied only if $u_1$ changes at most by a sign under duality. As we shall argue below, $u_1$ is odd under duality, i.e., $u_1^\prime = -u_1$. In this case, we should additionally require ${\cal F}_m = 0$ for any odd $m$: the functions ${\cal F}_\pm(X)$ are even in $X$. To show that $u_1$ is odd, note that, as discussed in Sec.~\ref{phasdiagr}, the first-order transition line runs along the self-dual line (\ref{selfdual}) ending at the MCP, located at \begin{equation} J = J^\star,\quad \kappa = \kappa^\star = -{1\over 2} {\rm ln}\,{\rm tanh}\,J^\star\,. \label{tcpco} \end{equation} This transition line is expected to coincide~\cite{LF-72,FN-74,NKF-74} with the line $u_1 = 0$, close to the MCP. Since the self-dual line is given by $D(J,\kappa) = 0$, we can make the identification \begin{eqnarray} u_1 = D(J,\kappa)\,, \label{nonlinu1} \end{eqnarray} close to the MCP. As noted before, $D(J,\kappa)$ is odd under duality. The scaling field $u_2$ is then necessarily even under duality and is therefore given by \begin{equation} u_2(J,\kappa) = - J + J^\star + {1\over 2} \ln {\tanh \kappa^{\phantom{\star}} \over \tanh\kappa^\star}\,. \label{u2def} \end{equation} The scaling fields can be straightforwardly linearized, obtaining \begin{eqnarray} &&u_1 \approx \Delta J + c\, \Delta \kappa\,, \qquad u_2 \approx - \Delta J + c \,\Delta \kappa\,, \label{ulinear} \end{eqnarray} where \begin{eqnarray} &&\Delta J = J-J^\star\,,\qquad\Delta \kappa = \kappa-\kappa^\star\,, \nonumber\\ &&c = \sinh(2J^\star) = {1\over \sinh(2\kappa^\star)} \approx 0.467\,. \label{defcconstant} \end{eqnarray} In terms of $u_1$ and $u_2$, close to the MCP the first-order transition line corresponds to $ X = 0$, $u_2<0$. The two continuous transition lines are defined by $X = \pm k$ with $u_2 > 0$. Using the above results we can also predict how the latent heat $\Delta_h$ vanishes along the first-order transition line when approaching the MCP. A straightforward scaling argument~\cite{CPPV-04} gives \begin{equation} \Delta_h \sim |u_2|^\theta\,,\qquad \theta = {d-y_1\over y_2}\,. 
\label{latheat} \end{equation} Note that this scaling behavior is the same as that of the magnetization $M$ at the Ising transition, with the correspondence $y_1=y_h$ and $y_2=1/\nu$: $M$ indeed vanishes as $M\sim |T-T_c|^\beta$ with $\beta = (d-y_h)\nu$, see, e.g., Ref.~\onlinecite{PV-02}. \subsection{Scaling of the energy cumulants} \label{sec3B} Since we are considering a lattice gauge theory, in which the order parameters associated with the phase transitions are not easily accessible, we focus on the multicritical behavior of the energy operators. We define \begin{eqnarray} H_J &=& \sum_{x\,\mu} \sigma_{x,\mu}\,, \qquad H_\kappa = \sum_{x\,\mu>\nu} \Pi_{x,\mu\nu}\,,\label{hamjk}\\ H &=& -J H_J - \kappa H_\kappa\,.\nonumber \end{eqnarray} We consider the cumulants \begin{equation} C_{nm} = - L^d {\partial^{n+m} \over \partial J^n \partial \kappa^m} F(J,\kappa,L)\,, \label{cumulant-def} \end{equation} where $F$ is the free-energy density. For $2\le n+m\le 3$, $C_{nm} = M_{nm}$, where $M_{nm}$ are the central moments defined by \begin{equation} M_{nm} = \langle (H_J - E_J)^n (H_\kappa - E_\kappa)^m \rangle\,, \label{nmomdef} \end{equation} with $E_J = \langle H_J \rangle$ and $E_\kappa = \langle H_\kappa \rangle$. For $n+m\ge 4$, central moments and cumulants differ. For instance, $C_{40} = M_{40} - 3 M_{20}^2$. Using the cumulants $C_{mn}$ we can easily construct the cumulants $K_n$ of the total energy $H$, defined by the derivatives of $\ln Z$ with respect to $\beta$, see Eq.~\eqref{partfuncmodel}. For example, we have \begin{eqnarray} K_2 &=& J^2 C_{20} + 2 J \kappa C_{11} + \kappa^2 C_{02}\,,\label{kncnm}\\ K_3 &=& - \left( J^3 C_{30} + 3 J^2 \kappa C_{21} + 3 J \kappa^2 C_{12} + \kappa^3 C_{03}\right)\,,\nonumber \end{eqnarray} and so on. Note that the specific heat is given by $C_V=K_2/V$. In the following, we consider periodic boundary conditions, which preserve the duality property in finite-size systems. Using Eq.~(\ref{dualityZ}), and taking the appropriate derivatives with respect to $J$ and $\kappa$, we can obtain an infinite series of exact relations among the expectation values $E_J,\,E_\kappa$ and the cumulants $C_{mn}$ at $(J,\kappa)$ and at the corresponding duality-transformed couplings $(J^\prime, \kappa^\prime)$, cf. Eq.~(\ref{dualitymap}). Along the self-dual line where $(J,\kappa)=(J^\prime, \kappa^\prime)$, they turn into an infinite series of exact relations among the cumulants computed on the self-dual line. The lowest-order cumulants satisfy the relations \begin{eqnarray} && E_{\kappa}+\sinh(2J)\,E_J-3\cosh(2J)L^3=0\,, \label{exadualrel1}\\ && \sinh^2(2J)\,C_{20}-C_{02}-2\cosh(2J)\,E_{\kappa}+6L^3=0\,.\qquad \label{exadualrel2} \end{eqnarray} Relations for higher-order cumulants are more cumbersome. Neglecting the regular terms arising from the second term on the r.h.s. of Eq.~(\ref{dualityZ}), third-order cumulants satisfy the relations \begin{eqnarray} && C_{12} + \sinh(2J)\,C_{21} + 2 \cosh(2J) \,C_{11} \approx 0\,, \label{thirdcurel} \\ && C_{03} + \sinh^3(2J) \,C_{30}+ 6 \cosh(2J)\,C_{02} +\nonumber\\ &&\quad + 2 [3 + \cosh(4J)]E_\kappa \approx 0 \,. \nonumber \end{eqnarray} The scaling behavior of the cumulants $C_{nm}$ can be derived by differentiating the asymptotic scaling relation \begin{equation} F_{\rm sing}(J,\kappa,L) \approx L^{-d} f(x_1, x_2)\,, \qquad x_i = u_i L^{y_i}\,, \label{scalF-finiteV} \end{equation} where we only keep the relevant RG contributions. 
Note that the duality relation (\ref{dualityZ}) for the free energy, and the duality properties of $u_1$ and $u_2$, imply that \begin{equation} f(-x_1,x_2) = f(x_1,x_2)\,. \label{SF-funf} \end{equation} Introducing the derivatives \begin{equation} f_{n,m}(x_1,x_2) = {\partial^{n+m} f(x_1,x_2) \over \partial x_1^n \partial x_2^m}\,, \label{calcnmdef} \end{equation} the leading critical contribution is generally given by \begin{eqnarray} C_{nm}(J,\kappa,L) \approx u_{1,J}^n u_{1,\kappa}^m L^{(n+m)y_1} f_{n+m,0}(x_1,x_2) \,, \label{genscalingCnm} \end{eqnarray} where $u_{1,J}$ and $u_{1,\kappa}$ are the derivatives of $u_1$ with respect to $J$ and $\kappa$. The cumulants of the total energy are expected to develop an analogous scaling behavior, i.e. \begin{eqnarray} K_n(J,\kappa,L) \approx L^{n y_1} {\cal K}_{n}(x_1,x_2) \,. \label{genscalingK} \end{eqnarray} Along the self-dual line $u_1=0$ the duality symmetry leads to some cancellations, as a consequence of Eq.~(\ref{SF-funf}). For $n+m$ even, the leading terms of the cumulants $C_{nm}$ are given by \begin{eqnarray} C_{nm}(J,\kappa,L) &\approx& u_{1,J}^n u_{1,\kappa}^m L^{(n+m)y_1} f_{n+m,0}(0,x_2) \nonumber \\ &\approx& c^m L^{(n+m) y_1} f_{n+m,0}(0,x_2) \,. \label{scaling-even} \end{eqnarray} Note that Eq.~(\ref{scaling-even}) is consistent with the exact relations derived from duality, such as Eq.~(\ref{exadualrel2}). For $n+m$ odd, the relation (\ref{SF-funf}) implies that \begin{equation} f_{n+m,0}(0,x_2)= 0\,. \label{cmnx10} \end{equation} Therefore, the leading scaling behavior is obtained by differentiating once with respect to $u_2$. Thus, for odd $n+m$ we obtain \begin{eqnarray} C_{nm} &\approx& L^{(n+m-1) y_1 + y_2} f_{n+m-1,1}(0,x_2) \times \label{scaling-odd}\\ &&\;\;\times (n\ u_{1,J}^{n-1} u_{1,\kappa}^{m} u_{2,J} + m\ u_{1,J}^n u_{1,\kappa}^{m-1} u_{2,\kappa}) \nonumber \\ &\approx& (m - n) c^m L^{(n+m-1) y_1 + y_2} f_{n+m-1,1}(0,x_2) \,, \nonumber \end{eqnarray} where $u_{2,J}$ and $u_{2,\kappa}$ are the derivatives of $u_2$ with respect to $J$ and $\kappa$, respectively. Using these asymptotic behaviors along the self-dual line and the relations (\ref{kncnm}), we can also derive the corresponding asymptotic FSS of the cumulants $K_n$ of the total energy, which behave as \begin{eqnarray} K_n & \approx & L^{ny_1}\, \widetilde{\cal K}_n(x_2) \quad {\rm for} \;\;{\rm even}\;\; n\,, \label{Hcumulants-scaling-even}\\ K_n &\approx& L^{(n-1)y_1+y_2} \, \widetilde{\cal K}_n(x_2) \quad {\rm for} \;\;{\rm odd}\;\; n\,. \label{Hcumulants-scaling-odd} \end{eqnarray} It is also useful to consider combinations whose cumulants have definite properties under duality transformations. We define \begin{eqnarray} A &=& H_J - \sinh(2\kappa)\,H_\kappa\,, \label{defA}\\ S &=& H_J + \sinh(2\kappa)\,H_\kappa\,.\label{defS} \end{eqnarray} Since \begin{eqnarray} {\partial u_1\over \partial J} + \sinh(2\kappa) {\partial u_1 \over \partial \kappa} &=& 0\,, \\ {\partial u_2\over \partial J} - \sinh(2\kappa) {\partial u_2 \over \partial \kappa} &=& 0 \,,\nonumber \end{eqnarray} one can easily check that the cumulants $A_{n}$ of the operator $A$, defined in Eq.~(\ref{defA}), do not receive contributions associated with the scaling field $u_1$. Therefore, they generally scale as \begin{eqnarray} A_{n} \approx L^{n y_2} {\cal A}_n(x_1, x_2) \,, \quad {\cal A}_n = (-2)^n f_{0n}(x_1,x_2) \,. 
\label{HnAsca} \end{eqnarray} The cumulants $S_{n}$ of the operator $S$ behave as \begin{eqnarray} S_{n} \approx L^{n y_1} {\cal S}_n(x_1,x_2) \,, \quad {\cal S}_n = 2^n f_{n0}(x_1,x_2) \,. \label{HnSsca} \end{eqnarray} Along the self-dual line, however, this diverging behavior is not observed for $n$ odd, since $f_{n0}(0,x_2)=0$, thus $S_n$ is expected to diverge as $L^{(n-1) y_1}$. We finally note that the above scaling equations assume that the leading contribution is due to the singular part of the free energy. However, contributions due to the regular free-energy term, of order $L^d$, may provide the leading contribution for the lowest-order cumulants, depending on the values of the RG exponents $y_1$ and $y_2$. \subsection{Multicritical field theory} \label{sec3C} The results of Sections \ref{sec3A} and \ref{sec3B} only rely on the existence of a duality transformation and make no assumption on the nature of the MCP. To go further and make more quantitative predictions, it is crucial to understand the nature of the order parameters. Along the finite-$J$ transition line that ends at $\kappa = \infty$, the order parameter is expected to be a local function of the $s_x$ fields, which should correspond to the Ising magnetization. Of course, because of gauge invariance, any rigorous definition requires the introduction of an appropriate gauge fixing, which however would not change any gauge-invariant correlation function (in Ref.~\onlinecite{BN-87} this approach has been used to obtain rigorous results for the phase behavior of the U(1) Abelian-Higgs model). The order parameter for the ${\mathbb Z}_2$ gauge theory is instead expected to be nonlocal and indeed the transition has a topological nature. Apparently, this observation seems to indicate that one cannot use standard symmetry arguments to understand the critical behavior at the MCP, as they assume that the order parameters are coarse-grained local functions of the microscopic fields. We wish now to argue that, at the MCP (and only there), because of duality, we can assume that both order parameters are local. Strictly speaking, duality is only a mapping of the Hamiltonian parameters, but here we will enlarge its role and assume that duality provides a mapping for all RG operators. Essentially, let us assume that we are working in the infinite-dimensional space of Hamiltonians on which the RG transformations act~\cite{Wegner}. If we start from a ${\mathbb Z}_2$ gauge Hamiltonian, under RG transformations, we will generate a flow towards a ${\mathbb Z}_2$ gauge-invariant fixed point, while starting from the usual Ising model, we will observe a flow towards the Wilson-Fisher ${\mathbb Z}_2$ fixed point. The existence of an exact microscopic relation between the ${\mathbb Z}_2$ gauge model and the Ising model allows us to conjecture that the two fixed points are equivalent, with the same set of RG dimensions and operators. In other words, there is a mapping (we call it duality) between all RG operators at the different fixed points. It is then plausible that this duality transformation maps the local order parameter of the Ising model to the nonlocal order parameter of the gauge model. The mapping changes the Hamiltonian parameters, except on the self-dual line, and therefore at the MCP. Here, the mapping would imply the equivalence of the local and of the nonlocal order parameters for the same model. Therefore, it seems reasonable to describe the multicritical behavior in terms of two local quantities. 
We thus consider two different scalar fields $\varphi_1({\bm x})$ and $\varphi_2({\bm x})$, associated with the two transition lines. To derive a Lagrangian for the effective model, we note that the theory should be invariant under a change of sign of both fields, so that only even powers of each field are allowed. Under these conditions the LGW Hamiltonian is~\cite{LF-72,FN-74,NKF-74} \begin{eqnarray} {\cal H} &=& \frac{1}{2} \Bigl[ ( \partial_\mu \varphi_1)^2 + ( \partial_\mu \varphi_2)^2\Bigr] + \frac{1}{2} \Bigl( r_1 \varphi_1^2 + r_2 \varphi_2^2 \Bigr) \nonumber\\ &&+ \frac{1}{4!} \Bigl[ v_1 \varphi_1^4 + v_2 \varphi_2^4 + 2 w\, \varphi_1^2\varphi_2^2 \Bigr] \,. \label{bicrHH} \end{eqnarray} This model has been studied at length. In the mean-field approximation~\cite{LF-72,FN-74,NKF-74}, the field theory (\ref{bicrHH}) admits a bicritical point analogous to the one appearing in Fig.~\ref{phadiabis}. Moreover, if the transition is continuous, it should belong to the $XY$ universality class \cite{LF-72,FN-74,NKF-74,PV-02,CPV-03} thereby leading to an effective enlargement of the symmetry from ${\mathbb Z}_2\oplus{\mathbb Z}_2$ to O(2). Several field-theoretical and numerical works have determined the exponents $y_i$ entering the multicritical scaling ansatz (\ref{freeen}), see, e.g., Refs.~\onlinecite{CPV-03,HV-11}. As shown in Ref.~\onlinecite{CPV-03}, the leading exponents correspond to the RG dimensions at the isotropic $XY$ fixed point of quadratic and quartic perturbations that belong to different representations of O(2) group. The leading RG exponent $y_1$ is associated with the quadratic spin-two perturbation. The corresponding RG dimension is~\cite{CPV-03,HV-11} \begin{eqnarray} y_1 = 1.7639(11)\,. \label{y1XY} \end{eqnarray} The second largest exponent is associated with the spin-zero quadratic operator, and is directly related to the correlation-length critical exponent at standard $XY$ transitions: \begin{equation} y_2 = {1\over \nu_{xy}} = 1.4888(2)\,, \label{y2XY} \end{equation} where we used the estimate $\nu_{xy}=0.6717(1)$ (see, e.g., Refs.~\onlinecite{GZ-98,PV-02,CHPV-06,KP-17,Hasenbusch-19,CLLPSSV-20} for theoretical results by various methods). Using the above results, we can estimate the crossover exponent, \begin{equation} \phi = y_1/y_2 = 1.1848(8)\,. \label{phiXY} \end{equation} Scaling corrections at the multicritical $XY$ point are controlled by the negative RG dimensions $y_i$. The most relevant ones are \cite{CPV-03,CPV-00,HV-11,Hasenbusch-19} \begin{eqnarray} y_3 &=& -0.108(6)\,, \label{y3XY} \\ y_4 &=& -0.624(10)\,, \\ y_5 &=& -0.789(4)\,, \label{y5est} \end{eqnarray} which are related to the spin-4, spin-2, and spin-zero quartic perturbations, respectively. Note that, at standard $XY$ transitions, corrections decay as $L^{-\omega}$ with $\omega = - y_5 \approx 0.79$. At the MCP, scaling corrections decay much slower, as $L^{y_3}\approx L^{-0.108}$, which may complicate the analysis of the universal multicritical $XY$ behavior. Moreover, corrections with any integer combination of the subleading exponents are also expected, and thus corrections $L^{ny_3}$ with $n=2,3,\ldots$ should also appear. In the LGW approach the analogue of the duality mapping is the exchange of the two fields ($\varphi_1\to\varphi_2$, $\varphi_2\to\varphi_1$). The RG operators associated with the scaling fields $u_i$ must have definite properties under these transformations. The leading operator of RG dimension $y_1$ is odd under the field exchange. 
This implies that $u_1$ is odd under the simultaneous exchange of $r_1$, $r_2$ and of $v_1$, $v_2$. In the ${\mathbb Z}_2$ gauge Higgs model this implies that the scaling field $u_1$ is odd under duality, in agreement with the arguments presented in Sec.~\ref{sec3A}. Analogously, we predict $u_2$ to be even, as already discussed before. We can also predict the transformation properties of the irrelevant scaling fields: $u_3$ and $u_5$ are even functions under duality, while $u_4$ is odd. In particular, there are no corrections with exponent $y_4$ on the self-dual line. \section{Numerical results} \label{numres} In this section we report some numerical results supporting our hypothesis of an emerging $XY$ multicriticality at the MCP, as discussed in the previous sections. For this purpose, we present FSS analyses of MC simulations of the unitary-gauge model (\ref{HiggsHug}), using a standard Metropolis upgrading of the discrete spin link variables~\cite{Metropolis:1953am}. We consider cubic systems of size $L$ with periodic boundary conditions. We perform simulations along the self-dual line $u_1=0$ and along the line $u_2 = 0$, measuring the energy cumulants defined in Sec.~\ref{sec3B}. We verify the predicted FSS behavior, using the RG exponents $y_1 = 1.7639(11)$ and $y_2 = 1.4888(2)$ of the $XY$ universality class. We should remark that the observation of the asymptotic scaling behaviors predicted by the multicritical $XY$ scenario is made difficult by the existence of several sources of slowly decaying scaling corrections. The leading ones decay very slowly, as $L^{n y_3} \approx L^{- 0.108 n}$ with $n=1,2,\ldots$. Then, we should consider terms decaying as $L^{-(y_1-y_2)} \approx L^{-0.28}$ [they are absent on the self-dual line because of Eq.~(\ref{SF-funf})], as $L^{-2(y_1-y_2)} \approx L^{-0.55}$, $L^{-y_4}\approx L^{-0.62}$ (they are absent along the self-dual line), $L^{-y_5} \approx L^{-0.79}$. For $m+n=2$ also the regular background plays a role, giving rise to corrections of order $L^{3-2y_1} \approx L^{-0.53}$. \begin{figure}[tbp] \includegraphics[width=0.95\columnwidth, clip]{u1zero_u2_H3.eps} \caption{Scaling behavior of the third cumulant $K_3$ of the Hamiltonian along the $u_1=0$ line as a function of $u_2 L^{y_2}$. We use the $XY$ exponents $y_1 = 1.7639$, $y_2=1.4888$ and $\kappa^{\star} = 0.7525$.} \label{h3u10} \end{figure} \begin{figure}[t] \includegraphics[width=0.95\columnwidth, clip]{u1zero_u2_H3A.eps} \includegraphics[width=0.95\columnwidth, clip]{u1zero_u2_H4A.eps} \caption{Scaling behavior of cumulants $A_{3}$ (top) and $A_{4}$ (bottom) along the $u_1=0$ line as a function of $u_2 L^{y_2}$. We use the $XY$ exponents $y_1 = 1.7639$, $y_2=1.4888$ and $\kappa^{\star} = 0.7525$. } \label{HAu10} \end{figure} Along the self-dual line the scaling field $u_1$ vanishes. Thus, according to the RG analysis reported in Sec.~\ref{sec3B}, we expect that the asymptotic scaling behavior of the energy cumulants depends on the FSS variable $x_2=u_2 L^{y_2}$. Along the self-dual line the numerical FSS analyses of the energy cumulants $K_n\,,A_n\,,S_n$ are consistent with the predictions of the multicritical theory, see Eqs.~(\ref{Hcumulants-scaling-even}) and (\ref{Hcumulants-scaling-odd}) for the total energy, once the $XY$ values of the RG exponents reported in Eqs.~(\ref{y1XY}) and (\ref{y2XY}) are used. 
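In practice, the cumulants entering these FSS analyses are estimated directly from the Monte Carlo histories of $H_J$ and $H_\kappa$. The following minimal sketch illustrates how $K_2$, $K_3$ and the third cumulants of $A$ and $S$ can be obtained from Eqs.~(\ref{kncnm}), (\ref{defA}) and (\ref{defS}); the function name and the use of plain sample averages are our own illustrative choices (in an actual analysis one would also estimate statistical errors, e.g. by jackknife or binning).
\begin{verbatim}
import numpy as np

def energy_cumulants(hj, hk, J, kappa):
    # hj, hk: Monte Carlo samples of H_J and H_kappa (1d arrays, equal length)
    dj = hj - hj.mean()                      # H_J - E_J
    dk = hk - hk.mean()                      # H_kappa - E_kappa
    M = lambda n, m: np.mean(dj**n * dk**m)  # central moments M_nm
    # for n+m <= 3 the cumulants C_nm coincide with the central moments
    C20, C11, C02 = M(2, 0), M(1, 1), M(0, 2)
    C30, C21, C12, C03 = M(3, 0), M(2, 1), M(1, 2), M(0, 3)
    # cumulants of the total energy, Eq. (kncnm)
    K2 = J**2 * C20 + 2*J*kappa*C11 + kappa**2 * C02
    K3 = -(J**3 * C30 + 3*J**2*kappa*C21 + 3*J*kappa**2*C12 + kappa**3 * C03)
    # third cumulants of A = H_J - sinh(2k) H_k and S = H_J + sinh(2k) H_k
    s2k = np.sinh(2.0 * kappa)
    da, ds = dj - s2k*dk, dj + s2k*dk
    A3, S3 = np.mean(da**3), np.mean(ds**3)
    return K2, K3, A3, S3
\end{verbatim}
Repeating such estimates for several lattice sizes $L$ at fixed $x_2 = u_2 L^{y_2}$ (or fixed $x_1 = u_1 L^{y_1}$) then allows a direct comparison with the predicted powers of $L$.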
The most accurate estimate of the MCP is obtained by biased analyses of the third cumulant $A_{3}\sim L^{3y_2}$ of the operator $A$, see Eq.~(\ref{defA}), along the self-dual line, using the $XY$ values for the exponents. Fitting the data to Eq.~(\ref{HnAsca}), we obtain \begin{equation} \kappa^\star = 0.7525(1)\,,\quad J^\star = 0.22578(5)\,. \label{mcpco} \end{equation} This estimate of the MCP is consistent with the results reported in Ref.~\onlinecite{SSN-21}. The analysis of the other cumulants gives consistent results. The accuracy of the description in terms of the multicritical $XY$ predictions is demonstrated by the scaling plots of the cumulant data using the $XY$ exponents and the estimates (\ref{mcpco}). In Fig.~\ref{h3u10} we show data for the third cumulant $K_3$ of the Hamiltonian, which is expected to scale with the power law $L^{2y_1 + y_2}$, cf. Eq.~(\ref{Hcumulants-scaling-odd}). We observe a reasonably good scaling: scaling corrections are hardly visible within the statistical errors. Note that, according to the multicritical $XY$ scenario, one expects slowly decaying corrections with exponent $|y_3|\approx 0.11$, cf. Eq.~(\ref{y3XY}). We do not observe them here. In our range of values of $L$, $L^{y_3}$ varies only slightly, and thus it is conceivable that such corrections do not affect the divergent behavior of the observables, but only the accuracy of the scaling functions. In Fig.~\ref{HAu10} we report the scaling plots of $A_{3}$ and $A_{4}$. Data are again in good agreement with the theoretical predictions for their asymptotic scaling behavior, cf. Eq.~(\ref{HnAsca}). We do not report the second cumulant $A_{2}$, whose singular part should scale as $L^{2 y_2}$. Since $2 y_2 \approx 2.9776 < 3$, its behavior is dominated by the regular contribution, which scales as the volume $L^3$. \begin{figure}[t] \includegraphics[width=0.95\columnwidth, clip]{u2zero_u1_H3.eps} \includegraphics[width=0.95\columnwidth, clip]{u2zero_u1_Cv.eps} \caption{ Scaling plot of the second cumulant $K_2$ (bottom) and of the third cumulant $K_3$ (top) of the Hamiltonian along the $u_2=0$ line, using the $XY$ exponent $y_1=1.7639$ and $\kappa^{\star}=0.7525$. Data confirm the scaling prediction, Eq.~(\ref{knscau20}). Notice that the error bars of $K_3$ for $u_1 L^{y_1}\gtrsim 1$ may be underestimated. In this region of the phase diagram we observe an increasing inefficiency of the MC algorithm.} \label{h3u20} \end{figure} Besides checking the consistency of the numerical data with the multicritical $XY$ scenario, we can also perform unbiased fits to determine $y_1$ and $y_2$. If we fit the third and fourth cumulants of the Hamiltonian (they should scale as $K_3\sim L^{2 y_1 + y_2}$ and $K_4\sim L^{4 y_1}$, respectively) we obtain $2 y_1 + y_2 = 5.0(1)$ and $4 y_1 = 7.0(1)$, which are consistent with the predictions $2 y_1 + y_2 \approx 5.02$ and $4 y_1 \approx 7.06$. The exponent $y_2$ can also be estimated from $A_{n}$. We obtain $y_2 = 1.495(10)$ and $y_2 = 1.48(2)$ from $A_{3}$ and $A_{4}$, respectively. The agreement with the conjectured $XY$ values is quite good. We also performed simulations along the $u_2=0$ line, see Eq.~(\ref{u2def}), using the estimate $\kappa^{\star}=0.7525$ obtained from the FSS analyses along the self-dual $u_1=0$ line. Along the $u_2=0$ line, the asymptotic FSS of the cumulants of the total energy is that given in Eq.~(\ref{genscalingK}), i.e. \begin{equation} K_n \approx L^{ny_1} {\cal K}_n(x_1,0)\,.
\label{knscau20} \end{equation} Note that, for $n$ odd, consistency with the FSS behavior along the self-dual line, see Eq.~(\ref{Hcumulants-scaling-odd}), requires ${\cal K}_n(0,0) = 0$. The data are plotted in Fig.~\ref{h3u20}. We observe a nice collapse of the data, again fully supporting the multicritical $XY$ scenario. \begin{figure}[t] \includegraphics[width=0.95\columnwidth, clip]{u2zero_u1_H3A.eps} \includegraphics[width=0.95\columnwidth, clip]{u2zero_u1_H3S.eps} \caption{ Scaling plot of the third cumulants $A_{3}$ (top) and $S_{3}$ (bottom) of the operators $A$ and $S$ along the $u_2=0$ line, using the $XY$ exponents $y_1=1.7639$ and $y_2=1.4888$, and $\kappa^{\star}=0.7525$. } \label{AS3u20} \end{figure} Finally, we also check the scaling behavior of the third cumulants of $A$ and $S$ along the $u_2=0$ line, in Fig.~\ref{AS3u20}. The scaling behavior of the cumulants of $A$ is given in Eq.~(\ref{HnAsca}). It depends on $f_{0n}(x_1,0)$, which is always an even function of $x_1$. The data shown in the top panel of Fig.~\ref{AS3u20} are consistent with this behavior within the errors. As for the cumulants of $S$, they scale as reported in Eq.~(\ref{HnSsca}). Relation (\ref{SF-funf}) implies that the odd (resp. even) cumulants are odd (resp. even) functions of $x_1$. Again, this is confirmed by the data shown in the bottom panel of Fig.~\ref{AS3u20}. In particular, the ratio $S_3/L^{3 y_1}$ is consistent with zero at the critical point $x_1=0$. Note that statistical errors of the MC simulations along the $u_2=0$ line increase significantly in the region $x_1 \gtrsim 1$. The link update algorithm for the model (\ref{HiggsHug}) indeed becomes less efficient as $\kappa$ and $J$ are increased. Autocorrelation times increase by more than one order of magnitude, likely due to a different dynamic regime related to the presence of relevant nonlocal configurations, which are hardly modified by local moves. The results we have presented here complement those reported in Ref.~\onlinecite{SSN-21}, which already provided remarkable evidence of the multicritical $XY$ behavior (although the authors were quite skeptical about its interpretation in terms of a multicritical $XY$ behavior). In particular, their estimates of the multicritical exponents $y_1= 1.778(6)$ and $y_2=1.495(9)$ (other compatible, but less precise, results were also reported in Refs.~\onlinecite{VDS-09,DKOSV-11}) are in substantial agreement with the $XY$ predictions (\ref{y1XY}) and (\ref{y2XY}). The small difference in the estimate of $y_1$ can be easily explained by the very slowly decaying scaling corrections predicted by the multicritical $XY$ scenario, which make a precise determination of the universal asymptotic quantities very hard. The leading one vanishes as $L^{-0.108}$, so that, to reduce it by a factor of two, the lattice size must be increased by a factor of 600, which is unattainable in practice. Overall, we believe that the numerical results presented in this paper, and those already reported in Ref.~\onlinecite{SSN-21}, provide strong evidence of the multicritical $XY$ scenario put forward in the previous sections. \section{Conclusions} \label{conclu} We study the multicritical behavior of the 3D ${\mathbb Z}_2$ gauge Higgs model at the MCP, where one first-order transition line and two continuous Ising transition lines meet, as sketched in Fig.~\ref{phadia}. The duality properties of the model play a key role in the phase diagram, and in determining the main features of the multicritical behavior at the MCP located on the self-dual line.
We exploit duality to identify the scaling fields associated with the relevant RG perturbations at the MCP, and outline the corresponding multicritical scaling behaviors. Moreover, we present arguments supporting the identification of the multicritical universality class with the one controlled by the stable $XY$ fixed point of the 3D multicritical LGW field theory (\ref{bicrHH}), with two competing scalar fields associated with the continuous ${\mathbb Z}_2$ transition lines meeting at the MCP. The $XY$ nature of the MCP implies an effective enlargement of the symmetry of the multicritical modes to the continuous group O(2). We have also reported numerical FSS analyses of several energy cumulants. The results are in good agreement with the theoretical predictions based on the multicritical $XY$ scenario. We believe that our numerical results, together with those already reported in the literature, see, in particular, Ref.~\onlinecite{SSN-21}, provide strong evidence in favor of the multicritical $XY$ scenario at the MCP. Of course, this picture calls for a deeper understanding of the mechanisms that combine the local and nonlocal critical modes of the ${\mathbb Z}_2$ gauge Higgs model to give rise to the multicritical $XY$ behavior, entailing an effective enlargement of the symmetry at the MCP to the continuous group O(2).
\section{Introduction} We gave a characterisation of the $(c)$-regularity in Theorem 2.4 of \cite{bekkakoike}. In order to show the theorem, we used a statement that a one-dimensional subspace of $\mathbb{R}^{n+p}$ $$ K_x := (Ker \ d\rho (x) \cap T_x X)^{\bot} \cap T_x X $$ is orthogonal to $\mathbb{R}^n \ (:= \mathbb{R}^n \times \{ 0\})$ at any $x \in X$. But this statement does not necessarily hold, namely $K_x$ is not always orthogonal to $\mathbb{R}^n$, as T. Gaffney pointed out to us in \cite{gaffney}. Therefore the proof of Theorem 2.4 in \cite{bekkakoike} is false. In addition, we proved in \cite{bekkakoike} Theorem 2.7 (resp. Theorem 2.8) that the stratification $\Sigma (\mathbb{R}^n \times J)$ (see \S 3 for the definition) is $(c)$-regular under the Kuo condition (resp. the second Kuo condition), using Theorem 2.4. In this note we show an alternative lemma to Theorem 2.4, adding one more assumption to the original two assumptions. The $(c)$-regularity follows from these three assumptions. In addition, we show that the Kuo condition (resp. the second Kuo condition) implies not only the original assumptions but also the new assumption of the lemma. In other words, Theorem 2.7 (resp. Theorem 2.8) in \cite{bekkakoike} follows from the lemma. The authors would like to thank T. Gaffney for pointing out to us a mistake in the proof of Theorem 2.4 in \cite{bekkakoike}. \section{Alternative lemma}\label{lemma} Let $M$ be a smooth manifold, and let $X, \ Y$ be smooth submanifolds of $M$ such that $Y \subset \overline{X}$. We suppose now that $M$ is endowed with a Riemannian metric. Let $(T_Y , \pi , \rho )$ be a smooth tubular neighbourhood for $Y$ with the associated projection $\pi$ and a smooth non-negative control function $\rho$ such that $\rho^{-1}(0) = Y$ and $\grad \rho (x) \in Ker \ d\pi (x)$. In this section we treat a lemma on regularity conditions in stratification theory. Let us recall some regularity conditions. \begin{defn} (1) We say that the pair $(X,Y)$ is {\em Whitney $(a)$-regular at} $y_0 \in Y$, if for any sequence of points $\{ x_i\}$ of $X$ which tends to $y_0$ such that the sequence of tangent spaces $\{ T_{x_i}X\}$ tends to some plane $\sigma$ in the Grassmann space of $\dim X$-planes, we have $T_{y_0}Y \subset \sigma$. We say that $(X,Y)$ is {\em $(a)$-regular} if it is $(a)$-regular at any point $y_0 \in Y$. (See \cite{whitney1, whitney2} for properties of the Whitney $(a)$-regularity.) (2) We say that the pair $(X,Y)$ is {\em $(c)$-regular at $y_0 \in Y$ for the control function} $\rho$, if for any sequence of points $\{ x_i\}$ of $X$ which tends to $y_0$ such that the sequence of planes $\{ Ker \ d\rho (x_i) \cap T_{x_i}X\}$ tends to some plane $\tau$ in the Grassmann space of $(\dim X - 1)$-planes, we have $T_{y_0}Y \subset \tau$. We say that $(X,Y)$ is {\em $(c)$-regular for the control function} $\rho$ if it is $(c)$-regular at any point $y_0 \in Y$ for the function $\rho$. (See \cite{bekka} for properties of the $(c)$-regularity.) \end{defn} \begin{defn} We say that the pair $(X,Y)$ satisfies {\em condition $(m)$}, if there exists some positive number $\epsilon > 0$ such that $(\pi ,\rho )|_{X \cap T^{\epsilon}_Y} : X \cap T^{\epsilon}_Y \to Y \times \mathbb{R}$ is a submersion, where $T^{\epsilon}_Y := \{ x \in T_Y \ | \ \rho (x) < \epsilon \}$.
\end{defn} Let $y_0 \in Y$, and let $\{ x_i\}$ be an arbitrary sequence of points of $X$ which tends to $y_0$ such that the sequence of planes $\{ Ker \ d\rho (x_i ) \cap T_{x_i} X\}$ tends to some plane $\tau$ in the Grassmann space of $(\dim X - 1)$-planes. We call such a sequence of points of $X$ {\em pre-regular}. Taking a subsequence of $\{ x_i\}$ if necessary, we may assume that the sequences of planes $\{ Ker \ d\rho (x_i )\}$ and $\{ T_{x_i} X\}$ tend to some planes $\mu$ and $\sigma$ in the Grassmann spaces of $(\dim M -1)$-planes and $( \dim X)$-planes, respectively. We say that the pair $(X,Y)$ satisfies {\em condition $(c_d)$ at $y_0$}, if $\dim (\mu \cap \sigma) = \dim X - 1$ for any pre-regular sequence of points of $X$ tending to $y_0$. In addition, we say that the pair $(X,Y)$ satisfies {\em condition $(c_d)$}, if $(X,Y)$ satisfies condition $(c_d)$ at any point $y_0 \in Y$. \begin{rem}\label{remark1} Let $M = \mathbb{R}^{n+p}$ and $Y = \mathbb{R}^n \times \{ 0\}$. Then $\dim (\mu \cap \sigma ) = \dim X - 1$ if and only if $\mu$ and $\sigma$ are transverse at $y_0$ if and only if $\sigma \not\subset \mu$ if and only if $\mu^{\perp} \not\subset {\sigma}^{\perp}$. \end{rem} We show the following lemma, which gives a sufficient condition for the $(c)$-regularity. \begin{lem}\label{Lemma} Suppose that the pair $(X,Y)$ is $(a)$-regular at $y_0 \in Y$ and satisfies condition $(m)$ for a control function $\rho$ and condition $(c_d)$ at $y_0 \in Y$. Then $(X,Y)$ is $(c)$-regular at $y_0 \in Y$ for the function $\rho$. \end{lem} \begin{proof} We first recall the setting of the proof of Theorem 2.4 in \cite{bekkakoike}. There is a chart $\Phi : (U, U \cap Y, y_0) \to (\mathbb{R}^{n+p}, \mathbb{R}^n \times \{ 0 \} , 0)$ for $Y$ at $y_0$ such that \vspace{3mm} (1) $\Phi \circ \pi \circ \Phi^{-1}$ is the projection $(y,t) \mapsto (y,0)$ from $\mathbb{R}^{n+p} \to \mathbb{R}^n \times \{ 0\}$. (2) $\grad (\rho \circ \Phi^{-1} )(y,t) \in \{ 0\} \times \mathbb{R}^p$, i.e. is orthogonal to $\mathbb{R}^n \times \{ 0 \}$. \vspace{3mm} \noindent It follows that it is enough to prove the lemma in the case where $M = T^{\epsilon}_Y = \mathbb{R}^{n+p}$ and $Y = \mathbb{R}^n \times \{ 0\}$, $X$ is Whitney $(a)$-regular over $\mathbb{R}^n \times \{ 0\}$ at $0 \in \mathbb{R}^{n+p}$, $\pi : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n$ defined by $\pi (y,t) = y$ and $\rho : \mathbb{R}^{n+p} \to \mathbb{R}$ satisfy \vspace{3mm} \begin{itemize} \item $\rho^{-1}(0) = Y$, \item $(\pi ,\rho )|_X : X \to \mathbb{R}^n \times \mathbb{R}$ is a submersion, \item $\grad \rho (y,t) \in \{ 0\} \times \mathbb{R}^p$ if $(y,t) \in \mathbb{R}^{n+p} \setminus \mathbb{R}^n \times \{ 0\}$, \end{itemize} \vspace{3mm} \noindent and $(X,Y)$ satisfies condition $(c_d)$ at $0 \in \mathbb{R}^{n+p}$. Let us remark that condition $(m)$ guarantees $$ \dim (Ker \ d\rho (x) \cap T_x(X)) = \dim X - 1 $$ at any point $x \in X$. Let $\{ x_i\}$ be a sequence of points of $X$ which tends to $0 \in \mathbb{R}^{n+p}$ such that the sequence of planes $\{ Ker \ d\rho (x_i ) \cap T_{x_i} X\}$ tends to some plane $\tau$ in the Grassmann space of $(\dim X - 1)$-planes. Taking a subsequence of $\{ x_i\}$ if necessary, we may assume that the sequences of planes $\{ Ker \ d\rho (x_i )\}$ and $\{ T_{x_i} X\}$ tend to some planes $\mu$ and $\sigma$ in the Grassmann spaces of $(n + p -1)$-planes and $( \dim X)$-planes, respectively. Note that $\mathbb{R}^n \times \{ 0\} \subset Ker \ d\rho (x_i)$ for any $i \in \mathbb{N}$.
Therefore we have $$ \mu = \lim_{i \rightarrow \infty} Ker \ d\rho (x_i) \supset \mathbb{R}^n \times \{ 0\}. $$ By the Whitney ($a$)-regularity, we have $$ \sigma = \lim_{i \rightarrow \infty} T_{x_i}X \supset \mathbb{R}^n \times \{ 0\}. $$ Therefore we have $\mu \cap \sigma \supset \mathbb{R}^n \times \{ 0\} .$ On the other hand, $\tau = \lim_{i \rightarrow \infty} (Ker \ d\rho (x_i) \cap T_{x_i}X)$. It follows that $\tau \subseteq \mu \cap \sigma$. Since $(X,Y)$ satisfies condition $(c_d)$ at $0 \in \mathbb{R}^{n+p}$, we have $\dim (\mu \cap \sigma ) = \dim X - 1$. Therefore we have $\tau = \mu \cap \sigma \supset \mathbb{R}^n \times \{ 0\}$. Thus $(X,Y)$ is $(c)$-regular at $0 \in Y$ for the function $\rho$. \end{proof} \bigskip \section{Proofs of Theorems 2.7, 2.8 in \cite{bekkakoike}}\label{proof} Let $\mathcal{E}_{[s]} (n,p)$, $n \ge p$, denote the set of $C^s$ map-germs $: (\mathbb{R}^n,0) \to (\mathbb{R}^p,0)$, and let $j^r f(0)$ denote the $r$-jet of $f$ at $0 \in \mathbb{R}^n$ for $f \in \mathcal{E}_{[s]}(n,p)$, $s \ge r$. For $f \in \mathcal{E}_{[r]}(n,p)$, $\mathcal{H}_r(f;\overline{w})$ denotes the {\em horn-neighbourhood of $f^{-1}(0)$ of degree $r$ and width} $\overline{w}$, $$ \mathcal{H}_r(f;\overline{w}) := \{ x\in \mathbb{R}^n : \ |f(x)| \le \overline{w} |x|^r \} . $$ Let $v_1, \cdots , v_p$ be $p$ vectors in $\mathbb{R}^n$ where $n \ge p$. The {\em Kuo distance $\kappa$} (\cite{kuo}) is defined by $\displaystyle \kappa(v_1, \ldots,v_p) = \displaystyle \mathop{\rm min}_{i}\{\text{distance of }\, v_i\, \text{ to }\, V_i\}, $ where $V_i$ is the span of the $ v_j$'s, $j\ne i$. In the case where $p = 1$, $\kappa (v) = \| v \| .$ \begin{defn}\label{Kuo condition} A map-germ $f \in \mathcal{E}_{[r]}(n,p)$ satisfies the {\em Kuo condition}, if there are positive numbers $C$, $\alpha$, $\overline{w} > 0$ such that $$ \kappa (\grad f_1 (x), \cdots , \grad f_p (x)) \ge C |x|^{r-1} $$ in $\mathcal{H}_r (f;\overline{w}) \cap \{ |x| < \alpha \}$. \end{defn} \begin{defn}\label{second Kuo condition} A map-germ $f \in \mathcal{E}_{[r+1]}(n,p)$ satisfies the {\em second Kuo condition}, if for any map $g \in \mathcal{E}_{[r+1]}(n,p)$ with $j^{r+1}g(0) = j^{r+1}f(0)$, there are positive numbers $C$, $\alpha$, $\overline{w}, \delta > 0$ (depending on $g$) such that $$ \kappa (\grad f_1 (x), \cdots , \grad f_p (x)) \ge C |x|^{r-\delta} $$ in $\mathcal{H}_{r+1} (g;\overline{w}) \cap \{ |x| < \alpha \}$. \end{defn} Let us recall Theorems 2.7 and 2.8 in \cite{bekkakoike}. Let $f : (\mathbb{R}^n,0) \to (\mathbb{R}^p,0)$, $n \ge p$, be a $C^r$ (resp. $C^{r+1}$) map, and let $J$ be a bounded open interval containing $[0, 1]$. For arbitrary $g \in \mathcal{E}_{[r]}(n,p)$ (resp. $\mathcal{E}_{[r+1]} (n,p)$) with $j^r g(0) = j^r f(0)$, define a $C^r$ (resp. $C^{r+1}$) map $F : (\mathbb{R}^n \times J, \{ 0\} \times J) \to (\mathbb{R}^p,0)$ by $F(x,t) := f(x) + t(g(x) - f(x))$. Let us remark that the Kuo condition (resp. the second Kuo condition) guarantees that $F^{-1}(0) \setminus \{ 0\} \times J$ is smooth around $\{ 0\} \times J$ if it is not empty. Therefore, if $F^{-1}(0) \ne \{ 0\} \times J$ as set-germs at $\{ 0\} \times J$, $$ \Sigma (\mathbb{R}^n \times J) := \{ \mathbb{R}^n \times J \setminus F^{-1}(0), F^{-1}(0) \setminus \{ 0\} \times J, \{ 0\} \times J \} $$ gives a stratification of $\mathbb{R}^n \times J$ around $\{ 0\} \times J$ under the assumption of the Kuo condition (resp. the second Kuo condition). In this case, $\dim (F^{-1}(0) \setminus \{ 0\} \times J) = n +1 - p$. 
If $F^{-1}(0) = \{ 0\} \times J$ as set-germs at $\{ 0\} \times J$, $$ \Sigma (\mathbb{R}^n \times J) := \{ \mathbb{R}^n \times J \setminus \{ 0\} \times J, \{ 0\} \times J \} $$ gives a stratification of $\mathbb{R}^n \times J$ around $\{ 0\} \times J$. \begin{thm}\label{thm2.7} (\cite{bekkakoike}, Theorem 2.7) If a $C^r$ map $f \in \mathcal{E}_{[r]} (n,p)$ satisfies the Kuo condition, then the stratification $\Sigma (\mathbb{R}^n \times J)$ is $(c)$-regular. \end{thm} \begin{thm}\label{thm2.8} (\cite{bekkakoike}, Theorem 2.8) Let $f \in \mathcal{E}_{[r+1]}(n,p)$. If, for any polynomial map $h$ of degree $r + 1$ such that $j^r h(0) = j^r f(0)$, there are positive numbers $C$, $\alpha$, $\overline{w}, \delta > 0$ (depending on $h$) such that $$ \kappa (\grad f_1 (x), \cdots , \grad f_p (x)) \ge C |x|^{r-\delta} $$ in $\mathcal{H}_{r+1} (h;\overline{w}) \cap \{ |x| < \alpha \}$, then the stratification $\Sigma (\mathbb{R}^n \times J)$ is $(c)$-regular. \end{thm} \begin{rem}\label{remark21} The condition satisfied by $f \in \mathcal{E}_{[r+1]} (n,p)$ in Theorem \ref{thm2.8} is equivalent to the second Kuo condition. \end{rem} \begin{proof}[Proof of Theorem \ref{thm2.7}] Let us show Theorem \ref{thm2.7}, using Lemma \ref{Lemma}. In the case where $F^{-1}(0) = \{ 0\} \times J$ as set-germs at $\{ 0\} \times J$, it is obvious that $\Sigma (\mathbb{R}^n \times J)$ is a $(c)$-regular stratification. Therefore we consider only the case where $F^{-1}(0) \ne \{ 0\} \times J$ as set-germs at $\{ 0\} \times J$. We set $X := \mathbb{R}^n \times J \setminus F^{-1}(0)$, $Y := F^{-1}(0) \setminus \{ 0\} \times J$ and $Z := \{ 0\} \times J$. Then we can easily see that the pairs $(X,Y)$ and $(X,Z)$ are $(c)$-regular (around $Z$). In order to show that the pair $(Y,Z)$ is $(c)$-regular, we have to check the $(a)$-regularity, condition $(m)$ and condition $(c_d)$. Here the control function is a non-negative function $\hat{\rho} : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$ defined by $\hat{\rho}(x,t) := x_1^2 + \cdots + x_n^2$. We can show the $(a)$-regularity and condition $(m)$ similarly to the proof of Theorem 2.7 in \cite{bekkakoike}. Therefore it remains to show that the pair $(Y,Z)$ satisfies condition $(c_d)$. For $t \in J$, define a $C^r$ map $f_t : (\mathbb{R}^n ,0) \to (\mathbb{R}^p ,0)$ by $f_t(x) := F(x,t)$, namely $f_t(x) := f(x) + t(g(x) - f(x))$. The $r$-jet of $f$ at $0 \in \mathbb{R}^n$, $j^r f(0)$, has a unique polynomial representative $z$ of degree not exceeding $r$. We do not distinguish between the $r$-jet $j^r f(0)$ and the polynomial representative $z$ here. We set $q(x) := f(x) - z(x)$ and $r(x) := g(x) - z(x)$, and define $P_t (x) := q(x) + t(r(x) - q(x))$, $t \in J$. Then $f_t (x) = z(x) + P_t(x)$, where $P_t : (\mathbb{R}^n,0) \to (\mathbb{R}^p,0)$ is a $C^r$ map such that $j^r P_t(0) = 0$ for $t \in J$. \begin{rem}\label{remark3} (1) For $a_1 > 0$, there are positive numbers $b_1$, $\beta_1 > 0$ with $0 < b_1 \le a_1$ such that $$ \mathcal{H}_r (f;b_1) \cap \{ |x| < \beta_1 \} \subset \mathcal{H}_r (f_t;a_1) \cap \{ |x| < \beta_1 \} $$ for any $t \in J$. (2) For $a_2 > 0$, there are positive numbers $b_2$, $\beta_2 > 0$ with $0 < b_2 \le a_2$ such that $$ \mathcal{H}_r (f;b_2) \cap \{ |x| < \beta_2 \} \subset \mathcal{H}_r (z;a_2) \cap \{ |x| < \beta_2 \} . $$ \end{rem} We denote by $V_x$ (resp.
$V_{t,x}$) the $p$-dimensional subspace of $\mathbb{R}^n$ spanned by $$ \{ \grad z_1 (x), \cdots , \grad z_p (x)\} \ \ (\text{resp.} \ \{ \grad f_{t,1}(x), \cdots , \grad f_{t,p}(x)\} ) $$ for $x \in \mathcal{H}_r(f;\overline{w}) \cap \{ |x| < \alpha \}$ and $t \in J$. Concerning $V_x$, we proved the following property in \cite{bekkakoike}. \begin{ass}\label{claimI} (\cite{bekkakoike}, Claim I) Let $\epsilon_1$ be an arbitrary positive number. Then there are positive numbers $\alpha_1$, $\overline{w}_1$ with $0 < \alpha_1 \le \alpha$ and $0 < \overline{w}_1 \le \overline{w}$ such that $$ d(x,V_x) \ge (1 - \epsilon_1)|x| \ \ in \ \ \mathcal{H}_r(z;\overline{w}_1) \cap \{ |x| < \alpha_1\} . $$ \end{ass} We first recall the proof of Claim III in \cite{bekkakoike} (since the details are not mentioned in the paper). Let us denote by $v(x)$ and $v_t(x)$ the projections of $x$ on $V_x$ and $V_{t,x}$, respectively. For $x \in \mathcal{H}_r(f;\overline{w} ) \cap \{ |x| < \alpha \}$ (and $t \in J$), we consider $\{ N_1 (x), \cdots , N_p(x)\}$ the basis of $V_x$ constructed as follows: $$ N_j (x) := \grad z_j(x) - \tilde{N}_j(x) \ \ (1 \le j \le p), $$ where $\tilde{N}_j(x)$ is the projection of $\grad z_j(x)$ to the subspace $V_{x,j}$ spanned by the $\grad z_k(x)$, $k \ne j$, and $\{ N_{t,1} (x), \cdots , N_{t,p} (x)\}$ the corresponding basis of $V_{t,x}$. Then we have the following assertion. \begin{ass}\label{claimIV} (\cite{bekkakoike}, Claim IV) For any $\epsilon_2 > 0$, there are positive numbers $\alpha_2$, $\overline{w}_2$ with $0 < \alpha_2 \le \alpha$ and $0 < \overline{w}_2 \le \overline{w}$ such that the following inequality holds: $$ (1 + \epsilon_2 )|N_j (x)| \ge |N_{t,j} (x)| \ge (1 - \epsilon_2)|N_j(x)| \ \ (1 \le j \le p) $$ for $x \in \mathcal{H}_r(f;\overline{w}_2 ) \cap \{ |x| < \alpha_2 \}$ and $t \in J$. \end{ass} \noindent By Lemma 3.2 in T.-C. Kuo \cite{kuo}, we have $$ v(x) = \sum_{j=1}^p <x, \grad z_j (x)> \frac{N_j(x)}{|N_j(x)|^2}, \ \ v_t(x) = \sum_{j=1}^p <x, \grad f_{t,j} (x)> \frac{N_{t,j}(x)}{|N_{t,j}(x)|^2} \ \ $$ for $x \in \mathcal{H}_r(f;\overline{w} ) \cap \{ |x| < \alpha \}$ and $t \in J$. From the proof of Claim I in \cite{bekkakoike}, we can assume that for any $\epsilon_3 > 0$, there are positive numbers $\alpha_3$, $\overline{w}_3$ with $0 < \alpha_3 \le \alpha$ and $0 < \overline{w}_3 \le \overline{w}$ such that \begin{equation}\label{|v|} |v(x)| \le \sum_{j=1}^p \frac{|<x, \grad z_j (x)>|}{|N_j (x)|} \le \epsilon_3 |x| \ \ \text{in} \ \ \mathcal{H}_r(z;\overline{w}_3 ) \cap \{ |x| < \alpha_3\} . \end{equation} Since $f_t(x) = z(x) + P_t(x)$, $t \in J$, there are positive numbers $\alpha_4$, $\overline{w}_4$ with $0 < \alpha_4 \le \mathop{\rm min} \{ \alpha_2 , \alpha_3\}$ and $0 <\overline{w}_4 \le \mathop{\rm min} \{ \overline{w}_2 , \overline{w}_3\}$ such that for $x \in \mathcal{H}_r(f;\overline{w}_4 ) \cap \{ |x| < \alpha_4\}$ and $t \in J$, the following inequalities hold: \begin{eqnarray*} & |v_t(x)| & \le \sum_{j=1}^p \frac{|<x, \grad f_{t,j} (x)>|}{|N_{t,j} (x)|} \\ & & \le \sum_{j=1}^p \frac{|<x, \grad z_j (x)>|}{|N_{t,j} (x)|} + \sum_{j=1}^p \frac{|<x, \grad P_{t,j} (x)>|}{|N_{t,j} (x)|} \\ & & \le \frac{\epsilon_3}{1 - \epsilon_2} |x| + \sum_{j=1}^p \frac{|<x, \grad P_{t,j} (x)>|}{|N_{t,j} (x)|}. \end{eqnarray*} Note that $$ j^{r-1}(\frac{\partial P_{t,j}}{\partial x_i}) (0) = 0 \ \ (1 \le i \le n, \ 1 \le j \le p) \ \text{for} \ t \in J. 
$$ Therefore, for any $\epsilon_4 > 0$, there are positive numbers $\alpha_5$, $\overline{w}_5$ with $0 < \alpha_5 \le \alpha_4$ and $0 < \overline{w}_5 \le \overline{w}_4$ such that \begin{equation}\label{|v_t|} |v_t (x)| \le \epsilon_4 |x| \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_5 ) \cap \{ |x| < \alpha_5\} \ \text{and} \ t \in J \end{equation} under the assumption of the Kuo condition. Then, by (\ref{|v|}) and (\ref{|v_t|}), we have the following assertion. \begin{ass}\label{claimIII} (\cite{bekkakoike}, Claim III) For any $\epsilon_5 > 0$, there are positive numbers $\alpha_6$, $\overline{w}_6$ with $0 < \alpha_6 \le \alpha_5$ and $0 < \overline{w}_6 \le \overline{w}_5$ such that $$ |d(x,V_{t,x}) - d(x,V_x)| \le |v_t (x) - v(x)| \le |v_t(x)| + |v(x)| \le \epsilon_5 |x| $$ for $x \in \mathcal{H}_r(f;\overline{w}_6 ) \cap \{ |x| < \alpha_6\}$ and $t \in J$. \end{ass} The most important result of \S 3 in \cite{bekkakoike} follows from Assertions \ref{claimI} and \ref{claimIII}. \begin{lem}\label{claimII} (\cite{bekkakoike}, Claim II) There are positive numbers $\alpha_0$, $\overline{w}_0$ with $0 < \alpha_0 \le \mathop{\rm min} \{ \alpha_1 , \alpha_6\}$ and $0 < \overline{w}_0 \le \mathop{\rm min} \{ \overline{w}_1 , \overline{w}_6\}$ such that $$ d(x,V_{t,x}) \ge \frac{1}{2} |x| \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_0 ) \cap \{ |x| < \alpha_0 \} \ \text{and} \ t \in J. $$ \end{lem} We next denote by $W_{(x,t)}$ the $p$-dimensional subspace of $\mathbb{R}^n \times \mathbb{R}$ spanned by $$ \{ \grad F_1 (x,t), \cdots , \grad F_p (x,t)\} \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}) \cap \{ |x| < \alpha \} \ \text{and} \ t \in J. $$ Then we can show the following lemma. \begin{lem}\label{c_d} There are positive numbers $\alpha_{11}$, $\overline{w}_{11}$ such that $$ d((x,0),W_{(x,t)}) \ge \frac{1}{4} |(x,0)| \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_{11} ) \cap \{ |x| < \alpha_{11} \} \ \text{and} \ t \in J. $$ \end{lem} \begin{proof} Let us remark that $$ \grad F_j (x,t) = (\grad f_{t,j}(x), \frac{\partial F_j}{\partial t}(x,t)), \ \ 1 \le j \le p. $$ We denote by $U_{(x,t)}$ the $p$-dimensional subspace of $\mathbb{R}^n \times \mathbb{R}$ spanned by $$ \{ (\grad f_{t,1}(x), 0), \cdots , (\grad f_{t,p}(x), 0)\} \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}) \cap \{ |x| < \alpha \} \ \text{and} \ t \in J. $$ By Lemma \ref{claimII}, we have \begin{equation}\label{1/2} d((x,0), U_{(x,t)}) = d(x,V_{t,x}) \ge \frac{1}{2} |x| \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_0 ) \cap \{ |x| < \alpha_0 \} \ \text{and} \ t \in J. \end{equation} Let $u(x,t)$ and $\omega (x,t)$ be the projections of $(x,0)$ on $U_{(x,t)}$ and $W_{(x,t)}$, respectively. Then we have $$ d((x,0),U_{(x,t)}) = |(x,0) - u(x,t)|, \ \ d((x,0),W_{(x,t)}) = |(x,0) - \omega (x,t)| $$ for $x \in \mathcal{H}_r(f;\overline{w} ) \cap \{ |x| < \alpha \}$ and $t \in J$. Therefore we have \begin{equation}\label{ineq} |d((x,0),U_{(x,t)}) - d((x,0),W_{(x,t)})| \le |u(x,t) - \omega (x,t)| \le |u(x,t)| + |\omega (x,t)| \end{equation} for $x \in \mathcal{H}_r(f;\overline{w} ) \cap \{ |x| < \alpha \}$ and $t \in J$. 
For $x \in \mathcal{H}_r(f;\overline{w} ) \cap \{ |x| < \alpha \}$ and $t \in J$, let us consider $\{ M_1 (x,t), \cdots , M_p(x,t)\}$ the basis of $U_{(x,t)}$ constructed as follows: $$ M_j (x,t) := (\grad f_{t,j}(x),0) - \tilde{M}_j(x,t) \ \ (1 \le j \le p), $$ where $\tilde{M}_j(x,t)$ is the projection of $(\grad f_{t,j}(x),0)$ to the subspace $U_{(x,t),j}$ spanned by the $(\grad f_{t,k}(x),0)$, $k \ne j$, and let $\{ L_1 (x,t), \cdots , L_p (x,t)\}$ be the corresponding basis of $W_{(x,t)}$. By definition, we have $$ |\grad F_j (x,t) - (\grad f_{t,j}(x), 0)| = |\frac{\partial F_j}{\partial t}(x,t)| = |g_j (x) - f_j(x)| $$ where $j^r (g_j - f_j)(0) = 0$ $(1 \le j \le p)$. Therefore there are positive numbers $\alpha_7$, $\overline{w}_7$ with $0 < \alpha_7 \le \alpha$ and $0 < \overline{w}_7 \le \overline{w}$ such that for any $\lambda_j$, $$ \frac{|\sum_j \lambda_j (\grad F_j(x,t) - (\grad f_{t,j}(x),0))|} {|\sum_j \lambda_j (\grad f_{t,j}(x),0)|} \rightarrow 0 \ \ \ \text{as} \ x \to 0 $$ for $x \in \mathcal{H}_r(f;\overline{w}_7) \cap \{|x| < \alpha_7\}$ (uniformly for $t \in J$) under the assumption of the Kuo condition. Then, using a similar argument to the proof of Claim IV in \cite{bekkakoike} (see Assertion \ref{claimIV} above), we can show the following assertion. \begin{ass}\label{est} For any $\epsilon_6 > 0$, there are positive numbers $\alpha_8$, $\overline{w}_8$ with $0 < \alpha_8 \le \alpha_7$ and $0 < \overline{w}_8 \le \overline{w}_7$ such that the following inequality holds: $$ (1 + \epsilon_6 )|M_j (x,t)| \ge |L_j (x,t)| \ge (1 - \epsilon_6)|M_j(x,t)| \ \ (1 \le j \le p) $$ for $x \in \mathcal{H}_r(f;\overline{w}_8 ) \cap \{ |x| < \alpha_8 \}$ and $t \in J$. \end{ass} For $x \in \mathcal{H}_r(f;\overline{w} ) \cap \{ |x| < \alpha \}$ and $t \in J$, $$ u(x,t) = \sum_{j=1}^p <(x,0), (\grad f_{t,j} (x),0)> \frac{M_j (x,t)}{|M_j (x,t)|^2}, $$ $$ \omega (x,t) = \sum_{j=1}^p <(x,0), \grad F_j (x,t)> \frac{L_j (x,t)}{|L_j (x,t)|^2}. $$ By construction, $<(x,0), (\grad f_{t,j}(x),0)> = <x, \grad f_{t,j}(x)>$ and $|M_j (x,t)| = |N_{t,j}(x)|$, $j = 1, \cdots , p$, for $t \in J$. It follows from (\ref{|v_t|}) that \begin{equation}\label{|u(x,t)|} |u(x,t)| \le \sum_{j=1}^p \frac{|<(x,0), (\grad f_{t,j}(x), 0)>|}{|M_j (x,t)|} \le \epsilon_4 |x| \end{equation} for $x \in \mathcal{H}_r(f;\overline{w}_5 ) \cap \{ |x| < \alpha_5\}$ and $t \in J$. On the other hand, $$ |\omega (x,t)| \le \sum_{j=1}^p \frac{|<(x,0), \grad F_j (x,t)>|}{|L_j (x,t)|} = \sum_{j=1}^p \frac{|<(x,0), (\grad f_{t,j}(x), 0)>|}{|L_j (x,t)|} $$ for $x \in \mathcal{H}_r(f;\overline{w} ) \cap \{ |x| < \alpha \}$ and $t \in J$. By Assertion \ref{est}, we have \begin{equation}\label{|w(x,t)|} |\omega (x,t)| \le \frac{\epsilon_4}{1 - \epsilon_6} |x| \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_9 ) \cap \{ |x| < \alpha_9\} \ \text{and} \ t \in J, \end{equation} where $\alpha_9 = \mathop{\rm min} \{ \alpha_5 , \alpha_8 \}$ and $\overline{w}_9 = \mathop{\rm min} \{ \overline{w}_5, \overline{w}_8 \}$. By (\ref{ineq}), (\ref{|u(x,t)|}) and (\ref{|w(x,t)|}), we have the following assertion. \begin{ass}\label{III} For any $\epsilon_7 > 0$, there are positive numbers $\alpha_{10}$, $\overline{w}_{10}$ with $0 < \alpha_{10} \le \alpha_9$ and $0 < \overline{w}_{10} \le \overline{w}_9$ such that $$ |d((x,0),U_{(x,t)}) - d((x,0),W_{(x,t)})| \le \epsilon_7 |x| \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_{10}) \cap \{ |x| < \alpha_{10}\} \ \text{and} \ t \in J.
$$ \end{ass} By (\ref{1/2}) and Assertion \ref{III}, there are positive numbers $\alpha_{11}$, $\overline{w}_{11}$ with $0 < \alpha_{11} \le \alpha_{10}$ and $0 < \overline{w}_{11} \le \overline{w}_{10}$ such that $$ d((x,0),W_{(x,t)}) \ge \frac{1}{4} |(x,0)| \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_{11} ) \cap \{ |x| < \alpha_{11} \} \ \text{and} \ t \in J. $$ \end{proof} By Lemma \ref{c_d}, we have $$ d(\frac{(x,0)}{|(x,0)|},W_{(x,t)}) \ge \frac{1}{4} \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_{11} ) \cap \{ 0 < |x| < \alpha_{11} \} \ \text{and} \ t \in J. $$ Let $\ell_{(x,t)}$ be the $1$-dimensional subspace of $\mathbb{R}^n \times \mathbb{R}$ spanned by $\grad \hat{\rho} (x,t)$ for $x \ne 0$ and $t \in \mathbb{R}$. Here $\grad \hat{\rho} (x,t) = (2x_1, \cdots , 2x_n , 0)$. Therefore we have \begin{equation}\label{keyestimation} \overline{d} (\ell_{(x,t)},W_{(x,t)}) \ge \frac{1}{4} \ \ \text{for} \ x \in \mathcal{H}_r(f;\overline{w}_{11} ) \cap \{ 0 < |x| < \alpha_{11} \} \ \text{and} \ t \in J. \end{equation} Here $$ \overline{d}(\ell ,W) := \max_{||v|| = 1} \{ d(v,W) \ | \ v \in \ell \} $$ for subspaces $\ell$, $W$ of $\mathbb{R}^m$ with $\dim \ell \le \dim W$. Note that $\ell \subset W$ if and only if $\overline{d}(\ell , W) = 0$. Let $(0,t_0) \in Z = \{ 0\} \times J$, and let $\{ (x_i,t_i)\}$ be any pre-regular sequence of points of $Y = F^{-1}(0) \setminus \{ 0 \} \times J$ which tends to $(0,t_0)$. Namely, the sequence of planes $\{ Ker \grad \hat{\rho} ((x_i,t_i) ) \cap T_{(x_i,t_i)} Y\}$ tends to some plane in the Grassmann space of $(n - p)$-planes. Taking a subsequence of $\{ (x_i,t_i)\}$ if necessary, we may assume that the sequences of planes $\{ Ker \grad \hat{\rho} ((x_i,t_i) )\}$ and $\{ T_{(x_i,t_i)} Y\}$ tend to some planes $\mu$ and $\sigma$ in the Grassmann spaces of $n$-planes and $(n + 1 - p)$-planes, respectively. By (\ref{keyestimation}), we have $$ \overline{d} (\ell_{(x_i,t_i)},W_{(x_i,t_i)}) \ge \frac{1}{4} \ \ \text{for} \ (x_i,t_i) \in Y \cap \{ 0 < |x| < \alpha_{11} \} . $$ Since $W_{(x_i,t_i)} = (T_{(x_i,t_i)}Y)^{\perp}$ and $\ell_{(x_i,t_i)} = (Ker \grad \hat{\rho}((x_i,t_i)))^{\perp}$, it follows that $\mu^{\perp} \not\subset \sigma^{\perp}$. By Remark \ref{remark1}, this implies that $(Y,Z)$ satisfies condition $(c_d)$ at $(0,t_0)$. This completes the proof of Theorem \ref{thm2.7}. \end{proof} The proof of Theorem \ref{thm2.8} proceeds almost in the same way as the above argument. \bigskip
\section{Introduction} The properties of particle interactions determine the evolution of a quantum chromodynamical (QCD) system. Thorough understanding of these properties can help answer many fundamental questions in physics, such as the origin of the Universe or the unification of forces. This is one of the important reasons to collect data with particle accelerators, such as the Large Hadron Collider (LHC) at CERN. However, when collecting this data, we only register complex signals of high dimensionality which we can later interpret as signatures of final particles in the detectors. This interpretation stems from the fact that we, more or less, understand the underlying processes that produce the final particles. In essence, of all the particles produced in a collision at the accelerator, only the electron, the proton, the photon and the neutrinos are stable and can be reconstructed with certainty, given a suitable detector. Other particles are sometimes also directly detected, given that they reach the active volume of the detector without first decaying. These include muons, neutrons, charged pions and charged kaons. On the other hand, short-lived particles will almost surely decay before reaching the detector and we can only register the particles they decay into. A similar situation arises with quarks, antiquarks and gluons, the building blocks of colliding nuclei. When a high-energy collision happens, a quark within a nucleus behaves almost as if it does not interact with neighbouring particles, because of a property called asymptotic freedom. If it is struck with a particle from the other nucleus, it can be given sufficient momentum pointing outwards from the parent nucleus. However, we know that there are no free quarks in nature and that this quark needs to undergo a process called hadronisation. This is a process in which quark-antiquark pairs are generated such that they form hadrons. Most of the hadrons are short-lived and they decay into other, more stable, hadrons. The end result of this process is a jet of particles whose average momentum points in the direction of the original outgoing quark. Unfortunately, we do not know the exact decay properties of quarks and gluons, which serves as the motivation for this work. The determination of these properties is a long-standing problem in particle physics. To determine them, we turn to already produced data and try to fit decay models to them. With every new set of data our understanding changes. This is evident from the fact that, when we simulate a collision event, we can obtain, on average, slightly different results with different versions of the same tool \cite{pythia}. Therefore, even though simulation tools are regularly reinforced with new observations from data, we cannot expect the complete physical truth from them. Instead of trying to perform direct fits to data, we propose the use of machine learning methods to determine the decay properties. In fact, the onset of these methods is already hinted at in the traditional approach, since a multivariate fit of decay models to data is already a form of machine learning. It is only natural to extend the existing methods since we cannot rely entirely on simulated data. In this work, we develop an interpretable model by first simulating a system of particles with well-defined masses, decay channels, and decay probabilities. We take this to be the ,,true system'', whose decay properties we pretend not to know and want to reproduce.
Mimicking the real world, we assume that we only have the data that this system produces in the detector. Next, we employ an iterative method which uses a neural network as a classifier between events produced in the detector by the ,,true system'' and some arbitrary ,,test system''. In the end, we compare the distributions obtained with the iterative method to the ,,true'' distributions. This paper is organized as follows: in the Materials and methods section we describe the developed artificial physical system and the algorithm used to recover the underlying probability distributions of the system. We also present in detail the methodology used to obtain the presented results. In the Results section we present our findings and examine whether our hypothesis holds. We conclude the paper with the Discussion section. \section{Materials and methods} The code used for the development of the particle generator, the neural network models, and the calculations is written in the Python programming language using the Keras module with the TensorFlow2 backend \cite{keras}. The calculations were performed using a standardized PC setup equipped with an NVIDIA Quadro P6000 graphics processing unit. \subsection{The physical system} In particle physics, jets are detected as collimated streams of particles. The jet production mechanism is in essence clear: partons from the initial hard process undergo the fragmentation and hadronization processes. In this work, we develop a simplified physical model in which the fragmentation process is modeled as cascaded $1 \rightarrow 2$ independent decays of partons with a constant number of decays. This way, any single jet can be represented as a perfect binary tree of depth $N$, corresponding to $2^N$ particles in the final state. Since the initial parton properties are set, jets can be described by the parameters of the $2^N - 1$ decays. We represent each decay of a mother parton of mass $M$ by four real numbers $(\frac{m_1}{M}, \frac{m_2}{M}, \theta, \phi)$, where $m_1$ and $m_2$ are the masses of the daughter particles and $\theta$ and $\phi$ are the polar and azimuthal angle of the lighter particle, as measured from the rest frame of the mother particle. For simplicity we make all the decays isotropic, which is not necessarily true in real processes. To fully define our physical system we set a decay probability distribution function $p(m_1, m_2 | M)$, the details of which are given in the following subsection. The aim of our proposed algorithm is to recover these underlying probability distributions, assuming we have no information on them, using only a dataset consisting of jets described by the final particles' four-momenta, as one would get from a detector. \subsection{Particle generator} \label{ParticleGenerator} To generate the jets, we developed an algorithm in which we take a particle of known mass and let it undergo three successive decays. We consider only the possibility of discrete decays, in the sense that the decay product masses and decay probabilities are well-defined. We consider a total of 10 types of particles, labelled A -- J, which can only decay into each other. The masses and the decay probabilities of these particles are given in Table \ref{TableParticles}. In this scenario, the ,,decay probabilities'' $p$ are given by the ratios of decay amplitudes. Thus, the total sum of the probabilities for a given particle to decay into others has to be one, and the probabilities describe the number of produced daughters per $N$ decays, scaled by $1/N$.
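As a concrete illustration of the decay parametrisation introduced in the previous subsection, the kinematics of a single isotropic $1 \rightarrow 2$ decay in the mother rest frame can be sampled as in the following minimal sketch. The function name and interface are ours and do not correspond to the actual generator code; for simplicity the sampled direction is assigned to the first daughter, whereas in the text the angles refer to the lighter one.
\begin{verbatim}
import numpy as np

def isotropic_decay(M, m1, m2, rng=None):
    # sample one isotropic 1 -> 2 decay in the rest frame of a mother of mass M;
    # returns the four-momenta (E, px, py, pz) of the two daughters
    rng = rng if rng is not None else np.random.default_rng()
    # daughter momentum magnitude from two-body kinematics (requires M >= m1 + m2)
    p = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    # isotropic direction: cos(theta) uniform in [-1, 1], phi uniform in [0, 2*pi)
    cos_t = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    e1, e2 = np.sqrt(m1**2 + p**2), np.sqrt(m2**2 + p**2)
    p1 = np.concatenate(([e1],  p * n))   # first daughter
    p2 = np.concatenate(([e2], -p * n))   # second daughter, emitted back to back
    return p1, p2
\end{verbatim}
Applying such decays recursively, with the daughter species drawn according to the decay channels and probabilities listed in Table \ref{TableParticles}, produces the binary decay trees described below.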
\vskip 5mm \begin{table}[h!t!] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{particle} & \multicolumn{2}{|c|}{A} & \multicolumn{2}{|c|}{B} & \multicolumn{2}{|c|}{C} & \multicolumn{2}{|c|}{D}& \multicolumn{2}{|c|}{E} \\\hline \multicolumn{2}{|c|}{mass} & \multicolumn{2}{|c|}{0.1} & \multicolumn{2}{|c|}{0.6} & \multicolumn{2}{|c|}{1.3} & \multicolumn{2}{|c|}{1.9}& \multicolumn{2}{|c|}{4.4} \\\hline \multicolumn{2}{|c|}{$p$ / channel} & 1 & A & 0.7 & B & 1 & C & 0.3 & A+C & 0.6 & C+C \\\hline \multicolumn{2}{|c|}{} & & & 0.3 & A+A & & & 0.3 & A+A & 0.4 & E \\\hline \multicolumn{2}{|c|}{} & & & & & & & 0.4 & D & & \\\hline\hline \multicolumn{2}{|c|}{particle} & \multicolumn{2}{|c|}{F} & \multicolumn{2}{|c|}{G} & \multicolumn{2}{|c|}{H} & \multicolumn{2}{|c|}{I}& \multicolumn{2}{|c|}{J} \\\hline \multicolumn{2}{|c|}{mass} & \multicolumn{2}{|c|}{6.1} & \multicolumn{2}{|c|}{8.4} & \multicolumn{2}{|c|}{14.2} & \multicolumn{2}{|c|}{18.1}& \multicolumn{2}{|c|}{25} \\\hline \multicolumn{2}{|c|}{$p$ / channel} & 0.5 & A+A & 0.9 & B+B & 0.6 & D+D & 1 & F+G & 0.5 & F+I \\\hline \multicolumn{2}{|c|}{} & 0.5 & B+C & 0.1 & A+F & 0.25 & D+E & & & 0.4 & G+H \\\hline \multicolumn{2}{|c|}{} & & & & & 0.15 & E+F & & & 0.1 & E+E \\\hline \end{tabular} \caption{Allowed particle decays in the discrete model. The designation $p$/channel shows the probability that a mother particle will decay into specific daughters.} \label{TableParticles} \end{table} \vskip 5mm Particles A--E are set to be long lived and can thus be detected in the detector, which only sees the decay products after several decays. This can be seen in table \ref{TableParticles} as a probability for a particle to decay into itself. In this way, we assure two things: first, that we have stable particles and second, that each decay in the binary tree is recorded, even if it is represented by a particle decaying into itself. Particles A and C are completely stable, since they only have one ,,decay'' channel, in which they decay back into themselves. On the other hand, particles F--I are hidden resonances: if one of them appears in the i-th step of the decay chain, it will surely decay into other particles in the next, (i+1)-th step of the chain. To create a jet, we start with particle J, which we call the mother particle, and allow it to decay in one of the decay channels. Each of the daughter particles then decays according to their decay channels, and this procedure repeats a total of 3 times. In the end, we obtain a maximum of 8 particles from the set A--E, with known momenta as measured from the rest frame of the mother particle. An example of a generated jet is given in Fig.\ref{FigRaspadi}. \begin{figure}[h!t!] \centering \begin{forest} for tree={ grow=east, edge={->}, parent anchor=east, child anchor=west, s sep=1pt, l sep=1cm }, [J [F [A[A]] [A[A]] ] [I [F [B] [C] ] [G [B] [B] ] ] ] \end{forest} \caption{An example of the operation of the discrete jet generator. The mother particle J decays into particles I and F. According to decay probabilities, this happens in half the generated jets. 
The daughter particles subsequently decay two more times, leaving only stable, detectable particles in the final state.} \label{FigRaspadi} \end{figure} \subsection{Introduction to the algorithm} Let us assume we have two distinct datasets: one that consists of samples from a random variable $X$ distributed with an unknown probability density $p(x)$, which we call the ,,real'' dataset, and the other, which consists of samples from a random variable $Y$ distributed with a known probability density $q(x)$, which we call the ,,test'' dataset. We would like to perform a hypothesis test between $H_{0}:p = p(x)$ and $H_{1}:p = q(x)$ using a likelihood-ratio test. The approach we use follows earlier work that employs the Neyman–Pearson lemma \cite{NNNP1, NNNP2, NNNP3}. This lemma states that the likelihood ratio, $\Lambda$, given by: \begin{equation} \Lambda (p \mid q)\equiv \frac {{\mathcal {L}}(x \mid real)}{{\mathcal {L}}(x \mid test)} = \frac{p(x)}{q(x)} \label{NP} \end{equation} is the most powerful test at the given significance level \cite{NeyPear}. We can obtain an approximate likelihood ratio $\Lambda$ by transforming the output of a classifier used to discriminate between the two datasets. Assume that the classifier is a neural network optimized by minimizing the \textit{crossentropy} loss. In this case, the network output gives the probability of $x$ being a part of the real dataset, $C_{\textrm{NN}}(x) = p(real \mid x)$ \cite{NNProbability}. If the datasets consist of the same number of samples, we can employ Bayes' theorem in a simple manner: \begin{eqnarray} p(real \mid x) &=& \frac{p(x \mid real)p(real)}{p(x \mid real) p(real)+p(x \mid test)p(test)} \nonumber \\ &=& \frac{p(x \mid real)}{p(x \mid real)+p(x \mid test)} = \frac{\Lambda}{\Lambda+1}\,. \label{Bayes} \end{eqnarray} A simple inversion of Eq.\ref{Bayes} gives: \begin{equation} \Lambda = \frac{p(x)}{q(x)} = \frac{C_{\textrm{NN}}(x)}{1 - C_{\textrm{NN}}(x)}, \end{equation} \begin{equation} p(x) = \frac{C_{\textrm{NN}}(x)}{1 - C_{\textrm{NN}}(x)} q(x). \label{pq} \end{equation} Therefore, in ideal conditions, the unknown probability density $p(x)$ describing the real dataset can be recovered with the help of the known probability density $q(x)$ and a classifier, using Eq.\ref{pq}. It must be noted that Eq.\ref{pq} is strictly correct only for optimal classifiers, which are unattainable in practice. In our case, the classifier is optimized by minimizing the \textit{crossentropy} loss defined by: \begin{equation} L = -\frac{1}{n}\sum_{i=1}^{n}\left[y(x_i)\ln C_{\textrm{NN}}(x_i) + (1-y(x_i))\ln (1-C_{\textrm{NN}}(x_i)) \right]\,, \end{equation} where $y(x_i)$ is 1 if $x_i$ is a part of the real dataset, and 0 if $x_i$ is a part of the test dataset. We can safely assume that the final loss value of a suboptimal classifier is greater than the final loss value of the optimal classifier: \begin{equation} L_{\textrm{optimal}} < L < \ln{2} \,. \end{equation} The value of $\ln 2$ is obtained under the assumption of the \textit{worst} possible classifier, one whose output is constantly $1/2$ and which therefore cannot distinguish between the two datasets at all.
To see what this bound implies for the recovered density, we regroup the sums in the loss function into two parts, corresponding to the real and the test distributions: \begin{equation} -\frac{1}{n}\sum_{i \in real}\ln C_{\textrm{NN}}^{\textrm{optimal}}(x_i) < -\frac{1}{n}\sum_{i \in real}\ln C_{\textrm{NN}}(x_i) < -\frac{1}{n}\sum_{i \in real}\ln \frac{1}{2}, \label{Lreal} \end{equation} \begin{equation} -\frac{1}{n}\sum_{i \in test}\ln\left[1 - C_{\textrm{NN}}^{\textrm{optimal}}(x_i) \right]< -\frac{1}{n}\sum_{i \in test}\ln\left[1 - C_{\textrm{NN}}(x_i)\right] < -\frac{1}{n}\sum_{i \in test}\ln \frac{1}{2}. \label{Ltest} \end{equation} After expanding inequality \ref{Lreal} we obtain: \begin{equation} -\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{C_{\textrm{NN}}^{\textrm{optimal}}(x_i)}{1 - C_{\textrm{NN}}^{\textrm{optimal}}(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln \left[\frac{C_{\textrm{NN}}(x_i)}{1 - C_{\textrm{NN}}(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln 1. \label{Expanded} \end{equation} According to Eq.\ref{pq}, we can recover the real probability density $p(x)$ when using the optimal classifier. However, if one uses a suboptimal classifier, a slightly different probability density $p'(x)$ will be calculated. Since the ratios that appear as arguments of the logarithms in Eq.\ref{Expanded} correspond to distribution ratios, it follows that: \begin{equation} -\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{p(x_i)}{q(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{p' (x_i)}{q(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln 1. \end{equation} After some simplification this becomes: \begin{equation} \sum_{i \in real} \ln p(x_i) > \sum_{i \in real} \ln p'(x_i) > \sum_{i \in real} \ln q(x_i). \label{proof1} \end{equation} If an analogous analysis is carried out for inequality \ref{Ltest} we get: \begin{equation} \sum_{i \in test} \ln p(x_i) < \sum_{i \in test} \ln p'(x_i) < \sum_{i \in test} \ln q(x_i). \label{proof2} \end{equation} From this, it can be seen that the probability density $p'(x)$ is on average closer to the real probability density $p(x)$ than to the test probability density $q(x)$. In a realistic case, Eq.\ref{pq} cannot be used to completely recover the real probability density $p(x)$. However, it can be used in an iterative method; starting with a known distribution $q(x)$, we can approach the real distribution more and more with each iteration step. \subsection{A simple example} Let us illustrate the recovery of an unknown probability density by using a classifier on a simple example. We start with a set of 50 000 real numbers generated from a random variable with a probability density given by \begin{equation} p_{\textrm{real}}(x) = \frac{1}{4} \mathcal{N}(-1,1) + \frac{3}{4}\mathcal{N}(3,1)\,, \label{eqpreal} \end{equation} where $\mathcal{N}(\mu,\sigma^2)$ denotes a normal distribution. A histogram of values in this set is shown in Fig.~\ref{hsimple}. Let us now assume we do not know $p_{\textrm{real}}(x)$ and want to recover it using the procedure outlined in the previous subsection. This set will be denoted as the ,,real'' dataset and the underlying probability density will be denoted as the ,,real'' probability density. \begin{figure}[h!t!] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth]{Images/nn_simple_real.png} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth]{Images/hsimple.png} \end{subfigure} \caption{(\textbf{a}) The normalized probability density for the example, given by Eq.~\ref{eqpreal}.
(\textbf{b}) A histogram of values sampled from the set generated by the same equation.} \label{hsimple} \end{figure} To construct the ,,test'' dataset, we generate values with a uniform probability density in the interval $\left[-10,10 \right]$. Finally, we construct a simple neural network which is used as a classifier that distinguishes examples from the real dataset from examples from the test dataset. The classifier we use is a simple \textit{feed-forward} neural network with 100 hidden units using a ReLU activation function. The activation function of the final neural network output is the \textit{sigmoid} function, which we use to constrain the output values to the interval [0,1]. After the classifier is trained to discriminate between the two datasets by minimizing the \textit{binary crossentropy} loss, we evaluate its output at 200 equidistant points between -10 and 10. Using Eq.\ref{pq}, the probability distribution $p_{\textrm{calculated}}(x)$ is calculated from the classifier outputs. The calculated $p_{\textrm{calculated}}(x)$ is compared to the real probability density $p_{\textrm{real}}(x)$ and is shown in Fig.~\ref{nn_simple_0}. Although the resulting probability density differs from the real probability density due to the non-ideal classifier, we can conclude that the calculated $p_{\textrm{calculated}}(x)$ is considerably closer to $p_{\textrm{real}}(x)$ than to the uniform probability density $q(x)$ used to generate the test dataset. Now, if we use the calculated $p_{\textrm{calculated}}(x)$ to construct a new test dataset and repeat the same steps, we can improve the results even more. This procedure can therefore iteratively improve the resemblance of $p_{\textrm{calculated}}(x)$ to $p_{\textrm{real}}(x)$ to the point where the datasets are so similar that the classifier cannot distinguish between them. In this simple example convergence is reached after the 5th iteration, since no significant improvement is observed afterwards. The calculated probability density $p_{\textrm{calculated}}(x)$ after the final iteration is shown in Fig.~\ref{nn_simple_0}, compared to the real distribution $p_{\textrm{real}}(x)$. It is clear that in this case the procedure converges, and we could possibly obtain an even better match between $p_{\textrm{calculated}}(x)$ and $p_{\textrm{real}}(x)$ if we used a better-optimized classifier. \begin{figure}[h!t!] \centering \includegraphics[width=15cm]{Images/nn_simple_0_new.png} \caption{The calculated $p_{\textrm{calculated}}(x)$ (blue line) compared to the real probability density $p_{\textrm{real}}(x)$ (orange line). (\textbf{a}) The left panel shows the comparison after one iteration of the algorithm, alongside the starting ,,test'' distribution (green line). (\textbf{b}) The right panel shows the comparison after the 5th iteration.} \label{nn_simple_0} \end{figure} In essence, a simple histogram could be used in this simple example to determine the underlying probability distribution instead of the method described above. However, in the case of multivariate probability distributions, which can be products of unknown probability distributions, a histogram approach would not prove useful.
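To make this example concrete, the sketch below shows one possible implementation of a single iteration of the procedure, using the Keras interface of TensorFlow2 mentioned earlier. The network size follows the description above, while the training settings (number of epochs, batch size) are illustrative choices of ours rather than the exact values used to produce the figures.
\begin{verbatim}
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n = 50_000

# "Real" sample: mixture 1/4 N(-1,1) + 3/4 N(3,1); "test" sample: uniform on [-10,10].
mix = rng.random(n) < 0.25
x_real = np.where(mix, rng.normal(-1.0, 1.0, n), rng.normal(3.0, 1.0, n))
x_test = rng.uniform(-10.0, 10.0, n)

x = np.concatenate([x_real, x_test]).reshape(-1, 1).astype("float32")
y = np.concatenate([np.ones(n), np.zeros(n)]).astype("float32")

# Feed-forward classifier with 100 hidden ReLU units and a sigmoid output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=5, batch_size=256, verbose=0)

# Recover the density on a grid via p(x) = C(x) / (1 - C(x)) * q(x).
grid = np.linspace(-10.0, 10.0, 200).reshape(-1, 1).astype("float32")
c = model.predict(grid, verbose=0).ravel()
q = np.full_like(c, 1.0 / 20.0)            # uniform density on [-10, 10]
p_calculated = c / (1.0 - c) * q
\end{verbatim}
A further iteration would use $p_{\textrm{calculated}}(x)$ to draw a new test sample and retrain the classifier, as described above.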
\subsection{Extension to jets} We would now like to apply the described procedure to datasets that contain jets. Every jet, represented by a binary tree of depth $N$, consists of $2^N-1$ independent decays producing a maximum of $2^N$ particles in the final state. Since all the decays are isotropic in space, a jet can be described with a $4 \times (2^N-1)$-dimensional vector $\vec{x}$ and a probability distribution function given by: \begin{equation} p\left(\vec{x} \right) = \prod_i^{2^N-1} p(m_1^i, m_2^i | M)p(\theta^i) p(\phi^i), \label{jet_prob} \end{equation} where $i$ denotes the decay index and ($m_1^i$, $m_2^i$, $\theta^i$, $\phi^i$) are the components of the vector $\vec{x}$. Since both angles are uniformly spatially distributed, they contribute to the probability with a simple constant factor. Therefore, when plugging $p\left(\vec{x} \right)$ from Eq.\ref{jet_prob} into Eq.\ref{pq} we can omit the angles, since the constant factors cancel each other out: \begin{equation} \prod_i^{2^N-1} p(m_1^i, m_2^i | M) = \frac{C_{NN}(\vec{x})}{1 - C_{NN}(\vec{x})} \prod_i^{2^N-1} q(m_1^i, m_2^i | M). \label{pq_jets} \end{equation} Taking the logarithm of both sides: \begin{equation} \sum_i^{2^N-1} \ln p(m_1^i, m_2^i | M) = \ln{C_{NN}(\vec{x})} - \ln({1 - {C_{NN}(\vec{x})}}) + \sum_i^{2^N-1} \ln q(m_1^i, m_2^i | M). \label{log_pq_jets} \end{equation} Unfortunately, we cannot explicitly obtain the probability $p(m_1, m_2 \mid M)$ directly from Eq.\ref{log_pq_jets} without solving a linear system of equations. This task proves to be computationally exceptionally challenging due to the high dimensionality of the dataset. In order to avoid this obstacle, we introduce a neural network $f$ to approximate $\ln p(m_1,m_2|M)$. We can optimize this neural network by minimizing the \textit{mean squared error} applied to the two sides of Eq.\ref{log_pq_jets}. \subsection{The 2 Neural Networks (2NN) algorithm} At this point we are ready to recover the underlying probability distributions from an existing dataset that consists of jets described by the four-momenta of the final particles. We denote the jets from this dataset as ,,real''. The building blocks of the full recovery algorithm are two independent neural networks: the aforementioned classifier $C_{NN}$ and the neural network $f$. Based on the use of two neural networks, we dubbed the algorithm \textit{2NN}. The detailed architectures of both networks are given in Appendix A. The workflow of the 2NN algorithm is simple: first we initialize the parameters of both neural networks. Then, we generate a test dataset using the neural network $f$. The test dataset and the real dataset are fed into the classifier network, which produces a set of linear equations in the form of Eq.\ref{log_pq_jets}. We approximate the solution to these by fitting the neural network $f$, which in turn produces a new test dataset. The procedure is continued iteratively until there are no noticeable changes in the difference of the real and test distributions. More detailed descriptions of the individual steps are given in the next subsections. \subsubsection{Generating the test dataset} After the parameters of the neural network $f$ are initialized, we need to generate a test dataset of jets with known decay probabilities $q(\vec{x})$. The input of the neural network $f$ is a vector consisting of 3 real numbers: $a = m_1/M$, $b = m_2/M$ and $M$. We denote the output of the neural network with $f(a,b,M)$. Due to conservation laws, the sum $a+b$ needs to be less than or equal to 1. We can assume $a \leq b$ without any loss of generality. In order to obtain normalized probabilities, the partition function \begin{equation} Z(M) = \int_{\Omega} e^{f(a,b,M)} \,\mathrm{d}a \,\mathrm{d}b \label{Z} \end{equation} needs to be calculated. Here, $\Omega$ denotes the entire probability space and is shown as the gray area in the left panel of Fig.~\ref{prob_space}. To calculate the integral in the above expression, the probability space is discretized into 650 equal areas, shown in the right panel of Fig.~\ref{prob_space}. These areas are obtained by discretizing the parameters $a$ and $b$ into equidistant segments of length 0.02. After the discretization, the partition function $Z(M)$ then becomes: \begin{equation} Z(M) \approx \sum_j \sum_{k} e^{f(a_j,b_k,M)} \,. \label{Z_discrete} \end{equation}
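As an illustration, the following sketch evaluates Eq.~\ref{Z_discrete} on the discretized $(a,b)$ grid; the callable \texttt{toy\_f} is a stand-in for the neural network $f$, used only to make the sketch runnable, and the grid step mirrors the discretization just described.
\begin{verbatim}
import numpy as np

def grid_cells(step=0.02):
    """Centres of the allowed cells: a <= b and a + b <= 1."""
    pts = np.arange(step / 2, 1.0, step)
    return [(a, b) for a in pts for b in pts if a <= b and a + b <= 1.0]

def partition_function(f, M, step=0.02):
    """Discretized Z(M): sum of exp(f(a, b, M)) over the allowed grid."""
    return sum(np.exp(f(a, b, M)) for a, b in grid_cells(step))

# Stand-in for the neural network f.
toy_f = lambda a, b, M: -((a - 0.2) ** 2 + (b - 0.3) ** 2)
Z = partition_function(toy_f, M=25.0)
q = np.exp(toy_f(0.2, 0.3, 25.0)) / Z   # normalized decay probability e^f / Z
\end{verbatim}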
\begin{figure}[h!t!] \begin{center} \resizebox{\columnwidth}{!}{% \begin{tikzpicture} \draw[->,thick,>=stealth] (-1,0) -- (12,0) node[right] {{\huge $a$}}; \draw[->,thick,>=stealth] (0,-1) -- (0,12) node[left] {{\huge $b$}}; \draw[dashed,thick] (0,10)--(10,0); \draw[dashed,thick] (0,0)--(10,10); \node[rotate=45] at (8,8.5) {\huge $a = b$}; \node[rotate=-45] at (8,2.5) {\huge $a + b = 1$}; \fill[black!10] (0,0) -- (5,5) -- (0,10) -- cycle; \node[] at (2,5) {{\fontsize{40}{60}\selectfont $\Omega$}}; \draw[->,thick,>=stealth] (14,0) -- (27,0) node[right] {{\huge $a$}}; \draw[->,thick,>=stealth] (15,-1) -- (15,12) node[left] {{\huge $b$}}; \draw[dashed,thick] (15,10)--(25,0); \draw[dashed,thick] (15,0)--(25,10); \node[rotate=45] at (23,8.5) {\huge $a = b$}; \node[rotate=-45] at (23,2.5) {\huge $a + b = 1$}; \foreach \x in {0,1,2,...,25} { \draw[thick] (15+0.2*\x,-0.2+0.2*\x) -- (15+0.2*\x,10.2-0.2*\x);}; \foreach \y in {0,1,2,...,25} { \draw[thick] (15,-0.2+0.2*\y) -- (15+0.2*\y,-0.2+0.2*\y);}; \draw[thick] (15,-0.2+0.2*26) -- (15+0.2*25,-0.2+0.2*26); \foreach \y in {0,1,2,...,25} { \draw[thick] (15,5.2+0.2*\y) -- (20-0.2*\y,5.2+0.2*\y);}; \end{tikzpicture}% } \caption{(\textbf{a}) The left panel shows the entire allowed probability space of the parameters $a$ and $b$, designated by $\Omega$. Due to conservation laws, $a+b \leq 1$ needs to hold true. To describe our system, we selected the case where $a \leq b$, which we can do without loss of generality. (\textbf{b}) The right panel shows the discretized space $\Omega$, as used to evaluate the partition function.} \label{prob_space} \end{center} \end{figure} To generate the jets which form the test dataset, we must generate each decay in the cascading evolution using the neural network $f$. Each of the decays is generated by picking a particular pair of parameters $(a,b)$ from the 650 possible pairs which form the probability space for a given mass $M$. The decay probability is then given by: \begin{equation} q(m_1, m_2 \mid M) = \frac{e^{f(a,b,M)}}{Z(M)}\,. \label{q} \end{equation} After applying this procedure we have a test dataset in which each jet is represented as a list of $2^N$ particles and their four-momenta. For each decay, we also store the pairs $(a^i,b^i)$ as well as the corresponding decay probabilities. \subsubsection{Optimizing the classifier} The classifier used in this work is a convolutional neural network. The input to this type of network is a set of images. For this purpose, all the jets are preprocessed by transforming the list of particles' four-momenta into jet images. Two 32$\times$32 images are produced for a single jet. In both images the axes correspond to the decay angles $\theta$ and $\phi$, while the pixel values are either the energy or the momentum of the particle found in that particular pixel. If a pixel contains two or more particles, their energies and momenta are summed and stored as pixel values.
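The sketch below illustrates one way this jet-image construction could be implemented. The input format (one row of energy, momentum, $\theta$ and $\phi$ per final-state particle) and the binning ranges are our assumptions for the purpose of the example.
\begin{verbatim}
import numpy as np

def jet_images(particles, bins=32):
    """Turn an (n, 4) array of (E, |p|, theta, phi) rows into two 32x32 images."""
    energy, momentum, theta, phi = particles.T
    theta_edges = np.linspace(0.0, np.pi, bins + 1)
    phi_edges = np.linspace(0.0, 2.0 * np.pi, bins + 1)
    img_e, _, _ = np.histogram2d(theta, phi, bins=[theta_edges, phi_edges],
                                 weights=energy)    # summed energy per pixel
    img_p, _, _ = np.histogram2d(theta, phi, bins=[theta_edges, phi_edges],
                                 weights=momentum)  # summed momentum per pixel
    return np.stack([img_e, img_p], axis=-1)        # shape (32, 32, 2)

# Three dummy final-state particles, given as (E, |p|, theta, phi).
example = np.array([[4.0, 3.9, 0.3, 1.2],
                    [2.5, 2.4, 0.4, 1.3],
                    [1.0, 0.9, 2.1, 4.0]])
images = jet_images(example)
\end{verbatim}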
The transformation of the jet representations is done on both the real and the test datasets. We label the ,,real'' jet images with the digit 1 and the ,,test'' jet images with the digit 0. The classifier is then optimized by minimizing the \textit{binary crossentropy} loss between the real and the test datasets. The optimization is performed by the ADAM algorithm \cite{adam}. It is important to note that the sizes of both datasets need to be the same. \subsubsection{Optimizing the neural network $f$} After the classifier is optimized, a new jet dataset is generated by using the neural network $f$. Just as earlier, the generated jets are first transformed to jet images and then fed to the classifier. Since we have access to each of the decay probabilities for each jet, the right side of Eq.\ref{log_pq_jets} can be easily calculated for all the jet vectors $\vec{x}$ in the dataset. This way we can obtain the desired log value of the total probability for each jet $p(\vec{x})$: \begin{equation} \ln p(\vec{x}) = \ln{C_{NN}(\vec{x})} - \ln({1 - {C_{NN}(\vec{x})}}) + \sum_i^{2^N-1} \ln q(m_1^i, m_2^i | M). \label{p} \end{equation} Finally, we update the parameters of the neural network $f$ by minimizing the expression given by: \begin{equation} L = \frac{1}{n} \sum_i^n \left[ \sum_{j}^{2^N-1} f(a_i^j,b_i^j,M_j) - \ln p_i(\vec{x})\right]^2, \label{loss} \end{equation} where $i$ denotes the jet index and $j$ denotes the decay index in a particular jet. After this step, the weights of the neural network are updated in such a way that the network output values $f(a,b,M)$ are on average closer to the real log value of $p(m_1,m_2 \mid M)$. The updated network $f$ is used to generate the test dataset in the next iteration. \subsection{Evaluation of the 2NN algorithm} Upon completion of each iteration of the algorithm, the underlying probability densities can be obtained from the output values of the neural network $f$ according to Eq.\ref{q}. In the Results section the 2NN algorithm is evaluated in terms of the Kullback-Leibler (KL) divergence in the following way \cite{KLD}: \begin{equation} KL(M) = \sum_{j} \sum_{k} p_{\textrm{real}} (m_1^j, m_2^k \mid M)\left[ \ln p_{\textrm{real}} (m_1^j, m_2^k \mid M) - f(a^j, b^k, M) + \ln{Z(M)}\right], \label{kl} \end{equation} where the sum is performed over the whole probability space. The KL-divergence is a non-negative measure of the difference between two probability densities defined on the same probability space. If the probability densities are identical, the KL-divergence is zero. \subsection{Hardware and software} The code for the calculations in this research is written in the Python programming language using the \textit{TensorFlow 2} and \textit{NumPy} modules. An NVIDIA Quadro P6000 GPU, obtained through the NVIDIA Grant for academic research, is used to increase the speed of the performed calculations. \section{Results} In this section we present our findings after applying the $2NN$ algorithm to 500 000 jets created using the particle generator described in Section~\ref{ParticleGenerator}. In each iteration, the classifier is optimized using 50 000 randomly picked jets from the ,,real'' dataset and 50 000 jets generated using the neural network $f$. To optimize the neural network $f$, we use 50 000 jets as well. The algorithm performed 800 iterations. After the final iteration of the $2NN$ algorithm we obtain the calculated probability densities, which can then be used to generate samples of jets.
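Several of the results below are quantified with the KL-divergence of Eq.~\ref{kl}. As an illustration of how this metric might be evaluated on the discretized $(a,b)$ grid, a short sketch is given here; the two callables are flat stand-ins for the real density and the network $f$, not the actual objects used in our calculations.
\begin{verbatim}
import numpy as np

def grid_cells(step=0.02):
    """Centres of the allowed cells: a <= b and a + b <= 1."""
    pts = np.arange(step / 2, 1.0, step)
    return [(a, b) for a in pts for b in pts if a <= b and a + b <= 1.0]

def kl_divergence(p_real, f, M, step=0.02):
    """Sum over the grid of p_real * (ln p_real - f + ln Z), computing ln Z from f."""
    cells = grid_cells(step)
    log_z = np.log(sum(np.exp(f(a, b, M)) for a, b in cells))
    return sum(p_real(a, b, M) * (np.log(p_real(a, b, M)) - f(a, b, M) + log_z)
               for a, b in cells if p_real(a, b, M) > 0.0)

# For matching flat distributions the divergence is (numerically) zero.
n_cells = len(grid_cells())
toy_p = lambda a, b, M: 1.0 / n_cells
toy_f = lambda a, b, M: 0.0
print(kl_divergence(toy_p, toy_f, M=25.0))
\end{verbatim}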
First, in Fig.~\ref{hE} we show the energy spectrum of the final-state particles in jets generated by the calculated probabilities. In the same figure, this spectrum is directly compared to the energy spectrum of particles taken from jets belonging to the ,,real'' dataset. \begin{figure}[h!t!] \centering \includegraphics[width=15cm]{Images/hE.png} \caption{The energy spectrum of the particles in the final state in jets generated by the calculated probabilities, compared to the energy spectrum of particles taken from jets belonging to the ,,real'' dataset.} \label{hE} \end{figure} The plotted spectra are obtained using 10 000 jets from each dataset. The error bars in the histogram are smaller than the marker size and are hence not visible. A resemblance between the two spectra is notable, especially at higher energies. This points to the fact that the calculated probabilities are approximately correct, so we can use them to generate samples of jets that resemble ,,real'' jets. To further examine the calculated probability densities we need to reconstruct the hidden resonances which are not found in the final state. For this purpose, the calculated probability densities for mother particle masses of $M = 25.0$, $M = 18.1$, $M = 14.2$ and $M = 1.9$ are analyzed and compared to the real probability densities in the following subsections. These masses are chosen since they match the masses of the hidden resonances, as introduced in Table \ref{TableParticles}. \subsection{Mother particle with mass $M$ = 25.0} The calculated 2$d$-probability density $p(m_1,m_2 \mid M)$ is shown in Figure \ref{probs25}, compared to the real probability density. Visual inspection reveals that 3 possible decays of the particle of mass $M = 25.0$ are recognized by the algorithm. After dividing the probability space as in panel (c) of Figure \ref{probs25} with the lines $m_2 = 16.0$ and $m_2 = 10.0$, we calculate the mean and the variance of the data on each subspace. As a result, we obtain $(m_1, m_2) = (18.1 \pm 0.5, 6.1 \pm 0.5)$ for $m_2 > 16.0$, $(m_1, m_2) = (14.0 \pm 0.7, 8.4 \pm 0.7)$ for $10.0 < m_2 \leq 16.0$ and $(m_1, m_2) = (4.8 \pm 0.2, 4.6 \pm 0.2)$ for $m_2 \leq 10.0$. These mean values closely agree with the masses of the resonances expected as the products of decays of the particle with mass $M = 25.0$. The calculated small variances indicate that the algorithm is very precise. The total decay probabilities for each of the subspaces are equal to $p_1 = 0.48$, $p_2 = 0.47$, $p_3 = 0.05$, which approximately agree with the probabilities of the decay channels of the particle with mass $M = 25.0$, as defined in Table \ref{TableParticles}. \begin{figure}[h!t!] \centering \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/probability_25.png} \caption{} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/preal_25.png} \caption{} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/pl_25.png} \caption{} \end{subfigure} \caption{The calculated probability density for a decaying particle of mass $M = 25.0$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data.
(\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.} \label{probs25} \end{figure} These results show that we can safely assume that the $2NN$ algorithm successfully recognizes all the decay modes of the particle that initiates a jet. To quantify the difference between the calculated probability density and the real probability density, we use the KL-divergence. \begin{figure}[h!t!] \centering \includegraphics[width=13cm]{Images/kl_25.png} \caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of the particle of mass $M = 25.0$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.} \label{kl25} \end{figure} Figure \ref{kl25} shows the dependence of the KL-divergence on the iteration of the $2NN$ algorithm. First, we observe an initial steep decrease in the value of the divergence. Large variations in the divergence value are observed later. This is an indicator that the approximate probability density is found relatively quickly, after a few hundred iterations. As the algorithm decreases the width of the peaks found in the probability distribution, the KL-divergence becomes very sensitive to small variations in the location of these peaks and can therefore vary by a large relative amount. \subsection{Mother particle with mass $M$ = 18.1} A similar analysis is performed for the particle with mass $M = 18.1$. The calculated probability density is shown in Figure \ref{probs18}, compared to the expected probability density. In this case, only one decay is allowed, so a division into probability subspaces is not necessary, as it was in the case $M = 25.0$. The calculated mean and variance of the shown probability density are $(m_1, m_2) = (5.9 \pm 0.4, 8.2 \pm 0.6)$. In this case, just as in the former, the calculated values closely agree with the only possible decay, in which the mother particle decays into two particles of masses 6.1 and 8.4. Also, just as in the previous subsection, the obtained result is very precise. Therefore, the algorithm can successfully find hidden resonances, as well as recognize the decay channels, without ever seeing them in the final state in the ,,real'' dataset. \begin{figure}[h!t!] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{Images/probability_18.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{Images/preal_18.png} \end{subfigure} \caption{The calculated probability density for a decaying particle of mass $M = 18.1$. (\textbf{a}) The calculated density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. } \label{probs18} \end{figure} The calculated KL-divergence in the case of the particle with mass $M = 18.1$ decreases over time in a very smooth manner, as can be seen in Figure \ref{kl18}. We believe this could be due to the simpler expected probability density, which the algorithm manages to find very quickly. \begin{figure}[h!t!] \centering \includegraphics[width=13cm]{Images/kl_18.png} \caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of the particle of mass $M = 18.1$. The presented results are averaged over 50-iteration intervals.
The error bars represent the standard deviation calculated on the same intervals.} \label{kl18} \end{figure} \subsection{Mother particle with mass $M$ = 14.2} Figure \ref{probs14} shows the 2$d$-probability density for the decaying particle of mass $M = 14.2$. In this case, we can identify 3 possible decay channels, which are not as clearly separated as the channels in the previous subsections. Similar to the case of the decaying particle of mass $M = 25.0$, we divided the probability space into 3 subspaces, each of which covers one of the possible decays. In this case, the three subspaces cover the areas where $m_2 \leq 4.0$, $4.0 < m_2 \leq 5.5$ and $m_2 > 5.5$. The mean values of the probability density on each of the subspaces are $(m_1,m_2) = (2.4 \pm 0.5, 2.9 \pm 0.7)$, $(m_1,m_2)= (2.7 \pm 0.7, 4.3 \pm 0.3)$ and $(m_1,m_2) = (4.4 \pm 0.4, 6.2 \pm 0.3)$, respectively. The allowed decays of a mother particle with mass $M$ = 14.2 in the ,,real'' data are into channels with masses $(1.9,1.9)$, $(1.9, 4.4)$ and $(4.4, 6.1)$, which agree with the calculated results. However, in this case the calculations show a higher variance, especially for decays where one of the products is a particle with mass 1.9. The total probabilities of decay in each of the subspaces are 0.89, 0.05 and 0.06, respectively. The relative probabilities of the decay channels into particles with masses (4.4, 6.1) and (1.9, 4.4) are approximately the same as expected. However, the algorithm predicts more decays in the channel (1.9,1.9) than expected. The KL-divergence shows a steady decrease with occasional spikes, as shown in Figure \ref{kl14}. \begin{figure}[h!t!] \centering \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/probability_14.png} \caption{} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/preal_14.png} \caption{} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/pl_14.png} \caption{} \label{probs2c} \end{subfigure} \caption{The calculated probability density for a decaying particle of mass $M = 14.2$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.} \label{probs14} \end{figure} \begin{figure}[h!t!] \centering \includegraphics[width=13cm]{Images/kl_14.png} \caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of the particle of mass $M = 14.2$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.} \label{kl14} \end{figure} \subsection{Mother particle with mass $M$ = 1.9} The last probability density we analyze is the probability density for the mother particle with mass $M$ = 1.9. Figure \ref{probs2} shows the calculated probability density. It can be seen that one of the decay modes present in the ,,real'' data, namely the decay into the $(0.1, 0.1)$ channel, is not recognized by the algorithm, but the decay mode in which the particle decays into the $(0.1, 1.3)$ channel is visible. If we isolate this decay as shown in the right panel of Figure \ref{probs2}, we get a mean value of $(m_1, m_2) = (0.14 \pm 0.09, 1.27 \pm 0.09)$, which agrees with the expected decay. We also observe significant decay probabilities along the line $m_1 + m_2 = 1.9$.
The decays that correspond to the points on this line in effect create particles with zero momentum in the rest frame of the mother particle. In the lab frame this corresponds to the daughter particles flying off in the same direction as the mother particle. Since they reach the detector at the same time, they are registered as one particle of total mass $M = 1.9$. Thus, we can conclude that the probabilities on this line have to add up to the total probability of the mother particle not decaying. The calculated probabilities in the case of no decay and in the case of decaying into particles with masses $(0.1,1.3)$ are 0.71 and 0.29, respectively. We note that the relative probabilities are not correct, but 2 of the 3 decay modes are still recognized by the algorithm. The KL-divergence in this case cannot produce reasonable results, simply because multiple points in the $(m_1,m_2)$ phase space produce the same decay, and it is therefore omitted from the analysis. \begin{figure}[h!t!] \centering \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/probability_2.png} \caption{} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/preal_2.png} \caption{} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\linewidth]{Images/pl_2.png} \caption{} \end{subfigure} \caption{The calculated probability density for a decaying particle of mass $M = 1.9$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.} \label{probs2} \end{figure} \subsection{The accuracy of the classifier} The accuracy of the classifier is defined as the fraction of correctly ,,guessed'' samples on a given dataset. The criterion used for guessing is checking whether the output of the classifier, $C_{NN}$, is greater than 0.5. The accuracy can indirectly indicate how distinguishable two datasets are. In our algorithm, after starting from a test probability density, we approach the real probability density with increasing iteration number, so we can expect that the two jet datasets, the ,,real'' and the ,,test'' dataset, become less and less distinguishable over time. In Figure \ref{acc} we show the accuracy of the classifier in dependence on the iteration number. \begin{figure}[h!t!] \centering \includegraphics[width=13cm]{Images/acc.png} \caption{The calculated accuracy of the classifier in dependence on the iteration number.} \label{acc} \end{figure} After an initially high value, the accuracy decreases with growing iteration number, which demonstrates that the test dataset becomes more and more similar to the real dataset. Ideally, the datasets are no longer distinguishable by a given classifier if the evaluated accuracy reaches 0.5. Therefore, we can use the evaluated accuracy of the classifier as a criterion for stopping the algorithm. Other measures can also be used as the stopping criterion, such as the loss value of the classifier or the area under the receiver operating characteristic (ROC) curve of the classifier. In this work, the algorithm is stopped after the accuracy reaches a value of 0.65, because we did not see any significant decrease in the accuracy once it reached this value. An accuracy value of 0.65 clearly shows that the classifier is capable of further discriminating between the two datasets.
This is explained by the fact that the neural network $f$ and its hyperparameters are not fully optimized. For the algorithm to perform better, we need to optimize the neural network $f$ and possibly improve its architecture for the selected task. \newpage \section{Discussion} In this work we propose a method for calculating the underlying probability distributions in particle decays, using only the data that can be collected in a real-world physical system. First, we developed an artificial physical system based on the QCD fragmentation process. Next, we presented the core part of the method: the $2NN$ algorithm, which we described in detail. The algorithm performs very well when tested on the developed physical system. It accurately predicts most of the hidden resonant particles, as well as their decay channels, which can occur in the evolution of jets. The energy spectra of the particles in the final state can also be accurately reproduced. Although tested only on the developed artificial physical system, we believe that the method is general enough to be applicable to real-world physical systems, such as collisions of high-energy particles, with a few possible modifications. For example, we hope that this method can in the future prove helpful in measuring the fragmentation functions of quarks and gluons. Also, one could employ such a method in the search for supersymmetric particles of unknown masses, or in measuring the branching ratios of known decays. The $2NN$ algorithm does not specify the exact architecture of the neural networks, nor the representation of the data used. Furthermore, the classifier does not need to be a neural network; it can be any machine learning technique which maximizes the likelihood. Although the algorithm has a Generative Adversarial Network (GAN)-like structure, it converges readily and does not show the usual issues associated with GANs, such as mode collapse or vanishing gradients. The downside of the presented algorithm is its high computational requirements. Continuous probability distributions, which we expect to occur in nature, are approximated by discrete probability distributions. In the quest for higher precision and a better description of reality, one always aims to increase the resolution of the discrete steps, but this carries a high computational cost. Also, the neural networks used are not fully optimized, which slows down the convergence of the algorithm. In conclusion, in order to cut down the computational costs, a more thorough analysis of convergence is needed to achieve better performance. In future work we hope to make the method even more general and thus even more applicable to real-world physical systems. In particular, we want to introduce angle-dependent probability distributions, which can be retrieved from detector data. We would also like to investigate the possibility of including other decay modes, such as $1 \rightarrow 3$ type decays. \newpage
\section{Introduction} \label{sec:intro} Automatic speech recognition models (ASRs) are widely used in a variety of applications, such as mobile virtual assistants (Siri, Google Assistant), in-vehicle voice navigation and voice smart home appliances like Alexa and Google Home with built-in voice assistants. Figure~\ref{fig:asr_structure} shows the structure of a typical ASR that takes as input an audio signal and transcribes the speech in the audio to text. Owing to the prevalence of ASRs in our daily lives, their security and integrity pose a great concern. The computational core of ASRs are deep neural networks (DNNs) that have been shown to be susceptible to adversarial perturbations; easily misused by attackers to generate malicious outputs~\cite{2017burger,nichols2017tvexamples,yuancommandersong}. \paragraph{Existing work on ASR adversarial attacks.} Adversarial perturbations\footnote{Also referred to as Adversarial examples or Adversarial attacks.} were first presented by Szegedy et al. to demonstrate the lack of robustness in DNN models -- a small perturbation of an input may lead to a significant perturbation of the output of a DNN model~\cite{szegedy2013intriguing}. This vulnerability can be exploited by adversaries to augment the original input with a crafted perturbation, invisible to a human but sufficient for the DNN model to misclassify this input. This influential work triggered several research contributions in the computer vision domain that generate adversarial attacks for testing security and robustness of vision tasks~\cite{goodfellow2014explaining, kurakin2016adversarial, moosavi2017universal}. Research on the use of adversarial attacks on ASRs is, however, only just emerging, and can be classified along two dimensions, \\ \textbf{1. Un-targeted or Targeted} The aim of un-targeted adversarial audio is to make an ASR model incorrectly transcribe speech while sounding similar to original input, while the aim of targeted adversarial attack is to cause an ASR model to output a specific transcription (target) injected by an adversary. This paper focuses on un-targeted adversarial attack. \\ \textbf{2. Whitebox or Blackbox Threat Model} In a whitebox threat model, the adversary assumes knowledge of the internal structure of the ASR model, while in a blackbox threat model, the adversary can only probe the ASR with input audio and analyze the resulting transcription. We use a blackbox threat model. Most existing methods~\cite{carlini2016,carlini2018,qin2019,Yakurarobust} for ASR adversarial attack generation are \emph{targeted and whitebox}. These methods suffer from one or more of the following drawbacks (1) Whitebox assumption is not practical and lacks portability since commercial ASR application developers do not typically reveal the internal workings of their systems, (2) time taken to generate attacks is considerable and cannot be used in real-time. , and (3) poor quality audio in attacks makes them easily detectable by defense techniques like ~\cite{carlini2018,universal}. Existing few methods~\cite{didyouhear,taori2019targeted} for \emph{blackbox, targeted} attacks suffer from the drawback of intractable number of queries to the ASR, that are time-consuming and impractical. \emph{Blackbox untargeted} attacks that do not require knowledge of the internal NN structure or query access for text output would address the above limitations and the only known technique was proposed by Abdullah et al. in 2020~\cite{abdullah2019hear}. 
To create adversarial audio, they decompose the original audio and remove components with low-amplitude that they believe will not affect audio comprehension. Although interesting, their approach does not strive to ensure the adversarial and original audio sound similar and difference in transcribed texts is not measured. Additionally, the ability of their attacks to bypass defense systems is not effective. \paragraph{Proposed Attack Generation} We propose a blackbox un-targeted attack generation approach that is faster, more portable across ASRs, and robust to a state-of-the-art defense than Abdullah et al. Our approach for attacking ASRs uses a psychoacoustics concept called frequency masking that determines how sounds interfere and mask each other. We manipulate masked (or inaudible) components of the original audio in such a way that their spectral density is different but they remain masked. Such a manipulation ensures the adversarial attack is indistinguishable from the original but has the potential to change the resulting transcription. We propose three attack generation approaches centered around this idea -- \texttt{Griffin Lim Reconstruction (GL),Original Phase (OP)} and \texttt{Deletion (DE)}. Additionally, to help increase similarity to the original audio, we provide the option of selectively introducing perturbations to a small fraction of audio frames rather than all of them. Our approach provides three frame selection options -- \texttt{Random, Important} and \texttt{All}. Among them, the \texttt{Important} option identifies the frames that cause the most change to output text when set to zero and we then introduce perturbations to just these important frames. We evaluate our approach on three different ASRs -- Deepspeech \cite{deepspeech2014}, Sphinx \cite{Sphinx} and Google cloud speech-to-text API, using two different input audio datasets -- Librispeech \cite{librispeech} and Commonvoice \cite{commonvoice:2020}. We assess the effectiveness of our approaches for attack generation and frame selection using the metrics - \texttt{WER, Similarity}, attack \texttt{Success Rate} and \texttt{Detection score}. We also compare our approach with a targeted whitebox state-of-the-art (SOTA) method~\cite{carlini2018} and an untargeted blackbox SOTA method~\cite{abdullah2019hear}. It is worth noting that the scale of our evaluation is much bigger than existing work~\cite{carlini2018,qin2019,abdullah2019hear} as we use different audio datasets and ASRs. We find our approach that uses \texttt{OP} or \texttt{DE} for attack generation combined with \texttt{Important} or \texttt{All} frame selection was effective at attacking all three ASRs. Our techniques were $312\times$ faster than the whitebox targeted SOTA, and $7\times$ faster than blackbox targeted SOTA method. The defense system, Waveguard~\cite{waveguard}, was less effective at detecting attacks generated with our techniques compared with the other two SOTA methods. \noindent In summary, the contributions in this paper are as follows: \begin{enumerate} \item A novel approach for untargeted blackbox adversarial attack generation on ASRs based on frequency masking. \item Frame selection option to selectively perturb frames in an audio. \item Extensive empirical evaluation of the attack generation and frame selection options within our approach on three ASRs and two audio datasets. We also compare performance against SOTA whitebox and blackbox techniques. 
\end{enumerate} \noindent The source code for our approach can be found at: \\ \indent \href{https://anonymous.4open.science/r/lalalala-9DEE}{https://anonymous.4open.science/r/lalalala-9DEE}. \section{Background} We present a brief description of a typical ASR model and the frequency masking concept used in our approach. \label{sec:background} \subsection{Automatic Speech Recognition (ASR)} \label{sec:ASR} \begin{figure*}[htbp] \includegraphics[scale=0.665]{ASR_structure.png} \caption{Pipeline showing Stages in a typical Automatic Speech Recognition (ASR) System. \label{fig:asr_structure} \end{figure*} Structure and workflow within a typical ASR is shown in Figure~\ref{fig:asr_structure}. Most current ASRs comprise the following stages when transcribing an input audio to a text output. \subsubsection{Preprocessing} This step removes high-frequency noise in the audio. A voice activity algorithm is used to detect human voice parts in a given input audio and then passes it through a low-pass filter to remove high-frequency noise that is inaudible to humans. \subsubsection{Signal Processing stage} \label{sec:signal} Output from this stage is audio features that are subsequently used by a deep neural network. In the signal processing stage, the audio signal in the time domain is sampled into frames with a certain sampling rate(like 16000HZ and 8000HZ) and every frame is converted to the frequency domain using Fast Fourier Transform. The result of this step is a complex matrix, where the real part of the matrix is the amplitude information of the frame, and the imaginary part is the phase information. The phase spectrum is discarded, and only the amplitude spectrum is retained. This amplitude spectrum is the expression of the audio in the frequency domain, which details different frequencies and corresponding intensities in the frame. Subsequent steps in the ASR are completed on the basis of the amplitude spectrum. To extract audio features, the amplitude spectrum is passed through Mel filters and Discrete Cosine Transform (DCT). The output is Mel Frequency Cepstral Coefficient (MFCC), which is commonly used in ASRs as features of audios. Detailed description of this step can be found in~\cite{SOK}. \subsubsection{Neural network prediction and output selection stage} The extracted features from the audio are fed into a deep neural network (DNN), such as a Recurrent Neural Network, that then predicts a probability distribution of characters for every time step or audio frame. From the character sequence distributions, an output selection algorithm, such as Beam search, is used to select the most likely translated text as shown in Figure~\ref{fig:asr_structure}. More details on this stage can be found in~\cite{SOK}. It is worth noting that much of the existing work on adversarial attacks against ASRs are aimed at the DNN stage (prediction stage) and typically use gradient-based optimization to minimize the difference between the target and output text~\cite{carlini2018,qin2019,Cal_masking}. In contrast, our approach for generating adversarial attacks does not rely on a target output text or query outputs from the ASR. We, instead, make changes to the original audio signal based on frequency masking of its components that is described in the next Section. 
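Before turning to frequency masking, the short sketch below illustrates the signal-processing stage described above in code, using the \texttt{librosa} library purely for illustration; the sampling rate, frame parameters and number of coefficients are typical values and are not tied to any particular ASR.
\begin{verbatim}
import numpy as np
import librosa

sr = 16000                                      # 16 kHz sampling rate
audio = np.random.randn(sr).astype(np.float32)  # 1 s of dummy audio

# Framing + FFT: the amplitude spectrum is kept, the phase is discarded.
stft = librosa.stft(audio, n_fft=512, hop_length=160)
amplitude = np.abs(stft)

# Mel filtering + DCT yield the MFCC features fed to the acoustic model.
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13,
                            n_fft=512, hop_length=160)
print(amplitude.shape, mfcc.shape)   # (257, frames) and (13, frames)
\end{verbatim}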
\subsection{Frequency Masking and Masking Threshold Computation} \label{sec:masking} \begin{figure}[htbp] \includegraphics[width=0.5\textwidth, height=150pt]{origin_masking.png} \caption{Frequency masking phenomenon: the masker creates a \emph{masking threshold} in the nearby frequency domain such that other sounds below this threshold cannot be heard. \label{fig: masking_threshold} \end{figure} Frequency masking is a psychoacoustic phenomenon that occurs when the perception of a sound is affected and masked by the presence of another sound, distracting the ear from being able to clearly perceive the simultaneous sounds~\cite{Cal_masking}. For example, on a quiet night, consider that the sound of chirping crickets is audible but in the presence of the TV sound, we stop hearing the crickets chirping as the TV sound masks it. In Figure~\ref{fig: masking_threshold}, the TV sound would be the \emph{masker} (seen as a red bar) that creates a masking threshold~\cite{Cal_masking} which is the minimum level at which other sounds in the same frequency frame can be heard. The chirping sound of the crickets falls below the masking threshold (seen as a blue bar) and therefore is not audible in the presence of the TV. The chirping sound in Figure~\ref{fig: masking_threshold} would be the \emph{maskee}. \paragraph{Masking Threshold Computation} To calculate the masking threshold for a given audio, we need to first convert the audio from the expression in the time domain to the frequency domain (using FFT in Section~\ref{sec:signal}), then discard the phase information in the spectrum. We then use the amplitude information of the spectrum to calculate the log-magnitude power spectral density (PSD) of this audio. The PSD characterizes the energy distribution on a unit frequency, and is used widely to describe the frequency domain results of the signal~\cite{masking_threshold,Cal_masking}. The red and blue bars in Figure~\ref{fig: masking_threshold} represent the PSD (in dB) of maskers and maskees, respectively, for the given frequency bin. According to~\cite{Cal_masking,qin2019}, maskers are identified from the audio PSD using two conditions: the PSD of a masker should be greater than the absolute threshold of hearing (ATH), and it must be the highest PSD estimate within a certain surrounding frequency range. After identifying the maskers, their respective masking thresholds will be computed using a two-slope function, described in~\cite{masking_threshold}. If there are several maskers and associated masking thresholds, they will be combined into a global masking threshold for the audio like in~\cite{qin2019}. Once the maskers are identified, the other PSDs in the audio are labelled maskees. A more detailed description of the computation of masker, maskee and masking threshold can be found in~\cite{masking_threshold,qin2019}. We use this masking phenomenon observed with simultaneous sounds to create adversarial audio that sounds similar to the original audio but has the potential to produce a different transcription. We achieve this by first taking the original audio that is composed of many sounds, identifying the maskers and maskees in it using the approach from~\cite{masking_threshold,qin2019} (red and blue bars in Figure~\ref{fig: masking_threshold}). We then manipulate the PSD of the maskees so it stays below the masking threshold, ensuring they are not audible, like in the original audio. Nevertheless, this manipulation can still affect the transcribed text. 
We create the adversarial audio by composing together the unchanged maskers and manipulated maskees. In terms of our earlier example with the TV sound and crickets chirping, we identify the TV sound as the masker and the chirping crickets as the maskee. We then manipulate the PSD of the cricket sound, staying within the masking threshold, to produce an adversarial audio that composes the TV sound with the manipulated chirping sound. Section~\ref{sec:methodology} describes our approach and the techniques used for manipulation in detail. \iffalse First of all, the loudness in the louder sound mentioned above is not the same as the amplitude of the sound in the frequency domain\ajitha{This sentence is not well written}. The loudness here refers to log-magnitude power spectral density (PSD), which can be seen for details in \cite{Lin2015}. In fact, PSD is a new expression of signal in the frequency domain calculated from the amplitude information on the frequency spectrum. It retains the amplitude information of the spectrum, but loses the phase information. Therefore, Therefore, our purpose is to add noise with PSD less than or equal to the masking threshold to the audio. Please see section~\ref{sec:GL} for specific methods. \fi \begin{comment} The frequency masking refers to a phenomenon in which the first sound cannot be heard by the human ear due to the appearance of the second sound. Specifically, when there is no other sound, a slight sound can be received by the human ear; but when the second sound appears, the first sound is masked by this sound. The sound that masks other sounds is called the masker, and the second sound is the masked sound\cite{Cal_masking}. As shown in figure \ref{fig:masking_threshold},due to the appearance of masker $S_0$, $S_1$ and $S_2$ that could be heard in a quiet state are all below the masking threshold created by the masker and cannot be heard. Each masker actually creates a masking threshold curve for the entire frequency domain of this frame. When the masking thresholds of multiple maskers within one frame and the auditory thresholds in the quiet state are superimposed on each other, the global masking threshold on this frame is formed. \textbf{As long as the noise we construct can fall below or equal to the masking threshold, theoretically the new audio will sound no different from the original audio.} \begin{figure}[htbp] \centering \includegraphics[scale=0.7]{masking_threshold.jpg} \caption{Frequency masking\cite{masking_threshold}} \label{fig:masking_threshold} \end{figure} \iffalse \subsection{Calculation of masking threshold} The masking threshold is calculated in the frequency domain, so we need to change each frame of audio from the time domain to the frequency domain through a Short-Time Fourier Transform. This step is exactly the same as the first step (Fast Fourier Transform) of the signal processing stage of ASR mentioned earlier. Then we calculate the log-magnitude power spectral density (PSD) of each frequency bin on each frame. Then we use the method in \cite{Cal_masking} to calculate the masking threshold of this frame. When we construct the noise, as long as the normalized PSD estimate of the noise is less than or equal to the masking threshold, the noise can be masked by the original audio so that it will not be noticed. This is actually the source of inspiration for our noise construction method, that is, we can reversely construct the maximum noise that can not be received by the human ear by using the masking threshold. 
\fi \end{comment} \subsection{Griffin-Lim Algorithm} \label{sec:GLbackground} To construct an adversarial audio from the maskers and manipulated maskees in the amplitude spectrum, we use the Griffin-Lim (GL) algorithm that helps reconstruct audio waveforms with a known amplitude spectrum but an unknown phase spectrum\cite{GL1984}. Steps in the algorithm are as follows: (1) Randomly initialize a phase spectrum, (2) Use this phase spectrum and the known amplitude spectrum to synthesize a new waveform through Inverse Short-Time Fourier Transform (3) Use the synthesized speech to get new amplitude spectrum and new phase spectrum through Short-time Fourier Transform, (4) Discard the new amplitude spectrum, (5) Repeat steps 2, 3, 4 for a fixed number of iterations. Output is a waveform with an estimated phase spectrum and the known input amplitude spectrum. \section{Methodology} \label{sec:methodology} \begin{figure}[htbp] \includegraphics[width=0.75\textwidth]{structure.png} \caption{Our approach for generating adversarial attacks comprises of three stages, 1. Frame Selection, 2. Attack generation and finally 3. Adversarial audio formed by combining information in the first two stages. \label{fig: framework} \end{figure} In this section, we propose techniques for generating adversarial attacks for ASRs. As seen in Figure~\ref{fig: framework}, our methodology has two important stages, 1. Audio Frame Selection and 2. Attack Generation. The general workflow in our approach is as follows: Given an input audio example, we first select frames within it using one of the three techniques for audio frame selection -- \texttt{Random, Important} and \texttt{All}. Independently, we generate manipulated audio from the input audio using one of three attack techniques -- \texttt{GL Reconstruction (GL), Original Phase (OP), Deletion (DE)}. We then replace the selected frames in the original audio with corresponding manipulated audio frames while keeping the rest of the audio unchanged. The combination of original and manipulated audio frames forms the adversarial attack audio. \paragraph{Threat Model and Assumptions} The attack techniques in our approach assume a black-box threat model, in which an adversary has no knowledge of the internal workings or architecture of the target ASR model. We treat the ASR as a black-box to which we make requests in the form of input audio and receive responses in the form of transcriptions in text format. We also assume that an adversary can only make a limited number of requests to the target ASR. We also accommodate the scenario when the adversary cannot make any requests to the target ASR. Finally, we assume an over the line attack. This means that digital files are sent directly to the target ASR system for transcription, as opposed to playing back audio files over the air through speakers. \iffalse As shown in this figure, we first send the original audio into the Importance Frames Selection component to get a list of the important frames in this audio. Then we save this list, and send the original audio to the attack generation component, and use three methods(including white noise method, GL reconstruction method, and original phase reconstruction method) on the original audio. After this part, three corresponding adversarial noises are generated. Finally,combine the important frames in the noise which are recorded in the previously generated important frames list with the original audio, and get the last adversarial attack that is more similar to the original audio. 
\subsection{Stage 1: Frame Selection} As mentioned in Section~\ref{sec:signal}, the audio signal input to an ASR is sampled into frames in the signal processing stage. We explore the generation of adversarial audio by modifying a subset of frames in the entire audio. We provide three approaches to select the audio frames that will later be manipulated -- \texttt{Random, Important} and \texttt{All}. We start by describing the technique to select \texttt{Important} frames. \subsubsection{Important:} The rationale for selecting important frames is to restrict manipulation to a small number of significant frames. This allows the adversarial audio to remain similar to the original while still affecting the output transcription text. We define the importance of frames based on the \texttt{WER} produced by masking that frame in the original audio. The steps involved in selecting important frames are as follows (a code sketch of this procedure appears at the end of this subsection), \begin{enumerate} \item For every input audio example, record the output transcription from the ASR. \item Pick one of the input audio examples. For every frame in the processed audio example, set it to zero (masked) while keeping the remaining frames unchanged. Record the transcription produced by the ASR for the masked audio. \item Compute the \texttt{WER} between the masked and original transcriptions. Repeat this for all frames. The frames that result in a non-zero \texttt{WER} are identified as important frames for that audio example. The magnitude of the \texttt{WER} change used for frame selection can be adjusted to suit needs. \item Repeat Steps 2 and 3 for the remaining input audio examples. \end{enumerate} At the end of this process, every input audio example is associated with a list of important frames. \subsubsection{Random:} To enable us to compare the effectiveness of only using important frames in frame selection, we also provide a means to select frames randomly. The number of frames selected for a given audio example is set to be the same as the number of important frames in that audio. \subsubsection{All:} We simply use \emph{all} the frames from the manipulated audio generated in Stage 2 (see Section~\ref{sec:stage2}). Using \texttt{All} frames helps us assess how much \texttt{WER} is achievable. In addition, it helps quantify the trade-off between \texttt{WER} and \texttt{Similarity} when compared to frame selection with \texttt{Important} and \texttt{Random}. \begin{figure}[htbp] \includegraphics[width=0.5\textwidth, height=180pt]{method.png} \caption{Attack generation methods GL and OP increase the PSD of maskees to the masking threshold, while attack generation with DE suppresses the PSD of maskees to zero.} \label{fig: method_3} \end{figure}
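To make the \texttt{Important} frame selection procedure concrete, the sketch below implements the per-frame masking loop listed above. It is a simplified illustration rather than our exact implementation: the \texttt{transcribe} argument stands for a black-box query to the target ASR, and \texttt{jiwer} is one possible choice of \texttt{WER} implementation.

\begin{verbatim}
import numpy as np
import jiwer  # one possible WER implementation

def important_frames(audio, transcribe, frame_len=2048, hop=512):
    """Indices of frames whose masking (zeroing) changes the ASR output.
    `audio` is a NumPy waveform; `transcribe` maps a waveform to text."""
    original = transcribe(audio)
    n_frames = 1 + max(0, len(audio) - frame_len) // hop
    important = []
    for i in range(n_frames):
        masked = audio.copy()
        masked[i * hop:i * hop + frame_len] = 0.0   # mask frame i only
        if jiwer.wer(original, transcribe(masked)) > 0.0:
            important.append(i)
    return important
\end{verbatim}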
\subsection{Stage 2: Attack Generation} \label{sec:stage2} We discuss three attack generation techniques -- \texttt{GL}, \texttt{OP} and \texttt{DE} -- that manipulate the amplitude spectrum of the input audio example using the concept of frequency masking, described in Section~\ref{sec:masking}. We illustrate the manipulations in Figure~\ref{fig: method_3} and describe them in the sections below. All three techniques take the input audio and generate audio frames in the frequency domain (obtained with sampling and the fast Fourier transform), with each frame having amplitude and phase information. For each frame, we compute the masking threshold, maskers and maskees using established techniques discussed in Section~\ref{sec:masking}. \subsubsection{GL Reconstruction (GL)} \label{sec:GL} As seen in the top part of Figure~\ref{fig: method_3}, \texttt{GL} (and \texttt{OP}) increases the PSD of all maskees (blue bars in the original audio) to the global masking threshold. Masker PSDs remain unchanged. We then compute an updated amplitude from the masker and altered maskee PSDs by inverting the PSD computation~\cite{masking_threshold}\footnote{$Amplitude(k)=N\sqrt{10^{\frac{PSD(k)}{10}}}$, where $k$ is the index of the frequency bin and $N$ is the frame length.}. \texttt{GL} discards the phase information of the input audio waveform. Instead, it estimates phase information using the GL reconstruction technique discussed in Section~\ref{sec:GLbackground}. The estimated phase information is combined with the updated amplitude information and is used to synthesize the attack audio through the inverse FFT.
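A minimal sketch of this manipulation is given below. It assumes that the global masking threshold and a boolean matrix marking masker bins have already been obtained from the psychoacoustic model of~\cite{masking_threshold}, uses the amplitude inversion from the footnote above, and relies on \texttt{librosa}'s Griffin-Lim implementation for the phase estimate; PSD normalization offsets are omitted for brevity.

\begin{verbatim}
import numpy as np
import librosa

def gl_attack(psd, maskers, threshold, n_fft=2048, hop=512, n_iter=32):
    """GL attack: raise maskee PSDs to the masking threshold, invert the
    PSD to an amplitude spectrum, and estimate phase with Griffin-Lim.
    psd, threshold: (bins x frames) matrices; maskers: boolean matrix
    that is True at masker bins (these are left unchanged)."""
    adv_psd = np.where(maskers, psd, threshold)             # lift maskees
    amplitude = n_fft * np.sqrt(10.0 ** (adv_psd / 10.0))   # invert PSD
    return librosa.griffinlim(amplitude, n_iter=n_iter,
                              hop_length=hop, n_fft=n_fft)
\end{verbatim}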
\subsubsection{Original Phase (OP)} The primary difference between the \texttt{OP} and \texttt{GL} techniques is in the phase information. Estimating phase using the GL algorithm introduces distortion and a lack of consistency across multiple runs. To avoid this problem, the \texttt{OP} technique retains the phase information from the original audio. We believe that using phase information from the original audio to synthesize the attack audio makes it more similar to the original audio. \subsubsection{Deletion (DE)} The previous methods, \texttt{OP} and \texttt{GL}, ensure the attack audio sounds no different from the original input by increasing the PSD of the maskees up to the maximum limit (the masking threshold) at which they remain masked. The \texttt{DE} technique, on the other hand, suppresses the PSD of the maskees to the minimum value of zero, which is akin to deleting them. This manipulation does not affect audio perception, as the masking threshold is unaffected. The \texttt{DE} technique thus deletes all maskee PSDs that are hidden under the masking threshold. Subsequently, we use the modified amplitude after deletion and combine it with the \emph{original phase} information from the input audio (similar to \texttt{OP}'s use of phase). We use the inverse FFT as before to synthesize the attack audio from the amplitude and phase information.
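Under the same assumptions as the \texttt{GL} sketch above (masker identification comes from the psychoacoustic model of~\cite{masking_threshold}), the \texttt{DE} manipulation can be sketched as follows; note that, as in \texttt{OP}, the original phase is reused.

\begin{verbatim}
import numpy as np
import librosa

def de_attack(audio, maskers, n_fft=2048, hop=512):
    """DE attack: zero out every maskee bin while keeping masker bins
    and the original phase, then resynthesize with the inverse STFT.
    maskers: boolean (bins x frames) matrix, True at masker bins."""
    stft = librosa.stft(audio, n_fft=n_fft, hop_length=hop)
    magnitude, phase = np.abs(stft), np.angle(stft)
    magnitude = np.where(maskers, magnitude, 0.0)    # delete maskees
    adv = magnitude * np.exp(1j * phase)             # original phase, as in OP
    return librosa.istft(adv, hop_length=hop)
\end{verbatim}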
\subsection{Stage 3: Combining Original and Attack Audio} In this final stage, we create an adversarial attack by taking the original audio and replacing the selected frames (identified in Stage 1) with the corresponding frames from the attack audio (generated in Stage 2). Other frames from the original audio are left unchanged. This modified version of the original serves as an adversarial attack. The source code for our adversarial attack generation approach, with the three attack generation and three frame selection methods, can be found at \href{https://anonymous.4open.science/r/lalalala-9DEE}{https://anonymous.4open.science/r/lalalala-9DEE}. \section{Experiments} We evaluate the effectiveness of our techniques, described in Section~\ref{sec:methodology}, using two different datasets -- (1) 1000 audio samples from Librispeech~\cite{librispeech} and (2) 200 audio samples from Commonvoice~\cite{commonvoice:2020}. We use three ASRs in our evaluation, namely, Deepspeech~\cite{deepspeech2014}, Sphinx~\cite{Sphinx}, and Google ASR. Our choice of datasets and ASRs was inspired by their use in related work on adversarial ASR attack generation~\cite{abdullah2019hear}\cite{carlini2018}\cite{qin2019}\cite{Zhang2017DolphinAttack}. We discuss the defense system used to assess the effectiveness of the adversarial attacks, the evaluation metrics, and the research questions in our experiments in the rest of this section. \subsection{Detection and defense} \label{sec:defense} The ability to evade defense systems is an important measure of effectiveness for adversarial attacks. Defense systems have evolved to detect and defend against a significant fraction of adversarial attacks. In our experiments, we use a SOTA adversarial audio detection and defense system, Waveguard~\cite{waveguard}, proposed by Hussain et al. in 2021. We chose Waveguard as our defense system as it has been demonstrated to be faster, more effective, and capable of detecting both targeted and untargeted attacks compared to existing detection techniques, like the Temporal Dependency detection method~\cite{Temporal}. We report how well Waveguard performed (as an AUC score) in detecting adversarial attacks in our experiments. Attack detection within Waveguard is divided into two steps. The first step is to transform the input audio using one of several functions that are meant to preserve (or closely preserve) the transcription text. For example, a transformation may start by down-sampling the input audio, followed by up-sampling to the original sampling rate using interpolation. The second step is to compare the Character Error Rate (CER) between the transcription text for the original and transformed audio. If the difference between the texts is greater than a predefined threshold, then the input audio is classified as adversarial, and benign otherwise. \subsection{Evaluation Metrics} \label{sec:metrics} We use four metrics to measure the effectiveness of our techniques -- \texttt{Word Error Rate (WER), Similarity, Success Rate} and \texttt{Detection score}. We are interested in generating adversarial attacks that sound similar to the original audio (high \texttt{Similarity}) but produce a transcription different from the original (high \texttt{WER}).
Additionally, we would like the technique to be portable, i.e., to generate adversarial attacks that are usable across several ASRs (high \texttt{Success Rate}). Finally, we want the generated attacks to be robust enough to get past SOTA defense systems, like Waveguard~\cite{waveguard} (lower \texttt{Detection score}). We provide definitions of each of these metrics below. \paragraph{\texttt{WER}} is a common metric to evaluate the difference between the ASR transcriptions of the original and adversarial audio~\cite{WER1}~\cite{WER2}. \texttt{WER} is computed using Equation~\eqref{WER}, \begin{equation} \text{WER}=\frac{\text{Insertions}+\text{Substitutions}+\text{Deletions}}{\text{Total Words in Correct Transcript}} \label{WER} \end{equation} \paragraph{Similarity} To assess the similarity of the adversarial audio to the original, we use the widely adopted \texttt{PESQ} metric~\cite{PESQ}, which measures the quality of an audio signal relative to a reference. The PESQ algorithm accepts a noisy signal, which in our case is the adversarial attack, and an original reference signal, which is the input audio for our method. The PESQ score ranges from -0.5 to 4.5; the higher the score, the better the voice quality. According to~\cite{PESQover3}, audio quality is deemed ``good'' when its \texttt{PESQ} score is above 3.0, and we use this standard for classifying the quality of the adversarial audio. In this paper, we use the \texttt{Similarity} metric to mean the \texttt{PESQ} score. \paragraph{\texttt{Success Rate}}, shown in Equation~\eqref{success}, refers to the ratio of adversarial attacks that successfully attack a given ASR. A successful attack, as defined by Abdullah et al.~\cite{abdullah2019hear}, happens when the adversarial attack results in a non-zero WER with respect to the original transcription. \begin{equation} \text{Success Rate}=\frac{\text{Number of successful attacks}}{\text{Total number of adversarial attacks}} \label{success} \end{equation} \paragraph{\texttt{Detection score}} refers to the effectiveness of the Waveguard defense system in correctly classifying adversarial attacks. We use the area under the curve (AUC) metric, reported by Waveguard~\cite{waveguard}, to evaluate the correct classification of adversarial attacks. The AUC score ranges from 0.0 to 1.0. We aim for a lower Waveguard AUC score, or \texttt{Detection score}, with our techniques.
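Both of the per-example metrics can be computed with openly available implementations; the sketch below is one possible realization using the \texttt{jiwer} and \texttt{pesq} Python packages, shown for illustration rather than as the exact tooling behind our reported numbers.

\begin{verbatim}
import jiwer                 # word error rate
from pesq import pesq        # ITU-T P.862 PESQ implementation

def evaluate_attack(reference_text, attack_text, original, adversarial,
                    sample_rate=16000):
    """WER between the reference and attack transcriptions, and PESQ
    similarity between the original and adversarial waveforms."""
    wer = jiwer.wer(reference_text, attack_text)
    similarity = pesq(sample_rate, original, adversarial, 'wb')  # wide-band
    return wer, similarity
\end{verbatim}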
\subsection{Research Questions} \label{sec:RQ} We aim to answer the following research questions (RQs) in our experiments. \\ \noindent\textbf{RQ1:} Which frame selection method among \texttt{Random, Important, All} performs best? \\ We compare the \texttt{WER} and \texttt{Similarity} achieved by the different frame selection techniques across three different ASRs and two input audio datasets. Answering this research question will help us assess the value of selecting a subset of frames versus changing the whole audio. \noindent\textbf{RQ2:} Which attack generation technique among \texttt{GL, OP, DE} performs best?\\ We compare the \texttt{WER} and \texttt{Similarity} achieved by the different attack generation techniques across three different ASRs and two different input datasets. We also measure the \texttt{Time} taken by each technique. \noindent\textbf{RQ3:} Are the adversarial attacks portable across ASRs?\\ One of the primary selling points of our techniques is that they are blackbox and untargeted, and therefore agnostic to the structure and workings within ASRs. We validate this by evaluating the \texttt{Success Rate} of the generated adversarial attacks across three different ASRs. \noindent\textbf{RQ4:} Does our technique perform better than SOTA techniques? \\ We selected representative and high-performing SOTA techniques in our comparison, namely a whitebox targeted technique proposed by Carlini et al.~\cite{carlini2018} and a blackbox technique by Abdullah et al.~\cite{abdullah2019hear}. Carlini et al. generate adversarial attacks using the Deepspeech ASR and the Commonvoice input dataset. To allow comparison, we use the same ASR and input dataset with our techniques. Owing to the targeted nature of their technique, they require the transcription text to be specified in advance. To address this need, we use the transcription from the Deepspeech ASR on adversarial attacks generated by our technique as Carlini et al.'s target. We then compare our technique with Carlini et al.~with respect to the time taken to generate adversarial attacks, \texttt{Similarity} to the original audio, \texttt{Success Rate} on the other ASRs, Google and Sphinx, and \texttt{Detection score}. Since the transcription text in both techniques is the same, it is not useful to compare \texttt{WER}. We compare our technique against Abdullah et al. using \texttt{WER, Similarity, Success Rate, Detection Score, Time} over different ASRs and both the Commonvoice and Librispeech datasets. \paragraph{Experiment settings} We use Google Colab Pro with two NVIDIA Tesla T4 GPUs (16\,GB RAM, 2560 cores) to run our experiments. We use the following audio parameters in our experiments: a sampling rate of $16000$\,Hz, a frame length of $2048$ samples, and a frame shift of $512$ samples. \section{Results and Analysis} \label{sec:results} We present and discuss the results from our experiments in the context of the research questions presented earlier. It is worth noting that \texttt{WER} and \texttt{Similarity} are measured for each attack, while \texttt{Success rate} and \texttt{Detection score} are measured across an entire dataset.
Techniques should try to maximise \texttt{WER, Similarity} and \texttt{Success rate} while minimising \texttt{Detection score} by Waveguard. \iffalse \begin{figure}[htbp] \includegraphics[scale=0.45]{wer_librispeech.png} \caption{Box plots of the \texttt{WER} of the adversarial attacks generated with the Librispeech dataset.} \label{fig: librispeech_wer_simi} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.45]{wer_commonvoice.png} \caption{Box plots of the \texttt{WER} of the adversarial attacks generated with the Commonvoice dataset.} \label{fig: commonvoice_wer_simi} \end{figure} \fi \begin{table*}[htbp] \centering \setlength{\tabcolsep}{3mm}{ \begin{tabular}{cc} Librispeech & Commonvoice \\ \includegraphics[scale=0.45]{wer_librispeech.png} & \includegraphics[scale=0.45]{wer_commonvoice.png} \\ \end{tabular}} \caption{Box plots of the \texttt{WER} of the adversarial attacks generated with two different datasets.} \label{table: wer} \end{table*} \begin{figure}[htbp] \includegraphics[scale=0.60]{similarity.png} \caption{Box plots of the \texttt{Similarity} of the adversarial attacks generated with all datasets.} \label{fig: Similarity} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.60]{optimal_im.png} \caption{Pareto front over adversarial attacks generated by \texttt{Random}, \texttt{Important} and \texttt{All} frame selection techniques on Commonvoice dataset and Deepspeech ASR using \texttt{OP}.} \label{fig: optimal_frames} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.60]{optimal.png} \caption{Pareto front over adversarial attacks generated by \texttt{GL}, \texttt{OP} and \texttt{DE} on Commonvoice dataset and Deepspeech ASR using \texttt{Important} frames.} \label{fig: optimal} \end{figure} \iffalse \begin{figure*}[htbp] \centering \subfloat{\includegraphics[width=0.45\textwidth, height=150pt]{optimal.png}}\hfill \subfloat{\includegraphics[width=0.45\textwidth, height=150pt]{optimal_im.png}}\\ \caption{An array of figures} \label{figs} \end{figure*} \fi \iffalse \begin{table*}[htbp] \centering \setlength{\tabcolsep}{3mm}{ \begin{tabular}{|c| >{\columncolor[HTML]{FFCCC9}}c >{\columncolor[HTML]{FFCCC9}}c >{\columncolor[HTML]{FFCCC9}}c | >{\columncolor[HTML]{FFFC9E}}c >{\columncolor[HTML]{FFFC9E}}c >{\columncolor[HTML]{FFFC9E}}c |} \hline & \multicolumn{3}{c|}{\cellcolor[HTML]{FFCCC9}Librispeech} & \multicolumn{3}{c|}{\cellcolor[HTML]{FFFC9E}Commonvoice} \\ \cline{2-7} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}Deepspeech} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}Sphinx} & Google & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}Deepspeech} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}Sphinx} & Google \\ \hline All vs Random & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & 0.19 & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.033} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.014} & 0.27 \\ \hline All vs Important & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & 0.15 & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.044} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.023} & 0.9 \\ \hline Important vs Random & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.032} & 0.85 & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.037} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.044} & 0.14 \\ \hline \end{tabular}} \caption{ P-values using One way Anova and Tukey’s HSD for pairwise comparison 
of \texttt{WER} achieved by frame selection methods (using \texttt{OP} attack generation).} \label{table: P-value_WER_Frames} \end{table*} \begin{table*}[htbp] \centering \setlength{\tabcolsep}{3.5mm}{ \begin{tabular}{|c| >{\columncolor[HTML]{FFCCC9}}c >{\columncolor[HTML]{FFCCC9}}c >{\columncolor[HTML]{FFCCC9}}c | >{\columncolor[HTML]{FFFC9E}}c >{\columncolor[HTML]{FFFC9E}}c >{\columncolor[HTML]{FFFC9E}}c |} \hline & \multicolumn{3}{c|}{\cellcolor[HTML]{FFCCC9}Librispeech} & \multicolumn{3}{c|}{\cellcolor[HTML]{FFFC9E}Commonvoice} \\ \cline{2-7} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}Deepspeech} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}Sphinx} & Google & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}Deepspeech} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}Sphinx} & Google \\ \hline GL vs OP & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & 0.001 & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.001} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.001} & 0.189 \\ \hline GL vs DE & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.001} & 0.001 & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.002} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.001} & 0.05 \\ \hline OP vs DE & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.04} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}0.03} & 0.60 & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.9} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}0.58} & 0.818 \\ \hline \end{tabular}} \caption{P-values using One way Anova and Tukey’s HSD for pairwise comparison of \texttt{WER} achieved by attack generation methods (using \texttt{All} frames). } \label{table: P-value_WER_Techs} \end{table*} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|} \hline {\color[HTML]{000000} } & {\color[HTML]{000000} Librispeech} & {\color[HTML]{000000} Commonvoice} \\ \hline {\color[HTML]{000000} Random VS All} & {\color[HTML]{000000} 0.001} & {\color[HTML]{000000} 0.001} \\ \hline {\color[HTML]{000000} Important VS All} & {\color[HTML]{000000} 0.001} & {\color[HTML]{000000} 0.001} \\ \hline {\color[HTML]{000000} Random VS Important} & {\color[HTML]{000000} 0.041} & {\color[HTML]{000000} 0.51} \\ \hline \end{tabular} \caption{P-values using One way Anova and Tukey’s HSD for pairwise comparison of \texttt{Similarity} achieved by frame selection methods (using \texttt{OP} attack generation).} \label{table: P-value_Simi_Frames} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|} \hline {\color[HTML]{000000} } & {\color[HTML]{000000} Librispeech} & {\color[HTML]{000000} Commonvoice} \\ \hline {\color[HTML]{000000} OP VS GL} & {\color[HTML]{000000} 0.001} & {\color[HTML]{000000} 0.001} \\ \hline {\color[HTML]{000000} DE VS GL} & {\color[HTML]{000000} 0.001} & {\color[HTML]{000000} 0.001} \\ \hline {\color[HTML]{000000} OP VS DE} & {\color[HTML]{000000} 0.06} & {\color[HTML]{000000} 0.56} \\ \hline \end{tabular} \caption{P-values using One way Anova and Tukey’s HSD for pairwise comparison of \texttt{Similarity} achieved by attack generation methods (using \texttt{All} frames).} \label{table: P-value_Simi_Techs} \end{table} \fi \begin{table*}[htbp] \centering \setlength{\tabcolsep}{4.5mm}{ \begin{tabular}{|c| >{\columncolor[HTML]{FFCCC9}}c >{\columncolor[HTML]{FFCCC9}}c >{\columncolor[HTML]{FFCCC9}}c | >{\columncolor[HTML]{FFFC9E}}c >{\columncolor[HTML]{FFFC9E}}c >{\columncolor[HTML]{FFFC9E}}c |} \hline \multicolumn{1}{|l|}{} & 
\multicolumn{3}{c|}{\cellcolor[HTML]{FFCCC9}Librispeech} & \multicolumn{3}{c|}{\cellcolor[HTML]{FFFC9E}Commonvoice} \\ \cline{2-7} \multicolumn{1}{|l|}{\multirow{2}{*}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} GL}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} OP}} & {\color[HTML]{000000} DE} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}GL} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}OP} & DE \\ \hline {\color[HTML]{000000} Deepspeech} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} 96\%}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} 95\%}} & {\color[HTML]{000000} 91\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}95\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}90\%} & 90\% \\ \hline {\color[HTML]{000000} Sphinx} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} 99\%}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} 96.5\%}} & {\color[HTML]{000000} 94\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}98\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}89\%} & 90\% \\ \hline {\color[HTML]{000000} Google} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} 99\%}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}{\color[HTML]{000000} 97.5\%}} & {\color[HTML]{000000} 95.5\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}85\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}80\%} & 80\% \\ \hline Average & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}98\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}96.3\%} & 93.5\% & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}92\%} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFC9E}86.3\%} & 86.6\% \\ \hline \end{tabular}} \caption{The \texttt{Success Rate}s of the adversarial attacks with \texttt{GL,OP,DE} attack generation methods across the three ASRs and two datasets.\texttt{All} frames is used as the frame selection method.} \label{table: success_rate} \end{table*} \iffalse \begin{table*}[htbp] \centering \setlength{\tabcolsep}{1mm}{ \begin{tabular}{|c|c|c|cc|c|} \hline {Technique} & {Time} & {Similarity} & \multicolumn{2}{c|}{Success rate} & {Detection score} \\ \cline{4-5} & & & \multicolumn{1}{c|}{Google} & Sphinx & \\ \hline \textbf{Carlini~\cite{carlini2018}} & 780 seconds & 3.63 & \multicolumn{1}{c|}{33\%} & 77\% & 0.33 \\ \hline \textbf{OP+Important} & 155 seconds & 3.93 & \multicolumn{1}{c|}{75\%} & 78\% & {\color[HTML]{FE0000}0.48} \\ \hline \textbf{OP+All} & 3.5 seconds & 3.22 & \multicolumn{1}{c|}{{\color[HTML]{FE0000}80\%}} & 89\% & 0.47 \\ \hline \textbf{DE+Important} & 154 seconds & {\color[HTML]{FE0000} 4.29} & \multicolumn{1}{c|}{74\%} & 77\% & 0.45 \\ \hline \textbf{DE+All} & {\color[HTML]{FE0000}2.5 seconds} & 3.13 & \multicolumn{1}{c|}{{\color[HTML]{FE0000}80\%}} & {\color[HTML]{FE0000}90\%} & 0.44 \\ \hline \end{tabular}} \caption{Comparison of \texttt{OP+All, OP+Important, DE+All, DE+Important} with Carlini et al.~\cite{carlini2018} with respect to generation time per adversarial example, \texttt{Similarity} to original examples, \texttt{Success Rate} in attacking an ASR and \texttt{Detection score} from Waveguard.} \label{comparation} \end{table*} \fi \begin{table*}[htbp] \centering \setlength{\tabcolsep}{1mm}{ \begin{tabular}{|c|c|c|ccl|ccl|c|} \hline {Technique} & {Time} & {Similarity} & \multicolumn{3}{c|}{Success rate} & \multicolumn{3}{c|}{WER} & {Detection score} \\ \cline{4-9} & & & \multicolumn{1}{c|}{Deepspeech} & \multicolumn{1}{c|}{Sphinx} & Google & 
\multicolumn{1}{c|}{Deepspeech} & \multicolumn{1}{c|}{Sphinx} & Google & \\ \hline \textbf{Carlini~\cite{carlini2018}} & 780 seconds & 3.63 & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{77\%} & 33\% & \multicolumn{1}{c|}{N/A } & \multicolumn{1}{c|}{N/A} & {N/A} & 0.67 \\ \hline \textbf{Abdullah~\cite{abdullah2019hear}} & 18 seconds & 3.12 & \multicolumn{1}{c|}{80\%} & \multicolumn{1}{c|}{77\%} & 54\% & \multicolumn{1}{c|}{0.39} & \multicolumn{1}{c|}{0.44} & 0.14 & 0.65 \\ \hline \textbf{OP+Important} & 155 seconds & 3.93 & \multicolumn{1}{c|}{86\%} & \multicolumn{1}{c|}{78\%} & 75\% & \multicolumn{1}{c|}{0.41} & \multicolumn{1}{c|}{0.41} & 0.39 & {\color[HTML]{FE0000}0.52} \\ \hline \textbf{OP+All} & 3.5 seconds & 3.22 & \multicolumn{1}{c|}{{\color[HTML]{FE0000}90\%}} & \multicolumn{1}{c|}{89\%} & {\color[HTML]{FE0000}80\%} & \multicolumn{1}{c|}{{\color[HTML]{FE0000}0.44}} & \multicolumn{1}{c|}{0.47} & {\color[HTML]{FE0000}0.40} & 0.53 \\ \hline \textbf{DE+Important} & 154 seconds & {\color[HTML]{FE0000}4.29} & \multicolumn{1}{c|}{84\%} & \multicolumn{1}{c|}{77\%} & 74\% & \multicolumn{1}{c|}{0.39} & \multicolumn{1}{c|}{0.40} & 0.36 & 0.55 \\ \hline \textbf{DE+All} & {\color[HTML]{FE0000}2.5 seconds} & 3.13 & \multicolumn{1}{c|}{{\color[HTML]{FE0000}90\%}} & \multicolumn{1}{c|}{{\color[HTML]{FE0000}90\%}} & {\color[HTML]{FE0000}80\%} & \multicolumn{1}{c|}{{\color[HTML]{FE0000}0.44}} & \multicolumn{1}{c|}{{\color[HTML]{FE0000}0.50}} & 0.38 & 0.56 \\ \hline \end{tabular}} \caption{Comparison of \texttt{OP+All, OP+Important, DE+All, DE+Important} with Abdullah et al.~\cite{abdullah2019hear} and Carlini et al.~\cite{carlini2018} with respect to generation time for per adversarial attack, \texttt{Similarity} to original audio examples,\texttt{WER}, \texttt{Success Rate} and \texttt{Detection score} against defense system~\cite{waveguard} in attacking all three ASRs} \label{comparation_abdul} \end{table*} \iffalse \begin{table}[] \centering \begin{tabular}{|c|c|c|c|} \hline &OP&GL&DE \\\hline Time&3.5 seconds& 3.5 seconds& 2.5 seconds\\\hline \end{tabular} \caption{Generation time of one example using three attack generation techniques} \label{table: time} \end{table} \fi \subsection{RQ1: Comparison of Frame Selection Techniques} The best performing frame selection technique is one that achieves high \texttt{WER} and high \texttt{Similarity} across audio examples. However, these two metrics are often conflicting. We discuss and compare \texttt{WER} and \texttt{Similarity} achieved by the three frame selection techniques in our approach below. Figures in Table~\ref{table: wer} shows the \texttt{WER} achieved by different frame section techniques for the Librispeech and Commonvoice datasets across different ASRs and attack generation techniques while Figure~\ref{fig: Similarity} shows the \texttt{Similarity} achieved. \paragraph{\texttt{All} frames} We find in Table~\ref{table: wer} and Figure~\ref{fig: Similarity}, that the \texttt{All} frame selection achieves the highest \texttt{WER} and lowest \texttt{Similarity} compared to \texttt{Important} and \texttt{Random} across ASRs, input datasets and attack generation methods. This is in line with our expectations as the other two frame selection techniques select a small part of the audio to introduce noise into achieving lower \texttt{WER} but higher \texttt{Similarity} to original audio. 
\paragraph{\texttt{Important} versus \texttt{Random}:} For most combinations of ASR, dataset and attack generation, we find \texttt{Random} frame selection produces the lowest \texttt{WER} and the highest \texttt{Similarity}, while \texttt{Important} frame selection results in a \texttt{WER} and \texttt{Similarity} between \texttt{Random} and \texttt{All}. \paragraph{Statistical Analysis.} We confirmed the statistical significance (at the 5\% significance level) of the difference in means between the frame selection techniques using a one-way ANOVA and performed a post-hoc Tukey's Honest Significant Difference (HSD) test to reveal which differences between pairs of means are significant. Supplementary material Sections 1.1.1 and 1.1.2 list the P-values for pairwise comparisons of \texttt{WERs} and \texttt{Similarities} between frame selection techniques. For the \texttt{WER} metric, we find the \texttt{All} frame selection technique is significantly better than \texttt{Important} and \texttt{Random} on the majority of ASR, dataset and attack technique combinations. In contrast, for the \texttt{Similarity} measure, \texttt{Random} and \texttt{Important} frame selections significantly outperformed \texttt{All}.
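This statistical comparison can be reproduced with standard scientific Python tooling; the following minimal sketch uses \texttt{scipy} and \texttt{statsmodels} on per-example \texttt{WER} values for the three frame selections (the argument names are placeholders).

\begin{verbatim}
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_selections(wer_all, wer_important, wer_random, alpha=0.05):
    """One-way ANOVA across the three frame selections, followed by a
    post-hoc Tukey HSD test on per-example WER values."""
    _, p_value = f_oneway(wer_all, wer_important, wer_random)
    scores = np.concatenate([wer_all, wer_important, wer_random])
    groups = (["All"] * len(wer_all) + ["Important"] * len(wer_important)
              + ["Random"] * len(wer_random))
    tukey = pairwise_tukeyhsd(scores, groups, alpha=alpha)
    return p_value, tukey.summary()
\end{verbatim}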
\paragraph{Pareto front} Owing to the conflicting nature of the \texttt{WER} and \texttt{Similarity} metrics, all three frame selection techniques achieve a trade-off between them. We use the Pareto front over these two metrics, shown in Figure~\ref{fig: optimal_frames} for one of the datasets and ASRs, to determine the number of non-dominated attack examples (those that fall on the Pareto front) from each frame selection. We find \texttt{Important} frame selection has the largest number of non-dominated attacks (25 examples); \texttt{Random} was second with 15 examples, while \texttt{All} frames only had 1 non-dominated attack example. This trend is observed across all ASRs, attack generation techniques and datasets (see results in Supplementary material Section 1.1.3). Based on the number of non-dominated examples, we believe that \texttt{Important} frame selection is effective at achieving a trade-off between \texttt{WER} and \texttt{Similarity}. \paragraph{Summary} In terms of \texttt{WER}, we find \texttt{All} frames performs best. However, \texttt{Important} and \texttt{Random} frames perform better in terms of \texttt{Similarity}. We find \texttt{Important} is the best at optimising the trade-off between the two metrics, achieving reasonable performance in both \texttt{WER} and \texttt{Similarity}. \subsection{RQ2: Comparison of Attack Generation Techniques} We present the \texttt{WER} achieved by \texttt{GL, OP, DE} using different ASRs and datasets in Table~\ref{table: wer}, while we show the \texttt{Similarity} achieved in Figure~\ref{fig: Similarity}. The best performing attack generation technique is one that results in a high \texttt{WER} and a high \texttt{Similarity} to the original audio. \paragraph{WER Performance} \texttt{GL} attack generation performs better than both \texttt{OP} and \texttt{DE} in terms of the WER achieved. We confirm the differences are significant using a one-way ANOVA and Tukey's HSD test (see P-values in Section 1.2.1 of the Supplementary material). Between \texttt{OP} and \texttt{DE} attacks, \texttt{OP} outperforms \texttt{DE} with the DeepSpeech and Sphinx ASRs over the Librispeech dataset. There is no significant difference between the two techniques over the other dataset and ASRs. \paragraph{Similarity Performance} Both \texttt{OP} and \texttt{DE} significantly outperform \texttt{GL} in terms of \texttt{Similarity}, confirmed with pairwise comparison using a one-way ANOVA followed by Tukey's HSD test (P-value tables in Supplementary material Section 1.2.2). The median \texttt{Similarity} or PESQ score for \texttt{GL} tends to be below the value of $3.0$ (shown by the dashed line), irrespective of the frame selection used. According to Beuran et al.~\cite{PESQover3}, the standard for good quality audio is a PESQ score greater than $3$, and the \texttt{GL} technique does not meet this standard in our experiments. We believe this is because \texttt{GL} uses estimated, rather than actual, phase information, which causes distortion that reduces the PESQ score. Between \texttt{OP} and \texttt{DE}, there is no significant difference in their \texttt{Similarity} performance. The benefit of using \texttt{DE} lies in faster generation of an adversarial attack. The average time to generate a single adversarial attack using \texttt{DE} is $2.5$ seconds, a second faster than the \texttt{OP} technique ($3.5$ seconds on average), as \texttt{OP} relies on calculating the masking threshold for every input example.
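The Pareto-front analysis used in RQ1, and again for the attack generation techniques below, identifies non-dominated attacks over the per-attack \texttt{WER} and \texttt{Similarity} values; a straightforward quadratic-time sketch is shown here (both metrics are maximized).

\begin{verbatim}
import numpy as np

def non_dominated(wer, similarity):
    """Indices of attacks on the Pareto front when jointly maximizing
    WER and Similarity: an attack is dominated if another attack is at
    least as good on both metrics and strictly better on at least one."""
    points = np.column_stack([wer, similarity])
    front = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1)
                           & np.any(points > p, axis=1))
        if not dominated:
            front.append(i)
    return front
\end{verbatim}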
\paragraph{Pareto Front} As with RQ1, we draw the Pareto front using \texttt{WER} and \texttt{Similarity}, shown in Figure~\ref{fig: optimal}. We find the \texttt{DE} technique has the largest number of non-dominated attacks (28 examples); \texttt{OP} is second with 10 examples, while \texttt{GL} only has 1 non-dominated attack example. This trend is observed across all ASRs, frame selections and datasets (results available in Section 1.1.3 of the Supplementary material). \paragraph{Summary} Based on the number of non-dominated examples, we believe that \texttt{DE} is a suitable choice for optimising both \texttt{WER} and \texttt{Similarity}. Additionally, \texttt{DE} is the fastest attack generation technique.
Taking both these aspects into account, we believe \texttt{DE} would be the best choice for attack generation. \iffalse \paragraph{Summary} \texttt{OP} and \texttt{DE} are superior techniques for adversarial audio generation when compared to \texttt{GL}, as they produce noisy audio very similar to the original (average \texttt{Similarities} using \texttt{Important} frames selection are 3.90 and 4.24, respectively) across ASRs and datasets. And though \texttt{OP} and \texttt{DE} do not show significant difference on \texttt{Similarities} and \texttt{WER}, we find that \texttt{DE} does better in terms of balancing these two metrics. \fi \subsection{RQ3: Portability across ASRs} We evaluate portability of the adversarial attacks generated by \texttt{OP,GL,DE} across the three ASRs using the \texttt{Success Rate} metric, described in Section~\ref{sec:metrics}. Table~\ref{table: success_rate} presents \texttt{Success Rates} achieved with the Librispeech and Commonvoice datasets. We find \texttt{GL} achieves the best success rates over all ASRs, with both the Librispeech dataset (average of $98\%$) and the Commonvoice dataset (average of $92\%$). \texttt{OP} comes next, performing better than \texttt{DE} on the Librispeech dataset ($96\%$ versus $93.5\%$, respectively). \texttt{OP} and \texttt{DE} have similar performance over the Commonvoice dataset (average of $86\%$). \paragraph{Summary} All three attack generation techniques have high success rates across the three ASRs producing portable adversarial attacks. \texttt{GL} outperforms \texttt{OP} and \texttt{DE} in portability but the magnitude of difference is small (on average $2\%$ to $5\%$). \texttt{OP} and \texttt{DE} have comparable performance on the ASRs, especially with the Commonvoice dataset. \iffalse Compared to \texttt{GL} (average ranging from 92\% to 98\%), the \texttt{Success Rate} of adversarial attacks generated by \texttt{OP} (average of 86\% to 96\%) and \texttt{DE} (average of 86\% to 93\%) techniques is slightly lower. However, as we mentioned earlier, the adversarial attacks generated by the \texttt{GL} method are not similar enough to the original audio to meet our requirement of similarity, so it is meaningless for us to discuss it. Average \texttt{Success Rates} for each technique across ASRs within each dataset can be found on the last row of Table~\ref{table: success_rate}. Across datasets and ASRs, \texttt{OP} and \texttt{DE} produce fairly high average \texttt{Success Rates} across datasets (92.3\% and 90\%, respectively). Between them, \texttt{OP} does better than \texttt{DE}, although the magnitude of difference is small. Based on these results, we can confidently state that our approach using \texttt{DE} and \texttt{OP} produce adversarial attacks that are portable across ASRs with high \texttt{Success Rates}. \fi \subsection{RQ4: Comparison to Existing Techniques} \label{sec:RQ4} As mentioned in Section~\ref{sec:RQ}, we compare performance of our approach against a whitebox targeted technique proposed by Carlini et al.~\cite{carlini2018} and a blackbox untargeted technique proposed by Abdullah et al.~\cite{abdullah2019hear} using the metrics -- \texttt{WER, Similarity, Success rate, Time, Detection score}. \subsubsection{Comparison with Carlini et al} We fix the ASR to Deepspeech and input dataset to Commonvoice to match the experiments in Carlini et al.~\cite{carlini2018}. 
For comparison, we use the best performing techniques in our approach (for \texttt{Similarity} and \texttt{WER}) -- \texttt{OP} and \texttt{DE} for attack generation with \texttt{Important} and \texttt{All} frame selections. We show the results in Table~\ref{comparation_abdul}. We do not compare \texttt{WER}, as the target text for Carlini et al.~\cite{carlini2018} is the transcription text from our adversarial attacks, so there will be no difference. \paragraph{\texttt{Time} and \texttt{Similarity}} We find the time taken to generate attack examples is lower with our approaches, \texttt{OP} and \texttt{DE}, compared to Carlini et al., with a maximum speedup of $312 \times$ achieved with \texttt{DE+All}. We also achieve higher \texttt{Similarity} scores when using \texttt{Important} frames -- $4.3$ (\texttt{DE+Important}) and $3.9$ (\texttt{OP+Important}), compared to $3.6$ by Carlini et al. We confirm the statistical significance (at the 5\% significance level) of the observed differences in \texttt{Similarity} using a one-way ANOVA and Tukey's Honest Significant Difference (HSD) test. We find our techniques are a clear winner in terms of time taken, and outperform Carlini et al. in \texttt{Similarity} when using \texttt{Important} frames but not \texttt{All} frames. The \texttt{Similarity} performance difference between \texttt{Important} and \texttt{All} was discussed in RQ1. \paragraph{Success Rate} To evaluate the portability of adversarial attacks, we transcribe the adversarial attacks using Google and Sphinx (since DeepSpeech is used by Carlini et al.). We find that, when used with the Google ASR, adversarial attacks generated by Carlini et al.~have a much lower \texttt{Success Rate} than our techniques (33\% versus 74\% to 80\%). For Sphinx, the difference in \texttt{Success Rate} is smaller but the trend remains (77\% for Carlini et al.~versus 77\% to 90\% for ours). The lower \texttt{Success Rate} observed with Carlini et al.~arises because their technique specifically targets the neural network inside Deepspeech, and may not be as effective when used on other ASRs with different NNs. This is a drawback also encountered with other whitebox attacks. However, since our method is blackbox, we find it is easier to port our adversarial attacks to different ASRs. \paragraph{Detection score} Attack examples generated by Carlini et al. are more easily detected by Waveguard, with a higher \texttt{Detection score} of $0.67$, compared to the techniques in our approach, whose \texttt{Detection score}s range from $0.52$ to $0.56$. We believe this is because Carlini et al.~use noise in their attack generation, which is detected more easily by Waveguard. We find the four techniques in our approach perform better than Carlini et al.~at evading the Waveguard defense. Across all four evaluation metrics, we find one of the four techniques from our approach is the winner (highlighted in red in Table~\ref{comparation_abdul}), outperforming Carlini et al. Among them, \texttt{OP+Important} and \texttt{DE+Important} are superior to Carlini et al.~\cite{carlini2018} across all metrics. \texttt{OP+All} and \texttt{DE+All} show significant gains in generation time and \texttt{Success Rate}, but at the cost of \texttt{Similarity}, which is slightly lower than Carlini et al. \subsubsection{Comparison with Abdullah et al} Like our approach, Abdullah et al.~\cite{abdullah2019hear} use a blackbox, untargeted attack generation technique that is meant to be fast and portable across different ASRs.
Unlike the comparison with Carlini et al., we can include \texttt{WER} as a performance metric (in addition to the other 4 metrics) and Deepspeech ASR in our comparison. We discuss performance for each of the metrics below using the Commonvoice dataset\footnote{Results for Librispeech dataset follow a similar trend and can be viewed in Supplementary material Section 1.3.2.}. \paragraph{\texttt{Time} and \texttt{Similarity}} We find our approach, \texttt{OP} and \texttt{DE} with \texttt{All} frames, is much faster in generating attacks than Abdullah et al. ($5 \times$ and $7 \times$ faster, respectively). In contrast, Abdullah et al. is $8$ times faster than \texttt{OP} and \texttt{DE} when they use \texttt{Important} frames, where much of the time with our approach is spent in frame selection. For the \texttt{Similarity} metric, our approach outperforms Abdullah et al. with all 4 techniques (at 5\% significance level, P-value tables in the Supplementary material.) As noted in RQ1, \texttt{Important} frame selection achieves better \texttt{Similarity} scores than \texttt{All} frames. \paragraph{\texttt{Success rate}, \texttt{WER} and \texttt{Detection score}} Attack examples generated with \texttt{OP} and \texttt{DE} have a higher \texttt{Success rate} than Abdullah et al. across all ASRs. Selecting \texttt{All} frames with our attack techniques achieves the best \texttt{Success rate}. We see a similar trend with \texttt{WER}, where \texttt{OP} and \texttt{DE} outperform Abdullah et al. (at 5\% statistical significance). Finally, \texttt{OP} and \texttt{DE} surpass Abdullah et al. with respect to getting past Waveguard's defense system by achieving lower detection scores of $0.52 - 0.56$ versus $0.65$ for Abdullah et al.. In summary, we find our attack techniques, \texttt{OP} and \texttt{DE}, surpass Abdullah et al. for each of the five evaluation metrics (best performing is highlighted in red in Table~\ref{comparation_abdul}). Choice of frame selection within \texttt{OP} and \texttt{DE} impacts attack generation \texttt{Time} and \texttt{Similarity} while the relative performance on the remaining metrics is largely unaffected. \section{Related Work} \begin{table}[htbp] \centering \begin{tabular}{|p{3cm}|p{4.5cm}|} \hline Attack Type & Existing work \\\hline Whitebox-Targeted & Vaidya et al.~\cite{Vaidya_cocaine}, Carlini et al,~\cite{carlini2016,carlini2018}, Qin et al.~\cite{qin2019},Yuan et al.~\cite{yuancommandersong},Yakura et al.~\cite{Yakurarobust}, Schönherr et al.~\cite{schonherr2018,schonherr2020imperio}, Szurley et al.~\cite{szurley2019perceptual} \\\hline Blackbox-Targeted & Zhang et al.~\cite{Zhang2017DolphinAttack}, Alzantot et al.~\cite{didyouhear}, Taori et al.~\cite{taori2019targeted} \\\hline Blackbox-Untargeted & Abdullah et al.~\cite{abdullah2019hear} \\\hline \end{tabular} \caption{Existing work on adversarial ASR attack generations.} \vspace{-10pt} \label{existing work} \end{table} As mentioned in Section~\ref{sec:intro}, existing adversarial attack generation on ASR models can be classified along two dimensions: 1. Targeted for a given transcription or untargeted, and 2. Whitebox, with knowledge of the internal ASR structure or Blackbox. Table~\ref{existing work} lists the existing techniques using these two dimensions and they are discussed in more detail in the rest of this Section. \subsection{Targeted Attacks} \label{sec:targeted} Vaidya et al.~\cite{Vaidya_cocaine} pioneered the first whitebox targeted method for attacking ASR in 2015. 
Given the transcription to target, they gradually approach the target by continuously fine-tuning the parameters of the extracted MFCC features. Once the goal is reached, they use the obtained adversarial MFCC features to reconstruct the speech waveform. On the basis of Vaidya's work and in an effort to improve the efficiency of their approach, Carlini et al.\cite{carlini2016} proposed Hidden Voice Command in 2016, adding noise that is often encountered in real life. However, neither of these two types of attacks can conceal the existence of noise, and such adversarial attacks can be easily detected as noise rather than effective commands. Yuan et al.~\cite{yuancommandersong} proposed a method for embedding commands into songs so that when these songs are played, the commands will be translated by an ASR. Additionally, they improve the realistic nature of adversarial attacks by introducing noise generated by hardware devices. This approach, however, is restricted to songs as the carrier of commands, and is, therefore, limited in application scenarios. Carlini et al.~\cite{carlini2018} in 2018 used a whitebox approach that applies gradient descent to modify the original audio so that the difference between the transcription and the target text is smaller. Their experimental results show their attack \texttt{Success Rates} reached 100\% on Deepspeech ASR. However, their approach faces the following drawbacks: First, it can take up to several hours to generate attacks; second, the gradient descent method requires the attacker to have a good understanding of all the internal parameters and structures of the attacked system before it can be used; and finally the adversarial attacks generated will be invalid over other ASRs. Yakura et al.~\cite{Yakurarobust} proposed some improvements to~\cite{carlini2018} to maintain attack performance under over-the-air conditions (mixed with sound of the surrounding environment). They generate adversarial attacks accounting for noise caused by echo and recording in real life, so as to obtain more robust adversarial attacks. However, other shortcomings in Carlini et al.\cite{carlini2018} (such as long generation time and weak transferability) have not been addressed. In 2018, Schönherr et al.~\cite{schonherr2018} developed a whitebox approach that applies the knowledge of masking threshold to generate adversarial attacks. They proposed to limit the generated noise below the masking threshold of the original audio to ensure that the obtained perturbation is not audible to the human ear. In more recent work~\cite{schonherr2020imperio}, they introduced room impulse response (RIR) simulator to improve the robustness of examples that produces different types of noise for different environment configurations. Inspired by Schönherr et al., Qin and Carlini et al.~\cite{qin2019} developed a whitebox method and optimized perturbations to make it lower than the masking threshold of the original audio. This method achieved a 100\% attack \texttt{Success Rate} on the Lingvo system. Like other whitebox targeted approaches, their work lacks portability to other ASRs and is time consuming for attack generation. Around the same time, Szurley et al.~\cite{szurley2019perceptual} proposed a whitebox method similar to Schönherr et al.~\cite{schonherr2018, schonherr2020imperio} and Carlini et al.~\cite{qin2019, carlini2018} that constructed an optimization based on masking threshold and combined it with room reverberation. 
Their method reached a 100\% \texttt{Success Rate} on Deepspeech but still suffers from the limitations of lacking portability and time-consuming attack generation. \paragraph{Blackbox-targeted approaches} Few Blackbox Targeted adversarial attack generation techniques exist in the literature~\cite{Zhang2017DolphinAttack,didyouhear,taori2019targeted}. Zhang et al.~\cite{Zhang2017DolphinAttack} in 2017 modulated the voice on an ultrasonic carrier to insert preset commands (like ``Open the window'') into the original audio. However, this method is not easy to reproduce as it uses hardware characteristics of the microphone to complete the attack. Alzantot et al.~\cite{didyouhear} proposed an iterative optimization method that adds a small amount of noise iteratively to a benign example until the ASR outputs a target label. Taori et al.~\cite{taori2019targeted} used a genetic algorithm to achieve iterative optimization, mutating benign examples until the ASR output matches a target label. These approaches for blackbox targeted attacks suffer from the following two weaknesses: First, they require thousands of queries to ASRs to generate one adversarial attack, which is unrealistic. Second, these attacks are only applicable to ASRs that classify audio, not to ASRs that translate audio. \subsection{Untargeted Attacks} \label{sec:untargeted} The only known untargeted blackbox adversarial ASR attack generation approach is that proposed by Abdullah et al.~\cite{abdullah2019hear} in 2019. They construct an adversarial attack by decomposing and reconstructing the original audio. Specifically, they decompose the original audio into components called eigenvectors via Singular Spectrum Analysis (SSA). These eigenvectors represent the various trends and noises that make up the audio. They believe that eigenvectors with smaller eigenvalues convey limited information. They choose a threshold to classify eigenvalues as small and subsequently eliminate the corresponding eigenvectors. They then reconstruct an audio from the remaining components as the adversarial attack. We compare the performance of our techniques against their approach in Section~\ref{sec:RQ4}. \iffalse Their method reduced the time to generate adversarial attacks to a few seconds. However, they did not evaluate on different input datasets, and the extent of difference between the translated text of adversarial attacks and the translated text of the original audio is unclear. For example, if the original translation of an audio is ``I love apples'' and the translation of an adversarial attack is ``I lovee apples'', the difference between the two translated texts is small. However, in their experiments, they record it as a successful attack without measuring \texttt{WER}, which gives a measure of text output difference. Compared with their experiments, we explore the performance of our method on different audio datasets, and we were able to generate adversarial attacks with an average \texttt{WER} over $0.44$ (with \texttt{OP+All}). Additionally, we find that our attack is more effective at bypassing defense systems than their method. \fi \section{Conclusion} We proposed a blackbox untargeted adversarial attack generation technique for ASRs using frequency masking to make the adversarial audio sound similar to the original while producing a change in the transcription. Our approach provides three attack generation options -- \texttt{GL, OP} and \texttt{DE}.
We also provide the option of selectively introducing perturbations to a small fraction of audio frames using three frame selection options -- \texttt{Random, Important} and \texttt{All}. Evaluation of our techniques over three ASRs and two audio datasets showed that they can be effective at achieving high \texttt{WERs} (average of $44\%$ with \texttt{OP+All}) while also achieving high \texttt{Similarity} (average of $3.93$ with \texttt{OP+Important}). The choice of attack generation and frame selection helps achieve a good balance between these two metrics, with \texttt{DE} attack generation and \texttt{Important} frames achieving the best trade-off. We also confirmed that our techniques were portable across ASRs and superior to the existing whitebox targeted technique~\cite{carlini2018} and the blackbox untargeted technique~\cite{abdullah2019hear} in terms of \texttt{WER, Similarity, Success Rate, Time} and \texttt{Detection score}. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Let $W$ be a finite Coxeter group. The Iwahori-Hecke algebra $\Heq$ of $W$ is a deformation of the group algebra of $W$ with nonzero parameter $q$. Iwahori-Hecke algebras arise in the representation theory of finite groups of Lie type and in knot theory \cite{Geck2000a}. Setting $q=0$ results in the $0$-Hecke algebra $\Hec$. A first (and thorough) study of $\Hec$ was carried out by Norton \cite{Norton1979}. Its structure diverges considerably from the generic $q\neq 0$ case \cite{Carter1986}. The 0-Hecke algebras appear in the modular representation theory of finite groups of Lie type \cite{Norton1979,Carter1976}. The Grothendieck ring of the finitely generated modules of the 0-Hecke algebras of the symmetric groups is isomorphic to the Hopf algebra of quasisymmetric functions \cite{Krob1997}. This article is concerned with the center $Z(\He)$ of the 0-Hecke algebra $\He$ of the symmetric group $\SG_n$. Fayers mentions the description of $Z(\Hec)$ as an open problem in \cite{Fayers2005}. Brichard gives a formula for the dimension of the center in type $A$ \cite{Brichard2008}. Yang and Li obtain a lower bound for the dimension of $Z(\Hec)$ for irreducible $W$ in several types other than $A$ \cite{Yang2015}. Moreover, they specify the dimension in type $I_2(n)$ for $n\geq 5$. In \cite{He2015} He describes a basis of $Z(\Hec)$ in arbitrary type indexed by a set of equivalence classes $\wmaxa$ of $W$. These classes are rather subtle. In fact, \cite{He2015} contains no result on the number of these classes, which is the dimension of the center. Motivated by the connection to the center of the 0-Hecke algebra $\He$, the present paper aims to shed some light on the quotient set $\smaxa$. To be precise, we parametrize it by compositions, obtain a complete set of representatives and develop a combinatorial description for certain elements of $\smaxa$. Let $S$ be the set of Coxeter generators of $W$ and $\ell$ be the length function of $W$. Define $W_{\min}$ and $W_{\max}$ to be the sets of elements of~$W$ whose length is minimal and maximal in their conjugacy class, respectively. Geck and Pfeiffer introduce in \cite{Geck1993} a relation $\to$ on $W$ known as the \emph{cyclic shift relation}. It is the reflexive and transitive closure of the relations $\overset{s}{\to}$ for $s\in S$ where we have $w\overset{s}{\to}w'$ if $w' = sws$ and $\ell(w')\leq \ell(w)$. In the case where $W$ is a Weyl group, Geck and Pfeiffer show that $W_{\min}$ in conjunction with $\to$ has remarkable properties and how these properties can be used in order to define a character table of $\Heq$ with $q\neq 0$ \cite{Geck1993}. Since then their results have been generalized to finite \cite{Geck1996}, affine \cite{He2014} and finally to all Coxeter groups \cite{Marquis2021}. The relation $\to$ can also be used to describe the conjugacy classes of Coxeter groups \cite{Geck2000a,Marquis2020structure}, in particular for computational purposes \cite{Geck1996,Geck2000a}. Geck, Kim and Pfeiffer introduce a twisted version $\to_\delta$ of the relation belonging to twisted conjugacy classes of $W$ in \cite{Geck2000}. Building on the results of \cite{Geck1993}, Geck and Rouquier define a basis of $Z(\Heq)$ for $q\neq 0$ and $W$ a finite Weyl group, which is naturally indexed by the conjugacy classes of $W$ \cite{Geck1997}. By setting $w\approx w'$ if and only if $w\to w'$ and $w' \to w$ one obtains an equivalence relation $\approx$ on $W$. The $\approx$-equivalence classes of $W$ are known as \emph{cyclic shift classes}.
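For instance, in $\SG_3$ with Coxeter generators $s_1 = (1,2)$ and $s_2 = (2,3)$, the transposition $(1,3) = s_1s_2s_1$ has length $3$ while $s_1 (1,3) s_1 = (2,3) = s_2$ has length $1$, so $(1,3)\overset{s_1}{\to}(2,3)$. Since lengths cannot increase along $\to$, we have $(2,3)\not\to (1,3)$ and hence $(1,3)\not\approx (2,3)$, although the two transpositions are conjugate.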
On $\faktid{W}{}$ the relation $\to$ gives rise to a partial order. Gill considers the corresponding subposets $\faktid{\mc O}{}$ where $\mc O$ is a conjugacy class of $W$ \cite{Gill2000}. For an element $\Sigma$ of the quotient set $\wmaxa$, He defines the element $\ngen_{\leq \Sigma} := \sum_x \ngen_x$ where $x$ runs over the order ideal in Bruhat order of $W$ generated by $\Sigma$ \cite{He2015}. Then he shows that the elements $\ngen_{\leq \Sigma}$ for $\Sigma \in \wmaxa$ form a basis of $Z(\Hec)$. We consider He's approach in \Cref{sec:preliminaries}. For each composition $\alpha \vDash n$, Kim defines the \emph{element in stair form} $\sigma_\alpha \in \SG_n$ \cite{Kim1998}. Moreover, she calls $\alpha \vDash n$ \emph{maximal} if there is $k \geq 0$ such that the first~$k$ parts of $\alpha$ are even and the remaining parts are odd and weakly decreasing. In this case we write $\alpha \vDash_e n$. We show in \Cref{thm:parametrizations_of_kim} that the elements in stair form $\sigma_\alpha$ for $\alpha \vDash_e n$ form a system of representatives of $\smaxa$. For $\alpha \vDash_e n$ let $\Sigma_\alpha \in \smaxa$ be the equivalence class of the element in stair form $\sigma_\alpha$. It follows that $\alpha \mapsto \Sigma_\alpha$ is a parametrization of $\smaxa$ by the maximal compositions of $n$. That is, the elements $\ngen_{\leq \Sigma_\alpha}$ for $\alpha \vDash_e n$ form a basis of $Z(\He)$. This leads to an alternative proof of Brichards dimension formula from \cite{Brichard2008}, which she obtained by considering braid diagrams on the Möbius strip. Since $\ngen_{\leq\Sigma_\alpha}$ depends on the order ideal generated by $\Sigma_\alpha$, a description of the elements of $\Sigma_\alpha$ is desirable. We obtain combinatorial characterizations of the equivalence classes $\Sigma_{(n)}$ (\Cref{thm:characterization_of_Sigma_(n)}) and $\Sigma_{(k,1^{n-k})}$ with~$k$ odd (\Cref{thm:characterization_of_Sigma_for_odd_hook}) and a decomposition rule $\Sigma_{(\alpha_1,\dots, \alpha_l)} =\Sigma_{(\alpha_1)} \iprod \Sigma_{(\alpha_2,\dots,\alpha_l)}$ if $\alpha_1$ is even given by an injective operator $\iprod$ which we call the \emph{inductive product} (\Cref{thm:indcutive_product_bijection}). This allows us to describe $\Sigma_\alpha$ for all $\alpha \vDash_e n$ whose odd parts form a hook. Moreover, we will see how these $\Sigma_\alpha$ can be computed recursively. In particular, we infer a characterization of $\Sigma_{(k,1^{n-k})}$ for even $k$ as well (\Cref{thm:characterization_of_Sigma_for_arbitrary_hook}). The structure is as follows. In \Cref{sec:preliminaries} we present the background material and review He's basis of the center of $\Hec$ and the connection to the quotient set $\wmaxa$. In \Cref{sec:parametrization} we obtain the parametrization by maximal compositions and a set of representatives of $\smaxa$. In \Cref{sec:equivalence_classes} we combinatorially describe the elements $\Sigma_\alpha \in\smaxa$ indexed by maximal compositions whose odd parts form a hook. \section{Preliminaries} \label{sec:preliminaries} Let~$\K$ be an arbitrary field. We set $\N := \set{1,2,\dots}$ and always assume that $n\in \N$. For $a,b\in \Z$ we define the \emph{discrete interval} $[a,b]:=\set{c\in \Z\mid a\leq c \leq b}$ and use the shorthand $[a] := [1,a]$. \subsection{Coxeter groups} We consider basic concepts from the theory of finite Coxeter groups. Our motivation is the application to the symmetric groups. Refer to \cite{Humphreys1990,Bjorner2006} for details. Let~$S$ be a set. 
A \emph{Coxeter matrix} is a map $m \colon S \times S \to \N \cup \set{\infty}$ such that \begin{enumerate*} \item $m(s,s') = 1$ if and only if $s' = s$ and \item $m(s,s') = m(s',s)$ \end{enumerate*} for all $s,s'\in S$. The corresponding \emph{Coxeter graph} is the undirected graph with vertex set $S$ containing the edge $\set{s,s'}$ if and only if $m(s,s') \geq 3$. If $m(s,s') \geq 4$ then the edge $\set{s,s'}$ is labeled with $m(s,s')$. A group $W$ is called a \emph{Coxeter group} with \emph{Coxeter generators} $S$ if $W$ is generated by $S$ subject to the relations \begin{enumerate} \item $s^2 = 1$ for all $s\in S$, \item $(ss's \cdots)_{m(s,s')} = (s'ss' \cdots)_{m(s,s')}$ for all $s,s'\in S$ with $s\neq s'$ and\\ $m(s,s') < \infty$ \end{enumerate} where $(ss's\cdots)_p$ denotes the alternating product of $s$ and $s'$ with $p$ factors. Let $W$ be a Coxeter group with Coxeter generators $S$. We always assume that $W$ is finite. For $I\subseteq S$ the \emph{parabolic subgroup} $W_I$ is the subgroup of $W$ generated by $I$. It is a Coxeter group with Coxeter generators $I$. Each $w\in W$ can be written as a product $w = s_1 \cdots s_k$ with $s_i \in S$. Then $s_1 \cdots s_k$ is called a \emph{word} for $w$. If~$k$ is minimal among all words for $w$, $s_{1} \cdots s_{k}$ is a \emph{reduced word} for $w$ and $\ell(w) := k$ is the \emph{length} of $w$. The \emph{left} and the \emph{right descent set} of $w\in W$ are given by \begin{align} \begin{aligned} D_L(w) &:= \set{s\in S \mid \ell(sw) < \ell(w)}, \\ D_R(w) &:= \set{s\in S \mid \ell(w s) < \ell(w)}. \end{aligned} \label{eq:descent_sets_W} \end{align} The \emph{Bruhat order} $\leq$ is the partial order on $W$ given by $u \leq w$ if and only if there exists a reduced word for $w$ which contains a reduced word of $u$ as a subsequence. The Bruhat poset is graded by the length function $\ell$. Since $W$ is finite, there exists a greatest element $w_0 \in W$ in Bruhat order. This element is called the \emph{longest element} of $W$. It has the following useful properties. \begin{lem}[{\cite[Proposition 2.3.2 and Corollary 2.3.3]{Bjorner2006}}] \label{thm:w0_properties} Let $w_0$ be the longest element of $W$. Then we have \begin{enumerate} \item $w_0^2 = 1$, \item $\ell(ww_0) = \ell(w_0w) = \ell(w_0) - \ell(w)$ for all $w \in W$, \item $\ell(w_0ww_0) = \ell(w)$ for all $w\in W$. \end{enumerate} \end{lem} \begin{lem}[{\cite[Propositions 2.3.4 and 3.1.5]{Bjorner2006}}] \label{thm:w0_maps} For the Bruhat order on $W$, we have that \begin{enumerate} \item $w\mapsto ww_0$ and $w\mapsto w_0w$ are antiautomorphisms, \item $w\mapsto w_0 w w_0$ is an automorphism. \end{enumerate} \end{lem} We now define the 0-Hecke algebra of $W$. Refer to Chapter~1 of \cite{Mathas1999} for background information on $\Hec$. \begin{defi} \label{thm:0-Hecke algebra_Coxeter_group} The \emph{0-Hecke algebra}~$\Hec$ of $W$ is the unital associative $\K$-algebra generated by the elements $\ngen_s$ for $s\in S$ subject to the relations \begin{enumerate} \item $\ngen_s^2 = -\ngen_s,$ \item $(\ngen_{s}\ngen_{s'}\ngen_{s} \cdots)_{m(s,s')} = (\ngen_{s'}\ngen_{s}\ngen_{s'} \cdots)_{m(s,s')}$ for all $s,s'\in S$ with $s\neq s'$. \end{enumerate} \end{defi} For $w \in W$ define $\ngen_w := \ngen_{s_1} \cdots \ngen_{s_k}$ where $\fs_{1} \cdots \fs_{k}$ is a reduced word for $w$. The word property \cite[Theorem 3.3.1]{Bjorner2006} ensures that this is well defined.
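For instance, if $s,s'\in S$ satisfy $m(s,s') = 3$, then the element $w = ss's = s'ss'$ has these two reduced words, and relation~(2) of \Cref{thm:0-Hecke algebra_Coxeter_group} gives $\ngen_{s}\ngen_{s'}\ngen_{s} = \ngen_{s'}\ngen_{s}\ngen_{s'}$, so both reduced words yield the same element $\ngen_w$.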
By \cite[Theorem 1.13]{Mathas1999}, we have that $\set{\ngen_{w} \mid w \in W}$ is a $\K$-basis of $\Hec$ with multiplication given by \begin{align*} \ngen_s \ngen_w = \begin{cases} \ngen_{s w} & \text{if}\ \ell(s w ) > \ell(w) \\ - \ngen_{w} & \text{if}\ \ell(s w ) < \ell(w) \\ \end{cases} \end{align*} for $w\in W$ and $s\in S$. \subsection{The symmetric group} For a finite set $X$ we define $\SG(X)$ to be the group formed by all bijections from $X$ to itself. The \emph{symmetric group} $\SG_n$ is the group $\SG([n])$. Its elements are called \emph{permutations}. Let $S$ be the set of adjacent transpositions $s_i := (i, i+1) \in \SG_n$ for $i = 1,\dots, n-1$. The elements of $S$ generate $\SG_n$ as a Coxeter group subject to the relations \begin{alignat*}{3} \fs_i^2 &= 1, \\ \fs_i\fs_{i+1}\fs_i &= \fs_{i+1}\fs_i\fs_{i+1}, \\ \fs_i\fs_j &= \fs_j \fs_i \ \text{if } |i-j|\geq 2 \end{alignat*} \cite[Proposition 1.5.4]{Bjorner2006}. For $n\geq 2$, $\SG_n$ is an irreducible Coxeter group of type $A_{n-1}$. While considering the symmetric group $\SG_n$, we always assume that $S$ is the corresponding set of adjacent transpositions. For $\sigma \in \SG_n$ we have \begin{align} \begin{aligned} D_L(\sigma) &= \set{s_i \in S \mid \sigma^{-1}(i) > \sigma^{-1}(i+1)}, \\ D_R(\sigma) &= \set{s_i \in S \mid \sigma(i) > \sigma(i+1)} \end{aligned} \label{eq:descent_sets_of_Sn} \end{align} \cite[Proposition 1.5.3]{Bjorner2006}. The longest element $w_0$ of $\SG_n$ is given by $w_0(i) = n-i+1$ for $i\in [n]$. We denote the \emph{$0$-Hecke algebra of the symmetric group} $\SG_n$ with $\He := H_{\SG_n}(0)$ and use the shorthand $\ngen_i := \ngen_{s_i}$ for $i\in [n-1]$. \subsection{Combinatorics} A \emph{composition} $\alpha = \parts{\alpha}l$ is a finite sequence of positive integers. The \emph{length} and the \emph{size} of $\alpha$ are given by $\ell(\alpha):= l$ and $|\alpha| := \sum_{i=1}^l \alpha_i$, respectively. The $\alpha_i$ are called \emph{parts} of $\alpha$. If $\alpha$ has size~$n$, $\alpha$ is called \emph{composition of~$n$} and we write $\alpha \vDash n$. A \emph{weak} composition of $n$ is a finite sequence of nonnegative integers that sum up to $n$. We write $\alpha \vDash_0 n$ if $\alpha$ is a weak composition of $n$. The \emph{empty composition} $\emptyset$ is the unique composition of length and size~$0$. A \emph{partition} is a composition whose parts are weakly decreasing. We write $\lambda \vdash n$ if $\lambda$ is a partition of size~$n$. For example, $(1,4,3)\vDash 8$ and $(4,3,1)\vdash 8$. Partitions of $n$ of the form $(k, 1^{n-k})$ with $k\in [n]$ are called \emph{hooks}. A permutation $\sigma\in \SG_n$ can be represented in cycle notation where cycles of length one may be omitted. The \emph{cycle type} (or simply \emph{type}) of a permutation $\sigma\in \SG_n$ is the partition of~$n$ whose parts are the sizes of all the cycles of $\sigma$. If $\sigma$ has cycle type $(k,1^{n-k})$ for a $k\in [n]$ we also call it a $k$-cycle. A $k$-cycle is \emph{trivial} if $k = 1$. Writing $\sigma$ in cycle notation is the same as expanding $\sigma$ into a product $\sigma_1 \cdots \sigma_r$ of disjoint cycles where the trivial cycles may be omitted in the expansion. On the other hand, in order to describe the cycle notation of a permutation combinatorially, it can be useful to include them. In \Cref{sec:equivalence_classes} we will characterize the elements of certain equivalence classes of $\SG_n$ by considering them in cycle notation. 
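For example, the permutation $(1,5,2)(3,4)\in \SG_6$, where the trivial cycle $(6)$ is omitted, has cycle type $(3,2,1)\vdash 6$ and hence is not a $k$-cycle for any $k$; the partition $(4,1,1)\vdash 6$ is a hook, whereas $(3,2,1)$ is not.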
\subsection{Centers of 0-Hecke algebras} \label{sec:preliminaries:center} In this \namecref{sec:preliminaries:center} we introduce He's basis of the center of $\He$. Following his approach in \cite{He2015}, we take a more general point of view and consider the center of $\Hec$ for a finite Coxeter group $W$ twisted by an automorphism $\delta$. This enables us to prove a useful invariance property in \Cref{thm:conjugation_with_w0_and_equivalence_classes}. By setting $W = \SG_n$ and $\delta = \id$ we recover the desired results on the center of $\He$. Let~$W$ be a finite Coxeter group with Coxeter generators $S$ and $\delta$ be a~$W$-automorphism with $\delta(S) = S$. For instance, we can choose $\delta=\id$. Another example is given by conjugation with $w_0$. For $u,w\in W$ we use the shorthand $w^u = u w u\inv$. Define $\nu\colon W \to W$, $w\mapsto w^{w_0}$. Then $\nu$ is a group automorphism and from \Cref{thm:w0_properties} it follows that $\ell(\nu(w)) = \ell(w)$ for all $w\in W$ so that $\nu(S) = S$. In general, each graph automorphism of the Coxeter graph of~$W$ gives rise to a~$W$-automorphism that fixes~$S$. By the next \namecref{thm:delta_is_diagram_automorphism}, the converse direction is also true. The result is not new. For instance, it was already used implicitly in \cite[Section 2.10]{Geck2000}. \begin{lem} \label{thm:delta_is_diagram_automorphism} Let $\delta$ be a group automorphism of $W$ with $\delta(S) = S$. \begin{enumerate} \item $\delta$ is an automorphism of the Coxeter graph of~$W$. \item $\delta$ is an automorphism of the Bruhat order of~$W$. \end{enumerate} \end{lem} \begin{proof} For $w\in W$ denote the order of~$w$ by $\ord(w)$. Let $m$ be the Coxeter matrix and $\Gamma$ be the Coxeter graph of~$W$. Then $m(s,s') = \ord(ss')$ for all $s,s' \in S$. Since $\delta$ is a group automorphism, we have $\ord(\delta(w)) = \ord(w)$ for all $w\in W$. Hence for all $s,s'\in S$ \begin{align*} m(\delta(s),\delta(s')) = \ord(\delta(s)\delta(s')) = \ord(s s') = m(s,s'). \end{align*} Thus, $\delta$ is an automorphism of $\Gamma$. By a comment following \cite[Proposition 2.3.4]{Bjorner2006}, we have that multiplicatively extending a graph automorphism of $\Gamma$ yields a Bruhat order automorphism of $W$. Hence, $\delta$ is such an automorphism. \end{proof} \begin{exa} \label{thm:delta_for_Sn} The Coxeter graph of $\SG_n$ is shown below. \begin{center} \begin{tikzpicture} \filldraw (0,0) circle (2pt) node (s1) {}; \draw (s1) node[align=center, above] {$s_1$}; \filldraw (1,0) circle (2pt) node (s2) {}; \draw (s2) node[align=center, above] {$s_2$}; \filldraw (2,0) circle (2pt) node (s3) {}; \draw (s3) node[align=center, above] {$s_3$}; \node (s4) at (2.5,0) {}; \node (s5) at (3.0,0) {}; \filldraw (3.5,0) circle (2pt) node (sn-2) {}; \draw (sn-2) node[align=center, above] {$s_{n-2}$}; \filldraw (4.5,0) circle (2pt) node (sn-1) {}; \draw (sn-1) node[align=center, above] {$s_{n-1}$}; \draw (s1) -- (s2) -- (s3) -- (s4); \draw[dotted] (s4) -- (s5); \draw (s5) -- (sn-2) -- (sn-1); \end{tikzpicture} \end{center} This graph has at most two automorphisms: the identity and the mapping given by $s_i \mapsto s_{n-i}$. For $n\geq 3$ these maps are distinct. Let $w_0$ be the longest element of $\SG_n$. Then $w_0(j) = n-j+1$ for all $j\in [n]$ and therefore $s_i^{w_0} = (n-i+1, n-i) = s_{n-i}$. Hence the second map is $\nu$. Thus, $\id$ and $\na$ are the only possibilities for $\delta$ if $W = \SG_n$.
\end{exa} Two elements $w,w' \in W$ are called \emph{$\delta$-conjugate} if there is an $x\in W$ such that $w' = x w \delta(x)^{-1}$. The set of $\delta$-conjugacy classes of~$W$ is denoted by $\cl(W)_\delta$. For $\mc O \in \cl(W)_\delta$ the set of elements of minimal length in $\mc O$ and the set of elements of maximal length in $\mc O$ is denoted by $\mc O_{\min}$ and $\mc O_{\max}$, respectively. We want to decompose these sets using an equivalence relation. Let $w, w' \in W$. For $s \in S$ we write $w \overset{s}{\to}_\delta w'$ if $w' = s w \delta(s)$ and $\ell(w') \leq \ell(w)$. We write $w \to_\delta w'$ if there is a sequence $w = w_1,w_2, \dots, w_{k+1} = w'$ of elements of~$W$ such that for each $i\in [k]$ there exists an $s\in S$ such that $w_{i} \overset{s}{\to}_\delta w_{i+1}$. If $w\to_\delta w'$ and $w'\to_\delta w$ we write $w \approx_\delta w'$. Then $\approx_\delta$ is an equivalence relation. If $w \approx_\delta w'$ then $\ell(w) = \ell(w')$. Thus, for all $\mc O \in \cl(W)_\delta$, $\mc O_{\min}$ and $\mc O_{\max}$ decompose into equivalence classes of $\approx_\delta$. Define $W_{\delta,\min} := \bigcup_{\mc O \in \cl(W)_\delta} \mc O_{\min}$ and $\wfakt{\delta}{\min}$ to be the quotient set of $W_{\delta,\min}$ by $\approx_\delta$. Analogously, define the sets $W_{\delta,\max} := \bigcup_{\mc O \in \cl(W)_\delta} \mc O_{\max}$ and $\wfakt{\delta}{\max}$. In the case $\delta = \id$ we may omit the index $\delta$. \begin{exa} \label{exa:delta_quotient_sets} We have $(1,2,3) \overset{(1,2)}{\to} (1,3,2) \overset{(1,2)}{\to} (1,2,3)$ so that $(1,2,3) \approx (1,3,2)$. Moreover, $\ell((1,2)) = \ell((2,3)) = 1$ and $\ell((1,3)) = 3$. Hence, \begin{align*} \set{1},\ \set{(1,2,3),(1,3,2)} \ \text{and} \ \set{(1,3)} \end{align*} are the elements of $\faktor{(\SG_3)_{\max}}{\approx}$. \end{exa} Since $\delta$ is a Bruhat order automorphism of $W$ by \Cref{thm:delta_is_diagram_automorphism}, we obtain an algebra automorphism of $\Hec$ by setting $\ngen_{s} \mapsto \ngen_{\delta(s)}$ for all $s\in S$ and extending multiplicatively and linearly. This algebra automorphism is also denoted by $\delta$. The \emph{$\delta$-center} of $\Hec$ is given by \begin{align*} Z(\Hec)_\delta := \set{z \in \Hec \mid a z = z \delta(a) \text{ for all } a \in \Hec}. \end{align*} We now come to He's basis of $Z(\Hec)_\delta$. For $\Sigma \in \wfakt{\delta}{\max}$ set \begin{align*} W_{\leq \Sigma} &:= \set{x\in W \mid x \leq w \text{ for some } w \in \Sigma} \end{align*} and \begin{align*} \ngen_{\leq \Sigma} &:= \sum_{x\in W_{\leq \Sigma}} \ngen_x. \end{align*} \begin{thm}[{\cite[Theorem 5.4]{He2015}}] \label{thm:basis_of_center_general} The elements $\ngen_{\leq \Sigma}$ for $\Sigma \in \wfakt{\delta}{\max}$ form a $\K$-basis of $Z(\Hec)_\delta$. \end{thm} We are concerned with the following special case. \begin{cor} \label{thm:basis_of_center} The elements $\ngen_{\leq \Sigma}$ for $\Sigma \in \smaxa$ form a basis of $Z(\He)$. \end{cor} \begin{exa} Note that in $\SG_3$ \begin{align*} (1,2,3) = s_1s_2, \ (1,3,2) = s_2s_1 \ \text{and} \ (1,3) = w_0. \end{align*} Thus, \Cref{exa:delta_quotient_sets} and \Cref{thm:basis_of_center} yield that the elements \begin{align*} 1,\ 1+ \ngen_1 + \ngen_2 + \ngen_1\ngen_2 + \ngen_2\ngen_1 \ \text{and} \ \sum_{w \in \SG_3} \ngen_w \end{align*} form a basis of $Z(H_3(0))$. \end{exa} The basis of $Z(\He)$ from \Cref{thm:basis_of_center} depends on $\smaxa$. This is the motivation for considering $\smaxa$ in this paper. 
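For small~$n$, the quotient set $\smaxa$ can also be enumerated by brute force directly from the definitions. The following Python sketch, included purely as an illustration and not needed for any of the proofs, lists all permutations, keeps those of maximal length in their conjugacy class and groups them into $\approx$-classes via length-preserving conjugation by adjacent transpositions (two elements of equal length related by such a step are $\approx$-equivalent, cf.~\Cref{thm:arrow_and_length} below).
\begin{verbatim}
from itertools import permutations

def length(p):
    # Coxeter length of a permutation of {0, ..., n-1} = number of inversions
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def cycle_type(p):
    seen, parts = set(), []
    for i in range(len(p)):
        if i not in seen:
            c, j = 0, i
            while j not in seen:
                seen.add(j); j = p[j]; c += 1
            parts.append(c)
    return tuple(sorted(parts, reverse=True))

def conj(p, i):
    # conjugate p by the adjacent transposition s_i = (i, i+1)
    s = lambda x: i + 1 if x == i else (i if x == i + 1 else x)
    return tuple(s(p[s(x)]) for x in range(len(p)))

def max_classes(n):
    perms = list(permutations(range(n)))
    longest = {}
    for p in perms:                       # maximal length in each conjugacy class
        t = cycle_type(p)
        longest[t] = max(longest.get(t, -1), length(p))
    w_max = [p for p in perms if length(p) == longest[cycle_type(p)]]
    classes, seen = [], set()
    for p in w_max:                       # connected components are the classes
        if p in seen:
            continue
        comp, stack = set(), [p]
        while stack:
            q = stack.pop()
            if q in comp:
                continue
            comp.add(q)
            stack += [conj(q, i) for i in range(n - 1)
                      if length(conj(q, i)) == length(q)]
        seen |= comp
        classes.append(comp)
    return classes

# len(max_classes(3)) == 3, in agreement with the example above
\end{verbatim}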
The remainder of this \namecref{sec:preliminaries} is devoted to showing that $\na(\Sigma) = \Sigma$ for all $\Sigma\in \smaxa$. This result will be useful in \Cref{sec:equivalence_classes}. In order to obtain it, we further study the quotient sets of $W_{\delta, \min}$ and $W_{\delta, \max}$ under $\approx_\delta$. Define $\delta' := \na \circ \delta$. Then $\delta'$ is a $W$-automorphism with $\delta'(S) = S$ as well. The Bruhat order antiautomorphism $w\mapsto ww_0$ from \Cref{thm:w0_maps} relates $\faktor{W_{\delta, \min}}{\approx_\delta}$ to $\faktor{W_{\delta', \max}}{{\approx_{\delta'}}}$. \begin{lem}[{\cite[Section 2.2]{He2015}}] \label{thm:delta_min_and_delta_max} The map $\faktor{W_{\delta, \min}}{\approx_\delta} \to \faktor{W_{\delta', \max}}{{\approx_{\delta'}}}$, $\Sigma \mapsto \Sigma w_0$ is a bijection. \end{lem} We now come to parametrizations of $\wfakt{\delta}{\min}$ and $\wfakt{\delta}{\max}$, which are due to He. A $\delta$-conjugacy class $\mc O \in \cl(W)_\delta$ is called \emph{elliptic} (or \emph{cuspidal}) if $\mc O \cap W_I = \emptyset$ for all $I \subsetneq S$ such that $\delta(I) = I$. Define \begin{align*} \Gamma_\delta := \set{(I,C) \mid \text{$I\subseteq S$, $I = \delta(I)$ and $C\in \cl(W_I)_\delta$ is elliptic} }. \end{align*} \begin{prop}[{\cite[Corollaries 4.2 and 4.3]{He2015}}] \label{thm:parametrizations_of_he} The maps \begin{align*} \begin{aligned} \Gamma_\delta &\to \wfakt{\delta}{\min} \\ (I,C) &\mapsto C_{\min} \end{aligned} \quad \text{and} \quad \begin{aligned} \Gamma_{\delta'} &\to \wfakt{\delta}{\max} \\ (I,C) &\mapsto C_{\min}w_0 \end{aligned} \end{align*} are bijections. \end{prop} The quotient sets $\wfakt{\delta}{\min}$ and $\wfakt{\delta}{\max}$ are given rather implicitly, both by their definitions and by \Cref{thm:parametrizations_of_he}. The aim of this paper is to describe $\smaxa$ more explicitly. In \Cref{sec:parametrization} we obtain a parametrization of $\smaxa$ by certain kinds of compositions and a set of representatives of $\smaxa$. Then in \Cref{sec:equivalence_classes} we combinatorially describe some of the elements of $\smaxa$. Such matters are not discussed in \cite{He2015}. From \Cref{thm:parametrizations_of_he} we deduce the following invariance properties. \begin{lem} \label{thm:delta_fixes_equivalence_classes} \hfil \begin{enumerate} \item We have $\delta(\Sigma) = \Sigma$ for each $\Sigma \in \wfakt{\delta}{\min}$. \item We have $\delta'(\Sigma) = \Sigma$ for each $\Sigma \in \wfakt{\delta}{\max}$. \end{enumerate} \end{lem} \begin{proof} \begin{wideenumerate} \item Let $\Sigma \in \wfakt{\delta}{\min}$ and $w\in \Sigma$. By \Cref{thm:parametrizations_of_he} there exists a tuple $(I,C) \in \Gamma_\delta$ such that $C \in \cl(W_I)_\delta$ and $\Sigma = C_{\min}$. Hence $w \in W_I$ and therefore $w^{-1} \in W_I$. It follows that \begin{align*} \delta(w) = w^{-1} w \delta(w^{-1})^{-1} \in C. \end{align*} Moreover, $\ell(\delta(w)) = \ell(w)$ because $\delta$ is a Bruhat order automorphism by \Cref{thm:delta_is_diagram_automorphism}. Therefore, $\delta(w) \in C_{\min{}} = \Sigma$. Hence, $\delta(\Sigma) = \Sigma$. \item Let $\Sigma \in \wfakt{\delta}{\max}$. From \Cref{thm:delta_min_and_delta_max} it follows that $\Sigma w_0 \in \wfakt{\delta'}{\min}$. Hence, \begin{align*} \delta'(\Sigma)w_0 = \delta'(\Sigma w_0) = \Sigma w_0, \end{align*} where we use that $\delta'$ is a group homomorphism with $\delta'(w_0) = w_0$ for the first and Part~(1) for the second equality. Now multiply on the right by $w_0$.
\qedhere \end{wideenumerate} \end{proof} Setting $W = \SG_n$ and $\delta = \id$ in the second part of \Cref{thm:delta_fixes_equivalence_classes} yields the desired result on $\smaxa$ and $\na$. \begin{cor} \label{thm:conjugation_with_w0_and_equivalence_classes} We have $\na(\Sigma) = \Sigma$ for each $\Sigma \in \smaxa$. \end{cor} \section{A parametrization by compositions} \label{sec:parametrization} The goal of this \namecref{sec:parametrization} is to obtain a new parametrization of $\smaxa$ and, by \Cref{thm:basis_of_center}, of He's basis of $Z(\He)$. The parameters will be compositions of the following kind. \begin{defi}[{Kim, \cite{Kim1998}}] \label{def:maximal_composition} Let $\alpha = \parts{\alpha}{l} \vDash n$. We call $\alpha$ \emph{maximal} and write $\alpha \vDash_e n$ if there exists a $k$ with $0\leq k \leq l$ such that $\alpha_i$ is even for $i\leq k$, $\alpha_i$ is odd for $i>k$ and $\alpha_{k+1} \geq \alpha_{k+2} \geq \dots \geq \alpha_l$. \end{defi} For example, among the two compositions $(4, 6, 2, 3, 1, 1)$ and $(6, 4, 3, 2, 1, 1)$ of $17$, only the first one is maximal. The parametrization corresponds to a set of representatives of $\smaxa$ given by \emph{elements in stair form}: \begin{defi}[{Kim, \cite{Kim1998}}] \label{thm:element_in_stair_form} Let $\alpha = \parts{\alpha}{l} \vDash n$. We define the list $(x_1, x_2, \dots, x_n)$ by setting $x_{2i-1} := i$ and $x_{2i} := n-i+1$. The \emph{element in stair form $\sigma_\alpha \in \SG_n$ corresponding to $\alpha$} is given by \begin{align*} \sigma_\alpha := \sigma_{\alpha_1} \sigma_{\alpha_2} \cdots \sigma_{\alpha_l} \end{align*} where $\sigma_{\alpha_i}$ is the $\alpha_i$-cycle \begin{align*} \sigma_{\alpha_i} := \left( x_{\alpha_1 + \dots + \alpha_{i-1} + 1}, x_{\alpha_1 + \dots + \alpha_{i-1} + 2}, \dots , x_{\alpha_1 + \dots + \alpha_{i-1} + \alpha_i} \right ). \end{align*} \end{defi} For instance, $\sigma_{(4,2)} = (1, 6, 2, 5)(3,4)$. We obtain $\sigma_\alpha$ for $\alpha = \parts{\alpha}l\vDash n$ as follows. Let $d_i := \sum_{j=1}^{i}\alpha_j$ for $i = 1,\dots, l$ and consider the list $(x_1, x_2, \dots, x_n)$ given as above. Then split the list between $x_{d_i}$ and $x_{d_i+1}$ for $i=1,\dots, l-1$. The resulting sublists are the cycles of $\sigma_\alpha$. In particular, if $\alpha$ and $\beta$ are compositions with $\sigma_\alpha = \sigma_\beta$ then $\alpha = \beta$. The term \emph{maximal} is justified by the following result, which goes back to Kim \cite{Kim1998} and was proven by Geck, Kim and Pfeiffer \cite[Theorem 3.3]{Geck2000}. \begin{lem} \label{thm:elememt_in_stair_form_maximality} Let $\alpha \vDash n$. Then $\sigma_\alpha \in (\SG_n)_{\max}$ if and only if $\alpha$ is a maximal composition. \end{lem} \begin{defi} For $\alpha \vDash_e n$ define $\Sigma_\alpha \in \smaxa$ to be the $\approx$-equivalence class of the element in stair form $\sigma_\alpha$. \end{defi} Thanks to \Cref{thm:elememt_in_stair_form_maximality} this is well defined. We now come to the parametrization of $\smaxa$. \begin{thm} \label{thm:parametrizations_of_kim} The map \begin{align*} \set{\alpha \vDash_e n} \to \smaxa, \quad \alpha \mapsto \Sigma_\alpha \end{align*} is a bijection. In particular, $\set{\sigma_\alpha \mid \alpha \vDash_e n}$ is a complete set of representatives of $\smaxa$.
\end{thm} \begin{exa} For $n=3$ we have \begin{align*} \begin{array}{r|ccc} \alpha \vDash_e 3 & (3) & (2,1) & (1^3) \\ \hline \sigma_\alpha & (1,3,2) & (1,3) & 1 \\ \end{array} \end{align*} which by \Cref{exa:delta_quotient_sets} is a complete set of representatives of $\faktor{(\SG_3)_{\max}}{\approx}$. \end{exa} Before we begin with the proof of \Cref{thm:parametrizations_of_kim}, we discuss some immediate consequences. By combining \Cref{thm:basis_of_center} and \Cref{thm:parametrizations_of_kim}, we obtain the following. \begin{cor} \label{thm:basis_of_center_from_elements_in_stair_form} The elements $\ngen_{\leq \Sigma_\alpha}$ for $\alpha \vDash_e n$ form a basis of $Z(\He)$. \end{cor} This leads to an alternative proof of Brichard's dimension formula. \begin{cor}[{\cite[Section 5.1]{Brichard2008}}] \label{thm:dimension_of_center_and_cocenter} The dimension of $Z(\He)$ equals \begin{align*} \sum_{\lambda \vdash n} \frac{n_\lambda !}{m_\lambda} \end{align*} where for $\lambda = (1^{k_1}, 2^{k_2},\dots) \vdash n$, $m_\lambda := \prod_{i\geq 1} k_{2i}!$ and $n_\lambda := \sum_{i\geq 1} k_{2i}$ is the number of even parts of $\lambda$. \end{cor} \begin{proof} Each summand is the number of maximal compositions that have the same multiset of parts as $\lambda \vdash n$. Hence, the sum is the number of maximal compositions of~$n$. By \Cref{thm:basis_of_center_from_elements_in_stair_form} this is the dimension of $Z({\He})$. \end{proof} \begin{rem} From \Cref{thm:parametrizations_of_kim} and \Cref{thm:delta_min_and_delta_max} it follows that we have a bijection $\set{\alpha \vDash_e n} \to \sfakt\na\min, \alpha \mapsto [\sigma_\alpha w_0]_\na$, where $[\sigma_\alpha w_0]_\na$ denotes the $\approx_\na$-equivalence class of $\sigma_\alpha w_0$. That is, the elements $\sigma_\alpha w_0$ for $\alpha \vDash_e n$ form a system of representatives of $\sfakt\na\min$. By {\cite[Theorem 6.5]{He2015}}, a basis of the $\na$-cocenter of $\He$ is given by such a system. \end{rem} We now come to the proof of \Cref{thm:parametrizations_of_kim}. Because of \Cref{thm:elememt_in_stair_form_maximality}, it remains to show the following. \begin{enumerate}[label = \textup{(}\alph*\textup{)}] \item \label{enum:parametrization_of_kim_surjectivity} For each $\Sigma \in \smaxa$ there is an $\alpha \vDash_e n$ such that $\sigma_\alpha \in \Sigma$. \item \label{enum:parametrization_of_kim_injectivity} If $\alpha,\beta \vDash_e n$ and $\sigma_\alpha \approx \sigma_\beta$ then $\alpha = \beta$. \end{enumerate} \Cref{thm:basis_of_center} and Brichard's dimension formula, \Cref{thm:dimension_of_center_and_cocenter}, imply that \begin{align*} \card{\smaxa} = \dim Z(\He) = \card{\set{\alpha \vDash_e n}}. \end{align*} Therefore, by simply citing the dimension formula from \cite{Brichard2008}, it would suffice to prove either \ref{enum:parametrization_of_kim_surjectivity} or \ref{enum:parametrization_of_kim_injectivity}. However, we choose to show both statements here as both proofs involve intermediate results that will be useful in the next \namecref{sec:equivalence_classes}. In order to prove Statement~\ref{enum:parametrization_of_kim_surjectivity} we need the following result. \begin{lem} \label{thm:arrow_and_length} Let $W$ be a finite Coxeter group and $w,w'\in W$ be such that $w\rightarrow w'$ and $\ell(w) = \ell(w')$. Then $w\approx w'$. \end{lem} \begin{proof} Let $S$ be the set of Coxeter generators of $W$.
It suffices to consider the case where $w \overset{s}\rightarrow w'$ for some $s\in S$ because by definition $\rightarrow$ is the reflexive and transitive closure of the relations $\overset{t}\rightarrow$ with $t \in S$. Then $w' = s w s$. Thus, $w = s w' s$ and since $\ell(w) = \ell(w')$, we have $w'\overset{s}\rightarrow w$. Hence $w\approx w'$. \end{proof} \begin{proof}[Proof of Statement~\ref{enum:parametrization_of_kim_surjectivity}] Let $\Sigma \in \smaxa$ and $\sigma\in \Sigma$. In \cite[Section 3]{Kim1998} it is shown that there is a $\beta \vDash n$ such that $\sigma_\beta \rightarrow \sigma$. Moreover, Statement~(a$''$) of Section~3.1 in \cite{Geck2000} provides the existence of an $\alpha \vDash_e n$ such that $\sigma_\alpha \rightarrow \sigma_\beta$. Therefore, $\sigma_\alpha \rightarrow \sigma$. Hence, $\sigma_\alpha$ and $\sigma$ are conjugate and $\ell(\sigma_\alpha) \geq \ell(\sigma)$. But the length of $\sigma$ is maximal in its conjugacy class. Hence, $\ell(\sigma_\alpha) = \ell(\sigma)$ and \Cref{thm:arrow_and_length} yields $\sigma_\alpha \approx \sigma$. \end{proof} We begin working towards Statement \ref{enum:parametrization_of_kim_injectivity}. As before, we will trace the relation $\approx$ back to the elementary steps $\overset{s_i}{\rightarrow}$ with $i\in [n-1]$. Consider $\sigma\in \SG_n$ and $\tau = s_i \sigma s_i$. Then we have $\tau \overset{s_i}{\rightarrow} \sigma$ or $\sigma \overset{s_i}{\rightarrow} \tau$ depending on $\ell(s_i \sigma s_i) - \ell(\sigma)$. Moreover $\sigma \approx \tau$ if and only if the difference vanishes. Thus our first goal is to determine $\ell(s_i \sigma s_i) - \ell(\sigma)$ depending on $\sigma$ and $s_i$. \begin{lem} \label{thm:length_and_conjugation_lem} Let $\sigma \in \SG_n$ and $i, j\in [n-1]$. Then $\set{\sigma(i),\sigma(i+1)} \neq \set{j,j+1}$ if and only if $\left( s_j\in D_L(\sigma) \iff s_j\in D_L(\sigma s_i)\right)$. \end{lem} \begin{proof} We consider all permutations in one-line notation. From \Cref{eq:descent_sets_of_Sn} it follows for each $\sigma \in \SG_n$ that $s_j\in D_L(\sigma)$ if and only if $j+1$ is left of~$j$ in $\sigma$. Now fix $\sigma \in \SG_n$. Observe that we obtain $\sigma s_i$ from $\sigma$ by swapping $\sigma(i)$ and $\sigma(i+1)$. Since these are two consecutive entries in the one-line notation of $\sigma$, the relative positioning of~$j$ and $j+1$ is affected by this interchange if and only if $\set{\sigma(i),\sigma(i+1)} = \set{j,j+1}$. Now the observation on left descents from the beginning of the proof yields the claim. \end{proof} \begin{lem} \label{thm:length_and_conjugation} Let $\sigma \in \SG_n$ and $i\in [n-1]$. \begin{enumerate} \item If $\set{\sigma(i),\sigma(i+1)} \neq \set{i,i+1}$ then \begin{align*} \ell(s_i \sigma s_i) = \begin{cases} \ell(\sigma) -2 &\myif \sigma(i) > \sigma(i+1) \text{ and } \sigma\inv(i) > \sigma\inv(i+1), \\ \ell(\sigma) +2 &\myif \sigma(i) < \sigma(i+1) \text{ and } \sigma\inv(i) < \sigma\inv(i+1), \\ \ell(\sigma) &\text{else.} \end{cases} \end{align*} \item If $\set{\sigma(i),\sigma(i+1)} = \set{i,i+1}$ then $i$ and $i+1$ are either fixed points of $\sigma$ or form a~$2$-cycle of $\sigma$. In particular, $s_i \sigma s_i = \sigma$. \end{enumerate} \end{lem} \begin{proof} Part~$(2)$ should be clear. For Part~$(1)$ assume that $\set{\sigma(i),\sigma(i+1)} \neq \set{i,i+1}$. We have that \begin{align*} \ell(s_i \sigma s_i) - \ell(\sigma)= \ell(s_i \sigma s_i) - \ell(\sigma s_i) + \ell(\sigma s_i) - \ell(\sigma).
\end{align*} \Cref{eq:descent_sets_W} yields that each of the two differences on the right hand side is $-1$ or~$1$ depending on the truth value of the statements $s_i \in D_L(\sigma s_i)$ and $s_i \in D_R(\sigma)$, respectively. From \Cref{thm:length_and_conjugation_lem} we have that $s_i\in D_L(\sigma s_i)$ if and only if $s_i\in D_L(\sigma)$. That is, the first difference depends on whether $s_i \in D_L(\sigma)$ or not. Thus, \Cref{eq:descent_sets_of_Sn} implies the claim. \end{proof} We now show for each $\alpha\vDash_e n$ that all elements of $\Sigma_\alpha$ have the same orbits of even length on $[n]$. \begin{lem} \label{thm:orbits_of_maximal_classes} Let $\alpha \vDash_e n$ and $\sigma \in \SG_n$ such that $\sigma_\alpha \approx \sigma$. Then we have the following. \begin{enumerate} \item The orbits of even length of $\sigma$ and $\sigma_\alpha$ on $[n]$ coincide. \item Let $\mc O$ be a $\sigma$-orbit on $[n]$ of even length. Then the orbits of $\sigma^2$ and $\sigma_\alpha^2$ on $\mc O$ coincide. \end{enumerate} \end{lem} \begin{proof} Since $\sigma_\alpha \approx \sigma$, we have $\sigma_\alpha \rightarrow \sigma$ and $\ell(\sigma_\alpha) = \ell(\sigma)$. Using induction on the minimal number of elementary steps $w \overset{s}\rightarrow w'$ (with some $w,w'\in \SG_n$ and $s \in S$) necessary to relate $\sigma_\alpha$ to $\sigma$, we may assume that there are $\tau \in \SG_n$ and $s_i\in S$ such that $\sigma_\alpha \rightarrow \tau \overset{s_i}\rightarrow \sigma$ and $\tau$ satisfies~$(1)$ and~$(2)$ ($\sigma_\alpha$ certainly does). Then $\ell(\sigma_\alpha) \geq \ell(\tau) \geq \ell(\sigma)$ so that in fact $\ell(\sigma_\alpha) = \ell(\tau) = \ell(\sigma)$ and $\sigma_\alpha \approx \tau \approx \sigma$ by \Cref{thm:arrow_and_length}. It remains to show that $\overset{s_i}\rightarrow$ transfers Properties~$(1)$ and~$(2)$ from $\tau$ to $\sigma$. Because $\sigma = s_i \tau s_i$, we obtain $\sigma$ from $\tau$ by interchanging~$i$ and $i+1$ in the cycle notation of $\tau$. If~$i$ and $i+1$ both appear in orbits of odd length of $\tau$ then~$(1)$ and~$(2)$ are not affected by this interchange. Thus, we are left with two cases. \textbf{Case 1.} Assume that~$i$ and $i+1$ appear in different orbits of $\tau$, say $\mc O_1$ and $\mc O_2$, such that at least one of them, say $\mc O_1$, has even length. We show that this case does not occur. To do this, let $m_1$ and $m_2$ be the minimal elements of $\mc O_1$ and $\mc O_2$, respectively. If $\mc O_2$ also has even length, we assume $m_1 < m_2$. For $w\in \SG_n$ and $j\in [n]$ let $\langle w \rangle$ denote the subgroup of $\SG_n$ generated by~$w$ and $\langle w \rangle j$ be the orbit of~$j$ under the natural action of $\langle w \rangle$ on $[n]$. Since $\tau$ satisfies Property~$(2)$ and $\mc O_1$ has even length, there is a $p_1 \geq m_1$ such that \begin{align} \label{eq:orbits_of_tau} \begin{aligned} \mc O_1^< & := \langle \tau^2 \rangle m_1 = \langle \sigma_\alpha^2 \rangle m_1 = \left\{m_1, m_1+1,\dots, p_1 \right\},\\ \mc O_1^>& := \langle \tau^2 \rangle \tau(m_1) = \langle \sigma_\alpha^2 \rangle \sigma_\alpha(m_1) = \left\{n-m_1+1, n-m_1,\dots, n-p_1+1 \right\}. \end{aligned} \end{align} \begin{cla*} Let $a \in \mc O_1^<, b \in \mc O_2$ and $c\in \mc O_1^>$. Then $a<b<c$. \end{cla*} To prove the claim, consider the positions of elements of $[n]$ in the cycle notation $\sigma_\alpha =\sigma_{\alpha_1} \cdots \sigma_{\alpha_l}$ given by \Cref{thm:element_in_stair_form}.
The elements on odd positions $1, 2, 3,\dots$ form a strictly increasing sequence. The elements on even positions $n, n-1, \dots $ form a strictly decreasing sequence, but they are always greater than the entries on odd positions. We want to show that the elements of $\mc O_2$ all appear right of the cycle consisting of the elements of $\mc O_1$. If $\mc O_2$ has even length this is clear. If $\mc O_2$ has odd length, we can use that by Property~$(1)$, the unions of odd orbits of $\tau$ and $\sigma_\alpha$ coincide and that in $\sigma_\alpha$ the elements of odd orbits are all located right of the elements of the even orbits. Let $a\in \mc O^<_1$. Then~$a$ is on an odd position and thus it is smaller than any entry right of it. On the other hand, $c\in \mc O^>_1$ implies that~$c$ is on an even position and thus is greater than any entry right of it. Finally, in the last paragraph we have shown that each $b \in \mc O_2$ is located right of~$a$ and~$c$. This establishes the claim. Now, we have to deal with two cases. If $i\in \mc O_1$ and $i+1 \in \mc O_2$ then the claim implies $i\in \mc O^<_1$. Then $\tau^{-1}(i),\tau(i) \in \mc O^>_1$. Since $\tau^{-1}(i+1),\tau(i+1) \in \mc O_2$, our claim yields $\tau^{-1}(i) > \tau^{-1}(i+1)$ and $\tau(i) > \tau(i+1)$. In addition, since $\mc O_1$ has even length and $i+1\not \in \mc O_1$, $\tau(i) \neq i,i+1$. Thus, we obtain from \Cref{thm:length_and_conjugation} that $\ell(\sigma) < \ell(\tau)$, a contradiction to $\ell(\tau) = \ell(\sigma)$. If $i+1\in \mc O_1$ and $i \in \mc O_2$ then the claim implies $i+1\in \mc O^>_1$ and similarly as before we obtain $\tau^{-1}(i) > \tau^{-1}(i+1)$ and $\tau(i) > \tau(i+1)$ and thus the same contradiction using \Cref{thm:length_and_conjugation}. That is, we have shown that~$i$ and $i+1$ cannot appear in two different orbits if one of the latter has even length. \textbf{Case 2.} Assume that~$i$ and $i+1$ appear in the same orbit $\mc O_1$ of $\tau$, which has even length. Then~$(1)$ also holds for $\sigma$. To show~$(2)$, assume $i+1 \in \langle \tau^2 \rangle i$ first. Then both elements appear in the same cycle of $\tau^2$. As we obtain $\sigma^2$ from $\tau^2$ by swapping~$i$ and $i+1$ in cycle notation,~$(2)$ also holds for $\sigma$. Lastly, we show that $i+1 \in \langle \tau^2 \rangle i$ is always true. For the sake of contradiction, assume $i+1 \not \in \langle \tau^2 \rangle i$. Suppose in addition that $\card{\mc O_1} = 2$. Then $\set{\tau(i),\tau(i+1)} = \set{i,i+1}$ and from \Cref{thm:length_and_conjugation} we obtain $\sigma = s_i\tau s_i = \tau$. This contradicts the minimality of the sequence of arrow relations from $\sigma_\alpha$ to $\sigma$. Now suppose $\card{\mc O_1} > 2$. Then $\set{\tau(i),\tau(i+1)} \neq \set{i,i+1}$. Since $i+1 \not \in \langle \tau^2 \rangle i$, it follows from \Cref{eq:orbits_of_tau} that $i = \max \mc O^<_1$ and $i+1= \min \mc O^>_1$. Consequently, $\tau^{-1}(i), \tau(i) \in \mc O_1^>$ and $\tau^{-1}(i+1), \tau(i+1) \in \mc O_1^<$. But this means that \begin{align*} \tau^{-1}(i)> \tau^{-1}(i+1) \text{ and }\tau(i)> \tau(i+1). \end{align*} Because $\set{\tau(i),\tau(i+1)} \neq \set{i,i+1}$, we can now apply \Cref{thm:length_and_conjugation} and obtain that $\ell(\sigma) < \ell(\tau)$. Again, we end up with a contradiction. \end{proof} Let $\sigma\in \SG_n$. Then the set of orbits of $\sigma$ on $[n]$ is a set partition of $[n]$. We denote this partition by $\partition(\sigma)$.
The set of even orbits of $\sigma$ is given by \begin{align*} \partition_e(\sigma) := \set{\mc O \in \partition(\sigma) \mid \text{$|\mc O|$ is even}}. \end{align*} If $\partition(\sigma) = \partition(\sigma')$ for $\sigma,\sigma'\in \SG_n$ then $\sigma$ and $\sigma'$ have the same type, \ie they are conjugate. \begin{lem} \label{thm:element_in_stair_form_same_even_orbits_implies_equality} Let $\alpha,\beta \vDash_e n$ such that $\sigma_\alpha$ and $\sigma_\beta$ are conjugate. If $\partition_e(\sigma_\alpha) = \partition_e(\sigma_\beta)$ then $\alpha = \beta$. \end{lem} \begin{proof} Let $\alpha=\parts{\alpha}l, \beta=\parts{\beta}{l'} \vDash_e n$ and $(x_1, x_2, \dots, x_n)$ be the sequence with $x_{2i-1} = i$ and $x_{2i} = n-i+1$. Since $\alpha$ is maximal, there is a $k\in [0,l]$ such that $\alpha_i$ is even for $i\leq k$ and odd for $i>k$. Assume that $\sigma_\alpha$ and $\sigma_\beta$ are conjugate and $\partition_e(\sigma_\alpha) = \partition_e(\sigma_\beta)$. Because $\sigma_\alpha$ and $\sigma_\beta$ are conjugate, $\alpha$ and $\beta$ have the same multiset of parts. In particular, $l = l'$. Since $\alpha$ and $\beta$ are maximal, the odd parts of $\alpha$ and $\beta$ form a weakly decreasing sequence at the end of $\alpha$ and $\beta$, respectively. As both compositions have the same length and multiset of parts, it follows that $\alpha_i = \beta_i$ for $i = k+1, \dots, l$. We show that $\alpha_i = \beta_i$ for $i=1,\dots, k$ by induction. Assume that $i\in [k]$ and $\alpha_j = \beta_j$ for all $1\leq j<i$. Define $d := \sum_{j=1}^{i-1} \alpha_j$. Then by assumption $d = \sum_{j=1}^{i-1} \beta_j$. Moreover, let $\mc O_{\alpha_{i}}$ and $\mc O_{\beta_{i}}$ be the orbits of $x_{d+1}$ under $\sigma_\alpha$ and $\sigma_\beta$, respectively. From the definition of elements in stair form it follows that \begin{align*} \mc O_{\alpha_{i}} = \set{ x_{d+1}, x_{d+2} , \dots, x_{d+\alpha_i}}, \\ \mc O_{\beta_{i}} = \set{ x_{d+1}, x_{d+2} , \dots, x_{d+\beta_i}}. \end{align*} In particular $|\mc O_{\alpha_{i}}| = \alpha_i$ and $|\mc O_{\beta_{i}}| = \beta_i.$ Since $i\leq k$, $\alpha_i$ and $\beta_i$ are even. Consequently, $\mc O_{\alpha_{i}}$ and $\mc O_{\beta_{i}}$ both have even length. Moreover, they have the element $x_{d+1}$ in common. Hence, $\partition_e(\sigma_\alpha) = \partition_e(\sigma_\beta)$ implies $\mc O_{\alpha_{i}} = \mc O_{\beta_{i}}$. Thus, $\alpha_i = |\mc O_{\alpha_{i}}|= |\mc O_{\beta_{i}}| = \beta_i$. \end{proof} We are now in a position to prove Statement~\ref{enum:parametrization_of_kim_injectivity}, which finishes the proof of \Cref{thm:parametrizations_of_kim}. \begin{proof}[Proof of Statement~\ref{enum:parametrization_of_kim_injectivity}] Let $\alpha,\beta \vDash_e n$ such that $\sigma_\alpha \approx \sigma_\beta$. Then $\sigma_\alpha$ and $\sigma_\beta$ are conjugate. Moreover, \Cref{thm:orbits_of_maximal_classes} implies $\partition_e(\sigma_\alpha) = \partition_e(\sigma_\beta)$. Hence $\alpha = \beta$ by \Cref{thm:element_in_stair_form_same_even_orbits_implies_equality}. \end{proof} We use some of the intermediate results that led to \Cref{thm:parametrizations_of_kim} in order to prepare a result for later use in \Cref{sec:equivalence_classes:inductive_product}. \begin{prop} \label{thm:characterization_of_Sigma_using_length} Let $\alpha \vDash_e n$ and $\sigma \in \SG_n$.
Then $\sigma \in \Sigma_\alpha$ if and only if \begin{enumerate} \item $\sigma$ and $\sigma_\alpha$ are conjugate in $\SG_n$, \item $\ell(\sigma) = \ell(\sigma_\alpha)$, \item $\partition_e(\sigma) = \partition_e(\sigma_\alpha)$. \end{enumerate} \end{prop} \begin{proof} First, assume $\sigma \in \Sigma_\alpha$. Because $\sigma_\alpha \in \Sigma_\alpha$ and $\Sigma_\alpha\in \smaxa$, $\sigma$ satisfies $(1)$ and $(2)$. By \Cref{thm:orbits_of_maximal_classes}, $(3)$ holds as well. Second, assume that $\sigma$ satisfies $(1) - (3)$. By $(1)$, $\sigma$ is in the same conjugacy class as $\sigma_\alpha$. From $(2)$ it follows, that $\sigma$ is maximal in its conjugacy class. Then \Cref{thm:parametrizations_of_kim} provides the existence of a $\beta\vDash_e n$ such that $\sigma \in \Sigma_\beta$. Using the already proven implication from left to right, we obtain that $\sigma$ and $\sigma_\beta$ are conjugate and $\partition_e(\sigma) = \partition_e(\sigma_\beta)$. But as $\sigma$ satisfies $(1)$ and $(3)$, it follows that $\sigma_{\beta}$ and $\sigma_\alpha$ are conjugate and $\partition_e(\sigma_\beta) = \partition_e(\sigma_\alpha)$. Thus, \Cref{thm:element_in_stair_form_same_even_orbits_implies_equality} yields $\beta = \alpha$ as desired. \end{proof} We end this \namecref{sec:parametrization} with a remark on conjugacy classes. \begin{rem} The conjugacy classes of $\SG_n$ are parametrized by the partitions of $n$ via the cycle type. For a composition $\alpha$ we denote the partition obtained by sorting the parts of $\alpha$ in decreasing order by $\widetilde \alpha$. Let $\lambda \vdash n$ and $\mc O$ be the conjugacy class whose elements have cycle type $\lambda$. From \Cref{thm:element_in_stair_form} it follows that for $\alpha \vDash_e n$ the element in stair form $\sigma_\alpha$ is contained in $\mc O$ if and only if $\widetilde \alpha = \lambda$. Hence, \Cref{thm:parametrizations_of_kim} implies that $\set{\sigma_\alpha \mid \alpha \vDash_e n, \widetilde \alpha = \lambda}$ is a complete set of representatives of $\faktid{\mc O}{\max}$. In particular, we have that \begin{center} $\left|\faktid{\mc O}{\max}\right| = 1$ if and only if the even parts of $\lambda$ are all equal. \end{center} \end{rem} \section{\texorpdfstring{Equivalence classes of $(\SG_n)_{\max}$ under $\approx$}{Equivalence classes}} \label{sec:equivalence_classes} Recall that for $\alpha \vDash_e n$, $\Sigma_\alpha$ is the $\approx$-equivalence class of the element in stair form $\sigma_\alpha$. From \Cref{thm:parametrizations_of_kim} we have that $\smaxa = \set{ \Sigma_\alpha \mid \alpha \vDash_e n}$. In \Cref{thm:basis_of_center_from_elements_in_stair_form} we concluded that the elements $\ngen_{\leq \Sigma_\alpha}$ for $\alpha \vDash_e n$ form a basis of $Z(\He)$. We emphasize that $\ngen_{\leq \Sigma_\alpha}$ directly depends on $\Sigma_\alpha$ since $\ngen_{\leq \Sigma_\alpha} = \sum_x \ngen_x$ where $x$ runs over the order ideal in Bruhat order generated by $\Sigma_\alpha$. This \namecref{sec:equivalence_classes} is devoted to the description of equivalence classes $\Sigma_\alpha$ and bijections between them. In \Cref{sec:equivalence_classes:one_part} we consider the case where $\alpha$ has only one part. The first result is the characterization of the elements of $\Sigma_{(n)}$ by properties of their cycle notation. From this we obtain bijections relating $\Sigma_{(n-1)}$ with $\Sigma_{(n)}$ for $n\geq 4$ and a closed formula for the cardinality of $\Sigma_{(n)}$. 
In \Cref{sec:equivalence_classes:odd_hook} we generalize the characterization of $\Sigma_{(n)}$ to odd hooks, where a hook $\alpha := (k,1^{n-k})$ is called \emph{odd} if~$k$ is odd and \emph{even} otherwise. Moreover, we define a bijection $\Sigma_{(k)} \times [m+1, n-m] \to \Sigma_{(k,1^{n-k})}$ where $m := \frac{k-1}2$. From this we obtain the cardinality of $\Sigma_{(k,1^{n-k})}$. In \Cref{sec:equivalence_classes:inductive_product} we consider the inductive product $\iprod$ that allows the decomposition $\Sigma_{(\alpha_1,\dots, \alpha_l)} =\Sigma_{(\alpha_1)} \iprod \Sigma_{(\alpha_2,\dots,\alpha_l)}$ if $\alpha_1$ is even. Using the results of the previous \namecref{sec:equivalence_classes:inductive_product}s, we infer a description of $\Sigma_\alpha$ for all $\alpha \vDash_e n$ whose odd parts form a hook. In particular, we obtain a characterization of $\Sigma_\alpha$ in the case where $\alpha$ is an even hook. \subsection{\texorpdfstring{Equivalence classes of~$n$-cycles}{Equivalence classes of n-cycles}} \label{sec:equivalence_classes:one_part} In this \namecref{sec:equivalence_classes:one_part} we seek a combinatorial description of the elements of $\Sigma_{(n)}$. Examples are given in \Cref{tbl:Sigma_(n)}. \begin{table} \caption{The elements of $\Sigma_{(n)}$ for small~$n$ with the element in stair form $\sigma_{(n)}$ in the top row.\\} \label{tbl:Sigma_(n)} $ \begin{array}{c|cccccc} \alpha & (1) & (2) & (3) & (4) & (5) & (6) \\ \hline \multirow{6}{*}{$\Sigma_\alpha$} & (1) & (1,2) & (1,3,2) & (1,4,2,3) & (1,5,2,4,3)& (1,6,2,5,3,4) \\ & & & (1,2,3) & (1,3,2,4) & (1,5,2,3,4) & (1,6,2,4,3,5) \\ & & & & & (1,5,3,2,4) & (1,6,3,4,2,5) \\ & & & & & (1,4,2,3,5) & (1,5,2,4,3,6) \\ & & & & & (1,4,3,2,5) & (1,5,3,4,2,6) \\ & & & & & (1,3,4,2,5) & (1,4,3,5,2,6)\\ \end{array} $ \end{table} The description is given by two properties: \emph{being oscillating} and \emph{having connected intervals}. We begin with the property of being oscillating. \begin{defi} \label{def:oscillating_n-cycle} We call the $n$-cycle $\sigma \in \SG_n$ \emph{oscillating} if there exists a positive integer $m \in \set{\frac{n-1}2, \frac{n}2, \frac{n+1}2}$ such that $\sigma([m]) = [n-m+1, n].$ \end{defi} In \Cref{thm:oscillating_n-cycle_cycle_notation} we will obtain a more descriptive characterization of oscillating $n$-cycles. It turns out that the $n$-cycle $\sigma$ of $\SG_n$ (represented in cycle notation) is oscillating if $n$ is even and the entries of $\sigma$ alternate between the sets $[1, \frac{n}2]$ and $[\frac{n}2 + 1, n]$ or $n$ is odd and after deleting the entry $\frac{n+1}2$ from $\sigma$ the remaining entries alternate between the sets $[\frac{n-1}2]$ and $[\frac{n+3}2,n]$. \begin{samepage} \begin{exa} \begin{enumerate} \item Recall that for $n\in \N$ the element in stair form $\sigma_{(n)}$ is an $n$-cycle of $\SG_n$. For \begin{align*} \text{ $\sigma_{(5)} = (1,5,2,4,3)$, \quad $\sigma_{(5)}\inv = (1,3,4,2,5)$ \quad and \quad $\sigma_{(6)} = (1,6,2,5,3,4)$ } \end{align*} we have \begin{align*} \text{ $\sigma_{(5)}([2]) = [4,5]$, \quad $\sigma_{(5)}\inv([3]) = [3,5]$ \quad and \quad $\sigma_{(6)}([3]) = [4,6]$. } \end{align*} Hence, they are oscillating and the integer $m$ used in \Cref{def:oscillating_n-cycle} is given by \begin{align*} \text{ $m = 2 = \frac{5-1}2$, \quad $m = 3 = \frac{5+1}2$ \quad and \quad $m = 3 = \frac{6}2$, } \end{align*} respectively. Note that the entries in the cycles alternate as described after \Cref{def:oscillating_n-cycle}. 
\item All the elements shown in \Cref{tbl:Sigma_(n)} are oscillating. \end{enumerate} \end{exa} \end{samepage} We explicitly write down the three cases for $m$ in \Cref{def:oscillating_n-cycle}. \begin{rem} Let $\sigma$ be an oscillating $n$-cycle $\sigma \in \SG_n$ with parameter $m$ from \Cref{def:oscillating_n-cycle}. Then we have \begin{enumerate} \item $n$ is even and $\sigma([\frac{n}2]) = [\frac{n}2 + 1, n]$ if $m = \frac{n}2$, \label{enum:def_oscillating_n-cycle_even} \item $n$ is odd and $\sigma([\frac{n-1}2]) = [\frac{n+3}2, n]$ if $m = \frac{n-1}2$, \label{enum:def_oscillating_n-cycle_odd_sigma(n+1/2)<n+1/2} \item $n$ is odd and $\sigma([\frac{n+1}2]) = [\frac{n+1}2, n]$ if $m = \frac{n+1}2$. \label{enum:def_oscillating_n-cycle_odd_sigma(n+1/2)>n+1/2} \end{enumerate} \end{rem} Our next aim is to give a characterization of the term \emph{oscillating} in \Cref{thm:oscillating_n-cycle_local}. By considering complements in $[n]$ we obtain the following. \begin{lem} \label{thm:oscillating_n_cycle_complements} Let $\sigma \in \SG_n$ be an $n$-cycle and $m \in [n]$. Then $\sigma([m]) = [n-m+1, n]$ if and only if $\sigma([m+1, n]) = [n-m]$. \end{lem} \Cref{thm:oscillating_n_cycle_complements} implies that an $n$-cycle $\sigma\in \SG_n$ is oscillating with parameter $m$ if and only if $\sigma([m+1, n]) = [n-m]$. \begin{lem} \label{thm:oscilating_n-cycle_and_inversion} Let $\sigma\in \SG_n$ be an $n$-cycle. Then $\sigma$ is oscillating if and only if $\sigma\inv$ is oscillating. \end{lem} \begin{proof} Let $M := \N \cap \set{\frac{n-1}2, \frac{n}2, \frac{n+1}2}$. If $n = 1$ then $\sigma = \id = \sigma\inv$ (which is oscillating). Thus assume $n\geq 2$. It suffices to show the implication from left to right. Suppose that $\sigma$ is oscillating. Then there is an $m \in M$ such that $\sigma([m]) = [n-m+1, n].$ Consequently, $\sigma([m+1, n]) = [n-m]$ by \Cref{thm:oscillating_n_cycle_complements} and hence \begin{align*} \sigma\inv([n-m]) = [m+1, n]. \end{align*} Moreover, $m+1 = n-(n-m)+1$ and we have $n-m \in M$ since $m \in M$ and $n\geq 2$. Therefore, $\sigma\inv$ is oscillating. \end{proof} In the following we rephrase \Cref{def:oscillating_n-cycle} from a more local point of view. \begin{lem} \label{thm:oscillating_n-cycle_local} Let $\sigma\in \SG_n$ be an $n$-cycle. We consider the four implications for all $i \in [n]$ \begin{enumerate} \item[] \begin{enumerate} \item $i < \frac {n+1}2 \implies \sigma(i) \geq \frac {n+1}2$, \label{enum:oscillating_n-cycle_characterization_implication_i<} \item $i < \frac {n+1}2 \implies \sigma^{-1}(i)\geq \frac {n+1}2$, \label{enum:oscillating_n-cycle_characterization_implication_i<_inv} \item $i > \frac {n+1}2 \implies \sigma(i) \leq \frac {n+1}2$, \label{enum:oscillating_n-cycle_characterization_implication_i>} \item $i > \frac {n+1}2 \implies \sigma^{-1}(i) \leq\frac {n+1}2 $, \label{enum:oscillating_n-cycle_characterization_implication_i>_inv} \end{enumerate} \end{enumerate} and if $n$ is odd the statement \begin{enumerate} \item[] \begin{enumerate}[label = \textup{(}\Alph*\textup{)}] \item either $\sigma^{-1}(\frac{n+1}2) > \frac{n+1}2$ or $\sigma(\frac{n+1}2) > \frac{n+1}2$. \label{enum:oscillating_n-cycle_characterization_xor_contidion} \end{enumerate} \end{enumerate} Then the following are equivalent. \begin{enumerate} \item $\sigma$ is oscillating. 
\item One of \ref{enum:oscillating_n-cycle_characterization_implication_i<} -- \ref{enum:oscillating_n-cycle_characterization_implication_i>_inv} is true and if $n$ is odd and $n\geq 3$ then also \ref{enum:oscillating_n-cycle_characterization_xor_contidion} is true. \item Each one of \ref{enum:oscillating_n-cycle_characterization_implication_i<} -- \ref{enum:oscillating_n-cycle_characterization_implication_i>_inv} is true and if $n$ is odd and $n\geq 3$ then also \ref{enum:oscillating_n-cycle_characterization_xor_contidion} is true. \end{enumerate} \end{lem} \begin{proof} First suppose that $n$ is odd. If $n=1$ then $\sigma= \id$ is oscillating and the implications \ref{enum:oscillating_n-cycle_characterization_implication_i<} -- \ref{enum:oscillating_n-cycle_characterization_implication_i>_inv} are trivially satisfied. Assume $n\geq 3$. We show for each of the implications (x) that \ref{enum:oscillating_n-cycle_characterization_xor_contidion} and (x) is true if and only if $\sigma$ is oscillating. As $n$ is odd and $n\geq 3$, Statement~\ref{enum:oscillating_n-cycle_characterization_xor_contidion} can be expanded as \begin{center} \begin{tabular}{cl} either &$\sigma\inv(\frac{n+1}2) > \frac{n+1}2$ and $\sigma(\frac{n+1}2) < \frac{n+1}2$ \\ or &$\sigma\inv(\frac{n+1}2) < \frac{n+1}2$ and $\sigma(\frac{n+1}2) > \frac{n+1}2$ . \end{tabular} \end{center} Moreover, \ref{enum:oscillating_n-cycle_characterization_implication_i<} can be rephrased as $\sigma([\frac{n-1}2]) \subseteq [\frac{n+1}2,n]$. Hence, we have \ref{enum:oscillating_n-cycle_characterization_xor_contidion} and \ref{enum:oscillating_n-cycle_characterization_implication_i<} if and only if \begin{center} \begin{tabular}{ccl} either & $\sigma([\frac{n-1}2]) = [\frac{n+3}2, n]$ &(if $\sigma\inv(\frac{n+1}2) > \frac{n+1}2$ and $\sigma(\frac{n+1}2) < \frac{n+1}2$) \\ or &$\sigma([\frac{n+1}2]) = [\frac{n+1}2, n]$ &(if $\sigma\inv(\frac{n+1}2) < \frac{n+1}2$ and $\sigma(\frac{n+1}2) > \frac{n+1}2$). \end{tabular} \end{center} In other words, $\sigma([m]) = [n-m+1, n]$ for either $m = \frac{n-1}2$ or $m =\frac{n+1}2$, \ie $\sigma$ is oscillating. Similarly, we have \ref{enum:oscillating_n-cycle_characterization_xor_contidion} and \ref{enum:oscillating_n-cycle_characterization_implication_i>} if and only if \begin{center} \begin{tabular}{llll} either &$\sigma([\frac{n+1}2,n]) = [\frac{n+1}2]$ & or & $\sigma([\frac{n+3}2,n]) = [\frac{n-1}2]$. \end{tabular} \end{center} That is, $\sigma([m+1,n]) = [n-m]$ for either $m = \frac{n-1}2$ or $m = \frac{n+1}2$. This is equivalent to $\sigma$ being oscillating by \Cref{thm:oscillating_n_cycle_complements}. So far we have shown that \begin{align} \text{ \ref{enum:oscillating_n-cycle_characterization_xor_contidion} and \ref{enum:oscillating_n-cycle_characterization_implication_i<} $\iff$ $\sigma$ is oscillating $\iff$ \ref{enum:oscillating_n-cycle_characterization_xor_contidion} and \ref{enum:oscillating_n-cycle_characterization_implication_i>}. } \label{eq:oscilating_n-cycle_and_implications} \end{align} By \Cref{thm:oscilating_n-cycle_and_inversion} we therefore also have \begin{align} \text{ \ref{enum:oscillating_n-cycle_characterization_xor_contidion} and \ref{enum:oscillating_n-cycle_characterization_implication_i<_inv} $\iff$ $\sigma$ is oscillating $\iff$ \ref{enum:oscillating_n-cycle_characterization_xor_contidion} and \ref{enum:oscillating_n-cycle_characterization_implication_i>_inv}. } \label{eq:oscilating_n-cycle_and_implications_inv} \end{align} This finishes the proof for odd $n$. 
Suppose now that $n$ is even. Note that $\frac{n+1}2 \not \in [n]$ as it is not an integer. It is not hard to see that the equivalences from \Cref{eq:oscilating_n-cycle_and_implications} and therefore those from \Cref{eq:oscilating_n-cycle_and_implications_inv} hold if we drop Statement~\ref{enum:oscillating_n-cycle_characterization_xor_contidion}. \end{proof} We continue with two consequences of \Cref{thm:oscillating_n-cycle_local}. We first infer the description of oscillating $n$-cycles mentioned at the beginning of the \namecref{sec:equivalence_classes:one_part}. \begin{cor} \label{thm:oscillating_n-cycle_cycle_notation} Let $\sigma \in \SG_n$ be an $n$-cycle. We consider $\sigma$ in cycle notation. Then $\sigma$ is oscillating if and only if one of the following is true. \begin{enumerate} \item $n$ is even and the entries of $\sigma$ alternate between the sets $\left[\frac{n}2\right]$ and $\left[\frac{n}2+1,n\right]$. \item $n$ is odd and after deleting the entry $\frac{n+1}2$ from $\sigma$, the remaining entries alternate between the sets $\left[\frac{n-1}2\right]$ and $\left[\frac{n+3}2, n\right]$. \end{enumerate} \end{cor} \begin{proof} With \ref{enum:oscillating_n-cycle_characterization_xor_contidion}, \ref{enum:oscillating_n-cycle_characterization_implication_i<} and \ref{enum:oscillating_n-cycle_characterization_implication_i>} we refer to the statements of \Cref{thm:oscillating_n-cycle_local}. Suppose that $n$ is even. By \Cref{thm:oscillating_n-cycle_local}, $\sigma$ is oscillating if and only if the implications \ref{enum:oscillating_n-cycle_characterization_implication_i<} and \ref{enum:oscillating_n-cycle_characterization_implication_i>} are satisfied which is the case if and only if the entries of $\sigma$ alternate between $[\frac{n}2]$ and $[\frac{n}2+1,n]$. Suppose that $n$ is odd. If $n\geq 3$ then property \ref{enum:oscillating_n-cycle_characterization_xor_contidion} states that one of the neighbors $\sigma\inv(\frac{n+1}2)$ and $\sigma(\frac{n+1}2)$ of $\frac{n+1}2$ in $\sigma$ is an element of $[\frac{n-1}2]$ and the other one is an element of $[\frac{n+3}2, n]$. Therefore, $\sigma$ satisfies \ref{enum:oscillating_n-cycle_characterization_xor_contidion}, \ref{enum:oscillating_n-cycle_characterization_implication_i<} and \ref{enum:oscillating_n-cycle_characterization_implication_i>} if and only if after deleting $\frac{n+1}2$ from the cycle notation of $\sigma$, the remaining entries alternate between the sets $[\frac{n-1}2]$ and $[\frac{n+3}2, n]$. Thus, \Cref{thm:oscillating_n-cycle_local} yields that the latter property is satisfied if and only if $\sigma$ is oscillating. \end{proof} By considering $\sigma$ in cycle notation beginning with $1$, we can rephrase \Cref{thm:oscillating_n-cycle_cycle_notation} in a more formal way. \begin{cor} \label{thm:reformulation_of_oscillating} Let $\sigma \in \SG_n$ be an~$n$-cycle. If~$n$ is odd, let $0\leq l \leq n-1$ be such that $\sigma^l(1) = \frac{n+1}2$. If~$n$ is even, set $l := \infty$. Then $\sigma$ is oscillating if and only if for all $0\leq k \leq n-1$ we have \begin{align*} \begin{aligned} \sigma^k(1) &< \frac{n+1}2 \quad \text{if $k<l$ and~$k$ is even or $k>l$ and~$k$ is odd}, \\ \sigma^k(1) &> \frac{n+1}2 \quad \text{if $k<l$ and~$k$ is odd or $k>l$ and~$k$ is even}. \end{aligned} \end{align*} \end{cor} We now consider the second property in the characterization of $\Sigma_{(n)}$: the property of \emph{having connected intervals}. 
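Before doing so, we note that the criterion of \Cref{thm:oscillating_n-cycle_cycle_notation} is easy to test by machine. The following Python sketch is only meant as an illustration; the function name \texttt{is\_oscillating} and the encoding of the cycle notation of an $n$-cycle as a list are ours and not part of the formal development.
\begin{verbatim}
def is_oscillating(cycle):
    # 'cycle' encodes an n-cycle in cycle notation, e.g. [1, 5, 2, 4, 3]
    # for (1,5,2,4,3).  After removing the entry (n+1)/2 (present only
    # when n is odd), the remaining entries must alternate cyclically
    # between the values below (n+1)/2 and the values above (n+1)/2.
    n = len(cycle)
    if n <= 2:
        return True
    mid = (n + 1) / 2                     # not an integer when n is even
    entries = [c for c in cycle if c != mid]
    big = [c > mid for c in entries]
    return all(big[j] != big[(j + 1) % len(big)] for j in range(len(big)))
\end{verbatim}
For instance, \texttt{is\_oscillating([1, 5, 2, 4, 3])} and \texttt{is\_oscillating([1, 6, 2, 5, 3, 4])} both return \texttt{True}, in accordance with \Cref{tbl:Sigma_(n)}.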
Roughly speaking, an $n$-cycle of $\SG_n$ has connected intervals if in its cycle notation for each $1\leq k\leq \frac{n}2$ the elements of the interval $[k, n-k+1]$ are grouped together. \begin{defi} \label{def:connected_intervals_n-cycle} \begin{enumerate} \item Let $\sigma \in \SG_n$ and $M\subseteq [n]$. We call~$M$ \emph{connected} in $\sigma$ if there is an $m\in M$ such that \begin{align*} M = \set{m, \sigma(m), \sigma^2(m),\dots, \sigma^{|M|-1}(m)}. \end{align*} \item Let $\sigma \in \SG_n$ be an~$n$-cycle. We say that $\sigma$ has \emph{connected intervals} if the interval ${[k, n-k+1]}$ is connected in $\sigma$ for all integers $k$ with $1\leq k \leq \frac n2$. \end{enumerate} \end{defi} \begin{exa} All elements shown in \Cref{tbl:Sigma_(n)} have connected intervals. In particular, the element in stair form $\sigma_{(6)} = (1,6,2,5,3,4)$ has connected intervals. In contrast, in $(1,5,2,6,3,4)$ the set $[2,5]$ is not connected. \end{exa} The main result of this \namecref{sec:equivalence_classes:one_part} is that an~$n$-cycle $\sigma \in \SG_n$ is an element of $\Sigma_{(n)}$ if and only if $\sigma$ is oscillating and has connected intervals. We now begin working towards this result. \begin{lem} \label{thm:sigma_(n)_osc_and_c.I.} The element in stair form $\sigma_{(n)} \in \SG_n$ is oscillating and has connected intervals. \end{lem} \begin{proof} By \Cref{thm:element_in_stair_form}, \begin{align*} \sigma_{(n)} = \begin{cases} (1, n, 2, n-1, \dots, \frac n2, n - \frac n2 +1) & \text{if~$n$ is even} \\ (1, n, 2, n-1, \dots, \frac{n-1}2, n - \frac{n-1}2+1, \frac {n+1}2) & \text{if~$n$ is odd}. \end{cases} \end{align*} Thus, $\sigma_{(n)}([\tfrac{n}2]) = [\tfrac{n}2 +1, n]$ if $n$ is even and $\sigma_{(n)}([\tfrac{n-1}2]) = [\tfrac{n+3}2, n]$ if $n$ is odd. That is, $\sigma_{(n)}$ is oscillating. For all $k\in \N$ with $1\leq k \leq \frac{n}2$ the rightmost ${|[k,n-k+1]|}$ elements in the cycle of $\sigma_{(n)}$ from above form ${[k,n-k+1]}$. Thus, $\sigma_{(n)}$ has connected intervals. \end{proof} Let $\sigma \in \SG_n$. Sometimes it will be convenient to consider $\sigma^{w_0}$ instead of $\sigma$. We will now show that conjugation with the longest element $w_0$ of $\SG_n$ preserves the properties of being oscillating and having connected intervals. \begin{lem} \label{thm:conjudation_with_w0_oscillating_and_c.I.} Let $\sigma\in \SG_n$ be an~$n$-cycle. \begin{enumerate} \item If $\sigma$ is oscillating then $\sigma^{w_0}$ is oscillating. \item If $\sigma$ has connected intervals then $\sigma^{w_0}$ has connected intervals. \end{enumerate} \end{lem} \begin{proof} If $n = 1$ the result is trivial. Thus suppose $n\geq 2$. \begin{wideenumerate} \item Set $M := \N \cap \set{\frac{n-1}2, \frac{n}2, \frac{n+1}2}$ and assume that $\sigma$ is oscillating. Then there is an $m\in M$ such that $\sigma([m]) = [n-m+1,n]$ and from \Cref{thm:oscillating_n_cycle_complements} it follows that $\sigma([m+1,n]) = [n-m]$. Using $w_0(i) = n-i+1$ for $i\in [n]$, we obtain \begin{align*} \sigma^{w_0}([n-m]) &= w_0\sigma w_0([n-m]) \\ &= w_0\sigma ([m+1,n]) \\ &= w_0([n-m]) \\ &= [n-(n-m)+1, n]. \end{align*} As $n-m\in M$, it follows that $\sigma^{w_0}$ is oscillating. \item Let $I := [k,n-k+1]$ be given by an integer $k$ with $1\leq k \leq \frac{n}2$. Then $w_0(I)= I$. Hence, if~$I$ is connected in $\sigma$ then it is also connected in $\sigma^{w_0}$. \qedhere \end{wideenumerate} \end{proof} In the following result we study the interplay between the conjugation with $w_0$ and the relation $\approx$. 
The generalization to all finite Coxeter groups is straightforward. \begin{lem} \label{thm:conjudation_with_w0_and_arrow} Let $w,w'\in \SG_n$ and $\na$ be the automorphism of $\SG_n$ given by $x \mapsto x^{w_0}$. \begin{enumerate} \item If $w\overset{s_i}{\to}w'$ then $\na(w)\overset{s_{n-i}}{\to}\na(w')$. \item If $w\approx w'$ then $\na(w) \approx \na(w')$. \end{enumerate} \end{lem} \begin{proof} Assume $w\overset{s_i}{\to}w'$. Then $w' = s_i w s_i$ and $\ell(w') \leq \ell(w)$. Since $\na(s_i) = s_{n-i}$, we have $\na(w') = s_{n-i} \na(w) s_{n-i}$. Moreover, $\ell(\na(w')) \leq \ell(\na(w))$ because $\ell(x) = \ell(\na(x))$ for all $x\in \SG_n$. Thus, $\na(w)\overset{s_{n-i}}{\to}\na(w')$. Now, use the definition of $\approx$ to obtain $(2)$ from $(1)$. \end{proof} Consider $n=5$, the oscillating~$n$-cycle $\sigma = (\bs{1},4,2,3,\bs{5})$ and its connected interval $I = \set{2,3,4}$. In the cycle notation of $\sigma$, this interval is enclosed by the two elements $a = 1$ and $b = 5$. Note that $\frac{n+1}2 = 3$, $a<3$ and $b>3$. This illustrates a property of oscillating~$n$-cycles which is the subject of the next \namecref{thm:oscillation_around_intervals}. \begin{lem} \label{thm:oscillation_around_intervals} Assume that $\sigma\in \SG_n$ is an oscillating~$n$-cycle with a connected interval $I := [i,n-i+1]$ such that $i\in \N$ and $2\leq i \leq \frac{n+1}2$. Let $r := |I|$ and $m\in I$ be such that $I = \set{\sigma^k(m) \mid k = 0,\dots, r-1}$. Moreover, set $a := \sigma^{-1}(m)$ and $b:=\sigma^r(m)$. Then $a,b \neq \frac{n+1}2$ and \begin{align*} a <\frac{n+1}2 \iff b>\frac{n+1}2. \end{align*} \end{lem} \begin{proof} Let $p\in [0,n-1]$ be such that $\sigma^p(1) = a$. Then $\sigma^{p+r+1}(1) = b$. Since $i>1$, we have $1\not \in I$ and thus $p+r\leq n-1$, \ie $p+r+1\leq n$. We have $r = n-2i+2$. Hence,~$r$ has the same parity as~$n$. We want to apply \Cref{thm:reformulation_of_oscillating}. If~$n$ is odd, let $l\in [0,n-1]$ be such that $\sigma^l(1) = \frac{n+1}2$. Then $\frac{n+1}2 \in I$ so that $p< l < p + r+1$. In particular, $a,b \neq \frac{n+1}2$. Clearly, if~$n$ is even then $a,b \neq \frac{n+1}2$. First suppose that $p+r+1 = n$. Then $b = \sigma^n(1) = 1 < \frac{n+1}2$ and, since~$r$ and~$n$ have the same parity, $p = n-r-1$ is odd. Hence, \Cref{thm:reformulation_of_oscillating} (together with $p<l$ if~$n$ is odd) yields $a = \sigma^p(1) > \frac{n+1}2$, so the claimed equivalence holds in this case. Now suppose that $p+r+1 \leq n-1$. Then \begin{align*} a=\sigma^p(1) < \frac{n+1}2 &\iff \text{$p$ is even} \\ &\iff \begin{cases} \text{$p+r+1$ is odd} & \text{if~$n$ even} \\ \text{$p+r+1$ is even} & \text{if~$n$ odd} \end{cases}\\ &\iff b=\sigma^{p+r+1}(1) > \frac{n+1}2 \end{align*} where we use \Cref{thm:reformulation_of_oscillating} (and $p<l<p+r+1$ if~$n$ is odd) for the first and third equivalence. \end{proof} Since the $\to$ relation is the transitive closure of the $\overset{s_i}{\to}$ relations, we are interested in the circumstances under which the conjugation with $s_i$ preserves the property of being oscillating with connected intervals. \begin{lem} \label{thm:characterization_osc_and_ci} Let $\sigma\in \SG_n$ be an oscillating~$n$-cycle with connected intervals, $i\in [n-1]$ with $i\leq \frac {n+1}2$ and $\sigma' := s_i \sigma s_i$. Then $\sigma'$ is oscillating and has connected intervals if and only if \begin{enumerate} \item if $i = \frac n2$ then $n=2$, \item if $i =\frac{n-1}2$ or $i = \frac{n+1}2$ then $\sigma(i) = i+1$ or $\sigma^{-1}(i) = i+1$, \item if $i < \frac{n-1}2$ then \begin{align*} \text{$\sigma(i) \in I$ and $\sigma(i+1) \not \in I$ or $\sigma\inv(i)\in I$ and $\sigma\inv(i+1) \not \in I$} \end{align*} where $I := [i+1, n-i]$. \end{enumerate} \end{lem} \begin{proof} We will use \Cref{thm:oscillating_n-cycle_local} without further reference.
Note that $\sigma' = s_i \sigma s_i$ means that we obtain $\sigma'$ from $\sigma$ by interchanging $i$ and $i+1$ in cycle notation. We show the equivalence case by case, depending on~$i$. \begin{caseenum} \item Suppose $i = \frac{n}2$. In this case~$n$ is even. If $n=2$ then $(1,2)$ is the only~$2$-cycle in $\SG_n$. Thus, $\sigma = \sigma' = (1,2)$. This element is oscillating and has connected intervals. Assume now that $n>2$. Since $\sigma$ is oscillating, \begin{align*} \text{$\sigma(i) > \frac n2$ and $\sigma\inv(i) > \frac n2$.} \end{align*} Moreover as $n>2$, at most one of $\sigma(i)$ and $\sigma\inv(i)$ equals $i+1$. Since we obtain $\sigma'$ from $\sigma$ by swapping~$i$ and $i+1$ in cycle notation we infer \begin{align*} \text{$\sigma'(i+1) > \frac n2$ or ${\sigma'}^{-1}(i+1) > \frac n2$.} \end{align*} As $i+1 > \frac{n}2$, this means that $\sigma'$ is not oscillating \item Suppose $i = \frac{n-1}2$ or $i = \frac{n+1}2$. In this case~$n$ is odd and $n\geq 3$. Moreover, $i,i+1 \in [k,n-k+1]$ for $k= 1, \dots, \frac{n-1}2$. Hence, each of the intervals remains connected if we interchange~$i$ and $i+1$. Therefore, $\sigma'$ has connected intervals. It remains to determine in which cases $\sigma'$ oscillates. We do this for $i= \frac{n-1}2$. The proof for $i = \frac{n+1}2$ is similar. For $i= \frac{n-1}2$ we have $i+1 = \frac{n+1}2$. Since $\sigma$ is oscillating, \begin{align*} \text{$\sigma(i) \geq \frac{n+1}2$ and $\sigma^{-1}(i) \geq \frac{n+1}2$}. \end{align*} Because $n\geq 3$, there is at most one equality among these two inequalities. Assume that there is no equality at all. Then \begin{align*} \text{$\sigma'\left(\frac{n+1}2\right) > \frac{n+1}2$ and ${\sigma'}^{-1}\left(\frac{n+1}2\right) > \frac{n+1}2$} \end{align*} since $\sigma' = s_i\sigma s_i$. Hence, $\sigma'$ is not oscillating. Conversely, assume that $\sigma(i) = i+1$ or $\sigma^{-1}(i) = i+1$. In other words, there exists an $\varepsilon \in \set{-1,1}$ such that $\sigma^\varepsilon(i) = i+1$. Since $i+1 = \frac{n+1}2$ and $\sigma$ is oscillating, we then have $a:= \sigma^{-\varepsilon}(i) > \frac{n+1}2$. Moreover, $\sigma^{-\varepsilon}(i+1) = i < \frac{n+1}2$. Thus $\sigma$ being oscillating implies that $b := \sigma^{\varepsilon}(i+1) > \frac{n+1}2$. By definition of~$a$ and~$b$, \begin{align*} \sigma^{\varepsilon} = ( a , i, i+1, b, \dots ). \end{align*} As a consequence, \begin{align*} {\sigma'}^{\varepsilon} = ( a , i+1, i, b, \dots ) \end{align*} and $\sigma^{\varepsilon}$ and ${\sigma'}^{\varepsilon}$ coincide on the part represented by the dots because $\sigma' = s_i \sigma s_i$. From $a> \frac{n+1}2$, $i+1 = \frac{n+1}2$, $i < \frac{n+1}2$ and $b > \frac{n+1}2$ it now follows that $\sigma'$ is oscillating. \item Suppose $i< \frac{n-1}2$. Note that then $n\geq 4$. Define $I := [i+1, n-i]$ as in the theorem and set $r := |I|$. Since $i+1< \frac{n+1}2$, we have $r>1$. We show the implication from left to right first. Assume that $\sigma'$ is oscillating and has connected intervals. Note that \begin{align*} \tau^\varepsilon(j) \neq i,i+1 \text{ for all } \tau \in \set{\sigma,\sigma'}, \varepsilon\in \set{-1,1} \text{ and } j\in \set{i,i+1} \end{align*} since $\sigma$ and $\sigma'$ are oscillating and $i,i+1<\frac{n+1}2$. Because~$I$ is connected in $\sigma'$, $i+1\in I$ and $r>1$, we have that \begin{align*} \exists \varepsilon \in \set{-1,1} \text{ such that } {\sigma'}^\varepsilon(i+1) \in I. 
\end{align*} Therefore, \begin{align*} \exists \varepsilon \in \set{-1,1} \text{ such that } \sigma^\varepsilon(i) \in I \end{align*} as $\sigma' = s_i\sigma s_i$ and ${\sigma'}^\varepsilon(i+1) \neq i,i+1$. In fact, the statement \begin{align} \label{eq:xor_for_i} \exists \varepsilon \in \set{-1,1} \text{ such that } \sigma^\varepsilon(i) \in I \text{ and } \sigma^{-\varepsilon}(i)\not\in I \end{align} is true since otherwise we would have \begin{align*} \sigma = (n-i+1 , \dots, \sigma\inv(i), i, \sigma(i) , \dots ) \end{align*} with $\sigma\inv(i), \sigma(i) \in I$ and $i, n-i+1\not \in I$ in which case~$I$ would not be connected in $\sigma$. By interchanging the roles played by $\sigma$ and $\sigma'$ in the argumentation leading to \Cref{eq:xor_for_i}, we get that \begin{align*} \exists \varepsilon \in \set{-1,1} \text{ such that } {\sigma'}^\varepsilon(i) \in I \text{ and } {\sigma'}^{-\varepsilon}(i)\not\in I. \end{align*} From this we obtain that \begin{align} \label{eq:xor_for_i+1} \exists \varepsilon \in \set{-1,1} \text{ such that } \sigma^\varepsilon(i+1) \in I \text{ and } \sigma^{-\varepsilon}(i+1)\not\in I \end{align} by swapping~$i$ and $i+1$ in cycle notation and using that $\sigma'(i),{\sigma'}^{-1}(i) \neq i,i+1$. Now, let $\varepsilon\in\set{-1,1}$ be such that $\sigma^\varepsilon(i) \in I$ and $\sigma^{-\varepsilon}(i)\not\in I$. Then \begin{align} \label{eq:description_of_I} I = \set{\sigma^{\varepsilon k}(i) \mid k = 1,\dots, r} \end{align} since~$I$ is connected in $\sigma$ and $i\not \in I$. From \Cref{eq:xor_for_i+1} it follows that $i+1$ appears at the border of~$I$ in the cycle notation of $\sigma$. Hence, \Cref{eq:description_of_I} implies that \begin{align*} \text{$\sigma^{\varepsilon}(i) = i+1$ or $\sigma^{\varepsilon r}(i) = i+1$.} \end{align*} As $\sigma^\varepsilon(i) \neq i+1$, it follows that $i+1 = \sigma^{\varepsilon r}(i)$. Thus, \Cref{eq:description_of_I} yields that $\sigma^{-\varepsilon}(i+1) \in I$ and $\sigma^{\varepsilon}(i+1)\not \in I$. Therefore, we have $\sigma^{\varepsilon}(i) \in I$ and $\sigma^{\varepsilon}(i+1)\not \in I$ for an $\varepsilon \in \set{-1,1}$ as desired. Lastly, we prove the direction from right to left of the equivalence. We are still in the case $i < \frac{n-1}2$. Thus, assume that there is an $\varepsilon \in \set{-1,1}$ such that $\sigma^{\varepsilon}(i)\in I$ and $\sigma^{\varepsilon}(i+1) \not \in I$. Since $\sigma$ is oscillating and we interchange two elements $i,i+1<\frac{n+1}2$ in $\sigma$ in order to obtain $\sigma'$ from $\sigma$, $\sigma'$ is also oscillating. It remains to show that $\sigma'$ has connected intervals. Since $i\not \in I$, $\sigma^{\varepsilon}(i)\in I$ and~$I$ is connected in $\sigma$, we have \Cref{eq:description_of_I}. Moreover, from $i+1\in I$, $\sigma^{\varepsilon}(i+1)\not\in I$ and~$I$ being connected in $\sigma$, it follows that $\sigma^{\varepsilon r}(i) = i+1$. Thus, \begin{align*} I = \set{{\sigma'}^{\varepsilon k}(i+1) \mid k = 0,\dots, r-1} \end{align*} because $\sigma' = s_i \sigma s_i$. That is,~$I$ is connected in $\sigma'$. Let $J := [k, n-k+1]$ for $k\in \N$ with $1\leq k \leq \frac{n}2$ and $k\neq i+1$ be an interval different from~$I$. Then either $i,i+1 \in J$ or $i,i+1 \not \in J$. As~$J$ is connected in $\sigma$ and $\sigma' = s_i\sigma s_i$, it follows that~$J$ is connected in $\sigma'$. Therefore, $\sigma'$ has connected intervals.
\qedhere \end{caseenum} \end{proof} \begin{exa} Consider $\sigma = \sigma_{(6)} = (1, 6,2,5,3,4)$ and $\sigma_i := s_i\sigma s_i$ for $i = 1,2$. Then $\sigma$ is oscillating with connected intervals. Since $\sigma^{-1}(1)\in [2,5]$ and $\sigma^{-1}(2) \not \in [2,5]$, \Cref{thm:characterization_osc_and_ci} yields that $\sigma_1$ is oscillating with connected intervals. In contrast, $\sigma_2$ is not oscillating with connected intervals because of $\sigma(2),\sigma^{-1}(2)\not \in [3,4]$ and \Cref{thm:characterization_osc_and_ci}. This can also be checked directly. We have \begin{align*} \sigma_1 = (1,5,3,4,2,6) \quad \text{and} \quad \sigma_2 = (1,6,3,5,2,4). \end{align*} For instance, $[3,4]$ is not connected in $\sigma_2$. \end{exa} In the next result we show that the relation $\approx$ is compatible with the concept of oscillating~$n$-cycles with connected intervals. \begin{lem} \label{thm:equivalence_implies_oscillating_and_ci} Let $\sigma\in \SG_n$ be an oscillating~$n$-cycle with connected intervals, $i \in [n-1]$ and $\sigma' := s_i \sigma s_i$. If $\sigma \approx \sigma'$ then $\sigma'$ is oscillating and has connected intervals. \end{lem} \begin{proof} We do a case analysis depending on~$i$. \textbf{Case 1.} Suppose $i=\frac n2$. Then $n$ is even. By \Cref{thm:characterization_osc_and_ci}, $\sigma'$ is oscillating with connected intervals if and only if $n=2$. Thus, we have to show that $\sigma \not \approx \sigma'$ if $n\geq 4$. In this case we have $\sigma(i),\sigma\inv(i) > \frac n2$ and $\sigma(i+1),\sigma\inv(i+1) \leq \frac n2$ because $\sigma$ is oscillating. But then \Cref{thm:length_and_conjugation} yields $\ell(\sigma') < \ell(\sigma)$ so that $\sigma'\not \approx \sigma$. \textbf{Case 2.} Suppose $i =\frac {n-1}2$ or $i = \frac {n+1}2$. We only do the case $i =\frac {n-1}2$. The other one is similar. Let $I := [i, n-i+1] = \set{i,i+1,i+2}$. We show the contraposition and assume that $\sigma'$ is not oscillating or that it does not have connected intervals. Then from \Cref{thm:characterization_osc_and_ci} it follows that $\sigma(i) \neq i+1$ and $\sigma^{-1}(i) \neq i+1$. Furthermore, there is an $m\in I$ such that \begin{align*} I = \set{\sigma^{-1}(m), m, \sigma(m)} \end{align*} since~$I$ is connected in $\sigma$. Thus, $m= i+2$. Assume $\sigma\inv(i+2) = i$ and $\sigma(i+2) = i+1$ (the proof of the other case with $\sigma(i+2) = i$ is analogous). Then $\sigma^{-1}(i) > i+2$ as $\sigma$ is oscillating and $\sigma\inv(i) \neq i+1,i+2$. Moreover, \Cref{thm:oscillation_around_intervals} applied to~$I$ in $\sigma$ and $\sigma^{-1}(i)> \frac{n+1}2$ yields $\sigma(i+1) < \frac{n+1}2 = i+1$. Therefore, \begin{align*} \sigma(i) = i+2 > \sigma(i+1) \text{\quad and \quad} \sigma\inv(i) > i+2 = \sigma\inv(i+1) \end{align*} so that $\ell(\sigma') < \ell(\sigma)$ by \Cref{thm:length_and_conjugation} and hence $\sigma'\not\approx \sigma$. \textbf{Case 3.} Suppose $i<\frac{n-1}2$. Then for all $j\in\set{i,i+1}$ we have $\sigma(j),\sigma\inv(j) \geq \frac{n+1}2$ since $j< \frac{n+1}2$ and $\sigma$ is oscillating. We assume $\sigma\approx \sigma'$ and show that $\sigma'$ is oscillating and has connected intervals. Define $I_k := [k,n-k+1]$ for all $k\leq \frac{n+1}2$ and $I := I_{i+1} = [i+1,n-i]$. Thanks to \Cref{thm:characterization_osc_and_ci} it suffices to show \begin{align*} \text{$\sigma(i) \in I$ and $\sigma(i+1) \not \in I$ or $\sigma\inv(i)\in I$ and $\sigma\inv(i+1) \not \in I$}. \end{align*} Since $\sigma \approx \sigma'$, $\ell(\sigma) = \ell(\sigma')$.
Hence, \Cref{thm:length_and_conjugation} implies that either $\sigma(i) < \sigma(i+1)$ or $\sigma\inv(i) < \sigma\inv(i+1)$. We assume $\sigma(i) < \sigma(i+1)$ and $\sigma\inv(i) > \sigma\inv(i+1)$. The other case is similar. First we show $\sigma(i) \in I$. Assume $\sigma(i) \not \in I$ instead. Then $\sigma(i) \geq \frac{n+1}2$ implies $\sigma(i) > \max I$. Now we use that $\sigma(i) < \sigma(i+1)$ to obtain $\sigma(i+1)\not \in I$. From this it follows that \begin{align*} I = \set{\sigma^{-k}(i+1) \mid k = 0,\dots, r-1} \end{align*} where $r := |I|$ since~$I$ is connected in $\sigma$ and $i+1\in I$. Now we consider the interval $I_i = [i,n-i+1]$ in $\sigma$. Because $\sigma$ is oscillating, $\sigma(i+1) > \frac{n+1}2$. An application of \Cref{thm:oscillation_around_intervals} to~$I$ in $\sigma$ yields $\sigma^{-r}(i+1) < \frac{n+1}2$. In particular, $\sigma^{-r}(i+1) \neq n-i+1$. But we also have $i \neq \sigma^{-r}(i+1)$ because $\sigma(i)\not \in I$. That is $\sigma^{-r}(i+1) \not \in I_i$. As a consequence, \begin{align*} I_i = \set{\sigma^{-k}(i+1) \mid k = 0,\dots, r-1} \cup \set{\sigma(i+1), \sigma^2(i+1)} \end{align*} since $I \subseteq I_i$ and $I_i$ is connected in $\sigma$. Hence \begin{align*} \set{\sigma(i+1), \sigma^2(i+1)} = \set{i, n-i+1}. \end{align*} As $\sigma(i+1) > \frac{n+1}2$, it follows that $\sigma(i+1) = n-i+1$ and $\sigma^2(i+1) = i$. Consequently, \begin{align*} \sigma(i) > \max I_i = n-i+1 = \sigma(i+1). \end{align*} This is a contradiction to $\sigma(i) < \sigma(i+1)$ and shows that $\sigma(i)\in I$. It remains to show that $\sigma(i+1)\not \in I$. Because $i\not \in I$, $\sigma(i) \in I$ and~$I$ is connected, \begin{align*} I = \set{\sigma^{k}(i) \mid k = 1,\dots, r}. \end{align*} We can apply \Cref{thm:oscillation_around_intervals} to~$I$ in $\sigma$ and $i<\frac{n+1}2$ to obtain $\sigma^{r+1}(i) > \frac{n+1}2$. Thus $\sigma^{r}(i) \leq \frac{n+1}2$. In particular, $\sigma^r(i) \neq n-i$. If $i = \frac n2 -1$ then $I =\set{i+1, n-i}$ and it follows that $\sigma(i) = n-i$ and $\sigma^2(i) = i+1$. That is, $\sigma(i+1) \not \in I$ as desired. Now suppose $i< \frac n2 - 1$. Then $i+2 \leq \frac{n+1}2$ and we consider $I_{i+2} = [i+2, n-i-1]$. Assume for the sake of contradiction that $\sigma(i+1)\in I$. This means that $\sigma^r(i) \neq i+1$. In addition, we have already seen that $\sigma^r(i) \neq n-i$. Therefore, $\sigma^r(i) \in I_{i+2}$. Since $I_{i+2}$ is connected in $\sigma$ and $I_{i+2} \subseteq I$, we have \begin{align*} I_{i+2} = \set{\sigma^{k}(i) \mid k = 3,\dots, r}. \end{align*} and hence $\set{\sigma(i), \sigma^2(i)} = \set{i+1, n-i}$. As $i < \frac{n+1}2$, it follows that $\sigma(i) = n-i$ and $\sigma^2(i) = i+1$. But then \begin{align*} \sigma(i) = n-i > n-i-1 = \max I_{i+2} \geq \sigma(i+1) \end{align*} which again contradicts the assumption $\sigma(i) < \sigma(i+1)$ and thus shows that $\sigma(i+1) \not \in I$. \textbf{Case 4.} Suppose $i>\frac {n+1}2$. Assume $\sigma \approx \sigma'$ and let $\na\colon \SG_n \to \SG_n, x\mapsto x^{w_0}$, $\tau := \na(\sigma)$ and $\tau':=\na(\sigma')$. Since $\sigma$ is oscillating and has connected intervals, \Cref{thm:conjudation_with_w0_oscillating_and_c.I.} implies that $\tau$ is oscillating and has connected intervals. In addition, from \Cref{thm:conjudation_with_w0_and_arrow} we have $\tau \approx \tau'$. Because $\tau' = s_{n-i}\tau s_{n-i}$ with $n-i < \frac{n+1}2$, we now obtain from the already proven cases that $\tau'$ is oscillating and has connected intervals. 
Hence, $\sigma' = \na(\tau')$ and \Cref{thm:conjudation_with_w0_oscillating_and_c.I.} yield that $\sigma'$ is oscillating with connected intervals. \end{proof} In order to show that each oscillating~$n$-cycle with connected intervals is $\approx$-equivalent to $\sigma_{(n)}$, we use an algorithm that takes an oscillating~$n$-cycle $\sigma\in \SG_n$ with connected intervals as input and successively conjugates $\sigma$ with simple reflections until we obtain $\sigma_{(n)}$. This algorithm has the property that all permutations appearing as interim results are oscillating with connected intervals and $\approx$-equivalent to $\sigma$. Eventually, it follows that $\sigma \approx \sigma_{(n)}$. The mechanism of the algorithm is due to Kim \cite{Kim1998}. She used it in order to show that for each $\alpha\vDash_e n$ the element in stair form $\sigma_\alpha$ has maximal length in its conjugacy class. The next lemma corresponds to one step of the algorithm. \begin{lem} \label{thm:kim_algorithm_and_n-cycles} Let $\alpha = (n)$ and $\sigma\in \SG_n$ be an oscillating~$n$-cycle with connected intervals which is different from the element in stair form $\sigma_{\alpha}$. Then there exists a minimal integer $p$ such that $1\leq p\leq n-1$ and $\sigma^p(1) \neq \sigma_\alpha^p(1)$. Set $a := \sigma^p(1)$, $b := \sigma_\alpha^p(1)$ and \begin{align*} \sigma' := \begin{cases} s_{a-1} \sigma s_{a-1} &\myif a > b \\ s_a \sigma s_a & \myif a < b. \\ \end{cases} \end{align*} Then $\sigma' \approx \sigma$ and $\sigma'$ is oscillating and has connected intervals. \end{lem} \begin{proof} Set $I_k := [k, n-k+1]$ for all $k\in \N$ with $k\leq \frac{n+1}2$. Because $\sigma \neq \sigma_\alpha$ and both permutations are~$n$-cycles, we have $p\leq n-2$. Recall that by \Cref{thm:element_in_stair_form}, \begin{align*} \sigma_{\alpha} = \begin{cases} (1, n, 2, n-1, \dots, \frac n2, \frac n2 +1) & \text{if~$n$ is even} \\ (1, n, 2, n-1, \dots, \frac{n-1}2, \frac {n+3}2, \frac {n+1}2) & \text{if~$n$ is odd}. \end{cases} \end{align*} If~$n$ is odd then $\frac{n+1}2 = \sigma_\alpha^{n-1}(1)$ and hence $p \leq n-2$ implies $b \neq \frac{n+1}2$. If~$n$ is even then $b\neq \frac{n+1}{2}$ anyway. We assume $b < \frac{n+1}2$. The proof in the case $b>\frac{n+1}2$ is similar and therefore omitted. By the choice of~$p$, we have $b\neq 1$ so that $1<b<\frac{n+1}2$. The definition of $\sigma_\alpha$ implies \begin{align} \label{eq:cycle_of_1_and_I_b} \begin{aligned} \set{\sigma^k_\alpha(1)\mid k = 0,\dots, p-1} &= [n] \setminus I_b, \\ \set{\sigma^k_\alpha(1)\mid k = p,\dots, n-1} &= I_b. \end{aligned} \end{align} Again by the choice of~$p$, the same equalities hold for $\sigma$. Hence, $b < a$ as $a\in I_b$ and $b= \min I_b$. Therefore, we consider $\sigma' = s_{a-1}\sigma s_{a-1}$ and show that $\sigma \approx \sigma'$. Then \Cref{thm:equivalence_implies_oscillating_and_ci} implies that $\sigma'$ also is oscillating and has connected intervals. It follows from the definition of $\sigma_\alpha$ and $b<\frac{n+1}2$ that \begin{align} \label{eq:thm:kim_algorithm_and_n-cycles:sigma_inv(a)} \sigma^{-1} (a) = \sigma_\alpha^{-1}(b) = n-b+ 2 > \frac{n+1}{2}. \end{align} As $\sigma$ is oscillating, we obtain that $a\leq \frac{n+1}2$ from \Cref{thm:oscillating_n-cycle_local}. Since \Cref{eq:cycle_of_1_and_I_b} holds for $\sigma$ and $p>0$, \begin{align*} \sigma\inv(a) \not \in I_b \supseteq I_{a-1} \supseteq I_{a}. \end{align*} Let $r := |I_a|$.
Because $I_a$ is connected in $\sigma$, $a\in I_a$ and $\sigma^{-1}(a) \not \in I_a$, we have \begin{align*} \set{\sigma^k(a) \mid k = 0,\dots, r-1} &= I_{a}. \end{align*} Now we can use that $I_{a-1} = I_a \cup \set{a-1, n-a+2}$ is connected in $\sigma$ and that $\sigma\inv(a) \not\in I_{a-1}$ to obtain \begin{align*} \set{\sigma^k(a) \mid k = 0,\dots, r+1} &= I_{a-1} \end{align*} The descriptions of $I_a$ and $I_{a-1}$ imply that \begin{align*} \set{\sigma^{r}(a), \sigma^{r+1}(a)} = \set{a-1, n-a+2}. \end{align*} \Cref{thm:oscillation_around_intervals} applied to $I_a$ in $\sigma$ and $\sigma^{-1}(a) > \frac{n+1}2$ now imply that $\sigma^{r}(a) < \frac{n+1}2$. Thus, $\sigma^r(a) = a-1$ and $\sigma^{r+1}(a) = n-a+2$. That is, \begin{align} \label{eq:thm:kim_algorithm_and_n-cycles:sigma(a-1)} \sigma(a-1) = n-a+2 \end{align} Moreover, $\sigma\inv(a-1) \in I_a$ implies \begin{align} \label{eq:thm:kim_algorithm_and_n-cycles:sigma_inv(a-1)} \sigma\inv(a-1) \leq n-a+1. \end{align} We now show \begin{align} \label{eq:thm:kim_algorithm_and_n-cycles:sigma(a)} \sigma(a) \leq n-a+1. \end{align} and deal with two cases. If $a= \frac{n+1}2$ then $n-a+1 = a$. Furthermore, we then have $r=1$ and therefore $\sigma(a) = a-1 < n-a+1$. If $a< \frac{n+1}2$ then $r>1$ so that $\sigma(a) \in I_a$ and thus $\sigma(a) \leq n-a+1$ as desired. From~\cref{eq:thm:kim_algorithm_and_n-cycles:sigma_inv(a-1),eq:thm:kim_algorithm_and_n-cycles:sigma_inv(a)} it follows that \begin{align*} \sigma^{-1}(a-1) &\leq n-a+1 < n-b+2 = \sigma^{-1}(a). \end{align*} Moreover, \cref{eq:thm:kim_algorithm_and_n-cycles:sigma(a-1),eq:thm:kim_algorithm_and_n-cycles:sigma(a)} imply \begin{align*} \sigma(a-1) &= n-a+2 > n-a+1 \geq \sigma(a). \end{align*} Since $\sigma' = s_{a-1}\sigma s_{a-1}$, \Cref{thm:length_and_conjugation} now yields $\ell(\sigma') = \ell(\sigma)$. Hence, $\sigma' \approx \sigma$ by \Cref{thm:arrow_and_length}. \end{proof} \begin{exa} Let $n = 5$ and $\alpha = (n)$. The $n$-cycle $\sigma = (1,3,4,2,5)\in\SG_n$ is oscillating and has connected intervals. We can successively use \Cref{thm:kim_algorithm_and_n-cycles} in order to obtain the sequence \begin{align*} \sigma = \sigma^{(0)} &= (1,3,4,2,5), \\ \sigma^{(1)}&= (1,4,3,2,5) = s_3 \sigma^{(0)} s_3,\\ \sigma^{(2)}&= (1,5,3,2,4) = s_4 \sigma^{(1)} s_4,\\ \sigma^{(3)}&= (1,5,2,3, 4) = s_2 \sigma^{(2)} s_2, \\ \sigma_\alpha = \sigma^{(4)}&= (1,5,2,4,3) = s_3 \sigma^{(3)} s_3. \end{align*} Moreover, \Cref{thm:kim_algorithm_and_n-cycles} ensures that each $\sigma^{(j)}$ is oscillating with connected intervals and all $\sigma^{(j)}$ are $\approx$-equivalent. Therefore, $\sigma \in \Sigma_\alpha$ by \Cref{thm:parametrizations_of_kim}. \end{exa} We now come to the characterization of $\Sigma_{(n)}$. \begin{thm} \label{thm:characterization_of_Sigma_(n)} Let $\sigma\in \SG_n$ be an~$n$-cycle. Then $\sigma \in \Sigma_{(n)}$ if and only if $\sigma$ is oscillating and has connected intervals. \end{thm} \begin{proof} Let $\sigma\in \SG_n$ be an~$n$-cycle. Recall that $\sigma \in \Sigma_{(n)}$ if and only if $\sigma \approx \sigma_{(n)}$ by \Cref{thm:parametrizations_of_kim}. Assume that $\sigma \in \Sigma_{(n)}$. Then $\sigma \approx \sigma_{(n)}$ which by definition of $\approx$ implies that there are sequences $\sigma_\alpha = \sigma^{(0)}, \sigma^{(1)}, \dots, \sigma^{(m)} = \sigma \in \SG_n$ and $i_1, \dots, i_m \in [n-1]$ such that $\sigma^{(j-1)} \approx \sigma^{(j)}$ and $\sigma^{(j)} = s_{i_j}\sigma^{(j-1)}s_{i_j}$ for $j\in [m]$. 
From \Cref{thm:sigma_(n)_osc_and_c.I.} we have that $\sigma_{(n)}$ is oscillating and has connected intervals. Moreover, \Cref{thm:equivalence_implies_oscillating_and_ci} yields that $\sigma^{(j)}$ is oscillating with connected intervals if $\sigma^{(j-1)}$ is oscillating with connected intervals. Hence, $\sigma$ is oscillating and has connected intervals by induction. Conversely, assume that $\sigma$ is oscillating and has connected intervals. Then we can use \Cref{thm:kim_algorithm_and_n-cycles} iteratively to obtain a sequence of $\approx$-equivalent~$n$-cycles starting with $\sigma$ and eventually ending with $\sigma_{\alpha}$. Thus $\sigma \approx \sigma_\alpha$. \end{proof} \begin{figure} \begin{tikzcd}[column sep=huge, row sep = huge] (1,3,4,2,5) \arrow[shift left]{dr}{\del_3} & (1,4,3,2,5) \arrow[shift left]{d}{\del_3} & (1,4,2,3,5) \arrow[shift left]{dl}{\del_3} \\ &(1,3,2,4) \arrow[shift left]{d}{\del_3} \arrow[shift left]{ul}{\ins_{3,1}} \arrow[shift left]{u}{\ins_{3,2}} \arrow[shift left]{ur}{\ins_{3,3}} &\\ &(1,2,3) \arrow[shift left]{u}{\ins_{3,1}} & \end{tikzcd} \caption{Examples for the operators $\del_k$ and $\ins_{k,p}$ appearing in \Cref{thm:recursion_for_Sigma_(n)} and its proof. The lower part of the picture serves as an example for the operators used in the case when~$n$ is even. The upper part is an example for those used in the case when~$n$ is odd. Note that for the integer~$m$ from the \namecref{thm:recursion_for_Sigma_(n)} we have $m = \frac n2 +1 = 3$ if $n=4$ and $m=\frac{n+1}2=3$ if $n=5$.} \label{fig:del_and_ins} \end{figure} The goal of the remainder of this \namecref{sec:equivalence_classes:one_part} is to find bijections that relate $\Sigma_{(n-1)}$ to $\Sigma_{(n)}$. From this we will obtain a recursive description of $\Sigma_{(n)}$ and a formula for the cardinality of $\Sigma_{(n)}$. To achieve our goal, we define two operators $\ins$ and $\del$. Assume that the $n$-cycle $\sigma \in \SG_n$ is given in cycle notation starting with~$1$. Then for $k\in [2,n+1]$ and $p\in [n]$, $\ins_{k,p}(\sigma)\in \SG_{n+1}$ is the $(n+1)$-cycle obtained from $\sigma$ by adding~$1$ to each element greater than or equal to~$k$ in $\sigma$ and then inserting~$k$ behind the~$p$th element in the resulting cycle. Likewise, for $k\in [2,n]$, $\del_{k}(\sigma) \in \SG_{n-1}$ is the $(n-1)$-cycle obtained by first deleting~$k$ from $\sigma$ and then decreasing each element greater than~$k$ by~$1$. See \Cref{fig:del_and_ins} for examples. We now define $\ins$ and $\del$ more formally. Let $\sigma\in \SG_n$ be an~$n$-cycle and $k\in \N$. Set \begin{align*} \varepsilon_r := \begin{cases} 0 &\myif \sigma^r(1) < k \\ 1 &\myif \sigma^r(1) \geq k \end{cases} \end{align*} for $r = 0 ,\dots, n-1$. In the following we will assume $k>1$. The operators could also be defined for $k=1$ but this is not necessary for our purposes and would only make the exposition less transparent. For $k\in [2,n+1]$ and $p\in [n]$, define $\ins_{k,p}(\sigma)$ to be the $(n+1)$-cycle of $\SG_{n+1}$ given by \begin{align*} \ins_{k,p}(\sigma)^r(1) := \begin{cases} \sigma^r(1) + \varepsilon_r &\myif r < p \\ k &\myif r = p \\ \sigma^{r-1}(1) + \varepsilon_{r-1} &\myif r > p \end{cases} \end{align*} for $r = 0, \dots, n$.
For $k\in [2,n]$, define $\del_k(\sigma)$ to be the $(n-1)$-cycle of $\SG_{n-1}$ given by \begin{align*} \del_k(\sigma)^r(1) := \begin{cases} \sigma^r(1) - \varepsilon_r &\myif r<p \\ \sigma^{r+1}(1) - \varepsilon_{r+1} &\myif r\geq p \end{cases} \end{align*} for $r = 0,\dots, n-2$ where~$p$ is the element of $[0,n-1]$ with $\sigma^p(1) = k$. The next result relates $\Sigma_{(n)}$ with $\Sigma_{(n-1)}$ via a bijection for $n \geq 4$. \begin{thm} \label{thm:recursion_for_Sigma_(n)} Suppose $n\geq 4$. If~$n$ is even then set $m := \frac{n}2 + 1$ and \begin{align*} \psi \colon \Sigma_{(n-1)} \to \Sigma_{(n)}, \quad \sigma \mapsto \ins_{m,p}(\sigma) \end{align*} where~$p$ is the element of $[n-1]$ with $\sigma^{p-1}(1) = \min\set{\sigma\inv(\frac n2), \frac n2 }$. If~$n$ is odd then set $m := \frac{n+1}2$ and \begin{align*} \psi \colon \Sigma_{(n-1)} \times \set{0,1,2} \to \Sigma_{(n)}, \quad (\sigma,q) \mapsto \ins_{m, p+q}(\sigma) \end{align*} where~$p$ is the element of $[n-3]$ with $\sigma^{p-1}(1) \not\in \set{m-1,m}$ and $\sigma^p(1) \in \set{m-1,m}$. Then $\psi$ is a bijection. \end{thm} \begin{cor} \label{thm:recurrences_of_Sigma_n} Suppose $n\geq 4$. Then \begin{align*} \card{\Sigma_{(n)}} = \begin{cases} \card{\Sigma_{(n-1)}} & \myif \text{$n$ is even} \\ 3\card{\Sigma_{(n-1)}} & \myif \text{$n$ is odd}. \end{cases} \end{align*} \end{cor} \begin{proof}[Proof of \Cref{thm:recursion_for_Sigma_(n)}] \Cref{thm:characterization_of_Sigma_(n)} states that for all $n\in \N$, $\Sigma_{(n)}$ is the set of oscillating~$n$-cycles of $\SG_n$ with connected intervals. In this proof we repeatedly use this result without further notice. Let $n\geq 4$. We consider all permutations in the cycle notation where~$1$ is the leftmost entry in its cycle. In particular, deleting an entry from a permutation or inserting an entry into a permutation means that we do this in the chosen cycle notation. We distinguish two cases depending on the parity of~$n$. \begin{caseenum} \item Assume that~$n$ is even. Then $m= \frac{n}2+1$. For $\tau \in \Sigma_{(n-1)}$ let~$p$ be given as in the definition of $\psi$. Then $\min\set{\tau\inv(\frac n2), \frac n2 }$ is the~$p$th element in the cycle notation of $\tau$. Hence, we obtain $\psi(\tau)$ by increasing each element in $\tau$ greater than or equal to~$m$ by one and then inserting~$m$ behind the element at position~$p$. Set $ \varphi \colon \Sigma_{(n)} \to \Sigma_{(n-1)}, \sigma \mapsto \del_m(\sigma).$ That is, for $\sigma \in \Sigma_{(n)}$ we obtain $\varphi(\sigma)$ by first deleting~$m$ from $\sigma$ and then decreasing each entry greater than~$m$ by~$1$. We show that $\varphi$ and $\psi$ are well defined and inverse to each other. \begin{wideenumerate} \item We prove that $\varphi$ is well defined. Let $\sigma \in \Sigma_{(n)}$ and $\tau := \varphi(\sigma)$. We have to show that $\tau\in \Sigma_{(n-1)}$. That is, we have to prove that $\tau$ is oscillating and has connected intervals. To show the latter, let $1\leq i \leq \frac{n-1}2 < \frac{n}2$. As $[i,n-i+1]$ is connected in $\sigma$ there is a $0\leq q\leq n-1$ such that \begin{align*} \set{\sigma^{q+1}(1), \dots, \sigma^{q+r}(1)} = [i,n-i+1] \end{align*} where $r:= \card{[i,n-i+1]}$. Moreover, $m \in [i,n-i+1]$. Thus, $\tau = \del_m(\sigma)$ implies \begin{align*} \set{\tau^{q+1}(1), \dots, \tau^{q+r-1}(1)} = [i,n-i]. \end{align*} Hence, $[i,(n-1)-i+1]$ is connected in $\tau$. It follows that $\tau$ has connected intervals. We now show that $\tau$ is oscillating.
Note that $n-1$ is odd and $\frac{(n-1)+1}2 = \frac{n}2$. By \Cref{thm:oscillating_n-cycle_local}, it suffices to show that $\tau(i) \geq \frac{n}2$ for all $i\in [\frac{n}2-1]$ and that either $\tau^{-1}\left(\frac n2\right) > \frac n2$ or $\tau\left(\frac n2\right) > \frac n2$. Let $i\in [\frac{n}2-1]$. Since $i < \frac{n}2$ and $\sigma$ is oscillating, we infer $\sigma(i) > \frac n2$ from \Cref{thm:oscillating_n-cycle_local}. If $\sigma(i) \neq m$ then $\tau(i) = \sigma(i) -1 \geq \frac n2$. If $\sigma(i) = m$ then $\sigma^2(i) = \frac{n}2$ since $m = \frac{n}2+1$, $\set{\frac n2, \frac n2 + 1}$ is connected in $\sigma$ and $i\not \in \set{\frac n2, \frac n2 + 1}$. Thus, $\tau(i) = \frac n2$. We now show that either $\tau^{-1}\left(\frac n2\right) > \frac n2$ or $\tau\left(\frac n2\right) > \frac n2$. Since $\set{\frac n2, \frac n2 + 1}$ is connected in $\sigma$ there is a $0\leq q \leq n-1$ such that \begin{align*} \set{\sigma^q(1), \sigma^{q+1}(1)} = \set{\frac n2, \frac n2 + 1}. \end{align*} Hence, $\tau = \del_{\frac{n}2+1}(\sigma)$ implies $\tau^q(1) = \frac n2$. Because $n\geq 4$, we can apply \Cref{thm:oscillation_around_intervals} to $\set{\frac n2, \frac n2 + 1}$ in $\sigma$ and obtain that there are $a<\frac n2$ and $b>\frac n2+1$ such that \begin{align*} \set{\sigma^{q-1}(1), \sigma^q(1), \sigma^{q+1}(1), \sigma^{q+2}(1)} = \set{a,b,\frac n2, \frac n2 + 1}. \end{align*} Therefore, $\tau^q(1) = \frac n2$ and $\tau = \del_{\frac{n}2+1}(\sigma)$ yield $ \set{\tau^{-1}\left(\frac n2\right), \tau\left(\frac n2\right) } = \set{a,b-1}. $ That is, either $\tau^{-1}\left(\frac n2\right) > \frac n2$ or $\tau\left(\frac n2\right) > \frac n2$. Thus, $\tau$ is oscillating. \item We check that $\psi$ is well defined. Let $\tau \in \Sigma_{(n-1)}$ and $\sigma := \psi(\tau)$. We have to show $\sigma \in \Sigma_{(n)}$. The definition of $\psi$ implies that $\frac n2+1$ is a neighbor of $\frac n2$ in $\sigma$. In addition, $[i, n-i]$ is connected in $\tau$ for $i\in [\frac{n}2-1]$. Therefore, $[i, n-i+1]$ is connected in $\sigma$ for $i\in [\frac{n}2]$. That is, $\sigma$ has connected intervals. We now show that $\sigma$ is oscillating. By \Cref{thm:oscillating_n-cycle_local}, it suffices to show that $\sigma(i) > \frac{n}2$ for all $i \in [\frac{n}2]$. For $i <\frac{n}2$ this can be done as before. Thus, we only consider $i=\frac{n}2$. As $\tau$ is oscillating, \Cref{thm:oscillating_n-cycle_local} implies that one of the neighbors of $\frac n2$ is smaller than $\frac n2$ and the other one is greater than $\frac n2$. Let~$a$ be the smaller and~$b$ be the bigger neighbor of $\frac n2$. In the definition of $\psi$,~$p$ is chosen such that $\frac{n}2+1$ is inserted in $\tau$ between~$a$ and $\frac n2$. Thus, $\frac n2$ has neighbors $\frac n2 +1$ and $b+1$ in $\sigma$. Consequently, $ \sigma\left(\frac n2\right) > \frac n2$. \item We now show that $\psi \circ \varphi = \id$. Let $\sigma \in \Sigma_{(n)}$. Since $\set{\frac n2, \frac n2+1}$ is connected in $\sigma$, these two elements are neighbors in $\sigma$. As $\sigma$ is oscillating, there is an $a<\frac n2$ such that $\frac n2 +1$ has neighbors~$a$ and $\frac n2$. We obtain $\varphi(\sigma)$ from $\sigma$ by deleting $\frac n2 +1$ so that~$a$ and $\frac n2$ are neighbors in $\varphi(\sigma)$. On the other hand, we obtain $\psi(\varphi(\sigma))$ from $\varphi(\sigma)$ by inserting $\frac n2 +1$ between~$a$ and $\frac n2$. Thus $\psi(\varphi(\sigma))= \sigma$. \item Finally, we show that $ \varphi \circ \psi= \id$. 
Let $\tau \in \Sigma_{(n-1)}$. Then we obtain $\psi(\tau)$ from $\tau$ by inserting $\frac n2+1$ at some position and get $\varphi(\psi(\tau))$ from $\psi(\tau)$ by deleting it again. Hence, $\varphi(\psi(\tau)) = \tau$. \end{wideenumerate} \item Assume that~$n$ is odd. Then $m = \frac{n+1}2$. For $\tau \in \Sigma_{(n-1)}$ the set $\set{m-1,m}$ is connected. Thus, there is a unique integer $p$ with $1\leq p \leq n-3$ such that $\tau^{p-1}(1) \not\in \set{m-1,m}$ and $\tau^p(1) \in \set{m-1,m}$. That is, the integer $p$ from the definition of $\psi$ in the \namecref{thm:recursion_for_Sigma_(n)} is well defined. Note that~$p$ is the position of the left neighbor of the set $\set{m-1,m}$ in $\tau$. Conversely, for $\sigma \in \Sigma_{(n)}$, $I := \set{m-1, m, m+1}$ is connected in $\sigma$. Hence, there is a unique $0\leq p \leq n-1$ such that $I = \set{\sigma^{p+k}(1) \mid k= 0,1,2}$ and a unique $q\in \set{0,1,2}$ such that $\sigma^{p+q}(1) = m$. We define the map $\varphi \colon \Sigma_{(n)} \to \Sigma_{(n-1)} \times \set{0,1,2}$ by setting $\varphi(\sigma) := (\del_m(\sigma), q)$. Again, we show that $\varphi$ and $\psi$ are well defined and inverse to each other. \begin{wideenumerate} \item First we show that the two maps are inverse to each other. Let $\sigma \in \Sigma_{(n)}$ and $\varphi(\sigma) = (\tau,q)$. Then we have \begin{align*} q = \begin{cases} 0 &\myif \text{$m$ is the left neighbor of $\set{m-1, m+1}$ in $\sigma$}, \\ 1 &\myif \text{$m$ is located between $m-1$ and $m+1$ in $\sigma$}, \\ 2 &\myif \text{$m$ is the right neighbor of $\set{m-1, m+1}$ in $\sigma$}. \\ \end{cases} \end{align*} Conversely, let $\tau \in \Sigma_{(n-1)}$, $q\in \set{0,1,2}$ and $\sigma= \psi(\tau,q)$ then \begin{align} \label{eq:description_of_ins_m_p+q} \text{$m$ is } \begin{cases} \text{the left neighbor of $\set{m-1, m+1}$ in $\sigma$} &\myif q = 0,\\ \text{located between $m-1$ and $m+1$ in $\sigma$} &\myif q=1,\\ \text{the right neighbor of $\set{m-1, m+1}$ in $\sigma$} &\myif q = 2.\\ \end{cases} \end{align} From this it follows that $\varphi$ and $\psi$ are inverse to each other. \item In order to prove that $\varphi$ is well defined one has to show that $\del_m(\sigma) \in \Sigma_{(n-1)}$. This can be done similarly as in Case 1. \item To see that $\psi$ is well defined, let $\tau \in \Sigma_{(n-1)}$, $q\in \set{0,1,2}$ and $\sigma := \psi(\tau,q)$. We first show that $\sigma$ has connected intervals. Recall that $m = \frac{n+1}2$. Let $i\leq \frac{n-1}2 = m-1$. Then $[i, n-i]$ is connected in $\tau$ since $\tau$ has connected intervals. By the definition of $\psi$, we obtain the entries $[i,n-i+1]$ in $\sigma$ by adding~$1$ to each entry $\geq m$ of $[i, n-i]$ in $\tau$ and then inserting~$m$ such that by \Cref{eq:description_of_ins_m_p+q} at least one of the neighbors of~$m$ is $m-1$ or $m+1$. Since $m-1,m,m+1 \in [i,n-i+1]$ it follows that $[i, n-i+1]$ is connected in $\sigma$. Therefore, $\sigma$ has connected intervals. In order to show that $\sigma$ is oscillating, let $\tau'$ be the $(n-1)$-cycle of $\SG_n$ obtained by adding~$1$ to each entry of $\tau$ which is greater or equal than~$m$. Since $\tau$ is oscillating, the entries in $\tau'$ alternate between the sets $[m-1]$ and $[m+1,n]$. Furthermore, we obtain $\sigma$ from $\tau'$ by inserting~$m$ somewhere in $\tau'$. Thus, \Cref{thm:oscillating_n-cycle_cycle_notation} implies that $\sigma$ is oscillating. \qedhere \end{wideenumerate} \end{caseenum} \end{proof} From \Cref{tbl:Sigma_(n)} we know $\Sigma_{(n)}$ for $n = 1,2,3$. 
That is, \Cref{thm:recursion_for_Sigma_(n)} allows us to compute $\Sigma_{(n)}$ recursively for each $n\in \N$. This is illustrated in the following. \begin{exa} We want to compute $\Sigma_{(n)}$ for $n=4,5$. To do this we use the bijections $\psi$ and the related notation introduced in \Cref{thm:recursion_for_Sigma_(n)}. \begin{wideenumerate} \item Consider $n=4$. We have \begin{align*} \Sigma_{(4)} = \set{\psi(\sigma) \mid \sigma \in \Sigma_{(3)}} \end{align*} by \Cref{thm:recursion_for_Sigma_(n)}. From \Cref{tbl:Sigma_(n)} we obtain $\Sigma_{(3)} = \set{(1,3,2),(1,2,3)}$. For $\sigma = (1,3,2)$ we have $p=3$ since \begin{align*} \sigma^{3-1}(1) = 2 = \min\set{2,3} = \min\set{\sigma\inv\left(\frac{4}2\right), \frac{4}2}. \end{align*} Thus, \begin{align*} \psi(\sigma) = \ins_{3,3}((1,3,2)) = (1,3+1,2,3) = (1,4,2,3). \end{align*} For $\sigma = (1,2,3)$ we have $p=1$ and \begin{align*} \psi(\sigma) = \ins_{3,1}((1,2,3)) = (1,3,2,3+1) = (1,3,2,4). \end{align*} Therefore, $\Sigma_{(4)} = \set{(1,4,2,3), (1,3,2,4)}$. \item Consider $n=5$. \Cref{thm:recursion_for_Sigma_(n)} yields \begin{align} \label{eq:Sigma_(5)_recursion} \Sigma_{(5)} = \set{\psi(\sigma, q) \mid \sigma \in \Sigma_{(4)}, q\in \set{0,1,2}}. \end{align} Let $m = \frac{5+1}2 = 3$ and $I = \set{m-1,m} = \set{2,3}$. For $\sigma = (1,4,2,3)$ we have $p=2$ since $\sigma^{2-1}(1) = 4 \not \in I$ and $\sigma^{2}(1) = 2 \in I$. Thus, for instance we have \begin{align*} \psi(\sigma,1) = \ins_{3,3}((1,4,2,3)) = (1,4+1,2,3,3+1) = (1,5,2,3,4). \end{align*} For $\sigma = (1,3,2,4)$ we have $p=1$. Computing $\psi(\sigma,q)$ for all $\sigma\in \Sigma_{(4)}$ and $q\in \set{0,1,2}$, we obtain the following table. By \Cref{eq:Sigma_(5)_recursion}, it lists all elements of $\Sigma_{(5)}$. \begin{align*} \begin{array}{c|ccc} \psi(\sigma,q) & 0 & 1 & 2 \\ \hline (1,4,2,3) & (1,5,3,2,4) & (1,5,2,3,4) & (1,5,2,4,3) \\ (1,3,2,4) & (1,3,4,2,5) & (1,4,3,2,5) & (1,4,2,3,5) \end{array} \end{align*} \end{wideenumerate} \end{exa} \begin{cor} \label{thm:sizes_of_Sigma_n} Let $n\in \N$. Then \begin{align*} \card{\Sigma_{(n)}} = \begin{cases} 1 & \myif \text{$n\leq2$} \\ 2 \cdot 3^{\left \lfloor{\frac{n-3}2}\right \rfloor } & \myif \text{$n\geq 3$}. \\ \end{cases} \end{align*} \end{cor} \begin{proof} Let $x_n := |\Sigma_{(n)}|$ for $n\geq 1$, $y_1 := y_2 := 1$ and $y_n := 2\cdot 3^{\left \lfloor{\frac{n-3}2}\right \rfloor }$ for $n\geq 3$. We show that both sequences have the same initial values and recurrence relations. First note that \begin{align*} (x_1,x_2,x_3) = (1,1,2) = (y_1,y_2,y_3). \end{align*} where we obtain the $x_i$ from \Cref{tbl:Sigma_(n)}. Now let $n\geq 4$. By \Cref{thm:recurrences_of_Sigma_n} we have to show that $y_n = y_{n-1}$ if~$n$ is even and $y_n = 3y_{n-1}$ if~$n$ is odd. If~$n$ is even, we have \begin{align*} \left\lfloor{\frac{n-3}2}\right \rfloor = \left\lfloor{\frac{n-4}2 + \frac 12}\right \rfloor = \frac{n-4}2 = \left\lfloor{\frac{n-1-3}2 }\right \rfloor \end{align*} and thus $y_n = y_{n-1}$. If~$n$ is odd, we have \begin{align*} \left\lfloor{\frac{n-3}2}\right \rfloor = \frac{n-3}2 = \frac{n-5}2 + 1 = \left\lfloor{\frac{n-5}2 + \frac 12}\right \rfloor + 1 = \left\lfloor{\frac{n-4}2 }\right \rfloor + 1 \end{align*} and hence $y_n = 3y_{n-1}$. \end{proof} \subsection{Equivalence classes of odd hook type} \label{sec:equivalence_classes:odd_hook} Let $\alpha = (k, 1^{n-k}) \vDash n$ be a hook. Then $\alpha$ is a maximal composition. Recall that a hook $\alpha$ is called \emph{odd} if~$k$ is odd and called \emph{even} otherwise. 
The main result of this \namecref{sec:equivalence_classes:odd_hook} is \Cref{thm:characterization_of_Sigma_for_odd_hook}, a combinatorial characterization of $\Sigma_\alpha$ in the case where $\alpha$ is an odd hook. We will use the inductive product which is the topic of \Cref{sec:equivalence_classes:inductive_product} in order to deal with the even hooks. Therefore, the characterization of $\Sigma_\alpha$ for even hooks $\alpha$ is postponed until \Cref{thm:characterization_of_Sigma_for_arbitrary_hook}. We want to generalize the concept of being oscillating and having connected intervals from $n$-cycles to arbitrary permutations. In order to do this, we standardize cycles in the following way. Let $\sigma := (c_1,\dots, c_k)\in \SG_n$ be a $k$-cycle. Replace the smallest element among $c_1, \dots, c_k$ by $1$, the second smallest by $2$ and so on. The result is a $k$-cycle with entries $1,2,\dots, k$ which can be regarded as an element of $\SG_k$. This permutation is called the \emph{cycle standardization} $\cst(\sigma)$ of $\sigma$. \begin{exa} Consider $\sigma = (3,11,4,10,5)\in \SG_{11}$. Then $\cst(\sigma) = (1,5,2,4,3)\in \SG_5$ which is oscillating with connected intervals. \end{exa} We formally define the cycle standardization as follows. \begin{defi} \label{thm:cycle_standartization} \begin{enumerate} \item Given $\sigma \in \SG_n$ and $i\in [n]$, there is a cycle $(c_1,\dots, c_k)$ of $\sigma$ containing~$i$. Then we define \begin{align*} \rnk_\sigma(i) := \card{\set{j \in [k] \mid c_j\leq i}}. \end{align*} \item Let $\sigma = (c_1, \dots, c_k)\in \SG_n$ be a~$k$-cycle. The \emph{cycle standardization} of $\sigma$ is the~$k$-cycle of $\SG_k$ given by \begin{align*} \cst(\sigma) := (\rnk_\sigma(c_1), \rnk_\sigma(c_2), \dots, \rnk_\sigma(c_k)). \end{align*} \end{enumerate} \end{defi} Note that the permutation $\cst(\sigma)$ is independent of the choice of the cycle notation $\sigma = (c_1, c_2, \dots, c_k)$ in \Cref{thm:cycle_standartization}. \begin{rem} \label{thm:properties_of_cycle_standartization} Let $\sigma = (c_1, c_2, \dots, c_k)\in \SG_n$ be a~$k$-cycle. \begin{enumerate} \item The anti-rank of $i\in [n]$ among the elements in its cycle in $\sigma$ is $\rnk_\sigma(i)$. \item \label{enum:cycle_standartization_same_relative_order} For all $i,j\in [k]$ we have $c_i < c_j$ if and only if $\rnk_\sigma(c_i) < \rnk_\sigma(c_j)$. \item Let~$i$ be an element appearing in the cycle $(c_1, c_2, \dots, c_k)$. Then we have \begin{align*} \cst(\sigma)(\rnk_\sigma(i)) = \rnk_\sigma( \sigma(i)). \end{align*} \end{enumerate} \end{rem} We now generalize the notions of being oscillating and having connected intervals to arbitrary permutations via the cycle decomposition and the cycle standardization. Recall that trivial cycles are those of length~$1$. \begin{defi} \label{def:oscillating_and_connected_intervals_general} Let $\sigma\in \SG_n$ and write $\sigma$ as a product $\sigma = \sigma_1\cdots \sigma_l$ of disjoint cycles including the trivial ones. \begin{enumerate} \item We say that $\sigma$ is \emph{oscillating} if $\cst(\sigma_i)$ is oscillating for each cycle $\sigma_i$. \item We say that $\sigma$ has \emph{connected intervals} if $\cst(\sigma_i)$ has connected intervals for each cycle $\sigma_i$. \end{enumerate} \end{defi} Let $(c)\in \SG_n$ be a trivial cycle. Then $\cst((c)) = (1) \in \SG_1$ which is oscillating and has connected intervals. Therefore, in order to show that a permutation $\sigma$ is oscillating (has connected intervals) it suffices to consider the nontrivial cycles.
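These notions can be tested by machine as well. Building on the \texttt{is\_oscillating} sketch given after \Cref{thm:reformulation_of_oscillating}, the following hypothetical Python helpers implement the cycle standardization, the connectedness test of \Cref{def:connected_intervals_n-cycle} and the combined test of \Cref{def:oscillating_and_connected_intervals_general}; the names and the encoding of a permutation as a list of its disjoint cycles are again ours and serve only as an illustration.
\begin{verbatim}
def cst(cycle):
    # cycle standardization: replace the j-th smallest entry by j
    rank = {c: j + 1 for j, c in enumerate(sorted(cycle))}
    return [rank[c] for c in cycle]

def has_connected_intervals(cycle):
    # expects an n-cycle on {1,...,n}; tests whether [k, n-k+1] occupies
    # consecutive positions in the cyclic word for all k <= n/2
    n = len(cycle)
    pos = {c: j for j, c in enumerate(cycle)}
    for k in range(1, n // 2 + 1):
        p = sorted(pos[c] for c in range(k, n - k + 2))
        gaps = [(p[(j + 1) % len(p)] - p[j]) % n for j in range(len(p))]
        if sum(1 for g in gaps if g > 1) > 1:
            return False
    return True

def is_oscillating_with_ci(cycles):
    # 'cycles' lists the disjoint cycles; trivial cycles may be omitted
    return all(is_oscillating(cst(c)) and has_connected_intervals(cst(c))
               for c in cycles)
\end{verbatim}
For instance, \texttt{cst([3, 11, 4, 10, 5])} returns \texttt{[1, 5, 2, 4, 3]}, in agreement with the example above.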
\begin{exa} Let $\alpha = (4,5,3,1)\vDash_e 13$ and \begin{align*} \sigma_\alpha = (1, 13, 2, 12)(3, 11, 4, 10, 5)( 9, 6, 8)(7). \end{align*} The cycle standardizations of the nontrivial cycles of $\sigma_\alpha$ are \begin{align*} (1,4,2,3), (1,5,2,4,3) \text{ and } (1,2,3). \end{align*} Each of these three permutations is oscillating and has connected intervals (see \Cref{tbl:Sigma_(n)}). Thus, $\sigma_\alpha$ is oscillating and has connected intervals. \end{exa} Assume that $\sigma \in \SG_n$ is an~$n$-cycle. Then $\sigma$ has only one cycle $\sigma$ in cycle notation and $\cst(\sigma) = \sigma$. Thus, for~$n$-cycles our new notion of being oscillating (having connected intervals) from \Cref{def:oscillating_and_connected_intervals_general} is equivalent to the old concept from \Cref{def:oscillating_n-cycle} (\Cref{def:connected_intervals_n-cycle}). We now prove some general results on oscillating permutations with connected intervals. As in the last \namecref{sec:equivalence_classes:one_part}, we are interested in the effect of swapping entries $i$ and $i+1$ in cycle notation (that is, conjugating with $s_i$). This will in particular be useful to prove our results on odd hooks. We consider the case where $i$ and $i+1$ appear in the same cycle first. \begin{lem} \label{thm:equivalence_and_interchange_within_cycle} Let $\sigma \in \SG_n$ and write $\sigma$ as a product $\sigma = \sigma_1\cdots \sigma_l$ of disjoint cycles. Assume that there is an $i\in [n-1]$ and a $k\in [l]$ such that~$i$ and $i+1$ both appear in the cycle $\sigma_k$. Set $i' := \rnk_\sigma(i)$ and $\tau := \cst(\sigma_k)$. Then we have \begin{enumerate} \item $\cst(s_i\sigma_k s_i) = s_{i'} \tau s_{i'}$, \item $s_i \sigma s_i \approx \sigma$ if and only if $ s_{i'} \tau s_{i'} \approx \tau$. \end{enumerate} \end{lem} \begin{proof} By the definition of $\rnk_\sigma$, we have that $\rnk_\sigma(j) = \rnk_{\sigma_k}(j)$ for all entries~$j$ in the cycle $\sigma_k$. (1) We obtain $s_i\sigma_k s_i$ from $\sigma_k$ by interchanging~$i$ and $i+1$ in cycle notation. Since~$i$ and $i+1$ appear in $\sigma_k$, we have $\rnk_{\sigma_k}(i+1) = i'+1$. Thus, we obtain $\cst(s_i\sigma_k s_i)$ from $\tau = \cst(\sigma_k)$ by interchanging $i'$ and $i'+ 1$ in cycle notation. That is, $\cst(s_i\sigma s_i) = s_{i'} \tau s_{i'}.$ (2) We have $s_i \sigma s_i \approx \sigma $ if and only if $\ell(s_i \sigma s_i) = \ell(\sigma)$. By \Cref{thm:length_and_conjugation}, this is the case if and only if either $\sigma(i) < \sigma(i+1)$ or $\sigma\inv(i) < \sigma\inv(i+1)$. From the definition of the cycle standardization we obtain that $\tau(\rnk_\sigma(j)) = \rnk_\sigma (\sigma(j))$ for each entry~$j$ in $\sigma_k$ (\cf \Cref{thm:properties_of_cycle_standartization}). Moreover, by the definition of $\rnk_\sigma$ and the fact that~$i$ and $i+1$ appear in the same cycle of $\sigma$, \begin{align*} \sigma(i) < \sigma(i+1) \iff \rnk_\sigma(\sigma(i)) < \rnk_\sigma(\sigma(i+1)). \end{align*} Hence, \begin{align*} \sigma(i) < \sigma(i+1) \iff \tau(i') < \tau(i' + 1). \end{align*} Similarly, one shows that this equivalence is also true for $\sigma\inv$ and $\tau\inv$. Therefore, we have $s_i \sigma s_i \approx \sigma$ if and only if either $\tau(i') < \tau(i'+1)$ or $\tau\inv(i') < \tau\inv(i'+1)$. As for $\sigma$, the latter is equivalent to $ s_{i'} \tau s_{i'} \approx \tau$. 
\end{proof} We now infer from \Cref{thm:equivalence_and_interchange_within_cycle} that swaps of $i$ and $i+1$ within a cycle that preserve $\approx$ also preserve the properties of being oscillating with connected intervals. \begin{cor} \label{thm:equivalence_osc+c.I_and_interchange_within_cycle} Let $\sigma \in \SG_n$ be oscillating with connected intervals, $i\in [n-1]$ such that~$i$ and $i+1$ appear in the same cycle of $\sigma$ and $\sigma' := s_i \sigma s_i$. If $\sigma \approx \sigma'$ then $\sigma'$ is oscillating with connected intervals. \end{cor} \begin{proof} We write $\sigma$ as a product $\sigma = \sigma_1\cdots \sigma_l$ of disjoint cycles and choose $k$ such that~$i$ and $i+1$ appear in the cycle $\sigma_k$. Moreover, we set $\tau := \cst(\sigma_k)$, $\tau' := \cst(s_i\sigma_k s_i)$ and~$m$ to be the length of the cycle $\sigma_k$. As~$i$ and $i+1$ only appear in $\sigma_k$, $\sigma' = \sigma_1 \cdots \sigma_{k-1} ( s_i \sigma_k s_i) \sigma_{k+1} \cdots \sigma_l$ is the decomposition of $\sigma'$ in disjoint cycles. Since $\sigma$ is oscillating with connected intervals, $\cst(\sigma_j)$ is oscillating with connected intervals for all $j\in [l]$. Therefore, it remains to show that $\tau'$ has these properties. Since $\sigma \approx \sigma'$, \Cref{thm:equivalence_and_interchange_within_cycle} yields that $\tau \approx \tau'$. In addition, $\tau$ is an oscillating~$m$-cycle with connected intervals and thus $\tau \in \Sigma_{(m)}$ by \Cref{thm:characterization_of_Sigma_(n)}. Hence, also $\tau'\in \Sigma_{(m)}$, \ie $\tau'$ is oscillating with connected intervals. \end{proof} The next result is concerned with the interchange of $i$ and $i+1$ between two cycles. \begin{lem} \label{thm:equivalence_osc+c.I_and_interchange_between_cycles} Let $\sigma \in \SG_n$ be oscillating with connected intervals, $i\in [n-1]$ such that~$i$ and $i+1$ appear in different cycles of $\sigma$ and $\sigma' := s_i \sigma s_i$. Then $\sigma'$ is oscillating and has connected intervals. \end{lem} \begin{proof} We obtain $\sigma'$ from $\sigma$ by interchanging~$i$ and $i+1$ between two cycles in cycle notation. It is easy to see that this does not affect the cycle standardization of the cycles in question. In addition, all other cycles of $\sigma'$ appear as cycles of $\sigma$. Since $\sigma$ is oscillating with connected intervals, it follows that the standardization of each cycle of $\sigma'$ is oscillating with connected intervals. That is, $\sigma'$ is oscillating with connected intervals. \end{proof} We now come to the hooks. \begin{exa} \label{thm:odd_hook_example_(3_1_1)} Let $\alpha = (3,1,1) \hdash 5$. The elements of $\Sigma_\alpha$ are \begin{align*} (1,5,2),(1,2,5),(1,5,3),(1,3,5),(1,5,4),(1,4,5). \end{align*} Note that~$1$ and~$5$ always appear in the cycle of length~$3$. \end{exa} Recall that we use \emph{type} as a short form for \emph{cycle type}. \begin{defi} \label{def:hook_properties} Let $\alpha = (k,1^{n-k})\vDash_e n$ be a hook, $\sigma\in \SG_n$ of type $\alpha$, $m := \frac{k-1}2$ if~$k$ is odd and $m := \frac{k}2$ if~$k$ is even. We say that $\sigma$ satisfies the \emph{hook properties} if \begin{enumerate} \item $\sigma$ is oscillating, \item $\sigma$ has connected intervals, \item if $k>1$ then~$i$ and $n-i+1$ appear in the cycle of length~$k$ of $\sigma$ for all $i\in [m]$. \end{enumerate} \end{defi} The permutations from \Cref{thm:odd_hook_example_(3_1_1)} satisfy the hook properties. 
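The hook properties are not restricted to odd hooks. As a small illustration for an even hook, take $\alpha = (4,1,1)\vDash_e 6$, so that $m = 2$: the third hook property requires that $1$, $6$, $2$ and $5$ appear in the cycle of length~$4$, and the permutation $(1,6,2,5)\in \SG_6$ has cycle standardization $(1,4,2,3)$, which is oscillating with connected intervals (see \Cref{tbl:Sigma_(n)}). Hence $(1,6,2,5)$ satisfies the hook properties.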
The main result of this \namecref{sec:equivalence_classes:odd_hook} is to show that for an odd hook $\alpha$, the elements of $\Sigma_\alpha$ are characterized by the hook properties. In \Cref{thm:characterization_of_Sigma_for_arbitrary_hook} of \Cref{sec:equivalence_classes:inductive_product} we will see that the same is true for even hooks. \begin{exa} \label{exa:odd_hook_properties} \begin{wideenumerate} \item Let $\sigma \in \SG_n$ be of type $(1^n)$. Then $\sigma = \id$ and $\sigma$ satisfies the hook properties. Moreover, $\Sigma_{(1^n)} = \set{\sigma}$. \item Let $\sigma \in \SG_n$ be of type $(n)$. That is, $\sigma$ is an~$n$-cycle. Then the third hook property is satisfied by $\sigma$ since all elements of $[n]$ appear in the only cycle of $\sigma$. Thus, $\sigma$ has the hook properties if and only if $\sigma$ is oscillating with connected intervals. By \Cref{thm:characterization_of_Sigma_(n)}, this is equivalent to $\sigma \in \Sigma_{(n)}$. \item \label{exa:odd_hook_properties_3,1,1} Let $\alpha = (3,1,1)\vDash n$. We want to determine all permutations in $\SG_n$ of type $\alpha$ that satisfy the hook properties. Let $\sigma \in \SG_n$ be of type $\alpha$, $\sigma_1$ be the cycle of length~$3$ of $\sigma$ and $\mc O_1$ be the set of elements in $\sigma_1$. Since $\sigma_1$ is the only nontrivial cycle of $\sigma$, $\sigma$ is oscillating and has connected intervals if and only if $\tau:= \cst(\sigma_1)$ has these properties. The type of $\tau$ is $(3)$. By \Cref{thm:characterization_of_Sigma_(n)}, the oscillating permutations of type $(3)$ with connected intervals form $\Sigma_{(3)}$. From \Cref{tbl:Sigma_(n)} we read $\Sigma_{(3)} = \set{(1,3,2), (1,2,3)}$. Let \begin{align*} M = \set{\set{1,5}\cup\set{j} \mid j \in [2,4]} = \set{ \set{1,2,5}, \set{1,3,5}, \set{1,4,5} }. \end{align*} The third hook property is satisfied by $\sigma$ if and only if $ \mc O_1 \in M$. Therefore, $\sigma$ fulfills the hook properties if and only if there is a $\tau \in \Sigma_{(3)}$ and an $\mc O_1\in M$ such that we obtain $\sigma_1$ by writing $\mc O_1$ in a cycle such that the relative order of entries matches that of $\tau$. For instance, from $\tau = (1,3,2)$ and $\mc O_1 = \set{1,4,5}$ we obtain $\sigma = (1,5,4)$. Going through all possibilities for $\tau$ and $\mc O_1$ we obtain the desired set of permutations. These are the ones shown in \Cref{thm:odd_hook_example_(3_1_1)}. \end{wideenumerate} \end{exa} For the proof of the characterization of $\Sigma_\alpha$ when $\alpha$ is an odd hook, we follow the same strategy as in the case of compositions with one part from \Cref{sec:equivalence_classes:one_part}: For any odd hook $\alpha$ we show that $\sigma_\alpha$ satisfies the hook properties, that $\approx$ is compatible with the hook properties and that there is an algorithm that computes a sequence of $\approx$-equivalent permutations starting with $\sigma$ and ending up with $\sigma_\alpha$ for each permutation $\sigma$ of type $\alpha$ satisfying the hook properties. \begin{lem} \label{thm:sigma_odd_hook_has_odd_hook_properties} Let $\alpha\hdash n$ be an odd hook. Then the element in stair form $\sigma_\alpha\in \SG_n$ satisfies the hook properties. \end{lem} \begin{proof} Let $\alpha = \parts\alpha l = (k, 1^{n-k})\hdash n$ be an odd hook. If $k=1$ then $\sigma_\alpha$ is the identity which satisfies the hook properties. Assume $k>1$ and set $m := \frac{k-1}2$.
By definition, the cycle of length~$k$ of $\sigma_\alpha$ is given by \begin{align*} \sigma_{\alpha_1} = (1, n, 2, n-1, \dots, m, n-m+1, m+1). \end{align*} Hence, $\sigma_\alpha$ satisfies the third hook property. In order to show that $\sigma_\alpha$ is oscillating and has connected intervals, it suffices to consider $\sigma_{\alpha_1}$ because the other cycles of $\sigma_\alpha$ are trivial. From the description of $\sigma_{\alpha_1}$ we obtain its cycle standardization \begin{align*} \cst(\sigma_{\alpha_1}) = (1, k, 2, k-1, \dots, m, k-m+1, m+1). \end{align*} That is, $\cst(\sigma_{\alpha_1})$ is the element in stair form $\sigma_{(k)}$ which is oscillating and has connected intervals by \Cref{thm:sigma_(n)_osc_and_c.I.}. \end{proof} Let $\alpha\vDash_e n$ be an odd hook and $\sigma \in \SG_n$ be of type $\alpha$ satisfying the hook properties. In order to show $\sigma_\alpha \approx \sigma$ we will successively interchange elements~$i$ and $i+1$ in the cycle notation of $\sigma$. The next lemma considers the case where at least one of~$i$ and $i+1$ is a fixpoint of $\sigma$. \begin{lem} \label{thm:hook_interchange_with_fixpoint} Let $\alpha = (k, 1^{n-k})\hdash n$ be an odd hook, $m:= \frac{k-1}2$ and $\sigma \in \SG_n$ of type $\alpha$ satisfying the hook properties. Furthermore, assume that there are $i,i+1\in [m+1, n-m]$ such that~$i$ or $i+1$ is a fixpoint of $\sigma$. Then $s_i \sigma s_i \approx \sigma$ and $s_i \sigma s_i$ satisfies the hook properties. \end{lem} \begin{proof} If both~$i$ and $i+1$ are fixpoints of $\sigma$ then $s_i \sigma s_i = \sigma$ and there is nothing to show. Therefore, we assume that either~$i$ or $i+1$ is not a fixpoint and call this element~$j$. By choice of~$i$ and $i+1$, $m < j < n-m+1$. Since $\sigma$ satisfies the hook properties, the cycle of length~$k$ of $\sigma$ consists of the elements $1,\dots, m, j, n-m+1, \dots, n$. First we show that $s_i\sigma s_i$ satisfies the hook properties. As $\sigma$ is oscillating with connected intervals and~$i$ and $i+1$ appear in different cycles of $\sigma$, \Cref{thm:equivalence_osc+c.I_and_interchange_between_cycles} yields that $s_i\sigma s_i$ is oscillating with connected intervals too. As we obtain $s_i\sigma s_i$ by interchanging~$i$ and $i+1$ in cycle notation of $\sigma$ and \begin{align*} i,i+1\not \in \set{1,\dots, m, n-m+1, \dots, n}, \end{align*} $s_i\sigma{s_i}$ satisfies the third hook property. In order to show $s_i \sigma s_i\approx \sigma$, we assume that $i+1$ is a fixpoint of $\sigma$ and~$i$ is not. The other case is proven analogously. Let $\tau := \cst(\sigma)$ and $i' := \rnk_\sigma(i)$. Then $i'= m+1 = \frac{k+1}2$ by the description of the cycle of length~$k$ from above. Since $\sigma$ is oscillating, $\tau$ is oscillating. Thus, \Cref{thm:oscillating_n-cycle_local} implies that there is an $\varepsilon \in \set{-1,1}$ such that \begin{align*} \text{ $\tau^\varepsilon(i') > m+1$ and $\tau^{-\varepsilon}(i') < m+1$.} \end{align*} Now we use that $\tau^\delta(i') = \rnk_\sigma (\sigma^\delta(i))$ for $\delta = -1,1$ and obtain that \begin{align*} \text{$\sigma^\varepsilon(i) \geq n-m+1$ and $\sigma^{-\varepsilon}(i) \leq m$.} \end{align*} As $\sigma(i+1) = i+1 \in [m+2,n-m]$, it follows that \begin{align*} \text{$\sigma^\varepsilon(i) > \sigma^\varepsilon(i+1)$ and $\sigma^{-\varepsilon}(i) < \sigma^{-\varepsilon}(i+1).$} \end{align*} Hence, \Cref{thm:length_and_conjugation} implies $\ell(s_i\sigma s_i) = \ell(\sigma)$. Therefore, $s_i\sigma s_i\approx \sigma$. 
\end{proof} The following \namecref{thm:equivalence_implies_odd_hook} shows that $\approx$ preserves the hook properties. It is an analogue of \Cref{thm:equivalence_implies_oscillating_and_ci}. \begin{lem} \label{thm:equivalence_implies_odd_hook} Given an odd hook $\alpha = (k, 1^{n-k})\hdash n$, $\sigma \in \SG_n$ of type $\alpha$ satisfying the hook properties, $i\in [n-1]$ and $\sigma' := s_i \sigma s_i$ with $\sigma \approx \sigma'$, we have that also $\sigma'$ satisfies the hook properties. \end{lem} \begin{proof} We show that $\sigma'$ has the hook properties. If $k=1$ then $\sigma = \sigma' = \id$ so that $\sigma'$ satisfies the hook properties. Hence, assume $k>1$. Set $m:= \frac{k-1}2$, $\tau := \cst(\sigma)$ and $\tau' := \cst(\sigma')$. We deal with three cases. First, assume that neither~$i$ nor $i+1$ is a fixpoint of $\sigma$. Then~$i$ and $i+1$ both appear in the cycle of length~$k$ of $\sigma$. Since $\sigma$ satisfies the hook properties, it is oscillating and has connected intervals. Therefore, \Cref{thm:equivalence_osc+c.I_and_interchange_within_cycle} yields that also $\sigma'$ has these properties. The elements $1,\dots, m,n-m+1,\dots, n$ all appear in the cycle of length~$k$ of $\sigma$ because $\sigma$ satisfies the hook properties. Since we interchange two entries in this cycle to obtain $\sigma'$ from $\sigma$, all the elements also appear in the cycle of length~$k$ of $\sigma'$. Second, assume that $i+1$ is a fixpoint of $\sigma$ but~$i$ is not. Since $\sigma \approx \sigma'$, we have $\ell(\sigma) = \ell(\sigma')$ and by \Cref{thm:length_and_conjugation} \begin{align} \label{eq:hook_equivalence_preserves_hook_properties} \begin{aligned} \text{either } &\text{$\sigma(i) > i+1$ and $\sigma\inv(i) < i+1$} \\ \text{or } &\text{$\sigma(i) < i+1$ and $\sigma\inv(i) > i+1$} \end{aligned} \end{align} where we used $\sigma(i+1) = i+1$. The elements of the cycle of length~$k$ of $\sigma$ are $1, \dots, m, j, n-m+1 , \dots, n$ where $j \in [m+1,n-m]$. We now show that $i,i+1\in [m+1,n-m]$. As $i+1$ is a fixpoint, we have $i+1\leq n-m$ and it remains to show that $i \geq m+1$. Assume that $i\leq m$ instead and set $i' := \rnk_\sigma(i)$. Then $i' < \frac{k+1}2$. Since $\tau\in \SG_k$ is an oscillating $k$-cycle, \Cref{thm:oscillating_n-cycle_local} yields that $\tau\inv(i'),\tau(i') \geq \frac{k+1}2$. Because $\rnk_\sigma(j) =\frac{k+1}2$, it follows that $\sigma\inv(i),\sigma(i) \geq j$. Moreover, $i+1$ being a fixpoint and $i \leq m$ imply that $i+1 < j$. Hence, $\sigma\inv(i),\sigma(i) > i+1$ which contradicts \Cref{eq:hook_equivalence_preserves_hook_properties}. Since $i,i+1\in [m+1,n-m]$ and $i+1$ is a fixpoint of $\sigma$, we can apply \Cref{thm:hook_interchange_with_fixpoint} which implies that $\sigma'$ satisfies the hook properties. In the same vein, one proves the remaining case where~$i$ is a fixpoint but $i+1$ is not. \end{proof} We now extend \Cref{thm:kim_algorithm_and_n-cycles} to the case of odd hooks. That is, we consider one step of the algorithm mentioned earlier. \begin{lem} \label{thm:kim_algorithm_and_odd_hooks} Let $\alpha = (k, 1^{n-k})\hdash n$ be an odd hook and $\sigma \in \SG_n$ such that $\sigma$ is of type $\alpha$, $\sigma$ satisfies the hook properties and $\sigma\neq\sigma_\alpha$. Then there exists a minimal integer $p$ such that $1\leq p\leq k-1$ and $\sigma^p(1) \neq \sigma_\alpha^p(1)$. Set $a := \sigma^p(1)$, $b := \sigma_\alpha^p(1)$ and \begin{align*} \sigma' := \begin{cases} s_{a-1} \sigma s_{a-1} &\myif a > b \\ s_a \sigma s_a & \myif a < b.
\end{cases} \end{align*} Then $\sigma' \approx \sigma$ and $\sigma'$ satisfies the hook properties. \end{lem} \begin{proof} Set $m := \frac{k-1}2$. If $\alpha = (1^n)$ then the only permutation of type $\alpha$ is the identity and there is nothing to show. If $\alpha = (n)$ then this is \Cref{thm:kim_algorithm_and_n-cycles}. Therefore, assume $1<k<n$. Since $\sigma$ satisfies the hook properties,~$1$ appears in the cycle of length~$k$ of $\sigma$. By definition, $\sigma_\alpha$ has the form \begin{align*} \sigma_\alpha = \begin{cases} (1, n, 2, n-1, \dots, m+1) (n-m) (m+2) \cdots (\frac{n+3}2) (\frac{n+1}2) & \myif \text{$n$ is odd} \\ (1, n, 2, n-1, \dots, m+1) (n-m) (m+2) \cdots (\frac{n}2) (\frac{n}2+1) & \myif \text{$n$ is even}. \end{cases} \end{align*} In particular, $[m+2, n-m]$ is the set of fixpoints of $\sigma_\alpha$ and~$1$ also appears in the cycle of length~$k$ of $\sigma_\alpha$. Thus, from $\sigma \neq \sigma_\alpha$ it follows that there exists $p$ as claimed. In particular, we can define $a$, $b$ and $\sigma'$ as in the lemma. If~$n$ is odd, $k<n$ implies that $\frac{n+1}2$ is a fixpoint of $\sigma_\alpha$ and hence $b \neq \frac{n+1}2$. If~$n$ is even, we have $b \neq \frac{n+1}2$ anyway. Let $\tau := \cst(\sigma)$ and note that $\cst(\sigma_\alpha)$ is just the element in stair form $\sigma_{(k)}$. Moreover, set $a' := \rnk_\sigma(a)$. Assume $b<\frac{n+1}2$. The proof for $b>\frac{n+1}2$ is similar and hence omitted. If $b<\frac{n+1}2$ then $b\leq m+1$ by the description of $\sigma_\alpha$ from above. The choice of~$p$ and $1 < b \leq m+1$ imply \begin{align*} \sigma^{-1}(a) = \sigma_\alpha^{-1}(b) = n-b+2 > m+1 \end{align*} and \begin{align*} \set{1, 2, \dots, b-1} \subseteq \set{\sigma_\alpha^r(1) \mid r=0,\dots, p-1} = \set{\sigma^r(1) \mid r=0,\dots, p-1}. \end{align*} The last equality and $a\neq b$ imply $b < a$. Thus, we consider $\sigma' := s_{a-1}\sigma s_{a-1}$. From the hook properties, we obtain that the elements in the cycle of length~$k$ of $\sigma$ are $1,\dots, m,j,n-m+1,\dots, n$ where $j\in [m+1, n-m]$. Thus, $\sigma^{-1}(a) >m+1$ implies $\tau\inv(a') > m+1$. But since $\sigma$ is oscillating, $\tau$ is oscillating and therefore \Cref{thm:oscillating_n-cycle_local} implies $a' \leq m+1$. From the description of the elements in the~$k$-cycle of $\sigma$, it now follows that $a\leq n-m$. To sum up, we have $b < a \leq n-m$ and $\sigma' = s_{a-1}\sigma s_{a-1}$. Now we have two cases depending on $a-1$. If $a-1$ is a fixpoint of $\sigma$ then because of $a\leq n-m$, we can apply \Cref{thm:hook_interchange_with_fixpoint} and obtain that $\sigma'\approx \sigma$ and $\sigma'$ satisfies the hook properties. If $a-1$ is not a fixpoint of $\sigma$ then $\rnk_\sigma(a-1) = a'-1$. Moreover, interchanging $a-1$ and~$a$ in $\sigma$ does not affect the third hook property. Therefore, we obtain from \Cref{thm:equivalence_and_interchange_within_cycle} that $\sigma'\approx \sigma$ and $\sigma'$ satisfies the hook properties if $\tau' := s_{a'-1}\tau s_{a'-1}\approx \tau$ and $\tau'$ is oscillating with connected intervals. By \Cref{thm:kim_algorithm_and_n-cycles}, $\tau'$ has these properties if $\tau^r(1) = \sigma_{(k)}^r(1)$ for $0\leq r \leq p-1$, $\tau^p(1) > \sigma^p_{(k)}(1)$ and $\tau^p(1) = a'$. This is what remains to be shown.
As $\sigma^r(1) = \sigma^r_\alpha(1)$ for $0\leq r \leq p-1$, we have the following equality of tuples \begin{align*} (\tau^0(1), \tau^1(1), \dots, \tau^{p-1}(1)) &= (\rnk_\sigma(1), \rnk_\sigma(n) , \rnk_\sigma(2), \rnk_\sigma(n-1),\dots , \rnk_\sigma(n-b+2)) \\ &= (1,k,2,k-1, \dots, k-b+2) \\ &= (\sigma_{(k)}^0(1), \sigma_{(k)}^1(1), \dots, \sigma_{(k)}^{p-1}(1)). \end{align*} Since the cycle of length~$k$ of $\sigma$ contains exactly one element of $[m+1,n-m]$, $a-1$ and~$a$ appear in this cycle and $a\leq n-m$, we have that $a\leq m+1$. Moreover, $1, \dots, m$ appear in the cycle of length~$k$ of $\sigma$ and $\sigma_\alpha$. Since $b< a \leq m+1$, this implies \begin{align*} \sigma^p_{(k)}(1) = \rnk_{\sigma_{\alpha}}(b) = b \text{ and } \tau^p(1) = \rnk_{\sigma}(a) = a. \end{align*} In particular, $a' = \tau^p(1)$. Moreover, we have $b< a$ so that $\sigma^p_{(k)}(1) < \tau^p(1)$ as desired. \end{proof} We now come to the main result of this \namecref{sec:equivalence_classes:odd_hook}. \begin{thm} \label{thm:characterization_of_Sigma_for_odd_hook} Let $\alpha\hdash n$ be an odd hook and $\sigma\in \SG_n$ of type $\alpha$. Then $\sigma \in \Sigma_\alpha$ if and only if $\sigma$ satisfies the hook properties. \end{thm} \begin{proof} Let $\alpha = (k, 1^{n-k})\hdash n$ be an odd hook and $\sigma_\alpha$ be the element in stair form. The proof is analogous to the one of \Cref{thm:characterization_of_Sigma_(n)}. First, $\sigma_\alpha$ satisfies the hook properties by \Cref{thm:sigma_odd_hook_has_odd_hook_properties}. Let $\sigma \in \SG_n$. For the direction from left to right assume that $\sigma \in \Sigma_\alpha$. Then $\sigma \approx \sigma_\alpha$. From the definition of $\approx$ and \Cref{thm:equivalence_implies_odd_hook} it follows that $\approx$ transfers the hook properties from $\sigma_\alpha$ to $\sigma$. For the converse direction, assume that $\sigma$ satisfies the hook properties. By using \Cref{thm:kim_algorithm_and_odd_hooks} iteratively, we obtain a sequence of $\approx$-equivalent permutations starting with $\sigma$ and ending in $\sigma_\alpha$. Hence $\sigma \in \Sigma_\alpha$. \end{proof} We continue with a rule for the construction of $\Sigma_{(k,1^{n-k})}$ from $\Sigma_{(k)}$ in the case where~$k$ is odd and $k\geq 3$. The rule can be sketched as follows. Given a $\tau \in \Sigma_{(k)}$ we can choose a subset of $[n]$ of size $k$ in accordance with the third hook property. Arranging the elements of this subset in a cycle of length $k$ such that its cycle standardization is $\tau$ (and letting the other elements of $[n]$ be fixpoints) then results in an element of $\Sigma_{(k,1^{n-k})}$. See Part~\ref{exa:odd_hook_properties_3,1,1} of \Cref{exa:odd_hook_properties} for an illustration. \begin{cor} \label{thm:odd_hook_bijection} Let $\alpha = (k,1^{n-k})\vDash_e n$ be an odd hook with $k\geq 3$. Set $m := \frac{k-1}2$. For $\tau \in \Sigma_{(k)}$ and $j \in [m+1, n-m]$ define $\varphi(\tau,j)$ to be the element $\sigma \in \SG_n$ of type $\alpha$ such that $\cst(\sigma) = \tau$ and the entries in the cycle of length~$k$ of $\sigma$ are $1,\dots, m, j ,n-m+1,\dots, n$. Then \begin{align*} \varphi \colon \Sigma_{(k)} \times [m+1, n-m] \to \Sigma_\alpha, \quad (\tau,j) \mapsto \varphi(\tau,j) \end{align*} is a bijection. 
\end{cor} \begin{proof} Given a $\tau \in \Sigma_{(k)}$ and a $j\in [m+1, n-m]$ there is only one way (up to cyclic shift) to write the elements $1,2,\dots, m, j , n-m+1, \dots, n$ in a cycle of length~$k$ such that the standardization of the corresponding~$k$-cycle in $\SG_n$ is $\tau$. This~$k$-cycle is $\varphi(\tau,j)$. By construction, $\varphi(\tau,j)$ satisfies the hook properties. Hence, \Cref{thm:characterization_of_Sigma_for_odd_hook} yields $\varphi(\tau,j) \in \Sigma_\alpha$. That is, $\varphi$ is well defined. Let $\sigma\in \Sigma_\alpha$. Then by \Cref{thm:characterization_of_Sigma_for_odd_hook}, $\sigma$ satisfies the hook properties. The third hook property yields that there is a unique $j\in [m+1, n-m]$ such that the elements in the cycle of length~$k$ of $\sigma$ are $1,2,\dots, m, j , n-m+1, \dots, n$. From the first two hook properties it follows that $\tau:= \cst(\sigma)$ is oscillating and has connected intervals. Thus, $\tau \in \Sigma_{(k)}$ by \Cref{thm:characterization_of_Sigma_(n)}. By definition of $\varphi$, the cycles of length~$k$ of $\varphi(\tau,j)$ and $\sigma$ contain the same elements. Moreover, they have the same cycle standardization $\tau$. Consequently, $\varphi(\tau,j) = \sigma$. That is, $\varphi$ is surjective. Since $\tau$ and~$j$ uniquely depend on $\sigma$, $\varphi$ is also injective. \end{proof} In the last result of the \namecref{sec:equivalence_classes:odd_hook} we determine the cardinality of $\Sigma_\alpha$ for each odd hook $\alpha$. \begin{cor} \label{thm:odd_hook_cardinality_of_Sigma_alpha} If $\alpha= (k,1^{n-k})\hdash n$ is an odd hook then \begin{align*} |\Sigma_{\alpha}| = \begin{cases} 1 & \myif \text{$k=1$} \\ 2 (n-k+1) 3^{\frac{k-3}2} & \myif \text{$k\geq 3$}. \end{cases} \end{align*} \end{cor} \begin{proof} If $k=1$ then $\Sigma_\alpha = \set{\id}$. Now suppose that $k\geq 3$ and set $m := \frac{k-1}2$. The cardinality of $[m+1,n-m]$ is $n-k+1$. Hence, \Cref{thm:odd_hook_bijection} yields that $|\Sigma_\alpha| = (n-k+1) |\Sigma_{(k)}|.$ In addition, we have $|\Sigma_{(k)}| = 2 \cdot 3^{\frac{k-3}2}$ from \Cref{thm:sizes_of_Sigma_n}. \end{proof} \subsection{The inductive product} \label{sec:equivalence_classes:inductive_product} In this \namecref{sec:equivalence_classes:inductive_product} we define the inductive product $\iprod$ and use it to obtain in \Cref{thm:indcutive_product_bijection} a recursive rule for $\Sigma_{(\alpha_1,\dots, \alpha_l)}$ in the case where $\alpha_1$ is even. This leads to a description of $\Sigma_\alpha$ for all maximal compositions $\alpha$ whose odd parts form a hook (see \Cref{thm:inductive_product_remark_reduction_to_odd_partitions}). As a consequence, we show in \Cref{thm:characterization_of_Sigma_for_arbitrary_hook} that $\Sigma_\alpha$ is characterized by the hook properties if $\alpha$ is an even hook. Recall that we write $\gamma \vDash_0 n$ if $\gamma$ is a weak composition of $n$, that is, a finite sequence of nonnegative integers that sum up to $n$. \begin{defi} \label{def:inductive_product} Let $(n_1,n_2) \vDash_0 n$.
The \emph{inductive product} on $\SG_{n_1} \times \SG_{n_2}$ is the binary operator \begin{align*} \iprod \colon \SG_{n_1} \times \SG_{n_2} &\to \SG_n \\ (\sigma_1, \sigma_2) &\mapsto\sigma_1 \iprod \sigma_2 \end{align*} where $\sigma_1 \iprod \sigma_2$ is the element of $\SG_n$ whose cycles are the cycles of $\sigma_1$ and $\sigma_2$ altered as follows: \begin{enumerate} \item in the cycles of ${\sigma_1}$, add ${n_2}$ to each entry ${>k}$, \item in the cycles of ${\sigma_2}$, add ${k}$ to each entry \end{enumerate} where $k := \ceil{\frac{n_1}2}$. \end{defi} For two sets $X_1\subseteq \SG_{n_1}$ and $X_2 \subseteq \SG_{n_2}$ we define \begin{align*} X_1 \iprod X_2 := \set{\sigma_1 \iprod \sigma_2 \mid \sigma_1\in X_1, \sigma_2 \in X_2 }. \end{align*} We will see in \Cref{thm:inductive_product_cycles} that the inductive product is well-defined. \begin{exa} \label{exa:inductive_product} \begin{enumerate} \item Let $\emptyset\in \SG_0$ be the empty function and $\sigma \in \SG_n$. Then \begin{align*} \emptyset\iprod \sigma = \sigma \iprod \emptyset = \sigma. \end{align*} \item Consider $n_1 = 6$, $n_2 = 4$, $n= 10$ and the elements in stair form $\sigma_{(6)} \in \SG_{n_1}$ and $\sigma_{(3,1)} \in \SG_{n_2}$. Then $k = 3$ and \begin{align*} \sigma_{(6)} \iprod \sigma_{(3,1)} &= (1, 6, 2, 5, 3, 4) \iprod (1, 4, 2)(3) \\ &= (1, 6 + 4 , 2, 5 + 4, 3, 4 + 4)( 1 + 3, 4 + 3, 2 + 3)(3 + 3) \\ &= (1, 10 , 2 , 9, 3, 8)( 4, 7, 5)( 6 ). \end{align*} \item Consider $n_1 = 5$, $n_2 = 4$ and the elements in stair form $\sigma_{(5)} = ( 1, 5, 2, 4, 3)\in \SG_{n_1}$ and $\sigma_{(3,1)} = (1,4,2)(3)\in \SG_{n_2}$. Then $\sigma_{(3,1)}^{w_0} = ( 1, 3,4)(2)$ where $w_0 = (1,4)(2,3)$ is the longest element of $\SG_4$. We have $k = 3$ and \begin{align*} \sigma_{(5)} \iprod \sigma^{w_0}_{(3,1)} &= ( 1, 5+4, 2, 4+4, 3)(1+3, 3+3, 4+3)(2+3) \\ &= (1, 9 , 2 , 8, 3)(7, 4 , 6)(5). \end{align*} \end{enumerate} Note that in Parts (2) and (3) we obtain the elements in stair form $\sigma_{(6,3,1)}$ and $\sigma_{(5,3,1)}$, respectively. \end{exa} In order to work with the inductive product, it is convenient to describe it more formally. To this end we introduce the following notation which we will use throughout the \namecref{sec:equivalence_classes:inductive_product}. \begin{notn} \label{not:inductive_product} Let $n\geq 0$, $(n_1,n_2) \vDash_0 n$, $k := \ceil{\frac{n_1}2}$, \begin{align*} N_1 := [k] \cup [k+n_2+1, n] \quad \text{and} \quad N_2 := [k+1, k+n_2]. \end{align*} We have that $|N_1| = n_1, |N_2| = n_2$, $N_1$ and $N_2$ are disjoint and $N_1\cup N_2 = [n]$. Note that $[0] = [1,0] = \emptyset$. Define the bijections $\varphi_{1}\colon [n_1] \to N_1$ and $\varphi_2 \colon [n_2] \to N_2$ by \begin{align*} \varphi_1(i) := \begin{cases} i &\myif i\leq k \\ i+n_2 & \myif i> k \end{cases} \quad \text{and} \quad \varphi_2(i) := i + k. \end{align*} The bijections $\varphi_1$ and $\varphi_2$ formalize the alteration of the cycles of $\sigma_1$ and $\sigma_2$ in \Cref{def:inductive_product}, respectively. Their inverses are given by \begin{align*} \varphi\inv_1(i) := \begin{cases} i & \myif i\leq k \\ i-n_2 & \myif i>k \end{cases} \quad \text{and} \quad \varphi\inv_2(i) := i - k. \end{align*} For $i=1,2$ and $\sigma_i\in \SG_{n_i}$, write $\sigma_i^{\varphi_i} := \varphi_i \circ \sigma_i \circ \varphi_i\inv$. Then $\sigma_i^{\varphi_i}\in \SG(N_i)$ and $\sigma_i^{\varphi_i}$ can naturally be identified with the element of $\SG_n$ that acts on $N_i$ as $\sigma_i^{\varphi_i}$ and fixes all elements of $[n] \setminus N_i$. 
\end{notn} We will see in \Cref{thm:inductive_product_cycles} that we obtain $\sigma_i^{\varphi_i}$ by applying $\varphi_i$ to each entry of $\sigma_i$ in cycle notation. \begin{exa} \label{exa:inductive_product_functions} Let $n_1 = 6$ and $n_2 = 4$ and consider the elements in stair form \begin{center} $\sigma_1 := \sigma_{(6)} = (1, 6, 2, 5, 3, 4) \in \SG_6$ \quad and \quad $\sigma_2 := \sigma_{(3,1)} = (1, 4, 2)(3) \in \SG_4$. \end{center} Then $k = 3$ and \begin{align*} \sigma_1^{\varphi_1} &= (1, 6 + 4, 2, 5 + 4, 3, 4 + 4) = (1, 10, 2, 9, 3, 8),\\ \sigma_2^{\varphi_2} &= (1 + 3, 4 + 3, 2 + 3)(3 + 3) = (4, 7, 5)(6). \end{align*} Thus, from \Cref{exa:inductive_product} it follows that $\sigma_1 \iprod \sigma_2 = \sigma_1^{\varphi_1}\sigma_2^{\varphi_2}$. The next \namecref{thm:inductive_product_cycles} states that this is true in general. \end{exa} We now come to the more formal description of the inductive product. \begin{lem} \label{thm:inductive_product_cycles} Let $\sigma_r \in \SG_{n_r}$ with cycle decomposition $\sigma_r = \sigma_{r,1}\sigma_{r,2}\cdots \sigma_{r,p_r}$ for $r= 1,2$. \begin{enumerate} \item We have \begin{align*} \sigma_1 \iprod \sigma_2 = \sigma_1^{\varphi_1} \sigma_2^{\varphi_2}. \end{align*} \item Let $r\in \set{1,2}$ and $\sigma_{r,j} = (c_1,\dots, c_t)$ be a cycle of $\sigma_r$. Then \begin{align*} \sigma_{r,j}^{\varphi_r} = (\varphi_r(c_1),\dots, \varphi_r(c_t)). \end{align*} \item The decomposition of $\sigma_1 \iprod \sigma_2$ into disjoint cycles is given by \begin{align*} \sigma_1 \iprod \sigma_2 = \sigma^{\varphi_1}_{1,1} \cdots \sigma^{\varphi_1}_{1,p_1} \cdot \sigma^{\varphi_2}_{2,1} \cdots \sigma^{\varphi_2}_{2,p_2}. \end{align*} \end{enumerate} \end{lem} \begin{proof} Set $\sigma := \sigma_1 \iprod \sigma_2$ and $\sigma' := \sigma_1^{\varphi_1}\sigma_2^{\varphi_2}$. It will turn out that $\sigma = \sigma'$. We first show Part~(2). Let $r\in \set{1,2}$, $\xi$ be a cycle of $\sigma_r$ and $i\in [n_r]$. Then \begin{align*} \xi^{\varphi_r}(\varphi_r(i)) = (\varphi_r \circ \xi \circ \varphi_r\inv \circ \varphi_r )(i) = \varphi_r(\xi(i)). \end{align*} Hence, if $\xi = (c_1,\dots, c_t) \in \SG_{n_r}$ then $\xi^{\varphi_r} = (\varphi_r(c_1),\dots, \varphi_r(c_t)) \in \SG(N_r)$. We continue with showing Part~(3) for $\sigma'$. For $r =1,2$ we have \begin{align*} \sigma_r^{\varphi_r} &= \varphi_r \circ \sigma_r \circ \varphi_r\inv \\ &= \varphi_r \circ \sigma_{r,1}\cdots \sigma_{r,p_r} \circ \varphi_r\inv \\ &= (\varphi_r \circ \sigma_{r,1} \circ \varphi_r\inv) \cdots (\varphi_r \circ \sigma_{r,p_r} \circ \varphi_r\inv) \\ &= \sigma_{r,1}^{\varphi_r} \cdots \sigma_{r,p_r}^{\varphi_r}. \end{align*} Thus, \begin{align} \label{eq:cycle_decomposition_sigma} \sigma' = \sigma_{1,1}^{\varphi_1} \cdots \sigma_{1,p_1}^{\varphi_1} \sigma_{2,1}^{\varphi_2} \cdots \sigma_{2,p_2}^{\varphi_2}. \end{align} The cycles in this decomposition are given by Part~(2). As $\varphi_1$ and $\varphi_2$ are bijections with disjoint images, the cycles are disjoint. Lastly, we show $\sigma = \sigma'$. From \Cref{eq:cycle_decomposition_sigma}, Part~(2) and the definition of $\varphi_1$ and $\varphi_2$ it follows that we obtain the cycles of $\sigma'$ by altering the cycles of $\sigma_1$ and $\sigma_2$ as described in \Cref{def:inductive_product}. Hence, $\sigma = \sigma'$. \end{proof} \begin{cor} \label{thm:inductive_product_orbits} Let $\sigma_1\in \SG_{n_1}, \sigma_2\in \SG_{n_2}$ and $\sigma := \sigma_1\iprod \sigma_2$.
Then \begin{align*} P(\sigma) = \varphi_1(P(\sigma_1)) \cup \varphi_2(P(\sigma_2)). \end{align*} \end{cor} We continue with basic properties of the inductive product. \begin{lem} \label{thm:inductive_product_two_domains} Let $\sigma_1\in \SG_{n_1}, \sigma_2\in \SG_{n_2}$ and $\sigma := \sigma_1\iprod \sigma_2$. Then for all $i\in [n]$ \begin{align*} \sigma(i) = \begin{cases} \sigma_1^{\varphi_1}(i) &\myif i\in N_1 \\ \sigma_2^{\varphi_2}(i) &\myif i\in N_2. \end{cases} \end{align*} \end{lem} \begin{proof} By \Cref{thm:inductive_product_cycles}, $\sigma = \sigma_1^{\varphi_1}\sigma_2^{\varphi_2}$. If $n_1= 0$ or $n_2= 0$ the claim is trivially true. Thus, suppose $n_1,n_2\geq 1$ and let $i\in [n]$. Consider $\sigma_1^{\varphi_1}$ and $\sigma_2^{\varphi_2}$ as elements of $\SG_n$. Since $\set{N_1,N_2}$ is a partition of $[n]$ there is exactly one $r\in \set{1,2}$ such that $i\in N_r$. We have that $\sigma_r^{\varphi_r}(N_r) = N_r$ and that $\sigma_{2-r+1}^{\varphi_{2-r+1}}$ fixes each element of $N_r$. Hence, \begin{align*} \sigma(i) = \sigma_1^{\varphi_1}\sigma_2^{\varphi_2}(i) = \sigma_r^{\varphi_r}(i). &\qedhere \end{align*} \end{proof} We now determine the image of the inductive product and show that it is injective. \begin{lem} \label{thm:inductive_product_image_and_injectivity} Let $(n_1,n_2) \vDash_0 n$. \begin{enumerate} \item The image of $\SG_{n_1} \times \SG_{n_2}$ under $\iprod$ is given by \begin{align*} \SG_{n_1} \iprod \SG_{n_2} = \set{\sigma \in \SG_n \mid \sigma(N_i) = N_i \text{ for } i = 1,2}. \end{align*} \item The inductive product on $\SG_{n_1} \times \SG_{n_2}$ is injective. \end{enumerate} \end{lem} \begin{proof} \begin{wideenumerate} \item Set $Y := \set{\sigma \in \SG_n \mid \sigma(N_i) = N_i \text{ for } i = 1,2}$. We first show $\SG_{n_1}\iprod\SG_{n_2} \subseteq Y$. Let $\sigma \in \SG_{n_1}\iprod\SG_{n_2}$. Then there are $\sigma_i \in \SG_{n_i}$ for $i = 1,2$ such that $\sigma = \sigma_1\iprod \sigma_2$. By \Cref{thm:inductive_product_two_domains} we have $\sigma(N_i) = \sigma_i^{\varphi_i}(N_i) = N_i$ for $i = 1,2$. Hence, $\sigma \in Y$. Now we show $Y \subseteq \SG_{n_1}\iprod\SG_{n_2}$. Let $\sigma \in Y$. For $i=1,2$ set $\tilde \sigma_i = \sigma |_{N_i}$ (the restriction to $N_i$). Consider $i \in \set{1,2}$. Since $\sigma\in Y$, $\tilde \sigma_i(N_i) = N_i$ and thus $\tilde \sigma_i \in \SG(N_i)$. Therefore, $\sigma_i := \varphi\inv_i \circ \tilde\sigma_i \circ \varphi_i$ is an element of $\SG_{n_i}$. Moreover, $\sigma_i^{\varphi_i}$ considered as an element of $\SG_n$ leaves each element of $N_{2-i+1}$ fixed. Hence, we have \begin{align*} (\sigma_1 \iprod \sigma_2)|_{N_i} = \sigma^{\varphi_1}_1\sigma^{\varphi_2}_2|_{N_i} = \sigma^{\varphi_i}_i|_{N_i} = \tilde \sigma_i |_{N_i} = \sigma |_{N_i}. \end{align*} Consequently, $\sigma = \sigma_1 \iprod \sigma_2$. \item Since $|N_i| = n_i$ for $i = 1,2$, the cardinality of~$Y$ is $n_1!n_2!$. This is also the cardinality of $\SG_{n_1} \times \SG_{n_2}$. As the image of $\SG_{n_1} \times \SG_{n_2}$ under $\iprod$ is~$Y$, it follows that $\iprod$ is injective. \qedhere \end{wideenumerate} \end{proof} Recall that for $\alpha \vDash_e n$, each element of $\Sigma_\alpha$ has the property that its length is maximal in its conjugacy class. We want to use this property to prove our main result. Consider $\sigma = \sigma_1 \iprod \sigma_2$ such that $\sigma_1$ has type $(n_1)$. We seek a formula for $\ell(\sigma)$ depending on $\sigma_1$ and $\sigma_2$.
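Note that $\ell(\sigma)$ is in general larger than $\ell(\sigma_1) + \ell(\sigma_2)$: for instance, for $\sigma_1 = (1,2)\in \SG_2$ and $\sigma_2 = \id \in \SG_1$ we have $\sigma_1 \iprod \sigma_2 = (1,3)\in \SG_3$ and hence $\ell(\sigma_1 \iprod \sigma_2) = 3$, whereas $\ell(\sigma_1) + \ell(\sigma_2) = 1$. The additional inversions are those pairs with one entry in $N_1$ and the other in $N_2$.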
We are particularly interested in the case where the $n_1$-cycle $\sigma_1$ is oscillating. Given $\sigma \in \SG_n$ let $\Inv(\sigma) := \set{(i,j) \mid 1\leq i < j \leq n, \sigma(i) > \sigma(j) } $ be the \emph{set of inversions} of $\sigma$. Then $\ell(\sigma) = |\Inv(\sigma)|$ by \cite[Proposition 1.5.2]{Bjorner2006}. \begin{lem} \label{thm:inductive_product_and_length} Let $\sigma_1\in \SG_{n_1}$ be an $n_1$-cycle, $\sigma_2\in \SG_{n_2}$, $\sigma := \sigma_1 \iprod \sigma_2$, \begin{align*} P &:= \set{i\in [k] \mid \sigma_1(i) > k}, \\ Q &:= \set{i\in [k+1,n_1] \mid \sigma_1(i) \leq k}, \end{align*} $p := |P|$ and $q := |Q|$. Then we have \begin{align*} \ell(\sigma) = \ell(\sigma_1) + \ell(\sigma_2) + (p+q)n_2. \end{align*} Moreover, \begin{enumerate} \item $p,q \leq \floor{\frac{n_1}2}$, \item if $\sigma_1$ is oscillating, then $p = q = \floor*{\frac{n_1}2}$. \end{enumerate} \end{lem} \begin{proof} Let $i,j\in [n]$ and $m:= \floor*{\frac{n_1}{2}}$. We distinguish three types of pairs $(i,j)$ and count the number of inversions of $\sigma$ type by type. \begin{enumerate}[label=\bf{Type \arabic*.}, nosep, wide] \item There is an $r\in \set{1,2}$ such that $i,j\in N_r$. In this case let $t\in \set{i,j}$ and set $t' := \varphi_r\inv(t)$. Then $t' \in [n_r]$. From \Cref{thm:inductive_product_two_domains} we obtain \begin{align*} \sigma(t)= \varphi_r(\sigma_r(t')). \end{align*} In addition, we have \begin{align*} \varphi_r(\sigma_r(i')) > \varphi_r(\sigma_r(j')) \iff \sigma_r(i') > \sigma_r(j') \end{align*} since $\varphi_r$ is a strictly increasing function. As $\varphi_r\inv$ is strictly increasing as well, we also have that \begin{align*} i<j \iff i' < j'. \end{align*} Hence, \begin{align*} (i,j) \in \Inv(\sigma) &\iff \text{$i < j$ and $\sigma(i) > \sigma(j)$} \\ &\iff \text{$i' < j'$ and $\varphi_r(\sigma_r(i')) > \varphi_r(\sigma_r(j'))$} \\ &\iff \text{$i' < j'$ and $\sigma_r(i') > \sigma_r(j')$} \\ &\iff (i',j') \in \Inv(\sigma_r). \end{align*} Thus, the number of inversions of Type~1 is \begin{align*} |\Inv(\sigma_1)| + |\Inv(\sigma_2)| = \ell(\sigma_1) + \ell(\sigma_2). \end{align*} \item We have $i \in N_1$, $j\in N_2$ and $i<j$. Assume that $(i,j)$ is of this type and recall that $N_1 = [k] \cup [k + n_2 + 1, n]$ and $N_2 = [k+1, k+n_2]$ where $k = \ceil{\frac{n_1}2}$. Since $i<j$, we have $i\leq k$ which in particular means that $\varphi_1^{-1}(i) = i$. As $\sigma(j) \in N_2$, $k+1 \leq \sigma(j) \leq k + n_2$. Moreover, $\sigma(i) = \sigma_1^{\varphi_1}(i)$ by \Cref{thm:inductive_product_two_domains}. Consequently, \begin{align*} \sigma(i) = \sigma_1^{\varphi_1}(i) = \varphi_1(\sigma_1(i)) = \begin{cases} \sigma_1(i) < \sigma(j) &\myif \sigma_1(i) \leq k \\ \sigma_1(i) + n_2 > \sigma(j) &\myif \sigma_1(i) > k. \end{cases} \end{align*} Therefore, \begin{align*} (i,j) \in \Inv(\sigma) \iff \sigma_1(i) > k. \end{align*} Hence, the number of inversions of Type~2 is the cardinality of the set $P \times N_2$. Thus, we have $pn_2$ inversions of Type~2. \item We have $i\in N_2$, $j \in N_1$ and $i<j$. Let $(i,j)$ be of Type~$3$. Then from $i<j$ we obtain $j\geq k+n_2+1$. In particular this type can only occur if $n_1 > 1$ because otherwise $n = 1 + n_2< j$. Since $i \in N_2$, also $\sigma(i)\in N_2$. That is, $k+1 \leq \sigma(i) \leq k+n_2$. Moreover, since $j \geq k + n_2+1$, \begin{align*} j':=\varphi_1\inv(j) = j - n_2 \end{align*} and $j' \in [k+1,n_1]$.
Hence, \begin{align*} \sigma(j) = \sigma_1^{\varphi_1}(j) = \varphi_1(\sigma_1(j')) = \begin{cases} \sigma_1(j') < \sigma(i) &\myif \sigma_1(j') \leq k \\ \sigma_1(j') + n_2 > \sigma(i) &\myif \sigma_1(j') > k. \end{cases} \end{align*} That is, \begin{align*} (i,j) \in \Inv(\sigma) \iff \sigma_1(j') \leq k \iff j' \in Q \iff j \in \varphi_1(Q) \end{align*} where we use that $j'\in [k+1, n_1]$ for the second equivalence. Consequently, the set of inversions of Type~3 is the set $N_2 \times \varphi_1(Q)$. Since $\varphi_1$ is a bijection, it follows that there are exactly $qn_2$ inversions of this type. \end{enumerate} Summing up the number of inversions of each type, we obtain the formula for the length of $\sigma$. We now prove $(1)$ and $(2)$. \begin{wideenumerate} \item By definition, $\sigma_1(P) \subseteq [k+1,n_1]$ and $Q\subseteq [k+1,n_1]$. The cardinality of $[k+1,n_1]$ is $\floor*{\frac{n_1}{2}}$. Therefore, $p,q\leq \floor*{\frac{n_1}{2}}$. \item Assume that $\sigma_1$ is oscillating. Suppose first that $n_1$ is even. Then $k = \frac{n_1}2$. Because $\sigma_1$ is oscillating, we obtain that \begin{align*} \sigma_1([k]) = [k+1,n_1] \quad \text{and} \quad \sigma_1([k+1,n_1]) = [k] \end{align*} from \Cref{def:oscillating_n-cycle} and \Cref{thm:oscillating_n_cycle_complements}. Hence, $p = q = k = \floor{\frac{n_1}2}$. Suppose now that $n_1$ is odd. Then $k = \frac{n_1+1}2$. Since $\sigma_1$ is oscillating, \Cref{def:oscillating_n-cycle} and \Cref{thm:oscillating_n_cycle_complements} yield that there is an $m\in \set{k-1,k}$ such that \begin{align*} \sigma_1([m]) = [n_1-m+1, n_1] \quad\text{and}\quad \sigma_1([m+1, n_1]) = [n_1-m]. \end{align*} It is not hard to see that this implies $p = q = k-1 = \floor{\frac{n_1}2}$. \qedhere \end{wideenumerate} \end{proof} We have seen in \Cref{exa:inductive_product} that the elements in stair form $\sigma_{(5,3,1)}$ and $\sigma_{(6,3,1)}$ can be decomposed as \begin{align*} \sigma_{(5,3,1)} = \sigma_{(5)} \iprod \sigma_{(3,1)}^{w_0} \quad \text{and} \quad \sigma_{(6,3,1)} = \sigma_{(6)} \iprod \sigma_{(3,1)} \end{align*} where $w_0$ is the longest element of $\SG_4$. We want to show that these are special cases of a general rule for decomposing the element in stair form $\sigma_\alpha$. Before we state the rule, we compare the sequences used to define the element in stair form in \Cref{thm:element_in_stair_form} for compositions of~$n$, $n_1$ and $n_2$. \begin{lem} \label{thm:inductive_product_and_x_sequence} For $m \in \N_0$ let $x^{(m)}$ be the sequence $(x^{(m)}_1,\dots, x^{(m)}_m)$ given by $x^{(m)}_{2i-1} = i$ and $x^{(m)}_{2i} = m-i+1$. Set $x:=x^{(n)}$, $y := x^{(n_1)}$ and $z := x^{(n_2)}.$ \begin{enumerate} \item We have $\varphi_1(y_i) = x_i$ for all $i\in [n_1]$. \item If $n_1$ is even then $\varphi_2(z_i) = x_{i+n_1}$ for all $i\in [n_2]$. \item If $n_1$ is odd then $\varphi_2(w_0(z_i)) = x_{i+n_1}$ for all $i\in [n_2]$ where $w_0$ is the longest element of $\SG_{n_2}$. \end{enumerate} \end{lem} \begin{proof} Recall that $k = \ceil*{\frac{n_1}2}$ and $(n_1,n_2)\vDash_0 n$ by \Cref{not:inductive_product}. Let $i\in \N$. We mainly do straightforward calculations. \begin{wideenumerate} \item Assume $2i-1 \in [n_1]$. Then $i\leq k$ and thus $\varphi_1(i) = i$. Consequently, \begin{align*} \varphi_1(y_{2i-1}) = \varphi_1(i) = i = x_{2i-1}. \end{align*} Now, assume $2i \in [n_1]$.
Then \begin{align*} n_1-i+1 =\ceil*{n_1-i+1} &\geq \ceil*{n_1-\frac{n_1}2+1} \\ &= \ceil*{\frac{n_1}2+1} =\ceil*{\frac{n_1}2} + 1 = k + 1, \end{align*} \ie $\varphi_1(n_1-i+1) = n_1+n_2-i+1$. Therefore, \begin{align*} \varphi_1(y_{2i}) = \varphi_1(n_1-i+1) = n_1+n_2-i+1 = n-i+1 = x_{2i}. \end{align*} \item Assume that $n_1$ is even. Then $n_1 = 2k$. If $2i-1\in [n_2]$ then we have \begin{align*} 2(k+i)-1 = n_1 + 2i-1\leq n_1+n_2 = n. \end{align*} Thus, \begin{align*} \varphi_2(z_{2i-1}) = \varphi_2(i) = k + i = x_{2(k+i)-1} = x_{2i -1+n_1}. \end{align*} Suppose $2i\in [n_2]$. Then $2(k+i) = n_1 + 2i \leq n$ and \begin{align*} \varphi_2(z_{2i}) = k+n_2-i+1 &= (n-2k-n_2) + k + n_2 -i+1 \\ &= n-k-i+1 \\ &= x_{2(k+i)} = x_{2i + n_1}. \end{align*} \item Assume that $n_1$ is odd. In this case $n_1 = 2k-1$. Let $w_0$ be the longest element of $\SG_{n_2}$. We have $w_0(j) = n_2-j+1$ for all $j\in [n_2]$. If $2i-1\in [n_2]$ then $2i-1 + n_1 \in [n]$ and \begin{align*} \varphi_2(w_0(z_{2i-1})) &= \varphi_2(w_0(i)) \\ &= \varphi_2(n_2-i+1) \\ &= n_2 + k - i + 1 \\ &= (n - 2k + 1 - n_2) + n_2 + k - i + 1 \\ &= n - (k+i-1) + 1 \\ &= x_{2(i+k-1)} \\ &= x_{2i-1 + 2k-1} = x_{2i-1 + n_1}. \end{align*} If $2i \in [n_2]$ then $2i + n_1 \in [n]$ and \begin{align*} \varphi_2(w_0(z_{2i})) = \varphi_2(w_0(n_2-i+1)) = \varphi_2(i) = i+k = x_{2(i+k) - 1} = x_{2i + n_1}. &\qedhere \end{align*} \end{wideenumerate} \end{proof} \begin{exa} Consider $n=9$, $n_1 = 6$ and $n_2 = 3$. Then $k = 3$. Using the notation from \Cref{thm:inductive_product_and_x_sequence} we obtain \begin{align*} x &= (1,9,2,8,3,7,4,6,5), \\ y &= (1,6,2,5,3,4), \\ z &= (1,3,2). \end{align*} Then $x = (\varphi_1(y_1) ,\dots, \varphi_1(y_6), \varphi_2(z_1), \varphi_2(z_2), \varphi_2(z_3))$ as predicted by \Cref{thm:inductive_product_and_x_sequence}. Moreover, $x,y$ and~$z$ are the sequences used to define the elements in stair form $\sigma_{(6,3)}$, $\sigma_{(6)}$ and $\sigma_{(3)}$, respectively. Therefore, \begin{align*} \sigma_{(6,3)} = (\varphi_1(y_1) ,\dots, \varphi_1(y_6))( \varphi_2(z_1), \varphi_2(z_2), \varphi_2(z_3)) = \sigma_{(6)}^{\varphi_1} \sigma_{(3)}^{\varphi_2} = \sigma_{(6)} \iprod \sigma_{(3)}. \end{align*} This also illustrates the idea of the proof of the next lemma. \end{exa} \begin{lem} \label{thm:inductive_product_and_element_in_stair_from} Let $\alpha = \parts{\alpha}{l} \vDash_e n$ with $l\geq1$. Then we have the following. \begin{enumerate} \item If $\alpha_1$ is even then $\sigma_\alpha = \sigma_{(\alpha_1)} \iprod \sigma_{(\alpha_2,\dots, \alpha_l)}$. \item If $\alpha_1$ is odd then $\sigma_\alpha = \sigma_{(\alpha_1)} \iprod \left(\sigma_{(\alpha_2,\dots, \alpha_l)}\right)^{w_0}$ where $w_0$ is the longest element of $\SG_{\alpha_2 + \dots + \alpha_l}$. \end{enumerate} \end{lem} \begin{proof} Set $n_1 := \alpha_1$ and $n_2 := \alpha_2 + \dots + \alpha_l$. 
As in \Cref{thm:inductive_product_and_x_sequence}, let $x^{(m)}$ be the sequence $(x^{(m)}_1,\dots, x^{(m)}_m)$ given by $x^{(m)}_{2i-1} = i$ and $x^{(m)}_{2i} = m-i+1$ for $m \in \N_0$ and set $x:=x^{(n)}$, $y := x^{(n_1)}$ and $z := x^{(n_2)}.$ We have that \begin{enumerate} \item $\sigma_{\alpha}$ has the cycles \begin{align*} \sigma_{\alpha_i} = \left( x_{\alpha_1 + \dots + \alpha_{i-1} + 1}, x_{\alpha_1 + \dots + \alpha_{i-1} + 2}, \dots , x_{\alpha_1 + \dots + \alpha_{i-1} + \alpha_i} \right ) \end{align*} for $i = 1,\dots, l$, \item $\sigma_{(\alpha_1)} = \left( y_{1}, y_2, \dots , y_{n_1} \right )$ and \item $\sigma_{(\alpha_2,\dots, \alpha_l)}$ has the cycles \begin{align*} \tilde\sigma_{\alpha_i} = \left( z_{\alpha_2 + \dots + \alpha_{i-1} + 1}, z_{\alpha_2 + \dots + \alpha_{i-1} + 2}, \dots , z_{\alpha_2 + \dots + \alpha_{i-1} + \alpha_i} \right ) \end{align*} for $i = 2,\dots, l$. \end{enumerate} Assume that $\alpha_1$ is even and set $\sigma := \sigma_{(\alpha_1)} \iprod \sigma_{(\alpha_2,\dots, \alpha_l)}$. From \Cref{thm:inductive_product_cycles} we obtain that $\sigma$ has the cycles $(\sigma_{(\alpha_1)})^{\varphi_1}$ and $(\tilde \sigma_{\alpha_i})^{\varphi_2}$ for $i = 2,\dots, l$. By \Cref{thm:inductive_product_and_x_sequence}, $\varphi_1(y_j) = x_j$ for $j\in [n_1]$ and $\varphi_2(z_j) = x_{\alpha_1 + j}$ for $j\in [n_2]$. As a consequence, \begin{align*} (\sigma_{(\alpha_1)})^{\varphi_1} = (\varphi_1(y_1),\dots, \varphi_1(y_{\alpha_1})) = (x_1,\dots, x_{\alpha_1}) = \sigma_{\alpha_1} \end{align*} and \begin{align*} (\tilde \sigma_{\alpha_i})^{\varphi_2 } &= \left( \varphi_2(z_{\alpha_2 + \dots + \alpha_{i-1} + 1}), \dots , \varphi_2(z_{\alpha_2 + \dots + \alpha_{i-1} + \alpha_i}) \right )\\ &= \left( x_{\alpha_1 + \dots + \alpha_{i-1} + 1}, \dots , x_{\alpha_1 + \dots + \alpha_{i-1} + \alpha_i} \right )\\ &= \sigma_{\alpha_i} \end{align*} for $i = 2,\dots, l$. Hence, $\sigma = \sigma_\alpha$. Now let $\alpha_1$ be odd. Set $\sigma := \sigma_{(\alpha_1)} \iprod \left(\sigma_{(\alpha_2,\dots, \alpha_l)}\right)^{w_0}$ where $w_0$ is the longest element of $\SG_{\alpha_2 + \dots + \alpha_l}$. Then $\sigma$ has the cycles $(\sigma_{(\alpha_1)})^{\varphi_1}$ and $((\tilde\sigma_{\alpha_i})^{w_0})^{\varphi_2}$ for $i = 2,\dots, l$. Moreover, from \Cref{thm:inductive_product_and_x_sequence} we have that $\varphi_2(w_0(z_j)) = x_{\alpha_1 + j}$ for $j\in [n_2]$. Thus, \begin{align*} ((\tilde\sigma_{\alpha_i})^{w_0})^{\varphi_2} &= \left( \varphi_2(w_0(z_{\alpha_2 + \dots + \alpha_{i-1} + 1})), \dots , \varphi_2(w_0(z_{\alpha_2 + \dots + \alpha_{i-1} + \alpha_i})) \right )\\ &= \left( x_{\alpha_1 + \dots + \alpha_{i-1} + 1}, \dots , x_{\alpha_1 + \dots + \alpha_{i-1} + \alpha_i} \right )\\ &= \sigma_{\alpha_i} \end{align*} for $i = 2,\dots, l$. As we have already shown that $(\sigma_{(\alpha_1)})^{\varphi_1}= \sigma_{\alpha_1}$, it follows that $\sigma = \sigma_\alpha$. \end{proof} The upcoming \Cref{thm:inductive_product_structures_of_Sigma} is the main result of this \namecref{sec:equivalence_classes:inductive_product}. It enables us to decompose $\Sigma_{\alpha}$ if $\alpha_1$ is even. Before we can state the result, we need to introduce some more notation. For $\alpha\vDash_e n$ we define \begin{align*} \Sigma_\alpha^\times := \set{\sigma \in \Sigma_\alpha \mid \partition(\sigma) = \partition(\sigma_\alpha)}.
\end{align*} In \Cref{thm:inductive_product_structures_of_Sigma} the set $ \left(\Sigma^\times_{\alpha}\right)^{w_0}$ appears where $w_0$ the longest element of $\SG_n$. Let $\sigma \in \Sigma_\alpha$. Then by \Cref{thm:conjugation_with_w0_and_equivalence_classes}, $\sigma^{w_0} \in \Sigma_\alpha$. Since $\partition(\sigma^{w_0}) = w_0(\partition(\sigma))$, we have \begin{align} \label{eq:Sigma_star_and_orbits} \sigma \in \left(\Sigma^\times_{\alpha}\right)^{w_0} \iff \partition(\sigma^{w_0}) = \partition(\sigma_\alpha) \iff \partition(\sigma) = \partition(\sigma^{w_0}_\alpha). \end{align} \begin{thm} \label{thm:inductive_product_structures_of_Sigma} Let $\alpha = \parts{\alpha}{l} \vDash_e n$ with $l\geq 1$. \begin{enumerate} \item If $\alpha_1$ is even then $\Sigma_{\alpha} = \Sigma_{(\alpha_1)} \iprod \Sigma_{(\alpha_2,\dots, \alpha_l)}$. \item If $\alpha_1$ is odd then $\Sigma^\times_{\alpha} = \Sigma^\times_{(\alpha_1)} \iprod \left(\Sigma^\times_{(\alpha_2,\dots, \alpha_l)}\right)^{w_0}$ where $w_0$ is the longest element of $\SG_{\alpha_2 + \cdots + \alpha_l}$. \end{enumerate} \end{thm} \begin{proof} Let $\alpha^{(1)} := (\alpha_1)$, $\alpha^{(2)} := (\alpha_2,\dots, \alpha_l)$, $n_1 := |\alpha^{(1)}|$, $n_2 := |\alpha^{(2)}|$ and $w_0$ be the longest element of $\SG_{n_2}$. We use the inductive product on $\SG_{n_1} \times \SG_{n_2}$ and the related notation. The proofs of~(1) and~(2) have a lot in common. Hence, we do them simultaneously as much as possible and separate the cases $\alpha_1$ even and $\alpha_1$ odd only when necessary. If $l =1$ then $\alpha = \alpha^{(1)}$, $\alpha^{(2)} = \emptyset$ and thus \begin{align*} \Sigma_{\alpha^{(1)}} \iprod \Sigma_{\alpha^{(2)}} = \Sigma_{\alpha} \iprod \SG_0 = \Sigma_\alpha. \end{align*} Moreover, $\Sigma^\times_{(\alpha_1)} = \Sigma_{(\alpha_1)}$ and $\left(\Sigma^\times_{\emptyset}\right)^{w_0} = \Sigma_{\emptyset}$. Thus we have (1) and (2) in this case. Now suppose $l\geq 2$. Let $\sigma := \sigma_\alpha$, $\sigma_1 := \sigma_{\alpha^{(1)}}$ and $\sigma_2 := \sigma_{\alpha^{(2)}}$ if $\alpha_1$ is even and $\sigma_2 = \sigma^{w_0}_{\alpha^{(2)}}$ if $\alpha_1$ is odd. From \Cref{thm:inductive_product_and_element_in_stair_from} we have $\sigma = \sigma_1 \iprod \sigma_2$. By \Cref{thm:parametrizations_of_kim}, $\sigma_{\alpha^{(i)}} \in \Sigma_{\alpha^{(i)}}$ for $i=1,2$. In addition, \Cref{thm:conjugation_with_w0_and_equivalence_classes} then yields that $\sigma^{w_0}_{\alpha^{(2)}} \in \Sigma_{\alpha^{(2)}}$. Consequently, $\sigma_i\in \Sigma_{\alpha^{(i)}}$ for $i=1,2$. We begin with the inclusions ``$\subseteq$''. Let $\tau \in \Sigma_{\alpha}$ with $\partition(\tau) = \partition(\sigma)$ if $\alpha_1$ is odd. First we show $\tau \in \SG_{n_1} \iprod \SG_{n_2}$. By \Cref{thm:inductive_product_image_and_injectivity}, we have to show $\tau(N_i) = N_i$ for $i=1,2$. Since $\set{N_1,N_2}$ is a set partition of $[n]$, it suffices to show $\tau(N_1) = N_1$. As $\sigma_1\in \SG_{n_1}$ is an $n_1$-cycle, $\partition(\sigma_1) = \set{[n_1]}$. Moreover, \Cref{thm:inductive_product_orbits} yields $\partition(\sigma) = \varphi_1(\partition(\sigma_1)) \cup \varphi_2(\partition(\sigma_2))$. Thus, \begin{align*} N_1 = \varphi_1([n_1]) \in \varphi_1(\partition(\sigma_1)) \subseteq \partition(\sigma). \end{align*} If $\alpha_1$ is even then $N_1\in \partition_e(\sigma)$. Moreover, \Cref{thm:characterization_of_Sigma_using_length} yields $\partition_e(\tau) = \partition_e(\sigma)$. 
Thus, $N_1\in \partition(\tau)$ which means that $\tau(N_1) = N_1$. If $\alpha_1$ is odd then $\partition(\tau) = \partition(\sigma)$ by assumption. Hence, $N_1\in P(\sigma) = P(\tau)$ and thus $\tau(N_1) = N_1$. Because $\tau \in \SG_{n_1} \iprod \SG_{n_2}$, there are $\tau_1 \in \SG_{n_1}$ and $\tau_2 \in \SG_{n_2}$ such that $\tau = \tau_1 \iprod \tau_2$. Let $i\in \set{1,2}$. We want to show $\tau_i \in \Sigma_{\alpha^{(i)}}$. Recall that $\sigma_i \in \Sigma_{\alpha^{(i)}}$. Thus, from \Cref{thm:characterization_of_Sigma_using_length} it follows that $\tau_i \in \Sigma_{\alpha^{(i)}}$ if and only if \begin{enumerate}[label = (\roman*)] \item $\sigma_i$ and $\tau_i$ are conjugate in $\SG_{n_i}$, \item $\ell(\sigma_i) = \ell(\tau_i)$ and \item $\partition_e(\sigma_i) = \partition_e(\tau_i)$. \end{enumerate} Therefore, we show that $\tau_i$ satisfies (i) -- (iii). Let~$i$ be arbitrary again. \begin{enumerate}[label = (\roman*), wide, nosep] \item For a permutation $\xi$, let $\typems(\xi)$ be the multiset of cycle lengths of $\xi$. Assume $\xi = \xi_1 \iprod \xi_2$ for $\xi_i \in \SG_{n_i}$ and $i=1,2$. From \Cref{thm:inductive_product_cycles} it follows that \begin{align} \label{eq:cycle_type_multisets} \typems(\xi) = \typems(\xi_1) \cup \typems(\xi_2). \end{align} Since $\tau = \tau_1 \iprod \tau_2$, \Cref{thm:inductive_product_orbits} implies $\partition(\tau) = \varphi_1(\partition(\tau_1)) \cup \varphi_2(\partition(\tau_2))$. Therefore, from $N_1\in \partition(\tau)$ it follows that $\partition(\tau_1) = \set{[n_1]}$. That is, $\tau_1$ is an $n_1$-cycle of $\SG_{n_1}$. By definition, $\sigma_1$ is an $n_1$-cycle of $\SG_{n_1}$ too. Thus, $\typems(\tau_1) = \typems(\sigma_1)$. Since $\tau \in \Sigma_\alpha$, $\tau$ and $\sigma$ are conjugate so that $\typems(\tau) = \typems(\sigma)$. Because of \Cref{eq:cycle_type_multisets} and $\typems(\tau_1) = \typems(\sigma_1)$, it follows that also $\typems(\tau_2) = \typems(\sigma_2)$. In other words, $\tau_i$ and $\sigma_i$ are conjugate for $i=1,2$. \item Let $m := \floor*{\frac{n_1}{2}}$. By \Cref{thm:inductive_product_and_length}, there are $p,q\leq m$ such that \begin{align*} \ell(\tau ) = \ell(\tau_1) + \ell(\tau_2) + (p+q)n_2. \end{align*} Moreover, we have $\ell(\tau_i) \leq \ell(\sigma_i)$ for $i=1,2$ because $\tau_i$ and $\sigma_i$ are conjugate and $\sigma_i\in \Sigma_{\alpha^{(i)}}$. On the other hand, $\sigma_1$ is oscillating by \Cref{thm:characterization_of_Sigma_(n)} and hence \Cref{thm:inductive_product_and_length} yields \begin{align*} \ell(\sigma ) = \ell(\sigma_1) + \ell(\sigma_2) + 2mn_2. \end{align*} Since $\tau \in \Sigma_\alpha$, we have $\ell(\tau) = \ell(\sigma)$. Therefore, we obtain from the equalities for $\ell(\tau)$ and $\ell(\sigma)$ and the inequalities for $\ell(\tau_1),\ell(\tau_2), p$ and~$q$ that $\ell(\tau_1) = \ell(\sigma_1)$ and $\ell(\tau_2) = \ell(\sigma_2)$. \item \Cref{thm:inductive_product_orbits} states that \begin{align} \label{eq:inductive_product_orbits} \partition(\xi) = \varphi_1(\partition(\xi_1)) \cup \varphi_2(\partition(\xi_2)) \end{align} for $\xi = \sigma,\tau$. This equality remains valid if we replace $\partition$ by $\partition_e$. From $\tau \in \Sigma_\alpha$ and \Cref{thm:characterization_of_Sigma_using_length} it follows that $\partition_e(\tau) = \partition_e(\sigma)$. Hence, \begin{align*} \varphi_1(\partition_e(\tau_1)) \cup \varphi_2(\partition_e(\tau_2))= \varphi_1(\partition_e(\sigma_1)) \cup \varphi_2(\partition_e(\sigma_2)).
\end{align*} Since $\varphi_1$ and $\varphi_2$ are bijections and the images of $\varphi_1$ and $\varphi_2$ are disjoint, it follows that $\partition_e(\tau_i) = \partition_e(\sigma_i)$ for $i=1,2$. This finishes the proof of $\tau \in \Sigma_{\alpha^{(1)}} \iprod \Sigma_{\alpha^{(2)}}$. \end{enumerate} It remains to show that $\tau_1 \in \Sigma^\times_{\alpha^{(1)}}$ and $\tau_2 \in \left(\Sigma^\times_{\alpha^{(2)}}\right)^{w_0}$ if $\alpha_1$ is odd. Thus, assume that $\alpha_1$ is odd. We have already seen that $\partition(\tau_1) = \partition(\sigma_1)$. Hence, $\tau_1 \in \Sigma^\times_{\alpha^{(1)}}$. Since $\alpha_1$ is odd, $\partition(\tau) = \partition(\sigma)$ by assumption and therefore we deduce from \Cref{eq:inductive_product_orbits} as above that $\partition(\tau_2) = \partition(\sigma_2)$. Now we can use that $\sigma_2 = \sigma_{\alpha^{(2)}}^{w_0}$ and obtain $\tau_2 \in \left(\Sigma^\times_{\alpha^{(2)}}\right)^{w_0}$ from \Cref{eq:Sigma_star_and_orbits}. We continue with the inclusions ``$\supseteq$''. Let $\tau_i \in \Sigma_{\alpha^{(i)}}$ for $i = 1,2$ and $\tau := \tau_1 \iprod \tau_2$. If $\alpha_1$ is odd, assume that in addition $\tau_1 \in \Sigma^\times_{\alpha^{(1)}}$ and $\tau_2 \in \left(\Sigma^\times_{\alpha^{(2)}}\right)^{w_0}$ which by \Cref{eq:Sigma_star_and_orbits} is equivalent to $\partition(\tau_i) = \partition(\sigma_i)$ for $i=1,2$. We want to show $\tau \in \Sigma_\alpha$ and again use \Cref{thm:characterization_of_Sigma_using_length} to do this. That is, we show the properties (i) -- (iii) for $\tau$ and $\sigma$. \begin{enumerate}[label = (\roman*), wide, nosep] \item For $i\in \set{1,2}$ we have $\typems(\tau_i) =\typems(\sigma_i)$ since $\tau_i\in \Sigma_{\alpha^{(i)}}$. Hence, from \Cref{eq:cycle_type_multisets} it follows that $\typems(\tau) = \typems(\sigma)$, \ie $\tau$ and $\sigma$ are conjugate. \item Since $\tau_1,\sigma_1\in \Sigma_{\alpha^{(1)}}$, they are oscillating $n_1$-cycles by \Cref{thm:characterization_of_Sigma_(n)}. Therefore, \Cref{thm:inductive_product_and_length} yields \begin{align*} \ell(\xi ) = \ell(\xi_1) + \ell(\xi_2) + 2mn_2 \end{align*} for $\xi = \sigma,\tau$ and $m = \floor{\frac{n_1}2}$. Moreover, as $\sigma_i,\tau_i \in \Sigma_{\alpha^{(i)}}$, $\ell(\tau_i) = \ell(\sigma_i)$ for $i = 1,2$. Hence, $\ell(\tau) = \ell(\sigma)$. \item Since $\xi = \xi_1 \iprod \xi_2$ for $\xi = \sigma, \tau$, Equation \Cref{eq:inductive_product_orbits} holds. This equation remains true if we substitute $\partition$ by $\partition_e$. In addition, from \Cref{thm:characterization_of_Sigma_using_length} we obtain that $\partition_e(\tau_i) = \partition_e(\sigma_i)$ for $i=1,2$. Thus, $\partition_e(\tau) = \partition_e(\sigma)$. \end{enumerate} Because of (i) -- (iii) we can now apply \Cref{thm:characterization_of_Sigma_using_length} and obtain that $\tau \in \Sigma_\alpha$. In the case where $\alpha_1$ is odd, it remains to show $\partition(\tau) = \partition(\sigma)$. But this is merely a consequence of $\partition(\tau_i) = \partition(\sigma_i)$ for $i=1,2$ and \Cref{eq:inductive_product_orbits}. \end{proof} We now infer from \Cref{thm:inductive_product_structures_of_Sigma} that the inductive product provides a bijection from $\Sigma_{(\alpha_1)} \times \Sigma_{(\alpha_2,\dots, \alpha_l)}$ to $\Sigma_\alpha$ for all $\alpha \vDash_e n$ with even $\alpha_1$. \begin{cor} \label{thm:indcutive_product_bijection} Let $\alpha = \parts{\alpha}{l} \vDash_e n$ with $l\geq 1$. 
\begin{enumerate} \item If $\alpha_1$ is even then the map $\Sigma_{(\alpha_1)} \times \Sigma_{(\alpha_2,\dots, \alpha_l)} \to \Sigma_{\alpha}$, $(\sigma_1,\sigma_2)\mapsto \sigma_1 \iprod \sigma_2$ is a bijection. \item If $\alpha_1$ is odd then the map $\Sigma^\times_{(\alpha_1)} \times \left(\Sigma^\times_{(\alpha_2,\dots, \alpha_l)}\right)^{w_0} \to \Sigma^\times_{\alpha}$, $(\sigma_1,\sigma_2)\mapsto \sigma_1 \iprod \sigma_2$ where $w_0$ is the longest element of $\SG_{\alpha_2 + \cdots + \alpha_l}$ is a bijection. \end{enumerate} \end{cor} \begin{proof} By \Cref{thm:inductive_product_image_and_injectivity} the two maps in question are injective. \Cref{thm:inductive_product_structures_of_Sigma} shows that they are also surjective. \end{proof} Recall that, given a maximal composition $\alpha = \parts{\alpha}l\vDash_e n$, there exists $0\leq j \leq l$ such that $\alpha_1,\dots, \alpha_j$ are even and $\alpha_{j+1} \geq \dots \geq \alpha_l$ are odd. Using Part~(1) of \Cref{thm:indcutive_product_bijection} iteratively, we obtain the following decomposition of the elements of $\Sigma_\alpha$. \begin{cor} \label{thm:inductive_product_splitting_even_entries_off} Let $\alpha = \parts{\alpha}l\vDash_e n$, $\sigma\in \SG_n$ of type $\alpha$ and $0\leq j \leq l$ be such that $\alpha' := (\alpha_{j+1}, \dots, \alpha_{l})$ are the odd parts of $\alpha$. Then $\sigma \in \Sigma_\alpha$ if and only if there are $\sigma_i\in \Sigma_{(\alpha_i)}$ for $i = 1,\dots, j$ and $\tau \in \Sigma_{\alpha'}$ such that \begin{align*} \sigma = \sigma_1 \iprod \sigma_2 \iprod \cdots \iprod \sigma_j \iprod \tau \end{align*} where the product is evaluated from right to left. \end{cor} \begin{exa} Consider $\alpha = (2,4,3,1,1)\vDash_e 11$. From \Cref{tbl:Sigma_(n)} and \Cref{thm:odd_hook_example_(3_1_1)} we obtain \begin{align*} \Sigma_{(2)} &= \set{(1,2)}, \\ \Sigma_{(4)} &= \set{(1,4,2,3), (1,3,2,4)}, \\ \Sigma_{(3,1,1)} &= \set{(1,5,2),(1,2,5),(1,5,3),(1,3,5),(1,5,4),(1,4,5)}. \end{align*} By \Cref{thm:inductive_product_splitting_even_entries_off}, $\Sigma_\alpha$ consists of all elements $(1,2) \iprod \left( \sigma \iprod \tau \right)$ with $\sigma \in \Sigma_{(4)}$ and $\tau \in \Sigma_{(3,1,1)}$. Thus, $|\Sigma_\alpha|= 12$. For instance, \begin{align*} (1,2) \iprod \left( (1,3,2,4) \iprod (1,3,5)\right) &= (1,2)\iprod (1,8,2,9)(3,5,7) \\ &= (1,11)(2,9,3,10)(4,6,8) \end{align*} is an element of $\Sigma_{\alpha}$. \end{exa} \begin{rem} \label{thm:inductive_product_remark_reduction_to_odd_partitions} For compositions with one part $\alpha = (n)$, \Cref{thm:characterization_of_Sigma_(n)} provides a combinatorial characterization of $\Sigma_{(n)}$. Therefore, \Cref{thm:inductive_product_splitting_even_entries_off} reduces the problem of describing $\Sigma_\alpha$ for each maximal composition $\alpha$ to the case where $\alpha$ has only odd parts. These $\alpha$ are the partitions consisting of odds parts. If $\alpha$ is an odd hook, then \Cref{thm:characterization_of_Sigma_for_odd_hook} yields that the hook properties characterize the elements of $\Sigma_\alpha$. That is, we have a description of $\Sigma_\alpha$ for all maximal compositions $\alpha$ whose odd parts form a hook. \end{rem} Let $\alpha \vDash_e n$ and $\alpha'$ be the composition formed by the odd parts of $\alpha$. We infer from \Cref{thm:inductive_product_splitting_even_entries_off} a formula that expresses $\card{\Sigma_\alpha}$ as a product of $\card{\Sigma_{\alpha'}}$ and a factor that only depends on the even parts of $\alpha$. 
In the case where $\alpha'$ is an odd hook, we can determine $\card{\Sigma_{\alpha'}}$ explicitly and thus obtain a closed formula. \begin{cor} \label{thm:inductive_product_cardinality_formula} Let $\alpha = \parts{\alpha}l\vDash_e n$, $0\leq j \leq l$ be such that $(\alpha_1,\dots, \alpha_j)$ are the even and $\alpha' := (\alpha_{j+1}, \dots, \alpha_{l})$ are the odd parts of $\alpha$, $n' := |\alpha'|$, $P := \set{i\in [j] \mid \alpha_i \geq 4}$, $p := |P|$ and $q := -2p + \frac{1}{2}\sum_{i\in P}\alpha_i$. Then \begin{align*} |\Sigma_\alpha| &= 2^p 3^{q}|\Sigma_{\alpha'}|. \end{align*} Moreover, if $\alpha'$ is a hook $(r, 1^{n'-r})$ then \begin{align*} |\Sigma_\alpha| &= \begin{cases} 2^p 3^{q} &\text{if $r \leq 1$} \\ (n'-r+1)2^{p'} 3^{q'} &\text{if $r\geq3$} \end{cases} \end{align*} where $p' := p+1$ and $q' :=q+\frac{r-3}{2}$. \end{cor} \begin{proof} Since $\alpha_1, \dots, \alpha_j$ are the even parts of $\alpha$, \Cref{thm:inductive_product_splitting_even_entries_off} implies that \begin{align} \label{eq:inductive_product_cardinality_product_rule} |\Sigma_\alpha| = \left|\Sigma_{\alpha'}\right| \prod_{i=1}^j |\Sigma_{(\alpha_i)}|. \end{align} For the same reason, \Cref{thm:sizes_of_Sigma_n} yields \begin{align*} |\Sigma_{(\alpha_i)}| = \begin{cases} 1 & \myif \text{$\alpha_i\leq2$} \\ 2 \cdot 3^{\frac{\alpha_i-4}2} & \myif \text{$\alpha_i\geq 4$}. \\ \end{cases} \end{align*} for $i = 1,\dots, j$. Therefore, \begin{align*} \prod_{i=1}^j |\Sigma_{(\alpha_i)}| = \prod_{i\in P} 2 \cdot 3^{\frac{\alpha_i-4}2} = 2^p 3^{-2p + \frac{1}{2}\sum_{i\in P}\alpha_i} = 2^p 3^q. \end{align*} and with \Cref{eq:inductive_product_cardinality_product_rule} we get the first statement. For the second part, assume that $\alpha'$ is a hook. Then, by the choice of~$j$, $\alpha'$ is an odd hook. It remains to compute $|\Sigma_{\alpha'}|$. If $\alpha' = \emptyset$ or $\alpha' = (1^{n'})$ we have $|\Sigma_{\alpha'}|=1$. If $\alpha' = (r,1^{n'-r})$ with $r\geq 3$ then \Cref{thm:odd_hook_cardinality_of_Sigma_alpha} provides the formula \begin{equation*} |\Sigma_{\alpha'}| = 2 (n'-r+1) 3^{\frac{r-3}2}. \qedhere \end{equation*} \end{proof} \begin{exa} Consider $\alpha = (2,8,4,5,1,1,1) \vDash_e 22$. Then $\alpha' = (5,1,1,1) \vDash_e 8$ is a hook, $P = \set{2,3}$, $p' = 2 + 1$ and $q' = - 2\cdot 2 + \frac{1}{2}(8+4) + \frac{5-3}{2} = 3$. Thus, \Cref{thm:inductive_product_cardinality_formula} yields $|\Sigma_\alpha| = (8-5+1) 2^3 3^3 = 864$. \end{exa} Let $\alpha = (l, 1^{n-l})\vDash_e n$ be a hook. From \Cref{thm:odd_hook_bijection} we know how to construct $\Sigma_\alpha$ from $\Sigma_{(l)}$ if~$l$ is odd. If~$l$ is even, we obtain $\Sigma_\alpha$ in the following way. \begin{cor} \label{thm:even_hook_bijection} Let $\alpha = (l,1^{n-l})\vDash_e n$ be an even hook and $\id \in \SG_{n-l}$. Then the map $\Sigma_{(l)} \to \Sigma_\alpha$, $\sigma \mapsto \sigma \iprod \id$ is a bijection. \end{cor} \begin{proof} Recall $\Sigma_{(1^{n-l})} = \set{\id}$. Then \Cref{thm:indcutive_product_bijection} yields that the map from the claim is a bijection. \end{proof} \begin{exa} Consider $\alpha = (4,1,1)$ and $\id \in \SG_2$. From \Cref{tbl:Sigma_(n)} we read \begin{align*} \Sigma_{(4)} &= \set{(1,4,2,3), (1,3,2,4)}. \end{align*} Hence, \Cref{thm:even_hook_bijection} yields \begin{align*} \Sigma_\alpha =\set{\sigma \iprod \id \mid \sigma \in \Sigma_{(4)}} =\set{ (1,6,2,5), (1,5,2,6) }.
\end{align*} \end{exa} In \Cref{thm:characterization_of_Sigma_for_odd_hook} we showed that $\Sigma_\alpha$ is characterized by the hook properties if $\alpha$ is an odd hook. In the remainder of the \namecref{sec:equivalence_classes} we want to prove that the same is true for even hooks. We first show that $\iprod$ is compatible with the concepts of being oscillating and having connected intervals. \begin{lem} \label{thm:inductive_product_osc_and_c.i} Let $\sigma_1\in \SG_{n_1}$, $\sigma_2\in \SG_{n_2}$ and $\sigma := \sigma_1 \iprod \sigma_2$. Then $\sigma$ is oscillating (has connected intervals) if and only if $\sigma_1$ and $\sigma_2$ are oscillating (have connected intervals). \end{lem} \begin{proof} Let $\sigma_r = \sigma_{r,1}\sigma_{r,2}\cdots \sigma_{r,p_r}$ be a decomposition in disjoint cycles for $r= 1,2$. Fix an $r\in \set{1,2}$ and a cycle $(c_1,\dots c_t) = \sigma_{r,j}$ of $\sigma_r$. Then by \Cref{thm:inductive_product_cycles} we have that \begin{align*} \sigma_{r,j}^{\varphi_r} = (\varphi_r(c_1),\dots, \varphi_r(c_t)). \end{align*} As $\varphi_r$ is strictly increasing, it preserves the relative order of the cycle elements so that \begin{align*} \cst( \sigma_{r,j}) = \cst(\sigma_{r,j}^{\varphi_r}). \end{align*} In addition, \Cref{thm:inductive_product_cycles} provides the cycle decomposition \begin{align*} \sigma = \sigma^{\varphi_1}_{1,1} \cdots \sigma^{\varphi_1}_{1,p_1} \cdot \sigma^{\varphi_2}_{2,1} \cdots \sigma^{\varphi_2}_{2,p_2}. \end{align*} of $\sigma$. Hence, $\sigma$ is oscillating if and only $\sigma_1$ and $\sigma_2$ are oscillating. For the same reason, $\sigma$ has connected intervals if and only if $\sigma_1$ and $\sigma_2$ have connected intervals. \end{proof} We now generalize \Cref{thm:characterization_of_Sigma_for_odd_hook} to all hooks. The hook properties can be looked up in \Cref{def:hook_properties}. \begin{thm} \label{thm:characterization_of_Sigma_for_arbitrary_hook} Let $\alpha\hdash n$ be a hook and $\sigma\in \SG_n$ of type $\alpha$. Then $\sigma \in \Sigma_\alpha$ if and only if $\sigma$ satisfies the hook properties. \end{thm} \begin{proof} Let $\alpha = (l, 1^{n-l})\hdash n$ and $\sigma \in \SG_n$ be of type $\alpha$. The case where~$l$ is odd was done in \Cref{thm:characterization_of_Sigma_for_odd_hook}. Therefore, assume that~$l$ is even. If $l = n$ then the third hook property is satisfied and therefore the $n$-cycle $\sigma \in \SG_n$ has the hook properties if and only if it is oscillating and has connected intervals. By \Cref{thm:characterization_of_Sigma_(n)} this is equivalent to $\sigma \in \Sigma_{(n)}$. Therefore we now assume $l < n$. Write $\sigma = (d_1,\dots, d_l)$ omitting the trivial cycles. We consider the inductive product on $\SG_l \times \SG_{n-l}$ and $\id \in \SG_{n-l}$. Following \Cref{not:inductive_product} we then have that \begin{align*} N_1 = \varphi_1([l]) = \left[\frac{l}2\right] \cup \left[n - \frac{l}2 +1, n\right]. \end{align*} Note that $\sigma$ satisfies the third hook property if and only if $\set{d_1, \dots, d_l} = N_1$. We begin with the implication form left to right. Assume that $\sigma \in \Sigma_{(l,1^{n-l})}$. By \Cref{thm:even_hook_bijection} there is $\tau \in \Sigma_{(l)}$ such that $\sigma = \tau \iprod \id$. Certainly $\id$ is oscillating and has connected intervals. Moreover, $\tau$ has these properties by \Cref{thm:characterization_of_Sigma_(n)}. Therefore, $\sigma$ is oscillating with connected intervals by \Cref{thm:inductive_product_osc_and_c.i}. 
Because $\sigma = \tau \iprod \id$, \Cref{thm:inductive_product_cycles} implies that we can write $\tau = (c_1,\dots, c_l)$ such that $d_i = \varphi_1(c_i)$ for $i = 1,\dots, l$. Therefore, \begin{align*} \set{d_1,\dots, d_l} = \varphi_1(\set{c_1,\dots c_l}) = \varphi_1([l]) = N_1 \end{align*} which means that $\sigma$ satisfies the third hook property. We now show the implication from right to left. Assume that $\sigma$ fulfills the hook properties. Then the third hook property yields that $\set{d_1,\dots, d_l} = N_1$ which implies that $\sigma(N_1) = N_1$. Therefore, $\sigma \in \SG_l \iprod \SG_{n-l}$ by \Cref{thm:inductive_product_image_and_injectivity}, \ie there are $\sigma_1\in \SG_l$ and $\sigma_2 \in \SG_{n-l}$ such that $\sigma = \sigma_1 \iprod \sigma_2$. From \Cref{thm:inductive_product_two_domains} we obtain that $\sigma|_{N_1}= \sigma_1^{\varphi_1}$ so that we can write $\sigma_1$ as $\sigma_1 = (c_1 ,\dots, c_l)$ with $c_i = \varphi_1\inv(d_i)$ for $i = 1,\dots, l$. It follows that $\sigma_1$ is an $l$-cycle of $\SG_l$. Since $\sigma$ fixes each element of $N_2$, it follows from \Cref{thm:inductive_product_two_domains} that $\sigma_2 = \id$. As $\sigma$ is oscillating with connected intervals, \Cref{thm:inductive_product_osc_and_c.i} implies that $\sigma_1$ has these properties as well. Thus, $\sigma_1 \in \Sigma_{(l)}$ by \Cref{thm:characterization_of_Sigma_(n)}. Hence, we can apply \Cref{thm:even_hook_bijection} and obtain that \begin{displaymath} \sigma = \sigma_1 \iprod \id \in \Sigma_{(l,1^{n-l})}. \qedhere \end{displaymath} \end{proof} \begin{rem} \label{thm:remark_on_generalisation_of_description_of_Sigma_alpha} In \Cref{thm:inductive_product_remark_reduction_to_odd_partitions} we reduced the problem of describing $\Sigma_\alpha$ for all maximal compositions $\alpha$ to the partitions with only odd parts. As we have such a description for odd hooks, it remains to find a combinatorial description of $\Sigma_\alpha$ in the case where $\alpha$ is a partition of odd parts which is not a hook. Then $\Sigma_\alpha$ consists of all permutations of type $\alpha$ of maximal length. Unfortunately, the situation is a lot more complex. One reason for this is the following. For any subset $\Sigma$ of $\SG_n$ define \begin{align*} \partition(\Sigma) := \set{\partition(\sigma) \mid \sigma \in \Sigma}. \end{align*} In general, $\partition(\sigma_\alpha)$ is not the only element of $\partition(\Sigma_\alpha)$ and there seems to be no obvious way to describe $P(\Sigma_\alpha)$. Moreover, the number of $\sigma \in \Sigma_\alpha$ whose orbits yield the same set partition of $[n]$ depends on this very set partition. For example, $\Sigma_{(3,3)}$ consists of the following elements where elements with the same orbit partition occur in the same row. \begin{align*} \begin{array}{llll} (1,6,2)(3,4,5) & (1,2,6)(3,4,5) & (1,6,2)(3,5,4) & (1,2,6)(3,5,4) \\ (1,6,3)(2,4,5) & (1,6,3)(2,5,4) & (1,3,6)(2,4,5) & (1,3,6)(2,5,4) \\ (1,4,5)(2,6,3) & (1,5,4)(2,3,6) & (1,5,4)(2,6,3) & (1,4,5)(2,3,6) \\ (1,6,4)(2,3,5) & (1,4,6)(2,3,5) & (1,6,4)(2,5,3) & (1,4,6)(2,5,3) \\ (1,6,5)(2,3,4) & (1,5,6)(2,3,4) & (1,5,6)(2,4,3) & (1,6,5)(2,4,3) \\ (1,5,3)(2,4,6) & (1,3,5)(2,6,4) \\ \end{array} \end{align*} \end{rem} \bibliographystyle{amsalpha}
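As a quick numerical check of the table above, the following Python sketch brute-forces the count: assuming, as stated in the remark, that $\Sigma_{(3,3)}$ consists of the permutations of type $(3,3)$ of maximal length, and that the length $\ell$ equals the number of inversions, it enumerates the type-$(3,3)$ elements of $\SG_6$ and counts those of maximal length; it should report $40$ candidates and the $22$ maximal elements listed above.
\begin{verbatim}
from itertools import permutations

def cycle_type(p):
    # Multiset of cycle lengths of p, given in one-line notation (0-indexed).
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        size, i = 0, start
        while i not in seen:
            seen.add(i)
            i = p[i]
            size += 1
        lengths.append(size)
    return tuple(sorted(lengths, reverse=True))

def coxeter_length(p):
    # Number of inversions of p.
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

candidates = [p for p in permutations(range(6)) if cycle_type(p) == (3, 3)]
max_len = max(coxeter_length(p) for p in candidates)
sigma_33 = [p for p in candidates if coxeter_length(p) == max_len]
print(len(candidates), max_len, len(sigma_33))  # expect 40 candidates, 22 maximal
\end{verbatim}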
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} \IEEEPARstart{I}{mages} captured under low-light conditions often suffer from poor visibility, unexpected noise, and color distortion. In order to take high-quality images in low-light conditions, several operations including setting long exposures, high ISO, and flash are commonly applied. However, solely turning up the brightness of dark regions will inevitably amplify image degradation. To further mitigate the degradation caused by low-light conditions, several traditional methods have been proposed. Histogram Equalization (HE)~\cite{pizer1990contrast} rearranges the pixels of the low-light image to improve the dynamic range of the image. Retinex-based methods~\cite{wang2013naturalness, wang2014variational} decompose the low-light images into illumination and reflection maps and obtain the intensified image by fusing the enhanced reflection map and illumination map. Dehazing-based methods~\cite{dong2011fast, li2015low} regard the inverted low-light image as a haze image and improve visibility by applying dehazing. Although these methods can improve brightness, especially for dark pixels, they barely consider realistic lighting factors, often making the enhanced results visually tenuous and inconsistent with the actual scene. \begin{figure} \begin{center} \subfigure[Input]{ \includegraphics[width=0.48\linewidth]{Figures/Fig-1/Input.jpg} }\hspace*{-2mm} \subfigure[Zero-DCE~\cite{guo2020zero}]{ \includegraphics[width=0.48\linewidth]{Figures/Fig-1/Zero-DCE.jpg} } \subfigure[EnglightenGAN~\cite{jiang2021enlightengan}]{ \includegraphics[width=0.48\linewidth]{Figures/Fig-1/EnlightenGAN.jpg} }\hspace*{-2mm} \subfigure[Ours]{ \includegraphics[width=0.48\linewidth]{Figures/Fig-1/Ours.jpg} } \caption{Visual results of the proposed method compared with the state-of-the-art unsupervised low-light enhancement methods. The low-light image of (a) is from EnlightenGAN test set~\cite{jiang2021enlightengan}.} \label{fig:intro} \end{center} \end{figure} Recently, Deep Convolutional Neural Networks (CNNs) set the state-of-the-art in low-light image enhancement. Compared with traditional methods, the CNNs learn better feature representation to obtain enhanced results with superior visual quality, which benefits from the large dataset and powerful computational ability. However, most CNN-based methods require training examples with references, whereas it is extremely challenging to simultaneously capture low-light and normal-light images of the same visual scene. To eliminate the reliance on paired training data, several unsupervised deep learning-based methods~\cite{guo2020zero,zhang2020self,jiang2021enlightengan} have been proposed. These algorithms are able to restore images with better illumination and contrast in some cases. However, most unsupervised methods heavily rely on carefully selected multi-exposure training data or unpaired training data, which makes these approaches not generalize well to handle various types of images. Therefore, it is of great interest to seek a novel strategy to deal with different scenarios in the wild. In this study, we propose an unsupervised low-light image enhancement algorithm based on an effective prior termed histogram equalization prior~(HEP). Our work is motivated by an interesting observation on the pre-trained networks: the feature maps of histogram equalization enhanced image and the ground truth are similar. 
Intuitively, the feature maps of histogram equalization enhanced images can directly provide abundant texture and luminance information~\cite{geirhos2018imagenet}. We show theoretically and empirically that this generic property of the histogram equalization enhanced image holds for many low-light images; more details are shown in Section~\ref{method}. This inspires us to regularize the feature similarity between the histogram equalization enhanced images and the restored images. Following~\cite{Chen2018Retinex}, we split the low-light image enhancement process into two stages: image brightening and image denoising. The first stage decomposes the low-light images into illumination and reflectance maps, and the reflectance maps can be regarded as restored images. We formulate the histogram equalization prior to guiding the training process and add an illumination smoothness loss to suppress the texture and color information in the illumination map. However, according to the derivation based on Retinex theory~\cite{zhang2021beyond}, the reflectance maps are contaminated by noise. To improve the image quality, the second stage works as an enhancer to denoise the reflectance map. In this stage, we propose an unsupervised denoising model based on disentangled representation to remove the noise and generate the final enhanced image. The disentanglement is achieved by splitting the content and noise features in a reflectance map using content encoders and noise encoders. Inspired by~\cite{bao2018towards}, we add a KL divergence loss to regularize the distribution range of extracted noise features to suppress the contained content information. Moreover, we adopt the adversarial loss and the cycle-consistency loss as regularizers to assist the generator networks in yielding more realistic images and preserving the content of the original image. Extensive experiments demonstrate that our method performs favorably against the state-of-the-art unsupervised low-light enhancement algorithms and even matches the state-of-the-art supervised algorithms. Fig.\ref{fig:intro} shows an example of enhancing the low-light image. In comparison to state-of-the-art methods, our method delivers improved image brightness while preserving the details. In summary, the main contributions of this work are as follows: 1. We propose an effective prior termed histogram equalization prior (HEP) for low-light image decomposition and add an illumination smoothness loss to suppress the texture and color information in the illumination map. 2. We introduce a noise disentanglement module to disentangle the noise and content in the reflectance maps with the reliable aid of unpaired clean images. 3. We build an unsupervised low-light image enhancement framework based on Retinex and disentangled representation, possessing more effective training and faster convergence speed. 4. We demonstrate that the proposed method achieves remarkable performance compared with the state-of-the-art unsupervised algorithms and even matches the state-of-the-art supervised algorithms. The rest of this paper is organized as follows. Section~\ref{related} provides a brief review of some related works. Section~\ref{method} presents our proposed histogram equalization prior first, then introduces the decomposition network, finally, presents the proposed noise encoder. Section~\ref{experiment} illustrated the experimental results. Section~\ref{ablation} provided the ablation studies on each component. 
Finally, concluding remarks are provided in Section~\ref{conclusion}. \section{Related Work} \label{related} \textbf{Conventional Methods} The conventional methods for low-light image enhancement can be roughly divided into three aspects: Gamma Correction (GC)~\cite{farid2001blind}, Histogram Equalization (HE)~\cite{pizer1990contrast}, and Retinex~\cite{land1971lightness}. Gamma correction edits the gamma curve of the image to perform nonlinear tone mapping: it separates the dark and bright parts of the image signal and increases the ratio between them to improve the contrast. However, the global parameters lead to local over/under-exposure, and the global parameter itself is difficult to select. Rahman~{\emph{et al.}}~\cite{rahman2016adaptive} proposed an adaptive gamma correction method that dynamically determines the intensity conversion function based on the statistical characteristics of the image. Histogram Equalization stretches the image's dynamic range by evenly distributing the pixel values to improve the contrast and brightness of the image. However, it applies the adjustment globally, which leads to unexpected local overexposure and amplified noise. Adaptive Histogram Equalization (AHE)~\cite{pizer1987adaptive} has been proposed to map the histogram of each local region to a simple mathematical distribution. Pizer~{\emph{et al.}}~\cite{pizer1990contrast} proposed Contrast Limited Adaptive Histogram Equalization (CLAHE). This method sets a threshold; if a bin of the histogram exceeds the threshold, it is clipped and the excess is evenly redistributed over all bins. Retinex theory is a computational theory of color constancy. As a model of human visual perception, Retinex-based methods decompose images into reflectance and illumination maps. MSR~\cite{jobson1997multiscale} obtains enhanced results by fusing different single-scale Retinex outputs. MSRCR~\cite{jobson1997multiscale} improves the color distortion problem of the previous methods. However, these Retinex methods tend to produce unrealistic or partially over-enhanced results. Inspired by the Retinex theory, NPE~\cite{wang2013naturalness} was proposed for the enhancement of non-uniform illumination images. MF~\cite{fu2016fusion} was proposed to apply multi-layer fusion to image enhancement under different light conditions. LIME~\cite{guo2016lime} estimates the illumination map of the image and smooths it for enhancement. SRIE~\cite{fu2016weighted} estimates the illumination map and the reflectance map simultaneously through a weighted variational model. \textbf{Deep learning based Methods} Deep learning-based methods have dominated the research of low-light image enhancement. Lore~{\emph{et al.}}~\cite{lore2017llnet} proposed the first convolutional neural network for low-light image enhancement, termed LL-Net, which performs contrast enhancement and denoising based on a deep auto-encoder. Chen~{\emph{et al.}}~\cite{Chen2018Retinex} proposed Retinex-Net, which includes a Decom-Net that splits the input images into reflectance and illumination maps, and an Enhance-Net that adjusts the illumination map for low-light enhancement. Zhang~{\emph{et al.}} proposed KinD~\cite{zhang2019kindling}, which is similar to Retinex-Net. It presented a new decomposition network, a reflection map enhancement network, and an illumination map enhancement network, which achieved outstanding performance in low-light image enhancement.
Zhang~{\emph{et al.}} proposed KinD++~\cite{zhang2021beyond}, which improves the KinD method, and achieves state-of-the-art performance. Guo~{\emph{et al.}}~\cite{guo2020zero} proposed a zero-shot learning method named Zero-DCE, which is achieved by an intuitive and straightforward nonlinear curve mapping. However, Zero-DCE heavily relies on the usage of multi-exposure training data. Zhang~{\emph{et al.}}~\cite{zhang2020self} proposed a self-supervised method that uses the max entropy loss for better image decomposition, but the restored image still suffers from noise contamination. \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{Figures/Fig-2.jpg} \caption{Overview of the framework. The proposed method consists of two stages: (a) light-up and (b) noise disentanglement. The light-up module first decomposes the low-light image into an illumination map and reflectance map. Then the noise disentanglement module denoises the reflectance map to yield the final enhanced image. In (a), the bright channel is a 1-channel image, which is obtained by calculating the maximum channel value of the input RGB image. Then the bright channel and the input image are concatenated together to form a 4-channel image as the input of the network. In (b), blue arrows represent the data flow of the noise domain, orange arrows represent the data flow of the clean domain. $E^N$ is the noise encoder for noise images; $E^C_Y$ and $E^C_X$ are the content encoders for noise and clean images; $G_X$ and $G_Y$ are noise image and clean image generators.} \label{fig arc} \end{figure*} \textbf{Image to Image Translation} Generative Adversarial Networks~(GANs) are among the most influential generative models in computer vision. Based on the powerful generative capabilities of GANs, image-to-image translation has become an important way to achieve image enhancement by converting corrupted images into sharp images. Zhu {\emph{et al.}}~\cite{zhu2017unpaired} proposed CycleGAN, which showed tremendous capacity in the field of image domain transfer. Liu {\emph{et al.}}~\cite{liu2017unsupervised} proposed UNIT, which learns a shared latent representation for diverse image translation. Lee {\emph{et al.}}~\cite{lee2018diverse} proposed DRIT, which separates the latent space into a shared content space and an independent attribute space. Yuan~{\emph{et al.}}~\cite{yuan2018unsupervised} proposed a nested CycleGAN to achieve unsupervised image super-resolution. Lu~{\emph{et al.}}~\cite{lu2019unsupervised} extended DRIT and proposed to decompose the image into an image content domain and a noise domain to achieve unsupervised image deblurring. Based on Lu's work, Du~{\emph{et al.}}~\cite{du2020learning} added a Background Consistency Module and a Semantic Consistency Module to the networks, learning robust representations under dual-domain constraints in both the feature and image domains. Jiang~{\emph{et al.}}~\cite{jiang2021enlightengan} proposed a backbone model, EnlightenGAN, for low-light image enhancement based on adversarial learning. However, EnlightenGAN relies on a large number of parameters for good performance. \section{Methodology} \label{method} The main purpose of our method is to recover texture details, reduce noise and color bias, and maintain sharp edges for low-light image enhancement. As shown in Fig.\ref{fig arc}, the proposed method consists of two components: 1) Light Up Module (LUM); 2) Noise Disentanglement Module (NDM).
The first stage is improving the brightness of the images, and the second stage is removing the noise of the images. \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{Figures/Fig-3.jpg} \caption{Feature maps on the $conv4\_{1}$ layer of VGG-19 networks pre-trained on ImageNet.} \label{fig prior} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/Fig-4.jpg} \caption{Histogram of the cosine similarities. Green: cosine similarities between the feature maps of the input low-light images and the ground truth. Blue: cosine similarities between the histogram equalization prior and the ground truth.} \label{fig cos} \end{figure} For low-light image enhancement, unsupervised learning-based methods are complicated to implement. The main reason is that texture and color information in low-light images is difficult to extract without the aid of paired ground truth data or prior information. Therefore, we investigate an effective prior information to guide the training process and maintain the texture and structure. In the following subsections, we first introduce the proposed histogram equalization prior in Section~\ref{hist}. Then, we present the method to decompose the low-light images into reflectance maps and illumination maps in Section~\ref{lightup}. In Section~\ref{noise}, we discuss the approach to disentangle the noise and content in reflectance maps. \begin{figure} \centering \includegraphics[width=1\linewidth]{Figures/Fig-5.jpg} \caption{Feature maps on different layers of VGG-19 networks pre-trained on ImageNet with a histogram equalization enhanced image.} \label{fig fea} \end{figure} \subsection{Histogram Equalization Prior} \label{hist} The histogram equalization prior is based on histogram equalization enhanced image. Traditional histogram equalization can make the dark images visible by stretching the dynamic range of dark images via manipulating the corresponding histogram. However, it is not flexible enough for visual property adjustment in local regions and leads to undesirable local appearances, e.g., under/over-exposure and amplified noise. Encouraging the pixels of output images to match the histogram equalization enhanced images will capture unpleasant local impressions contained in the enhanced image. Inspired by \cite{johnson2016perceptual}, we can adopt the VGG feature map to constrain the perceptual similarity between the low-light image and its histogram equalization enhanced version. As shown in Fig.\ref{fig prior}, we can observe that the feature map of the input low-light image has less semantic information~\cite{wang2021rethinking}. In contrast, the feature map of histogram equalization enhanced image has rich semantic information, and it is remarkably similar to the feature map of ground truth. To further verify the validity of the histogram equalization prior, we have selected 500 paired images from the LOL dataset~\cite{Chen2018Retinex}. We calculate the cosine similarity between the feature maps of the histogram equalization enhanced image and the feature maps of the ground truth. Fig.\ref{fig cos} is the histogram of cosine similarities over all 500 low-light images. We can observe that about $80\%$ of the cosine similarities are concentrated above $0.8$. Compared with the cosine similarities between the feature maps of the input low-light images and the feature maps of the ground truth, the cosine similarities has been substantially improved. 
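For concreteness, a minimal sketch of this verification is given below (PyTorch and OpenCV; the use of torchvision's pre-trained VGG-19, the layer indexing, the omitted ImageNet normalization, and the cosine similarity over flattened feature maps are assumptions of this illustration rather than details fixed by the paper).
\begin{verbatim}
import cv2
import torch
import torch.nn.functional as F
from torchvision import models

# conv4_1 is module index 19 of torchvision's VGG-19 "features"; slicing up to 20
# therefore ends with the conv4_1 output.
vgg = models.vgg19(pretrained=True).features[:20].eval()
for p in vgg.parameters():
    p.requires_grad = False

def he_enhance(bgr):
    # Per-channel histogram equalization of a uint8 BGR image.
    return cv2.merge([cv2.equalizeHist(c) for c in cv2.split(bgr)])

def to_tensor(bgr):
    rgb = torch.from_numpy(bgr[..., ::-1].copy()).float() / 255.0
    return rgb.permute(2, 0, 1).unsqueeze(0)

def hep_cosine_similarity(low_bgr, gt_bgr):
    with torch.no_grad():
        f_he = vgg(to_tensor(he_enhance(low_bgr)))
        f_gt = vgg(to_tensor(gt_bgr))
    return F.cosine_similarity(f_he.flatten(1), f_gt.flatten(1)).item()
\end{verbatim}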
This statistic provides strong support for our histogram equalization prior, and it indicates that we can adopt this prior instead of the ground truth to guide the training process. Fig.\ref{fig fea} shows feature maps from different layers of the VGG-19~\cite{simonyan2014very} network pre-trained on ImageNet~\cite{deng2009imagenet} for a histogram equalization enhanced image. Feature maps closer to the input layer pay more attention to the specific details of texture information, and some feature maps can also show the shape of the toy's face. Feature maps farther away from the input layer are more concerned with semantic and abstract information, such as the toy's eye and nose characteristics. The feature maps of the deepest layers become more obscure and can no longer provide adequate information, while the features are similar between each group of feature maps. Based on this observation, we select the feature map of the $conv4\_{1}$ layer to measure the feature similarity. \subsection{Light Up} \label{lightup} The first stage improves the brightness of the images based on the Retinex theory. According to the Retinex theory, the images can be decomposed into reflectance maps and illumination maps. Mathematically, a degraded low-light image can be naturally modeled as follows: \begin{equation} I=R \circ L + N \end{equation} \noindent where $I$ stands for the input image, $R$ for the reflectance map, $L$ for the illumination map, and $N$ for the noise component, and $\circ$ represents element-wise multiplication. As the illumination map determines the dynamic range of images, it cannot be affected by noise. In contrast, the reflectance map represents the intrinsic properties of the images, which are often affected by noise during the imaging process. Hence, by taking simple algebra steps~\cite{zhang2021beyond}, we obtain the following formula: \begin{equation} I=R \circ L + N=R \circ L + \tilde{N} \circ L =(R + \tilde{N}) \circ L=\tilde{R} \circ L \end{equation} \noindent where $\tilde{N}$ stands for the degradation having the illumination decoupled, and $\tilde{R}$ represents the polluted reflectance map. According to the above theory, the reflectance map can be regarded as a restored image with noise. Therefore, we design a neural network to decompose the low-light images into reflectance and illumination maps, and then send the reflectance maps to the NDM for further denoising. We follow a network architecture similar to the one used in~\cite{zhang2020self}; the module framework is shown in Fig.\ref{fig arc}(a). It first uses a 9×9 convolutional layer to extract features from the input image. Then, three 3×3 convolutional+ReLU layers and one 3×3 deconvolutional+ReLU layer follow. A residual feature from the conv2 layer is concatenated with the feature from the deconv layer and fed to a 3×3 convolutional+ReLU layer. The feature from this layer is concatenated with the feature from a 3×3 convolutional+ReLU layer that extracts features from the input image. Finally, two 3×3 convolutional layers project the reflectance map and the illumination map from the feature space. The sigmoid function constrains both the reflectance map and the illumination map to the range of [0,1]. Due to the lack of ground-truth data to guide the training process, it is tough to recover these two components from low-light images. We adopt the histogram equalization prior to constrain the reflectance map.
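Before specifying the losses, the following PyTorch sketch illustrates the described decomposition network; the channel width of 64, the stride-1 layers, and the 3-channel/1-channel output heads are assumptions, since the text does not fix these details.
\begin{verbatim}
import torch
import torch.nn as nn

class LightUpNet(nn.Module):
    # Sketch of the LUM decomposition network; widths and strides are assumed.
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(4, ch, 9, padding=4)           # 9x9 feature extraction
        self.conv1 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.conv2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.conv3 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.deconv = nn.Sequential(nn.ConvTranspose2d(ch, ch, 3, padding=1), nn.ReLU(True))
        self.fuse1 = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(True))
        self.shallow = nn.Sequential(nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(True))
        self.fuse2 = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(True))
        self.to_r = nn.Conv2d(ch, 3, 3, padding=1)           # reflectance head
        self.to_l = nn.Conv2d(ch, 1, 3, padding=1)           # illumination head

    def forward(self, rgb):
        bright = rgb.max(dim=1, keepdim=True)[0]             # bright channel (Fig. 2a)
        x = torch.cat([rgb, bright], dim=1)                  # 4-channel input
        f1 = self.conv1(self.head(x))
        f2 = self.conv2(f1)
        d = self.deconv(self.conv3(f2))
        g = self.fuse1(torch.cat([f2, d], dim=1))            # residual from conv2
        h = self.fuse2(torch.cat([g, self.shallow(x)], dim=1))
        return torch.sigmoid(self.to_r(h)), torch.sigmoid(self.to_l(h))
\end{verbatim}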
We define an MSE loss between the feature map of the output reflectance map and the feature map of the input image, which we call the histogram equalization prior loss. The loss function can be formulated as follows: \begin{equation} \mathcal{L}_{hep} = \parallel F(\tilde{R}) - F(I)\parallel_2^2 \end{equation} \noindent where $F(\cdot)$ denotes the feature map extracted from a VGG-19 model pre-trained on ImageNet. Since the network decomposes the image into an illumination map and a reflectance map, the decomposed two maps should reproduce the input image. We introduce a reconstruction loss to ensure the quality of the generated image. The formula is as follows: \begin{equation} \mathcal{L}_{recon} = \parallel \tilde{R} \circ L - I\parallel_1 \end{equation} The reflectance map should preserve the texture and color details. In other words, the illumination map should be smooth in textural information while still preserving the structural boundaries. To make the illumination map aware of the image structure boundary, we modify the illumination smoothness loss proposed in \cite{zhang2021beyond}. Different from the previous loss, our illumination smoothness loss only takes the low-light input image as the reference. This term constrains the relative structure of the illumination map to be consistent with the input image, which can reduce the risk of over-smoothing on the structure boundary. The illumination smoothness loss is formulated as: \begin{equation} \mathcal{L}_{is} = \parallel \frac{\nabla L}{\max(\mid \nabla I \mid, \epsilon)}\parallel_1 \end{equation} \noindent where $\mid\!\cdot\!\mid$ means the absolute value operator, $\epsilon$ is a small positive constant for avoiding zero denominators, $\nabla$ denotes the gradient including $\nabla h$ (horizontal) and $\nabla v$ (vertical). As a result, the loss function of the LUM is as follows: \begin{equation} \mathcal{L} = \mathcal{L}_{recon} + \lambda_{hep}\mathcal{L}_{hep} + \lambda_{is}\mathcal{L}_{is} \end{equation} In our experiment, these parameters are set to $\lambda_{hep}=\lambda_{is}= 0.1$, $\epsilon=0.01$. Due to the careful settings of these loss terms, the light-up module can perform sufficiently well. Still, the light-up image is constrained by histogram equalization, as the method often causes noise and blur. Although the images generated by the network have been enhanced, their noise level still falls short of the visual quality of normal-light images. Therefore, they need to be further denoised. \subsection{Noise Disentanglement} \label{noise} Although the content information of the low-light image emerges after it is decomposed into a reflectance map, the noise contained in it seriously interferes with the clarity of the image. Therefore, we adopt a domain transfer method to eliminate the noise and retain the content information. As shown in Fig.\ref{fig arc}(b), the noise disentanglement module consists of six parts: 1) content encoders $E_X^C$ and $E_Y^C$ (due to parameter sharing, we regard the content encoders of the two domains as the same); 2) noise encoder $E^N$; 3) noise domain image generator $G_X$; 4) clean domain image generator $G_Y$; 5) noise domain image discriminator $D_X$; 6) clean domain image discriminator $D_Y$.
Given a training image sample $I_n$ from the noise domain and a training image sample $I_c$ from the clean domain, the content encoders $E_X^C$ and $E_Y^C$ extract the content features from the corresponding image samples, and the noise encoder $E^N$ extracts the noise feature from the noise image sample. $G_X$ takes the content feature of the clean domain and the noise feature of the noise domain to generate a noise image $I_{gn}$, while $G_Y$ takes the content feature of the noise domain to generate a clean image $I_{gc}$. The discriminators $D_X$ and $D_Y$ distinguish between the real and generated examples. Due to the unpaired setting, it is not trivial to disentangle the content information from a noise image. To restrict the noise encoder to encoding only the noise information, we add a KL divergence term to constrain the distribution of the noise feature extracted by the noise encoder, forcing the distribution of the noise feature to be closer to the standard normal distribution. The KL divergence is given by: \begin{equation} KL(q(z_n)\parallel p(z))=\int q(z_n)\log \frac{q(z_n)}{p(z)}dz \end{equation} \noindent where $q(z_n)$ stands for the distribution of the noise features $z_n$, and $p(z)$ stands for the standard normal distribution $N(0,1)$. As proved in \cite{kingma2013auto}, the KL divergence loss will suppress the content information contained in the noise feature $z_n$, and minimizing the KL divergence is equivalent to minimizing the following loss function, which has been proved in \cite{bao2018towards}. \begin{equation} \mathcal{L}_{KL}=\frac{1}{2}\sum_{i=1}^d(-\log(\sigma_i^2)+\mu_i^2+\sigma_i^2-1) \end{equation} \noindent where $d$ is the dimension of the noise feature, and $\mu$ and $\sigma$ are the mean and standard deviation of the noise feature. In order to make the enhanced images look like realistic normal-light images, we adopt the adversarial loss to minimize the distance between the real image distribution and the output distribution. We slightly modify the discriminator and replace the loss function with the least-squares GAN (LSGAN) loss. The adversarial loss function is as follows: \vspace{-1mm} \begin{equation} \label{adv_loss_x} \mathcal{L}_{adv}=\frac{1}{2}\mathbb{E}_{x\sim p_r}[(D(x)-b)^2]+\frac{1}{2}\mathbb{E}_{z\sim p_z}[(D(G(z))-a)^2] \end{equation} \noindent where $a$ is the label for the generated samples, $b$ is the label for the real samples, and $z$ is the latent vector. Without pairwise supervision, the denoised image may lose some content information. Similar to~\cite{zhu2017unpaired}, we introduce the cycle-consistency loss to guarantee that the generated corrupted image $I_{gn}$ translates back to the original clean image, and that the denoised image $I_{gc}$ translates back to the original corrupted sample. We define the cycle-consistency loss on both domains as: \begin{equation} \mathcal{L}_{cc}=\parallel I - \tilde{I}\parallel_1 \end{equation} \noindent where $I$ is the input sample and $\tilde{I}$ is the backward translation of the input sample. In addition to the cycle-consistency loss, we introduce a self-reconstruction loss to facilitate better perceived quality of the generated image.
The formula of the loss function is as follows: \begin{equation} \mathcal{L}_{rec} = \parallel I_{rec} - I\parallel_1 \end{equation} Following the observations from \cite{taigman2016unsupervised} that features extracted from the deep layer of pre-trained model contain rich semantic information, we add perceptual loss between the denoised images and the original corrupted images to recover finer image texture details. It could be formulated as: \begin{equation} \mathcal{L}_{per}=\parallel \phi_l(I_g) - \phi_l(I)\parallel_2^2 \end{equation} \noindent where $\phi_l(\cdot)$ represents the feature extracted from $l$-th layer of the pre-trained VGG network, $I_g$ is the generated samples. In our experiments, we use the $conv3\_{2}$ layer of the VGG-19 pre-trained network on ImageNet. To eliminate the potential color deviations in the denoised image, we adopt the color constancy loss proposed in \cite{guo2020zero}, it follows the Gray-World color constancy hypothesis that color in each sensor channel averages to gray over the entire image. The loss function can be expressed as: \begin{equation} \mathcal{L}_{col}\!=\!\sum\nolimits_{\forall(p,q)\in\epsilon}\! (\mathcal{J}^{p}-\mathcal{J}^{q})^2,\!\epsilon=\{(R,G),(R,B),(G,B)\} \end{equation} \noindent where $\mathcal{J}^{p}$ represents the the average intensity value of $p$ channel in the denoised image, $(p,q)$ represents a pair of channels. From our preliminary experiments, we find that the generated denoised samples often over-smooth in the background, then we adopt background consistency loss proposed by \cite{du2020learning}, which uses a multi-scale Gaussian-Blur operator to obtain multi-scale features respectively. The loss function is formulated as: \begin{equation} \mathcal{L}_{bc}=\sum_{\sigma=i,j,k} \lambda_\sigma \parallel B_\sigma(I) - B_\sigma(I_g)\parallel_1 \end{equation} \noindent where $\lambda_\sigma$ is the hyper-parameter to balance the errors at different Gaussian-Blur levels, $B_\sigma(\cdot)$ represents the Gaussian-Blur operator with blur kernel $\sigma$. In our experiments, we set $\lambda_\sigma={0.25, 0.5, 1.0}$ for $\sigma={5,9,15}$ respectively. The entire loss function for the NDM is summarized as follows: \begin{equation} \begin{split} \mathcal{L}=& \quad\mathcal{L}_{adv}+\lambda_{KL}\mathcal{L}_{KL}+\lambda_{cc}\mathcal{L}_{cc}+\lambda_{col}\mathcal{L}_{col}\\ & \quad+\lambda_{per}\mathcal{L}_{per}+\lambda_{bc}\mathcal{L}_{bc}+\lambda_{rec}\mathcal{L}_{rec} \end{split} \end{equation} We empirically set these parameters to $\lambda_{KL}=0.01$, $\lambda_{per}=0.1$, $\lambda_{col}=0.5$, $\lambda_{bc}=5$, $\lambda_{cc}=\lambda_{rec}=10$. At test time, given a test corrupted sample, $E^N$ and $E_X^C$ extract the noise and content features map respectively. Then $G_Y$ takes the latent vector and generates the denoised image as the outputs. 
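To make these regularizers concrete, a minimal sketch of the KL, color constancy, and background consistency terms is given below (PyTorch). The parameterization of the noise feature by a mean and log-variance, the separable Gaussian-blur implementation, and the blur standard deviation are assumptions of this illustration; inputs are assumed to be NCHW RGB tensors in $[0,1]$.
\begin{verbatim}
import torch
import torch.nn.functional as F

def kl_loss(mu, logvar):
    # Pushes the noise-feature distribution towards N(0, 1).
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1, dim=1).mean()

def color_constancy_loss(img):
    # Gray-World assumption: channel means of the denoised image should agree.
    r = img[:, 0].mean(dim=(1, 2))
    g = img[:, 1].mean(dim=(1, 2))
    b = img[:, 2].mean(dim=(1, 2))
    return ((r - g) ** 2 + (r - b) ** 2 + (g - b) ** 2).mean()

def gaussian_blur(img, ksize, sigma):
    # Separable depthwise Gaussian blur (illustrative implementation).
    half = ksize // 2
    x = torch.arange(ksize, dtype=img.dtype, device=img.device) - half
    k = torch.exp(-x ** 2 / (2 * sigma ** 2))
    k = (k / k.sum()).view(1, 1, 1, ksize)
    c = img.shape[1]
    img = F.conv2d(img, k.repeat(c, 1, 1, 1), padding=(0, half), groups=c)
    return F.conv2d(img, k.transpose(2, 3).repeat(c, 1, 1, 1), padding=(half, 0), groups=c)

def background_consistency_loss(x, y, kernels=(5, 9, 15), weights=(0.25, 0.5, 1.0)):
    # Multi-scale blur consistency between the input and the generated image.
    return sum(w * F.l1_loss(gaussian_blur(x, k, k / 3.0), gaussian_blur(y, k, k / 3.0))
               for k, w in zip(kernels, weights))
\end{verbatim}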
\begin{figure*} \begin{center} \hspace*{-4mm} \subfigure[Input]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Input_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Input_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Input_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Input_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[HE~\cite{pizer1990contrast}]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_HE_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_HE_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_HE_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_HE_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[LIME~\cite{guo2016lime}]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_LIME_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_LIME_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_LIME_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_LIME_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[Retinex-Net~\cite{Chen2018Retinex}]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_RetinexNet_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_RetinexNet_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_RetinexNet_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_RetinexNet_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[KinD++~\cite{zhang2021beyond}]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_KinD++_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_KinD++_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_KinD++_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_KinD++_2_magnifier_0.png} \end{tabular} } \hspace*{-4mm} \subfigure[Zero-DCE~\cite{guo2020zero}]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Zero-DCE_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Zero-DCE_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Zero-DCE_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Zero-DCE_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[EnlightenGAN~\cite{jiang2021enlightengan}]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_EnlightenGAN_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_EnlightenGAN_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_EnlightenGAN_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_EnlightenGAN_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[Self-Supervised~\cite{zhang2020self}]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Self-Supervised_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Self-Supervised_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Self-Supervised_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Self-Supervised_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[Ours]{ \begin{tabular}[]{c} 
\includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_Ours_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_Ours_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_Ours_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_Ours_2_magnifier_0.png} \end{tabular} }\hspace*{-5mm} \subfigure[Ground-Truth]{ \begin{tabular}[]{c} \includegraphics[width=0.19\linewidth]{Figures/Fig-6/1/Fig-1_GT_1.jpg}\\ \includegraphics[width=0.19\linewidth]{Figures/Fig-6/2/Fig-2_GT_2.jpg}\\ \includegraphics[width=0.092\linewidth]{Figures/Fig-6/1/Fig-1_GT_1_magnifier_0.png} \includegraphics[width=0.09\linewidth]{Figures/Fig-6/2/Fig-2_GT_2_magnifier_0.png} \end{tabular} } \caption{Visual comparison with other state-of-the-art methods on LOL dataset~\cite{Chen2018Retinex}. Best viewed in color and by zooming in.} \label{fig:LOL} \end{center} \end{figure*} \section{Experimental Validation} \label{experiment} In this section, we first introduce the implementation details of the proposed method for low-light image enhancement. Then we qualitatively and quantitatively compare the proposed method with the state-of-the-art methods (include supervised and unsupervised methods), we use traditional metrics to evaluate, such as Peak-Signal-Noise-Rate (PSNR), Structural Similarity (SSIM)~\cite{wang2004image}, and Natural Image Quality Evaluator (NIQE)~\cite{mittal2012making}. Furthermore, we test the proposed method on some real-world datasets while comparing them with the state-of-the-art methods in terms of visual performance and NIQE metrics. Finally, we conduct ablation studies to demonstrate the effectiveness of each component or loss in the proposed method. \subsection{Implementation Details} Since the proposed method is a two-stage model, we need to train the model separately. In the first stage, our training dataset is selected from the low-light part of the LOL dataset~\cite {Chen2018Retinex}, which includes 500 low/normal-light image pairs. During the training, we use Adam~\cite{kingma2014adam} optimizer to perform optimization with the weight decay equal to 0.0001. The initial learning rate is set to $10^{-4}$, which decreases to $10^{-5}$ after 20 epochs and then to $10^{-6}$ after 40 epochs. The batch size is set to 16 and the patch size to 48×48. In the second stage, we assemble a mixture of 481 low-light images from the LOL dataset and 481 normal-light images from the EnlightenGAN dataset~\cite{jiang2021enlightengan}. The Adam method is adopted to optimize the parameters with the momentum equal to 0.9 and the weight decay equal to 0.0001. The learning rate is initially set to $10^{-4}$ and exponential decay over the 10K iterators. The batch size is set to 16 and the patch size to 64×64. All experiments are conducted using PyTorch~\cite{paszke2017automatic} framework on an Nvidia 2080Ti GTX GPU. 
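For reference, the stage-one optimization settings described above can be sketched as follows (PyTorch); the sketch reuses the LightUpNet from Section~\ref{lightup}, substitutes a dummy batch for the data loader, and shows only the reconstruction term of the LUM loss.
\begin{verbatim}
import torch
import torch.nn.functional as F

lum = LightUpNet()                       # decomposition network sketched in the Light Up subsection
optimizer = torch.optim.Adam(lum.parameters(), lr=1e-4, weight_decay=1e-4)
# 1e-4 for the first 20 epochs, 1e-5 until epoch 40, and 1e-6 afterwards.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 40], gamma=0.1)

low = torch.rand(16, 3, 48, 48)          # dummy batch of 16 random 48x48 crops
R, L = lum(low)
loss = F.l1_loss(R * L, low)             # reconstruction term ||R o L - I||_1 only
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step()                         # stepped once per epoch in practice
\end{verbatim}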
\begin{figure*} \begin{center} \subfigure[Input]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/Input.jpg} }\hspace*{-2mm} \subfigure[HE~\cite{pizer1990contrast}]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/HE.jpg} }\hspace*{-2mm} \subfigure[LIME~\cite{guo2016lime}]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/LIME.jpg} } \subfigure[Retinex-Net~\cite{Chen2018Retinex}]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/RetinexNet.jpg} }\hspace*{-2mm} \subfigure[KinD++~\cite{zhang2021beyond}]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/KinD++.jpg} }\hspace*{-2mm} \subfigure[Zero-DCE~\cite{guo2020zero}]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/Zero-DCE.jpg} } \subfigure[EnlightenGAN~\cite{jiang2021enlightengan}]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/EnlightenGAN.jpg} }\hspace*{-2mm} \subfigure[Self-Supervised~\cite{zhang2020self}]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/Self-Supervised.jpg} }\hspace*{-2mm} \subfigure[Ours]{ \includegraphics[width=0.32\linewidth]{Figures/Fig-7/Ours.jpg} } \caption{Visual comparison with state-of-the-art methods on the SCIE dataset~\cite{cai2018learning}. Best viewed in color and by zooming in.} \label{fig:real} \end{center} \end{figure*} \subsection{Qualitative Evaluation} We first visually evaluate our proposed networks on the classical low-light image datasets: LOL datasets, and compare it with other state-of-the-art approaches with available codes, including HE~\cite{pizer1990contrast}, LIME~\cite{guo2016lime}, Retinex-Net~\cite {Chen2018Retinex}, KinD++~\cite{zhang2021beyond}, Zero-DCE~\cite{guo2020zero}, EnlightenGAN~\cite{jiang2021enlightengan}, and Self-Supervised~\cite{zhang2020self}. We have fine-tuned all models on the LOL train set and then evaluated it on the LOL test set. Fig.\ref{fig:LOL} shows some representative results for visual comparison. The enhanced results show that the EnlightenGAN and Zero-DCE fail to recover the images. HE significantly improves the brightness of the low-light image. However, it applies a contrast pull-up to each channel of RGB separately, which leads to color distortion (for example, the wall in Fig.\ref{fig:LOL}(b)). LIME enhances the images by directly estimating the illumination map, but this approach enhances both details and noise. Retinex-Net notably improves the visual quality of the low-light images, but it over-smoothes details, amplifies noise, and even produces color bias. It seems that the results of Self-Supervised, KinD++, and ours have better visual quality among all the methods. To further investigate the differences between these three methods, we have zoomed in the details inside the red and green bounding boxes. We can find from Fig.\ref{fig:LOL}(h) that Self-Supervised produces blurred results for the rotation switch in the red rectangle, while the results of KinD++ and ours show a better reconstruction. For the platform area in the green rectangle, we can see that the image estimated by Self-Supervised is corrupted, while KinD++ and our results are clearer. In summary, the best visual quality can be obtained with our proposed method and KinD++. Considering that KinD++ is a supervised method, this shows that our proposed unsupervised method is very effective. \begin{table}[t] \centering \caption{Quantitative comparisons on the LOL test set in terms of PSNR, SSIM, and NIQE. The best result is in red, whereas the second-best results are in blue, respectively. 
T, SL, and UL represent the traditional method, supervised learning method, and unsupervised learning method, respectively.} \begin{tabular}{c|c|c|c|c} \hline \textbf{Learning} &\textbf{Method} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$\\ \hline &Input &7.77 &0.191 &6.749\\ \hline \multirow{2}{*}{T} &HE~\cite{pizer1990contrast} &14.95 &0.409 &8.427\\ &LIME~\cite{guo2016lime} &17.18 &0.484 &8.221\\ \hline \multirow{2}{*}{SL} &Retinex-Net~\cite{Chen2018Retinex} &16.77 &0.425 &8.879\\ &KinD++~\cite{zhang2021beyond} &{\textcolor{red}{21.32}} &{\textcolor{red}{0.829}} &5.120\\ \hline \multirow{4}{*}{UL} &Zero-DCE~\cite{guo2020zero} &14.86 &0.562 &7.767\\ &EnlightenGAN~\cite{jiang2021enlightengan} &17.48 &0.652 &{\textcolor{blue}{4.684}}\\ &Self-Supervised~\cite{zhang2020self} &19.13 &0.651 &4.702\\ &Ours &{\textcolor{blue}{20.23}} &{\textcolor{blue}{0.790}} &{\textcolor{red}{3.780}}\\ \hline \end{tabular} \label{table:LOL} \end{table} \subsection{Quantitative Evaluation} We have also quantitatively compared our method to the other state-of-the-art methods. We have fine-tuned all models on the LOL train set and then evaluated it on the LOL test set. As shown in Table~\ref{table:LOL}, the proposed method achieves the best performance with an average PSNR score of 20.23 dB, SSIM score of 0.79, and NIEQ score of 3.78 in unsupervised methods, which exceed the second-best unsupervised method (Self-Supervised) by 1.1 dB on PSNR, 0.139 on SSIM, and 0.922 on NIQE. It demonstrates that the proposed method possesses the highest capability among all unsupervised methods and its performance is approximating the level of the state-of-the-art supervised methods. Recently, NIQE has been used to evaluate the image quality of low-light image enhancement, which evaluating real image restoration without ground truth. A smaller NIQE score indicates better visual quality. We can see from Table~\ref{table:LOL} that our method obtains the best NIQE scores in all unsupervised methods and even surpasses the state-of-the-art supervised method KinD++. It indicates that the low-light images enhanced with our method have the best visual quality. \begin{table*}[t] \centering \caption{NIQE scores on low-light image sets(MEF, LIME, NPE, VV, DICM, SCIE, ExDark, EnlightenGAN, COCO). The best result is in red whereas the second best results are in blue, respectively. 
Smaller NIQE scores indicate a better quality of perceptual tendency.} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline \textbf{Learning} &\textbf{Method} &MEF &LIME &NPE &VV &DICM &EnlightenGAN &SCIE &ExDark &COCO &Avg\\ \hline \multirow{2}{*}{T} &HE~\cite{pizer1990contrast} &3.472 &4.125 &4.289 &3.202 &3.643 &6.993 &3.373 &4.135 &4.206 &4.530\\ &LIME~\cite{guo2016lime} &3.56 &4.138 &4.194 &2.456 &3.818 &6.956 &3.222 &4.759 &4.24 &4.667\\ \hline \multirow{2}{*}{SL} &Retinex-Net~\cite{Chen2018Retinex} &4.386 &4.68 &4.567 &2.461 &4.451 &8.063 &3.705 &5.274 &4.89 &5.296\\ &KinD++~\cite{zhang2021beyond} &3.734 &4.81 &4.381 &2.352 &3.787 &{\textcolor{blue}{4.572}} &3.143 &{\textcolor{blue}{4.074}} &{\textcolor{blue}{3.896}} &{\textcolor{blue}{3.926}}\\ \hline \multirow{4}{*}{UL} &Zero-DCE~\cite{guo2020zero} &3.283 &3.782 &4.273 &3.217 &3.56 &6.582 &3.284 &4.149 &3.903 &4.386\\ &EnlightenGAN~\cite{jiang2021enlightengan} &{\textcolor{blue}{3.221}} &{\textcolor{blue}{3.678}} &{\textcolor{blue}{4.125}} &{\textcolor{red}{2.251}} &{\textcolor{blue}{3.546}} &4.609 &{\textcolor{blue}{2.939}} &4.357 &3.953 &3.973\\ &Self-Supervised~\cite{zhang2020self} &4.477 &4.966 &4.743 &3.364 &4.588 &4.872 &3.978 &5.176 &4.947 &4.758\\ &Ours &{\textcolor{red}{3.188}} &{\textcolor{red}{3.484}} &{\textcolor{red}{3.504}} &{\textcolor{blue}{2.336}} &{\textcolor{red}{3.425}} &{\textcolor{red}{3.711}} &{\textcolor{red}{2.864}} &{\textcolor{red}{3.422}} &{\textcolor{red}{3.037}} &{\textcolor{red}{3.325}}\\ \hline \end{tabular} \label{table:real} \end{table*} \subsection{Generalization Ability on Real-World Images} To further demonstrate the generalization ability of the proposed method, we have tested the proposed method on some real-world low-light image sets, including MEF~\cite{lee2011power}(17 images), LIME~\cite{guo2016lime}(10 images), NPE~\cite{wang2013naturalness}(84 images), VV\footnote{https://sites.google.com/site/vonikakis/datasets}(24 images), DICM~\cite{lee2013contrast}(64 images), EnlightenGAN~\cite{jiang2021enlightengan}(148 images), SCIE~\cite{cai2018learning}(select 100 low-light images from the datasets). Furthermore, in order to showcase this unique advantage of our method in practice, we also conduct experiments using low-light images from other datasets, which are built for object detection and recognition. We selected 216 low-light images from ExDark~\cite{loh2019getting} and 100 nighttime images from COCO~\cite{lin2014microsoft}. We have fine-tuned all models on the EnlightenGAN train set\footnote {Since EnlightenGAN dataset is unpaired and cannot be used as the train set for supervised method, so we use LOL dataset as the train set for the supervised method} and then evaluated it on all the low-light image sets. As all these datasets are unpaired, we employ the NIQE metric to provide quantitative comparisons with the state-of-the-art methods, which are used for evaluating real image restoration without ground truth. The NIQE results on nine publicly available image sets used by previous works are reported in Table~\ref{table:real}. Our method achieved the best performance in eight of these nine datasets and achieved the first place in the average score. Fig.\ref{fig:real} shows the results of a challenging image on the SCIE dataset. From the results, we can observe that our proposed method and KinD++ enhance dark regions and simultaneously preserve the color. The result is visually pleasing without obvious noise and color casts. 
In contrast, HE, LIME, and EnlightenGAN generate visually good results, but they contain some undesired artifacts (e.g., the white wall). Zero-DCE fails to recover the image. Retinex-Net and Self-Supervised over-smooth the details, amplify noise, and even produce color deviation. These comparisons demonstrate that our method has a strong generalization ability on real-world images and produces results of more natural quality.
\section{Ablation Study}
\label{ablation}
To demonstrate the effectiveness of each component proposed in Section~\ref{method}, we conduct several ablation experiments. We primarily analyze the components of our Light Up Module (LUM), which are the core contributions and play critical roles in this work.
\subsection{Contribution of Light Up}
\subsubsection{Effect of Histogram Equalization Prior}
Since the histogram equalization prior is the main contribution of our work, we carry out a comparative assessment of its validity. To this end, we use the histogram-equalization-enhanced image as the reference image and evaluate different loss functions against it: the L1 loss $\mathcal{L}_{L1}$, the MSE loss $\mathcal{L}_{MSE}$, the SSIM loss $\mathcal{L}_{SSIM}$, and the max information entropy loss $\mathcal{L}_{max}$~\cite{zhang2020self}. The formulas of these losses are as follows:
\begin{equation} \mathcal{L}_{L1} = \parallel R - H(I)\parallel_1 \end{equation}
\begin{equation} \mathcal{L}_{MSE} = \parallel R - H(I)\parallel_2^2 \end{equation}
\begin{equation} \mathcal{L}_{SSIM} = 1-SSIM(R, H(I)) \end{equation}
\begin{equation} \mathcal{L}_{max} = \parallel \max\limits_{c\in\{R,G,B\}}(R^c) - H(\max\limits_{c\in\{R,G,B\}} (I^c))\parallel_1 \end{equation}
\noindent where $H(\cdot)$ denotes the histogram equalization operation, $R$ is the reflectance map, $I$ is the input low-light image, and $R^c$ and $I^c$ denote channel $c$ of the reflectance map and of the input low-light image, respectively. The comparison results are shown in Table \ref{table:prior}. Using $\mathcal{L}_{L1}$ or $\mathcal{L}_{MSE}$ achieves similar SSIM and NIQE scores, although the PSNR obtained with $\mathcal{L}_{MSE}$ exceeds that of $\mathcal{L}_{L1}$ by 0.33 dB. $\mathcal{L}_{SSIM}$ improves the NIQE score by a large margin and even surpasses HEP in NIQE, but HEP outperforms it by 1.58 dB in PSNR and 0.047 in SSIM. $\mathcal{L}_{max}$ achieves an SSIM score similar to that of HEP, but falls behind in NIQE and PSNR by a large margin. Fig.~\ref{Figure prior} shows a visual comparison of these loss functions. $\mathcal{L}_{L1}$ and $\mathcal{L}_{MSE}$ significantly improve the brightness of the low-light images, but they introduce obvious color deviations (e.g., the color of the floor) and undesired artifacts (e.g., the dark region of the wall). $\mathcal{L}_{SSIM}$ recovers the color and texture, but with a blurry appearance. $\mathcal{L}_{max}$ suffers from color distortion. Both the quantitative and qualitative results demonstrate the effectiveness of the proposed prior.
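For clarity, a minimal PyTorch-style sketch of how these reference losses can be computed is given below. This is an illustration only: the \texttt{hist\_eq} and \texttt{ssim} callables are placeholders for a histogram equalization operator and an SSIM implementation (not part of our released code), and the mean over pixels is used in place of the norms, as is conventional for training losses.
\begin{verbatim}
import torch

def hist_eq_reference_losses(R, I, hist_eq, ssim):
    """Reference losses used in the HEP ablation (illustrative sketch).

    R       : reflectance map predicted by LUM, shape (B, 3, H, W)
    I       : input low-light image, shape (B, 3, H, W)
    hist_eq : callable applying histogram equalization to a tensor
    ssim    : callable returning the SSIM index of two tensors
    """
    H_I = hist_eq(I)                               # H(I)
    l_l1 = torch.mean(torch.abs(R - H_I))          # L_L1
    l_mse = torch.mean((R - H_I) ** 2)             # L_MSE
    l_ssim = 1.0 - ssim(R, H_I)                    # L_SSIM
    # Max information entropy style loss: compare the per-pixel maxima
    # over the RGB channels of R and of the equalized max channel of I.
    r_max = R.max(dim=1, keepdim=True).values
    i_max = I.max(dim=1, keepdim=True).values
    l_max = torch.mean(torch.abs(r_max - hist_eq(i_max)))   # L_max
    return {"L1": l_l1, "MSE": l_mse, "SSIM": l_ssim, "MAX": l_max}
\end{verbatim}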
\begin{figure*}[t] \centering \subfigure[Input]{ \includegraphics[width=0.16\linewidth]{Figures/Fig-8/input.jpg} }\hspace*{-2mm} \subfigure[with $\mathcal{L}_{L1}$]{ \includegraphics[width=0.16\linewidth]{Figures/Fig-8/L1.jpg} }\hspace*{-2mm} \subfigure[with $\mathcal{L}_{MSE}$]{ \includegraphics[width=0.16\linewidth]{Figures/Fig-8/MSE.jpg} }\hspace*{-2mm} \subfigure[with $\mathcal{L}_{SSIM}$]{ \includegraphics[width=0.16\linewidth]{Figures/Fig-8/SSIM.jpg} }\hspace*{-2mm} \subfigure[with $\mathcal{L}_{max}$]{ \includegraphics[width=0.16\linewidth]{Figures/Fig-8/Max.jpg} }\hspace*{-2mm} \subfigure[with HEP]{ \includegraphics[width=0.16\linewidth]{Figures/Fig-8/Ours.jpg} } \caption{Ablation study of the contribution of the histogram equalization prior in LUM (replacing the reflectance similarity loss $\mathcal{L}_{rs}$ with the L1 loss $\mathcal{L}_{L1}$, the MSE loss $\mathcal{L}_{MSE}$, the SSIM loss $\mathcal{L}_{SSIM}$, and the max information entropy loss $\mathcal{L}_{max}$).} \label{Figure prior} \end{figure*}
\begin{figure*}[htbp] \centering \subfigure[Input]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-9/input.jpg} }\hspace*{-2mm} \subfigure[w/o $\mathcal{L}_{recon}$]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-9/without_recon.jpg} }\hspace*{-2mm} \subfigure[w/o $\mathcal{L}_{is}$]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-9/withour_IS.jpg} }\hspace*{-2mm} \subfigure[w/o $\mathcal{L}_{hep}$]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-9/without_RS.jpg} }\hspace*{-2mm} \subfigure[full loss]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-9/LUM.jpg} } \caption{Ablation study of the contribution of the loss functions in LUM (reconstruction loss $\mathcal{L}_{recon}$, illumination smoothness loss $\mathcal{L}_{is}$, histogram equalization prior loss $\mathcal{L}_{hep}$). Red rectangles indicate the obvious differences and amplified details.} \label{Figure abs1loss} \end{figure*}
\begin{table}[t] \centering \caption{Ablation study of the contribution of the histogram equalization prior in LUM in terms of PSNR, SSIM and NIQE.} \begin{tabular}{c|c|c|c} \hline \textbf{Loss Function} & \textbf{PSNR}$\uparrow$ & \textbf{SSIM}$\uparrow$ & \textbf{NIQE}$\downarrow$ \\ \hline Input &7.77 &0.191 &6.749\\ with $\mathcal{L}_{L1}$ &17.51 &0.687 &6.343\\ with $\mathcal{L}_{MSE}$ &17.84 &0.698 &6.649\\ with $\mathcal{L}_{SSIM}$ &17.94 &0.654 &4.869\\ with $\mathcal{L}_{max}$ &18.29 &0.690 &7.294\\ with HEP &19.52 &0.701 &5.480\\ \hline \end{tabular} \label{table:prior} \end{table}
\subsubsection{Effect of Loss functions}
We present the results of LUM trained with various combinations of losses in Fig.~\ref{Figure abs1loss}. Without the reconstruction loss $\mathcal{L}_{recon}$, the model fails to brighten the image, which shows the importance of the reconstruction loss for the quality of the generated image. The results without the illumination smoothness loss $\mathcal{L}_{is}$ have relatively lower contrast than the full results, which shows that a smooth illumination map can help brighten the reflectance map. Finally, removing the histogram equalization prior loss $\mathcal{L}_{hep}$ hampers the correlations between neighboring regions, leading to obvious artifacts. To further demonstrate the effectiveness of each loss, we conduct several experiments on the LOL dataset. The evaluation results for each loss are shown in Table~\ref{table:loss1}. The results show that without the histogram equalization prior loss, the PSNR decreases from 19.52 to 9.00 and the SSIM decreases from 0.701 to 0.540.
This demonstrates the importance of the histogram equalization prior loss; its role has already been examined in detail in the ablation study of the prior in the previous subsection.
\begin{table}[t] \centering \caption{Ablation study of the contribution of loss functions in LUM in terms of PSNR, SSIM and NIQE.} \begin{tabular}{c|c|c|c} \hline \textbf{Loss Function} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$\\ \hline Input &7.77 &0.191 &6.749\\ w/o $\mathcal{L}_{hep}$ &9.00 &0.540 &4.539\\ w/o $\mathcal{L}_{recon}$ &17.06 &0.675 &6.782\\ w/o $\mathcal{L}_{is}$ &17.93 &0.621 &6.350\\ full loss &19.52 &0.701 &5.480\\ \hline \end{tabular} \label{table:loss1} \end{table}
\subsection{Contribution of Noise Disentanglement}
\subsubsection{Effect of Network Architecture}
In this part, we compare three different denoising approaches: the traditional denoising tool BM3D~\cite{dabov2007image}, a GAN-based denoising method~\cite{du2020learning}, which has a similar architecture to ours, and our proposed NDM. Fig.~\ref{fig noise} shows the comparison results of these three methods. BM3D and the GAN-based method are state-of-the-art denoising methods. However, the results show that BM3D suppresses the noise but blurs the image. The GAN-based method is visually similar to our proposed NDM, but its image is overexposed compared to the ground truth. The result of our proposed NDM contains more delicate details and more vivid colors than the other methods. As the quantitative results in Table~\ref{table:arc2} show, the NDM improves over the GAN-based denoising method by a large margin in terms of PSNR and outperforms BM3D by about 0.66 dB in PSNR, 0.014 in SSIM, and 2.497 in NIQE. The best results in this comparison confirm the effectiveness of the new NDM design.
\begin{figure*}[htbp] \centering \subfigure[Input]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-10/Input.jpg} }\hspace*{-2mm} \subfigure[LUM + BM3D~\cite{dabov2007image}]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-10/BM3D.jpg} }\hspace*{-2mm} \subfigure[LUM + Du~{\emph{et al.}}~\cite{du2020learning}]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-10/LIR.jpg} }\hspace*{-2mm} \subfigure[LUM + NDM]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-10/Ours.jpg} }\hspace*{-2mm} \subfigure[Ground-Truth]{ \includegraphics[width=0.19\linewidth]{Figures/Fig-10/GT.jpg} } \caption{Ablation study of the contribution of the noise encoder in NDM (compared with BM3D and a GAN-based denoising model). Red rectangles indicate the obvious differences and amplified details.} \label{fig noise} \end{figure*}
\begin{table}[t] \centering \caption{Ablation study of the contribution of the noise encoder in NDM in terms of PSNR, SSIM, and NIQE.} \begin{tabular}{c|c|c|c} \hline \textbf{Denoise Model} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$ \\ \hline LUM &19.52 &0.701 &5.480\\ LUM + BM3D~\cite{dabov2007image} &19.57 &0.776 &6.277\\ LUM + Du~{\emph{et al.}}~\cite{du2020learning} &18.74 &0.791 &4.539\\ LUM + NDM &20.23 &0.790 &3.780\\ \hline \end{tabular} \label{table:arc2} \end{table}
\subsubsection{Effect of Loss functions}
We evaluate the loss functions of the NDM; the evaluation results are shown in Table~\ref{table:loss2}. From the results, we can conclude that removing the self-reconstruction loss $\mathcal{L}_{recon}$ degrades both the PSNR and NIQE scores.
Without the KL divergence loss $\mathcal{L}_{KL}$, the background consistency loss $\mathcal{L}_{bc}$, or the perceptual loss $\mathcal{L}_{per}$, all metrics drop considerably. Removing the adversarial loss $\mathcal{L}_{adv}$ causes the SSIM and NIQE scores to degrade substantially. Finally, when the cycle-consistency loss $\mathcal{L}_{cc}$ is removed, the NIQE score improves slightly by 0.028, but at the same time the PSNR and SSIM drop by 0.32 dB and 0.010, respectively. The overall loss function of the NDM is designed to transfer a noisy image to a clean image, and it performs stronger noise suppression in regions whose brightness is significantly boosted by the enhancement guided by the histogram equalization prior.
\begin{table}[t] \centering \caption{Ablation study of the contribution of loss functions in NDM in terms of PSNR, SSIM, and NIQE.} \begin{tabular}{c|c|c|c} \hline \textbf{Loss Function} &\textbf{PSNR}$\uparrow$ &\textbf{SSIM}$\uparrow$ &\textbf{NIQE}$\downarrow$ \\ \hline w/o $\mathcal{L}_{adv}$ &19.66 &0.705 &5.299\\ w/o $\mathcal{L}_{KL}$ &19.68 &0.778 &4.394\\ w/o $\mathcal{L}_{per}$ &19.83 &0.781 &4.389\\ w/o $\mathcal{L}_{cc}$ &19.91 &0.780 &3.752\\ w/o $\mathcal{L}_{bc}$ &19.92 &0.785 &4.143\\ w/o $\mathcal{L}_{recon}$ &19.96 &0.783 &4.234\\ full loss &20.23 &0.790 &3.780\\ \hline \end{tabular} \label{table:loss2} \end{table}
\section{Conclusion}
\label{conclusion}
In this work, we propose an unsupervised network for low-light image enhancement. Inspired by Retinex theory, we design a two-stage network to enhance the low-light image. The first stage is an image decomposition network termed the light up module (LUM), and the second stage is an image denoising network termed the noise disentanglement module (NDM). The LUM brightens the image by decomposing it into reflectance and illumination maps. In the absence of ground truth, we introduce an effective prior termed the histogram equalization prior to guide the training process, which is an extension of histogram equalization that investigates the spatial correlation between feature maps. Benefiting from the abundant information of the histogram equalization prior, the reflectance maps generated by LUM simultaneously improve brightness and preserve texture and color information. The NDM further denoises the reflectance maps to obtain the final images while preserving more natural color and texture details. Both qualitative and quantitative experiments demonstrate the advantages of our model over state-of-the-art algorithms. In future work, we intend to explore more effective priors for low-light image enhancement and to investigate GAN-based methods for low-light to normal-light image transfer. Since low-light enhancement alone has limited application value, we also expect to integrate enhancement algorithms with high-level tasks, such as object detection and semantic segmentation, which can be used in autonomous driving to provide reliable visual aids in dark and challenging environments.
\ifCLASSOPTIONcaptionsoff \newpage \fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Episodic accretion onto low-mass pre-main sequence (PMS) stars is no longer considered an oddity. It is now regarded as one of the important stages in the grand scheme of evolution of low-mass PMS stars, even though it is a poorly understood phenomenon. The short outburst timescales, compared to the millions of years spent in the formation stage of these PMS stars, make these events extremely rare, although statistically each PMS star is expected to experience $\sim$50 such short duration outbursts during its formation stages \citep{2013MNRAS.430.2910S}. The outbursts, although short in duration, are capable of delivering a substantial fraction of the circumstellar mass onto the central PMS star \citep{2006ApJ...650..956V}. These events have been observed to span the entire age range of young stars, from the embedded Class 0 sources to the Class\,{\sc ii} sources \citep{2015ApJ...800L...5S}. Based on the outburst timescales and spectroscopic features, these sources have been classically divided into two categories: FUors, which experience luminosity outbursts of 4-5 mag that last for several decades and show only absorption lines in their spectra, and EXors, which experience luminosity outbursts of 2-3 mag that last from a few months to a few years and show emission lines in their spectra \citep{1977ApJ...217..693H,1996ARA&A..34..207H, 1998apsf.book.....H}. The physical origin of the sudden enhancement of the accretion rate is not yet clear. However, a variety of models, ranging from thermal instability, magneto-rotational instability, a combination of magneto-rotational and gravitational instabilities, and disc fragmentation to external perturbations, have been proposed \citep{2014prpl.conf..387A}. To arrive at a general consensus about the physics behind such sudden enhancements of the accretion rate, a large sample of FUor/EXor sources is required to test the above instability models. However, only about 25 FUor/EXor sources have been discovered so far \citep[][]{2014prpl.conf..387A}. Therefore, any newly discovered source provides an important test-bed to probe the various physical aspects of episodic accretion and to compare them with the previously known sources. The $Gaia$ Photometric Alert System \citep{2012gfss.conf...21W,2013RSPTA.37120239H} is dedicated to issuing transient alerts. Previously, it has issued three proven alerts for eruptive young stars: Gaia 17bpi \citep{2018ApJ...869..146H}, Gaia 19ajj \citep{2019AJ....158..240H} and Gaia 18dvy \citep{2020ApJ...899..130S}. Among these, Gaia 17bpi and Gaia 18dvy are classified as FUors, while Gaia 19ajj has been classified as an EXor, with spectral features similar to those of V2492 Cyg \citep{2019AJ....158..240H}. The $Gaia$ alert system issued a notification on 2020 August 28 about Gaia 20eae, with the transient identification number AT2020nrs, stating that it had undergone a 4.6 mag outburst. The rise timescale and the amplitude of the outburst suggest that this should be an FUor/EXor phenomenon. We have carried out optical and near-infrared (NIR) photometric and spectroscopic observations and combined them with archival optical and infrared (IR) data to identify the outburst features of Gaia 20eae. In this paper, we present the initial findings on this source. Section \ref{pt0} provides details on the location and distance of Gaia 20eae. Section \ref{pt1} describes the observations and the data reduction procedures.
In Section \ref{pt2}, we describe the results obtained, while in Section \ref{pt3} we conclude with our understanding of the present outburst in the context of the FUor/EXor phenomenon.
\begin{figure*}[h] \centering \includegraphics[width=0.65\textwidth]{f-w4-w3-K.pdf} \includegraphics[width=0.31\textwidth]{outburst.pdf} \caption{\label{fima} Color-composite image obtained by using the WISE 22 $\mu$m (red), WISE 12 $\mu$m (green) and 2MASS 2.2 $\mu$m (blue) images of the $\sim30^\prime\times 30^\prime$ Field of View (FOV) around Gaia 20eae. The locations of Gaia 20eae, IRAS 19230+1506 and IRAS 19236+1456 \citep{2017ApJ...839..113R} are shown by cyan circles. The locations of the standard stars from the ZTF sky survey are also shown with green circles. Sub-panels 1(a) and 1(b) show the pre-outburst and post-outburst phases of Gaia 20eae in optical color-composite images taken from SDSS and HCT, respectively.} \end{figure*}
\section{G\lowercase{aia} 20\lowercase{eae}: Location and distance}
\label{pt0}
Gaia 20eae ($\alpha$$_{2000}$ =19$^{h}$25$^{m}$40$^{s}.61$, $\delta$$_{2000}$ = +15$^\circ$07$^\prime$46$^{\prime\prime}.5$; $l = 50.258492^\circ, b= -00.507730^\circ$) is located near the edge of the W51 star-forming complex. The W51 star-forming region is known to be one of the most massive and active star-forming sites of our Galaxy, located at a distance of about 5 kpc from us \citep{2017arXiv170206627G}. \citet{2017ApJ...839..113R} have listed two molecular clouds, MC1 (size$\sim15^\prime \times15^\prime$) and MC2 (size$\sim30^\prime \times36^\prime$), in this direction at two different distances of $1.3\pm0.2$ kpc and $3.4\pm0.4$ kpc, respectively. The distances of these molecular clouds are derived kinematically using the CS ($2\rightarrow1$) line velocities \citep{2004A&A...426...97F}. MC1 and MC2 are also associated with the IRAS sources IRAS 19230+1506 and IRAS 19236+1456, respectively \citep{2017ApJ...839..113R}. These molecular clouds do not have any optical nebula associated with them, indicating that the star-formation activity has started recently. Also, the IRAS sources have ultra-compact H\,{\sc ii} region (UCH{\sc ii}) colors, indicating that these molecular clouds are high-mass star-forming regions. In Figure \ref{fima}, we show the location of Gaia 20eae along with the IRAS sources in the IR color-composite image generated from the WISE 22 $\mu$m (red), WISE 12 $\mu$m (green), and 2MASS 2.2 $\mu$m (blue) images. Heated dust grains (22 $\micron$ emission) can be seen at several places, including at the locations of the IRAS sources. The warm dust towards the south of Gaia 20eae is surrounded by 12 $\mu$m emission, which covers the prominent polycyclic aromatic hydrocarbon (PAH) feature at 11.3 $\mu$m, indicative of a photon-dominated region (PDR) under the influence of feedback from massive stars \citep[see e.g.][]{2004ApJ...613..986P}. This indicates that Gaia 20eae is located at a site showing signatures of recent star-formation activity. Until the release of data from the Gaia mission\footnote{https://sci.esa.int/web/gaia}, there was no direct measurement of the distance to Gaia 20eae. Recently, by adding corrections to the Gaia data release 3 (DR3) parallaxes using a Bayesian inference approach that accounts for the non-linearity of the transformation and the asymmetry of the resulting probability distribution, \citet{2021AJ....161..147B} estimated the distances of stars in our Galaxy.
Therefore, for Gaia 20eae, we have adopted the distance of 3.2 $\pm$ 1 kpc estimated by \citet{2021AJ....161..147B}. Since the molecular cloud MC2 is located in the same direction at a distance of $3.4\pm0.4$ kpc \citep{2017ApJ...839..113R}, Gaia 20eae seems to be associated with MC2. This makes it the farthest FUor/EXor type source discovered to date. Almost all the previously discovered FUors/EXors are located at distances of $\sim$1 kpc or less.
\begin{table*} \centering \caption{Log of Photometric and Spectroscopic Observations.} \label{tab:obs_log} \begin{tabular}{@{}rrrrr@{}} \hline Telescope/Instrument & Date & Julian Day & Filters/Grisms & Exposure(sec) $\times$ Number of frames \\ \hline 2.0m HCT HFOSC & 2020 Aug 29 & 2459091 & $Gr 7, Gr 8$ & 2100$\times$1, 1800$\times$1 \\ 2.0m HCT HFOSC & 2020 Aug 30 & 2459092 & $Gr 7$ & 2400$\times$1\\ 2.0m HCT HFOSC & 2020 Aug 31 & 2459093 & $B, V, R, I, Gr 7$ & 120$\times$1,60$\times$1,30$\times$1,30$\times$2, 2400$\times$1\\ 2.0m HCT HFOSC & 2020 Sep 01 & 2459094 & $Gr 7, Gr8$ & 2400$\times$1, 2400$\times$1 \\ 2.0m HCT HFOSC & 2020 Sep 07 & 2459100 & $B, V, R, I, Gr7, Gr8$ & 120$\times$2, 60$\times$2, 30$\times$2, 30$\times$2, 2400$\times$1, 1800$\times$1\\ 10.0m HET HPF & 2020 Sep 11 & 2459104 & NIR Cross dispersed echelle & 617.7$\times$2\\ 2.0m HCT HFOSC & 2020 Sep 12 & 2459105 & $B, V, R, I$ & 120$\times$2, 60$\times$2, 30$\times$2, 30$\times$2 \\ 10.0m HET LRS2 & 2020 Sep 12 & 2459105 & Optical low resolution spectra & 157.7$\times$2\\ 2.0m HCT HFOSC & 2020 Sep 14 & 2459107 & $B, V, R, I, Gr7, Gr8$ & 180$\times$4, 60$\times$4, 30$\times$3, 30$\times$2,2700$\times$1, 2700$\times$1\\ 1.3m DFOT 2KCCD & 2020 Oct 11 & 2459134 & $V, R, I$ & 10$\times$30, 10$\times$12, 10$\times$12\\ 1.3m DFOT 2KCCD & 2020 Oct 12 & 2459135 & $V, R, I$ & 10$\times$20, 10$\times$8, 10$\times$8\\ 1.3m DFOT 2KCCD & 2020 Oct 13 & 2459136 & $V, R, I$ & 15$\times$20, 15$\times$8, 15$\times$8\\ 0.5m ARCSAT & 2020 Oct 12 & 2459135 & $g,r, i, z$ & 300$\times$1,120$\times$6,120$\times$6,180$\times$6\\ 0.5m ARCSAT & 2020 Oct 13 & 2459136 & $g,r, i, z$ & 360$\times$5,300$\times$5,240$\times$11,240$\times$5 \\ 0.5m ARCSAT & 2020 Oct 18 & 2459141 & $g,r, i, z$ & 900$\times$2,300$\times$3,240$\times$3\\ 1.3m DFOT 2KCCD & 2020 Oct 19 & 2459142 & $B, V, R, I$ & 60$\times$4, 60$\times$3, 60$\times$1, 60$\times$1\\ 1.3m DFOT 2KCCD & 2020 Oct 20 & 2459143 & $B, V, R, I$ & 60$\times$4, 60$\times$3, 60$\times$1, 60$\times$1\\ 3.6m DOT TANSPEC & 2020 Oct 21 & 2459144 &$J,H, K$ & 10$\times$4$\times$ 5 dither \\ 3.6m DOT TANSPEC & 2020 Oct 24 & 2459147 & Cross Dispersed spectra & 150$\times$8 \\ 3.6m DOT TANSPEC & 2020 Nov 06 & 2459160 & Cross Dispersed spectra & 150$\times$8 \\ 1.3m DFOT 2KCCD & 2020 Nov 08 & 2459162 & $B, V, R, I$ & 240$\times$2, 150$\times$1, 60$\times$1, 60$\times$1 \\ 1.3m DFOT 2KCCD & 2020 Nov 13 & 2459167 & $B, V, R, I$ & 240$\times$7, 150$\times$7, 60$\times$7, 60$\times$7\\ 1.3m DFOT 2KCCD & 2020 Nov 14 & 2459168 & $B, V, R, I$ & 240$\times$4, 150$\times$2, 60$\times$1, 60$\times$1\\ 1.3m DFOT 2KCCD & 2020 Dec 07 & 2459191 & $V, R, I$ & 60$\times$3,60$\times$3,60$\times$3\\ \hline \end{tabular} \end{table*}
\section{Observations and Data Reduction}
\label{pt1}
\subsection{Photometric data}
\subsubsection{Present data}
We have monitored Gaia 20eae photometrically in optical bands at 16 different epochs with the Himalayan Faint Optical Spectrograph Camera\footnote{\url{https://www.iiap.res.in/iao/hfosc\_details.html}} (HFOSC, 4 epochs) on the 2m Himalayan
\textit{Chandra} Telescope (HCT), Hanle, India, the ANDOR 2K CCD (9 epochs) on the 1.3m Devasthal Fast Optical Telescope (DFOT), Nainital, India, and the FlareCam 1K CCD\footnote{\url{https://www.apo.nmsu.edu/Telescopes/ARCSAT/Instruments/arcsat_instruments.html}} on the 0.5m ARC Small Aperture Telescope (ARCSAT, 3 epochs), New Mexico\footnote{The Apache Point Observatory (APO) is located in Sunspot, New Mexico, and is operated by the Astrophysical Research Consortium (ARC). ARCSAT is a 0.5m classical Cassegrain telescope, formerly known as the SDSS Photometric Telescope (PT).}, from 2020 August to December. We have also obtained near-infrared (NIR) photometric data of Gaia 20eae during its outburst state using the TIFR-ARIES Near Infrared Spectrometer \citep[TANSPEC;][]{2018BSRSL..87...58O} mounted on the 3.6m Devasthal Optical Telescope (DOT), Nainital, India, on the night of 2020 October 24. Table \ref{tab:obs_log} provides the complete log of photometric observations presented in this work. We have used standard data reduction procedures for the image cleaning, photometry, and astrometry \citep[for details, see ][]{2020MNRAS.498.2309S}. We have derived the following color transformation equations using the available magnitudes in different filters (i.e., the APASS DR10 archive\footnote{https://www.aavso.org/download-apass-data} or the Two Micron All Sky Survey (2MASS) archive\footnote{https://irsa.ipac.caltech.edu/Missions/2mass.html}) of all the stars in the frame at epoch JD $=$ 2459141 (for optical) and JD $=$ 2459144 (for NIR). \begin{equation} \label{eqn1} B-V = 1.06\pm0.02 \times (b-v) - 0.71\pm0.03 \end{equation} \begin{equation} \label{eqn2} V-R = 0.77\pm0.02 \times (v-r) - 0.21\pm0.02 \end{equation} \begin{equation} \label{eqn3} R-I = 0.88\pm0.04 \times (r-i) + 0.66\pm0.02 \end{equation} \begin{equation} \label{eqn4} R-r = 0.09\pm0.04 \times (V-R) + 2.35\pm0.02 \end{equation} \begin{equation} \label{eqn5} J-H = 0.90\pm0.06 \times (j-h) + 0.05\pm0.09 \end{equation} \begin{equation} \label{eqn6} H-K = 1.00\pm0.07 \times (h-k) + 0.52\pm0.05 \end{equation} \begin{equation} \label{eqn7} J-j = 0.01\pm0.04 \times (J-H) - 1.20\pm0.04 \end{equation} In order to calibrate the photometry of Gaia 20eae at other epochs, we have used these equations with intercepts estimated from a set of 7 non-variable standard stars (see Table \ref{tab:std_log}). These non-variable standard stars were identified from the Zwicky Transient Facility (ZTF) sky survey \citep{Bellm_2018} based on their zr band light curves (LCs\footnote{https://irsa.ipac.caltech.edu/Missions/ztf.html}). Table \ref{tab:phot_tab} lists the magnitudes of Gaia 20eae in different filters at different epochs of our observations.
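As an illustration of how these transformation equations are applied, a minimal Python sketch is given below. The function name is illustrative and the intercepts are the representative values quoted in Eqs. (\ref{eqn1})-(\ref{eqn4}); in practice, the intercepts are re-estimated at each epoch from the local standard stars of Table \ref{tab:std_log}.
\begin{verbatim}
def calibrate_optical(b, v, r, i):
    """Convert instrumental b, v, r, i magnitudes of a star to the
    standard Johnson-Cousins system (illustrative sketch only)."""
    BV = 1.06 * (b - v) - 0.71   # Eq. (1): B - V
    VR = 0.77 * (v - r) - 0.21   # Eq. (2): V - R
    RI = 0.88 * (r - i) + 0.66   # Eq. (3): R - I
    R = r + 0.09 * VR + 2.35     # Eq. (4): R - r = 0.09 (V - R) + const
    V = R + VR
    B = V + BV
    I = R - RI
    return B, V, R, I
\end{verbatim}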
\begin{table*} \centering \tiny \caption{Photometric magnitudes of Gaia 20eae in different filters using the present observations.} \label{tab:phot_tab} \begin{tabular}{@{}rrrrrrrrrrrr@{}} \hline JD & $B$ & $V$ & $R_c$ & $I_c$ & $g$ & $r$ & $i$ & $z$ & $J$ & $H$ & $K_{s}$\\ & (mag) & (mag) & (mag) & (mag) &(mag) &(mag) & (mag) & (mag) & (mag) & (mag)& (mag) \\ \hline 2459093 & 18.60$\pm$0.03 & 16.49$\pm$0.01 & 15.23$\pm$0.01 & 14.33$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\ 2459100 & 18.88$\pm$0.01 & 16.74$\pm$0.01 & 15.41$\pm$0.01 & 14.45$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459105 & 18.63$\pm$0.01 & 16.52$\pm$0.01 & 15.23$\pm$0.01 & 14.38$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459107 & 18.16$\pm$0.01 & 16.47$\pm$0.01 & 15.20$\pm$0.01 & 14.27$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459134 & $-$ & 17.43$\pm$0.01 & 16.11$\pm$0.01 & 14.71$\pm$0.02 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459135 & $-$ & 17.19$\pm$0.01 & 15.52$\pm$0.01 & 14.66$\pm$0.02 & 18.01$\pm$0.07& 16.24$\pm$0.01& 15.02$\pm$0.01& 14.22$\pm$0.02 & $-$ & $-$ & $-$ \\ 2459136 & $-$ & 17.47$\pm$0.02 & 15.68$\pm$0.01 & 14.70$\pm$0.01 & 17.94$\pm$0.03& 16.17$\pm$0.04& 15.03$\pm$0.02& 14.23$\pm$0.02 & $-$ & $-$ & $-$ \\ 2459141 & $-$ & $-$ & $-$ & $-$ & 17.86$\pm$0.04& 16.26$\pm$0.07& 15.11$\pm$0.04& $-$ & $-$ & $-$ & $-$ \\ 2459142 & 19.21$\pm$0.03 & 17.38$\pm$0.01 & 16.08$\pm$0.01 & 14.72$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459143 & 19.12$\pm$0.04 & 17.14$\pm$0.01 & 15.91$\pm$0.02 & 14.61$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459144 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & 12.41$\pm$0.02 & 11.27$\pm$0.03 & 10.40$\pm$0.03 \\ 2459162 & 19.54$\pm$0.03 & 17.74$\pm$0.01 & 16.41$\pm$0.01 & 15.08$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459167 & 18.67$\pm$0.01 & 16.96$\pm$0.01 & 15.73$\pm$0.01 & 14.45$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459168 & 18.77$\pm$0.03 & 17.08$\pm$0.01 & 15.83$\pm$0.01 & 14.54$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ 2459191 & $-$ & 17.52$\pm$0.01 & 16.20$\pm$0.01 & 14.89$\pm$0.01 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline \end{tabular} \end{table*} \begin{table} \centering \caption{Coordinates of the local standard stars.} \label{tab:std_log} \begin{tabular}{@{}rrrrr@{}} \hline ID & $\alpha$$_{2000}$ & $\delta$$_{2000}$ & $zr$$\pm$$\sigma$ \\ & (degrees) & (degrees) & (mag) \\ \hline 1 & 291.423929 & +15.136802 & 15.51$\pm$0.01\\ 2 & 291.424941 & +15.133386 & 16.01$\pm$0.01\\ 3 & 291.403179 & +15.140891 & 15.50$\pm$0.01\\ 4 & 291.384237 & +15.130902 & 14.95$\pm$0.01\\ 5 & 291.384966 & +15.140891 & 15.02$\pm$0.01\\ 6 & 291.392691 & +15.117122 & 12.75$\pm$0.01\\ 7 & 291.368150 & +15.148200 & 13.10$\pm$0.01\\ \hline \end{tabular} \end{table} \subsubsection{Archival data} We have also obtained the photometric data from the time domain Gaia sky survey \citep{2016A&A...595A...1G,2018A&A...616A...1G}. Gaia sky survey maps the sky in $G$ band to look out for the transients and regularly updates on their Gaia Alert Index website\footnote{http://gsaweb.ast.cam.ac.uk/alerts/alertsindex}. We have obtained the $G$ band photometric data provided by the Gaia survey at its data archive\footnote{https://gea.esac.esa.int/archive/}. We have also acquired the pre-outburst $g,r,i,z$ band archival data from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; PS1). 
The details about the PS1 surveys and the latest data products are given in \citet{2016arXiv161205560C}. We have downloaded the point source catalog from data release 2 of PS1\footnote{http://catalogs.mast.stsci.edu/}. Gaia 20eae was observed by the ZTF sky survey \citep{Bellm_2018}. We obtained the archival $zr$ band photometric data of ZTF from the NASA/IPAC Infrared Science Archive\footnote{https://www.ipac.caltech.edu}. We also obtained the recent $zg$- and $zr$-band photometric data of the ZTF survey from Lasair 2.0\footnote{https://lasair.roe.ac.uk/}, a community broker service to access, visualize and extract science data. We obtained the pre-outburst mid-infrared (MIR) magnitudes of Gaia 20eae from the $Spitzer$ archive\footnote{https://irsa.ipac.caltech.edu/Missions/spitzer.html} in the 3.6 $\mu$m, 4.5 $\mu$m, 5.8 $\mu$m, 8.0 $\mu$m and 24.0 $\mu$m wave bands. Pre-outburst magnitudes of Gaia 20eae were also obtained from the $WISE$ archive\footnote{https://irsa.ipac.caltech.edu/Missions/wise.html} in the 3.4 $\mu$m, 4.6 $\mu$m, 12 $\mu$m and 22 $\mu$m wave bands. Outburst magnitudes of Gaia 20eae were obtained from the $WISE/NEOWISE$ survey\footnote{https://irsa.ipac.caltech.edu/Missions/wise.html} in the 3.4 $\mu$m and 4.6 $\mu$m wave bands. \subsection{Spectroscopic data} \subsubsection{Medium resolution Optical/NIR Spectroscopy} A photometric alert, named AT2020nrs, was issued by the Gaia alert system on 2020 August 28 at 1:28 p.m. UTC. We immediately followed it up using the medium resolution (R$\sim$2000) spectrograph HFOSC mounted on the 2m HCT, starting from 2020 August 29 itself. Using the $Gr 7$ and $Gr 8$ grisms of HFOSC, our spectroscopic observations spanned the optical wavelength range from $\rm \sim4000\AA $ to $\rm 9000\AA$. We also observed the flux calibrator Feige 110 on each night after Gaia 20eae to flux calibrate our HFOSC spectra. On 2020 September 12, we also obtained a medium resolution optical spectrum (R$\sim$1140, 1760 and 1920 for the Orange Arm, Red Arm and Far Red Arm, respectively) using the LRS-2 Red Integral Field Unit spectrograph on the 10m Hobby-Eberly Telescope (HET) \citep{HETRamsey,HETqueue}, USA. The LRS2-R spectrum was reduced using the standard LRS2 pipeline, Panacea\footnote{\url{ https://github.com/grzeimann/Panacea/blob/master/README\_v0.1.md}}. Finally, we scaled our flux-calibrated spectra to match the flux and slope obtained from the photometric flux values of the same date. In cases where photometric magnitudes were not available on the same date, we scaled our spectra with the photometric flux values of the nearest date. This is done to correct for any residual systematics in the flux calibration of the HFOSC due to its sensitivity to seeing variations and centering errors on the slit. This is also important for the LRS2 IFU observation, since the night was hazy due to smoke from wildfires, resulting in a highly variable non-grey atmospheric extinction. We obtained NIR spectra of Gaia 20eae using the TANSPEC with its 0$^{\prime\prime}.5$ slit, providing R$\sim$2700, on the nights of 2020 October 24 and 2020 November 6. The standard NIR dithering technique, i.e., obtaining the spectra at two different slit positions, was followed. The final spectrum of the object is obtained by subtracting the spectra obtained at the dithered positions to cancel the sky contribution. A telluric standard star was also observed immediately after the Gaia 20eae observations to remove the telluric features.
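For illustration, the basic dither-pair sky subtraction and telluric-correction steps can be sketched as follows. This is a minimal sketch under simplifying assumptions: the \texttt{extract} callable stands for a trace-and-extract routine (such as the {\sc apall}-based extraction described below), and the telluric standard is assumed to be already continuum-normalized; the actual reduction follows the IRAF procedure described next.
\begin{verbatim}
import numpy as np

def reduce_dither_pair(frame_A, frame_B, extract, telluric_norm):
    """Schematic NIR dither-pair reduction (illustrative sketch).

    frame_A, frame_B : 2-D spectral frames at the two dither positions
    extract          : callable that extracts a 1-D spectrum from a
                       sky-subtracted 2-D frame
    telluric_norm    : continuum-normalized 1-D spectrum of the
                       telluric standard on the same wavelength grid
    """
    # Subtracting the two dither positions cancels the sky emission;
    # the target remains as a positive trace (plus a negative trace
    # at the other slit position, which is ignored here).
    sky_subtracted = np.asarray(frame_A) - np.asarray(frame_B)
    spectrum = extract(sky_subtracted)
    # Dividing by the normalized telluric standard removes the
    # atmospheric absorption features.
    return spectrum / telluric_norm
\end{verbatim}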
We have used the standard tasks of IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, USA, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation, for performing image processing.} to reduce the medium resolution spectroscopic data. The task {\sc apall} was used to extract the one-dimensional spectrum. The extracted spectrum was then wavelength calibrated using the {\sc identify} task with the help of calibration lamps taken immediately after the source spectrum. Finally, the {\sc continuum} task of IRAF was used to continuum normalize the spectra in order to measure the equivalent widths (EWs) of different lines. The standard IRAF tasks {\sc standard}, {\sc sensfunc} and {\sc calibrate} were used to flux calibrate our spectra. \subsubsection{High Resolution Near Infrared Spectroscopy} We obtained a high resolution NIR spectrum of Gaia 20eae on 2020 September 12 using the Habitable Zone Planet Finder (HPF) \citep{2012SPIE.8446E..1SM,mahadevan_habitable-zone_2014} on the 10m HET. HPF covers the wavelength range of 8100 -- 12800 $\AA$ at a spectral resolution of R$\sim$55,000. The H2RG up-the-ramp raw data cube was reduced to 1D spectra by the procedures described in \citet{ninan_habitable-zone_2018,kaplan2018,stefansson20}. The wavelength calibration was done using a laser frequency comb calibrator as described in \citet{2019OptL...44.2673M}. Barycentric correction was applied to all spectra with the values calculated using \texttt{barycorrpy} \citep{kanodia_python_2018}. In summary, we have monitored Gaia 20eae spectroscopically at 10 different epochs with the HFOSC (6), TANSPEC (2), LRS2-R (1) and HPF (1). Table \ref{tab:obs_log} provides the complete log of spectroscopic observations presented in this work. \section{Results and Analysis} \label{pt2} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{gaia20eae_lc1f.pdf} \includegraphics[width=0.95\textwidth]{gaia20eae_lczoomed1mod1f.pdf} \caption{\label{flc}Upper panel: Historical Light Curve (LC) of Gaia 20eae in the $Gaia$ $G$, ZTF $zg$ and $zr$, Johnson-Cousins $B$, $V$, $R$ and $I_{C}$ and $SDSS$ $g$, $r$, $i$ and $z$ bands, showing both the pre-outburst (white region) and outburst (shaded region) phases. Lower panel: Zoomed-in LC of Gaia 20eae in the outburst phase. The dark and intermediate brown regions represent the current plateau phase and the transition region, respectively. The vertical brown and magenta solid lines represent the epochs when HCT HFOSC and HET HPF spectra were taken, respectively. The olive line denotes the epoch of the TANSPEC spectroscopic observations. } \end{figure*} \subsection{Gaia 20eae during Quiescent phase} \subsubsection{Physical properties} Gaia 20eae is named `SSTGLMC G050.2584-00.5077' and was classified as a candidate young stellar object (YSO) due to its red color in the MIR bands using the $Spitzer$ photometry by \citet{2008AJ....136.2413R}. Later on, this source was classified as a Class\,{\sc ii} YSO based on its IR spectral index \citep[MC1-M15 in the Table 4 of][]{2017ApJ...839..113R}. \citet{2017ApJ...839..113R} also derived its mass as 1.5 M$_\odot$ assuming an age of 2 Myr for a typical Class\,{\sc ii} source and a distance of 1.3 kpc. As \citet{2021AJ....161..147B} have estimated a distance of 3.2 kpc for Gaia 20eae from Gaia DR3, this would result in a different mass estimation.
However, in the absence of a direct measurement of $A_V$ towards this source, it is very difficult to derive accurate physical parameters (e.g., age and mass). \subsubsection{Light Curve} The upper panel of Figure \ref{flc} shows the Light Curve (LC) of Gaia 20eae in the $Gaia$ $G$, ZTF $zg$ and $zr$, $Johnson-Cousins$ $B$, $V$, $R$ and $I$, and $SDSS$ $g$, $r$, $i$ and $z$ bands. It is worthwhile to mention here that, although the ZTF \citep{Bellm_2018} and $SDSS$ filters cover similar wavelengths, they differ in their cutoff wavelengths and transmission curves. The Gaia $G$ band data cover the longest time span of the LC, starting from 2014 October 27 (JD=2456957) up to 2020 December 9 (JD=2459192). The ZTF $zr$ band has data from 2018 April 9 (JD$=$2458218) to 2020 November 27 (JD$=$2459180), whereas $zg$ band data are available from 2020 May 16 (JD$=$2458985) to 2020 November 25 (JD$=$2459178). The ZTF photometric data have better temporal sampling (2-3 days) compared to the Gaia $G$ band data (10-15 days). The LC clearly demonstrates a long quiescent period with minor fluctuations until 2019 October 28 (JD$=$2458785), after which it began to transition to the present outburst stage. The pre-outburst magnitudes of Gaia 20eae were $G\sim19.49$ mag (2019 November 30; JD$=$2458818) and $zr\sim19.40$ mag (on 2019 October 28; JD$=$2458785). The mean $G$ and $zr$ magnitudes of Gaia 20eae during the quiescent phase were $19.14\pm0.26$ mag (from 2014 October 27 to 2019 November 30) and $19.46\pm0.17$ mag (from 2018 April 9 to 2019 October 28), respectively. As the quiescent phase LC of Gaia 20eae shows small scale fluctuations, we searched for periodic variability in it using the ZTF $zr$ band data. Periodic variability has been reported in PMS stars, which is attributed to the rotation of a star having hot and cool spots on its photosphere. We have used the Period\footnote{\url{http://www.starlink.rl.ac.uk/docs/sun167.htx/sun167.html}} software, which works on the principle of the Lomb-Scargle (LS) periodogram \citep{1976Ap&SS..39..447L, 1982ApJ...263..835S}, to determine the period of Gaia 20eae and to phase fold the LC. The advantage of the LS method is that it is effective even when the data set is non-uniformly sampled. We have also used the NASA Exoplanet Archive Periodogram\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/docs/tools.html}} service for cross-verification. The periods obtained in both cases matched well. The period of Gaia 20eae thus comes out to be 2.1$\pm$0.004 days. The period detected in the quiescent LC of Gaia 20eae might correspond to the rotational period of the star. This type of period is commonly observed in Class\,{\sc ii/iii} YSOs, as shown by \citet{2020MNRAS.493..267S}. Figure \ref{period_quiescent} shows the phase folded LC of Gaia 20eae during its quiescent phase. The amplitude of variation is of the order of 0.2 mag, which is also typical of Class\,{\sc ii/iii} YSOs \citep{2020MNRAS.493..267S}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{gaia_215d_new1.pdf} \vspace{0.5cm} \caption{\label{period_quiescent} Quiescent phase folded LC of Gaia 20eae as obtained from the ZTF $zr$ band data. The period is determined by using the Period software and is also cross-matched with the NASA Exoplanet Archive Periodogram service.
} \end{figure} \begin{figure*} \centering \includegraphics[width=0.46\textwidth]{gaia20eae_sed_change3.pdf} \includegraphics[width=0.45\textwidth]{gaia20eae_sed_magnitude1.pdf} \caption{\label{SEDnew}Left panel: The photometric SEDs of Gaia 20eae in the quiescent (red dots) and active (cyan dots) phases. Right panel: Changes in the SED; the observed magnitude differences (squares) are compared against pure dust-clearing events for $A_z = -2$ mag (circles) and $A_z= -4$ mag (triangles) and for $R_V$=3.1 (filled) or $R_V$=5.5 (open) dust laws. } \end{figure*} \begin{table*} \centering \caption{Reddening invariant colors in the quiescent and the outburst phases of Gaia 20eae. N$\sigma$ is the ratio of the color change to the quadrature sum of the measurement errors.} \label{tab:cc_table1} \begin{tabular}{@{}cccccccccc@{}} \hline Days & $Q_{BVR}\pm\sigma$ & N$\sigma$ & $Q_{VRI}\pm\sigma$& N$\sigma$ & Days & $Q_{BVR}\pm\sigma$& N$\sigma$ & $Q_{VRI}\pm\sigma$ & N$\sigma$\\ (JD) & (mag) & & (mag) & & (JD) & (mag) & & (mag) & \\ \hline 2456928 & 0.25$\pm$0.15 & $-$ & 0.00$\pm$0.07 & $-$ & 2459135 & $-$ & $-$ & 0.88$\pm$0.02 & $-12.0$\\ 2459093 & 0.18$\pm$0.04 & 0.5 & 0.42$\pm$0.01 & $-5.9$ & 2459141 & $-0.16$ $\pm$0.04 & 2.7 & 0.03$\pm$0.02 & $-0.5$\\ 2459100 & 0.10$\pm$0.01 & 1.0 & 0.44$\pm$0.01 & $-6.3$ & 2459142 & 0.09$\pm$0.05 & 1.0 & 0.02$\pm$0.03 & $-0.3$\\ 2459105 & 0.14$\pm$0.01 & 0.8 & 0.50$\pm$0.01 & $-7.0$ & 2459161 & $-0.24$ $\pm$0.04 & 3.3 & 0.09$\pm$0.02 & $-1.3$\\ 2459107 & $-0.25$ $\pm$0.01 & 3.4 & 0.40$\pm$0.01 & $-5.7$ & 2459166 & $-0.17$ $\pm$0.01 & 2.8 & 0.04$\pm$0.01 & $-0.5$\\ 2459133 & $-$ & $-$ & 0.02$\pm$0.02 & $-0.2$ & 2459167 & $-0.22$ $\pm$0.04 & 3.1 & 0.05$\pm$0.01 & $-0.7$\\ 2459134 & $-$ & $-$ & 0.87$\pm$0.02 & $-11.9$& 2459190 & $-$ & $-$ & 0.10$\pm$0.01 & $-1.4$\\ \hline \end{tabular} \end{table*} \subsection{Gaia 20eae during outburst phase} \subsubsection{Light Curve} In the lower panel of Figure \ref{flc}, we show the LC of Gaia 20eae in the $B$, $V$, $R$, $I$, $G$, $zg$, $zr$, $g$, $r$, $i$, and $z$ bands during the outburst phase. The LC starts from 2020 March 3 (JD $=$ 2458800) and extends up to the latest data point on 2020 December 9 (JD $=$ 2459192). The LC of Gaia 20eae is peculiar in the sense that the rise to peak brightness consists of two parts: an initial slow rise from the quiescent phase starting from JD $=$ 2458800 to JD $=$ 2458995, and then a rapid rise to the peak brightness from JD $=$ 2459014 to JD $=$ 2459047, reaching maximum brightness on JD $=$ 2459047, followed by a slowly decaying phase (JD $=$ 2459047 to JD $=$ 2459145). It also shows small scale fluctuations with an amplitude of $\sim$0.2 mag on a time scale of a few days. We were not able to derive the periodicity of these fluctuations using the LS periodogram. We call the rapid rise and the slowly decaying parts of the LC the transition phase and the active plateau phase, respectively; they are labeled and shaded with different colors in the lower panel of Figure \ref{flc}. We have calculated the rise-rate and decay-rate of Gaia 20eae at different wavelengths from the available Gaia and ZTF data by fitting a least-squares straight line in the different phases of the LC. The fits for the data points in the transition and active plateau phases are shown in the lower panel of Figure \ref{flc}. The overall best-fit rise-rate from the quiescent phase to the maximum brightness is calculated to be 0.6 mag/month in the $G$ band, over a duration of $\sim$247 days.
This rise rate is prone to higher uncertainties as there are many data gaps in the early part of the LC. The rise-rate in the transition phase was found to be similar in the $G$, $zg$ and $zr$ bands, i.e., $\sim$0.1 mag day$^{-1}$ or $\sim$3 mag month$^{-1}$. The decay rate in the active plateau phase is calculated as 0.01 mag day$^{-1}$ (or 0.3 mag month$^{-1}$) over a duration of $\sim$98 days, which is an order of magnitude smaller than the rise-rate. It is to be noted here that Gaia 20eae has not yet returned to its quiescent state; hence, the decay rate we calculated considers only the data range presented in this study. The maximum brightness in the current outburst phase in the $zg$ and $zr$ bands was recorded on 2020 July 16 (JD$=$2459047) as 17.02 mag and 15.09 mag, respectively. In the $G$ band, the source reached a maximum brightness of 14.89 mag on 2020 August 26 (JD $=$ 2459088). Thus, the current outburst magnitude amplitudes are $\Delta G$ $=$ 4.25 mag and $\Delta zr$ $=$ 4.37 mag. The LCs in $B,V,R$ and $I$ also follow the trend of the ZTF and Gaia LCs. The quiescent phase $J,H,$ and $K_s$ magnitudes of Gaia 20eae are 14.67 mag, 13.36 mag and 12.16 mag, respectively, as obtained from the UKIDSS DR10plus. During the present outburst stage, the $J,H,$ and $K_s$ magnitudes of Gaia 20eae as obtained from TANSPEC are 12.41 mag, 11.27 mag and 10.40 mag, respectively. This implies that the present outburst is similar to the FUor family of objects and is almost wavelength independent. \subsubsection{The evolution of the photometric Spectral Energy Distribution} The left panel of Figure \ref{SEDnew} shows the spectral energy distributions (SEDs) of Gaia 20eae during its quiescent and active states, shown as red and cyan curves, respectively. We constructed the quiescent phase SED using the multiwavelength data (optical to MIR wavelengths, i.e. 0.44 ($B$), 0.55 ($V$), 0.65 ($R$), 0.80($I$), 1.2 ($J$), 1.6 ($H$), 2.2 (K$_s$), 3.4($W1$), 3.6($I1$), 4.6($W2$), 4.5($I2$), 5.8($I3$), 8.0($I4$), 12($W3$), 22($W4$) and 24($I4$) $\mu m$) taken from the data archives (PS1, 2MASS, $Spitzer$ and $WISE$). The PS1 magnitudes were transformed to the Johnson-Cousins system by using the equations given by \citet{2012ApJ...750...99T}. For the outburst state SED, we have used the current optical and NIR band observations as well as the NEOWISE data. Apart from a shift in the brightness, there is clearly a change in the shape of the SED at the longer wavelengths compared to the shorter wavelengths. This change in the SED can be quantified by comparing the differences in the observed magnitudes of Gaia 20eae with those of pure dust-clearing events. The right panel of Figure \ref{SEDnew} shows the observed magnitude differences of Gaia 20eae between its quiescent and active states as a blue curve. We compare the magnitude variations against $\Delta$A$_z$ = $-2$ mag and $\Delta$A$_z$ = $-4$ mag for R$_V$ = 3.1 and R$_V$ = 5.5 dust laws \citep[see also,][]{2004ApJ...616.1058M}. From the deviation of the observed magnitude differences from those of the dust-clearing events, we conclude that the brightening of Gaia 20eae cannot be explained by a decrease in the line-of-sight extinction; rather, an enhancement of the accretion rate is the likely cause. Using the multi-epoch data from the PS1 archive (quiescent phase) and the 2m HCT and 1.3m DFOT (outburst phase), we have also examined the evolution of the reddening invariant colors `$Q_{xyz}$' of Gaia 20eae as it transitioned from the quiescent state to the eruptive state.
The reddening invariant colors have the generic form $Q_{xyz} = (x-y)-[(y-z)E(x-y)/E(y-z)]$, where $x$, $y$, and $z$ are the observed magnitudes in each passband \citep{2004ApJ...616.1058M}. A color change with $\Delta$ $Q_{xyz}$ $\neq$ 0 indicates that the change in the SED is not due to pure dust clearing. The estimated reddening-invariant colors (for R${_V}$ = 3.1) of Gaia 20eae as it transitioned from the quiescent state to the active state are listed in Table \ref{tab:cc_table1}. The large change in most of the Q$_{xyz}$ values also indicates that the increase in the brightness of Gaia 20eae is not consistent with a dust-clearing event; rather, an intrinsic change occurred in the SED. It is also to be noted that in the `Active Plateau Region' of Gaia 20eae, starting from JD$=$2459141, the value of Q$_{VRI}$ is close to 0. This might imply that there was no intrinsic change in the SED during this period. \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{gaia20eae_misc_modn3lognew.pdf} \includegraphics[width=0.95\textwidth]{gaia20eae_totn.pdf} \caption{\label{frr} Evolution of the flux-calibrated spectra of Gaia 20eae during our monitoring period, obtained using HFOSC on the 2m HCT and LRS2 on the 10m HET from 2020 August 29 to 2020 September 14 (upper panel). The lines used for the present study are marked. The flux-calibrated spectra show variations in the continuum level, which are also present in the LC. The lower panel shows the normalized flux of Gaia 20eae obtained using TANSPEC on 2020 October 24 and 2020 November 06.} \end{figure*} \subsubsection{Spectral features} Figure \ref{frr} shows our medium resolution spectra covering a very broad wavelength range ($\sim$0.4 - 2.4 $\mu$m). The optical spectra are flux-calibrated, whereas the NIR spectra are only normalized. Both the optical and NIR spectra of Gaia 20eae consist of a mixture of the lines typically observed in the FUor and EXor families of sources. The evolution of these spectral lines at different epochs of the outburst phase is shown in Figure \ref{fca}. Similar to the FUor sources, Gaia 20eae exhibits blue-shifted absorption features in the Na I resonance line and the H$\beta$ line, indicative of powerful winds from the source. It also shows a strong P Cygni profile in H$\alpha$ and the Ca II IR triplet (IRT) lines in emission. The Fe II line at $\lambda$5018$\rm\AA$, which is seen in emission in EX Lupi, is found to be in absorption, but the Fe II line at $\lambda$6433$\rm\AA$ is found to be in emission. The K I $\lambda$7694$\rm\AA$ and O I $\lambda$8446$\rm\AA$ lines are found to be in absorption. The strength of the H$\beta$ and Na I D lines can be seen decreasing during the outburst phase of Gaia 20eae. Spectroscopically, in the optical regime the spectrum of Gaia 20eae resembles a FUor, whereas in the NIR regime it is more or less similar to an EXor. Our medium resolution NIR spectra show several distinct spectral features, most of which are in absorption. The gaps in the spectra represent atmospheric absorption windows due to the broad H$_2$O and OH bands. We could identify some prominent lines: the Ca II IRT, He I at $\lambda$10830$\rm\AA$ and the CO bandheads. \textbf{CO bandhead in K band:} The CO (2-0) and CO (3-1) bandhead absorption features are one of the defining characteristics of FUors \citep{1998apsf.book.....H}. The CO bandheads in Gaia 20eae are in emission, implying a temperature inversion at the surface of the protoplanetary disc.
This is similar to that observed in other EXor sources like V2492 Cyg \citep{2011AJ....141..196A}. Thus, based on the CO bandhead lines, Gaia 20eae resembles more an EXor source.\footnote{It should be noted that some intermediate FUor/EXor sources like V1647 Ori have shown CO bands both in emission and in absorption at different stages of their outbursts.} \textbf{Radial velocity of Gaia 20eae:} Due to the lack of symmetric photospheric lines in the high resolution spectrum of Gaia 20eae, it is hard to estimate the radial velocity of the star. The chromospheric Fe I emission lines were found to be the most symmetric lines in the spectrum, and the line center of the Fe I $\lambda$8387.77$\rm\AA$ line, at 20 km s$^{-1}$, is taken as the stellar radial velocity with respect to the solar system barycentre in this study. The corresponding velocity in the local standard of rest reference frame comes out to be $\sim$35 km s$^{-1}$. It is to be mentioned that the peak velocity of the $^{13}$CO for the molecular cloud `MC2' is 42 km s$^{-1}$ with respect to the local standard of rest \citep{2017ApJ...839..113R}. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{gaia20eae_mplot_new.pdf} \includegraphics[width=0.95\textwidth]{gaia20eae_nir2.pdf} \caption{\label{fca} Evolution of the H$\beta$, Na I D $\lambda$5890/6$\rm\AA$, H$\alpha$, O I $\lambda$7773$\rm\AA$, Ca II IRT, He I, CO(2-0) and CO(3-1) lines during our monitoring period.} \end{figure*} \subsection{Physical parameters of Gaia 20eae during outburst phase} \subsubsection{Disc turbulence and outflow wind velocities} Panel (d) of Figure \ref{fca} shows the evolution of the O I line at $\lambda$7773$\rm\AA$ of Gaia 20eae during our spectroscopic monitoring period. The formation of the O I line at $\lambda$7773$\rm\AA$ in T Tauri stars, which is an indicator of turbulence, is attributed to the presence of warm gas in the envelope surrounding the disc or in the hot photosphere above the disc \citep{1992ApJS...82..247H}. Table \ref{tab:velocity} shows the variation of the equivalent width (EW) of the O I line at $\lambda$7773$\rm\AA$ during our monitoring period. The mean value of the equivalent width of the O I line at $\lambda$7773$\rm\AA$ is estimated as $4.7 \pm 1.2$ $\rm\AA$. The scatter in the EW values is twice the error of its estimation, indicating the presence of a turbulent medium around Gaia 20eae during its outburst phase. For future reference, we have also tabulated the EW values of other lines at different epochs of the outburst phase of Gaia 20eae in Table \ref{tab:velocity}. The outflow wind velocity of Gaia 20eae is estimated from the blue-shifted absorption minima of the H$\alpha$, Na I D and H$\beta$ lines \citep{1998apsf.book.....H}. The wind velocities estimated from the Doppler shift at different epochs in the outburst phase of Gaia 20eae are listed in Table \ref{tab:velocity}. The values vary from $-630$ to $-203$ km s$^{-1}$. The mean outflow wind velocity for H$\alpha$, Na I D and H$\beta$ comes out to be $-505\pm62$ km s$^{-1}$, $-356\pm49$ km s$^{-1}$ and $-339\pm136$ km s$^{-1}$, respectively. The typical error in the outflow wind velocity estimation is $\sim$25 km s$^{-1}$; therefore, the large scatter in the values can be attributed to intrinsic variations of the outflow winds during the outburst phase. Resonant scattering from meta-stable helium atoms is an excellent tracer of the outflow winds from YSOs \citep{Edwards_2003}.
The EUV to X-ray radiation from magnetospheric accretion or chromospheric activity can significantly populate the meta-stable triplet ground state of helium atoms. During the $\sim$2.5 hours for which they typically survive in this meta-stable state, they can resonantly scatter the $\lambda$10830$\rm\AA$ photons, resulting in a strong absorption signal at the local velocity of the gas. Figure \ref{HPFHe10830} shows the very strong blue-shifted He $\lambda$10830$\AA$ absorption signature in the high resolution spectrum of Gaia 20eae. On the red side, the absorption profile extends to about +200 km s$^{-1}$, and on the blue side it extends beyond $-400$ km s$^{-1}$. Unfortunately, the high resolution spectrum beyond $-400$ km s$^{-1}$ could not be measured since it falls outside the detector of the HPF spectrograph. Our medium resolution TANSPEC spectra in panel (f) of Figure \ref{fca} show the blue-shifted absorption extending to $-513$ km s$^{-1}$ on 2020 October 24 and reducing to $-493$ km s$^{-1}$ by 2020 November 6. Such strong blue-shifted He $\lambda$10830$\rm\AA$ triplet signatures are common in YSOs with strong outflows, and Gaia 20eae is not an exception. The reduction in the blue-shifted wing velocity of He $\lambda$10830$\rm\AA$ over a span of two weeks could be either due to a change in the structure of the outflow winds or due to a drop in the EUV to X-ray irradiation. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Gaia_20eae_He10830Lines_HPF_fl.pdf} \caption{\label{HPFHe10830} High resolution line profile of the He 10830 triplet on 2020 September 12, showing a large blue-shifted component, likely originating in the strong outflow from Gaia 20eae. The absorption signal extends from +200 km s$^{-1}$ on the red side to beyond $-400$ km s$^{-1}$ on the blue side (the profile is truncated at the detector edge in HPF). The vertical dashed lines show the locations of the He 10830 triplet in the stellar rest frame. The narrow lines in the spectrum are telluric.} \end{figure} \subsection{Magnetospheric accretion and line profiles} High resolution line profile shapes provide us a direct measurement of the kinematics of the hot gaseous environment of the accretion region. In this section we highlight some of the most interesting line profiles we detected in Gaia 20eae. \subsubsection{Infall signature in Ca II IRT} The Ca II IR triplet emission lines are believed to originate in the active chromosphere as well as in the magnetospheric accretion funnel regions \citep{hamann1992,1998AJ....116..455M}. Our observation of Gaia 20eae on 2020 September 12 detected a red-shifted absorption component (with respect to the stellar rest frame) in all three Ca II IR triplet lines (see Figure \ref{HPFCaII}). The smooth curves show the best fit of a double Gaussian model, where the first component fits the broad emission line and the second component fits the red-shifted absorption component at +25 km s$^{-1}$. The red-shifted absorption cannot originate in a stellar wind or outflows. One possible region of origin could be the hot in-falling gas in the magnetospheric accretion funnel \citep{edwards1994}. The lower panel in Figure \ref{HPFCaII} shows this absorption component normalized to the fitted emission line profile.
The ratio of equivalent widths of these absorption components (EW: 0.27, 0.49, 0.41 $\rm\AA$) is inconsistent with the optically thin line formation scenario (1:9:5).\footnote{This is unlike the optically thin blue-shifted absorption from winds typically seen in similar FUors/EXors V1647 Ori and V899 Mon \citep{2013ApJ...778..116N,2015ApJ...815....4N}.} \textbf{Constraint on the viewing angle:} The detection of the absorption profile implies that the line of sight is along the increasing temperature gradient, and that we are not seeing infalling hot gas in the foreground of a cooler environment; i.e., the viewing angle is through the accretion funnel to its footprint on the stellar surface, where it is hottest. For a star of mass $M$, radius $R_*$, and disc infall radius $r_m$, the velocity of the infalling gas along the magnetic field line direction is given by the formula $v_p(r) = [\frac{2GM}{R_*} (\frac{R_*}{r} - \frac{R_*}{r_m})]^{1/2}$ \citep{hartmann1994}. For Gaia 20eae, this would imply a velocity of $\sim$350 km s$^{-1}$ at the base of the funnel, and of the order of $\sim$50 km s$^{-1}$ (or $\sim$25 km s$^{-1}$ along the line of sight) at a radius very close to the start of infall near the truncated accretion disc. This, combined with the requirement of a hotter background against which this low velocity infalling gas is viewed, constrains the viewing angle as shown in Figure \ref{AccDiagram}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Gaia_20eae_CaIIITR_HPF_LineProfilefit_fl.pdf} \caption{\label{HPFCaII} Top panel: High resolution line profiles of all three Ca II IRT lines on 2020 September 12 showing a red-shifted absorption component, likely originating in the magnetospheric accretion funnel. The fainter step style plots behind the bold lines are the measured spectra. The bold curves are 3-pixel smoothed spectra, shown for clarity. The thin smooth curve is the double Gaussian composite fit of the emission at the stellar rest velocity and a red-shifted absorption at $\sim$+25 km s$^{-1}$. The individual components are shown as dashed curves. Lower panel: Normalized spectrum using the continuum plus the fitted emission Gaussian component.} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{paper_cartoon_final.pdf} \caption{\label{AccDiagram} Diagram of the classical dipole magnetospheric accretion funnel, showing the region of viewing angles that could potentially result in a low velocity red-shifted absorption signature on top of the broad Ca II IR triplet emission lines.} \end{figure} \subsubsection{Hydrogen Paschen lines} Figure \ref{HPFHPa} shows the Hydrogen Paschen lines from Pa (14-3) to Pa $\gamma$ (6-3). Only the lines which are not completely lost in telluric bands are plotted here. The higher energy level Paschen lines are detected as broad absorption lines extending from $-250$ km s$^{-1}$ to +250 km s$^{-1}$. However, in the lowest energy level lines, Pa$\gamma$ $\lambda 10938.086$ and Pa$\delta$ $\lambda 10049.369$, we also detect an emission component at the core of the lines, on top of the broad absorption component. The strength of this emission component decreases as we go to lines of higher energy levels. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Gaia_20eae_HPaLines_HPF_fl.pdf} \caption{\label{HPFHPa} Hydrogen Paschen lines from Pa (14-3) to Pa $\gamma$ (6-3). The lines which are completely lost in telluric bands are not plotted. 
The narrow lines in Pa$\gamma$ $\lambda 10938.086$ are telluric. The higher energy Paschen lines are detected as broad absorption lines extending from $-250$ km s$^{-1}$ to +250 km s$^{-1}$. We detect emission at the line core for the lower energy level lines Pa$\gamma$ $\lambda 10938.086$ and Pa$\delta$ $\lambda 10049.369$ on top of the broad absorption component. The bold curves are 3-pixel smoothed spectra, shown for clarity.} \end{figure} \subsubsection{Fe I and Ti I lines} We detect multiple Fe I and Ti I lines in emission from Gaia 20eae during its high accretion phase (Figure \ref{HPFFeI}). These lines typically originate in the active chromosphere, and are relatively symmetric when not blended with other lines. Hence, the peak positions of these lines were used to measure the 20 km s$^{-1}$ radial velocity of Gaia 20eae. The widths of these emission lines are similar across the spectrum. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Gaia_20eae_FeILines_HPF_fl.pdf}\\ \includegraphics[width=0.45\textwidth]{Gaia_20eae_TiILines_HPF_fl.pdf} \caption{\label{HPFFeI} High resolution line profiles of four Fe I lines and six Ti I emission lines on 2020 September 12. These lines are likely originating in the active chromosphere of Gaia 20eae. The fainter step style plots behind the bold lines are the measured spectra. The bold curves are 3-pixel smoothed spectra, shown for clarity.} \end{figure} \begin{table*} \centering \caption{Variations of the wind velocities as obtained from blue-shifted absorption dips of H$\alpha$, Na I D and H$\beta$, and equivalent width (in ${\AA}$) variations of the optical lines in Gaia 20eae. The error in the equivalent width is estimated using the relation provided by \citet{1988IAUS..132..345C}.} \begin{tabular}{@{}c@{ }c|c@{ }c@{ }c|r@{ }r@{ }r@{ }r@{ }r@{ }r@{ }r@{ }r@{}} \hline Date & JD & \multicolumn{3}{c}{Wind velocity (km/s)} & \multicolumn{7}{c}{Equivalent Width (${\AA}$)} \\ & & H$\alpha$ & Na I D & H$\beta$ & H$\beta$ & Na I D & H$\alpha$ & O I & \multicolumn{3}{c}{Ca\,{\sc ii}} \\ & & & & & & & & $\lambda$7773 &$\lambda$8498 & $\lambda$8542 & $\lambda$8662 & \\ \hline 2020 Aug 29 & 2459091 & $-422$ & $-303$ & $-313$ &$ 4.5\pm0.3$&$ 4.8 \pm 0.6 $&$-5.1 \pm 0.7$&$4.3 \pm 0.6 $&$-5.0 \pm 0.6 $&$-4.8 \pm 0.6$&$-6.1 \pm 0.6 $\\ 2020 Aug 30 & 2459092 & $-513$ & $-293$ & $-203$ &$ 7.2\pm0.3$&$ 3.0 \pm 0.3 $&$-4.5 \pm 0.6$&$ - $&$ - $&$ - $&$- $\\ 2020 Aug 31 & 2459093 & $-512$ & $-362$ & $-200$ &$ 4.1\pm0.3$&$ 2.9 \pm 0.3 $&$-5.6 \pm 0.7$&$ - $&$ - $&$ - $&$- $\\ 2020 Sept 01 & 2459094 & $-451$ & $-438$ & $-301$ &$ 5.4\pm0.3$&$ 4.2 \pm 0.5 $&$-5.1 \pm 0.7$&$4.5 \pm 0.6 $&$-6.7 \pm 0.8 $&$-7.2 \pm 0.8$&$-6.5 \pm 0.7$\\ 2020 Sept 07 & 2459100 & $-619$ & $-384$ & $-630$ &$ 8.3\pm1.8$&$ 6.6 \pm 1.2 $&$-8.3 \pm 1.0$&$3.4 \pm 0.6 $&$-7.1 \pm 0.9 $&$-9.3 \pm 1.0$&$-10.3\pm 0.9$\\ 2020 Sept 12 & 2459105 & $-$ & $-$ & $-320$ &$ 4.8\pm0.0$&$ 9.9 \pm 0.0 $&$-8.6 \pm 0.0$&$4.6 \pm 0.0 $&$-6.4 \pm 0.0 $&$-7.8 \pm 0.0$&$-6.6\pm 0.0$\\ 2020 Sept 14 & 2459107 & $-516$ & $-356$ & $-407$ &$ 6.3\pm1.1$&$ 8.7 \pm 1.3 $&$-5.6 \pm 0.8$&$4.4 \pm 0.7 $&$-4.6 \pm 0.6 $&$-5.5 \pm 0.7$&$-4.8 \pm 0.6$\\ \hline \end{tabular} \label{tab:velocity} \end{table*} \section{Discussion and Conclusions} \label{pt3} Gaia 20eae is the farthest discovered FUor/EXor type Class\,{\sc ii} YSO undergoing an outburst of $\sim$4.25 mag in the $G$ band. 
We have found that the present brightening of Gaia 20eae is not due to dust clearing from our line of sight towards the source but due to an intrinsic change in the SED (warming of the continuum component). The LC of Gaia 20eae in the quiescent phase shows small scale fluctuations with an amplitude of 0.2 mag and a period of $\sim$2 days. In the outburst phase, Gaia 20eae shows a transition stage during which most of its brightening ($\sim$3.4 mag) occurred over a short timescale of 34 days with a rise-rate of 3 mag month$^{-1}$. This rise-rate of Gaia 20eae during the transition stage is greater than most of the recorded rise rates of the FUor/EXor family of sources during the outburst phase, e.g., V899 Mon \citep[0.04-0.15 mag month$^{-1}$;][]{2015ApJ...815....4N}, Gaia 18dvy \citep[0.42 mag month$^{-1}$ in the Gaia $G$ band;][a newly discovered FUor]{2020ApJ...899..130S} and V1118 Ori \citep[1.05 mag month$^{-1}$;][an EXor source]{2017ApJ...839..112G}. Such a difference in the timescales of the rise-rates possibly implies a different trigger mechanism in Gaia 20eae resulting in the present luminosity outburst. Once it reached maximum brightness, it slowly started to decay with a decay rate of 0.3 mag month$^{-1}$. The present decay rate is similar to that of the bonafide EXor class of sources EX Lupi and VY Tau, which returned to their quiescent stages within 1.5-2 years after their maximum brightness state \citep{1977ApJ...217..693H}. The present decay rate is also similar to that of V899 Mon, which transitioned to a short quiescent state from its 2010 outburst state \citep{2015ApJ...815....4N}, and to that of V1118 Ori (0.3 mag month$^{-1}$), pointing to a similar relaxation phenomenon occurring in Gaia 20eae also. During the outburst phase, Gaia 18dvy also showed small scale fluctuations with an amplitude of $\sim$0.2 mag in all the bands. Such short scale accretion variability has also been reported by \citet{2015ApJ...815....4N} for V899 Mon. Similar fluctuations were observed in FU Ori, and may be due to flickering or inhomogeneities in the accretion disk \citep{2000ApJ...531.1028K,2013MNRAS.432..194S, 2020ApJ...899..130S}. A few interesting spectral features of Gaia 20eae are tabulated in Table \ref{tab:gaia20compare} to classify Gaia 20eae by comparing its properties with different classes of episodically accreting low mass young stars \citep{1998apsf.book.....H, 2018ApJ...861..145C}. Most of them match an EXor source, but the H$\beta$ absorption line and the P Cygni profile of the H$\alpha$ line hint towards the FUor classification. The P Cygni profile in H$\alpha$ originates from the winds generated due to accretion of matter through accretion funnels by the process of magnetospheric accretion. During our spectroscopic monitoring, the P Cygni profile of the H$\alpha$ line showed substantial variations. We have found that the outflow wind velocity for Gaia 20eae shows a large scatter, which may be due to the intrinsic variation of the wind velocity during the outburst phase. As the H$\alpha$ line originates from the innermost hot zone powered by accretion, the EW of the emission component of the H$\alpha$ line is an approximate indicator of the accretion rate. Table \ref{tab:velocity} shows a large variation in the EW values of the H$\alpha$ line during our monitoring period. Similarly, the EW of the O I $\lambda$7773$\rm\AA$ line also varied by up to $\sim75 \%$. 
This points towards highly turbulent accretion activity going on in Gaia 20eae during its outburst phase. This is also evident from the short scale fluctuations of the photometric magnitudes observed during the same period. These properties of Gaia 20eae are similar to those of V899 Mon, which also showed heavy outflow activity and an increase in disc turbulence as it transitioned to its quiescent state after its first outburst \citep{2015ApJ...815....4N}. Therefore, similar to V899 Mon, the present outburst of Gaia 20eae might be triggered by magnetic instabilities in magnetospheric accretion. Gaia 20eae also clearly shows a decay phase of less than 15 months as well as CO band-heads in emission. These features suggest that Gaia 20eae broadly resembles the EXor family of sources. Our high resolution spectrum shows a very strong blue-shifted He $\lambda$10830$\rm\AA$ absorption signature, which indicates very strong outflow activity in Gaia 20eae. We have also detected a red-shifted absorption component in all the Ca II IR triplet lines, which could be due to the hot in-falling gas in the magnetospheric accretion funnel. As far as we know, this is the first reported direct detection of an infall signature in the Ca II IR triplet lines in the FUor/EXor family of objects. We believe this is strong evidence for the magnetospheric funnel origin of the Ca II IR triplet lines in heavily accreting YSOs. Based on this, we have also constrained the viewing angle to be such that it is through the accretion funnel to the footprint on the stellar surface. \begin{table} \centering \caption{Features of Gaia 20eae compared to bonafide FUors and EXors.} \label{tab:gaia20compare} \begin{tabular}{p{1.2in}p{.55in}p{.55in}p{.55in}} \hline Feature & FUor & EXor & Gaia 20eae \\ \hline Outburst amplitude (mag) & 4-6 & 2-4 & 4.6\\ Age & Class\,{\sc i/ii} & Class\,{\sc ii} & Class\,{\sc ii} \\ Luminosity (L$_{\odot}$)& 100-300 & 0.5-20 & 5.9 \\ Reflection Nebulae & Yes & Sometimes & No\\ Decay to quiescence & 20-100 yr & 0.5-2 yr & 1.3 yr\\ H$\beta$ & Absorption & Emission & Absorption \\ H$\alpha$ & P Cygni & Emission & P Cygni \\ CO(2-0) and CO(3-1) & Absorption & Emission & Emission\\ \hline \end{tabular} \end{table} \section*{Acknowledgments} We thank the anonymous reviewer for valuable comments which greatly improved the scientific content of the paper. We thank the staff at the 1.3m DFOT and 3.6m DOT, Devasthal (ARIES), for their co-operation during observations. It is a pleasure to thank the members of the 3.6m DOT team and the IR astronomy group at TIFR for their support during TANSPEC observations. The TIFR$-$ARIES Near Infrared Spectrometer (TANSPEC) was built in collaboration with TIFR, ARIES and MKIR, Hawaii for the DOT. We thank the staff of IAO, Hanle and CREST, Hosakote, who made these observations possible. The facilities at IAO and CREST are operated by the Indian Institute of Astrophysics. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. 
These results are based on observations obtained with the Habitable-zone Planet Finder Spectrograph on the HET. We acknowledge support from NSF grants AST-1006676, AST-1126413, AST-1310885, AST-1310875, ATI 2009889, ATI-2009982, AST-2108512 in the pursuit of precision radial velocities in the NIR. We acknowledge support from the Heising-Simons Foundation via grant 2017-0494. The Hobby-Eberly Telescope is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. The HET collaboration acknowledges the support and resources from the Texas Advanced Computing Center. We thank the Resident Astronomers and Telescope Operators at the HET for the skillful execution of our observations with HPF. CIC acknowledges support by NASA Headquarters under the NASA Earth and Space Science Fellowship Program through grant 80NSSC18K1114. {SS, NP, RY acknowledge the support of the Department of Science and Technology, Government of India, under project No. DST/INT/Thai/P-15/2019. DKO acknowledges the support of the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4002. } \vspace{5mm} \facilities{HCT (HFOSC), DFOT, DOT (TANSPEC, ADFOSC), ARCSAT, HET (HPF, LRS2)} \software{astropy \citep{2013A&A...558A..33A}, IRAF \citep{1986SPIE..627..733T,1993ASPC...52..173T}, DAOPHOT-II~software \citep{1987PASP...99..191S}}
\section{introduction} Let $G_n=GL_n(F)$ be the general linear group of rank $n$ defined over a $p$-adic field $F$. Given a unitary supercuspidal representation $\tau$ of $G_m$ and positive integers $a,~b,~c,~d$, we can attach Speh representations $\rho_c(\tau_a)$ and $\rho_d(\tau_b)$, and it would be interesting to see when the parabolic induction representation \[\rho_c(\tau_a)|det(\cdot)|^s\times \rho_d(\tau_b)|det(\cdot)|^{-s} \] is reducible; here $s\in\mathbb{C}$ (please refer to Section \ref{mainthms} for the notions). Indeed, by analyzing Langlands--Shahidi's normalized intertwining operators, M{\oe}glin--Waldspurger proved the existence of a ``good'' bounded domain for the location of possible reducibility points, given by the so-called condition ``li{\'e}s'' in \cite[Lemma I.6.3]{moeglin1989residue}; this seems to have been the best result until then. Beyond that, one could not say more about the explicit location of reducibility points, even for the basic case $b=1$ or $d=1$, before Tadi{\'c} and Lapid--M{\'{\i}}nguez's recent work (see \cite{tadic2014irreducibility,lapidminguez2016Innerforms}). In this short note, we give a natural interpretation of the condition ``li{\'e}s'' in terms of normalization factors of intertwining operators, which in turn provides more information on the condition for our induced representation to be reducible (see Remark \ref{liescondi}). This is achieved by following closely M{\oe}glin--Waldspurger's argument in a subtle way. Doing so suggests that the condition ``li{\'e}s'' might be an ``if and only if'' condition, which does hold provided that a strong version of Main Theorem \ref{mthm1}, i.e., Conjecture \ref{conjnonzero}, is established. Concerning our approach in the spirit of M{\oe}glin--Waldspurger, we can establish a special case of Conjecture \ref{conjnonzero}, see Proposition \ref{conjspecial}, via Cai--Friedberg--Ginzburg--Kaplan's local coefficient theory of the $(k,~c)$-model, as opposed to Shahidi's well-known local coefficient theory of the Whittaker model, which is involved in the profound generalized doubling method (see \cite{cai2018doubling,cai2019doubling}). On the other hand, such an expectation indeed holds, as proved recently by Tadi{\'c} and Lapid--M{\'{\i}}nguez in an alternative way, i.e., via a detailed analysis of Jacquet modules in \cite{tadic2014irreducibility,lapidminguez2016Innerforms} (see Theorem \ref{redthm}). Indeed, using mainly the Jacquet module tool in a combinatorial way, they have obtained some beautiful criteria of irreducibility for certain induced representations, including the toy example investigated in this paper, and proposed some exciting conjectures in a series of papers (see \cite{lapidminguez2018Squareirred,lapidminguez2020Conjecture,lapid2018KazhdanLusztigpolynomial}). We hope our analytic approach could shed some light on the general case. The structure of the paper is as follows. In the next section, we first state our Main Theorem \ref{mthm1} and then provide some applications, while its proof is given in the last section. \section{main theorems} \label{mainthms} Let $F$ be a $p$-adic field with absolute value $|\cdot|$, $\mathfrak{w}$ be its uniformizer, $\mathcal{O}$ be its ring of integers, and $\mathbb{F}_q=\mathcal{O}/\mathfrak{w}\mathcal{O}$ be its residual field. For $n\in \mathbb{Z}_+$ the set of positive integers, denote $G_n=GL_n(F)$ to be the general linear group of rank $n$. 
Let $\tau$ be a unitary supercuspidal representation of $G_t=GL_t(F)$ with $t|k$, and $\tau_a$ be the unique discrete series subrepresentation of the normalized parabolic induced representation \[\tau|det(\cdot)|^\frac{a-1}{2}\times\tau|det(\cdot)|^{\frac{a-3}{2}}\times\cdots \times \tau|det(\cdot)|^{-\frac{a-1}{2}}:=Ind^{G_k}(\tau|det(\cdot)|^{\frac{a-1}{2}}\otimes\cdots \otimes \tau|det(\cdot)|^{-\frac{a-1}{2}})\] of $G_k=GL_k(F)$ with $a=\frac{k}{t}$. In particular, $\tau_1=\tau$. Attached to $\tau_a$ and $c\in \mathbb{Z}_+$, we denote $\rho_c(\tau_a)$ to be the associated Speh representation of $G_{kc}=GL_{kc}(F)$, i.e., the unique Langlands quotient of \[\tau_a|det(\cdot)|^{\frac{c-1}{2}}\times\tau_a|det(\cdot)|^{\frac{c-3}{2}}\times \cdots\times \tau_a|det(\cdot)|^{-\frac{c-1}{2}}\] or the unique subrepresentation of \[\tau_a|det(\cdot)|^{-\frac{c-1}{2}}\times\tau_a|det(\cdot)|^{-\frac{c-3}{2}}\times \cdots\times \tau_a|det(\cdot)|^{\frac{c-1}{2}}.\] For $\bar{\beta}=(\beta_1,~\beta_2)\in \mathbb{Z}_+^2$, let $P_{\bar{\beta}}$ be the standard maximal parabolic subgroup of $G_{\beta_1+\beta_2}$ with Levi factor $M_{\bar{\beta}}=G_{\beta_1}\times G_{\beta_2}$, $\sigma=(12)$ be the permutation such that \[\sigma M_{\bar{\beta}}=G_{\beta_2}\times G_{\beta_1}. \] For irreducible admissible representations $\pi_i$ of $G_{\beta_i}$, $i=1,~2$, and $\bar{s}=(s_1,~s_2)\in \mathbb{C}^2$, we have the non-normalized standard intertwining operator (see \cite{moeglin1989residue}) \[M(\sigma,~\pi_1\otimes \pi_2,~\bar{s}):~\pi_1|det(\cdot)|^{s_1}\times \pi_2|det(\cdot)|^{s_2}\longrightarrow \pi_2|det(\cdot)|^{s_2}\times \pi_1|det(\cdot)|^{s_1},\] and its normalized form \[N(\sigma,~\pi_1\otimes\pi_2,~\bar{s}):=\gamma(\sigma,~\pi_1\otimes\pi_2,~\bar{s})^{-1}M(\sigma,~\pi_1\otimes \pi_2,~\bar{s}), \] where $$\gamma(\sigma,~\pi_1\otimes\pi_2,~\bar{s}):=L(s_1-s_2,~\pi_1\times \pi_2^\vee)\cdot L(1+s_1-s_2,~\pi_1\times \pi_2^\vee)^{-1}$$ is slightly different from the factor in \cite{moeglin1989residue}. They are the same up to scalars in $\mathbb{C}[q^{-s},~q^{s}]^\star$ the set of invertible elements in $\mathbb{C}[q^{-s},~q^{s}]$. Here $(\cdot)^\vee$ means taking the contragredient. In the paper, we consider only the case $\pi_1=\rho_c(\tau_a)$ and $\pi_2=\rho_d(\tau_b)$ with $a,~b,~c,~d\in \mathbb{Z}_+$, and $\bar{s}=(s,~-s)\in \mathbb{C}^2$ for simplicity. The case $\rho_c(\tau_a)\otimes \rho_d(\tau'_b)$ with $\tau\not\simeq \tau'$, up to twisting by a unitary unramfied charater, could be discussed easily following from \cite{silberger1980special} and will be omitted. Denote \[\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b)):=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}L(2s-j,~\tau_a\times \tau^\vee_b),\quad \beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b)):=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}L(2s+j+1,~\tau_a\times\tau_b^\vee). \] Then an easy calculation shows that \[\gamma(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s})=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}\frac{L(2s-j,~\tau_a\times \tau_b^\vee)}{L(2s+j+1,~\tau_a\times\tau_b^\vee)}=\frac{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}{\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}. \] \begin{mthm} \label{mthm1} Keep the notions as above. We have \[M^*(s,~\rho_c(\tau_a)\otimes \rho_d(\tau_b)):=\frac{1}{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}M(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s})\] is holomorphic for $s\in \mathbb{C}$. 
\end{mthm} With the help of \cite[Lemma I.5]{moeglin1989residue}, we can first obtain \begin{cor}\label{sufficient} Maintain the notation as before. The reducibility points of the representation \[\rho_c(\tau_a)|det(\cdot)|^s\times \rho_d(\tau_b)|det(\cdot)|^{-s}\] are exactly the poles of \[\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\cdot\beta(-s,~\rho_c(\tau_a),~\rho_d(\tau_b)), \] provided that the normalization factors $\alpha(s,~\rho_c(\tau_a), ~\rho_d(\tau_b))^{-1}$ and $\beta(s,~\rho_c(\tau_a),~ \rho_d(\tau_b))^{-1}$ are co-prime in $\mathbb{C}[q^{-s},~q^s]$. \end{cor} \begin{proof} Recall that \cite[Lemma I.5]{moeglin1989residue} says that the induced representation \[\rho_c(\tau_a)|det(\cdot)|^{s}\times \rho_d(\tau_b)|det(\cdot)|^{-s}\mbox{is irreducible at the point }s_0\] if and only if \[N(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau_b),~\bar{s})\mbox{ and } N(\sigma,~\rho_d(\tau_b)\otimes\rho_c(\tau_a),~-\bar{s}) \mbox{ are holomorphic at }s_0. \] Note that our Main Theorem \ref{mthm1} says that \[\frac{1}{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}M(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s}) \mbox{ is holomorphic for }s\in \mathbb{C}, \] thus the condition \[\alpha(s,~\rho_c(\tau_a), ~\rho_d(\tau_b))^{-1}\mbox{ and } \beta(s,~\rho_c(\tau_a),~ \rho_d(\tau_b))^{-1}\mbox{ are co-prime}\] implies that \[N(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau_b),~\bar{s})=\frac{\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}M(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s}) \] is holomorphic at the point $s_0$ (resp. -$s_0$) if and only if \[\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\mbox{ is holomorphic at the point $s_0$ (resp. -$s_0$).} \] Whence finishing the proof. \end{proof} Moreover, with an extra help of the preservation of reducibility property under the Zelevinsky--Aubert dual \cite{zelevinsky1980induced,aubert1995dualite}, i.e., \[\rho_c(\tau_a)|det(\cdot)|^s\times \rho_d(\tau_b)|det(\cdot)|^{-s}\mbox{\bf and }\rho_a(\tau_c)|det(\cdot)|^s\times \rho_b(\tau_d)|det(\cdot)|^{-s} \mbox{\bf share the same reducibility},\] we can say a little bit more as follows. \begin{cor}\label{sufficientAubert} Retain the notation as previous. The reducibility points of the representation \[\rho_c(\tau_a)|det(\cdot)|^s\times \rho_d(\tau_b)|det(\cdot)|^{-s}\] are exactly the poles of \[\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\cdot\beta(-s,~\rho_c(\tau_a),~\rho_d(\tau_b)), \] provided that the normalization factors $\alpha(s,~\rho_a(\tau_c), ~\rho_b(\tau_d))^{-1}$ and $\beta(s,~\rho_c(\tau_a),~ \rho_d(\tau_b))^{-1}$ are co-prime in $\mathbb{C}[q^{-s},~q^s]$. \end{cor} \begin{proof} As discussed earlier, the only point is to realize that the normalization factor $\beta(s,~\rho_c(\tau_a),~ \rho_d(\tau_b))$ is invariant with respect to the following symmetries \[(c,~d)\mapsto (d,~c),~(a,~b)\mapsto (b,~a), \mbox{ and }(c,~d)\mapsto (a,~b). \] Which follows easily from the calculation \[\beta(s,~\rho_c(\tau_a),~ \rho_d(\tau_b)):=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}L(2s+j+1,~\tau_a\times\tau_b^\vee)=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}\prod_{k=\frac{|a-b|}{2}}^{\frac{a+b-2}{2}}L(2s+j+k+1,~\tau\times\tau^\vee). \] \end{proof} Let us take a break to see what the conditions in our above corollaries say. \begin{lem}\label{lemreducedCalcu} Use the same notation as defined earlier. We have \[\alpha(s,~\rho_c(\tau_a), ~\rho_d(\tau_b))^{-1}~ (\mbox{resp. 
} \alpha(s,~\rho_a(\tau_c), ~\rho_b(\tau_d))^{-1})\mbox{ and } \beta(s,~\rho_c(\tau_a),~ \rho_d(\tau_b))^{-1} \mbox{ are co-prime}\] if and only if \[|c-d|\geq min\{a-1,~b-1\}~(\mbox{resp. }|a-b|\geq min\{c-1,~d-1\}). \] \end{lem} \begin{proof} This follows from an easy calculation as follows. \[\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b)):=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}L(2s-j,~\tau_a\times \tau_b^\vee)=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}\prod_{k=\frac{|a-b|}{2}}^{\frac{a+b-2}{2}}L(2s-j+k,~\tau\times \tau^\vee), \] and \[\beta(s,~\rho_c(\tau_a),~ \rho_d(\tau_b)):=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}L(2s+j+1,~\tau_a\times\tau_b^\vee)=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}\prod_{k=\frac{|a-b|}{2}}^{\frac{a+b-2}{2}}L(2s+j+k+1,~\tau\times\tau^\vee). \] Thus the co-prime conditions are equivalent to saying that the minimal number in $\{j-k:~\cdots \}$ is bigger than the maximal number in $\{-j-k-1:~\cdots\}$, i.e., \[\frac{|c-d|}{2}-\frac{a+b-2}{2}>-\frac{|c-d|}{2}-\frac{|a-b|}{2}-1 ~(\mbox{resp. }\frac{|a-b|}{2}-\frac{c+d-2}{2}>-\frac{|c-d|}{2}-\frac{|a-b|}{2}-1 ), \] that is, \[|c-d|\geq min\{a-1,~b-1\}~(\mbox{resp. }|a-b|\geq min\{c-1,~d-1\}). \] \end{proof} Given the calculation in Lemma \ref{lemreducedCalcu}, we can easily see that two basic examples, of which the location of reducibility points seems unknown before Tadi{\'c} and Lapid--M{\'{\i}}nguez's recent work, lie in the setting of our Corollaries \ref{sufficient} and \ref{sufficientAubert} as follows. \begin{exam}\label{exambasic} $\rho_c(\tau_a)|det(\cdot)|^s\times \rho_d(\tau)|det(\cdot)|^{-s}~ (i.e. ~b=1)$, and $\rho_c(\tau_a)|det(\cdot)|^s\times \tau_b|det(\cdot)|^{-s}~ (i.e.~d=1)$. \end{exam} Now back to our discussion on the information of reducibility points we can extract from Main Theorem \ref{mthm1}. One may note that the conditions in Corollaries \ref{sufficient} and \ref{sufficientAubert} might be too strong, though they already imply something interesting, see Example \ref{exambasic}. Indeed, there does exist a weaker condition, arising from a simple observation, which guarantees the reducibility given in what follows. Write $$g.c.d\left(\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))^{-1},~\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))^{-1} \right)$$ to be the greatest common divisor of the normalization factors $\alpha(s,\cdots)$ and $\beta(s,\cdots)$, and denote by $deg_{s_0}(\alpha(s,\cdots))$ (resp. $deg_{s_0}(\beta(s,\cdots))$) the order of pole of $\alpha(s,\cdots)$ (resp. $\beta(s,\cdots)$) at the point $s_0$. Then we have \begin{cor}\label{corSuffi} Retain the notions as defined before. The induced representation \[\rho_c(\tau_a)|det(\cdot)|^s\times \rho_d(\tau_b)|det(\cdot)|^{-s}\] is reducible at those points $\pm s_0$, with $s_0<0$, characterized by one of the conditions \begin{enumerate}[(i).] \item $g.c.d\left(\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))^{-1},~\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))^{-1} \right)|_{s=s_0}=1$, up to a non-zero scalar. \item $g.c.d\left(\alpha(s,~\rho_a(\tau_c),~\rho_b(\tau_d))^{-1},~\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))^{-1} \right)|_{s=s_0}=1$, up to a non-zero scalar. \item If (i) does not hold, $deg_{s_0}\left(\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\right)>deg_{s_0}\left(\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\right)$. 
\item If (ii) does not hold, $deg_{s_0}\left(\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\right)>deg_{s_0}\left(\alpha(s,~\rho_a(\tau_c),~\rho_b(\tau_d))\right).$ \end{enumerate} \end{cor} \begin{proof} The former two conditions have been discussed in Corollaries \ref{sufficient} and \ref{sufficientAubert}, respectively. The latter two parts are easy by-products of the following simple observation (see \cite{waldspurger2003formule}), \[\mbox{\bf The order of zero of }M^*(s,~\rho_c(\tau_a)\otimes \rho_d(\tau_b))\mbox{ at }s_0\leq deg_{s_0}\left(\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\right). \] \end{proof} Let us take a look at two extremely bad cases, in terms of our conditions, to see what we can get from Corollary \ref{corSuffi}. \begin{exam}\label{exambad1} $\rho_c(\tau_c)|det(\cdot)|^s\times \rho_c(\tau_c)|det(\cdot)|^{-s}~(i.e.~c=d=a=b)$: we have \[\alpha(s,\cdots)=\prod_{j_1=0}^{c-1}L(2s-j_1,~\tau_c\times \tau_c^\vee)=\prod_{j_1=0}^{c-1}\prod_{j_2=0}^{c-1}L(2s-j_1+j_2,~\tau\times \tau^\vee),\] and \[\beta(s,\cdots)=\prod_{j_1=0}^{c-1}L(2s+j_1+1,~\tau_c\times \tau_c^\vee)=\prod_{j_1=0}^{c-1}\prod_{j_2=0}^{c-1}L(2s+j_1+j_2+1,~\tau\times \tau^\vee). \] Thus the poles of $\alpha(s,\cdots)$ and $\beta(s,\cdots)$, counted with multiplicity, are listed in terms of $2s$ by the following matrices respectively, where the red-marked entries are their common poles, \[ \left[\begin{array}{ccccc} {\color{red}-(c-1)} & {\color{red}\cdots}&{\color{red}\cdots} & {\color{red} -1} & 0 \\ {\color{red}\vdots} & &{\color{red}\udots} & &\vdots\\ {\color{red}\vdots} & {\color{red}\udots} & & & \vdots \\ {\color{red}-1} & & & & \vdots \\ 0 & \cdots &\cdots &\cdots & (c-1) \end{array} \right]_{\alpha(s,\cdots),}\qquad \left[\begin{array}{ccccc} -2(c-1)-1 &\cdots&\cdots & \cdots & -(c-1)-1 \\ \vdots & & & &{\color{red} -(c-1)}\\ \vdots & & &{\color{red}\udots} & {\color{red}\vdots} \\ \vdots & &{\color{red}\udots} & & {\color{red}\vdots} \\ -(c-1)-1 & {\color{red}-(c-1)} &{\color{red}\cdots} &{\color{red}\cdots} & {\color{red} -1} \end{array} \right]_{\beta(s,\cdots).} \] Therefore, by Corollary \ref{corSuffi}, we see that all poles of $\beta(s,~\rho_c(\tau_c),~\rho_c(\tau_c))\cdot\beta(-s,~\rho_c(\tau_c),~\rho_c(\tau_c))$ except the following points \[2s\in\left\{\pm 1,~\pm 2,~\cdots,~\pm \left\lfloor\frac{c}{2} \right\rfloor\right\} \] are known to be reducibility points of our induced representation. \end{exam} \begin{exam}\label{exambad2} $\rho_c(\tau_a)|det(\cdot)|^s\times \rho_c(\tau_a)|det(\cdot)|^{-s}~(i.e.~c=d\neq a=b)$: we have \[\alpha(s,~\rho_c(\tau_a),~\rho_c(\tau_a))=\prod_{j_1=0}^{c-1}L(2s-j_1,~\tau_a\times \tau_a^\vee)=\prod_{j_1=0}^{c-1}\prod_{j_2=0}^{a-1}L(2s-j_1+j_2,~\tau\times \tau^\vee),\] and \[\beta(s,\cdots)=\prod_{j_1=0}^{c-1}L(2s+j_1+1,~\tau_a\times \tau_a^\vee)=\prod_{j_1=0}^{c-1}\prod_{j_2=0}^{a-1}L(2s+j_1+j_2+1,~\tau\times \tau^\vee). \] Following from taking the Zelevinsky--Aubert dual as in Corollary \ref{sufficientAubert}, we can assume $c>a$; then by Corollary \ref{corSuffi} and a similar calculation as done in Example \ref{exambad1}, we have that all poles of $\beta(s,~\rho_c(\tau_a),~\rho_c(\tau_a))\cdot\beta(-s,~\rho_c(\tau_a),~\rho_c(\tau_a))$ except the following points \[2s\in \left\{\pm 1,~\pm 2,~\cdots,~\pm \left\lfloor\frac{a}{2} \right\rfloor\right\} \] are known to be reducibility points of our induced representation. 
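The bookkeeping of these pole locations can also be checked mechanically. The following sketch is only a numerical aid (not part of the argument); it assumes, as used implicitly above, that the only pole of $L(z,~\tau\times\tau^\vee)$ relevant on the real axis is a simple pole at $z=0$, and it reproduces the excluded points $2s\in\{\pm 1,~\cdots,~\pm\lfloor c/2\rfloor\}$ for the case $a=b=c=d$ of Example \ref{exambad1}.
\begin{verbatim}
# Numerical aid: pole bookkeeping for alpha and beta in the case a = b = c = d.
# Assumption (illustration only): the relevant real pole of L(z, tau x tau^vee)
# is a simple pole at z = 0, so alpha contributes poles at 2s = j1 - j2 and
# beta at 2s = -(j1 + j2 + 1), with j1, j2 running over 0, ..., c-1.
from collections import Counter

def pole_multiplicities(c):
    alpha = Counter(j1 - j2 for j1 in range(c) for j2 in range(c))
    beta = Counter(-(j1 + j2 + 1) for j1 in range(c) for j2 in range(c))
    return alpha, beta

def undecided_points(c):
    # poles of beta(s)beta(-s) (in terms of 2s) not covered by conditions (i)/(iii)
    alpha, beta = pole_multiplicities(c)
    out = set()
    for p, mult_beta in beta.items():
        covered = alpha[p] == 0 or mult_beta > alpha[p]
        if not covered:
            out.update({p, -p})
    return sorted(out)

for c in (3, 4, 5, 6):
    expected = sorted(set(range(-(c // 2), c // 2 + 1)) - {0})
    print(c, undecided_points(c), "expected:", expected)
\end{verbatim}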
\end{exam} \begin{rem}\label{liescondi} Given our Main Theorem \ref{mthm1}, with the help of \cite[Lemma I.5]{moeglin1989residue} and the functional equation, up to an invertible scalar, \[N(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau_b),~\bar{s})\circ N(\sigma,~\rho_d(\tau_b)\otimes\rho_c(\tau_a),~-\bar{s})=id, \] one could readily see that the location of reducibility points is governed by the poles of \[\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\cdot\beta(-s,~\rho_c(\tau_a),~\rho_d(\tau_b)), \] which is exactly what M{\oe}glin--Waldspurger's condition ``li{\'e}s'' expresses in \cite[Lemma I.6.3]{moeglin1989residue}. Moreover, our Main Theorem \ref{mthm1} could be proved in another way, without the profound Langlands--Shahidi theory utilized heavily in \cite{moeglin1989residue}, i.e., following from our new argument illustrated in \cite{luo2021casselmanshahidiconjecture}; this will be discussed as a necessary ingredient for analogous results for classical groups in \cite{luo2021holomorphicityClassicalGroup}. In view of this, one could readily see that similar results hold for the general linear groups over division algebras if we could prove \cite[Lemma I.5 (i)]{moeglin1989residue} in this setting using the tool of Jacquet modules directly, which we leave for future work. \end{rem} Based on the above analysis, especially the observation in Remark \ref{liescondi}, we are curious to see if our approach in the spirit of M{\oe}glin--Waldspurger could reach the point of answering the reducibility problem completely, even though the problem has been settled recently by Tadi{\'c} and Lapid--M{\'{\i}}nguez in \cite{tadic2014irreducibility,lapidminguez2016Innerforms} via a detailed analysis of Jacquet modules in a combinatorial way. Excitingly, a conjectural strong version of Main Theorem \ref{mthm1}, stated in what follows, can achieve this. \begin{conj}\label{conjnonzero} Use the same notation as in Main Theorem \ref{mthm1}. We have that \[M^*(s,~\rho_c(\tau_a)\otimes \rho_d(\tau_b)):=\frac{1}{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}M(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s}) \] is always non-zero for $s\in \mathbb{C}$, i.e., it is a non-zero intertwining operator. \end{conj} \begin{thm}[Tadi{\'c}, Lapid--M{\'{\i}}nguez \cite{tadic2014irreducibility,lapidminguez2016Innerforms}]\label{redthm} Corollary \ref{sufficient} holds unconditionally. \end{thm} \begin{proof} As seen in Remark \ref{liescondi}, reducibility points are governed by the poles of \[\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\cdot\beta(-s,~\rho_c(\tau_a),~\rho_d(\tau_b)). \] Hence it suffices to show that the converse also holds, i.e., poles of $\beta(s,~\cdots)\beta(-s,~\cdots)$ are controlled by the reducibility points of \[\rho_c(\tau_a)|det(\cdot)|^s\times \rho_d(\tau_b)|det(\cdot)|^{-s}. \] This follows easily from Conjecture \ref{conjnonzero} via the following functional equation, up to invertible elements, \[M^*(s,~\rho_c(\tau_a)\otimes \rho_d(\tau_b))\circ M^*(-s,~ \rho_d(\tau_b)\otimes\rho_c(\tau_a))=\frac{1}{\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))\cdot\beta(-s,~\rho_c(\tau_a),~\rho_d(\tau_b))}. \] To be precise, if the right hand side is zero at some point $s_0$, then the left hand side is also zero at $s_0$. But Conjecture \ref{conjnonzero} says that $M^*(\pm s_0,~\cdots)\neq 0$, so the induced representation must be reducible at $s_0$. Whence Theorem \ref{redthm} holds. 
\end{proof} Regarding Conjecture \ref{conjnonzero}, via Cai--Friedberg--Ginzburg--Kaplan's local coefficient theory of $(k,~c)$-model in \cite{cai2018doubling,cai2019doubling}, we have \begin{prop}\label{conjspecial} Keep the notions as above. Conjecture \ref{conjnonzero} holds for the case $c=d$. \end{prop} \begin{proof} Recall that Cai--Friedberg--Ginzburg--Kaplan's local coefficient theory of $(k,~c)$-model, given by the following diagram, please refer to \cite{cai2018doubling,cai2019doubling} for the notation and results in details, \[ \xymatrix{ \rho_c(\tau_a)|det(\cdot)|^s\times \rho_c(\tau_b)|det(\cdot)|^{-s}\quad\ar[rr]^{M(\sigma,\rho_c(\tau_a)\otimes\rho_c(\tau_b),\bar{s})} \ar[dr]_{\lambda(s,\cdots)} & & \quad\rho_c(\tau_b)|det(\cdot)|^{-s}\times\rho_c(\tau_a)|det(\cdot)|^s \ar[dl]^{\lambda(-s,\cdots)} \\ & \mathbb{C}_{\psi} & }\] says that there exists a rational coefficient $C_\psi(s,~\cdots)$ given by, up to an invertible element in $\mathbb{C}[q^{-s},q^s]$, \[C_\psi(s,~\cdots)=\frac{\beta(-s,~\rho_c(\tau_a),~\rho_c(\tau_b))}{\alpha(s,~\rho_c(\tau_a),~\rho_c(\tau_b))} \] such that, up to an invertible element in $\mathbb{C}[q^{-s},q^s]$, \[\lambda(s,~\cdots)=C_\psi(s,~\cdots)\lambda(-s,~\cdots)\circ M(\sigma,~\rho_c(\tau_a)\otimes\rho_c(\tau_b),~\bar{s})=\beta(-s,~\cdots)\lambda(-s,~\cdots)\circ M^*(s,~\rho_c(\tau_a)\otimes \rho_c(\tau_b)). \] Note that $\lambda(s,~\cdots)$ is holomorphic in $s$ and non-zero (see \cite[P. 15]{cai2019doubling}), thus we obtain \[M^*(s,~\rho_c(\tau_a)\otimes \rho_c(\tau_b))\neq 0\mbox{ for }Re(s)\leq 0, \] which follows from the fact that $\beta(-s,~\cdots)$ has no poles at $Re(s)\leq 0$. On the other hand, the top diagram in Page 615 in \cite[I.5 Lemma]{moeglin1989residue} says that \[N(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s})=\frac{\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}M(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s})\] is holomorphic and nonzero for $Re(s)\geq 0$. Note that \[\beta(s,~\rho_c(\tau_a),~\rho_d(\tau_b)):=\prod_{j=\frac{|c-d|}{2}}^{\frac{c+d-2}{2}}L(2s+j+1,~\tau_a\times\tau_b^\vee)\] has no poles for $Re(s)\geq 0$, thus we know \[M^*(s,~\rho_c(\tau_a)\otimes \rho_d(\tau_b)):=\frac{1}{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}M(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s}) \] is always non-zero for $Re(s)\geq 0$. Whence $M^*(s,~\rho_c(\tau_a)\otimes \rho_c(\tau_b))$ is always non-zero for $s\in \mathbb{C}$. \end{proof} \begin{rem}\label{localeffic} One may notice that the local coefficient theory of $(k,~c)$-model for general linear groups $GL$ has not been written down in detail in \cite{cai2018doubling}. But one also sees that the unramified calculation in some sense has been carried out in \cite[P.63 equation (6.57)]{cai2018doubling} and the multiplicative property is quite standard as in \cite[Proposition 3.2.1]{shahidi1981certain}, thus the formula used in the above proof, i.e., \[C_\psi(s,~\cdots)=\frac{\beta(-s,~\rho_c(\tau_a),~\rho_c(\tau_b))}{\alpha(s,~\rho_c(\tau_a),~\rho_c(\tau_b))} \] follows easily from the fact that $C_\psi(s,~\cdots)=1$ globally, see \cite[P.65]{cai2018doubling}, via a local-global argument. As this is not so related with our focus in the paper, we do not include the details herein. 
\end{rem} \section{proof of main theorem \ref{mthm1}} In this section, we will follow M{\oe}glin--Waldspurger's work \cite{moeglin1989residue} to prove our Main Theorem \ref{mthm1} for $\tau_a$ and $\tau_b$ supercuspidal, and discrete series gradually. Please refer to \cite[Section IA]{moeglin1989residue} for the details. \begin{proof}[Proof of Main Theorem \ref{mthm1} ($\tau_a$ and $\tau_b$ supercuspidal)] Note that \cite[Proposition I.10]{moeglin1989residue} and \cite[Lemma I.6.3]{moeglin1989residue} say that $N(\sigma,~\rho_c(\tau)\otimes \rho_d(\tau),~\bar{s})$ is holomorphic and non-zero for $2Re(s)>-1$, with possible poles at the reducible points of $\rho_c(\tau)|det(\cdot)|^{s}\times \rho_d(\tau)|det(\cdot)|^{-s}$, i.e., \[2Re(s)\in \left[-\frac{c+d}{2},~-\frac{|c-d|}{2}\right)\cap \mathbb{Z}.\] Thus it is equivalent to showing that, via a simple analysis of the normalization factors $\alpha(s,~\rho_c(\tau),~\rho_d(\tau))$ and $\beta(s,~\rho_c(\tau),~\rho_d(\tau))$, \[M(\sigma,~\rho_c(\tau)\otimes \rho_d(\tau),~\bar{s})\mbox{ is holomorphic for }2Re(s)< -\frac{|c-d|}{2}. \tag{$\star$} \] Which is equivalent to showing that \[N(\sigma,~\rho_c(\tau)\otimes \rho_d(\tau),~\bar{s})\mbox{ has only simple poles possibly at }2Re(s)\in \left[-\frac{c+d}{2},~-\frac{|c-d|}{2}\right)\cap \mathbb{Z}.\tag*{(C1)} \] Viewing $\rho_d(\tau)$ as a subrepresentation of $\tau|det(\cdot)|^{-\frac{d-1}{2}}\times\tau|det(\cdot)|^{-\frac{d-3}{2}}\times \cdots\times\tau|det(\cdot)|^{\frac{d-1}{2}}$, we have $$N(\sigma,~\rho_c(\tau)\otimes \rho_d(\tau),~\bar{s})=\prod_{j_2=-\frac{d-1}{2}}^{\frac{d-1}{2}}N(\sigma,~\rho_c(\tau)\otimes \tau|det(\cdot)|^{j_2},~\bar{s}) \mbox{ (from large to small)}.$$ For each $N(\sigma,~\rho_c(\tau)\otimes \tau|det(\cdot)|^{j_2},~\bar{s})$, applying \cite[Proposition I.10]{moeglin1989residue} and \cite[Lemma I.6.3]{moeglin1989residue} again, we know that it has possible poles only at \[2Re(s)=-\frac{c+1}{2}+j_2\in \mathbb{Z}.\] On one hand, $$M(\sigma,~\rho_c(\tau)\otimes \tau|det(\cdot)|^{j_2},~\bar{s})=\prod_{j_1=-\frac{c-1}{2}}^{\frac{c-1}{2}}M(\sigma,~\tau|det(\cdot)|^{j_1}\otimes \tau|det(\cdot)|^{j_2},~\bar{s}) \mbox{ (from small to large)}$$ via a reduced decomposition, and $M(\sigma,~\tau|det(\cdot)|^{j_1}\otimes \tau|det(\cdot)|^{j_2},~\bar{s})$ has only a simple pole possibly at (see \cite{bernstein1977induced}) \[2Re(s)=j_2-j_1.\] On the other hand, an easy calculation of the normalization factor gives \[\gamma(\sigma,~\rho_c(\tau)\otimes \tau|det(\cdot)|^{j_2},~\bar{s})=\frac{L(2s-1-j_2-\frac{c-1}{2},~\tau\times \tau^\vee)}{L(2s-1-j_2+\frac{c-1}{2}+1,~\tau\times \tau^\vee)} \] Thus the inequality $2Re(s)=-\frac{c-1}{2}-1+j_2<j_2-j_1$ for any $j_1$ implies that \[N(\sigma,~\rho_c(\tau)\otimes \tau|det(\cdot)|^{j_2},~\bar{s})\mbox{ has only a simple pole possibly at } 2Re(s)=-\frac{c+1}{2}+j_2. \] Therefore, the inequality $-\frac{c+1}{2}+j_2\neq -\frac{c+1}{2}+j'_2$ if $j_2\neq j'_2$ implies that \[N(\sigma,~\rho_c(\tau)\otimes \rho_d(\tau),~\bar{s})\mbox{ has only simple poles possibly at }2Re(s)\in \left[-\frac{c+d}{2},~-\frac{|c-d|}{2}\right)\cap \mathbb{Z}. \] Whence finishing the Claim (C1). \end{proof} \begin{proof}[Proof of Main Theorem \ref{mthm1} ($\tau_a$ and $\tau_b$ discrete series)] Recall that $\tau_b$ is the unique subrepresentation of $\tau|det(\cdot)|^\frac{b-1}{2}\times \cdots\times \tau|det(\cdot)|^{-\frac{b-1}{2}}$. 
The key observation for our argument in what follows is that \[\rho_c(\tau_b)\simeq S(\rho_c(\tau)|det(\cdot)|^{\frac{b-1}{2}}\times\cdots\times\rho_c(\tau)|det(\cdot)|^{-\frac{b-1}{2}})\mbox{ (the unique subrepresentation)}. \] This follows from the fact that they share the same Zelevinsky--Aubert dual (see \cite{moeglinwaldspurger1986involution}). In view of this observation, viewing $\rho_c(\tau_b)$ as a subrepresentation of $$\rho_c(\tau)|det(\cdot)|^{\frac{b-1}{2}}\times\cdots\times\rho_c(\tau)|det(\cdot)|^{-\frac{b-1}{2}},$$ we have the decomposition \[N(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau_b),~\bar{s})=\prod_{j_2=-\frac{b-1}{2}}^{\frac{b-1}{2}}N(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s})\mbox{ (from large to small)}. \] For each $N(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s})$, \cite[Proposition I.10]{moeglin1989residue} and \cite[Lemma I.6.3]{moeglin1989residue} imply that it has possible poles at \[2Re(s)\in \left[j_2-\frac{a-1}{2}-\frac{c+d}{2},~j_2-\frac{a-1}{2}-\frac{|c-d|}{2}\right)\cap \mathbb{Z}. \] On the other hand, \[M(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s})=\prod_{j_1=-\frac{a-1}{2}}^{\frac{a-1}{2}}M(\sigma,~\rho_c(\tau)|det(\cdot)|^{j_1}\otimes\rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s}) \mbox{ (from small to large)}. \] Note that, by ($\star$), we know that $M(\sigma,~\rho_c(\tau)|det(\cdot)|^{j_1}\otimes\rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s})$ has possible poles only at \[2Re(s)+j_1-j_2\geq -\frac{|c-d|}{2},~i.e.,~2Re(s)\in \left[j_2-j_1-\frac{|c-d|}{2},+\infty\right)\cap \mathbb{Z}. \] Thus the inequality $j_2-\frac{a-1}{2}\leq j_2-j_1$ says that for any $j_1$, \[j_2-\frac{a-1}{2}-\frac{|c-d|}{2}\leq j_2-j_1-\frac{|c-d|}{2}. \] Which in turn implies that, for $N(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s})$, \[\mbox{all of its possible poles come from the normalization factor }\gamma(\sigma,~\rho_c(\tau_a)\otimes\rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s}).\tag{$\star\star$}\] Therefore our Main Theorem \ref{mthm1} holds for the inducing data $\rho_c(\tau_a)\otimes\rho_d(\tau)$, as well as $\rho_c(\tau)\otimes \rho_d(\tau_b)$ which could be proved similarly and the detail will be left to the reader. Back to the general case, by an easy calculation, for $a\geq b$, we know that the normalization factors $\alpha(\cdots)$ match on both sides of the decomposition \[N(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s})=\prod_{j_2=-\frac{b-1}{2}}^{\frac{b-1}{2}}N(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau)|det(\cdot)|^{j_2},~\bar{s}) \mbox{ (from large to small)}.\] The case $a\leq b$ can be analyzed similarly as above by decomposing with respect to $\rho_c(\tau_a)$ instead of $\rho_d(\tau_b)$, and will be left as an exercise for the interested readers. Whence \[M^*(s,~\rho_c(\tau_a)\otimes \rho_d(\tau_b)):=\frac{1}{\alpha(s,~\rho_c(\tau_a),~\rho_d(\tau_b))}M(\sigma,~\rho_c(\tau_a)\otimes \rho_d(\tau_b),~\bar{s})\] is holomorphic for $s\in \mathbb{C}$, completing the proof of Main Theorem \ref{mthm1}. \end{proof} \paragraph*{\textbf{Acknowledgments}} The author would like to thank Eyal Kaplan for his kindness and help. Thanks are also due to the referee for his/her detailed comments. The author was supported by the ISRAEL SCIENCE FOUNDATION Grant Number 376/21. \bibliographystyle{amsalpha}
\section{Introduction} Resolution of two point sources is one of the most crucial elements in the science of imaging and sensing. The quality of fine resolution relies on two major factors: resolution capability (how small a separation can be discriminated) and measurement estimation credibility (how much the measurement can be trusted). For over a century, the empirical Abbe-Rayleigh diffraction criterion \cite{Abbe1873, Rayleigh1896, BornWolf,Rayleigh1979}, relating to the ratio of light wavelength and aperture diameter, has been regarded as a roadblock that limits the resolution capability with practically feasible parameters \cite{Goodman2005,Kolobov2000PRL}. This is due to the fact that when the two sources get closer, the blurred overlapping signals are harder to discriminate through direct intensity measurements. Moreover, the second factor, measurement credibility (or measurement precision), also degrades as noise effects become relatively more prominent when the separation of the two sources decreases. Studies have shown that while statistical methods are able to improve the resolution capability by determining the source locations, the precision of measurement vanishes as the separation of the two sources decreases beyond the diffraction limit, approaching zero \cite{Bettens1999UM, Aert2002JSB, Ram2006PNAS, Chao2016JOSA}. This phenomenon consolidates the common wisdom of the Abbe-Rayleigh diffraction criterion on a more rigorous foundation, and is termed Rayleigh's curse by many authors, see for example \cite{Tsang2015O, Tsang2016PRL, Tsang2016PRX, Paur2016O, Rehacek2017PRA, Larson2018O, Hradil2019O, Liang2021O, Wadood2021OE,Faber2000OL}. Recently, the pioneering works of Tsang and coworkers \cite{Tsang2015O, Tsang2016PRX,Tsang2019arxiv,Tsang2019O} showed that it is possible in principle to improve both factors by analyzing the signal in a different (e.g., the Hermite-Gaussian mode) spatial basis instead of direct intensity measurement. The new technique renders the Abbe-Rayleigh diffraction criterion irrelevant and at the same time guarantees a finite desired estimation accuracy via the parameter Fisher information (FI) \cite{Helstrom1976, Holevo2001}. Experimental confirmations have also been demonstrated, see for example \cite{Paur2016O, Wadood2021OE}. While working perfectly in ideal situations, this technique has two constraints, requiring (1) incoherence and (2) balance (equal brightness) of the two point sources. It has been shown that relaxing either one of the two may lead to the resurgence of Rayleigh's curse: by Rehacek et al. with unbalanced incoherent sources \cite{Rehacek2017PRA}, and by Larson--Saleh \cite{Larson2018O, Larson2019O} and De et al. \cite{DePRR2021} with balanced but partially coherent sources. \begin{figure}[h!] \centering \includegraphics[width=4cm]{f1_scheme} \caption{The point spread functions, $h_{+}(x)$ and $h_{-}(x)$, of two unbalanced point sources separated by $s$ via a shift-invariant imaging system. The coefficients $a$ and $b$ characterize a continuous unbalanceness.} \label{scheme} \end{figure} To address this issue, we attack both restrictions at the same time by investigating the resolution of two unbalanced and partially coherent point sources with the assistance of an entangled partner, see a schematic illustration in Fig.~\ref{scheme}. The effect of partial coherence is analyzed by continuous basis rotation of the entangled partner \cite{Plenio2017RMP}. 
It is found that the effect of unbalanceness on the estimation of the two-source separation, as quantified by the Fisher information, is equivalent to that of the basis rotation-induced partial coherence. Unexpectedly, the joint effect of the two restrictions under entanglement permits the realization of super-resolution with a finite Fisher information even when the separation of the two sources approaches zero. It is also determined analytically that there exists a ``least resolvable'' (non-zero) distance at which the Fisher information experiences a non-zero minimum that is determined by Lambert W functions \cite{Corless1996ACM}. This provides guidance for employing optimal practical parameters to achieve super-resolution with a given accuracy requirement. \section{Model and Methods} We consider two unbalanced sources located at $x\pm s/2$, respectively, with a separation $s$. Through a shift-invariant imaging system, their normalized point spread functions (PSF) are assumed to take the Gaussian form, i.e., $h_{\pm}(x)=h(x\pm s/2)$ with $h(x)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp[{-\frac{x^{2}}{2\sigma^2}}]$ and $\sigma$ as the width, see an illustration in Fig.~\ref{scheme}. We include the partial coherence property of the two sources by introducing an entangled partner of the spatial degree of freedom. The optical field in the image plane can be described as \begin{align} \label{ent} \ket{\Psi_{ent}}&=a\ket{h_{+}}\ket{\phi_{1}}+be^{i\varphi}\ket{h_{-}}\ket{\phi_{2}}, \end{align} where $\ket{h_{\pm}}$ represent the two non-orthogonal point spread functions (vectors) of the spatial degree of freedom with $\langle x \ket{h_{\pm}}=h_{\pm}(x)$, $\ket{\phi_{1}}, \ket{\phi_{2}}$ are two generic states describing the remaining degrees of freedom (including polarization, temporal domain, etc.), $a$ and $b$ are real normalized coefficients with $a^{2}+b^{2}=1$, and $\varphi$ is an arbitrary phase. The degree of unbalanceness of the two sources can be simply quantified by the ratio $r=b/a$, where $r=0$ means completely unbalanced and $r=1$ indicates balanceness ($r>1$ is equivalent to $r<1$ since $a$ and $b$ are symmetric). Here the spatial state space $\{\ket{h_{+}}, \ket{h_{-}}\}$ is entangled with the remaining state space $\{\ket{\phi_{1}}, \ket{\phi_{2}}\}$. The entangled state (\ref{ent}) can either be a quantum state of single photons or a macroscopic classical optical field \cite{Spreeuw1998FP, Luis2009OC, Simon2010PRL, Borges2010PRA, Qian2011OL, Kagalwala2013NP, Toppel2014NJP, Zela2014PRA, Forbes2017Nat, Goldberg2021AOP}. In general, $\ket{\phi_{1}}$ and $\ket{\phi_{2}}$ can be non-orthogonal, and their overlap, $\langle \phi_1|\phi_2\rangle$, determines the degree of coherence. For the convenience of the following discussion, we take the two to be orthogonal but analyze the operations (or detections) in an arbitrarily rotated basis, i.e., $|\phi^{\alpha}_1\rangle=\cos\alpha |\phi_1\rangle-\sin\alpha |\phi_2\rangle$ and $|\phi^{\alpha}_2\rangle=\sin\alpha |\phi_1\rangle+\cos\alpha |\phi_2\rangle$. This is an equivalent way of introducing partial coherence due to its basis-dependent nature \cite{Plenio2017RMP, Qian2011OL, Kagalwala2013NP, Goldberg2021AOP}, see an illustrative analysis in supplemental material section 1. 
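To make the role of the rotation angle concrete, the following minimal numerical sketch (ours, for illustration only) projects the partner state in Eq.~(\ref{ent}) onto the rotated basis states and tabulates the resulting conditional spatial profiles; it assumes Gaussian PSFs with $\sigma=1$, $\varphi=0$, and PSF amplitudes that are $L^2$-normalized numerically. The same decomposition is written out analytically below.
\begin{verbatim}
# Sketch (illustration only): conditional spatial amplitudes obtained by projecting
# the partner in Eq. (1) onto the rotated basis {|phi^alpha_1>, |phi^alpha_2>}.
# Assumptions: Gaussian PSFs with sigma = 1, phi = 0, L2-normalized numerically.
import numpy as np

sigma, s, phi = 1.0, 0.5, 0.0
a, b = np.cos(np.pi / 5), np.sin(np.pi / 5)        # unbalanced sources, a^2 + b^2 = 1

x = np.linspace(-12, 12, 4801)
dx = x[1] - x[0]

def psf(center):
    g = np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    return g / np.sqrt(np.sum(g ** 2) * dx)        # numerical L2 normalization

h_plus, h_minus = psf(+s / 2), psf(-s / 2)

for alpha in (0.0, np.pi / 6, np.pi / 4):
    # conditional amplitudes when the partner is found in |phi^alpha_1> or |phi^alpha_2>
    h1 = a * np.cos(alpha) * h_plus - b * np.exp(1j * phi) * np.sin(alpha) * h_minus
    h2 = a * np.sin(alpha) * h_plus + b * np.exp(1j * phi) * np.cos(alpha) * h_minus
    p1 = np.sum(np.abs(h1) ** 2) * dx              # weight of the first outcome
    p2 = np.sum(np.abs(h2) ** 2) * dx              # weight of the second outcome
    # interference (cross) weight in the first conditional intensity; zero at alpha = 0
    cross = -2 * a * b * np.sin(alpha) * np.cos(alpha) * np.cos(phi) * np.sum(h_plus * h_minus) * dx
    print(f"alpha={alpha:.3f}: p1={p1:.3f}, p2={p2:.3f}, sum={p1 + p2:.3f}, cross={cross:+.3f}")
\end{verbatim}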
The optical field can be rewritten in the new basis as \begin{align} \ket{\Psi_{ent}}=\ket{h_{1}}|\phi^{\alpha}_1\rangle+\ket{h_{2}}|\phi^{\alpha}_2\rangle, \label{rotated} \end{align} where $\ket{h_{1}}=a\cos\alpha\ket{h_{+}}-b\sin\alpha e^{i\varphi}\ket{h_{-}}$ and $\ket{h_{2}}=a\sin\alpha\ket{h_{+}}+b\cos\alpha e^{i\varphi}\ket{h_{-}}$ are two new spatial functions that are in general non-normalized and non-orthogonal. To quantify the likelihood of an accurate estimation (or the degree to which measurements can be trusted) of the two-source separation $s$, the conventional Fisher information (FI) \cite{Helstrom1976, Pang2017NC, Shukur2020Nat} is employed. Its definition is based on the Cram\'er-Rao bound \cite{Zmuidzinas2003OSA, Kay1993, Pang2014PRL} ${\rm Var}(s)\geq 1/F$. The optimal estimation of the unknown parameter $s$ corresponds to the maximization of the Fisher information $F$, which corresponds to a minimum of the estimator variance ${\rm Var}(s)$. A recent debate about the nonphysical divergence of Fisher information \cite{Larson2018O, Tsang2019O, Larson2019O, Hradil2019O} suggests that the Fisher information for analyzing two-source super-resolution needs to be appropriately normalized. Here we adopt the approach proposed by Hradil et al. \cite{Hradil2019O} to express the total FI as a sum of weighted components over all probabilistic events. For the general entangled state (\ref{rotated}), the FI is defined as \begin{equation} F_{tot}=\langle h_1|h_1\rangle F_{\rho_1}+\langle h_2|h_2\rangle F_{\rho_2}, \label{FIdefinition} \end{equation} where $\rho_1=|h_1\rangle \langle h_1| /N_1$ and $\rho_2=|h_2\rangle\langle h_2|/N_2 $ are the two corresponding normalized states of the spatial domain, with normalization factors $N_1$, $N_2$ and corresponding weights $\langle h_1|h_1\rangle$, $\langle h_2|h_2\rangle$. Here the Fisher information takes the form $F_{\rho}=2{\rm Tr}[(\partial_s\rho(s))^2]$ for an arbitrary normalized state $\rho (s)$. This measure is based on the conditional outcome of a measurement in the rotated basis $\{|\phi^{\alpha}_1\rangle,|\phi^{\alpha}_2\rangle\}$ of the entangled partner. It is important to note that here we treat the measurement of the signal as a single repetition (e.g., an independent single photon detection event, detecting a bunch of identical photons within the coherence time, or a single measurement of light intensity). For a given number (e.g., $N$) of repetitions of the measurement, our results remain the same up to the constant factor $N$. Our analysis does not cover environment-induced loss cases where the photon numbers are unknown. \section{Results and Messages} In our consideration, the unbalanceness of the two sources $r=b/a$ is also an unknown parameter. However, it will be shown later that measurement in the rotated basis $\{|\phi^{\alpha}_1\rangle,|\phi^{\alpha}_2\rangle\}$ is always able to cancel the unbalanceness effect. Therefore, one needs only to consider the single unknown parameter $s$ for the calculation of the Fisher information. 
Then the Fisher information for the entangled field (\ref{rotated}) can be explicitly obtained as \begin{align} \label{Ftot} F_{tot}(s)=\;&\frac{1}{4\sigma^{2}}\notag \\ &-\frac{(r\sin2\alpha\cos\varphi)^{2}s^{2}}{16[(\cos^{2}\alpha+r^{2}\sin^{2}\alpha)e^{s^{2}/8\sigma^{2}}-r\sin2\alpha\cos\varphi]} \notag \\ &\times \frac{1}{(r^{2}\cos^{2}\alpha+\sin^{2}\alpha)e^{s^{2}/8\sigma^{2}}+r\sin2\alpha\cos\varphi}, \end{align} which depends on the two-source separation $s$, the entangled partner's measurement basis characterized by the rotation angle $\alpha$, the two-source unbalanceness through the amplitude ratio $r$, and the relative phase $\varphi$. Fig.~\ref{FI} (a) illustrates the specific behavior of the Fisher information as a function of $s$ for a fixed partial coherence (rotation angle $\alpha=\pi/6$) and relative phase $\varphi=0$, but for different unbalanceness $r=0,1/4,1/2,1$. Qualitative behaviors for other degrees of partial coherence (i.e., other rotation angles) are similar to Fig.~\ref{FI} (a), except for some special cases which will be discussed later. The detailed derivation of (\ref{Ftot}) is provided in supplemental material section 2. \begin{figure}[t] \includegraphics[width=6cm]{1.png} \includegraphics[width=6cm]{2.png} \caption{Dependence of the FI on the displacement $s$ for $\varphi=0$ with $\sigma=1$. Different colors represent different values of $r$. The red line is for the balanced case ($r=1$), while the blue and green lines correspond to $r=\frac{1}{2}$ and $r=\frac{1}{4}$, respectively. The case $r=0$ is represented by the black line. (a) FI for the state $\ket{\Psi_{1}}$ in the case $\alpha=\frac{\pi}{6}$. (b) FI for the state $\ket{\Psi_{2}}$.} \label{FI} \end{figure} There are three important messages delivered by the obtained Fisher information (\ref{Ftot}) for the entangled source. {\em The first major message} of result (\ref{Ftot}) is that super-resolution is achievable for various practical unbalanceness settings, as the value of the Fisher information remains finite even when the separation $s$ goes to zero, as shown in Fig.~\ref{FI} (a). This unexpected behavior is a result of the existence of the entangled partner. To have a clearer picture of the effects of the entangled partner, we also analyze the Fisher information of two unentangled point sources with the same unbalanceness characterized by $r=b/a$ and relative phase $\varphi$, i.e., \begin{align} \label{unent} \ket{\Psi'}&=(a\ket{h_{+}}+be^{i\varphi}\ket{h_{-}})|\phi\rangle, \end{align} where $|\phi\rangle$ is a generic state of the remaining degrees of freedom. In this non-entangled case, the measurement basis of $|\phi\rangle$ is irrelevant to the spatial domain. Therefore, the Fisher information can be directly computed as $F_{\rho'}=2{\rm Tr}[(\partial_s\rho')^2]$, where $\rho'=|\Psi'\rangle \langle \Psi'|/N'$ with $N'$ being the normalization factor. It can be obtained straightforwardly as \begin{align} \label{F-unent} F_{\rho'}(s)=\;&\frac{\frac{1}{4\sigma^{2}}- ab\cos\varphi e^{-\frac{s^{2}}{8\sigma^{2}}}\frac{(-s^{2}+4\sigma^{2})}{8\sigma^{4}}}{1+2ab\cos\varphi e^{-s^{2}/8\sigma^{2}}}\nonumber\\ &-\frac{1}{4}\frac{(ab\cos\varphi e^{-\frac{s^{2}}{8\sigma^{2}}}s)^{2}}{(1+2ab\cos\varphi e^{-s^{2}/8\sigma^{2}})^{2}}. 
\end{align} Fig.~\ref{FI} (b) illustrates the behavior of $F_{\rho'}$ for the nonentangled field with the same unbalanceness values and fixed phase $\varphi$. The detailed derivation of the above result is also provided in supplemental material section 3. By comparing the Fisher information of the two cases, as illustrated in Fig.~\ref{FI} (a) and (b), one readily notes that the entangled field gives a significant enhancement for all unbalanceness values in the small-separation regime. In particular, for the frequently studied balanced source case ($r=1$), the nonentangled field Fisher information vanishes at zero separation, while the entangled field one achieves its maximum finite value. It is also interesting to note that for the entangled field case, the Fisher information for various unbalanceness values simply converges to the same finite value when the source separation decreases to zero. {\em The second major message} of result (\ref{Ftot}) lies in the underlying competing mechanism between coherence and unbalanceness in affecting the Fisher information. As mentioned, neither partial coherence nor unbalanceness alone is able to achieve super-resolution \cite{Rehacek2017PRA, Larson2018O}. Surprisingly, as shown above, the combination of the two works! This is due to the fact that coherence and unbalanceness have counter effects against each other on the Fisher information. To understand this point better, we perform a detailed analysis of the effects of both properties to explore their analogous behavior. We first analyze the coherence effect by fixing the two-source balanced case, i.e., setting the unbalanceness parameter to $r=1$. Then the Fisher information simply reduces to \begin{align} \label{coherence-effect} F_{tot}(s,r=1)&=\frac{1}{4\sigma^{2}}-\frac{1}{16}\frac{\sin^{2}2\alpha\cos^{2}\varphi s^{2}}{e^{s^{2}/4\sigma^{2}}-\sin^{2}2\alpha\cos^{2}\varphi}, \end{align} which depends on the coherence rotation angle $\alpha$ and the two-source separation $s$. Fig.~\ref{equivalent} (a) illustrates its behavior for different values of the coherence parameter $\alpha$. Next we analyze the unbalanceness effect by fixing the coherence rotation angle $\alpha=\pi/4$. For comparability, we define the unbalanceness parameter as $r=b/a=\tan\eta$, where $a=\cos \eta$ and $b=\sin\eta$ satisfy the normalization condition $a^2+b^2=1$. Then the Fisher information reduces to \begin{align} \label{unbalanceness-effect} F_{tot}(s,\alpha=\frac{\pi}{4})&=\frac{1}{4\sigma^{2}}-\frac{1}{16}\frac{\sin^{2}2\eta\cos^{2}\varphi s^{2}}{e^{s^{2}/4\sigma^{2}}-\sin^{2}2\eta\cos^{2}\varphi}, \end{align} which depends on the unbalanceness angle $\eta$ and the two-source separation $s$. Fig.~\ref{equivalent} (b) illustrates the behavior for different unbalanceness values of $r$, or equivalently $\eta$. By comparing the two expressions (\ref{coherence-effect}) and (\ref{unbalanceness-effect}), one immediately notes that they are identical except for the change of the parameter $\alpha$ into $\eta$. This shows that unbalanceness and coherence affect the Fisher information through the same mechanism. Therefore, opposite variations of the two parameters are able to cancel each other's negative effects on the Fisher information, thus permitting super-resolution. Fig.~\ref{equivalent} (a), (b) illustrate specifically the equivalence of the two Fisher information behaviors.
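For concreteness, the reduction of the general expression (\ref{Ftot}) to the two equivalent forms (\ref{coherence-effect}) and (\ref{unbalanceness-effect}) can be checked numerically with a few lines of Python; the parameter values below are arbitrary illustrations.
\begin{verbatim}
import numpy as np

sigma, phi = 1.0, 0.0
s = np.linspace(1e-3, 6.0, 200)

def F_full(s, r, alpha):                 # general expression, Eq. (Ftot)
    e = np.exp(s**2 / (8*sigma**2))
    num = (r*np.sin(2*alpha)*np.cos(phi))**2 * s**2
    d1 = (np.cos(alpha)**2 + r**2*np.sin(alpha)**2)*e - r*np.sin(2*alpha)*np.cos(phi)
    d2 = (r**2*np.cos(alpha)**2 + np.sin(alpha)**2)*e + r*np.sin(2*alpha)*np.cos(phi)
    return 1/(4*sigma**2) - num / (16*d1*d2)

def F_reduced(s, theta):                 # Eqs. (coherence-effect)/(unbalanceness-effect)
    c = np.sin(2*theta)**2 * np.cos(phi)**2
    return 1/(4*sigma**2) - c*s**2 / (16*(np.exp(s**2/(4*sigma**2)) - c))

eta = np.pi / 6                          # unbalanceness angle, r = tan(eta)
print(np.allclose(F_full(s, 1.0, eta), F_reduced(s, eta)))              # coherence form
print(np.allclose(F_full(s, np.tan(eta), np.pi/4), F_reduced(s, eta)))  # unbalanceness form
\end{verbatim}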
When the parameters are chosen appropriately, the two effects behave exactly the same, as shown by the same-color lines in the two plots. In addition, since the angle $\alpha$ is controllable by the analyzer, one can always achieve a finite FI for an arbitrary unknown parameter $r$, as shown in Fig.~\ref{equivalent}. This justifies that in the calculation of the Fisher information (\ref{Ftot}) it is not necessary to estimate the unknown parameter $r$. Also, we observe that in the limit $s \rightarrow 0$, the Fisher information $F_{tot}(s,r)$ will never vanish for any unbalanced (i.e., $r\ne 1$) sources; see the detailed proof in supplemental material section 4. From the equivalence of the coherence and unbalanceness effects, it can therefore be concluded that by adjusting the coherence rotation angle $\alpha$ one can always avoid the balanced situation and thus avoid the vanishing Fisher information at zero separation. \begin{figure}[t!] \includegraphics[width=6cm]{3.png} \includegraphics[width=6cm]{4.png} \caption{Total FI versus the displacement $s$ for $\varphi=0$ when $\sigma=1$. (a) Effect of the rotated basis angle $\alpha$ on the FI in the balanced case. The minimum of the FI corresponds to $s_{\rm least}$ in the expression (\ref{minimum}). (b) Effect of the unbalanceness intensities in the case of $\alpha=\frac{\pi}{4}$.} \label{equivalent} \end{figure} {\em The third major message} of the Fisher information (\ref{Ftot}) is its counter-intuitive decreasing behavior within the small-separation regime; see the illustrations in Fig.~\ref{FI} (a) and Fig.~\ref{equivalent}. Normally one would expect that as the separation $s$ of the two sources increases the Fisher information would also increase, because it is natural to assume that a larger separation means less relative error in measurements. However, as shown in Fig.~\ref{FI} (a) and Fig.~\ref{equivalent}, the Fisher information experiences a decrease and then an increase as the separation $s$ increases from zero. This behavior is due to the competing nature of coherence ($\alpha$) and unbalanceness ($r$) in affecting the Fisher information for any fixed relative phase $\varphi$. This interesting behavior suggests the existence of a least resolvable separation $s_{\rm least}$ that leads to a minimum Fisher information $F_{tot}^{\rm min}$. All practical situations should avoid analyzing distances in the vicinity of this critical separation $s_{\rm least}$. To determine this separation quantitatively, we analyze the derivative of the Fisher information (\ref{Ftot}). The vanishing derivative of $F_{tot}$ leads to the solutions of the following equation \begin{align} \label{ali} \Lambda(s)e^{s^{2}/4\sigma^{2}}+\Pi(s)e^{s^{2}/8\sigma^{2}}+\Omega=0, \end{align} where $\Lambda(s)=(a^{2}\cos^{2}\alpha+b^{2}\sin^{2}\alpha)(b^{2}\cos^{2}\alpha+a^{2}\sin^{2}\alpha)(1-\frac{2s^{2}}{8\sigma^{2}})$, $\Pi(s)=[2 (a^{2}\cos^{2}\alpha+b^{2}\sin^{2}\alpha)-1](1-\frac{s^{2}}{8\sigma^{2}})ab\sin2\alpha\cos\varphi$ and $\Omega=-(ab\sin2\alpha\cos\varphi)^{2}$. There are two trivial solutions, $s=0$ and $s\rightarrow\infty$, as shown in Fig.~\ref{FI}(a) and Fig.~\ref{equivalent}. The nontrivial solution can in general always be obtained numerically.
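In practice, the nontrivial stationary point can be located by a one-dimensional numerical minimization of $F_{tot}(s)$; the following minimal Python sketch (balanced case, illustrative parameter values) does so with SciPy.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

sigma, phi, alpha = 1.0, 0.0, np.pi / 6    # balanced case r = 1, illustrative values

def F_balanced(s):                         # Eq. (coherence-effect)
    c = np.sin(2*alpha)**2 * np.cos(phi)**2
    return 1/(4*sigma**2) - c*s**2 / (16*(np.exp(s**2/(4*sigma**2)) - c))

res = minimize_scalar(F_balanced, bounds=(1e-3, 6*sigma), method="bounded")
print(res.x)   # numerical estimate of the least resolvable separation s_least
\end{verbatim}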
For the commonly studied balanced case with $r=1$, the nontrivial solution of (\ref{ali}) can be obtained analytically as (see the detailed analysis in supplemental material section 5) \begin{align} \label{minimum} s_{\rm least}=\sigma\sqrt{4 + 4\mathcal{W}[-\frac{\sin^{2}2\alpha\cos^{2}\varphi}{e}]} \end{align} where $\mathcal{W}[\cdot]$ is the Lambert $\mathcal{W}$-function, which on its principal branch is an increasing function with minimum value $\mathcal{W}[\frac{-1}{e}]=-1$. When $\sin^{2}2\alpha\cos^{2}\varphi=1$, $s_{\rm least}=0$, which is exactly the case analyzed in \cite{Hradil2019O} for $\varphi=0,\pi$ and $\alpha=\pi /4$. Due to the equivalence of the coherence effect and the unbalanceness effect analyzed earlier, the least resolvable distance for different values of the parameter $r$ can be obtained in exactly the same way as (\ref{minimum}); see also the illustrations of $s_{\rm least}$ for different curves in Fig.~\ref{equivalent} (a) and (b). Since the detection basis (in terms of $\alpha$) can be controlled by the observer, all unbalanced cases can be treated equivalently as balanced cases but with a corresponding coherence angle $\alpha$. The least resolvable distance analysis provides important guidance for avoiding the resolution of two-source separations in the vicinity of $s_{\rm least}$ in various practical situations. \section{Conclusion} To summarize, we have investigated sub-diffraction-limit resolution of two point sources under two practical situations: arbitrary two-source unbalanceness and partial coherence. By including an entangled partner of the spatial property of the two sources to account for the partial coherence, it is found that super-resolution can be achieved with high measurement estimation credibility (quantified by the maximum Fisher information) even when the two-source separation reduces to zero. It is revealed that such an achievement is due to the fact that the effect on the Fisher information from partial coherence is equivalent to that of the two-source unbalanceness. Appropriate control of the rotated basis (i.e., adjustment of the partial coherence) by the analyzer is able to counteract arbitrary unbalanceness. Such a capability indicates that the realization of super-resolution is independent of whether the unbalanceness and partial coherence are known or not. This justifies the exclusion of the unknown parameters, unbalanceness and partial coherence, in analyzing the Fisher information. We have also carried out a detailed analysis of the counter-intuitive decreasing behavior of the Fisher information as the two-source separation increases. This allows the discovery of a characteristic equation to determine the ``least resolvable'' distance. Analytical solutions in terms of the Lambert W function are also obtained. Our results provide important guidance for practical optical designs and engineering in the realization of optimum fine resolution. {\bf Funding:} U.S. Army under contract No. W15QKN-18-D-0040. {\bf Disclosures:} The authors declare no conflicts of interest. {\bf Acknowledgments:} We acknowledge partial financial support by Stevens Institute of Technology.
\section{Introduction} Humans have a native capability to manipulate objects without much effort. We use different sensing modalities, e.g., visual, auditory and tactile sensing, to perceive the object properties and manipulate them in space. Among these sensing modalities, tactile sensing is not affected by changes in lighting conditions or by hand occlusions, as vision is, nor is it influenced by the noise of the ambient environment. It can provide us with rich information about the object in hand, e.g., the texture, temperature, shape and pose of the object. To equip the robot with similar tactile sensing capabilities, various tactile sensors have been proposed for robots in the past decades to imitate the human skin~\cite{TactileSensingRobotHandsSurvey,Directionstowardeffective,luo2017robotic}. Traditional approaches aimed at providing the robot with force information at a contact point~\cite{tactileSensing}. In order to provide the robot with more information on the contact, camera-based optical tactile sensors have been proposed and the GelSight sensor is one of them. The GelSight sensor captures high resolution geometric information of the object it interacts with, thus having the capability to aid manipulation tasks~\cite{gomes2021generation}. It uses a camera to capture the deformation of a soft elastomer, using illumination sources from different directions~\cite{gomes2021generation}. It also has a few variants of different morphologies and camera/light configurations such as GelTip~\cite{gomes2020geltip} and GelSlim~\cite{donlon2018gelslim}. \begin{figure}[t] \centering \centerline{\includegraphics[scale=0.31]{Figures/key_results.png}} \caption{The adapted tactile images (middle) are generated from the simulation tactile images (left) using our proposed texture generation network. Compared to the corresponding real images (right), it can be observed that the contact regions in the adapted tactile images have similar textures to those in the real images, while the non-contact areas remain free from textures.} \label{fig:cover} \label{overview} \end{figure} Due to the use of a soft elastomer on the top of the sensor to interact with objects, similar to many other tactile sensors, camera-based tactile sensors are fragile and suffer from wear and tear. To mitigate the damage to the sensor and save time for training on a real robot, the robot can be trained in a simulated environment first with simulated tactile sensors, before deploying the trained model in a real environment. To this end, simulation models of the GelSight sensor have been proposed~\cite{gomes2019gelsight,gomes2021generation}. However, the gap between the simulated tactile environment and the real world is still large, which may perturb the model and greatly impact its performance. As discussed in~\cite{gomes2021generation}, the imperfections of reality such as scratches and other object deformations are what help the trained model distinguish between objects via touch sensing. In contrast, in the simulation, those are not present, thus creating a gap between the two domains. One of the methods to diminish such gaps is to make the simulation as real as possible. To do so, noise can be introduced, either in the form of textures or through other methods such as adding Gaussian noise or domain randomisation~\cite{tobin2017domain,domainRandomization2}.
However, the main issue is that the probability distribution of the imperfections in the reality domain has a long tail: although the probability of encountering a novel imperfection is small, it will eventually happen. In the context of robotic manipulation tasks, this can become a potentially dangerous situation or damage the sensors~\cite{gomes2021generation}. To address this challenge, we propose a novel texture generation network to reduce the domain gap from simulated tactile images to real tactile images for camera-based optical tactile sensing. In the proposed network, different regions of the simulated tactile image are adapted differently: the areas in contact with the object are applied with the generated textures from real tactile images, whereas regions without a contact maintain their appearance as when the sensor is not in contact with any object. We have conducted extensive experiments to evaluate the proposed method using a dataset of real and simulated tactile images from a GelSight tactile sensor. The experiments show that the proposed method can generate realistic artefacts on the deformed regions of the sensor, while avoiding leaking the textures into regions without contact, as shown in Fig.~\ref{fig:cover}. In comparing the resulting tactile images with real tactile images, it achieved a low Mean Absolute Error (MAE) of 10.53\% on average and a similarity of 0.751 in the Structural Similarity Index (SSIM) metric. Beyond that, the experiments show that when using the adapted images generated by our proposed network for Sim2Real transfer of a learnt model for a classification task, the drop in accuracy caused by the Sim2Real gap is reduced from $38.43\%$ to merely $0.81\%$. As such, this work has the potential to accelerate Sim2Real learning for robotic tasks with tactile sensing. \section{Related Works} \subsection{Optical tactile sensors} Optical tactile sensors that use a camera underneath a soft elastomer layer are one highly practical method for providing robots with the sense of touch, and thus a variety of sensor designs have been proposed. Currently, these can be grouped into two main families: marker-based, represented by TacTip sensors~\cite{TacTipFamily}, and image-based, represented by GelSight sensors~\cite{dong2017improved}. In this paper, we focus on GelSight sensors as they are better suited to capture the fine textures introduced by manufacturing defects or wear and tear. The \textit{GelSight} working principle was proposed in~\cite{RetrographicSensing} as a method for reconstructing the texture and shape of contacted objects~\cite{cao2020spatio,luo2018vitac,lee2019touching}. For that purpose, light sources are placed at opposite angles next to a transparent elastomer that is coated with an opaque reflective paint, resulting in three different shaded images of the in-contact object texture. A direct mapping between the observed image pixel intensities and the elastomer surface orientation can then be found to create a lookup table, enabling the surface to be reconstructed using photometric stereo. Since its initial proposal, new designs have been proposed that aim at reducing the size~\cite{dong2017improved, donlon2018gelslim}, improving its sensing capability~\cite{donlon2018gelslim, dong2017improved} or providing a curved finger-shaped surface for improved robotic dexterity~\cite{BlocksWorldOfTouch, softRoundGelSight, cao2021touchroller}.
However, optical tactile sensors are still brittle and extensive experimentation with them often results in their sensing membrane being damaged. \subsection{Simulation of tactile sensors} It is desirable to develop and test robot agents initially within a simulator before their deployment in the real environment, as running experiments with real hardware is time-consuming and damage-prone. To this end, a variety of methods have been proposed to simulate different tactile sensors. We proposed to simulate GelSight sensors in~\cite{gomes2019gelsight, gomes2021generation}, by considering close-up depth-maps extracted from simulators and using the Phong illumination model for rendering the RGB tactile images. However, despite the efforts in making the simulations as realistic as possible, some artefacts, such as the textures that are not represented in the simulated object model and the scratches resulting from wear and tear, contribute largely to the gap between the simulated and real images. They often hinder transferring the models trained on simulated data to the real robots (i.e., Sim2Real learning). Therefore, the gap itself must be addressed. \subsection{Reducing the Sim2Real gap for tactile sensing} A common approach to address the gap between training and test data is to augment the training data such that the test data becomes one particular subset of the whole augmented training data, i.e., Domain Randomisation. For Real2Real computer vision tasks, this augmentation is commonly performed at the image level, by applying random colour or geometric transformations to the images. When the training images are collected in simulation for Sim2Real learning, one other form of augmentation is to directly augment the simulation data, by randomising object colours, scene illumination or the environment physics~\cite{tobin2017domain}. As colours and illumination are constant for tactile images collected from the same tactile sensor, in~\cite{gomes2021generation} we experimented with augmenting the synthetic dataset by perturbing the in-contact object shapes using simple texture maps that resemble the artefacts mostly contributing to the gap observed in our dataset: the textures introduced in the 3D printing of our real reference object dataset. The method proved to be a more effective augmentation scheme than image-based augmentations. However, target-domain-agnostic randomisation is costly, as it often requires running the same simulation a great number of times to capture variations along all dimensions. Thus, in this paper we address the Sim2Real gap from a Domain Adaptation perspective and propose a network that adapts the simulated images into photo-realistic counterparts. Domain adaptation has successfully been applied to vision-based tasks~\cite{cyCADA, photorealismEnhancement}; however, it has not yet been studied in the context of tactile sensing. \section{Methodology} \begin{figure}[t] \centerline{\includegraphics[scale=0.45]{Figures/framework.png}} \caption{The proposed texture generation network. Starting from the \textbf{Depth map} captured in the simulator, the \textbf{Simulated} tactile image and \textbf{Mask} are generated using \cite{gomes2021generation} and simple truncation of the depth map, respectively. The simulated image is then mapped to the \textbf{Adapted} target $R$ through $G_{S \rightarrow R}$. A discriminator $D_R$ then classifies the generated tactile image $G_{S \rightarrow R}($\textbf{Simulated}$)$, thus giving the adversarial loss $L_{GAN}$.
The image is then cycled back to the simulated domain through $G_{R \rightarrow S}$, which then gives the cycle consistency loss ($L_{cycle}$). The mask of the in-contact area is used to cover the contact zones of the \textbf{Simulated}, \textbf{Adapted} and \textbf{Real images}, thus allowing us to constrain the background of the tactile image, while allowing the model to alter the contact zone with textures, resulting in $L_{mask}$.} \label{framework} \end{figure} \subsection{Problem description} In this paper, we address, for the first time, the domain gap between simulated tactile images and real tactile images, which impedes the transfer of a model trained in simulation to reality. The main factor that contributes to the domain gap is the textural artefacts~\cite{gomes2021generation}. Those artefacts are not limited to the production phase, such as manufacturing defects and surface textures introduced by the finishing, but can also be created when an object is repeatedly interacted with and thus continuously deformed, i.e., wear and tear. Failure to consider the artefacts when training the robot agents can lead to improper manipulation of a given object due to misclassification and ultimately result in possible damage to the robot. To this end, we aim at addressing the gap between simulated and real tactile images to diminish the risk of such situations, and propose to learn the artefacts on object surfaces so as to mitigate the drop in performance in Sim2Real learning. This is challenging, as texture artefacts should be applied only to the contact regions of the tactile sensors with the rest unaffected, since texture leaking into the untouched areas may lead to false positive detection of contacts. In order to do so, we propose a novel texture generation network for applying textures to the contact surfaces in the simulation tactile images. \subsection{The texture generation network} As shown in Fig.~\ref{framework}, our proposed texture generation network has two generators: one generating a tactile image with textures $\hat{X}_A$ (i.e., an adapted tactile image) from a simulation tactile image $X_S$, i.e., $G_{S\rightarrow R}$; the other generating tactile images in the simulation domain from the adapted tactile image, i.e., $G_{R\rightarrow S}$. Two discriminators are responsible for distinguishing the real image from the generated one created by the generator in each domain: the discriminator $D_R$ aims at distinguishing $X_R$ from $G_{S\rightarrow R}(X_S)$, while the discriminator $D_S$ aims at distinguishing between $X_S$ and $G_{R\rightarrow S}(X_R)$. \begin{figure*}[tt] \centerline{\includegraphics[scale=0.92]{Figures/generatorComparisons.pdf}} \caption{Top row: Simulated samples collected using the GelSight simulation approach \cite{gomes2021generation}; Bottom row: The corresponding real samples captured using a real GelSight sensor \cite{dong2017improved}. In between, second to fourth rows: The two baselines that we experiment with and our final proposed network. As seen in the listed images, the original simulation tactile images lack the textures produced by the 3D printing process that can be observed in the real samples. On the opposite extreme, Pix2Pix~\cite{pix2pix} renders over-textured tactile images, including outside the in-contact areas. CycleGAN~\cite{cycleGAN} produces much cleaner textures when compared to Pix2Pix; however, some texture leaking can still be observed, e.g., in the first and last samples (columns).
Finally, our proposed network generates the best results, with the textures generated only within the in-contact areas.} \label{figGenerativeModels} \end{figure*} The generator follows a U-Net architecture~\cite{unetBasic}, which consists of an encoder (downsample) part and a decoder (upsample) part. Each layer in the encoder consists of a convolutional layer, followed by an instance normalisation layer and a Leaky ReLU activation. Each convolution has a stride of two, and with each layer the number of filters is doubled until the image is reduced to a height and width of one with $512$ filters. The decoder is constructed of layers that consist of a transposed convolution of stride two, followed by an instance normalisation, a dropout layer, and a ReLU activation. The dropout layer is applied to the upsample section of the generator and functions as a network regulariser. As a result, the network is compelled to learn meaningful representations from the latent space. In addition, skip connections are tied between the mirrored layers in the encoder and decoder parts of the model, which allows the model to propagate context information to higher resolution layers~\cite{unetBasic}. This is done by concatenating the mirrored layer with the output of the downsample layer. As a result, each layer in the decoder has twice the number of filters of the corresponding mirrored layer in the encoder. The first downsample block does not use the normalisation, while in the decoder part only the first three blocks use a dropout layer. The discriminator follows a Patch-GAN architecture~\cite{pix2pix}, where each layer consists of a convolution with a stride of two, followed by an instance normalisation layer and a Leaky ReLU activation. Rather than outputting a single value, the discriminator outputs a patch of $N{\times}N$ dimensions; in our case, $N=33$. This allows the computation of the $L1$ loss between the patches output by the discriminator. \subsection{Loss Functions} \textbf{Adversarial Loss}. The generator $G_{S \rightarrow R}$ represents a mapping function that takes an element from the distribution $X_S$ and maps it to the distribution $X_R$, while $D_R(x_s)$ outputs the probability that an instance comes from $X_R$ rather than from $X_S$. The discriminator tries to maximise the probability of correctly assigning a label to $X_R$ and $G_{S \rightarrow R}(X_S)$, while the generator aims to minimise $\log(1-D_R(G_{S \rightarrow R}(X_S)))$. In other words, the loss can be described as a minimax game where the generator $G_{S \rightarrow R}$ aims at translating a tactile image from the synthetic domain to the reality domain whereas the discriminator $D_R$ aims at distinguishing between a generated tactile image and a real one. This corresponds to: \begin{equation} \begin{aligned} \mathcal{L}_{GAN}(G_{S \rightarrow R},D_R,X_R,X_S) = & E_{x_r\sim X_R}[\log D_R(x_r)] + & \\ & E_{x_s\sim X_S}[\log(1- & \\ & D_R(G_{S\rightarrow R}(x_s)))]\label{adversarialLoss1} \end{aligned} \end{equation} \noindent In addition, the generator $G_{R \rightarrow S}$ learns to map the images from $X_R$ to $X_S$ while the discriminator $D_S$ distinguishes between them.
This results in: \begin{equation} \begin{aligned} \mathcal{L}_{GAN}(G_{R \rightarrow S},D_S,X_S,X_R) = & E_{x_s\sim X_S}[\log D_S(x_s)] + & \\ & E_{x_r\sim X_R}[\log(1-& \\ & D_S(G_{R\rightarrow S}(x_r)))]\label{adversarialLoss2} \end{aligned} \end{equation} Together, \eqref{adversarialLoss1} and \eqref{adversarialLoss2} give the total adversarial loss: \begin{equation} \begin{aligned} \mathcal{L}_{GAN} = & \mathcal{L}_{GAN}(G_{S \rightarrow R},D_R,X_R,X_S) + \\ & \mathcal{L}_{GAN}(G_{R \rightarrow S},D_S,X_S,X_R)\label{adversarialLossCombined} \end{aligned} \end{equation} \textbf{The Cycle Consistency Loss}. While the generator $G_{S \rightarrow R}$ may learn to produce convincing results that seem to be sampled from the real distribution, it may not preserve the information in $X_S$, such as the class and the location of the object in the image. In order to enforce the stability and consistency of the model, the cycle consistency loss ${L}_{cycle}$~\cite{cycleGAN} has been implemented. ${L}_{cycle}$ calculates the difference between the input simulation tactile image $x_{s}$ and the image translated to a real image through the generator $G_{S \rightarrow R}$ and then back to the synthetic domain through the generator $G_{R \rightarrow S}$. This allows the model to learn the mappings between the domains without the need for paired data, as in CycleGAN~\cite{cycleGAN}, DualGAN~\cite{dualGAN}, and DiscoGAN~\cite{discoGAN}. Mathematically, given an image $x_s$ and the cycled image $G_{R \rightarrow S}(G_{S \rightarrow R}(x_s))$, we want $G_{R \rightarrow S}(G_{S \rightarrow R}(x_s))\approx x_s$. Similarly, for an image $x_r$, we want $G_{S \rightarrow R}(G_{R \rightarrow S}(x_r))\approx x_r$. Both of the losses give the total cycle consistency loss: \begin{equation} \begin{aligned} \mathcal{L}_{cycle}&(X_S,X_R,G_{S \rightarrow R},G_{R \rightarrow S})=\\ & \mathbb{E}_{x_s \sim X_S}[||x_s - G_{R \rightarrow S}(G_{S \rightarrow R}(x_s))||_1] +\\ &\mathbb{E}_{x_r \sim X_R}[||x_r - G_{S \rightarrow R}(G_{R \rightarrow S}(x_r))||_1] \label{cycleLoss} \end{aligned} \end{equation} \textbf{Identity Loss.} In order to preserve the colours when the tactile image gets translated from one domain to the other, an identity loss is introduced such that, when a simulation tactile image from $X_S$ is translated through the generator $G_{R \rightarrow S}$, the output should keep the colour appearance of the light configuration in simulation. This results in: \begin{equation} \begin{aligned} \mathcal{L}_{identity}= & \mathbb{E}_{x_s \sim X_S}[||x_s - G_{R \rightarrow S}(x_s)||_1] + \\ & \mathbb{E}_{x_r \sim X_R}[||x_r - G_{S \rightarrow R}(x_r)||_1]\label{idLoss} \end{aligned} \end{equation} \textbf{Mask Loss.} As shown in Fig.~\ref{framework}, using the depth maps in the simulation, we can distinguish between the foreground and the background by setting any region that is less than the height of the elastomer to one and the rest to zero, thus creating the binary mask $m_s$ of the object in $x_s$. In order to keep the areas that are not in contact unaffected by the textures, we constrain the image background on both the simulated and the real backgrounds, thus not only giving the model stability outside the contact regions, but also accounting for the class shift that the model is prone to~\cite{cyCADA}.
Furthermore, we propose to use a hyperparameter $\alpha$ to balance the background targets (simulated and real), which enables us to control whether the generated background copies more features from the real or the simulated background. This results in the formulation of our mask loss: \begin{equation} \begin{aligned} \mathcal{L}_{mask} & (M_S,X_S,X_R,G_{S \rightarrow R})= \\ & E_{x_s\sim X_S} [\alpha ||(G_{S \rightarrow R}(x_s)-x_s)(1-m_s)||_1+\\ &(1-\alpha)||(G_{S \rightarrow R}(x_s)-x_r)(1-m_s)||_1] \label{maskLoss} \end{aligned} \end{equation} where a higher $\alpha$ means that the tactile image is more constrained on the simulated dataset, whereas a lower $\alpha$ implies that the image is more constrained on the real dataset. \section{The Dataset and Experiment setup} To carry out experiments and evaluation we make use of the dataset captured in~\cite{gomes2021generation}. This dataset consists of paired sets of simulated tactile images $X_S$, real tactile images $X_R$ and raw close-up depth maps that are collected by tapping a GelSight sensor~\cite{dong2017improved} against 21 reference objects of different shapes. These objects were modelled in CAD and printed using a Formlabs~Form~2~3D~printer. To ensure a controlled position of the sensor relative to the object, a Fused Deposition Modeling (FDM) 3D printer \textit{A30} from Geeetech was used as a Cartesian actuator, to move the sensor and tap the reference objects in a $3 \times 3$ grid at 11 depths. This results in each set containing 2,079 ($ 21 \times 99 $) tactile samples. Identical setups were created both in the real world and in simulation (in Gazebo), and the Robot Operating System (ROS) was used to orchestrate the different software components and the overall data collection. While in the real setup the tactile images $X_R$ were directly captured, in the simulated counterpart the close-up depth maps were firstly captured online, and then the tactile images $X_S$ were generated using the simulation method~\cite{gomes2021generation} offline. For more details of the dataset, we refer the reader to~\cite{gomes2021generation} and the project website\footnote{https://danfergo.github.io/gelsight-simulation/}. Despite the high resolution of the 3D printer, textures were introduced during the printing process that significantly affect the \textit{Sim2Real} transfer. For instance, in Fig.~\ref{figGenerativeModels} it can be observed that the real samples present different textures compared to the ones in the simulated counterparts. Furthermore, it can be seen that the differences between the real and simulated samples are in the high-frequency texture, while the overall shapes of the model are the same. Even though this texture could be further smoothed using a variety of methods, we keep these textures and consider them as unexpected artefacts that could result from the natural and unpredictable wear of objects commonly seen in real life. In order to conduct our experiments we first preprocessed the data. For the training dataset, we first normalised the tactile images at the pixel level into the $[-1, 1]$ interval. We then employed a data augmentation method, in which we increased the resolution of the tactile images and applied a random crop over the tactile images, followed by a slight rotation and a horizontal flip applied randomly. We implemented all of the models using the Keras API available through TensorFlow.
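As an illustration of how the mask loss in Eq.~\eqref{maskLoss} can be expressed in this framework, a minimal TensorFlow sketch is given below. The function and variable names, tensor shapes and the default value of $\alpha$ are illustrative assumptions rather than our exact training code.
\begin{verbatim}
import tensorflow as tf

def mask_loss(adapted, x_sim, x_real, mask, alpha=0.5):
    """Sketch of Eq. (maskLoss): penalise the adapted background (1 - mask)
    against a blend of the simulated and real backgrounds."""
    bg = 1.0 - mask                               # 1 outside the contact area
    sim_term = tf.reduce_mean(tf.abs((adapted - x_sim) * bg))
    real_term = tf.reduce_mean(tf.abs((adapted - x_real) * bg))
    return alpha * sim_term + (1.0 - alpha) * real_term

# usage sketch:  loss = mask_loss(G_s2r(x_sim), x_sim, x_real, mask)
\end{verbatim}
Here the $L1$ norm is evaluated as a per-pixel mean, one common normalisation choice.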
\begin{table}[t] \caption{Classification Task Summary} \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|c|c} \hline \textbf{\textit{Model}}& \textbf{\textit{Sim}}& \textbf{\textit{Real}}\\ \hline Direct & \textbf{91.90}\% $\pm 1.80$ & $53.47\%\pm 6.64$ \\ Pix2Pix & $91.07\%\pm 0.95$ & $60.53\%\pm 2.81$\\ CycleGAN & $90.07\%\pm 1.04$ & $85.57\%\pm 3.36$ \\ CycleGAN w Mask Sim & $91.07\%\pm 1.15$ & \textbf{90.26}\%$\pm 2.70$\\ CycleGAN w Mask Real & $90.41\%\pm 0.89$ & $86.25\%\pm 5.15$ \\ CycleGAN w Mask Combined & $90.55\%\pm 0.84$ & $89.17\%\pm 2.45$ \\ \hline \end{tabular} \label{tab2} \end{center} \end{table} \section{Experiments and discussion} We evaluate the proposed texture generation network with three sets of experiments. Firstly, we compare the adapted tactile images generated by the different methods against the corresponding real samples, with both quantitative and qualitative analyses; and then, we demonstrate the advantages of considering the adapted images, instead of the original simulated images, for Sim2Real transfer learning in a classification task. As shown in Fig.~\ref{figGenerativeModels}, the tactile images generated using our proposed network appear substantially more similar to the real images than the simulated counterparts, and from Table~\ref{tab2} it can be seen that the initial drop in performance caused by the Sim2Real gap of $38.43\%$ is reduced to $0.81\%$ when considering the images adapted by our network. \begin{table}[b] \caption{Real and Adapted comparison} \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|c|c} \hline \textbf{\textit{Model}}& \textbf{\textit{SSIM $\uparrow$}}& \textbf{\textit{MAE $\downarrow$}}\\ \hline Pix2Pix & $0.332$ & $30.80\%$ \\ CycleGAN & $0.631$ & $23.26\%$ \\ CycleGAN w Mask Sim & $0.734$ & $10.70\%$ \\ CycleGAN w Mask Real & \textbf{0.751} & $10.80\%$ \\ CycleGAN w Mask Combined& $0.719$ & \textbf{10.50\%} \\ \hline \end{tabular} \label{tab1} \end{center} \end{table} \subsection{Constraining the augmented texture areas} During our early experimentation phase, when analysing tactile images generated by CycleGAN models~\cite{cycleGAN}, we observed that they produce realistic results with only slight discrepancies from the real tactile images. However, one tendency of the CycleGAN is to mirror the background and light of the tactile images. Furthermore, the model is not constrained to maintain the object structure while being mapped~\cite{cyCADA}. While this result is not entirely detrimental for cases such as classification, where one can associate such behaviour with a domain randomisation technique, it highlights the instability of the model. Such instability can be observed in column one of the CycleGAN row of Figure~\ref{figGenerativeModels}, where the background is flipped and anomalies are injected into the picture, thus creating deformations. To mitigate the issue, we constrain the model on the background of both the simulated and the real pair of images by using our proposed mask loss in Eq.~\eqref{maskLoss}. We further weight the two terms differently according to the background of provenance; for example, a weight of $0.4$ on one term implies that the other term is multiplied by a weight of $0.6$, giving a total error based on both backgrounds. We test both of the extremes, constraining the model on only the simulated background or only the real background, as well as on a mixture of the two. With the mask loss implemented, we observed a greater stability of the background, where the flip of colours along with the background does not occur.
Furthermore, the model applies different textures on the contact zones, adding scratches at different angles and in different shapes, or not adding a particular scratch at all. This has the potential to minimise the situation where the model runs into an unexpected type of scratch. \begin{figure} \centerline{\includegraphics[scale=0.15]{Figures/mae.png}} \caption{Difference maps of the generated adapted tactile images, using the different studied methods, against the real reference, with white pixels representing zero difference. As seen in the figure, the textures of the real images are directly visible in \textbf{Sim}, demonstrating the smoothness of the original simulated images. \textbf{Pix2Pix}~\cite{pix2pix} produces randomised textures throughout the entire image, resulting in significant differences even in areas of no contact. \textbf{CycleGAN}~\cite{cycleGAN} produces better results than~\textbf{Pix2Pix}; however, some artefacts can be seen in areas of no contact, e.g., in the first column, and a distortion of the pose of the object is visible in the last column. Finally, \textbf{Ours} produces the overall smallest differences. } \label{diffs} \end{figure} \subsection{Comparison of different domain adaptation methods} In order to compare different domain adaptation methods quantitatively, we compute the average Structural Similarity (SSIM) and Mean Absolute Error (MAE) between the adapted images generated by the different methods and the corresponding real pairs. The obtained results are reported in Table~\ref{tab1}. We further compute the relative absolute difference maps between the samples generated by each method and their real counterparts, to improve the understanding of the numerical results, as shown in Fig.~\ref{diffs}. Our method of adding information from the background results in the greatest SSIM score ($0.751$) when being constrained on the real background, while managing to achieve the lowest MAE ($0.105$) when using a mixed background approach. The Pix2Pix network~\cite{pix2pix}, although it can create realistic samples, requires a greater amount of time to converge, and the model is free to shift the location of the objects, resulting in a lower SSIM value. Furthermore, the random light flipping that we observe affects the value negatively. \subsection{Sim2Real transfer for object classification} To evaluate the advantages of considering the adapted tactile images \textit{versus} the original simulations for Sim2Real learning, we consider a simple task of object classification using tactile images. To this end, we start by mapping all the simulated images to the target domain, using the pre-trained CycleGAN on which we further add our structural constraint, and proceed by training a classification model on the mapped images. For this purpose, we use the ResNet50 architecture~\cite{resNet} with the weights pretrained on the ImageNet dataset. On top of the base model, two blocks composed of a dense layer, batch normalisation and an ELU activation were added. For each of the added layers, we use the He initialisation \cite{initialization:he} to avoid the problem of vanishing and exploding gradients present in deep architectures. In addition, we add an output layer composed of 21 neurons and a softmax activation. We repeat the procedure of training the classifier and testing the results 10 times. Each time we train the classifier for 30 epochs. We then test the models on the target domain by computing the accuracy of the model.
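A minimal Keras sketch of this classification model is given below. Only the elements stated above (the ResNet50 base, two dense blocks with batch normalisation and ELU activations, He initialisation and a 21-way softmax output) follow the text; the hidden-layer widths, input size and optimiser are illustrative assumptions.
\begin{verbatim}
from tensorflow.keras import layers, models, applications

base = applications.ResNet50(weights="imagenet", include_top=False,
                             pooling="avg", input_shape=(224, 224, 3))
x = base.output
for units in (256, 128):                     # hidden widths: illustrative assumption
    x = layers.Dense(units, kernel_initializer="he_normal")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
out = layers.Dense(21, activation="softmax")(x)   # one class per reference object
model = models.Model(base.input, out)
model.compile(optimizer="adam",                   # optimiser: illustrative assumption
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
\end{verbatim}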
The results are presented in Table~\ref{tab2}. The direct transfer between the two domains shows the greatest gap, with a drop of $38.43\%$, whereas our method has the smallest drop ($0.81\%$) when the mask loss relies mostly on the simulated background, as well as the greatest accuracy on the testing dataset ($90.26\%$). The results show that by providing the model with background information, the model is more stable and does not shift the classes, an issue encountered in previous works~\cite{yang2018unpaired,cycleGAN}. \section{Conclusion} In this paper, we proposed a novel texture generation network that is capable of bridging the gap between simulation and reality in the context of tactile images generated with a GelSight sensor. This allows the convenient training of other models in a simulated environment, thus reducing the cost and the damage that can occur if the model is transferred to the domain of reality directly. Besides the ability to bridge the gap, the model is capable of generating new textures on the same object, thus acting as a domain randomiser and increasing the robustness of a model that is trained in simulation. We discovered that anomalies are created between the contact areas and the background, and we stabilised the model using the proposed mask loss. In future work, we would like to apply our proposed method to more complex Sim2Real tasks, for example, robot grasping and manipulation with tactile sensing.
\section{Introduction} \label{sect1} In the past decades, magnetic fields in galaxy clusters have been observed and studied \citep[see reviews of][]{han17,ct02,gf04,fgs+08,fgg+12}. The magnetic fields are crucial for a comprehensive understanding of radio emission from the diffuse intracluster medium (ICM). The presence of diffuse radio halos and radio relics in galaxy clusters is direct evidence for magnetic fields in the ICM \citep[e.g.][]{gbf+09,vrbh10}. Under the minimum energy hypothesis or equipartition approach, magnetic fields permeating the ICM are roughly estimated from the radio emission intensity maps to have a strength of a few micro-Gauss \citep[e.g.][]{gf04}. Statistical study of Faraday rotation measures (RMs) of radio sources within or behind galaxy clusters is an alternative way to investigate magnetic fields in galaxy clusters \citep[e.g.][]{ktk91,ckb01,gdm+10,bfm+10,bvb+13,pjds13,bck16}. When a linearly polarized electromagnetic wave signal travels through a magnetized plasma, the plane of polarization is rotated by an angle $\Delta \psi$ proportional to the wavelength squared $\lambda^2$, i.e. \begin{equation} \Delta \psi = \psi-\psi_0= \rm{RM} \cdot \lambda^2, \end{equation} where $\psi$ and $\psi_0$ are the measured and intrinsic polarization angles, and RM is the rotation measure, which is an integrated quantity of the product of the thermal electron density $n_e$ and magnetic field strength ${ B}$ from the source to us, most effectively probing the fields along the line of sight. For a polarized radio source at redshift $z_{\rm s}$, the RM is expressed by \begin{equation} {\rm RM} = 812\int_{\rm source}^{\rm us} n_e { B} \cdot d{ l} =812 \int_{\rm z_s}^{\rm us}\frac{n_e(z)B_{||}(z)}{(1+z)^{2}}\frac{dl}{dz} dz. \label{rmz} \end{equation} The electron density $n_e$ is in cm$^{-3}$, the magnetic field ${ B}$ is a vector (with $B_{||}$ being its component along the line of sight) in units of $\mu$G, and $d{ l}$ is the vector path increment along the light path towards us in units of kpc. The comoving path increment per unit redshift $\frac{dl}{dz}$ is in kpc, and $(1+z)^2$ reflects the change of wavelength at redshift $z$ over the path transformed to the observer's frame. The observed rotation measure ${\rm RM}_{\rm obs}$ is a sum of the foreground Galactic RM (GRM) from the Milky Way, the rotation measure from the intergalactic medium ${\rm RM}_{\rm IGM}$, and that intrinsic to the source ${\rm RM}_{\rm in}$, i.e. \begin{equation} {\rm RM}_{\rm obs} = {\rm GRM} + {\rm RM}_{\rm IGM} + {\rm RM}_{\rm in}. \label{rmobs} \end{equation} When studying RMs of sources at a cosmological distance, one has to account for the RM contributions from all kinds of intervening media along the line of sight. For most extragalactic radio sources, the foreground Galactic RM is the dominant contribution. If the foreground GRM is not assessed properly, it is impossible to extract the small extragalactic contributions. There have been many efforts to investigate the foreground GRM \citep[e.g.][]{hmbb97, ojr+12, xh14, ojg+15}. RM values intrinsic to a radio source (${\rm RM}_{\rm in}$) at a redshift of $z_{\rm s}$ are reduced by a factor $(1+z_{\rm s})^{-2}$ due to the change of $\lambda$ when the values are transformed to the observer's frame. The typical source-intrinsic RMs of distant quasar-like sources are only several rad~m$^{-2}$ \citep{bsg+14}.
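To make the units in Eq.~(\ref{rmz}) concrete, the short Python sketch below evaluates the RM accumulated along a toy, uniform slab of magnetized plasma; the density, field strength and path length are arbitrary illustrative values and do not describe any particular cluster.
\begin{verbatim}
import numpy as np

# Toy evaluation of the RM integral RM = 812 * int( n_e * B_parallel dl ),
# with n_e in cm^-3, B_parallel in microGauss and dl in kpc.
n_e = 1.0e-3                 # cm^-3, illustrative ICM electron density
B_par = 1.0                  # microGauss, illustrative line-of-sight field
L = 500.0                    # kpc, illustrative path length through a cluster
l = np.linspace(0.0, L, 1000)
rm = 812.0 * np.trapz(n_e * B_par * np.ones_like(l), l)
print(rm)                    # rad m^-2 for a fully ordered field along the path
\end{verbatim}
A fully ordered field gives the maximal RM along such a path; a turbulent field with many reversals would yield a much smaller net value.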
The RMs from the intergalactic medium, ${\rm RM}_{\rm IGM}$, may have several contributors, such as rotation measures from the cosmic web, intervening galaxy halos and the intracluster medium along the line of sight. The rotation measure from the cosmic web might be traced by the Ly$\alpha$ forest, and there have been some simulations of its contribution \citep[e.g.][]{bbo99,ar10,ar11,ptu16}. It is so small ($\sim$1--2 rad~m$^{-2}$) that it can hardly be detected from presently available data \citep{xh14b,omv+19,obv+20}. The excess of rotation measure from galaxy halos or protogalactic environments has been studied through intervening absorbers such as Mg~II absorption lines \citep[e.g.][]{bml+08,foc+14,frg+17}. \citet{jc13} and \citet{mcs20} obtained an increase in the distribution deviation of around 8 rad~m$^{-2}$ for quasars with Mg~II absorption lines. Statistics of RMs of polarized radio sources located inside or behind galaxy clusters \citep[e.g.][]{ktk91,ckb01,gdm+10,bfm+10,bvb+13,pjds13,bck16} show an RM excess from the contributions of the intracluster medium with an amplitude from a few to a few tens of rad~m$^{-2}$ \citep{xh14b,gdm+10,ckb01}. It is now well established that magnetic fields are ubiquitous in the ICM \citep[e.g.][]{ct02}. The intracluster magnetic fields are dominated by turbulent fluctuations over a range of scales. The field strength decreases from the central regions to the outskirts. The spatial power spectrum is well represented by a Kolmogorov power spectrum \citep{bfm+10}. Turbulent magnetic fields with a coherence length of a few kpc are indicated by RM dispersion studies of polarized radio sources \citep[e.g.][]{ktk91,gdm+10} and are found in both relaxed clusters and merging clusters regardless of dynamical state \citep{ckb01,bck16,sd19}. Coherent rotation measures of radio relics reveal large-scale ($>$100 kpc) compressed magnetic fields \citep{ore+14,kbh+17}. The organized magnetic fields are responsible for systematic RM gradients over the lobes of radio galaxies \citep[e.g.][]{tp93}. The ordered net magnetic fields can be considered as the large-scale fluctuations at the outer scale of turbulent magnetic fields where the energy is injected \citep{vmg+10}. The magnetic fields close to the center of galaxy clusters are more disturbed and tangled, with a strength of a few micro-Gauss, while those near the outskirts are more representative of the large-scale fluctuation component, with a field strength an order of magnitude smaller \citep{rkcd08}. \begin{figure} \centering \includegraphics[angle=0,width=80mm,trim=50 100 0 50,clip]{DeltaRMsche.ps} \caption{A schematic diagram showing a pair of lobes from a FR~II radio galaxy with observed RMs on each side (RM1 and RM2, respectively). The RMs of the pair, with such a small angular separation ($\Delta$r) of an order of arcminutes, have almost the same Galactic contributions (in general coherent over several degrees at low Galactic latitudes to tens of degrees at high Galactic latitudes) and the same intergalactic contributions in front of the lobes.
Therefore, the RM difference ($\Delta$RM) between the two lobes is the best probe of the magnetic properties of the ICM.} \label{FRIIsch} \end{figure} The RM difference of a pair of lobes from an embedded FR~II radio galaxy \citep{fr74} is the best probe of the magnetic fields in the ICM and their redshift evolution, because both the foreground Galactic RM and the RM contributions from all intervening galactic and intergalactic media on the way to the cluster cancel out, as depicted in Figure~\ref{FRIIsch}. A real physical pair of lobes is the bulk of radio emission from a galaxy on opposite sides, formed when the central active galactic nucleus produces two oppositely directed collimated jets that drive relativistic electrons running in magnetic fields into the lobes to generate synchrotron emission \citep{br74}. The environs of the host galaxy must be rich in gas. The jets travel through the interstellar medium of the host galaxy, and stay supersonic to a great distance to push their way through the external medium, where a shock front is formed as shown by hot spots. The ends of the jets move outwards much more slowly than the material flowing along the jets. A backflow of relativistic plasma deflected at the ends of the jets forms the lobes. The gaseous environment they inhabit is very important to provide a working surface for the jets to terminate; therefore, the ICM provides an ideal environment for producing FR~II radio sources. The observed radio radiation from FR~II type radio sources is often highly linearly polarized \citep[e.g.][]{bfm+10}. { The Laing-Garrington effect strongly suggests the existence of intracluster magneto-ionic material surrounding the radio sources, causing asymmetry in the polarization properties of double radio sources with one jet \citep{lai88,glcl88}.} Many double radio sources have been detected from galaxies at low redshifts ($z<0.3$), and a large number of sources have been found in dense cluster-like gaseous environments at higher redshifts \citep{ymp89,hl91,wd96,pvc+00,md08}. It is not known if there is any evolution of intracluster magnetic fields at different cosmological epochs. Statistical studies of the redshift evolution of {\it net} rotation measures contributed by the ICM are the key to the puzzle. { Cosmological simulations by \citet{ar11} predicted the redshift dependence of extragalactic rotation measures caused by the intergalactic medium. Contributions by galaxy clusters, however, could not be properly modeled given the cell size in their simulations.} Previously, there have been a number of works investigating the redshift evolution of extragalactic rotation measures \citep{hrg12,nsb13,xh14b,ptu15,lrf+16,opa+17}, which were generally made for the whole contribution along the path from the observer to the sources. A marginal dependence on redshift was found. In the early days, the RM differences were also studied for a small number of double radio galaxies at low Galactic latitudes to investigate the enhanced turbulence in the interstellar medium \citep{sc86,prm+89,lsc90,ccsk92,ms96}. \citet{akm+98} studied 15 radio galaxies at high redshift $z>2$ with large rotation measures, and claimed that their RM contributions likely arise in the vicinity of the radio sources themselves. \citet{gkb+04} and \citet{opa+17} concluded that no statistically significant trend was found for the RM difference of two lobes against redshift.
\citet{vgra19} classified a large sample of close pairs and found a significant difference of $\sim$5--10 rad~m$^{-2}$ between physical pairs (separate components of a multi-component radio galaxy or multiple RMs within one of the components) and random pairs, though the redshift dependence of the physical pairs is not evident. \citet{obv+20} used a similar method but with high precision RM data from the LOFAR Two-Metre Sky Survey, and they found no significant difference between the $\Delta$RM distributions of the physical and non-physical pairs. In fact, the uncertainty of RM measurements is a very important factor for the investigation of evolution. For example, very small RM differences (1$\sim$2 rad~m$^{-2}$) between the lobes of large radio galaxies at low redshifts can be ascertained with high precision observations \citep{omv+19,bowe19,sob+20}. RM differences for a larger sample of pure double radio sources are necessary to further investigate their correlation with redshift. A real pair of two physically associated lobes shown as double radio sources has a small separation and almost the same flux density; such pairs can be found in the NRAO Very Large Array (VLA) Sky Survey \citep[NVSS;][]{ccg+98}. \citet{tss09} have reprocessed the 2-band polarization data of the NVSS, and obtained the two-band RMs for 37,543 sources. \citet{xh14} compiled a catalog of reliable RMs for 4553 extragalactic point radio sources. In addition to the previously cataloged RMs, many new RM data have been published in the literature. In this paper, we have classified RM pairs in the NVSS RM data and in the compiled catalog and later literature since 2014, and cross-identified available galaxy redshift data to obtain RMs and redshifts for 627 pairs. We use these data to study the redshift evolution of RM differences. We introduce the rotation measure data in Section~\ref{sect2} and study the distributions of RM differences of pairs in Section~\ref{sect3}. { Finally, we discuss our results and present conclusions in Section~\ref{sect4} and Section~\ref{sect5}, respectively.} Throughout this paper, a standard $\Lambda$CDM cosmology is used, taking $H_0=100h$~km~s$^{-1}$Mpc$^{-1}$, where $h=0.7$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$. \section{Rotation measure data of pairs} \label{sect2} We obtain the RM data for a sample of pairs from the NVSS RM catalog \citep{tss09} and the literature \citep[][and afterwards]{xh14}. We search for real pairs in the two RM datasets separately, since the observation frequencies and resolutions for the RM measurements are very different. The NVSS radio images are visually inspected to ensure physical pairs. \subsection{The NVSS RM pairs} In the NVSS RM catalog, RM data and flux density measurements are available for 37,543 ``sources''. Here a ``source'' is an independent radio emission component, while a galaxy can produce a few radio components, e.g. two unresolved lobes in addition to a compact core of a radio galaxy. We cross-matched the catalog against itself, and found 1513 source pairs with a flux density ratio $S_{\rm large}/S_{\rm small}$ less than 1.5 and an angular separation between $10'$ and $45''$ (i.e. the angular resolution of the NVSS survey). Flux densities of real pairs from the two lobes of a radio galaxy are most likely to be consistent with each other because of the similar radio power ejected from the same central black hole. The ratio limit is therefore used to largely exclude false pairs of two physically unrelated sources. The maximum separation of $10'$ is set for two reasons.
The first is that it would be difficult to identify physically related double sources at a larger separation without a clear connection such as diffuse emission between the two sources. Second, the number of physical pairs at larger separations is small. In the sample of \citet{vgra19}, only a few pairs have angular sizes greater than $10'$. The minimum separation was set to the beam size ($45''$) of the NVSS survey, so that two very close sources can just be resolved. \citet{vgra19} adopted two times the beam size, i.e. $1'.5$, while we found that the number of physical pairs with separation $\Delta r < 1'.5$ is more than twice that of pairs with $\Delta r > 1'.5$, which is important for obtaining pairs of high-redshift galaxies. \begin{figure} \centering \includegraphics[angle=-90,width=40mm]{005226+121929.ps} \includegraphics[angle=-90,width=40mm]{025209+025422.ps}\\%3.05' \includegraphics[angle=-90,width=40mm]{000042-342401.ps} \includegraphics[angle=-90,width=40mm]{045434-162638.ps} \caption{Example images of paired sources from radio galaxies with available RMs in the NVSS RM catalog. The top left is the pair of J005226+121929 ($\Delta r \simeq 0'.89$); the top right is the pair of J025209+025422 ($\Delta r \simeq 3'.05$); the bottom left is the pair of J000042-342401 ($\Delta r \simeq 0'.88$); and the bottom right is the pair of J045434-162638 ($\Delta r \simeq 3'.05$). The pair names here correspond to the mean RA and Dec of the pair. The top two pairs are located in the FIRST survey area and therefore the FIRST contours are shown in red. All contours are plotted at levels of $\pm$1, 2, 4, ... mJy beam$^{-1}$, with the plus ``+'' indicating the central coordinate of the double radio sources.} \label{dbsch} \end{figure} Visual inspection was carried out to identify real physical pairs. We obtain the NVSS image centered on the mean RA and Dec of each pair, and make a contour map, as shown in Figure~\ref{dbsch}. For candidates with angular separations $\Delta r >3'$, the clear presence of fainter emission connecting the two ``sources'' is the signature of a real pair, so we get 34 real pairs with $\Delta r >3'$. For pairs with a smaller angular separation, we check candidates in the survey coverage area of the VLA Faint Images of the Radio Sky at Twenty centimeters \citep[FIRST;][]{bwh95} to verify the true pairs. With the experience of classifying real pairs from the NVSS contour maps in the FIRST area, we extrapolate the method to the sources outside the survey area of the FIRST. We noticed that physically unrelated pairs are very scarce at much smaller angular separations \citep{vgra19,obv+20}. We get 1007 real pairs from the NVSS sources in total. Four examples of identified real pairs are shown in Figure~\ref{dbsch}. For these 1007 pairs, we search for the redshifts of their host galaxies from several large optical redshift surveys and online databases. First, we cross-match the mean coordinates of the RM pairs with the released spectroscopic redshifts of 2.8 million galaxies from Data Release 16 of the Sloan Digital Sky Survey \citep[SDSS DR16,][]{aaa+20}, and we obtain spectroscopic redshift data for galaxies within 10 arcsec of the given position for 100 pairs. Second, we get additional spectroscopic redshifts from the cross-identification of galaxies in the 6dF Galaxy Survey Redshift Catalogue Data Release 3 \citep{jrs+09} for 10 pairs. We get photometric redshifts for 227 pairs from the cross-match with the SDSS DR8.
For the remaining sources, we cross-identified with the NASA/IPAC Extragalactic Database (NED) and obtain redshifts for 64 pairs. In total, we obtain redshifts and RMs for 401 pairs, as listed in Table~\ref{samplenvss}. The reliability of such a cross-match is about 80\%, as discussed in Appendix~\ref{appen}. This is the largest sample of RM pairs with redshifts currently available from the NVSS RM data.

\begin{figure*}
\centering
\includegraphics[angle=-90,width=75mm]{dRMz.ps} \includegraphics[angle=-90,width=75mm]{dRMz10.ps}
\caption{In the left panel, the RM differences $\Delta$RM for 401 pairs from the NVSS data ({\it top subpanel}) and for 226 pairs from the compiled data ({\it middle subpanel}) and their histograms ({\it bottom subpanel}) are shown against redshift, together with the histograms of the uncertainties $\sigma_{\Delta \rm RM}$. There are 2 and 9 pairs with $\Delta$RM values outside the value range of the subpanels for the NVSS and compiled data, respectively. The distributions for the same data but with $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$ are shown in the right panel.}
\label{dRMz}
\end{figure*}

\subsection{The compiled RM pairs}

In the compiled RM catalog \citep{xh14} and the literature published since then, RMs are available for many pairs, as listed or presented with radio images in the original references. We inspected all the literature and find 444 double sources that are real physical pairs. Among them, 95 pairs have redshifts already listed in the references or available from the NED. For the remaining 349 double sources without redshifts or known host galaxies, we adopted the same redshift-search procedure as for the NVSS RM pairs. The central coordinates of each pair are cross-matched with the SDSS DR16, and we find spectroscopic redshifts within 10 arcsec for 40 pairs. No spectroscopic redshifts are obtained from the 6dF Galaxy Survey data. From the SDSS DR8 catalog, we obtain photometric redshifts for 83 pairs. For the remainder, we found 8 redshifts from the NED. In total, we have 226 physical pairs with both RMs and redshifts, as listed in Table~\ref{samplecomp}. The redshifts for 95 pairs are very reliable, as marked with '*' in the 10th column, but for the remaining 131 pairs the redshift reliability is about 80\%. Notice that the redshifts of pairs at $z>0.9$ are very reliable, because 34 of the 37 such pairs have well-measured redshifts.

\subsection{The RM differences of pairs}

For a physical pair, i.e. the two lobes of a radio galaxy appearing as double radio sources, the radio waves experience almost the same Faraday rotation integration path from the environment in front of the radio galaxy to us, as shown in Figure~\ref{FRIIsch}. The RM difference of a pair mostly indicates the difference of the magnetoionic medium in their local environment on a scale comparable to the projected source separation on the sky plane, i.e. a scale from tens of kpc to a few Mpc, though we do not know the angle between the line of sight and the pair connection line in 3D. All pairs of sources collected in this work are unresolved point sources, so that their RMs are produced by almost the same intervening medium between the source and the observer.
The RM difference $\Delta{\rm RM} = {\rm RM1} - {\rm RM2}$, with an uncertainty of $\sigma_{\Delta \rm RM} = \sqrt{\sigma_{\rm RM1}^2+\sigma_{\rm RM2}^2}$, is therefore the cleanest measurement of Faraday rotation in the ICM, avoiding additional uncertainties caused by the not-well-measured foreground GRM and by unknown intergalactic contributions such as those from cosmic webs and galaxy halos. These unknown uncertainties caused by the foreground of the sources are inherent in all traditional statistics of extragalactic RMs. The RM difference can be negative or positive, since we randomly take one value and subtract the other, so that a zero mean is statistically expected for a large sample. For our sample, the mean RM difference is $-0.21$ and $-0.11$ rad~m$^{-2}$ for the NVSS and compiled RM pairs, respectively, close to zero as expected. The distributions of $\Delta$RM for the two samples of pairs are shown in Figure~\ref{dRMz}. The RMs, their differences and the redshifts of the 401 NVSS pairs and the 226 pairs from the compiled data are listed in Tables~\ref{samplenvss} and \ref{samplecomp}, respectively, together with the angular separation $\Delta r$ and the projected linear separation LS. Only 12 of the 401 pairs (3\%) of the NVSS RM sources have redshifts larger than 0.9, compared with 37 of the 226 pairs (16\%) of the compiled sources. { In the compiled RM data, 34 double sources marked with '--' in columns 11 and 12 have coordinates for the host galaxies but not for the two radio lobes, and thus their angular and linear separations are not available.} Because the RM uncertainty is a very important factor in the study of the small RM differences of pairs, and because the formal uncertainties of the NVSS RM measurements are much larger than those of the compiled data, the two samples should be analyzed separately. RM data with small uncertainties are more valuable for revealing a possible evolution with redshift; therefore, the subsamples with $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$ are given particular attention here, and their distribution is shown in the right panel of Figure~\ref{dRMz}.

\begin{figure}
\centering
\includegraphics[angle=-90,width=0.8\columnwidth]{dRMGB.ps}
\caption{The absolute values of the RM difference $|\Delta$RM$|$ of pairs from the NVSS data ({\it top panel}) and the compiled data ({\it lower panel}) against the Galactic latitude $|b|$. The absence of any apparent dependence implies no significant contribution from the interstellar medium. The uncertainties of the NVSS RM data are not shown for clarity.}
\label{GB}
\end{figure}

\begin{figure}
\centering
\includegraphics[angle=-90,width=\columnwidth]{dRMalsep.ps}
\caption{The absolute values of the RM difference $|\Delta$RM$|$ of pairs for the NVSS data ({\it top panels}) and the compiled data ({\it lower panels}) against the angular separation ($\Delta r$) and the projected linear separation (LS). The uncertainties of the NVSS data are not shown for clarity. A few pairs without separation values or with an RM difference outside the plotted ranges are not shown.}
\label{dRMalsep}
\end{figure}

\section{Large RM difference at high redshifts}
\label{sect3}

Based on this largest sample of pairs with both RMs and redshift data available so far, we study the evolution of RM differences with redshift and check whether the RM difference is related to the separation of the two sources. Figure~\ref{GB} shows the distribution of the absolute values of $|\Delta$RM$|$ of pairs against the Galactic latitude.
Because the RM differences of double sources at low Galactic latitudes may be contaminated by enhanced turbulence in the interstellar medium when the radio waves pass through the Galactic plane \citep[e.g.][]{sc86, ccsk92,ms96}, we discard 9 NVSS pairs and 3 pairs from the compiled data at low Galactic latitudes of $|b|<10\degr$, though these few pairs may not affect our statistics (see Figure~\ref{GB}). A Spearman rank test demonstrates that the $|\Delta$RM$|$ of the NVSS data is uncorrelated with Galactic latitude, with a correlation coefficient of $\sim -0.004$ ($p$-value $\sim$ 0.93). For the pairs from the compiled data, only a very weak correlation is found, with a correlation coefficient of $-0.22$ ($p$-value $\sim$ 0.002). We therefore conclude that the ``leakage'' into the RM differences from the Galactic interstellar medium can be ignored. Figure~\ref{dRMalsep} shows the absolute RM difference as a function of the angular separation and the projected linear separation of the two lobes on the sky plane. To explore the magnetic fields in the intracluster medium, we discard 4 pairs with LS $\geqslant$ 1~Mpc from the NVSS data and 25 such pairs from the compiled data, because the radio waves of these pairs probably pass through much less of the ICM, and their RM differences may reflect more the contribution from the intergalactic medium, given that the typical size of galaxy clusters is about 1~Mpc. In addition, one pair in the compiled RM data and one pair in the NVSS data come from very distant radio galaxies with host-galaxy redshifts of $z>3$; they are also discarded in the following statistics. { All these discarded pairs are marked with '$\dag$' in column 13 of Tables~\ref{samplecomp} and \ref{samplenvss}.} We finally have a cleaned sample of 387 NVSS pairs and 197 compiled pairs with a separation of LS $<$ 1~Mpc, $|b|>10\degr$ and $z<3$ for further analysis.

\begin{figure*}
\centering
\includegraphics[angle=-90,width=70mm]{dRMzsta_bl10.ps} \includegraphics[angle=-90,width=70mm]{dRMzsta_bl10_rm10.ps}
\caption{Distribution of the absolute values of the RM difference $|\Delta \rm RM|$ and the data dispersions as a function of redshift for the 387 NVSS pairs and the 197 pairs of the compiled data with a projected separation of LS $<$ 1~Mpc, $|b|>10\degr$ and $z<3$ in the left panel. Sources with $|\Delta \rm RM| >$ 100 rad~m$^{-2}$ are plotted at the top boundary. The vertical dotted lines in the top two rows indicate the redshifts $z=$ 0.3, 0.6, 0.9, 1.5. The dispersions of the $\Delta$RM distribution are calculated with a Gaussian fitting with a characteristic width $W_{\rm \Delta RM}$, or simply taken as the median absolute values, as shown in the third and fourth rows of panels, respectively. The open circles represent the values from the NVSS RM data, and the filled dots stand for values from the compiled data, plotted at the median redshift for each redshift range. The same plots but for the 152 NVSS pairs and 186 compiled pairs with a formal $\Delta$RM uncertainty $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$ are shown in the right panel.}
\label{dRMzsta}
\end{figure*}

\begin{table*}
\centering
\caption{Statistics of the $\Delta$RM distribution for pairs in redshift bins.\label{dataresult}}
\begin{tabular}{crccccrccc}
\hline
\multicolumn{1}{c}{ } & \multicolumn{5}{c}{Subsamples from the NVSS RM data} & \multicolumn{4}{c}{Subsamples from the compiled RM data} \\
Redshift & No. of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$ &$W_{\Delta \rm RM_{mock}}$~~~& No.
of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$~~~~~~\\
range & pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) & (rad~m$^{-2}$) & pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) \\
\hline
\multicolumn{10}{c}{584 pairs with no uncertainty constraint: 387 NVSS RM pairs and 197 compiled RM pairs}\\
\hline
0.0--0.3 & 116 & 0.171 & 13.3$\pm$1.3 & 9.9$\pm$1.2 &10.2$\pm$3.1 & 57 & 0.198 & 2.1$\pm$0.3 & 1.6$\pm$0.3 \\
0.3--0.6 & 174 & 0.455 & 11.5$\pm$0.9 &11.0$\pm$0.8 &10.2$\pm$1.6 & 67 & 0.439 & 1.5$\pm$0.2 & 1.0$\pm$0.2 \\
0.6--0.9 & 86 & 0.668 & 12.3$\pm$1.3 &13.9$\pm$1.2 &10.7$\pm$2.1 & 39 & 0.708 & 2.0$\pm$0.4 & 1.1$\pm$0.4 \\
0.9--1.5 & 9 & 1.148 & 18.7$\pm$6.6 &17.5$\pm$6.1 & -- & 25 & 1.131 & 46.5$\pm$9.7 &28.5$\pm$10.6 \\
2.0--3.0 & 2 & -- & -- & -- & -- & 9 & 2.430 & 35.1$\pm$12.4 &38.5$\pm$10.7 \\
0.9--3.0 & 11 & 1.198 & 17.3$\pm$5.5 &17.0$\pm$5.1 & -- & 34 & 1.222 & 43.7$\pm$7.7 &28.8$\pm$8.2 \\
\hline
\multicolumn{10}{c}{338 pairs of $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$: 152 NVSS RM pairs and 186 compiled RM pairs}\\
\hline
0.0--0.3 & 54 & 0.150 & 8.6$\pm$1.2 & 7.1$\pm$1.2 &7.4$\pm$1.4 & 53 & 0.198 & 1.7$\pm$0.3 & 1.6$\pm$0.2 \\
0.3--0.6 & 60 & 0.454 & 9.0$\pm$1.2 & 7.9$\pm$1.1 &8.2$\pm$0.8 & 66 & 0.438 & 1.5$\pm$0.2 & 1.0$\pm$0.2 \\
0.6--0.9 & 33 & 0.647 & 9.7$\pm$1.7 & 8.5$\pm$1.6 &8.4$\pm$1.9 & 36 & 0.709 & 2.0$\pm$0.4 & 1.1$\pm$0.4 \\
0.9--1.5 & 4 & -- & -- & -- & -- & 23 & 1.131 & 46.8$\pm$10.2 &28.5$\pm$11.2 \\
2.0--3.0 & 1 & -- & -- & -- & -- & 8 & 2.414 & 33.3$\pm$12.6 &30.9$\pm$11.5 \\
0.9--3.0 & 5 & -- & -- & -- & -- & 31 & 1.201 & 43.6$\pm$8.1 &28.5$\pm$8.7 \\
\hline
\multicolumn{10}{l}{$W_{\Delta \rm RM_{mock}}$ denotes the ``intrinsic'' dispersions of the NVSS data derived by the mock method in Appendix~\ref{appenB}.}
\end{tabular}
\end{table*}

\subsection{The RM difference versus redshift}

In order to reveal the possible redshift evolution of the small RM difference caused by the intracluster medium, the $\Delta$RM data have to be carefully analyzed. From Figure~\ref{dRMz} and Tables~\ref{samplecomp} and \ref{samplenvss}, we see that the uncertainties $\sigma_{\Delta \rm RM}$ of the NVSS RM measurements have values between 0 and 25 rad~m$^{-2}$, while those of the compiled RM data are mostly less than 10 rad~m$^{-2}$, and more than half are less than 1 rad~m$^{-2}$. \citet{xh14b} showed that large uncertainties would leak into the $\Delta$RM distribution. Therefore, we have to study the two samples of pairs with very different $\Delta$RM uncertainties separately. We examine two cases, one for the $\Delta$RMs of the whole samples without an uncertainty threshold, and the other with the threshold of $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$. According to the number distribution in Figure~\ref{dRMz}, we divide the samples of pairs into five redshift ranges, $z=$(0.0,0.3), (0.3,0.6), (0.6,0.9), (0.9,1.5) and (2.0,3.0), and examine the data dispersion in these ranges as shown in Figure~\ref{dRMzsta}, assuming an insignificant evolution of RM differences within a given redshift range. The RM differences of a pair of lobes can be negative or positive, and in the ideal case of a large sample the $\Delta$RM values should follow a Gaussian distribution with a zero mean. The dispersion, i.e.
the width $W_{\Delta \rm RM_{rms}}$ of a Gaussian function, can be fitted from the real data distribution of $\Delta {\rm RM}$ by calculating the root mean square (rms) of the $\Delta$RMs:
\begin{equation}
W_{\Delta \rm RM_{rms}} =\sqrt{\frac{\sum_{i=1,N}(RM1-RM2)_i^2}{N}},
\label{rms}
\end{equation}
where $N$ is the total number of pairs. Alternatively, a more robust approach is to use the median absolute deviation $W_{\Delta \rm RM_{mad}}$, which is suitable for small data samples and robust in the presence of outliers \citep[cf.][]{mcs20}. For our $\Delta$RM data, a zero mean is expected. Therefore, we consider the median of the absolute values of the RM differences, i.e.
\begin{equation}
W_{\Delta \rm RM_{mad}}^{\rm ori} = {\rm Median}(|RM1-RM2|_{i=1,N}).
\label{madfm}
\end{equation}
For normally distributed data, this can be linked to $W_{\Delta \rm RM_{rms}}$ by $ W_{\Delta \rm RM_{mad}} = 1.4826 \times W_{\Delta \rm RM_{mad}}^{\rm ori} \simeq W_{\Delta \rm RM_{rms}}$ \citep{llk+13}. In the redshift ranges with more than five pairs, we calculate the dispersions of the RM differences, $ W_{\Delta \rm RM_{rms}} $ and $ W_{\Delta \rm RM_{mad}} $; see Table~\ref{dataresult} and Figure~\ref{dRMzsta}. { Though a large $\Delta$RM is possible for embedded double sources, contributed by the intracluster medium with a value of up to a few hundred rad~m$^{-2}$ \citep[e.g.][]{ckb01}, a few outliers are cleaned in our statistics since they affect the calculation of the dispersion of the main body of the data. For the rms calculation, data points scattered away from the main distribution by more than three times the standard deviation are marked as outliers and removed iteratively until no outliers are marked. The trimmed rms of $\Delta \rm RM$ is taken as $ W_{\Delta \rm RM_{rms}} $ for a subsample in a redshift bin.} The uncertainty of $ W_{\Delta \rm RM_{rms}} $ is taken as the standard error for the zero mean, as done by \citet{vgra19}. { For the median calculation, the outliers are also cleaned first, and the median is found from the remaining $|\Delta {\rm RM}|$ values, which is taken as $ W_{\Delta \rm RM_{mad}}^{\rm ori}$ and then converted to $ W_{\Delta \rm RM_{mad}}$ with a factor of 1.4826. Its uncertainty is taken as $\sigma_{\left < |\Delta \rm RM_i| \right >}$, the error of the estimated mean value of $|\Delta {\rm RM}|$, also scaled by a factor of 1.4826. } The dispersion calculated above in fact includes a ``noise'' term coming from the various uncertainties of the RM values. In principle, the noise term should be subtracted from the $\Delta$RM dispersion to obtain the real astrophysical contributions. For each pair, the noise term can be expressed as the quadrature sum of the RM uncertainties of the two lobes, i.e. for the $i$th pair, the noise is $ \sigma_{\Delta \rm RM_i}^2 = (\sigma_{RM1}^2+\sigma_{RM2}^2)_i$. The noise subtraction for the dispersion width, $\sqrt{ \ W_{\Delta \rm RM_{rms}}^2 - \langle \sigma_{\Delta \rm RM_i}^2 \rangle }$, should be carried out under the assumption that the uncertainties of the observed RMs provide a realistic estimate of the measurement error. However, the RM uncertainties of the NVSS data are underestimated for most sources \citep{sts11} or probably overestimated for physical pairs \citep{vgra19}, likely owing to a previously unknown systematic uncertainty \citep{mgh+10,xh14}.
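The clipping and dispersion estimators described above can be summarized in a short numerical sketch (a minimal illustration assuming NumPy; it is not the exact code used to produce Table~\ref{dataresult}):
\begin{verbatim}
import numpy as np

def drm_dispersion(drm, nsigma=3.0):
    """Dispersion of RM differences in one redshift bin after iterative clipping."""
    d = np.asarray(drm, dtype=float)
    # iteratively remove points more than nsigma standard deviations from the mean
    while True:
        keep = np.abs(d - d.mean()) <= nsigma * d.std()
        if keep.all():
            break
        d = d[keep]
    w_rms = np.sqrt(np.mean(d**2))         # trimmed rms about the expected zero mean
    w_mad = 1.4826 * np.median(np.abs(d))  # median absolute value scaled to a Gaussian width
    return w_rms, w_mad
\end{verbatim}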
For the compiled RM data, different methods were used to estimate the measurement errors, and observations without ionospheric RM correction introduce an extra RM uncertainty of about 3 rad~m$^{-2}$. It is hard to obtain a realistic, uniform estimate of the measurement error for the pair sample in this paper. Fortunately, this work is concerned with the RM difference $(\Delta \rm RM)^2$, which largely cancels any systematic uncertainties that contribute equally to the RM measurements of two closely located sources, though a small unknown amount of noise leakage may still occur. We find that the $ W_{\Delta \rm RM_{mad}}$ values are even much smaller than the average noise power $\langle \sigma_{\Delta \rm RM}^2 \rangle$; thus no correction for the noise term is made to the dispersion quantities $ W_{\Delta \rm RM_{rms}} $ and $ W_{\Delta \rm RM_{mad}} $ in Table~\ref{dataresult}. With these considerations, we can now examine the dispersion of the RM differences of pairs as a function of redshift $z$, with or without a threshold on the $\Delta$RM uncertainty, for the NVSS RM pairs and the compiled RM pairs, respectively. First of all, the dispersion amplitudes represented by $ W_{\Delta \rm RM_{rms}} $ and $ W_{\Delta \rm RM_{mad}} $ are consistent with each other within the error bars, as shown in Table~\ref{dataresult} and Figure~\ref{dRMzsta}. { Second, for the NVSS RM pairs, no significant variation of the dispersion with redshift is seen in either the whole sample or the high-precision sample with $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$, which is consistent with the results for physical pairs obtained by \citet{vgra19}. However, the dispersion obtained from the whole sample is systematically larger than that from the high-precision sample,} which implies that the large uncertainties of the NVSS RM values \citep[a noise term of around 10.4 rad~m$^{-2}$ given by][]{sch10} significantly affect the dispersion of $\Delta$RM and probably bury a small-amplitude evolution at low redshifts. This is a sign of some noise leakage that cannot be removed. Third, for pairs from the compiled RM data, which have very small noise, a much larger dispersion appears for pairs at $z>0.9$ in both samples, with and without the $\sigma_{\Delta \rm RM}$ threshold, compared to the small dispersion for pairs at $z<0.9$. { The dispersion amplitude for pairs at $z<0.9$ is mostly less than 2 rad~m$^{-2}$, but for pairs at $z>0.9$ the dispersion is about 30 to 40 rad~m$^{-2}$. Even if the measurement noise, which is about 5.6/4.7 rad~m$^{-2}$ at $z>0.9$ without/with the $\sigma_{\Delta \rm RM} \leqslant 10$~rad~m$^{-2}$ threshold, is discounted, the conclusion of a larger dispersion is unchanged. Since the dispersion values for the two redshift ranges at $z>0.9$ are similar, the data of all pairs in the redshift range $0.9<z<3.0$ are jointly analyzed, and the uncertainty becomes smaller. The large dispersion for the high-redshift pairs at $z>0.9$ is therefore a good detection at about the 5-sigma level. } { We note that the low-redshift pairs in the compiled data are mainly measured at low frequencies by LOFAR (144~MHz) \citep[e.g.][]{obv+20} and MWA (200~MHz) \citep[e.g.][]{rgs+20}. Low-frequency data may probe the outer parts of galaxy clusters or poor clusters; hence the dispersion amplitude of around 2 rad~m$^{-2}$ calculated from pairs at $z<0.9$ should be read as a lower limit of the Faraday rotation from the intracluster medium.
The dispersion of about 7$\sim$9 rad~m$^{-2}$ estimated from the NVSS RM data with $\sigma_{\Delta \rm RM} \leqslant$ 10 rad~m$^{-2}$ should be taken as an upper limit. The ``intrinsic'' dispersions of the NVSS RM data in the three low-redshift bins at $z<0.9$ are verified by the mock method introduced by \citet{xh14b}; see Appendix~\ref{appenB}. Based on the above results, we conclude that the dispersion of RM differences for pairs at $z<0.9$ should lie in the range of 2$\sim$8 rad~m$^{-2}$, much smaller than the value of 30$\sim$40 rad~m$^{-2}$ for high-redshift pairs at $z>0.9$}.

\begin{figure}
\centering
\includegraphics[angle=-90,width=0.7\columnwidth]{LSz.ps}
\caption{Projected separation of pairs at various redshifts from the NVSS sample (\textit{top}) and the compiled data (\textit{bottom}). Note that 34 pairs (14 pairs at $z>0.9$) in the compiled data are not included since their angular and hence linear separations are not available.}
\label{LSz}
\end{figure}

\begin{table*}
\centering
\caption{Statistics of the $\Delta$RM distribution for pairs with a separation larger or smaller than 500~kpc.\label{dataresultls500}}
\begin{tabular}{crcccrccc}
\hline
\multicolumn{1}{c}{ } & \multicolumn{4}{c}{Subsamples from the NVSS RM data} & \multicolumn{4}{c}{Subsamples from the compiled RM data} \\
Redshift & No. of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$~~~& No. of & $z_{\rm median}$ & $W_{\Delta \rm RM_{rms}}$ & $W_{\Delta \rm RM_{mad}}$~~~~~~\\
range & pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) & pairs & & (rad~m$^{-2}$) & (rad~m$^{-2}$) \\
\hline
\multicolumn{9}{c}{pairs with a separation larger than 500 kpc: 76 NVSS pairs and 54 compiled pairs}\\
\hline
0.0--0.3 & 7 & 0.218 & 10.9$\pm$4.5 &15.7$\pm$3.4 & 15 & 0.199 & 2.6$\pm$0.7 & 2.1$\pm$0.7 \\
0.3--0.6 & 37 & 0.467 & 13.7$\pm$2.3 & 8.5$\pm$2.4 & 19 & 0.467 & 1.1$\pm$0.2 & 1.1$\pm$0.2 \\
0.6--0.9 & 27 & 0.704 & 9.7$\pm$1.9 & 7.1$\pm$1.8 & 18 & 0.754 & 4.0$\pm$1.0 & 2.1$\pm$1.0 \\
0.9--1.5 & 5 & 1.247 & 23.2$\pm$11.6&29.8$\pm$9.6 & 2 & -- & -- & -- \\
\hline
\multicolumn{9}{c}{pairs with a separation smaller than 500 kpc: 309 NVSS pairs and 109 compiled pairs}\\
\hline
0.0--0.3 & 109 & 0.167 & 13.4$\pm$1.3 & 9.7$\pm$1.3 & 34 & 0.210 & 1.7$\pm$0.3 & 1.5$\pm$0.3 \\
0.3--0.6 & 137 & 0.453 & 11.3$\pm$1.0 &11.2$\pm$0.9 & 38 & 0.397 & 1.6$\pm$0.3 & 0.9$\pm$0.3 \\
0.6--0.9 & 59 & 0.653 & 13.3$\pm$1.8 &15.3$\pm$1.6 & 19 & 0.684 & 1.4$\pm$0.4 & 0.6$\pm$0.4 \\
0.9--1.5 & 4 & -- & -- & -- & 18 & 1.114 & 28.1$\pm$6.8 &27.7$\pm$6.9 \\
\hline
\end{tabular}
\end{table*}

\begin{figure*}
\centering
\includegraphics[angle=-90,width=70mm]{dRMzls5_nv.ps} \includegraphics[angle=-90,width=70mm]{dRMzls5.ps}
\caption{Absolute RM difference ($|\Delta \rm RM|$) distributions and their dispersions ($ W_{\Delta \rm RM_{rms}} $ and $ W_{\Delta \rm RM_{mad}} $) against redshift for pairs with a separation larger and smaller than 500~kpc for the NVSS RM sample ({\it left}) and the compiled data sample ({\it right}).}
\label{dRMzls5}
\end{figure*}

\subsection{The RM difference and projected separation}

Is the significant change of the $\Delta$RM dispersion for pairs at $z>0.9$ biased by the linear sizes of the double radio sources or their separation? Figure~\ref{LSz} shows the projected separation of pairs versus redshift for both the NVSS and compiled RM samples. The majority of high-redshift pairs ($z>0.9$) in the compiled data have a separation less than 500~kpc.
As seen in Figure~\ref{dRMalsep}, the absolute values of the RM differences decline to small values when the projected separation is larger than 1~Mpc, the typical size of a galaxy cluster. Pairs with a projected separation greater than 1~Mpc probably lie at a large angle to the line of sight, and their light paths pass through much less of the intracluster medium. To examine whether the larger RM dispersion of high-redshift pairs is caused by a different separation, in the following we split the NVSS sample and the compiled data sample into two cases, i.e. the subsamples with a separation larger or smaller than 500~kpc. In the compiled sample, 34 pairs (14 sources at $z>0.9$) are omitted since their angular and hence linear separations are not available, though they are probably smaller than 1~Mpc. The statistical results are shown in Figure~\ref{dRMzls5} and listed in Table~\ref{dataresultls500}. No obvious difference in the $\Delta \rm RM$ dispersion can be seen between the two subsamples with separations smaller and larger than 500~kpc in any of the three low-redshift bins for either the NVSS data or the compiled data. The dispersion values of the subsamples are consistent with the results derived from the whole sample, which means that the redshift-dependent dispersion is not caused by different pair separations. For high-redshift pairs at $z>0.9$, statistics can be made for the NVSS subsample with separations larger than 500~kpc and also for the compiled subsample with smaller separations. They both show a larger dispersion, though with different uncertainties. The larger RM difference is detected at the $4\sigma$ level for the compiled subsample.

\section{Discussion}
\label{sect4}

If the larger RM differences of high-redshift pairs were caused by the intergalactic medium between the pair and us, then the larger the separation between the two lobes of a pair, the more likely their radio waves experience different foreground cosmic filaments and intervening medium along the lines of sight. That is to say, the larger the separation of the lobe positions, the more likely a larger RM difference \citep[e.g.][]{omv+19}. However, for the compiled RM samples in Figure~\ref{dRMalsep} and Figure~\ref{dRMzls5}, this is not the case; the results are just the opposite, which means that the main RM differences are caused by the local ICM environment surrounding the double radio sources, instead of the intervening intergalactic medium in the foreground of a pair of two lobes. Therefore, the RM differences of pairs are excellent probes of the ICM.

\subsection{Strong magnetic fields in the intracluster medium in the early Universe}

The evidence for larger RM differences of higher-redshift pairs, having carefully excluded any obvious influence from the Galactic and intergalactic contributions and also a possible dependence on the linear separations of pairs, demonstrates strong magnetic fields in the ICM in the early Universe. We can estimate the field strengths in the ICM from the dispersion of RM differences at the present epoch and at high redshift. As mentioned in Section~\ref{sect1}, a pair of lobes is believed to mainly reside in the dense environments of galaxy clusters/groups. Such dense ambient gas plays a key role in forming the Faraday screens which contribute to the difference between the RM values of the lobes. The RM asymmetry of a pair of lobes indicates that there probably exists a large-scale ordered net magnetic field in the foreground ICM on a scale comparable to the pair separation.
Because of the turbulent nature of intracluster magnetic fields, large-scale fluctuations ($>$ 100~kpc) should be responsible for the RM differences of pairs, and a very large outer scale of $\sim$450~kpc for turbulent intracluster magnetic fields is possible, as used in the modeling of magnetic fields for a giant radio halo \citep{vmg+10}. Small-scale field fluctuations on scales of a few kpc could be averaged out over a path length comparable to the projected separation. A pair of radio sources in our sample could have any separation and an arbitrary orientation in space. The path difference along the line of sight of the two lobes may vary from zero to the largest linear size. Assuming a unidirectional large-scale magnetic field geometry and a constant electron density in the ambient environs, the RM difference is
\begin{equation}
\Delta {\rm RM} = 812~n_e B L_{||} \cos{\theta},
\end{equation}
where $L_{||}$ is the separation of the pair (in kpc) projected onto the line of sight, and $\theta$ is the angle between the magnetic field direction and the line of sight. For a sample of pairs with the same separation but random directions of magnetic fields, the mean of $\Delta$RM is
\begin{equation}
\left<\Delta {\rm RM}\right> = 812~n_e B L_{||} \int_0^\pi\cos{\theta} \sin{\theta} d\theta \left/\int_0^\pi \sin{\theta} d\theta=0 \right. ,
\end{equation}
and the variance is given by
\begin{equation}
\begin{split}
\left<(\Delta {\rm RM})^2\right> & = (812~n_e B L_{||})^2 \int_0^\pi\cos^2{\theta}\sin{\theta} d\theta \left/\int_0^\pi \sin{\theta} d\theta \right. \\
& = \frac{1}{3}(812~n_e B L_{||})^2.
\end{split}
\end{equation}
Furthermore, we consider a pair of sources with a random separation $L$ along a random orientation $\phi$, i.e. $L_{||} = L \cos{\phi}$, where $L$ is the size and $\phi$ is the angle between the orientation and the line of sight. Hence, we expect
\begin{equation}
\begin{split}
\left<(\Delta {\rm RM})^2\right> & = \frac{1}{3}(812~n_e B)^2 \left<L^2\right> \left<\cos^2{\phi}\right> \\
& =\frac{1}{9}(812~n_e B)^2\left<L^2\right>.
\end{split}
\end{equation}
Here $\left<L^2\right>$ denotes the mean square of the separation of pairs. The rest-frame RM dispersion of a Faraday screen at redshift $z$ is expected to be reduced to the observed value by a factor of $(1+z)^2$. Then we can derive an analytical formulation by assuming the field strength and the electron density to be constant in the environs around the double radio sources at redshift $z$, i.e.
\begin{equation}
\begin{split}
\left<(\Delta {\rm RM})^2\right> =\frac{1}{9}812^2\left<L(z)^2\right> \left[\frac{n_e(z)B(z)}{(1+z)^2}\right]^2,
\end{split}
\end{equation}
and finally we get
\begin{equation}
\begin{split}
W_{\Delta \rm RM_{rms}} & = \left<(\Delta {\rm RM})^2\right>^{1/2}\\
& = 271~n_{e}(z)B(z)\left<L(z)^2\right>^{1/2} (1+z)^{-2}.
\end{split}
\label{Wmodel}
\end{equation}
From Equation~(\ref{Wmodel}), we can derive the magnetic fields in the ICM if the dispersion of the RM differences, the electron density $n_{e}$, and the root mean square of the pair separations $\left<L^2\right>^{1/2}$ at redshift $z$ are known. Based on the results shown in Figure~\ref{dRMzsta}, the dispersion of the RM differences of pairs remains nearly flat at $z<0.9$, with an amplitude of about 2 to 8 rad~m$^{-2}$. We take a typical value of 3.5 rad~m$^{-2}$ to represent the dispersion at the present time. For pairs at $z>0.9$, the dispersion increases to 30 to 40 rad~m$^{-2}$ at a median redshift of $z=1.1$.
We take a typical value of 35 rad~m$^{-2}$ at $z=1.1$. For the root mean square of the pair separations $\left<L^2\right>^{1/2}$ at redshift $z$, we take the same typical value of 350~kpc for pairs at low and high redshifts\footnote{The average projected linear separation is 281~kpc and 234~kpc for the samples at $z<0.9$ and $z>0.9$ with a separation smaller than 500~kpc, based on the fact that the majority of pairs at $z>0.9$ have small separations and their dispersions are consistent with those from the whole sample. Considering the random projection effect, we estimate that the real pair separations should be larger by a factor of $\sqrt{2}\simeq 1.4$, i.e. 396~kpc or 329~kpc, respectively.}. At low redshifts, the mean electron density $n_{e}$ in the ICM is taken to be $4\times 10^{-4}$~cm$^{-3}$, which is obtained by integrating the $\beta$-model profile of the electron density over a sphere with a radius of 1~Mpc for 12 galaxy clusters \citep{gdm+10}. According to Equation~(\ref{Wmodel}), from $W_{\Delta \rm RM_{rms}} = 3.5$ rad~m$^{-2}$, $n_{e}$ = $4\times 10^{-4}$~cm$^{-3}$ and $\left<L^2\right>^{1/2}$ = 350~kpc at $z=0$, we obtain a simple estimate of the magnetic field strength on this scale of $B= 0.1~\mu$G at the present epoch. At high redshift, $z>0.9$, we do not know the exact properties of the ICM. If we assume that the mean electron density $n_{e}(z)$ at $z>0.9$ is the same as the density at the present epoch, along with $W_{\Delta \rm RM_{rms}} = 35$ rad~m$^{-2}$ and $\left<L^2\right>^{1/2}$ = 350~kpc at $z=1.1$, the magnetic field would be $B(z) = 4~\mu$G. To obtain this value, any field reversals on scales smaller than 350~kpc are ignored. If field reversals on a scale of 30~kpc are considered, the field strength would be boosted by a factor of $\sqrt{350/30}$, reaching 14~$\mu$G.

\subsection{Implication of strong magnetic fields in the ICM}

The field strength estimated above for the ICM from the pairs at $z<0.9$, if in the form of a uniform large-scale field geometry, is 0.1 $\mu$G, close to the minimum intracluster magnetic field obtained by \citet{pjds13}. More tangled fields would have a strength a few times larger. The estimated field strength is smaller than that of some targeted clusters, such as a few $\mu$G on scales of tens of kpc in merging clusters and a few tens of $\mu$G in cool-core clusters \citep[see e.g.][]{ct02,vda+19}. There are two possible reasons. First, the well-measured RM differences at low redshifts are dominated by the RM data with very small uncertainties, which were mainly measured at low frequencies by LOFAR \citep[e.g.][]{obv+20} and MWA \citep[e.g.][]{rgs+20}. Observations at such low frequencies may probe the medium in the outer parts of galaxy clusters or in poor clusters, so that the estimated field strength is close to that of the large-scale intergalactic magnetic fields around galaxy clusters, as illustrated by simulations \citep{rkcd08}. In contrast, the smaller number of RM data with larger uncertainties and a more scattered distribution were mostly observed at 1.4~GHz or higher frequencies, which are more likely to probe the inner parts of galaxy clusters. Second, at low redshifts most powerful radio sources reside in comparatively sparse environments, with few exceptions [e.g. Cygnus~A \citep{dcp87} and other sources with large RM differences in the compiled data], as pointed out by \citet{pvc+00}, so that the dispersion of RM differences is small.
This is supported by the NVSS sample, which shows a similarly small dispersion of RM differences, i.e. an upper limit of 7--9 rad~m$^{-2}$ derived in this work and 4.6$\pm$1.1 rad~m$^{-2}$ by \citet{vgra19}. The uniform intracluster magnetic field strength of 4 $\mu$G (or $\sim$14 $\mu$G for tangled fields) at $z>0.9$ derived from the RM differences of pairs is intriguing, as it is comparable to the field strength of galaxy clusters at low redshifts \citep[see a review by][]{han17}, for example a central field strength of 4.7 $\mu$G in the Coma cluster \citep{bfm+10} and a few microGauss in a sample of X-ray selected clusters \citep{ckb01,bck16}. This is evidence for strong organized magnetic fields in galaxy clusters in the early Universe. If this scenario is correct, it poses a considerable challenge to theories of the origin of intracluster magnetic fields, because the time available at $z>0.9$ is not sufficient to generate and align strong magnetic fields on such large scales. The build-up of large-scale coherent magnetic fields via the inverse cascade of the $\alpha$--$\Omega$ dynamo, which often works in normal spiral galaxies, cannot operate in galaxy clusters because they do not show an observed organized rotation. Even if they did, only one or two rotations within the age of the Universe, given the slow cluster rotation ($v \leq 100$ km s$^{-1}$), would be insufficient for the generation of such a strong mean field \citep{ct02}. The origin and growth of magnetic fields in galaxy clusters remain an enigma. The widely accepted hypothesis is that they are amplified from much weaker seed fields (either primordial or injected by galactic outflows) through a variety of processes \citep[see the review by][]{dvbz18}. Simulations show evidence of significant magnetic field amplification by a small-scale dynamo driven by turbulence and compression during structure formation \citep{vbbb18,dvbb19}. Assuming that the dynamo growth can start soon after the cluster forms, it often takes a time-span of several Gyr to amplify magnetic fields to a few $\mu$G \citep[e.g.][]{dvbb19}. Increasing the Reynolds number can reduce the time scale of magnetic amplification, but the number is limited by the efficiency of the transfer of kinetic energy into magnetic energy. Merger-induced shocks that sweep through the ICM or motions induced by sloshing cool cores may play additional roles in the fast amplification of intracluster magnetic fields at high redshifts \citep{dvbz18}, but not up to such a large scale. Recent observations of diffuse radio emission in distant galaxy clusters \citep{dvb+21} have put a strong limit on the time scale of magnetic field growth by revealing field strengths of order $\mu$G at $z\sim$ 0.7. The time available for the amplification in their case is about 3.7~Gyr. Our results provide strong evidence for strong magnetic fields on such large scales at $z>0.9$ and even up to $z\sim$ 2, comparable to those in nearby clusters, which places a more stringent constraint on magnetic field generation and evolution.
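For reference, the arithmetic behind the field strengths derived above follows directly from Equation~(\ref{Wmodel}), using the adopted values $n_{e} = 4\times 10^{-4}$~cm$^{-3}$ and $\left<L^2\right>^{1/2} = 350$~kpc:
\begin{equation}
\begin{split}
B(0) & \simeq \frac{W_{\Delta \rm RM_{rms}}}{271\,n_{e}\left<L^2\right>^{1/2}} = \frac{3.5}{271 \times 4\times 10^{-4} \times 350}~\mu{\rm G} \simeq 0.1~\mu{\rm G}, \\
B(1.1) & \simeq \frac{W_{\Delta \rm RM_{rms}}\,(1+z)^2}{271\,n_{e}\left<L^2\right>^{1/2}} = \frac{35 \times 2.1^2}{271 \times 4\times 10^{-4} \times 350}~\mu{\rm G} \simeq 4~\mu{\rm G},
\end{split}
\end{equation}
and multiplying the latter value by $\sqrt{350/30}\simeq 3.4$ for field reversals on a 30~kpc scale gives the quoted $\sim$14~$\mu$G for tangled fields.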
\section{Conclusions}
\label{sect5}

The Faraday rotation measure differences between the two lobes of a sample of radio galaxies, which are completely free from the Faraday rotation contributed by the interstellar medium of the Milky Way and by the intergalactic medium between the radio galaxies and us, are significantly larger at $z>0.9$, indicating average intracluster magnetic fields of about 4 $\mu$G (or 14 $\mu$G for tangled fields), in contrast to the weaker intracluster fields at the present epoch of about 0.1 $\mu$G (or 0.3 $\mu$G for tangled fields). Such strong magnetic fields in the early Universe pose a big challenge to theories of the generation of cosmic magnetic fields. More RM data for pairs at high redshift are desired to reach a firm conclusion, since the current data sets are limited in number and have somewhat large measurement uncertainties. Polarization observations yielding RMs with better precision for a larger sample of double radio sources should be available soon, and they are necessary to further constrain the evolution of magnetic fields in the ICM.
\section{\@startsection{section}{1 \z@{.7\linespacing\@plus\linespacing}{.5\linespacing {\normalfont\huge\scshape\centering \renewcommand\subsection{\@startsection{subsection}{2 \z@{.5\linespacing\@plus.7\linespacing}{-.5em {\normalfont\LARGE\scshape}} \renewcommand\subsubsection{\@startsection{subsubsection}{3 \z@{.5\linespacing\@plus.7\linespacing}{-.5em {\Large\scshape}} \makeatother \pagestyle{headings} \newcommand{\Section}[1]{\section{#1} \markleft{\thesection. #1}} \newcommand{\Subsection}[1]{\subsection{#1} \markright{\thesubsection. #1}} \usepackage{longtable} \begin{document} \title[Balanced-Viscosity solutions for multi-rate systems {\Large Balanced-Viscosity solutions to\\ infinite-dimensional multi-rate systems\thanks{A.M. was partially supported by DFG within SPP\,2256 (no.\,441470105) under grant Mi\,459/9-1.}} \author[Alexander Mielke]{\large Alexander Mielke} \address{\upshape A.\ Mielke, Weierstra\ss-Institut f\"ur Angewandte Analysis und Stochastik, Mohrenstr.\ 39, D--10117 Berlin and Institut f\"ur Mathematik, Humboldt-Universit\"at zu Berlin, Rudower Chaussee 25, D--12489 Berlin (Adlershof) -- Germany\newline \indent ORCID 0000-0002-4583-3888} \email{alexander.mielke\,@\,wias-berlin.de} \author[Riccarda Rossi]{\large Riccarda Rossi} \address{\upshape R.\ Rossi, DIMI, Universit\`a degli studi di Brescia, via Branze 38, I--25133 Brescia -- Italy} \email{riccarda.rossi\,@\,unibs.it} \date{December 2, 2021} \begin{abstract} We consider generalized gradient systems with rate-independent and rate-dependent dissipation potentials. We provide a general framework for performing a vanishing-viscosity limit leading to the notion of parametrized and true Balanced-Viscosity solutions that include a precise description of the jump behavior developing in this limit. Distinguishing an elastic variable $u$ having a viscous damping with relaxation time $\eps^\alpha$ and an internal variable $z$ with relaxation time $\eps$ we obtain different limits for the three cases $\alpha \in (0,1)$, $\alpha=1$ and $\alpha>1$. An application to a delamination problem shows that the theory is general enough to treat nontrivial models in continuum mechanics. \\[0.2em] \textbf{Keywords:} balanced-viscosity solution, reparametrized solutions, energy-dissipation principle, generalized gradient systems, delamination model. 
\\[0.2em] \textbf{MSC:} 35Q74 47J30 49J40 49J45 49J52 74D10 74R99 \end{abstract} \maketitle \setlength{\leftmargini}{2em} \setlength{\leftmargin}{1.3em} {\small\tableofcontents} \subsubsection*{List of symbols} \mbox{}\\[-1.7em] {\small \begin{center}\bigskip \begin{longtable}{lll} $\Spu, \Spz$ & state spaces & \eqref{intro-state_spaces} \smallskip \\ $\Spq = \Spu \ti \Spz$ & overall state space & \smallskip \\ $ \Spw \subset \Spu, \ \Spx \Subset \Spz $ & energy spaces & Hyp.\ \ref{hyp:setup} \smallskip \\ $\Spy \supset \Spz$ & space for $1$-homogeneous dissipation potential & Hyp.\ \ref{hyp:setup} \smallskip \\ $\calR: \Spy \to [0,\infty) $& $1$-homogeneous dissipation potential & Hyp.\ \ref{hyp:diss-basic} \smallskip \\ $\disv u : \Spu \to [0,\infty)$, \ $\disv z : \Spz \to [0,\infty)$ & viscous dissipation potentials & Hyp.\ \ref{hyp:diss-basic} \smallskip \\ $\disv x^*: \mathbf{X}^* \to [0,\infty)$, $ \mathbf{X} \in \{ \Spu, \Spz\}$ & Legendre-Fenchel conjugate of $\disv x$ for $\mathsf{x} \in \{ \sfu, \sfz\} $ & Def.\ \ref{def:DissPotential} \smallskip \\ $\conj z: \Spz^* \to [0,\infty)$ & conjugate of $\calR{+}\disv z$ & \eqref{def:conj} \smallskip \\ $\disve x{\lambda}, \ \mathsf{x} \in \{ \sfu,\sfz\}, \ \lambda \in (0,\infty) $ & rescaled viscous dissipation potentials & \eqref{eq:Def.Vx.lambda} \smallskip \\ $\Psi_{\eps,\alpha} = \disve u{\eps^\alpha} + \calR+ \disve z{\eps}$ & overall viscous potential & \eqref{eq:def.Psi.e.a} \smallskip \\ $\calE: [0,T]\times \Spu \ti \Spz \to (-\infty,+\infty]$ & driving energy functional & Hyp.\ \ref{hyp:1} \smallskip \\ $\mathcal{S}_E$, $E>0$ & energy sublevels & \eqref{Esublevels} \smallskip \\ $\pl_q \calE$ & Fr\'echet subdifferential of $\calE(t,\cdot)$ & \eqref{Frsubq} \smallskip \\ $\slovname{x}: [0,T]\ti \domq \to [0,\infty] $ & generalized slope functional for $\mathsf{x} \in \{ \sfu, \sfz\} $ & \eqref{def:GeneralSlope} \smallskip \\ $\argminSlo xtq, \ (t,q)\in [0,T]\ti\domq$ & set of minimizers for the slope $ \slov {x}tq, \ \mathsf{x} \in \{ \sfu, \sfz\}$ & \eqref{not-empty-mislo} \smallskip \\ $ \SetG\alpha tq, \ (t,q)\in [0,T]\ti\domq $ & sets of positivity for the slopes at $(t,q)$ & \eqref{setGalpha} \smallskip \\ $\calB_\psi$ & B-function associated with a dissipation potential $\psi$ & \eqref{Bfunctions-2} \smallskip \\ $\mfb_\psi$ & vanishing-viscosity contact potential assoc.\ with $\psi$ & \eqref{bipotentials-1} \smallskip \\ $\mfB_\eps^\alpha$, $\eps\geq 0$, & rescaled joint B-function & \eqref{eq:def.B.al.eps} \smallskip \\ $\mfM_\eps^\alpha$, $\eps >0$, & rescaled joint M-function & \eqref{def-Me} \smallskip \\ $\mfM_0^\alpha$ & (limiting) rescaled joint M-function & \eqref{mename-0} \smallskip \\ $\mfM_0^{\alpha, \mathrm{red}}$ & reduced rescaled joint M-function & \eqref{decomposition-M-FUNCTION} \smallskip \\ $\mathscr{A}([a,b];[0,T]\ti \Spq)$ & admissible parametrized curves from $[a,b]$ to $[0,T]\ti \Spq$ & Def.\ \ref{def:adm-p-c} \\ $ \admtcq{t}{q_0}{q_1} $ & admissible transition curves betw.\ $q_0$ and $q_1$ at time $t$ & Def.\ \ref{def:adm-p-c} \\ $\Sigma_\alpha$ & contact set & \eqref{ctc-set} \\ $\rgs Au \rgs Cz = \rgs Au \cap \rgs Cz$& evolution regimes with $ \mathrm{A \in \{E,V,B\}} \text{ and } \mathrm{C \in \{R,V,B\} }$ & \eqref{regime-sets} \\ $\Varname \calR$ & $\calR$-variation & \eqref{R-variation} \\ $ \mathrm{J}[q] $ & jump set of a true $\BV$ solution & Def.\ \ref{def:jumpset} \\ $ \costname{\mename 0\alpha}$ & Finsler cost induced by $ \mename 0\alpha$ & \eqref{Finsler-cost} \\ $ \Varname{\mename 
0\alpha} $ & total variation induced by $\mename 0 \alpha$ & \eqref{Finsler-variation} \end{longtable} \end{center} } \vspace{-1em} \Section{Introduction} \label{se:Intro} In this paper we address rate-independent limits of viscous evolutionary systems that are motivated by applications in solid mechanics. These systems can be described in terms of two variables $u\in \Spu $ and $ z\in \Spz$; throughout, we shall assume that the state spaces \begin{equation} \label{intro-state_spaces} \text{$\Spu$ and $\Spz$ are (separable) reflexive Banach spaces.} \end{equation} Typically, $u$ is the displacement, or the deformation of the body, whereas $z$ is an internal variable specific of the phenomenon under investigation, in accordance with the theory of \emph{generalized standard materials}, see \cite{HalNgu75MSG}. \Subsection{Rate-independent systems} \label{su:RIS} Under very slow loading rates, one often assumes that $u$ satisfies a \emph{static} balance law that arises as Euler-Lagrange equation from minimizing the energy functional $\calE$ with respect to $u$. The evolution of $z$ is governed by a (doubly nonlinear) subdifferential inclusion featuring the $z$-derivative of the energy and the viscous force in form of the subdifferential $\pl \calR$ of a dissipation potential $\calR$: \begin{subequations} \label{grad-structure} \begin{align} & \label{grad-structure-u} \rmD_u \ene t{u(t)}{z(t)} =0 && \text{ in } \Spu^*, && t \in (0,T), \\ \label{grad-structure-z} \mbox{}\qquad\pl \calR(z'(t)) + {}& \rmD_z \ene t{u(t)}{z(t)} \ni 0 && \text{ in } \Spz^*, && t \in (0,T)\,. \qquad\mbox{} \end{align} \end{subequations} If $\calR : \Spz \to [0,\infty]$ is positively homogeneous of degree $1$, i.e.\ $\calR(\lambda z')=\lambda \calR(z')$ for all $\lambda>0$, then the system \eqref{grad-structure} is called \emph{rate-independent}, and the triple $(\Spu\ti\Spz,\calE,\calR)$ is called a rate-independent system, cf.\ \cite{MieRouBOOK}. Here, $\pl \calR: \Spz \rightrightarrows \Spz^*$ denotes the subdifferential of convex analysis for the nonsmooth functional $\calR$, whereas, throughout this introduction, for simplicity we will assume that the (proper) energy functional $\calE: [0,T]\ti \Spu \ti \Spz\to (-\infty,\infty]$, which is smooth with respect to\ time, is additionally smooth with respect to\ both variables $u$ and $z$. System \eqref{grad-structure} reflects the ansatz that energy is dissipated through changes of the internal variable $z$ only: in particular, the doubly nonlinear evolution inclusion \eqref{grad-structure-z} balances the dissipative frictional forces from $\pl \calR(z')$ with the restoring force $\rmD_z \ene t{u}{z}$. Despite the assumed smoothness of $(u,z) \mapsto \ene tuz$, system \eqref{grad-structure} is only formally written: due to the $1$-homogeneity of $\calR$, one can in general expect only $\BV$-time regularity for $z$. Thus $z$ may have jumps as a function of time and the pointwise derivative $z'$ in the subdifferential inclusion \eqref{grad-structure-z} need not be defined. This has motivated the development of various weak solution concepts for system \eqref{grad-structure}. \par \emph{Energetic solutions} were advanced in the late '90s in \cite{MieThe99MMRI, MiThLe02, MieThe04RIHM} for \emph{abstract} rate-independent systems, and in the context of phase transformations in solids. 
In the realm of crack propagation, an analogous notion of evolution was pioneered in \cite{Francfort-Marigo98} and later further developed in \cite{DM-Toa2002} with the concept of `quasistatic evolution'. Due to its flexibility, the energetic concept has been successfully applied to a wide scope of problems, see e.g.\ \cite{MieRouBOOK} for a survey. \Subsection{The vanishing-viscosity approach} \label{su:I.VVA} However, it has been observed that the energetic notion may fail to provide a feasible description of the system behavior at jumps, in the case of a \emph{nonconvex} driving energy. This fact has motivated the introduction of an alternative weak solvability concept, first suggested in \cite{EfeMie06RILS} and based on the vanishing-viscosity regularization of the rate-independent system as a selection criterion for mechanically feasible weak solutions. In the context of system \eqref{grad-structure}, this `viscous regularization' involves a second (lower semicontinuous, convex) dissipation potential $ \disv z : \Spz \to [0,+\infty)$, with superlinear growth at infinity; to fix ideas, we may think of a \emph{quadratic} potential. The vanishing-viscosity approach then consists in performing the asymptotic analysis of solutions to the \emph{rate-dependent} system \begin{subequations} \label{naive-vanvisc} \begin{align} \label{naive-van-visc-u} &\rmD_u \ene t{u(t)}{z(t)} =0 && \text{ in } \Spu^*, && t \in (0,T), \\ \label{naive-van-visc-z} \mbox{}\qquad\pl \calR(z'(t)) + \pl \disv z (\eps z'(t)) +{} & \rmD_z \ene t{u(t)}{z(t)} \ni 0 && \text{ in } \Spz^*, && t \in (0,T)\,, \qquad\mbox{} \end{align} \end{subequations} as the \emph{viscosity} parameter $\eps \to 0^+$. System \eqref{naive-vanvisc} now features \emph{two rates}: in addition to that of the external loading, scaling as $\eps^0=1$, the internal rate of the system, set on the faster scale $\eps$, is revealed. In diverse (finite-dimensional, infinite-dimensional, and even metric) setups, cf.\ \cite{EfeMie06RILS,MRS09,MRS12,Mielke-Zelik,MRS13} (see also \cite{KZ21} and \cite{RiScVe21TSSN}), solutions to the `viscous system' have been shown to converge to a different type of solution of \eqref{grad-structure}, which we shall refer to as \emph{Balanced-Viscosity} solution, featuring a better description of the jumps of the system. In parallel, the vanishing-viscosity approach has proved to be a robust method in manifold applications, ranging from plasticity (cf.\ e.g.\ \cite{DalDesSol11, BabFraMor12, FrSt2013}), to fracture \cite{KnMiZa08ILMC, Lazzaroni-Toader, Negri14}, damage and fatigue \cite{KRZ13, Crismale-Lazzaroni,ACO2018}, and to optimal control \cite{SteWachWach2017} to name a few. This paper revolves around a different, but still of \emph{vanishing-viscosity type}, solution notion for system \eqref{grad-structure}. Indeed, we are going to regularize it by considering a viscous approximation of \eqref{grad-structure-u}, besides the viscous approximation \eqref{naive-van-visc-z} of \eqref{grad-structure-z}. 
Therefore, we will address the asymptotic analysis as $\eps\to 0^+$ of the system of doubly nonlinear differential inclusions \begin{subequations} \label{van-visc-intro} \begin{align} \label{van-visc-intro-u} & \pl \disve{u}{\epsalpha}(u'(t)) +\rmD_u \ene t{u(t)}{z(t)} \ni 0 && \text{ in } \Spu^* && \foraa\, t \in (0,T), \\ & \label{van-visc-intro-z} \pl \calR (z'(t)) + \pl \disve{z}{\eps}(z'(t)) +\rmD_z \ene t{u(t)}{z(t)} \ni 0 && \text{ in } \Spz^* && \foraa\, t \in (0,T), \end{align} \end{subequations} where for $\mathsf x\in \{\mathsf u, \mathsf z\}$ we have set \begin{subequations} \label{eq:Def.Vx.la} \begin{equation} \label{eq:Def.Vx.lambda} \disve x{\lambda}(w): = \frac{1}{\lambda} \disv x (\lambda w) \text{ for } \lambda\in (0,\infty) \ \text{ and } \ \disve x{\infty} (w): = \begin{cases} 0 &\text{for } w=0, \\ \!\infty &\text{for }w\neq 0\end{cases} \end{equation} (the functional $\disve x{\infty}$ will indeed come into play later on, cf.\ \eqref{e:diff-char-intro}). Throughout we assume that $\disv x$ satisfies $\disv x(0)=0$, \ $\pl \disv x(0)=\{0\}$, and has superlinear growth, which implies that $\disve x0$ and $\disve x\infty$ are indeed the Mosco limits of $\disve x\lambda$ for $\lambda\to 0^+$ and $\lambda \to \infty$, respectively. We will use that the subdifferentials take the form \begin{equation} \label{eq:SubDiff.Vx} \pl \disve x{\lambda}(w)=\pl \disv x(\lambda w) \text{ for }\lambda\in [0,\infty) \ \text{ and } \ \pl \disve x{\infty}(w)= \begin{cases} \mathbf{X}^* &\text{for } w=0, \\ \emptyset &\text{for }w\neq 0.\end{cases} \end{equation} \end{subequations} The parameter $\alpha$ in \eqref{van-visc-intro-u} determines which of the two variables $u$ and $z$ relaxes faster to \emph{equilibrium} and \emph{rate-independent} evolution, respectively. Hence, following the finite-dimensional work \cite{MRS14} we shall refer to \eqref{van-visc-intro} as a \emph{multi-rate} system, with the time scale $\eps^0=1$ of the external loading and the (possibly different) relaxation times $\eps$ and $\eps^\alpha$ of the variables $z$ and $u$. From a broader perspective, with our analysis we aim to contribute to the investigation of \emph{coupled} rate-dependent/rate-independent phenomena, a topic that has attracted some attention over the last decade. In this connection, we may mention the study of systems with a \emph{mixed} rate-dependent/rate-independent character (typically, a rate-independent flow rule for the internal variable coupled with the momentum balance, with viscosity and inertia, for the displacements, and possibly with the heat equation), see the series of papers by T.\ Roub\'{\i}\v{c}ek \cite{Roub08, Roub10TRIP, Roub-PP, Roubicek-defect, Roub-Tomassetti}, among others. There, a weak solvability notion, still of \emph{energetic type}, was advanced, cf.\ also \cite{Rossi-Thomas-SIAM, Maggiani-Mora}. However, unlike in those contributions, in our `modeling' approach the balanced interplay of rate-dependent and rate-independent behavior does not stem from coupling equations with a rate-dependent and a rate-independent character. Instead, it emerges through the asymptotic analysis as $\eps \to 0^+$ of the `viscous' system \eqref{van-visc-intro}, which leads to a solution of the rate-independent one \eqref{grad-structure} that is `reminiscent of viscosity', \emph{in both variables} $u$ and $z$, in the description of the system behavior at jumps. 
This `full' vanishing-viscosity approach, also involving the displacement variable $u$, has been already carried out for a model for fracture evolution with pre-assigned crack path in \cite{Racca}, as well as in the context of perfect plasticity \cite{DMSca14QEPP,Rossi2018} and delamination \cite{Scala14}. With different techniques, based on an alternating minimization scheme, the emergence of viscous behavior both for the displacement and for the internal variable is demonstrated in \cite{Knees-Negri} for a phase-field type fracture model. In this mainstream, in \cite{MRS14} we have addressed the vanishing-viscosity analysis of \eqref{van-visc-intro} in a preliminary {finite-dimensional} setting, with $\Spu = \R^n$ and $\Spz = \R^m$, and for a {smooth} energy $\calE \in \rmC^1([0,T]{\ti}\R^n{\ti}\R^m)$, with the aim of emphasizing the role of viscosity in the description of the jump behavior of the limiting rate-independent system. Even in this significantly simplified setup, the analysis in \cite{MRS14} conveyed how the balanced interplay of the different relaxation rates in \eqref{van-visc-intro} enters in the description of the jump dynamics of the rate-independent system. In particular, it showed that viscosity in $u$ and viscosity $z$ determine the jump transition path in different ways depending on whether the parameter $\alpha$ is strictly bigger than, or equal to, or strictly smaller than $1$. The aim of this paper is to thoroughly extend the results from \cite{MRS14} to an \emph{infinite-dimensional} and \emph{non-smooth} setting, suited for the application of this vanishing-viscosity approach to models in solid mechanics. What is more, we will also broaden the analysis in \cite{MRS14}, which is confined to the case of \emph{quadratic} `viscous' dissipation potentials, to a fairly general class of potentials $\disv u$ and $\disv z$. \Subsection{Our results} \label{ss:1.1} Throughout most of this paper, we will confine the discussion to the \emph{abstract rate-independent} system $\RIS$ arising in the vanishing-viscosity limit of \eqref{van-visc-intro}. The notation looks a bit extensive, but has the advantage of emphasizing the dependence of the solution concept on the energy functional $\calE$, the three different types of dissipation $\disv u$, $\calR$, and $\disv z$, and the parameter $\alpha>0$. This also explains the name ``Balanced-Viscosity solution'' that suggests the appearance of the viscous effects by balancing the influence of $\calR$, $\disv x$, and $\disv z$ in such a way that the energy-dissipation balance remains true. Of course, using the abbreviation ``BV solution'' should remind us about the fact that these solutions may not be continuous but may have jumps as functions of time. In our opinion, in that general framework the main ideas underlying the vanishing-viscosity approach are easier to convey. Indeed, we aim to provide some possible recipes for the application of this approach to concrete rate-independent limiting processes, where of course the `abstract techniques' may have to be suitably adjusted to the specific situation. 
For this, we will strive to work in a fairly general setup, \begin{compactenum} \item encompassing nonsmoothness of the energies $u\mapsto \ene tuz$ and $z\mapsto \ene tuz$ through the use of suitable subdifferentials $\frname u\calE :[0,T]\ti \Spu \ti \Spz \rightrightarrows \Spu^*$ and $\frname z\calE :[0,T]\ti \Spu \ti \Spz \rightrightarrows \Spz^*$ in place of the G\^ateaux derivatives $\rmD_u \calE$ and $\rmD_z \calE$, and \item allowing for a wide range of `viscous dissipation potentials' $\disv u$ and $\disv z$. In particular, we shall allow for a much broader class of dissipation potentials $\disv z$ than those considered in \cite{MRS13}. \end{compactenum} The first cornerstone of our vanishing-viscosity analysis is the observation that the viscous system \eqref{van-visc-intro} has the structure of a \emph{generalized gradient system} (cf.\ \cite{Miel16EGCG}): indeed, it rewrites as \begin{equation} \label{GGS-structure} \pl \Psi_{\eps,\alpha}(q'(t)) +\rmD_q \eneq t{q(t)} \ni 0 \qquad \text{in } \Spq^* \quad \foraa\, t \in (0,T) \end{equation} with $q=(u,z)\in \Spq = \Spu\ti \Spz$ and \begin{equation} \label{eq:def.Psi.e.a} \Psi_{\eps,\alpha}(q') = ( \disve u{\epsalpha}{\oplus} (\calR {+} \disve z{\eps}))(q') := \disve{u}{\epsalpha}(u') + \calR(z') + \disve z{\eps}(z'). \end{equation} In turn, \eqref{GGS-structure} can be equivalently formulated using the single \emph{energy-dissipation balance} \begin{equation} \label{EnDissBal-intro} \eneq t{q(t)}+ \int_s^t \tMfu\eps\alpha {r}{q(r)}{q'(r)} \dd r = \eneq s{q(s)}+ \int_s^t \pl_t \eneq r{q(r)} \dd r \end{equation} for all $0 \leq s\leq t \leq T$, featuring the M-function \begin{equation} \label{tildeMF} \tMfu{\eps}{\alpha} {t}{q}{q'}: = \Psi_{\eps,\alpha} (q') + \Psi_{\eps,\alpha}^* ({-}\rmD_q \eneq tq) \end{equation} with the Legendre-Fenchel conjugate $\Psi_{\eps,\alpha}^*$ of $\Psi_{\eps,\alpha}$. This reformulation is often referred to as the \emph{energy-dissipation principle}; the germs of this idea trace back to E.\ De Giorgi's \emph{variational theory for gradient flows} in \cite{Ambr95MM}, see also \cite[Prop.\,1.4.1]{AGS08} and \cite[Thm.\,3.2]{Miel16EGCG}. In our setup, it is based on the validity of a suitable chain rule for $\calE$, which will be thoroughly discussed in the sequel. From \eqref{EnDissBal-intro} we obtain the basic a priori estimates on a sequence $(u_\eps,z_\eps)_\eps$ of solutions to \eqref{van-visc-intro}. Together with the additional bound \begin{equation} \label{u-bound-intro} \int_0^T \| u_\eps'(t)\|_{\Spu} \dd t \leq C, \end{equation} we are able to reparametrize the curves $(q_\eps)_\eps = (u_\eps,z_\eps)_\eps$ by their ``dissipation arclength'' $ \mathsf{s}_\eps: [0,T] \to [0,\mathsf{S}_\eps]$ given by \[ \mathsf{s}_\eps(t) : =\int_0^t \left( 1{+} \tMfu{\eps}\alpha{r}{q_\eps(r)}{q_\eps'(r)} {+} \| u_\eps'(r)\|_{\Spu}\right) \dd r \,. \] Reparametrization was first advanced in \cite{EfeMie06RILS} as a tool to capture the viscous transition paths, at jumps, in the rate-independent limit. With this aim, first of all we observe that, setting $\sft_\eps: = \sfs_\eps^{-1}: [0,\mathsf{S}_\eps]\to [0,T]$ and $\sfu_\eps: = u_\eps \circ \sft_\eps$, $\sfz_\eps: = z_\eps \circ \sft_\eps$, the rescaled curves $(\sft_\eps, \sfu_\eps,\sfz_\eps )_\eps$ satisfy a reparametrized version of \eqref{EnDissBal-intro}.
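Let us record an elementary consequence of this construction, which is worth keeping in mind: since $\mathsf{s}_\eps'\geq 1$ a.e.\ in $(0,T)$, the inverse function $\sft_\eps$ is well defined and $1$-Lipschitz, and a short computation shows that for almost all $s\in (0,\mathsf{S}_\eps)$ \[ \sft_\eps'(s)\,\Big( 1 {+} \tMfu{\eps}\alpha{\sft_\eps(s)}{\sfq_\eps(s)}{q_\eps'(\sft_\eps(s))} {+} \| u_\eps'(\sft_\eps(s))\|_{\Spu}\Big) = 1 \qquad \text{with } \sfq_\eps: = q_\eps \circ \sft_\eps\,. \] In particular, after the change of variables $t=\sft_\eps(s)$ the dissipation integrand in \eqref{EnDissBal-intro} is bounded by $1$, uniformly with respect to $\eps$.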
Using the first main results of this paper presented in Theorems \ref{thm:existBV} and \ref{thm:exist-enh-pBV}, we are able to pass to the limit in this reparametrized energy balance as $\eps\to0^+$ and obtain a triple $(\sft,\sfq) = (\sft,\sfu,\sfz): [0,\mathsf{S}]\to [0,T]\ti \Spu\ti \Spz$ satisfying the energy-dissipation balance \begin{equation} \label{lim-EDbal-intro} \begin{aligned} & \eneq {\sft(s_2)}{\sfq(s_2)} +\int_{s_1}^{s_2} \meq 0\alpha{\sft(s)}{\sfq(s)}{\sft'(s)}{\sfq'(s)} \dd s \\ & = \eneq {\sft(s_1)}{\sfq(s_1)} + \int_{s_1}^{s_2} \pl_t \eneq {\sft(s)}{\sfq(s)}\, \sft'(s) \dd s \quad \text{for all } 0 \leq s_1\leq s_2 \leq \mathsf{S}\,, \end{aligned} \end{equation} which encodes all the information on the behavior of the limiting rate-independent system in the expression of the `time-space dissipation function' $\mename 0\alpha$, thoroughly investigated in Section \ref{su:ReJoMFcn}. We shall call a triple $(\sft,\sfu,\sfz)$ complying with \eqref{lim-EDbal-intro} a \emph{parametrized Balanced-Viscosity} ($\pBV$, for short) solution to the rate-independent system $\RIS$. We highlight two main properties of this solution concept that follow from the special form of $\mename 0\alpha$: \begin{compactitem} \item[\textbullet] When a solution does not jump, i.e.\ when the function $\sft$ of the artificial time $s$, recording the (slow) external time scale, fulfills $\sft'(s)>0$, the term $\meq 0\alpha{\sft}{\sfq}{\sft'}{\sfq'} $ is finite if and only if $\sfu$ is \emph{stationary} and $\sfz$ is \emph{locally stable}, i.e.\ \[ -\rmD_u \ene {\sft(s)} {\sfu(s)} {\sfz(s)} =0 \text{ in } \Spu^* \quad \text{ and } \quad -\rmD_z \ene {\sft(s)} {\sfu(s)} {\sfz(s)} \in \pl \calR(0) \text{ in } \Spz^*. \] Because of the local character of the second condition, the unfeasible jumps that may occur in `energetic solutions' via their `global stability' are thus avoided. \item[\textbullet] The function $\mename 0\alpha$ in \eqref{lim-EDbal-intro} comprises the contributions of the dissipation potentials $\calR$, $\disv u$ and $\disv z$ by condensing the viscous effects into a description of the limiting jump behavior that can occur only if $\sft'(s)=0$, i.e.\ the slow external time is frozen. For example, if the dissipation potentials $\disv u$ and $\disv z$ are $p$-homogeneous (i.e.\ $\disv x(\lambda \sfx')=\lambda^p \disv x(\sfx')$ for $\lambda>0$), then for $\alpha=1$ and $\sft'=0$ we have \begin{equation} \label{2-homog-mename-intro} \begin{aligned} &\meq 01{\sft}{(\sfu,\sfz)}{0}{(\sfu',\sfz')} = \calR(\sfz') \\ &\qquad + \widehat c_p \: \big(\disv z (\sfz'){+} \disv u (\sfu')\big)^{1/p} \:\big( \disv u^*({-}\rmD_u \ene {\sft}{\sfu}{\sfz}) {+} \conj z({-}\rmD_z \ene {\sft}{\sfu}{\sfz}) \big)^{1{-}1/p} \end{aligned} \end{equation} (see Example \ref{ex:p-homog}). The symmetric role of $\disv u $ and $\disv z$ in \eqref{2-homog-mename-intro} arises because of $\alpha=1$ and reflects the fact that, at a jump, the system may switch to a viscous regime where \emph{both} dissipation mechanisms intervene in the evolution of $u$ and $z$, respectively. In contrast, for $\alpha>1$ and $\alpha<1$, the $M$-function $\mename 0\alpha$ shows the different roles of $\disv u $ and $\disv z$, cf.\ \eqref{l:partial}. \end{compactitem} These features are even more apparent in the characterization of a suitable class of $\pBV$ solutions in terms of a system of subdifferential inclusions that has the very same structure as the original viscous system \eqref{van-visc-intro}, as provided by Theorem \ref{thm:diff-charact}.
This result shows that a triple $(\sft,\sfu,\sfz) : [0,\mathsf{S}]\to [0,T] \ti \Spu\ti \Spz$ is an \emph{enhanced} $\pBV$ solution if and only if there exist measurable functions $\lambda_\sfu , \lambda_\sfz:[0,\mathsf{S}] \to [0,\infty]$ such that for almost all $s\in (0,\sfS)$ we have \begin{subequations} \label{e:diff-char-intro} \begin{equation} \label{param-subdif-incl-intro} \begin{aligned} \pl \disve u{\thn u(s) } (\sfu'(s)) +\rmD_u \ene {\sft(s)}{\sfu(s)}{\sfz(s)} \ni 0 & \text{ in } \Spu^*, \\ \pl \calR(\sfz'(s)) + \pl \disve z{\thn z(s)} ( \sfz'(s)) +\rmD_z \ene {\sft(s)}{\sfu(s)}{\sfz(s)} \ni 0 & \text{ in } \Spz^*, \end{aligned} \end{equation} \begin{equation} \label{switch-intro-1} \sft'(s)\:\frac{\thn u(s)}{1{+}\thn u(s)} =0 \quad \text{and} \quad \sft'(s)\:\frac{\thn z(s)}{1{+}\thn z(s)} =0, \end{equation} \begin{equation} \label{switch-intro-2} \begin{cases} \thn u(s) \: \dfrac{1}{1{+}\thn z(s)} =0& \text{ for }\alpha>1 , \\[0.6em] \thn u(s) =\thn z(s) &\text{ for }\alpha=1, \\[0.3em] \dfrac{1}{1{+}\thn u(s)}\,\thn z(s) =0& \text{ for }\alpha\in (0,1). \end{cases} \end{equation} \end{subequations} In \eqref{switch-intro-1} and \eqref{switch-intro-2} we use the obvious conventions $\frac{\infty}{1+\infty}=1$ and $\frac1{1+\infty}=0$, respectively. Condition \eqref{switch-intro-1} entails that the coefficients $\thn u(s)$ and $\thn z(s)$ of the `viscous terms' in \eqref{param-subdif-incl-intro} are allowed to be nonzero only when $\sft'(s)=0$, i.e.\ viscous behavior may manifest itself only at jumps, occurring at a fixed time $t_*=\sft(s)$ for $s\in [s_0,s_1]$. Conditions \eqref{switch-intro-2} reveal that the onset of viscous effects in $u$ and/or in $z$ depends on whether $u$ relaxes to equilibrium faster (case $\alpha>1$), with the same speed (case $\alpha =1$), or more slowly (case $\alpha<1$), than $z$ relaxes to local stability. In particular, the case $\lambda_\sfx=\infty$ leads to a blocking of the variable $\sfx\in \{\sfu,\sfz\}$, i.e.\ $\sfx'(s)=0$ and $\pl \disve x{\infty}(0) =\Spx^*$. These aspects will be thoroughly explored in Sections \ref{s:EDI} and \ref{ss:6.3-diff-charact}. Finally, in analogy with the case of the `single-rate' vanishing-viscosity approach developed in \cite{MRS12, MRS13}, here as well we introduce ``true \emph{Balanced-Viscosity solutions}'' (shortly referred to as \emph{BV solutions}) as the \emph{non-parametrized counterpart} to $\pBV$ solutions, see Definition \ref{def:trueBV}. These solutions are functions of the \emph{original time variable} $t\in [0,T]$ and fulfill an energy balance that again encompasses the contribution of the viscous dissipation potentials $\disv u$ and $\disv z$ to the description of energy dissipation at jump times of the solution. We are going to show that true $\BV$ solutions are related to $\pBV$ solutions in a canonical way, see Theorem \ref{th:pBV.v.BVsol}. What is more, in Theorems \ref{thm:exist-trueBV} and \ref{thm:exist-nonpar-enh} we provide general assumptions that guarantee that every pointwise-in-time limit of a family of (\emph{non-parametrized}) viscous solutions $q_\epsk:[0,T]\to \Spq$, for $\epsk \to 0^+$, is indeed a $\BV$ solution. We emphasize that the definition of BV solutions is independent of the vanishing-viscosity approach. This independence guarantees that the solution concept is indeed stable under parameter variations in the way shown in \cite[Thm.\,4.8]{MRS2013} for generalized gradient systems (cf.\ also \cite[Thm.\ 4.2]{MiRoSa_VARRIS12}).
In contrast, when performing the limit $\eps\to 0^+$ first and then a parameter limit $\delta\to \delta_*$, it is in general not possible to show that the obtained limit curve is a vanishing-viscosity limit for the fixed parameter $\delta_*$, see Remark \ref{rm:VVAvsBV}. In principle, our general definition of (parametrized) BV solutions for limiting rate-independent systems can be used and analyzed independently of the vanishing-viscosity approach. However, to avoid overburdening the present work we do not follow this line and restrict ourselves to situations where existence of solutions can be established exactly by these methods. After all, this is the mechanical motivation for considering such solution classes. \Subsection{Application to a model for delamination} In Section \ref{s:appl-dam} we show that our existence results for $\pBV$ solutions, characterized by \eqref{e:diff-char-intro}, and (true) $\BV$ solutions apply to a rate-independent process modeling delamination between two elastic bodies in adhesive contact along a prescribed interface. For a first approach to energetic solutions for this delamination problem, we refer to \cite{KoMiRo06RIAD}. A systematic approach to BV solutions for a multi-rate system involving elastoplasticity and damage is given in \cite{Crismale-Rossi19}. The vanishing-viscosity analysis for the viscously regularized delamination model poses nontrivial challenges due to the presence of various maximal monotone nonlinearities, in the displacement equation and in the flow rule for the delamination variable $z$, which for instance enforce the constraint $z(t,x) \in [0,1]$ and the unidirectionality of the evolution. In particular, the main challenge is to obtain the a priori estimate \eqref{u-bound-intro} uniformly in $\eps$ when taking the vanishing-viscosity limit. For this, it is necessary to carefully regularize the viscous system. Because of the relatively weak coupling between the displacement equation and the flow rule for $z$, the smoothed system possesses a \emph{semilinear structure} that allows us to apply the techniques developed in \cite[Sec.\,4.4]{Miel11DEMF} and \cite[Sec.\,2]{Mielke-Zelik}, see Section \ref{su:DelamSmooth}. \Subsection{Plan of the paper} \label{su:PlanPaper} In Section \ref{s:EDI} we introduce a prototype of the coupled systems that we aim to mathematically model through the Balanced-Viscosity concept. In this simplified context, avoiding technicalities, we illustrate the notion of (parametrized) $\BV$ solution and its mechanical interpretation. Section \ref{se:Tools} contains some auxiliary tools that will be central for the rest of the paper. It revolves around the construction of the \emph{vanishing-viscosity contact potentials} that will be relevant for describing the dissipative behavior of the viscously regularized system in the multi-rate case with time scales $1$, $\eps$, and $\eps^\alpha$. In fact, they will enter into the definition of the function $\mename 0\alpha$ in \eqref{lim-EDbal-intro}. Since in this paper we will extend the analysis of \cite{MRS14} to general viscous dissipation potentials, we will not be able to explicitly calculate the related vanishing-viscosity contact potential except for particular cases. Thus, a large part of Section \ref{se:Tools} will focus on the derivation of general properties of contact potentials that will lay the ground for the study of the dissipation function $\mename 0\alpha$.
In Section \ref{s:setup} we thoroughly establish the setup for our analysis, specifying the basic conditions on the spaces, on the energy functional, and on the dissipation potentials. Moreover, Theorem \ref{th:exist} recalls the existence result from \cite{MRS2013} for the viscous system \eqref{van-visc-intro}. Section \ref{su:AprioViscSol} is devoted to the derivation of a priori estimates for the solutions $(u_\eps,z_\eps)_\eps$ to \eqref{van-visc-intro} that are uniform with respect to the parameter $\eps$. Section \ref{su:ReJoMFcn} entirely revolves around the functional $\mename 0\alpha$, which has a central role in the definition of both $\pBV$ and true $\BV$ solutions. In particular, (i) we motivate its definition as the Mosco limit of the family of time-integrated dissipation functionals appearing in \eqref{EnDissBal-intro}, and (ii) relying on the results from Section \ref{se:Tools} we compute the limit $\mename 0\alpha$ explicitly and investigate its properties. In the subsequent subsections we give the definition of parametrized Balanced-Viscosity solution to the rate-independent system $\RIS$, state our existence results in Theorem \ref{thm:existBV} (and Theorem \ref{thm:exist-enh-pBV} for enhanced $\pBV$ solutions), and present the characterization of $\pBV$ solutions in terms of the subdifferential inclusions \eqref{e:diff-char-intro}, cf.\ Theorem \ref{thm:diff-charact}. In Section \ref{ss:4.3} we introduce \emph{true} $\BV$ solutions and state our existence result in Theorem \ref{thm:exist-trueBV} (and Theorem \ref{thm:exist-nonpar-enh} for enhanced BV solutions). In particular, we show that these solutions are obtained by taking the vanishing-viscosity limit in system \eqref{van-visc-intro} written in the \emph{real time} variable $t\in [0,T]$. We also gain further insight into the description of the jump dynamics provided by true $\BV$ solutions. The proofs of the main results of Sections \ref{s:4+} and \ref{ss:4.3} are carried out in Section \ref{s:8}. Section \ref{s:appl-dam} shows that our abstract setup is suitable to handle a concrete application in solid mechanics. In particular, in Theorem \ref{thm:BV-adh-cont} we prove the existence of enhanced parametrized and true $\BV$ solutions for a viscoelastic model with delamination along a prescribed interface. \Subsection{General notations} \label{su:I.GenNotations} Throughout the paper, for a given Banach space $X$, we will denote its norm by $\| \cdot \|_X$. For product spaces $X \ti \cdots \ti X$, we will often (up to exceptions) simply write $\| \cdot \|_X$ in place of $\| \cdot\|_{X {\ti \cdots \ti}X}$. By $\pairing{}{X}{\cdot}{\cdot}$ we shall denote both the duality pairing between $X$ and $X^*$ and the scalar product in $X$, if $X$ is a Hilbert space. We shall use the symbols $c,\,c',\, C,\,C'$, etc., whose meaning may vary even within the same line, to denote various positive constants depending only on known quantities. Furthermore, the symbols $I_i$, $i = 0, 1,\ldots $, will be used as place-holders for terms involved in the various estimates: we warn the reader that we will not be self-consistent with the numbering, so that, for instance, the symbol $I_1$ will occur several times with different meanings. \Section{A prototypical class of coupled systems} \label{s:EDI} In this section we illustrate the notion of parametrized $\BV$ solution for a prototypical and simple class of coupled systems to which the existence and characterization results obtained in the sequel will apply.
In particular, it contains a model combining linearized viscoelasticity and viscoplasticity. We shall confine the discussion to the particular case in which the ambient spaces \begin{subequations} \label{prototypical-setup} \begin{equation} \label{prot-Hilb} \text{$\Spu$ and $\Spz$ are Hilbert spaces,} \end{equation} the viscous dissipation potentials are \emph{quadratic}, namely \begin{equation} \label{prot-pot} \disv u : \Spu \to [0,\infty);\ u'\mapsto \frac12 \la \mathbb V_\sfu u',u'\ra, \qquad \disv z : \Spz \to [0,\infty);\ z' \mapsto \frac12 \la \mathbb V_\sfz z', z'\ra , \end{equation} with bounded linear and symmetric operators $\mathbb V_\sfx:\Spx\to \Spx^* $, and the driving energy functional is of the form \begin{equation} \label{prot-en} \ene tuz: = \frac12 \pairing{}{\Spu}{\bbA u}{u} + \pairing{}{\Spz}{\bbB u}{z} +\frac12 \pairing{}{\Spz}{\bbG z}{z} - \pairing{}{\Spu}{f(t)}{u} - \pairing{}{\Spz}{g(t)}{z} , \end{equation} \end{subequations} where $\bbA : \Spu \to \Spu^* $ and $\bbG: \Spz \to \Spz^*$ are linear, bounded and self-adjoint, $\bbB : \Spu \to \Spz^*$ is linear and bounded, and $(f,g): [0,T]\to \Spu^*\ti \Spz^*$ are smooth time-dependent applied forces. Moreover, we assume that the block operator $\binom{\bbA\ \bbB^*}{\bbB\ \bbG}$ is positive semidefinite. Together with the $1$-homogeneous potential $\calR: \Spz \to [0,\infty)$, the viscous system \eqref{van-visc-intro} reads \begin{subequations} \label{prot-van-visc-syst} \begin{align} & \label{prot-van-visc-u} \qquad \quad \eps^\alpha \mathbb V_\sfu u' + \bbA u+ \bbB^*z = f(t) && \text{in } \Spu^* && \foraa\, t \in (0,T), \\ & \label{prot-van-visc-z} \pl \calR(z') + \eps \mathbb V_\sfz z' + \bbB u + \bbG z \ni g(t) && \text{in } \Spz^* && \foraa\, t \in (0,T), \end{align} \end{subequations} with $\mathbb V_\sfx$ from \eqref{prot-pot}. It will be important to allow for coercivity of $\calR$ on a Banach space $\Spy$ such that $\Spz \subset \Spy$ continuously and $\calR(z')\geq c\|z'\|_{\Spy}$ for all $z'\in \Spy$. \begin{example}[Linearized elastoplasticity with hardening] \slshape Let the elastoplastic body occupy a bounded Lipschitz domain $\Omega\subset\R^d$: linearized elastoplasticity is described in terms of the displacement $u:\Omega \to \R^d$ with $u(t)\in \Spu=\rmH^1_0(\Omega;\R^d)$ for simplicity and in terms of the symmetric, trace-free plastic strain tensor $z:\Omega \to \R_{\mathrm{dev}}^{d\ti d} := \bigset{z\in \R_{\mathrm{sym}}^{d\ti d}}{ \mathrm{tr}(z)=0}$. The driving energy functional $\calE: [0,T]\ti \Spu\ti \Spz \to \R$ with $\Spz= \rmL^2(\Omega; \R_{\mathrm{dev}}^{d\ti d})$ is defined by \[ \ene tuz : = \int_\Omega \left( \tfrac12 (e(u){-}z){:} \mathbb{C}(e(u){-}z) {+}\tfrac12z{:} \mathbb{H} z \right) \dd x - \pairing{}{H_0^1(\Omega;\R^d)}{f(t)}{u} \] where $e(u)$ is the linearized symmetric strain tensor, $\mathbb{C} \in \mathrm{Lin}(\R_\mathrm{sym}^{d\ti d}) $ and $\mathbb{H}\in \mathrm{Lin}(\R_\mathrm{dev}^{d\ti d})$ are the positive definite and symmetric elasticity and hardening tensors, respectively, and $f: [0,T]\to \rmH^{-1}(\Omega;\R^d)$ is a time-dependent volume loading.
The dissipation potentials are \[ \calR(z') = \int_\Omega \sigma_\mathrm{yield}|z'| \dd x , \quad \disv u(u'): = \int_\Omega \tfrac12 e(u') : \bbD e(u') \dd x, \quad \disv z (z') : = \int_\Omega \tfrac12 z': \mathbb V z' \dd x \] where $\sigma_\mathrm{yield}>0$ is the yield stress and $\mathbb{D} \in \mathrm{Lin}(\R_\mathrm{sym}^{d\ti d})$ and $\mathbb V \in \mathrm{Lin}(\R_\mathrm{dev}^{d\ti d})$ are the symmetric and positive definite viscoelasticity and viscoplasticity tensors, respectively. Hence, system \eqref{prot-van-visc-syst} translates into \[ \begin{aligned} -\mathrm{div}\big(\eps^\alpha \bbD e(u') + \bbC (e(u){-}z)\big) \ \ \qquad & = f(t) && \text{ in } \Omega \ti (0,T), \\ \sigma_\mathrm{yield} \Sign (z') + \eps \mathbb V z' + \mathrm{dev}\big(\bbC (z{-}e(u))\big) + \mathbb{H} z & \ni\ \ 0 && \text{ in } \Omega \ti (0,T). \end{aligned} \] Here ``$\mathrm{dev}$'' projects to the deviatoric part, namely $\mathrm{dev}\,A = A - \frac1d (\mathrm{tr}\!\; A)\,I$. \end{example} For the system $\RIS$ from \eqref{prototypical-setup}, featuring positively $2$-homogeneous dissipation potentials, the time-space dissipation function $\mename 0\alpha$ that enters into the definition of (parametrized) Balanced-Viscosity solutions can be explicitly computed (cf.\ Example \ref{ex:p-homog} ahead). Nonetheless, here we can give an even more transparent illustration of $\pBV$ solutions in terms of their differential characterization \eqref{e:diff-char-intro}. The upcoming Theorem \ref{thm:diff-charact} states that a triple $(\sft,\sfu,\sfz)$ is an (\emph{enhanced}) parametrized $\BV$ solution if and only if it solves, for almost all $s\in (0,\sfS)$, \begin{equation} \label{param-subdif-incl-2} \begin{aligned} \thn u(s) \mathbb V_\sfu {\sfu'(s)} +{}&\bbA \sfu(s)+ \bbB^* \sfz(s) \ni f(\sft(s)) && \text{in } \Spu^*, \\ \pl \calR(\sfz'(s)) + \thn z(s) \mathbb V_\sfz{\sfz'(s)}+ {}& \bbB \,\sfu(s) \,+ \bbG \,\sfz(s) \ni g(\sft(s)) && \text{in } \Spz^*, \end{aligned} \end{equation} jointly with the `switching conditions' \eqref{switch-intro-1}--\eqref{switch-intro-2} on the measurable functions $\thn u, \, \thn z: (0,\sfS)\to [0,\infty]$. Here ``$\infty \mathbb V_\sfz{\sfz'}\,$'' has to be interpreted in the sense of $\pl \disve z{\infty}(\sfz')$, see \eqref{eq:SubDiff.Vx}. We recall that \eqref{switch-intro-1} simply ensures that, if the system is not jumping (i.e., $\sft'(s)>0$), then viscosity does not come into action, i.e.\ $\thn u(s)=\thn z(s)=0$. This means that $\sfu(s)$ is in `E'quilibrium with respect to $\sfz(s)$ and the loading $f(\sft(s))$, whereas $\sfz$ evolves according to the truly `R'ate-independent evolution $\pl \calR(\sfz')+\bbB \sfu+ \bbG\sfz \ni g$; hence we will denote this evolution regime by $\rgs Eu \rgs Rz$ in Section \ref{ss:6.3-diff-charact}. Conditions \eqref{switch-intro-2} differ in the three cases $\alpha=1$, $\alpha>1$ and $\alpha\in (0,1)$ and indeed show how the (possibly different) relaxation rates of the variables $u$ and $z$ influence the system behavior at jumps, see Section \ref{ss:6.3-diff-charact} for a full discussion of the occurring evolution regimes. For \underline{\emph{$\alpha=1$}} the variables $u$ and $z$ relax with the same rate: at a jump, the system \emph{may} switch to a viscous regime where the viscosities in $u$ and in $z$ are involved \emph{equally}, since the coefficients $\thn u$ and $\thn z$ modulating the `V'iscosity terms $\mathbb V_\sfu{\sfu'}$ and $\mathbb V_\sfz{\sfz'} $ coincide.
This evolution regime will be denoted $\rgs V{uz}$. For \underline{\emph{$\alpha>1$}} the switching condition \eqref{switch-intro-2} imposes that either $\thn z =\infty$ (i.e.\ $\sfz'=0$) or that $\thn u=0$ (so that $\sfu$ is at equilibrium). Indeed, since $\sfu$ relaxes `V'iscously faster to equilibrium than $\sfz$ to rate-independent evolution, $\sfz$ is `B'locked until $\sfu$ has reached the equilibrium: and we call this evolution regime $\rgs Vu \rgs Bz$. After that $\sfu$ is in `E'quilibrium and $\sfz$ may have a `V'iscous transition with $\thn z>0$, a regime denoted by $\rgs Eu \rgs Vz$. Moreover, under suitable conditions on the operators $\bbA $, $\bbB $, and $\bbG$ which in particular ensure that the functional $\ene t{\cdot}z$ from \eqref{prot-en} is uniformly convex, the arguments from \cite[Prop.\,5.5]{MRS14} may be repeated for the system $\RIS$ defined via \eqref{prototypical-setup}. Hence, it is possible to show that the regime $\rgs Vu \rgs Bz$ can only occur once in the initial phase, while $\sfu$ never leaves equilibrium afterwards, i.e.\ only $\rgs Eu \rgs Rz$ and $\rgs Eu \rgs Vz$ are possible. For \underline{\emph{$\alpha \in (0,1)$}} the variable $\sfz$ relaxes faster than $\sfu$, which leads to the two viscous regimes: (i) $\rgs Bu \rgs Vz$ where $\sfu$ is blocked ($\thn u=\infty$) while $\sfz$ evolves viscously, and (ii) $\rgs Vu \rgs Rz$ where $\sfu$ relaxes to equilibrium while $\sfz$ stays in locally stable states ($\thn z=0$). For $\alpha \in (0,1)$ these two regimes and $\rgs Eu \rgs Rz$ may occur more than once in the evolution of the system. \Section{Some auxiliary tools for dissipation potentials} \label{se:Tools} In this section we prepare a series of useful tools for handling the balanced effect of the different dissipation potentials. They will be essential for the upcoming analysis and may be interesting elsewhere. \begin{definition}[Primal and dual dissipation potentials] \label{def:DissPotential} Let $X$ be a reflexive Banach space. Then, a function $\psi:X\to [0,\infty]$ is called a \emph{(primal) dissipation potential}, if \[ \psi \text{ is convex, lower semicontinuous (lsc, for short) and } \ \psi(0)=0. \] The \emph{dual dissipation potential} $\psi^*:X^*\to [0,\infty]$ is defined via Legendre-Fenchel conjugation as \[ \psi^*(\xi): = \sup\bigset{\langle \xi,v\rangle - \psi(v)}{ v\in X}. \] \end{definition} Note that $\psi^*$ is indeed again a dissipation potential, and we have $(\psi^*)^*=\psi$. In this section, we allow for functionals $\psi$ taking the value $\infty$ as well as degenerate functionals such that $\psi(v)=0$ for $v\neq 0$. With $\psi$ we associate the \emph{B-function} \begin{equation} \label{Bfunctions-2} \calB_{\psi} : (0,\infty) \ti X \ti [0,\infty) \to [0,\infty], \qquad \Bfu {\psi}\tau v\sigma: =\tau \psi\left(\frac v\tau\right) +\tau\sigma\,. \end{equation} We highlight the rescaling properties of $\calB_\psi$ as follows \begin{equation} \label{eq:mfB.rescale} \Bfu {\psi}\tau v\sigma = \tau \, \Bfu {\psi} 1{\frac1\tau v}\sigma = \frac1\delta \Bfu {\psi}{\delta\tau}{\delta v}\sigma \quad \text{for all }\delta>0. \end{equation} We will use that the functional $ \calB_{\psi}(\cdot,\cdot, \sigma)$ is convex for all $\sigma \geq 0$. To see this, we consider $\tau_0,\, \tau_1 \in (0,\infty)$, $v_0,v_1 \in X$, and $\theta \in [0,1]$ and set $\tau_\theta: = (1{-}\theta)\tau_0+\theta\tau_1 >0$ and $v_\theta: = (1{-}\theta)v_0+\theta v_1 $. 
With this we find \begin{equation} \label{miracle-HS} \begin{aligned} \Bfu {\psi}{\tau_\theta}{v_\theta}\sigma &= \tau_\theta \,\psi\left(\frac {v_\theta}{\tau_\theta}\right) +\tau_\theta\sigma = \tau_\theta \,\psi\Big( \frac{(1{-}\theta)\tau_0}{\tau_\theta} \,\frac1{\tau_0} v_0 + \frac{\theta \tau_1}{\tau_\theta} \,\frac1{\tau_1} v_1 \Big) + \tau_\theta \sigma \\ & \overset{(1)}{\leq} \tau_\theta\Big( \frac{(1{-}\theta) \tau_0}{\tau_\theta} \psi\left(\frac{v_0}{\tau_0} \right) + \frac{\theta\tau_1}{\tau_\theta} \psi\left(\frac{v_1}{\tau_1} \right) \Big) + \big(1{-}\theta)\tau_0 \sigma + \theta\tau_1 \sigma \\ & =(1{-}\theta) \Bfu\psi{\tau_0}{v_0}\sigma + \theta \Bfu\psi{\tau_1}{v_1}\sigma, \end{aligned} \end{equation} where in $\overset{(1)}{\leq} $ we used the convexity of $\psi$. We next define the functional \begin{align} & \label{bipotentials-1} \qquad \mfb_\psi: X\ti [0,\infty) \to [0,\infty]; && \mfb_{\psi}(v,\sigma) : = \inf\bigset{\Bfu {\psi}\tau v\sigma}{\tau>0}. \end{align} We shall refer to the functional $\mfb_\psi $ as \emph{vanishing-viscosity contact potential} associated with $\psi$, in accordance with the terminology used in \cite{MRS12}. As we will see, $\mfb_\psi$ will be handy for describing the interplay of vanishing viscosity and time rescaling upon taking the limit of \eqref{van-visc-intro}. \Subsection{Properties of vanishing-viscosity contact potentials $\mfb_\psi$} \label{su:mfb.psi} For arbitrary dissipation potentials $\psi$, we define the \emph{rate-independent part} $\psi_\mathrm{ri}:X \to [0,\infty]$ via \begin{equation} \label{eq:RI.part} \psi_\mathrm{ri}(v)=\lim_{\gamma \to 0^+} \frac1\gamma \psi(\gamma v) = \sup\bigset{\langle \eta,v\rangle_X }{ \eta \in \pl \psi(0) }. \end{equation} The following results are slight variants of the results in \cite[Thm.\,3.7]{MRS12}. \begin{proposition}[Properties of vanishing-viscosity contact potentials] \label{pr:VVCP} Assume that the dissipation potential $\psi:X \to [0,\infty]$ is superlinear, i.e.\ \begin{equation} \label{ass-diss-pot-superl} \lim_{\|v\|_X\to \infty}\frac{\psi(v)}{\|v\|_X} = \infty\,. \end{equation} Then, $\mfb_\psi$ has the following properties: \begin{compactenum} \item[\upshape(b1)] $\mfb_\psi(v,\sigma)=0$ implies $\sigma=0$ or $v=0$. Moreover, $\mfb_\psi(0,\sigma)=0$ for all $\sigma\geq 0$. \item[\upshape(b2)] For all $v\in X$ the function $\mfb_\psi(v,\cdot):[0,\infty) \to [0,\infty]$ is nondecreasing and concave. For $v\neq 0$ and $\sigma>0$ the infimum in the definition of $\mfb_\psi$ is attained at a value $\tau_{v,\sigma} \in (0,\infty)$. Moreover, for all $v \neq 0 $ and $\sigma>0$ we have $\mfb_\psi (v,\sigma) > \mfb_\psi (v,0)=\psi_\mathrm{ri}(v)$. \item[\upshape(b3)] For all $\sigma\geq 0$ the function $\mfb_\psi(\cdot,\sigma):X \to {[0,\infty)}$ is positively 1-homogeneous, lsc, and convex. \item[\upshape(b4)] If $\psi= \phi+\varphi$ where $\phi$ is 1-homogeneous, then $\mfb_{\phi+\varphi}(v,\sigma)= \phi(v) + \mfb_\varphi(v,\sigma)$. \item[\upshape(b5)] For all $(v,\eta)\in X\ti X^*$ we have $ \mfb_\psi(v,\psi^*(\eta)) \geq \langle \eta,v\rangle_X$. \end{compactenum} \end{proposition} \begin{proof} The main observation is that the function $g_{v,\sigma}: (0,\infty) \ni \tau \mapsto \tau \psi(\frac1\tau v) + \tau \sigma$ is convex (cf.\ \eqref{miracle-HS}) and takes only nonnegative values. For $\sigma>0$ we have $g_{v,\sigma}(\tau) \to \infty$ for $ \tau \to \infty$, and for $v\neq 0$ we have $g_{v,\sigma}(\tau)\to \infty$ for $\tau \to 0^+$ due to superlinearity of $\psi$. 
\underline{Part (b1):} If $\mfb_\psi(v,\sigma) = \inf g_{v,\sigma}(\cdot)=0$, the infimum must either be realized for $\tau \to 0^+$ or for $\tau \to \infty$. In the first case, the value of $\sigma $ does not matter, but the superlinearity of $\psi$ gives $\tau \psi(\frac1\tau v) \to \infty$, unless $v=0$. In the second case we have $\tau \sigma \to \infty$, unless $\sigma=0$. The relation $\mfb_\psi(0,\sigma)=0$ is obvious. \underline{Part (b2):} The first two statements follow because $\mfb_\psi(v,\cdot)$ is the infimum of a family of functions that are increasing and concave in $\sigma$. For $v\neq 0$ and $\sigma>0$ the minimum of $g_{v,\sigma}(\tau)$ is achieved at a $\tau_{v,\sigma} \in (0,\infty)$ as $g_{v,\sigma}(\tau) \to \infty$ on both sides (i.e., as $\tau \to 0^+$ and $\tau \to \infty$). Thus, $\mfb_\psi(v,\sigma)\geq \mfb_\psi(v,0)+ \sigma \tau_{v,\sigma} >\mfb_\psi(v,0)$ as desired. The relation $\mfb_\psi (v,0) = \psi_\mathrm{ri}(v)$ follows easily from the convexity of $\psi$. \underline{Part (b3):} The positive 1-homogeneity $\mfb_\psi(\gamma v, \sigma)=\gamma \mfb_\psi(v,\sigma)$ for all $\gamma>0$ follows by replacing $\tau $ by $\tau \gamma$. Convexity is obtained as follows. For fixed $v_0,v_1\in X$, $\theta \in (0,1)$, and $\sigma\geq 0$, we choose $\eps>0$ and find $\tau_0,\tau_1>0 $ such that for $j\in \{0,1\}$ we have \begin{equation} \label{eq:mfb.psi.eps} \tau_j\,\psi\big(\frac1{\tau_j} v_j \big) + \tau_j \sigma \leq \mfb_\psi(v_j,\sigma)+\eps. \end{equation} Here we assumed without loss of generality $\mfb_\psi(v_j,\sigma)<\infty$ since otherwise there is nothing to be shown. Now we set $v_\theta= (1{-}\theta) v_0 + \theta v_1 $ and $\tau_\theta =(1{-}\theta) \tau_0 + \theta \tau_1>0$. Using the convexity \eqref{miracle-HS} of the functional $\Bfu{\psi}{\cdot}{\cdot}{\sigma}$, we obtain \begin{align*} \mfb_\psi(v_\theta,\sigma) \leq \Bfu\psi{\tau_\theta}{v_\theta}\sigma & \leq (1{-}\theta) \Bfu\psi{\tau_0}{v_0}\sigma +\theta \Bfu\psi{\tau_1}{v_1}\sigma \\ & \leq (1{-}\theta)\mfb_\psi(v_0,\sigma) + \theta \mfb_\psi(v_1,\sigma) + \eps, \end{align*} with the last inequality due to \eqref{eq:mfb.psi.eps}. Since $\eps>0$ was arbitrary, this is the desired result. To prove lower semicontinuity, we use the special way $\mfb_\psi$ is constructed and that $\psi $ is lsc. For all sequences $v_j\to v_*$ and $\sigma_j\to \sigma_*$ we have to show \[ \mfb_\psi (v_*,\sigma_*) \leq \alpha:=\liminf_{j\to \infty} \mfb_\psi (v_j,\sigma_j)\,. \] We may assume $\alpha<\infty$ and $\mfb_\psi(v_*,\sigma_*)>0$, since otherwise the desired estimate is trivial. The case $\sigma_*=0$ is simple, as we have \[ \alpha=\liminf_{j\to \infty} \mfb_\psi (v_j,\sigma_j) \geq \liminf_{j\to \infty} \mfb_\psi (v_j,0) \geq \liminf_{j\to \infty} \psi_\mathrm{ri}(v_j) \geq \psi_\mathrm{ri}(v_*) =\mfb_\psi(v_*,0)= \mfb_\psi(v_*,\sigma_*). \] It remains to consider the case $v_*\neq 0$ and $\sigma_*>0$. Since $\|v_j\|\geq \|v_*\|/2>0$ and $\sigma_j\geq \sigma_*/2>0$ for sufficiently large $j$, we see that the optimal $\tau_j=\tau_{v_j,\sigma_j}$ lie in a set $[1/M,M]\Subset (0,\infty)$.
Thus, choosing a subsequence (not relabeled), we may assume $\tau_j\to \tau_\circ$ and obtain lower semicontinuity by using $\frac1{\tau_j}v_j \to \frac1{\tau_\circ}v_*$ as follows: \[ \alpha=\liminf_{j\to \infty} \mfb_\psi (v_j,\sigma_j)= \liminf_{j\to \infty} \left( \tau_j\psi\big(\frac1{\tau_j}v_j\big) + \tau_j \sigma_j \right) \geq \tau_\circ\psi\big(\frac1{\tau_\circ}v_*\big) + \tau_\circ \sigma_* \geq \mfb_\psi(v_*,\sigma_*). \] \underline{Part (b4):} The formula for $\mfb_{\phi+\varphi}$ follows from a direct calculation. \underline{Part (b5):} We have $g_{v, \psi^*(\eta)}(\tau) = \tau \Big(\psi\big(\frac1\tau v\big) + \psi^*(\eta) \Big) \geq \tau\big(\langle \eta,\frac1\tau v\rangle \big) = \langle \eta,v\rangle, $ and taking the infimum over $\tau>0$ gives the result. Thus, Proposition \ref{pr:VVCP} is proved. \end{proof} There is a canonical case in which $\mfb_\psi$ can be given explicitly, namely the case that $\psi(v)$ only depends on the Banach-space norm $\|v\|$. In that case we have an explicit expression for $\mfb_\psi$ and the functional $ X \times X^* \ni (v,\eta) \mapsto \mfb_\psi(v,\psi^*(\eta))$. \begin{lemma}[Dissipation potentials depending on the norm] \label{le:psi.norm} Assume that $\psi$ is given in the form $\psi(v)=\zeta(\|v\|)$, where $\zeta:[0,\infty)\to [0,\infty]$ satisfies $\zeta(0)=0$ and is lsc, nondecreasing, convex, and superlinear. Setting $\zeta'(0)= \lim_{h\to 0^+} \frac1 h \zeta(h)$ we have the identities \begin{equation} \label{explicit-vvcp} \begin{aligned} \mfb_\psi(v,\sigma)&= \| v\| \kappa_\zeta(\sigma) \text{ with } \kappa_\zeta(\sigma):=\inf\bigset{\tau \zeta(1/\tau)+\tau\sigma}{ \tau>0}, \\ \mfb_\psi(v,\psi^*(\xi)) &= \| v\| \,\max\big\{ \zeta'(0)\, , \, \|\xi\|_* \big\}. \end{aligned} \end{equation} \end{lemma} \begin{proof} The first statement is trivial for $v=0$. For $v\neq 0$ we can replace $\tau$ by $\tau \|v\|$ and obtain the desired product form with $\|v\|$ as the first factor. To obtain the second statement in \eqref{explicit-vvcp} we first observe that $\psi^*(\xi)=\zeta^*(\|\xi\|_*)$ with $\zeta^*(r)=\sup\bigset{r\rho-\zeta(\rho)}{\rho\geq 0}$. As $\zeta$ is superlinear, $\zeta^*(r)$ is finite for all $r\geq 0$, and $\zeta^*(r)=0$ for $r\in [0,\zeta'(0)]$. Secondly, we characterize $\kappa_\zeta$ by using the following estimate \[ \kappa_\zeta(\zeta^*(r)) =\inf\bigset{\tau\big(\zeta(\tfrac1\tau) {+} \zeta^*(r)\big)}{\tau>0} \geq \inf\bigset{\tau\big( \tfrac1\tau \,r \big)}{\tau>0} = r. \] The inequality is even an identity if the infimum is attained, which is the case if $\frac1\tau \in \pl \zeta^*(r)$ for some $\tau$. Thus, we have attainment whenever $\zeta^*(r)>0$, whereas for $r\in [0,\zeta'(0)]$, where $\zeta^*(r)=0$, we have non-attainment but find $\kappa_\zeta(0)= \zeta'(0)$. Together we arrive at $\kappa_\zeta(\zeta^*(r))= \max\{\zeta'(0),r\}$ (see also \cite[Sec.\,2.3]{LiMiSa18OETP}), and $ \mfb_\psi( v,\psi^*(\xi))=\| v\| \kappa_\zeta(\zeta^*(\|\xi\|_*))$ gives the desired result. \end{proof} The above result shows that the estimate $\mfb_\psi( v,\psi^*(\xi)) \geq \la \xi,v\ra$ in (b5) improves to \begin{equation} \label{used-later-HS} \mfb_\psi( v,\psi^*(\xi)) \geq \|\xi\|_*\|v\| \end{equation} in certain cases, in particular in the metric approach used in \cite{RMS08, MRS09}. As some of the following examples show, the latter estimate is not true in general, and that is why we will derive general lower bounds on the vanishing-viscosity contact potential $\mfb_\psi$ in Section \ref{su:LowerBounds}.
\begin{example}[The function $\mfb_\psi$ for some special cases] \label{ex:VVCP}\slshape The following cases give some intuition about the vanishing-viscosity contact potential $\mfb_\psi$. \\[0.3em] (A) Assume that $\psi$ is positively $p$-homogeneous with $p\in (1,\infty)$, i.e.\ $\psi(\gamma v) = \gamma^p \psi(v)$ for all $\gamma>0$ and $v\in X$. Then, we have \begin{equation} \label{p-homo-mfb} \mfb_\psi(v,\sigma )= \big(\psi(v)\big)^{1/p}\: \hat c_p\, \sigma^{1/p'}, \quad \text{where } \hat c_p = p^{1/p} (p')^{1/p'}\text{ and } \frac1p+\frac1{p'} =1. \end{equation} In particular, for $\psi(v) = \frac1p \|v\|^p$ we find \[ \mfb_\psi(v,\sigma ) = \|v\|\big(p'\sigma\big)^{1/p'} \quad \text{ and }\quad \mfb_\psi(v,\psi^*(\eta)) = \|v\|\|\eta\|_*\, . \] \\[0.3em] (B) On $X=\R^2$ consider $\psi(v)=\frac12(av_1^2+b v_2^2)$ with $a,b>0$. Then, \[ \mfb_\psi(v,\sigma) = \big(av_1^2{+}b v_2^2\big)^{1/2} \:(2\sigma)^{1/2} \quad \text{ and }\quad \mfb_\psi(v,\psi^*(\xi))=\big(av_1^2{+}b v_2^2\big)^{1/2}\big(\frac1a \xi_1^2{+}\frac1b \xi_2^2\big)^{1/2}. \] If $\R^2$ is equipped with the Euclidean norm $\|\cdot\|$, then $ \mfb_\psi(v,\psi^*(\xi)) \geq \big(\frac{\min\{a,b\}}{\max\{a,b\}}\big)^{1/2} \| \xi\|_* \|v\|$, but estimate \eqref{used-later-HS} fails, while $ \mfb_\psi(v,\psi^*(\xi)) \geq \la \xi , v \ra$ obviously holds. \\[0.3em] (C) Again for $X=\R^2$ consider $\psi(v)=\frac12v_1^2+\phi(v_2)$ with \[ \phi(s)=\begin{cases} \frac12 s^2 &\text{for } |s|\leq 1,\\ \frac14(|s|{+}1)^2 -\frac12 &\text{for }|s|\geq 1, \end{cases} \quad \text{and} \quad \phi^*(r)=\begin{cases} \frac12 r^2 &\text{for } |r|\leq 1,\\ r^2 -|r| +\frac12 &\text{for }|r|\geq 1. \end{cases} \] An explicit calculation leads to the expression \[ \mfb_\psi(v,\sigma) = \begin{cases} \|v\|\sqrt{2\sigma} &\text{for } v_1^2 \geq (2\sigma{-}1) v_2^2,\\ \frac12\sqrt{2v_1^2{+}v_2^2}\,\sqrt{4\sigma{-}1}+ \frac{|v_2|}2 &\text{for } \sigma\geq 1/2 \text{ and } v_1^2 \leq (2\sigma{-}1) v_2^2. \end{cases} \] Using $\psi^*(\xi)=\frac12\xi_1^2 +\phi^*(\xi_2)$ we find $ \mfb_\psi(v, \psi^*(\xi)) =\|\xi\|_*\|v\|$ whenever $\|\xi\|_*\leq 1$. However, the explicit form of $\mfb_\psi$ shows that, in general, $ \mfb_\psi(v,\psi^*(\xi)) $ cannot be expressed in terms of $\|v\|$ and $\|\xi\|_*$ alone. With $\psi^*(\xi)\geq \frac12\|\xi\|_*^2$ and $\mfb_\psi(v,\sigma)\geq \frac12\big(\sqrt{4\sigma{-}1} +1\big)\|v\|$ for $\sigma\geq 1/2$ we obtain $ \mfb_\psi(v,\psi^*(\xi)) \geq \|v\|\frac12\big(\sqrt{2\|\xi\|_*^2{-}1}+1\big)$ for $\|\xi\|_*\geq 1$. \\[0.3em] (D) We still look at the case $X=\R^2$ with the Euclidean norm $\|v\|=\big(v_1^2{+} v_2^2 \big)^{1/2}$ and \[ \psi(v)=\frac12\, v_1^2 + \frac14 \, v_2^4 \quad \text{and} \quad \psi^*(\xi)=\frac12\, \xi_1^2 + \frac34 \, |\xi_2|^{4/3}. \] In principle, we can calculate $\mfb_\psi(v_1,v_2,\sigma)$ explicitly; however, it suffices to use (A), giving \[ \mfb_\psi(v_1,0,\sigma)= |v_1|(2\sigma)^{1/2} \quad \text{and} \quad \mfb_\psi(0,v_2,\sigma) = |v_2| \big(\tfrac43 \sigma\big)^{3/4}. \] Inserting $\sigma =\psi^*(\xi_1,\xi_2)$ and choosing the ``wrong directions'' with $\langle \xi, v \rangle =0$ we find \[ \mfb_\psi(v_1,0, \psi^*(0,\xi_2)) = \big(\tfrac32\big)^{1/2} |v_1|\,|\xi_2|^{2/3} \quad \text{and} \quad \mfb_\psi(0,v_2, \psi^*(\xi_1,0)) = \big(\tfrac23\big)^{3/4} |v_2|\,|\xi_1|^{3/2}. \] Clearly, there cannot be a constant $c_0>0$ such that $ \mfb_\psi(v,\psi^*(\xi)) \geq c_0 \| v\| \, \|\xi\|$ for all $v,\xi\in \R^2$.
Of course, the relations are compatible with (b5) in Proposition \ref{pr:VVCP}, i.e.\ $ \mfb_\psi(v,\psi^*(\xi)) \geq \langle \xi, v \rangle$. \end{example} As we will see, the vanishing-viscosity contact potentials $\mfb_\psi$, which were developed for the case of two-rate problems (with time scales $1$ and $\eps$) in \cite{MRS12}, are also relevant to describe the limiting behavior of B-functions in the multi-rate case with time scales $1$, $\eps$, and $\eps^\alpha$. For this, we will use the concept of (sequential) Mosco convergence, which we recall here for a sequence of functionals $\mathscr{F}_n: \scrX \to (-\infty,+\infty] $ defined in a Banach space $\scrX$. \begin{definition}[Mosco convergence] \label{def:Mosco} We say that $\mathscr{F}: \scrX \to (-\infty,+\infty] $ is the \emph{Mosco limit} of the functionals $(\mathscr{F}_n)_n$ as $n\to\infty$, and write $ \mathscr{F}_n \overset{\rmM}\to \mathscr{F}$ in $\scrX$, if the following two conditions hold: \begin{subequations} \label{Gamma-convergence} \begin{align} \label{Gamma-liminf-Fn} &\Gamma\text{-}\liminf\ \mathrm{estimate:} \qquad x_n\weakto x \text{ weakly in } \scrX \ \ \Longrightarrow \ \ \mathscr{F}(x) \leq \liminf_{n\to\infty} \mathscr{F}_n(x_n); \\ \nonumber &\Gamma\text{-}\limsup\ \mathrm{estimate:} \\ & \label{Gamma-limsup-Fn} \qquad \qquad\forall\, x \in \scrX \ \exists\, (x_n)_n \subset \scrX: \ \ x_n\to x \text{ strongly in } \scrX \ \text{ and } \mathscr{F}(x) \geq \limsup_{n\to\infty} \mathscr{F}_n(x_n). \end{align} \end{subequations} \end{definition} \Subsection{Mosco convergence for the joint B-functions $\mfB_\eps^\alpha$} \label{su:MutliRotePot} In view of the vanishing-viscosity analysis of \eqref{van-visc-intro}, we now work with two dissipation potentials $\psi_\sfu:\Spu\to [0,\infty]$ and $\psi_\sfz:\Spz\to [0,\infty]$, with $\Spu$ and $\Spz$ the state spaces from \eqref{intro-state_spaces}. In Section \ref{su:ReJoMFcn}, we will indeed confine the discussion to the choices $\psi_\sfu: = \disv u$ and $\psi_{\sfz}: = \calR +\disv z$, but here we want to keep the discussion more general and in particular allow for $\psi_\sfu$ to have a nontrivial rate-independent part, too. When constructing the associated B-function we have to take care of the different scalings, namely $\psi_\sfu^{\eps^\alpha}$ and $\psi_\sfz^\eps$ in the sense of \eqref{eq:Def.Vx.lambda}, i.e.\ $\psi^\lambda(v)=\frac1\lambda \psi(\lambda v)$. Indeed, since the conjugate function $(\psi^\lambda)^*$ satisfies the simple scaling law $( \psi^\lambda)^*(\xi)= \frac1\lambda \psi^*(\xi)$, the B-function $\calB_{\psi^\lambda}$ obeys the scaling relations \begin{equation} \label{eq:rescaled.B.eps} \calB_{\psi^\lambda} (\tau,v, \frac1\lambda \sigma) = \calB_\psi(\frac\tau\lambda, v, \sigma) = \frac1\lambda\, \calB_\psi( \tau, \lambda v, \sigma), \end{equation} where we used \eqref{eq:mfB.rescale} for the last step. The B-function associated with the sum \[ \Psi_{\eps,\alpha} : \Spu{\ti} \Spz\to [0,\infty]; \quad \Psi_{\eps,\alpha} (u',z'): = \frac1{\eps^\alpha}\psi_\sfu (\eps^\alpha u') + \frac1\eps \psi_\sfz(\eps z') \] will be denoted by the symbol $\mfB_{\Psi_{\eps,\alpha}}$, see \eqref{eq:Resc.Bae} below. We emphasize that we deviate from the construction set forth in \eqref{Bfunctions-2}, since \eqref{eq:Resc.Bae} applies \eqref{eq:rescaled.B.eps} for each component individually.
Hence, we introduce \begin{subequations} \label{eq:Resc.Bae} \begin{align} \label{eq:Resc.Bae.a} \mfB_{\Psi_{\eps,\alpha}}(\tau,u',z',\sigmav u,\sigmav z)&:= \frac1{\eps^\alpha}\,\Bfun_{\psi_\sfu} (\tau,\eps^\alpha u',\sigmav u) + \frac1\eps\:\Bfun_{\psi_\sfz} (\tau,\eps z',\sigmav z) \\ \label{eq:Resc.Bae.b} & \;= \;\Bfun_{\psi_\sfu} \big(\frac\tau{\eps^\alpha}, u',\sigmav u \big) + \Bfun_{\psi_\sfz} \big(\frac\tau\eps , z',\sigmav z \big). \end{align} \end{subequations} Subsequently, we will use the short-hand notation $\mfB_\eps^\alpha$ in place of $\mfB_{\Psi_{\eps,\alpha}}$ and extend $\mfB_\eps^\alpha$ to allow for the value $\tau=0$, defining the \emph{rescaled joint B-function} $\mfB_\eps^\alpha:[0,\infty)\ti \Spu\ti \Spz \ti [0,\infty)^2\to [0,\infty]$ via \begin{equation} \label{eq:def.B.al.eps} \RCBfu \eps\alpha{\tau}{u'}{z'}{\sigmav u}{\sigmav z}:= \begin{cases}\displaystyle \frac\tau{\eps^\alpha}\, \psi_\sfu\Big(\frac{\eps^\alpha}\tau\,u'\Big) + \frac\tau{\eps^\alpha} \,\sigmav u + \frac\tau{\eps} \,\psi_\sfz\Big(\frac{\eps}\tau\,z'\Big) + \frac\tau{\eps} \,\sigmav z\ &\text{for }\tau>0,\\ \infty&\text{for } \tau=0. \end{cases} \end{equation} We highlight that $\mfB_\eps^\alpha$ is relevant for the \emph{coupled} system \eqref{van-visc-intro}, hence the name rescaled \emph{joint} B-function. The next result shows that the Mosco limit $\mfB^\alpha_0$ of the B-functions $(\mfB_\eps^\alpha)_\eps$ always exists and can be expressed in terms of the potentials $\mfb_{\psi_\sfu}$, $\mfb_{\psi_\sfz}$, and $\mfb_{\psi_\sfu{\oplus}\psi_\sfz}$. We emphasize that $(\tau,u',z')\mapsto \mfB^\alpha_0(\tau,u',z',\sigmav u,\sigmav z)$ is 1-homogeneous, which reflects the rate-independent character of the limiting procedure. \begin{proposition}[Mosco limit $\mfB^\alpha_0$ of the family $\mfB^\alpha_\eps$] \label{pr:Mosco.Beps} Let $\psi_\sfu$ and $\psi_\sfz$ satisfy \eqref{ass-diss-pot-superl} and assume $\alpha>0$. Then, the functionals $\mfB^\alpha_\eps$ Mosco converge to the limit $\mfB^\alpha_0:[0,\infty)\ti \Spu\ti \Spz \ti [0,\infty)^2\to [0,\infty]$ that is given as follows: {\upshape \begin{align*} \tau>0:\qquad &\RCBfu 0\alpha \tau{u'}{z'}{\sigmav u}{\sigmav z}= \begin{cases} \big(\psi_\sfu\big)_\mathrm{ri} (u') +\big(\psi_\sfz\big)_\mathrm{ri} (z') & \text{for } \sigmav u=\sigmav z=0, \\ \infty& \text{otherwise}; \end{cases} \\ \tau=0,\ \alpha>1{:}\ \ &\RCBfu 0\alpha 0{u'}{z'}{\sigmav u}{\sigmav z} =\begin{cases} \big(\psi_\sfu\big)_\mathrm{ri}(u')+\mfb_{\psi_\sfz} (z',\sigmav z) &\text{for }\sigmav u=0, \\ \mfb_{\psi_\sfu}(u',\sigmav u)& \text{for } \sigmav u>0 \text{ and } z'=0,\\ \infty& \text{otherwise}; \end{cases} \\ \tau=0, \ \alpha=1{:}\ \ &\RCBfu 01 0{u'}{z'}{\sigmav u}{\sigmav z}\ = \ \mfb_{\psi_\sfu \oplus \psi_\sfz}((u',z'),\sigmav u{+}\sigmav z); \\ \tau=0, \ \alpha<1{:}\ \ &\RCBfu 0\alpha 0{u'}{z'}{\sigmav u}{\sigmav z} =\begin{cases} \mfb_{\psi_\sfu} (u',\sigmav u) + \big(\psi_\sfz\big)_\mathrm{ri}(z') &\text{for }\sigmav z=0, \\ \mfb_{\psi_\sfz}(z',\sigmav z)& \text{for } \sigmav z>0 \text{ and } u'=0,\\ \infty& \text{otherwise}, \end{cases} \end{align*}} where $\psi: = \psi_\sfu \oplus \psi_\sfz:(u',z')\mapsto \psi_\sfu(u') {+} \psi_\sfz(z')$. Thus, the functional $\RCBfu 0\alpha {\cdot}{\cdot}{\cdot}{\sigmav u}{\sigmav z}$ is convex and $1$-homogeneous for all $(\sigmav u, \sigmav z) \in [0,\infty)^2$.
\end{proposition} \begin{proof} \underline{Case $\tau>0$.} Using $\psi_\sfx(v)\geq \big(\psi_\sfx\big)_\mathrm{ri} (v)$ we have \[ \RCBfu \eps\alpha{\tau}{u'}{z'}{\sigmav u}{\sigmav z} \geq \big(\psi_\sfu \big)_\mathrm{ri}(u') + \frac\tau{\eps^\alpha}\,\sigmav u+ \big(\psi_\sfz \big)_\mathrm{ri}(z') + \frac\tau\eps\, \sigmav z, \] which easily provides the desired liminf estimate. The limsup estimate follows with the constant recovery sequence $(u'_\eps,z'_\eps,\sigmave {u}{\eps},\sigmave z\eps)=(u',z',\sigmav u,\sigmav z)$. \underline{Case $\tau=0$ and $\alpha=1$.} By definition of $\mfb_\psi = \mfb_{\psi_\sfu {\oplus} \psi_\sfz}$ we have \[ \RCBfu \eps1{\tau}{u'}{z'}{\sigmav u}{\sigmav z} = \frac\tau\eps \psi\big(\frac\eps{\tau} (u',z')\big) + \frac\tau\eps\,(\sigmav u{+}\sigmav z) \geq \mfb_\psi((u',z'),\sigmav u{+}\sigmav z) \qquad \text{for all } \tau>0. \] Hence, the liminf estimate follows from Proposition \ref{pr:VVCP}. For the limsup estimate for $\RCBfu 01{0}{u'}{z'}{\sigmav u}{\sigmav z}$ we choose $\lambda_\eps$ such that $\lambda_\eps\psi(\frac1{\lambda_\eps} (u',z')) + \lambda_\eps(\sigmav u{+}\sigmav z) \to \mfb_\psi((u',z'),\sigmav u{+}\sigmav z)$, where we may assume $\lambda_\eps \leq 1/\sqrt\eps$. Then, it suffices to set $\tau_\eps= \lambda_\eps \eps\to 0$ to conclude $\RCBfu \eps1{\tau_\eps}{u'}{z'}{\sigmav u}{\sigmav z} \to \mfb_\psi((u',z'),\sigmav u{+}\sigmav z)= \RCBfu 01{0}{u'}{z'}{\sigmav u}{\sigmav z}$. \underline{Case $\tau=0$ and $\alpha>1$.} For the lower bound in the liminf estimate we only need to consider the case $\sigmav u=0$ and the case $\sigmav u>0$ and $z'=0$. In the latter situation we may drop the two last terms in the definition of $\mfB^\alpha_\eps$ and the lower bound is established by the lower semicontinuity of $\mfb_{\psi_\sfu}$. In the case $\sigmav u=0$, we have the lower bound \[ \RCBfu \eps\alpha{\tau}{u'}{z'}{\sigmav u}{\sigmav z}\geq \big(\psi_\sfu\big)_\mathrm{ri}(u') + \mfb_{\psi_\sfz}(z',\sigmav z) \] and the liminf again follows by the lsc. For the limsup estimates we use the recovery sequence $(\tau_\eps,u',z',\sigmav u,\sigmav z)$ converging strongly with $\tau_\eps\to 0$, as in the previous case. For $\sigmav u=0$ we choose $\tau_\eps=\lambda_\eps \eps$, where $\lambda_\eps$ realizes the infimum in $\mfb_{\psi_\sfz}(z',\sigmav z)$. In the case $\sigmav u>0$ and $z'=0$ we choose $\tau_\eps = \hat\lambda_\eps \eps^\alpha$, where $\hat\lambda_\eps$ realizes the infimum in $ \mfb_{\psi_\sfu}(u',\sigmav u)$. In the remaining case, which has $\sigmav u>0$ and $z'\neq 0$, the limit equals $\infty$ and we may simply choose $\tau_\eps=\eps$. \underline{Case $\tau=0$ and $\alpha<1$.} This case is similar to the case $\alpha>1$ if we interchange the roles of $u'$ and $z'$. Thus, Proposition \ref{pr:Mosco.Beps} is proved. \end{proof} \Subsection{Lower bounds for the B-function $\mfB_\eps^\alpha$} \label{su:LowerBounds} In the subsequent convergence analysis for the vanishing-viscosity limit we will need $\eps$-uniform a priori bounds for the time derivatives of the solutions $q_\eps=(u_\eps,z_\eps)$. They are derived from lower bounds for the B-functions; however, we have already observed in Example \ref{ex:VVCP} that the simple lower bound $\mfb_\psi(v,\psi^*(\xi))\geq \|\xi\|_*\|v\|$ in \eqref{used-later-HS} cannot be expected. The following result provides suitable surrogates for such an estimate. They will play a crucial role in the vanishing-viscosity analysis, specifically in controlling $\|z'\|$ along jump paths, see Lemma \ref{new-lemma-Alex}.
For this it will be important that the function $\varkappa$ occurring in \eqref{eq:1LowBounds} is strictly increasing, which implies $\varkappa(\sigma)>0$ for $\sigma>0$. \begin{lemma} [Lower bound on $\mfB^\alpha_\eps$] \label{le:LoBo.Bae} Let $\psi_\sfu$ and $\psi_\sfz$ satisfy \eqref{ass-diss-pot-superl} and let $\mfB^\alpha_\eps$ be given as in \eqref{eq:def.B.al.eps}. Then, there exists a continuous, convex, nondecreasing, and superlinear function $\varphi: [0,\infty)\to [0,\infty)$ such that \begin{subequations} \label{eq:1LowBounds} \begin{align} \nonumber &\hspace*{-1cm} \forall\, \alpha>0\ \forall\, \eps\in [0,1] \ \forall\, (\tau , u',z',\sigmav u,\sigmav z)\in [0,\infty)\ti \Spu\ti \Spz \ti[0,\infty)^2: \\ \label{eq:1LowBo.psi} & \psi_\sfu(u')\geq \varphi(\|u'\|_\Spu) \ \text{ and } \ \psi_\sfz(z')\geq \varphi(\|z'\|_\Spz), \\ \label{eq:1LowBo.Bae} & \RCBfu \eps\alpha{\tau}{u'}{z'}{\sigmav u}{\sigmav z} \geq \|u'\|_\Spu\,\varkappa(\sigmav u) + \|z'\|_\Spz \,\varkappa(\sigmav z), \end{align} where $\varkappa \in \rmC([0,\infty);[0,\infty))$ is given by $\varkappa(\sigma)= (\varphi^*)^{-1}(\sigma)$, is concave, and strictly increasing with $\varkappa(0)=0$ and $\varkappa(\sigma)\to \infty$ for $\sigma\to \infty$. We additionally have \begin{align} \alpha< 1:\quad & \label{eq:1LowBo.Bal1} \RCBfu \eps\alpha {\tau}{u'}{z'}{\sigmav u}{\sigmav z} \geq \|u'\|_\Spu \, \varkappa\big(\sigmav u{+}\sigmav z\big) , \\ \alpha=1:\quad & \label{eq:1LowBo.B1eD} \RCBfu \eps1{\tau}{u'}{z'}{\sigmav u}{\sigmav z} \geq \big( \|u'\|_\Spu {+} \|z'\|_\Spz \big) \, \varkappa\big( \frac12(\sigmav u{+}\sigmav z) \big) , \\ \alpha\geq 1:\quad & \label{eq:1LowBo.Bag1} \RCBfu \eps\alpha {\tau}{u'}{z'}{\sigmav u}{\sigmav z} \geq \|z'\|_\Spz \, \varkappa\big(\sigmav u{+}\sigmav z\big) . \end{align} \end{subequations} \end{lemma} \noindent \begin{proof} \STEP{1: Construction of $\varphi$.} Since $\psi_\sfu$ and $\psi_\sfz$ are superlinear, for each $K\geq 0$ there exists $S_K\geq 0$ such that \begin{equation*} \forall (u',z') \in \Spu\ti \Spz: \quad \psi_\sfu(u') \geq K\|u'\|_\Spu - S_K \text{ and } \psi_\sfz(z') \geq K\|z'\|_\Spz - S_K. \end{equation*} Hence, the estimates in \eqref{eq:1LowBo.psi} hold for the nonnegative, convex function $\varphi$ given by \[ \varphi(r) := \sup\bigset{ Kr -S_K }{K\geq 0}. \] From $\varphi(0)=0$ and non-negativity we conclude that $\varphi$ is nondecreasing. Moreover, it is superlinear by construction. \STEP{2: Lower bound on $\mfb_{\psi_\sfx}$.} In the definition of $\mfb_\psi$ the dependence on $\psi$ is monotone (because of $\tau>0$) so that $\psi_1\leq \psi_2$ implies $\mfb_{\psi_1} \leq \mfb_{\psi_2}$. Setting $\widetilde \varphi(v)=\varphi(\|v\|)$ we obtain $\mfb_{\psi_\sfx}\geq \mfb_{\widetilde\varphi}$, and using Lemma \ref{le:psi.norm} and the definition of $\varkappa$ yields \[ \mfb_{\psi_\sfx}(v,\sigma) \geq \|v\|\, \varkappa(\sigma) \qquad \text{for } \mathsf{x} \in \{\mathsf{u}, \mathsf{z}\}. \] \STEP{3: Lower bound on $\mfB^\alpha_\eps$.} The definitions of $\mfB^\alpha_\eps$ in \eqref{eq:Resc.Bae.b} and of $\mfb_\psi$ give, for $\eps>0$, \begin{align*} \RCBfu \eps\alpha{\tau}{u'}{z'}{\sigmav u}{\sigmav z} &= \calB_{\psi_\sfu}(\frac\tau{\eps^\alpha} , u',\sigmav u) + \calB_{\psi_\sfz}(\frac\tau{\eps} , z',\sigmav z) \\ & \geq \mfb_{\psi_\sfu}(u',\sigmav u) + \mfb_{\psi_\sfz}(z',\sigmav z) \geq \|u'\|_\Spu\,\varkappa(\sigmav u) + \|z'\|_\Spz \,\varkappa(\sigmav z), \end{align*} where Step 2 was invoked for the last estimate. This proves \eqref{eq:1LowBo.Bae}. 
Estimate \eqref{eq:1LowBo.B1eD} follows from the simple observation that, because of $\alpha=1$, the rescaled B-function $\mfB^1_\eps$ only depends on $\sigmav u {+}\sigmav z$, so that each of $\sigmav u$ and $\sigmav z$ can be replaced by their arithmetic mean. For $\alpha\geq 1$ and $\eps\in (0,1]$, we have $\tau/{\eps^\alpha} \geq \tau/\eps$ so that \[ \RCBfu \eps\alpha{\tau}{u'}{z'}{\sigmav u}{\sigmav z} \geq \frac\tau{\eps^\alpha}\sigmav u +\calB_{\psi_\sfz}(\frac\tau{\eps} , z', \sigmav z) \geq \calB_{\psi_\sfz}(\frac\tau{\eps} , z', \sigmav u{+}\sigmav z) \geq \|z'\|_\Spz\, \varkappa( \sigmav u{+}\sigmav z). \] This shows estimate \eqref{eq:1LowBo.Bag1}, and \eqref{eq:1LowBo.Bal1} follows similarly. All estimates remain true for $\eps=0$ because $\mfB^\alpha_0$ is the Mosco limit of $\mfB^\alpha_\eps$. \end{proof} \Section{Setup and existence for the viscous system} \label{s:setup} In Section \ref{ss:2.1} we will introduce our basic conditions on the spaces, the energy, and the dissipation potentials, collected in Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, and \ref{h:closedness}, which will be assumed throughout the paper. Let us mention in advance that we will often not explicitly recall these assumptions in the various intermediate statements, with the exception of our main results in Theorems \ref{thm:existBV}, \ref{thm:exist-enh-pBV}, \ref{thm:exist-trueBV}, and \ref{thm:exist-nonpar-enh}. Then, in Section \ref{ss:ExistVisc} we will address the existence of solutions to the viscous system \eqref{van-visc-intro}. Its main result, Theorem \ref{th:exist}, shows that, under two additional conditions on the driving energy functional, the existence result from \cite[Thm.\,2.2]{MRS2013} can be applied to deduce the existence of solutions for the doubly nonlinear system \eqref{van-visc-intro}. It will be crucial to our analysis that we are able to show that these solutions satisfy the $(\Psi,\Psi^*)$ energy-dissipation balance \eqref{EnDissBal-intro}. \Subsection{Function spaces} \label{ss:2.1} Here we state our standing assumptions on the function spaces for the energy functionals and for the dissipation potentials. \begin{hypothesis}[Function spaces] \label{hyp:setup} \slshape In addition to conditions \eqref{intro-state_spaces} on the ambient spaces $\Spu$ and $\Spz$, our (coercivity) conditions on the energy $\calE$ will involve two other \emph{reflexive} spaces $\Spw$ and $\Spx$, \ such that \[ \Spw \subset \Spu \text{ continuously and densely, and } \Spx \Subset \Spz \text{ compactly and densely}. \] The subscript $\mathrm{e}$ refers to the fact that the latter are `energy spaces' relating to $\calE$. Furthermore, the $1$-homogeneous dissipation potential $\calR$ will be in fact defined on a (separable) space $\Spy$ (where the subscript $\mathrm{ri}$ accordingly refers to rate-independence), such that \[ \Spz \subset \Spy \text{ continuously and densely}. \] \end{hypothesis} We refer to \eqref{spacesU} for some examples of relevant ambient spaces. In what follows, we will often use the notation \begin{equation} \label{notation} q:=(u,z) \in \mathbf{Q} : = \Spu \ti \Spz. \end{equation} \Subsection{Assumptions on the dissipation potentials} \label{su:Dissipation} We will develop the general theory under the condition that the viscous dissipation potentials $\disv u$ and $\disv z$ as well as the $1$-homogeneous potential $\calR$ take only finite values in $[0,\infty)$ and are thus continuous.
Recall that $\disv x^*$ is the Legendre-Fenchel conjugate of $\disv x$, see Definition \ref{def:DissPotential}. \begin{hypothesis}[Conditions on $\disv u$, $\disv z$, $\calR$] \label{hyp:diss-basic} \slshape Let $\disv u : \Spu \to [0,\infty)$ and $\disv z : \Spz \to [0,\infty)$ be dissipation potentials with the following additional conditions: \begin{subequations} \label{hyp:visc-diss} \begin{align} \label{h:v-diss-1} & \lim_{\| v\|_\Spu \to \infty} \frac{\disv u(v)}{\| v\|_\Spu } = \lim_{\| \mu \|_{\Spu^*} \to \infty} \frac{\disv u^*( \mu) }{\| \mu \|_{\Spu^*}} =\infty = \lim_{\| \eta\|_\Spz \to \infty} \frac{\disv z(\eta)}{\| \eta\|_\Spz} = \lim_{\| \zeta\|_{\Spz^*} \to \infty} \frac{\disv z^*(\zeta)}{\| \zeta\|_{\Spz^*}}, \\ \label{later-added} &\lim_{\lambda \to 0^+} \frac1{\lambda} \disv u (\lambda v) =0 \ \text{ for all } v \in \Spu, \quad \text{and} \quad \lim_{\lambda \to 0^+} \frac1{\lambda} \disv z (\lambda \eta) =0 \ \text{ for all } \eta \in \Spz\,. \end{align} \end{subequations} Let $\calR: \Spy \to [0,\infty]$ be a 1-homogeneous dissipation potential, i.e. \begin{subequations} \label{hyp:ri-diss} \begin{equation} \label{Rzero} \calR(\lambda \eta) = \lambda \calR (\eta) \quad \text{for all } \eta \in \Spy \text{ and } \lambda > 0, \end{equation} that is additionally $\Spz$-bounded and $\Spy$-coercive for $\Spz\subset \Spy$, i.e. \begin{equation} \label{R-coerc} \exists\, C_\calR,\, c_\calR>0: \quad \left\{ \begin{array}{ll} \forall\, \eta \in \Spz: & \calR(\eta) \leq C_\calR \norm{\eta}{\Spz}, \\ \forall\, \eta \in \Spy: & \calR(\eta) \geq c_\calR \norm{\eta}{\Spy}\,. \end{array} \right. \end{equation} \end{subequations} \end{hypothesis} Due to the superlinear growth of $\disv x$ and $\disv x^*$, $\sfx \in \{ \sfu, \sfz\}, $ both $\pl \disv x : \mathbf{X} \rightrightarrows \mathbf{X}^*$ and $\pl \disv x^* : \mathbf{X}^* \rightrightarrows \mathbf{X}$, $ \mathbf{X} \in \{ \Spu, \Spz\}$, are bounded operators, so that, ultimately, both $\disv u$ and $\disv u^*$ are continuous. Likewise, $\calR$ is continuous. Indeed, restricting our analysis to the case in which $\calR$ takes only finite values in $[0,\infty)$ excludes the direct application of our results to systems modeling unidirectional processes in solids such as damage or delamination. In those cases the existence theory (both for the rate-dependent, `viscous' system and for $\BV$ solutions of the rate-independent process) relies on additional estimates not considered here, see e.g.\ \cite{KnRoZa17}. Nevertheless, a broad class of models is still described by \emph{continuous} dissipation functionals. For instance, the coercivity and growth conditions \eqref{R-coerc} are compatible with the following example of dissipation potential, in the ambient spaces $\Spy =\rmL^1(\Omega)$ and $\Spz=\rmL^2(\Omega)$ (with $\Omega\subset\R^d$ a bounded domain): \begin{equation} \label{ex:healing} \calR: \rmL^1(\Omega) \to [0,\infty]; \quad \calR(\eta): = \begin{cases} \|\eta^+\|_{\rmL^2(\Omega)}+ \|\eta^-\|_{\rmL^1(\Omega)}& \text{ if } \eta^+ \in \rmL^2(\Omega), \\ \infty & \text{ otherwise.} \end{cases} \end{equation} Dissipation potentials with this structure occur, for instance, in models for damage or delamination allowing for possible healing, cf.\ e.g.\ \cite[Sec.\,5.2.7]{MieRouBOOK} and Section \ref{s:appl-dam}. 
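For later orientation we record, only as an illustration, the elementary computation showing that \eqref{ex:healing} indeed satisfies \eqref{R-coerc}: for $\eta\in \Spz=\rmL^2(\Omega)$ we have
\[
\calR(\eta) = \|\eta^+\|_{\rmL^2(\Omega)}+ \|\eta^-\|_{\rmL^1(\Omega)} \leq \|\eta\|_{\rmL^2(\Omega)} + |\Omega|^{1/2}\, \|\eta\|_{\rmL^2(\Omega)},
\]
whereas for every $\eta \in \Spy=\rmL^1(\Omega)$ with $\calR(\eta)<\infty$ (the lower bound being trivial otherwise)
\[
\|\eta\|_{\rmL^1(\Omega)} = \|\eta^+\|_{\rmL^1(\Omega)}+ \|\eta^-\|_{\rmL^1(\Omega)} \leq |\Omega|^{1/2}\,\|\eta^+\|_{\rmL^2(\Omega)}+ \|\eta^-\|_{\rmL^1(\Omega)} \leq \max\{1,|\Omega|^{1/2}\}\, \calR(\eta),
\]
so that \eqref{R-coerc} holds, e.g., with $C_\calR = 1+|\Omega|^{1/2}$ and $c_\calR = \min\{1,|\Omega|^{-1/2}\}$.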
Subsequently, $\pl \disv u: \Spu\rightrightarrows \Spu^*$, $\pl \disv z: \Spz\rightrightarrows \Spz^*$, and $\pl \calR: \Spz\rightrightarrows \Spz^*$ will denote the convex subdifferentials of $\disv u$, $\disv z$, and $\calR$, respectively. By the 1-homogeneity \eqref{Rzero} we have \begin{equation} \label{eq:subdiff.calR} \pl \calR(\eta) = \bigset{ \omega \in \Spz^*}{ \forall\,v{\in} \Spz{:}\; \calR(v) \geq \calR(\eta){+}\!\pairing{}{\Spz\!}{\omega}{v{-}\eta}} = \bigset{\omega \in \pl \calR(0)}{ \calR(\eta)=\langle \omega,\eta\rangle } . \end{equation} Thanks to Hypothesis \ref{hyp:setup}, we have $\Spy^* \subset \Spz^*$ densely and continuously. As a consequence of \eqref{R-coerc}, $\pl \calR (0)$ turns out to be a bounded subset of $\Spz^*$, viz. \begin{equation} \label{eq:l:classic} \pl \calR(\eta) \subset \pl \calR(0) \quad \text{and} \quad \overline{B}_{c_\calR}^{\Spy^*}(0) \subset \pl \calR(0) \subset \overline{B}_{C_\calR}^{\Spz^*}(0). \end{equation} \Subsection{Assumptions on the energy $\calE$} \label{su:Energy} We now collect our basic requirements on the energy functional $\calE: [0,T]\ti \Spu \ti \Spz \to (-\infty,\infty]$. With slight abuse of notation, we will often write $\calE(t,q) $ in place of $ \ene tuz$, in accordance with \eqref{notation}. Recall the embeddings $\Spw\subset \Spu$ and $\Spx \Subset \Spz\subset \Spy$ and the choice $\Spq= \Spu\ti \Spz$. \begin{hypothesis}[Lower semicontinuity, coercivity, time differentiability of $\calE$] \label{hyp:1} The energy functional \linebreak[4] $\calE: [0,T]\ti \Spu \ti \Spz \to (-\infty,\infty]$ has the proper domain $\mathrm{dom}(\calE) = [0,T]\ti \domq $ with $\domq \subset \Spw\ti \Spx$. Moreover, we require that \begin{subequations} \label{h:1} \begin{equation} \label{h:1.1} \forall\, t \in [0,T]: \quad \text{the map } q \mapsto \en tq \text{ is weakly lower semicontinuous on } \Spq, \end{equation} and $\calE$ is bounded from below: \begin{equation} \label{h:1.2} \exists\, C_0>0 \ \ \forall\, (t,q) \in [0,T]\ti \domq\, : \qquad \en tq \geq C_0\,. \end{equation} We set $\mfE( q):= \sup_{t\in [0,T]} \en tq $ and require that \begin{equation} \label{h:2} \text{the map } q \mapsto \mfE(q) + \| q \|_{\Spu\ti \Spy} \text{ has sublevels bounded in } \Spw \ti \Spx. \end{equation} Finally, we require that $t \mapsto \en tq$ is differentiable for all $q\in \domq$ and satisfies the power-control estimate \begin{align} \label{h:1.3d} &\exists\, C_\#>0 \ \forall\, (t,q)\in [0,T]\ti \domq:\quad \left| \pet tq \right| \leq C_\# \en tq . \end{align} \end{subequations} \end{hypothesis} \noindent Concerning our conditions on $\mathrm{dom}(\calE)$, the crucial requirement is that $\mathrm{dom}(\calE(t,\cdot)) \equiv \domq$ is independent of time. Let us introduce the energy sublevels \begin{equation} \label{Esublevels} \subl E: = \{ q\in \domq\, : \ \mfE(q) \leq E \} \qquad \text{for }E>0. \end{equation} Applying Gr\"onwall's lemma we deduce from \eqref{h:1.3d} that \[ \forall\, (t,q) \in [0,T]\ti \domq \, : \qquad \mfE(q) \leq \rme^{C_\# T} \, \calE(t,q)\,. \] Hence, $\calE(t,q) \leq E$ for some $t\in [0,T]$ and $E>0$ guarantees $q\in \subl {E'}$ with $E' = \rme^{C_\# T} \, E $. Finally, observe that \eqref{h:2} implies the separate coercivity properties of the functionals $\mfE(\cdot,z)$ and $\mfE( u,\cdot)$, perturbed by the norms $\| \cdot\|_\Spu$ and $\| \cdot\|_{\Spy}$, respectively.
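For the reader's convenience we also sketch the short argument behind the Gr\"onwall bound stated above; it only uses \eqref{h:1.2} and \eqref{h:1.3d}: for fixed $q\in \domq$ the map $r\mapsto \en rq$ is differentiable with
\[
\Big| \tfrac{\rmd}{\rmd r} \log \en rq \Big| = \frac{|\pet rq|}{\en rq} \leq C_\# \qquad \text{for all } r\in [0,T],
\]
where we used $\en rq \geq C_0>0$. Hence $\en sq \leq \rme^{C_\#|s-t|}\, \en tq \leq \rme^{C_\# T}\, \en tq$ for all $s,t\in [0,T]$, and taking the supremum over $s$ gives the asserted bound on $\mfE(q)$.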
Since we are only requiring that $\Spw \subset \Spu$ continuously, our analysis allows for the following two cases: (i) the energy $\ene t{\cdot}z$ and the dissipation potential $\disv u$ have sublevels bounded in the same space, and (ii) the energy $\ene t{\cdot}z$ has sublevels compact in the space $\Spu$ of the dissipation $\disv u$. To fix ideas, typical examples for the pairs $(\Spu,\Spw)$ and the triples $(\Spx, \Spz,\Spy)$ are \begin{align} & \label{spacesU} \begin{aligned} &\text{(i) }\ \Spu =\Spw = \rmH^1(\Omega;\R^d) \quad \text{ or \quad (ii) } \ \Spw = \rmH^1(\Omega;\R^d) \ \Subset \ \Spu = \rmL^2(\Omega;\R^d), \\ & \text{and} \quad \Spx= \rmH^1(\Omega;\R^m) \ \Subset \ \Spz= \rmL^2(\Omega;\R^m) \ \subset \ \Spy = \rmL^1(\Omega;\R^m). \end{aligned} \end{align} As mentioned in the introduction, in our analysis we aim to allow for nonsmoothness of the energy functional $q=(u,z)\mapsto \en tq$. Accordingly, we will use the Fr\'echet subdifferential of $\calE$ with respect to\ the variable $q$, i.e.\ the multivalued operator $\frname q\calE : [0,T] \ti \Spq \rightrightarrows \Spq^*$ defined for $(t,q)\in [0,T]\ti \domq$ via \begin{equation} \label{Frsubq} \frsubq qtq :=\bigset{ \xi \in \Spq^*}{\en t{\hat{q}} \geq \en tq {+} \pairing{}{\Spq}{\xi}{\hat{q}{-}q} + o(\norm{\hat{q}{-}q}{\Spq}) \text{ as } \hat{q} \to q \text{ in } \Spq} \end{equation} with domain $\mathrm{dom}(\frname q\calE) : = \bigset{ (t,q) \in [0,T] \ti \domq }{ \frsubq qtq \neq \emptyset } $. Thus, our aim is to solve the subdifferential inclusion \begin{equation} \label{dne-q} \pl \Psi_{\eps,\alpha}(q'(t)) + \pl_q \calE(t,q(t)) \ni 0 \qquad \text{ in } \mathbf{Q}^* \ \foraa\, t \in (0,T) \end{equation} where the scaled dissipation potential $\Psi_{\eps,\alpha} $ is defined in \eqref{eq:def.Psi.e.a}. \begin{remark}[Partial Fr\'echet subdifferentials] \label{rmk:Alex} \slshape Observe that \begin{equation} \label{just-inclusion} \frsubq qt{u,z} \subset \frsubq ut{u,z} \ti \frsubq zt{u,z} \quad \text{for all } (t,q) = (t,u,z)\in [0,T]\ti\domq, \end{equation} where $\frsubq utq\subset \Spu^*$ and $\frsubq ztq\subset \Spz^*$ are the `partial' Fr\'echet subdifferentials of $\calE$ with respect to the variables $u $ and $z$, which are defined as Fr\'echet subdifferentials of $\ene t\cdot z:\Spu\to \R$ and $\ene tu\cdot:\Spz\to \R$, respectively. However, equality in \eqref{just-inclusion} is false, in general, e.g.\ for $\Spu=\Spz=\R$ and $\calE(t,u,z)=|u{-}z|$. In view of the inclusion \eqref{just-inclusion}, any curve $t \mapsto q(t)=(u(t),z(t))$ solving \eqref{dne-q} also solves the system \begin{subequations} \label{DNE-system} \begin{align} \label{DNEu} & \pl \disv {u}^{\epsalpha}(u'(t)) +\frsub u t{u(t)}{z(t)} \ni 0 && \text{ in } \Spu^* && \foraa t \in (0,T), \\ \label{DNEz} \pl \calR (z'(t)) +{} &\pl \disv {z}^\eps\,(z'(t)) \ + \, \frsub z t{u(t)}{z(t)} \ni 0 && \text{ in } \Spz^* && \foraa t \in (0,T). \end{align} \end{subequations} Nonetheless, let us stress that the `reference viscous system' for the subsequent discussion will be the one with the smaller solution set, namely \eqref{dne-q} or \eqref{enid.b} below. \end{remark} The existence result from \cite{MRS2013} can be applied provided that $\calE$ fulfills two further conditions, stated in the following Hypotheses \ref{h:closedness} and \ref{h:ch-rule}.
\begin{hypothesis}[Closedness of $(\pl_q\calE,\pl_t\calE)$ on sublevels] \label{h:closedness} For all sequences $\big((t_n,q_n, \xi_n) \big)_{n\in \N} $ in the space $[0,T] \ti \Spq \ti \Spq^*$ with $t_n\to t$, $q_n \weakto q $ in $\Spq$, $\xi_n \weakto \xi $ in $\Spq^*$, $\sup_n \mfE(q_n) < \infty $, and $\xi_n\in \pl_q \en t{q_n}$, we have \begin{equation} \label{h:1.3e} \xi \in \pl_q \en tq \quad \text{and} \quad \pet {t_n}{q_n} \to \pet tq. \end{equation} \end{hypothesis} \begin{remark} \label{rmk:closedness-diminished} \sl For cases in which the energy space $\Spw$ is compactly embedded into $\Spu$, the sequences $(q_n)_n$ fulfilling the conditions of Hypothesis \ref{h:closedness} converge strongly in $\mathbf{Q}$ in view of the coercivity \eqref{h:2}. Therefore, in such cases Hypothesis \ref{h:closedness} turns out to be a closedness condition on the graph of $\pl_q \calE$ with respect to\ the \emph{strong-weak} topology of $\mathbf{Q}\ti \mathbf{Q}^*$. We also mention that, in contrast to what we did in \cite{MRS2013} (cf.\ (2.E$_5$) therein), here in Hypothesis \ref{h:closedness} we omit the requirement of energy convergence $ \en {t_n}{q_n} \to \en tq $ along the sequence $(t_n,q_n,\xi_n)_n$. In fact, that additional property was not strictly needed in the proof of the existence result \cite[Thm.\,2.2]{MRS2013}, to which we will resort later on to conclude the existence of solutions for our viscous system \eqref{dne-q}. Rather, in \cite{MRS2013} the energy-convergence requirement was encompassed in the closedness assumption in order to pave the way for a weakening of the chain-rule condition, cf.\ the discussion in \cite[Rmk.\,4.6]{MRS2013}. Such a weakening is outside the scope of this paper. \end{remark} Our final condition on $\calE$ is an abstract \emph{chain rule} that has a twofold role: First, it is a crucial ingredient in the proof of Theorem \ref{th:exist}, and secondly, it ensures the validity of the energy-dissipation balance \eqref{enid}. The latter will be the starting point in the derivation of our a priori estimates \emph{uniformly} with respect to\ the viscosity parameter $\eps$. We refer to Proposition \ref{prop:ch-ruleApp} in Appendix \ref{s:app-CR} for a discussion of conditions on $\calE$ yielding the validity of Hypothesis~\ref{h:ch-rule}. \begin{hypothesis}[Chain rule] \label{h:ch-rule} For every absolutely continuous curve $q\in \AC ([0,T]; \mathbf{Q})$ and all measurable selections $\xi: (0,T) \to \Spq^*$ with $\xi(t)\in \pl_q \calE(t, {q(t)})$ for a.a.\ $t\in (0,T)$, \begin{equation} \label{conditions-1} \sup_{t \in (0,T)} |\calE(t,q(t))|<\infty, \quad \text{and} \quad \int_0^T \| \xi(t)\|_{\mathbf{Q}^*} \| q'(t)\|_{\mathbf{Q}} \dd t <\infty, \end{equation} we have the following two properties: \begin{equation} \label{eq:48strong} \begin{gathered} \text{the map $t\mapsto \calE(t,q(t))$ is absolutely continuous on $[0,T]$ and} \\ \frac \dd{\dd t} \calE(t,q(t)) - \pl_t \calE(t,q(t)) = \pairing{}{\bfQ}{\xi(t)}{q'(t)} \quad \text{for a.a.\ }t\in (0,T). \end{gathered} \end{equation} \end{hypothesis} \Subsection{An existence result for the viscous problem} \label{ss:ExistVisc} We are now in the position to state our existence result for the viscous system \eqref{dne-q}. 
It is based on the $(\Psi,\Psi^*)$-formulation of the energy-dissipation balance (cf.\ \eqref{EnDissBal-intro} for the case in which $q\mapsto \calE(t,q)$ is smooth), which we now apply to \eqref{dne-q} using the Fr\'echet subdifferential $\frsubq q tq$ and the scaled dissipation potential $\Psi_{\eps,\alpha}$ defined in \eqref{eq:def.Psi.e.a}. The Legendre-Fenchel conjugate is given by \begin{equation} \label{def:conj} \Psi_{\eps,\alpha}^*(\mu,\zeta) = \frac1{\eps^\alpha}\disv u^*(\mu) + \frac1\eps\conj z(\zeta) \ \text{ with }\ \conj z(\zeta): = \min_{\sigma \in \pl \calR(0)} \disv z^*(\zeta{-}\sigma) \qquad \text{for } \zeta \in \Spz^*. \end{equation} It can be straightforwardly checked that the infimum in the definition of $\calW^*_\sfz$ is attained. \begin{theorem}[Existence of viscous solutions] \label{th:exist} Assume Hypotheses \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, and \ref{h:ch-rule}. Then, for every $\eps \in (0,1]$ and $q_0 = (u_0,z_0)\in \domq$ there exist a curve $q=(u,z) \in \AC ([0,T];\Spq)$ and a function $\xi=(\mu,\zeta) \in \rmL^1(0,T;\Spu^* \ti \Spz^*)$ fulfilling the initial condition $q(0) = q_0$, solving the generalized gradient system \eqref{dne-q} in the sense that for a.a.\ $r\in (0,T)$ \begin{subequations} \label{enid} \begin{equation} \label{enid.b} (\mu(r), \zeta(r)) \in \frsubq q r{q(r)} \ \text{ and } \ \left\{ \begin{array}{l@{\,}l} -\mu(r) &\in \pl \disv {u}^{\epsalpha}(u'(r)), \\ -\zeta (r) &\in \pl \calR (z'(r)) {+} \pl \disv {z}^\eps(z'(r)), \end{array} \right. \end{equation} Moreover, for $0\leq s < t \leq T$, these functions satisfy the energy-dissipation balance \begin{align} \label{enid.a} \en t{q(t)} & + \int_s^t \Big( \disve u{\eps^\alpha} (u'(r)) + \calR(z'(r)) + \disve z\eps (z'(r)) \Big) \dd r \\ \nonumber & { + \int_s^t \Big( \frac1{\eps^\alpha} \disv u^* ({-}\mu(r)) + \frac1\eps \conj z({-}\zeta(r))\Big) \dd r = \en s{q(s)}+ \int_s^t\pl_t \en r{q(r)} \dd r,} \end{align} \end{subequations} where $\disve x\lambda$ is defined in \eqref{eq:Def.Vx.lambda}. \end{theorem} \begin{proof} Since we are in the simple setting of \cite[Sec.\,2]{MRS2013}, where the dissipation potential $\Psi_{\eps,\alpha}$ does not depend on the state $q$, we can appeal to \cite[Thm.\,2.2]{MRS2013}. Thus, it suffices to check the assumptions (2.$\Psi_1$)--(2.$\Psi_3$), (E$_0$), and (2.E$_1$)--(2.E$_4$) therein. Our Hypothesis \ref{hyp:diss-basic} clearly implies (2.$\Psi_1$) and (2.$\Psi_2$). Hypothesis \ref{hyp:1} implies the assumptions (E$_0$) via \eqref{h:1.1} and \eqref{h:1.2}, and assumption (2.E$_1$) follows via \eqref{h:2} and Hypothesis \ref{hyp:diss-basic}. Assumption (2.E$_2$) follows from Hypothesis \ref{h:closedness} via \cite[Prop.\,4.2]{MRS2013}. Assumption (2.E$_3$) equals \eqref{h:1.3d} in Hypothesis \ref{hyp:1}. Finally, leaving out the energy-convergence requirement, assumption (2.E$_5$) follows from Hypothesis \ref{h:closedness}. Thus, all assumptions are satisfied except for (2.$\Psi_3$) and (2.E$_4$). Concerning (2.$\Psi_3$), we observe that this technical condition was used for the proof of \cite[Thm.\,2.2]{MRS2013} only in one place, namely in the proof of Lemma 6.1 there. In \cite[Thm.\,3.2.3]{Bach21NADN} or in \cite{MieRos20?DL} it is shown that Lemma 6.1, which is also called ``\emph{De Giorgi's lemma}'', remains valid if the condition \cite[Eqn.\,(2.$\Psi_3$)]{MRS2013} is replaced by the condition that the underlying Banach space $\Spq$ is reflexive, which holds true by our Hypothesis \ref{hyp:setup}.
As for the chain rule \cite[(2.E$_4$)]{MRS2013}, a close perusal of the proof of \cite{MRS2013} shows that our Hypothesis \ref{h:ch-rule} can replace it, allowing us to conclude the existence statement. \end{proof} \begin{remark}[Energy-dissipation inequality]\slshape \label{rmk:GS-used-later} The analysis from \cite{MRS2013} in fact reveals that, under the chain rule in Hypothesis \ref{h:ch-rule}, a curve $q\in \AC([0,T];\Spq)$ fulfills \eqref{enid.b} \emph{if and only if} the pair $(q,\xi)$ satisfies the energy-dissipation balance \eqref{enid.a} which, again by the chain rule, is in turn equivalent to the upper energy-dissipation estimate $\leq$. This characterization of the viscous system will prove handy for the analysis of the delamination system from Section \ref{s:appl-dam}. \end{remark} \Subsection{Properties of the generalized slopes} \label{su:GenSlopes} For the further analysis it is convenient to introduce the \emph{generalized slope functionals} $\slovname{x}: [0,T]\ti \domq \to [0,\infty]$, $\mathsf{x} \in \{\mathsf{u}, \mathsf{z}\}$ via \begin{equation} \label{def:GeneralSlope} \begin{aligned} & \slov utq: = \inf\bigset{\,\disv u^* ({-}\mu)\, }{(\mu,\zeta) \in \frsubq qtq} \quad \text{and} \\ & \slov ztq: =\inf\bigset{\conj z({-}\zeta)}{(\mu,\zeta) \in \frsubq qtq}, \end{aligned} \end{equation} where the infimum over the empty set is always $+\infty$. These functionals play the same key role as (the square of) the metric slope for metric gradient systems, hence from now on we shall refer to $\slovname u$ and $\slovname z$ as \emph{generalized slopes}. Clearly, energy balance \eqref{enid.a} entails the validity of the following energy-dissipation estimate featuring the slopes $\slovname u$ and $\slovname z$: \begin{equation} \label{enid-ineq} \begin{aligned} \eneq t{q(t)}+ \int_s^t \!\!\Big( \disve u{\eps^\alpha}( u'(r)) {+} \calR(z'(r)) {+} \disve z\eps( z'(r)) + \frac{\slov ur{q(r)}}{\eps^\alpha} + \frac{\slov zr{q(r)}}\eps \Big) \dd r \\ \leq \eneq s{q(s)} + \int_s^t\pl_t \eneq r{q(r)} \dd r \qquad \text{for all } 0 \leq s \leq t \leq T\,. \end{aligned} \end{equation} Note that \eqref{enid-ineq} is weaker than \eqref{enid.a}, but it has the advantage that the selections $\xi=(\mu,\zeta)$ in \eqref{enid.b} are no longer needed. Moreover, \eqref{enid-ineq} will be still strong enough to handle the limit passage $\eps \to 0^+$. For this, we will assume that the infima in \eqref{def:GeneralSlope} are attained. We set \begin{align*} &\mathrm{dom}(\pl_q \calE):= \bigset{(t,q) \in [0,T]\ti \Spq}{ \pl_q\calE(t,q)\neq \emptyset } \quad \text{and} \\ &\mathrm{dom}(\pl_q \calE(t,\cdot)):= \bigset{q\in \Spq}{ \pl_q\calE(t,q)\neq \emptyset } . \end{align*} In fact, it can be checked (e.g.\ by resorting to \cite[Prop.\,4.2]{MRS2013}), that $\mathrm{dom}(\pl_q \calE(t,\cdot))$ is dense in $\domq$. \begin{hypothesis}[Attainment and lower semicontinuity] \label{hyp:Sept19} For every $(t,q) \in \mathrm{dom}(\pl_q \calE)$ the infima in \eqref{def:GeneralSlope} are attained, namely \begin{equation} \label{not-empty-mislo} \argminSlo utq: = \mathop{\mathrm{Argmin}}\limits_{(\mu,\zeta) \in \frsubq qtq} \disv u^* ({-}\mu) \neq \emptyset \quad \text{and} \quad \ \argminSlo ztq: = \mathop{\mathrm{Argmin}} \limits_{(\mu,\zeta) \in \frsubq qtq} \conj z({-}\zeta) \neq \emptyset, \end{equation} where $\conj z$ is defined in \eqref{def:conj}. 
Furthermore, for all sequences $(t_n,q_n)_n\subset [0,T] \ti \Spq$ with $t_n\to t$, $q_n \weakto q$ in $\Spq$, and $\sup_{n\in \N} \mfE(q_n) \leq C<\infty$ there holds \begin{align} \label{liminf-diss-V-W} \liminf_{n\to\infty} \slov x {t_n}{q_n}\geq \slov x tq \qquad \text{for } \mathsf{x} \in \{\mathsf{u}, \mathsf{z}\}\,. \end{align} \end{hypothesis} We are going to show in Lemma \ref{l.4.13} below that a sufficient condition for Hypothesis \ref{hyp:Sept19} is that \eqref{just-inclusion} improves to an equality, namely \begin{equation} \label{it-is-product} \frsubq qtq = \frsubq utq \ti \frsubq ztq \quad \text{for all } (t,q) = (t,u,z)\in [0,T]\ti\domq. \end{equation} Observe that \eqref{it-is-product} does hold if, for instance, $\calE$ is of the form \begin{align*} & \en tq : = \calU(t,u) + \calZ(t,z) + \calF(t,u,z) \quad \text{for all } (t,q) = (t,u,z) \in [0,T]\ti \Spq \\ & \text{with } \calU(t,\cdot): \Spu \to (-\infty,\infty] \text{ and } \calZ(t,\cdot): \Spz \to (-\infty,\infty] \text{ proper and lsc}, \\ &\text{and } \ \, \calF(t,\cdot): \Spu\ti \Spz \to \R \text{ Fr\'echet differentiable}. \end{align*} \begin{lemma} \label{l.4.13} Assume Hypotheses \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, as well as \eqref{it-is-product}. Then, \begin{equation} \label{it-is-equality} \slov utq = \inf_{\mu \in \frsubq utq} \disv u^* ({-}\mu) \ \text{ and } \ \slov ztq = \inf_{\zeta \in \frsubq ztq} \conj z({-}\zeta) \end{equation} for all $(t,q) \in [0,T]\ti \mathrm{dom}(\pl_q \calE) $, and properties \eqref{not-empty-mislo} and \eqref{liminf-diss-V-W} hold. \end{lemma} \begin{proof} Obviously, for $(t,q)\in \mathrm{dom}(\pl_q \calE)$ we have \eqref{it-is-equality} as a consequence of \eqref{it-is-product}. We will just check the attainment \eqref{not-empty-mislo} and the lower semicontinuity \eqref{liminf-diss-V-W} for $\slovname z$, as the properties for $\slovname u$ follow by the same arguments. Suppose that $(t_n,q_n) \weakto (t,q)$ and $\liminf_{n\to\infty} \slov z{t_n}{q_n} <\infty$. Using \eqref{it-is-equality}, up to a subsequence, there exist $(\zeta_n) \subset \Spz^*$ with $\zeta_n \in \frsubq z{t_n}{q_n} $ and $(\sigma_n)_n \subset \pl \calR(0) \subset \Spz^*$ for all $n$ with \[ \lim_{n\to\infty} \disv z^*({-}\zeta_n{-}\sigma_n) = \lim_{n\to\infty} \slov z{t_n}{q_n} \leq C\,. \] It follows from \eqref{hyp:visc-diss} that the sequence $(\sigma_n{+}\zeta_n)_n$ is bounded in $\Spz^*$. Since, in view of \eqref{eq:l:classic}, $(\sigma_n)_n$ is bounded in $\Spz^*$, $(\zeta_n)_n$ turns out to be bounded in $\Spz^*$, too. Then, up to a subsequence we have $\sigma_n\weakto \sigma $ in $\Spz^*$ and $\zeta_n\weakto \zeta$ in $\Spz^*$. Since $\pl \calR(0)$ is sequentially weakly closed in $\Spz^*$, we find $\sigma \in \pl \calR(0)$. By Hypothesis \ref{h:closedness} we also have $\zeta \in \frsubq ztq $, hence \begin{align*} \lim_{n\to\infty} \slov z{t_n}{q_n} & = \lim_{n\to\infty} \disv z^*({-}\zeta_n{-}\sigma_n) \geq \disv z^*({-}\zeta{-}\sigma) \\ & \geq \conj z({-}\zeta) \geq \inf_{\widetilde\zeta \in \frsubq ztq} \conj z({-}\widetilde\zeta) = \slov ztq, \end{align*} which is the desired lsc \eqref{liminf-diss-V-W} for $\slovname z$. With similar arguments we deduce the attainment \eqref{not-empty-mislo}. \end{proof} In the above proof we have used in an essential way that $\pl \calR (0)$ is bounded in $\Spz^*$ by our assumption \eqref{R-coerc}. 
Without this property, the argument still goes through provided that, given a sequence $(q_n)_n\subset \Spq $ as in Hypothesis \ref{hyp:Sept19}, all sequences $(\zeta_n)_n$ with $\zeta_n\in \argminSlo z{t_n}{q_n}$ for all $n\in\N$ happen to be bounded in $\Spu^* \ti \Spz^*$, which can be, of course, an additional property of the subdifferential $\frname z \calE$.\medskip Throughout the rest of this paper, we will always tacitly assume the validity of Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{h:ch-rule}, and \ref{hyp:Sept19} and omit any explicit mentioning of them in most of the upcoming results (with the exception of our main existence theorems). \Subsection{A priori estimates for the viscous solutions} \label{su:AprioViscSol} Let $(q_\eps)_\eps $ be a family of solutions to the viscously regularized systems \eqref{van-visc-intro} in the stricter sense of \eqref{enid}, which includes the energy-dissipation balance \eqref{enid.a}. By Theorem \ref{th:exist} the existence of solutions $q_\eps =(u_\eps,z_\eps)$ is guaranteed, and in this subsection we discuss some a priori estimates on $(u_\eps,z_\eps)_\eps$ that are uniform with respect to the parameter $\eps$ and that form the core of our vanishing-viscosity analysis. The starting point is the energy-dissipation estimate \eqref{enid-ineq} that follows directly from \eqref{enid.a}. Recalling the constant $C_\#$ from \eqref{h:1.3d} in Hypothesis \ref{hyp:1} and $c_\calR$ from Hypothesis \ref{hyp:diss-basic}, we see that the following \emph{basic a priori estimates} are valid under the \emph{sole} assumptions of Hypotheses \ref{hyp:diss-basic} and \ref{hyp:1}. \begin{lemma}[Basic a priori estimates] \label{l:1} For all $\eps>0$ and all solutions $q_\eps=(u_\eps,z_\eps):[0,T] \to \Spq=\Spu \ti \Spz$ of \eqref{enid} with $\calE(0,q_\eps(0))< \infty$ we have the a priori estimates \begin{subequations} \label{est-quoted-later} \begin{align} \label{est-quoted.a} & \int_0^T\! \Big( \frac{1}{\eps^\alpha} \disv u (\eps^\alpha u_\eps'(t)) + \calR(z_\eps'(t)) + \frac{1}{\eps} \disv z (\eps z_\eps'(t)) \\ & \hspace{7em} \nonumber + \frac{\slov u {t}{q_\eps(t)}}{\eps^\alpha} + \frac{\slov z {t}{q_\eps(t)}} \eps \Big) \dd t \leq \mathrm e^{C_\#T}\calE(0,q_\eps(0)), \\ \label{est-quoted.b} & 0 \leq \eneq t{q_\eps(t)} \leq \mathrm e^{C_\# t} \calE(0,q_\eps(0)) \text{ for all } t\in [0,T], \end{align} \end{subequations} whence, in particular, \begin{equation} \label{est1} \| z_\eps'\|_{\rmL^1(0,T; \Spy)} \leq \frac{\mathrm e^{C_\# T}} {c_\calR}\, \calE(0,q_\eps(0)) \quad \text{and} \quad \sup_{t\in [0,T]} \mfE(q_\eps(t)) \leq \mathrm e^{2C_\# T} \calE(0,q_\eps(0))\,. \end{equation} \end{lemma} \begin{proof} The proof follows as in the purely rate-independent case treated in \cite[Cor.\,3.3]{Miel05ERIS}. We start from \eqref{enid.a} and drop the nonnegative dissipation to obtain \[ \calE(t,q_\eps(t))\leq \calE(0,q_\eps(0))+ \int_0^t \pl_s\calE(s,q_\eps(s))\dd s \leq \calE(0,q_\eps(0))+ \int_0^t C_\# \calE(s,q_\eps(s))\dd s , \] where we used \eqref{h:1.3d}. Thus, Gr\"onwall's estimate gives \eqref{est-quoted.b}, and with this we find \[ \calE(0,q_\eps(0))+ \int_0^T \pl_s\calE(s,q_\eps(s))\dd s \leq \calE(0,q_\eps(0))+ \int_0^T C_\# \mathrm e^{C_\# s} \calE(0,q_\eps(0))\dd s = \mathrm e^{C_\# T} \calE(0,q_\eps(0)) \] and \eqref{est-quoted.a} is established as well, as $\calE(T,q_\eps(T))\geq C_0>0$ by \eqref{h:1.2}.
Since $\disv x$ and $\slovname x$ are nonnegative, assumption \eqref{R-coerc} leads to the first estimate in \eqref{est1}. The last assertion follows from \eqref{est-quoted.b} by applying \eqref{h:1.3d} once again. \end{proof} Clearly, \eqref{est1} provides a uniform bound on the total variation of the solution component $z_\eps$ in the space $\Spy$. A similar bound cannot be expected for the components $u_\eps$, unless we add further assumptions. To see the problem, consider $\Spu = \R^2$ and the ordinary differential equation \[ \eps^\alpha u'_\eps(t) + \rmD \varphi(u_\eps(t)) = z_\eps(t)= a\binom{\cos(\omega t)}{\sin(\omega t )}, \quad \text{where } \varphi(u)=\frac{\lambda}2|u|^2+ \frac12\max\{|u|{-}1,0\}^2 \] with $\lambda\geq 0$. Note that $\varphi$ is uniformly coercive for all $\lambda \geq 0$. However, the equation is linear for $|u|\leq 1$ and has an exact periodic solution of the form \[ u(t) = \big(\mathrm{Re}\, U(t), \mathrm{Im}\, U(t)\big) \quad \text{with } U(t)= \frac a{\lambda {+}\mathrm i\,\omega \eps^\alpha} \,\mathrm e^{\mathrm i\,\omega t}\in \mathbb C, \] as long as $|U(t)|\leq 1$, i.e.\ $a^2\leq \lambda^2{+}\omega^2\eps^{2\alpha}$. In this case, the derivatives satisfy the following $\rmL^1$-estimates \[ \|u'_\eps\|_{\rmL^1(0,T)} = |\omega|T \,\| u_\eps\|_{\rmL^\infty}= \big|\frac{a\omega}{\lambda {+}\mathrm i\,\omega \eps^\alpha}\big| \,T = \frac1{(\lambda^2 {+}\omega^2 \eps^{2\alpha})^{1/2}} \,\|z'_\eps\|_{\rmL^1(0,T)} . \] For $\lambda>0$ we thus obtain a bound on $\|u'_\eps\|_{\rmL^1(0,T)}$ from a bound on $\|z'_\eps\|_{\rmL^1(0,T)}$ as in \eqref{est1}. However, in the case $\lambda=0$ the value $\|u'_\eps\|_{\rmL^1(0,T)}$ may blow up while $\|z'_\eps\|_{\rmL^1(0,T)}$ remains bounded (or even tends to $0$) and $a^2 \leq \omega^2\eps^{2\alpha}$, e.g.\ choosing $\omega=\eps^{-\alpha/2}$ and $a=\eps^{2\alpha/3}$. In the main part of this subsection, we provide sufficient conditions for the validity of a uniform bound on $\|u'_\eps\|_{\rmL^1(0,T;\Spu)} $. In the spirit of the above ODE example we assume that $u \mapsto \en t {u,z}$ is uniformly convex (i.e.\ $\lambda>0$) and that $z \mapsto \rmD_u\en t {u,z}$ is Lipschitz. Moreover, we need to assume that $\disv u$ is quadratic.
More precisely, we have to confine the discussion to a special setup given by conditions \eqref{structure-diss} and \eqref{eq:E=E1+E2cond}: \noindent (1) the dissipation potential $\disv u $ is quadratic: \begin{equation} \label{structure-diss} \Spu \text{ is a Hilbert space \quad and }\disv u(v): = \frac12\|v \|_{\Spu}^2= \frac12\langle \mathbb V_\sfu v,v\rangle , \end{equation} where $\mathbb V_\sfu:\Spu\to \Spu^*$ is Riesz' norm isomorphism; \noindent (2) the energy functional $\calE$ has domain $\domq = \domene u \ti \domene z$ and admits the decomposition \begin{subequations} \label{eq:E=E1+E2cond} \begin{align} &\ene tuz = \calE_1(u) + \calE_2(t,u,z) \ \text{ with } \\[0.3em] \label{E1-unif-cvx} &\exists\,\Lambda>0:\quad \calE_1 \text{ is $\Lambda$-convex}, \\[0.3em] & \label{fr-diff-E2} \forall\, (t,z ) \in [0,T]\ti \domene z:\quad u\mapsto \calE_2(t,u,z) \text{ is Fr\'echet differentiable on } \domene u, \\[0.3em] \label{fr-Lip-E2} &\begin{aligned} &\exists\, C_\sfu \in (0,\Lambda) \ \forall\, E>0 \ \exists\, C_E>0 \ \forall\, t_1, t_2 \in [0,T] \ \forall\, (u_1,z_1), (u_2,z_2) \in \subl{E} : \\ & \hspace{6em} \norm{\mathrm{D}_\sfu \calE_2(t_1,u_1,z_1){-} \mathrm{D}_\sfu \calE_2(t_2,u_2,z_2)}{\Spu^*} \\ &\hspace{9em} \leq C_E \left( |t_1{-}t_2| + \norm{ z_1{-}z_2}{\Spy} \right)+ C_\sfu \|u_1{-}u_2\|_\Spu \end{aligned} \end{align} \end{subequations} where $\subl E$ denotes the sublevel of $\mfE$, cf.\ \eqref{Esublevels}. Hence, the possibly nonsmooth, but \emph{uniformly convex} functional $\calE_1$ is perturbed by the smooth, but possibly nonconvex, functional $u\mapsto \calE_2 (t,u,z)$. However, by $C_\sfu<\Lambda$ the mapping $u\mapsto \calE(t,u,z)$ is still uniformly convex. Unfortunately, condition \eqref{eq:E=E1+E2cond} is rather restrictive, because in concrete examples the driving energy functional features a coupling between the variables $u$ and $z$ that is more complex. Nevertheless the desired a priori estimate derived in Proposition \ref{l:3.2} may still be valid. Indeed, for our delamination model examined in Section \ref{s:appl-dam} we establish the corresponding estimate via an \emph{ad hoc} approach for the specific system. The proof of the following result follows the technique for the a priori estimate developed in \cite[Prop.\,4.17]{Miel11DEMF}. We emphasize that the two additional assumptions \eqref{structure-diss} and \eqref{eq:E=E1+E2cond} yield that the solution $u_\eps$ for $\eps^\alpha \mathbb V_\sfu u' + \pl \calE_1(u) + \rmD_\sfu \calE_2(t,u,z_\eps(t)) \ni 0$ is unique as long as $z_\eps$ is kept fixed, since it is a classical Hilbert-space gradient flow for a time-dependent, convex functional. \begin{proposition}[$\rmL^1$ bound on $u'_\eps$] \label{l:3.2} In addition to Hypotheses \ref{hyp:diss-basic} and \ref{hyp:1} assume \eqref{structure-diss} and \eqref{eq:E=E1+E2cond} and consider initial conditions $(q_\eps^0)_\eps$ such that \[ \exists\, C_\mathrm{init}>0\ \forall\, \eps\in (0,1):\quad \calE(0,q_\eps^0)+\eps^{-\alpha} \| \pl ^0_\sfu \calE(0,q_\eps^0)\|_{\Spu^*} \leq C_\mathrm{init} <\infty, \] where $\pl ^0_\sfu \calE(0,q_\eps^0)\in \Spu^*$ denotes the unique element of minimal norm in $\pl_\sfu \calE(0,q_\eps^0)\subset \Spu^*$.
Then, there exists a $C>0$ such that for all $\eps \in (0,1)$ all solutions $q_\eps = (u_\eps, z_\eps)$ of system \eqref{enid} with $q_\eps(0)=q_\eps^0$ satisfy \begin{equation} \label{est2} \begin{aligned} \| u_\eps'\|_{\rmL^1(0,T; \Spu)} &\leq \frac1{\Lambda{-}C_\sfu} \Big( C_\mathrm{init} + C_ET+ C_E \|z'_\eps\|_{\rmL^1(0,T;\Spy)} \Big) \\ &\leq \frac1{\Lambda{-}C_\sfu} \Big( C_\mathrm{init} + C_ET+ \frac{C_EC_\mathrm{init}}{c_\calR} \,\mathrm e^{C_\#T} \Big). \end{aligned} \end{equation} \end{proposition} \begin{proof} By Lemma \ref{l:1} all curves $q_\eps:[0,T] \to \Spq$ lie in $\calS_E=\bigset{q\in \Spq}{\mfE(q)\leq E}$ for $E=\mathrm e^{2C_\# T} C_\mathrm{init}$. Throughout the rest of this proof we drop the subscripts $\eps$ at $q_\eps=(u_\eps,z_\eps)$, but keep all constants explicit to emphasize that they do not depend on $\eps$. Setting $\kappa= \Lambda{-}C_\sfu>0$, the uniform convexity of $\calE(t,\cdot, z)$ gives $\langle \mu_1{-}\mu_2 , u_1{-}u_2 \rangle \geq \kappa \|u_1{-}u_2\|_\Spu^2 $ for all $\mu_j \in \pl_\sfu \calE(t,u_j,z)$. We write the equation for $u$ in the form $0=\eps^\alpha\mathbb V_\sfu u'(t)+ \mu(t) $ with $\mu(t)\in \pl_\sfu \calE(t,u(t),z(t))$. For small $h>0$ and $t\in [0,T{-}h]$ we find \begin{align*} \frac{\eps^\alpha}2\frac{\rmd}{\rmd t} &\| u(t{+}h){-}u(t)\|^2_\Spu = \big\langle \eps^\alpha\mathbb V_\sfu (u'(t{+}h)-u'(t)), u(t{+}h)-u(t) \big\rangle \\ &= - \langle \mu(t{+}h) - \mu(t), u(t{+}h)-u(t)\rangle \\ &\leq - \langle \widetilde \mu_h(t) - \mu(t), u(t{+}h)-u(t)\rangle + \|\widetilde \mu_h(t) {-} \mu(t{+}h) \|_{\Spu^*} \| u(t{+}h)-u(t)\|_\Spu, \end{align*} where $\widetilde \mu_h(t) \in \pl_\sfu \calE(t, u(t{+}h),z(t))$. The uniform convexity and \eqref{fr-Lip-E2} give \[ \frac{\eps^\alpha}2\frac{\rmd}{\rmd t} \varrho_h(t)^2 \leq - \kappa \varrho_h(t)^2 + C_E\big( h + \|z(t{+}h){-}z(t)\|_{\Spy}\big) \varrho_h(t), \] where $\varrho_h(t):= \| u(t{+}h){-}u(t)\|_\Spu$. Choosing $\delta>0$ and setting $\nu_h(t):=\big(\varrho_h(t)^2{+}\delta\big)^{1/2}$ yields \begin{align*} \eps^\alpha \dot \nu_h &= \frac{\eps^\alpha \tfrac{\rmd}{\rmd t}\varrho_h^2}{2 \nu_h} \leq - \kappa \frac{\nu_h^2- \delta}{\nu_h} + C_E\big(h + \|z(\cdot\,{+}h){-}z\|_{\Spy}\big) \frac{\varrho_h}{\nu_h}\\ &\leq - \kappa \nu_h + \kappa \delta^{1/2} + C_E\big(h + \|z(\cdot\,{+}h){-}z\|_{\Spy}\big). \end{align*} Integrating this inequality in time we arrive at \[\textstyle \kappa \int\limits_0^{T-h} \varrho_h(t) \dd t \leq \kappa \int\limits_0^{T-h} \nu_h(t) \dd t \leq \eps^\alpha \nu_h(0) + \kappa\,\delta^{1/2} T + C_E h T + C_E\int_0^{T-h}\|z(t{+}h){-}z(t)\|_{\Spy} \dd t. \] Taking the limit $\delta \to 0^+$, dividing by $h>0$, and using $\|z(t{+}h){-}z(t)\|_{\Spy}\leq \int_t^{t+h} \|z'(s)\|_{\Spy} \dd s$ gives \[ \kappa \int_0^{T-h} \!\! \big\|\frac1h\big(u(t{+}h){-}u(t)\big) \big\|_\Spu \dd t \leq \eps^\alpha\big\|\frac1h\big(u(0{+}h){-}u(0)\big) \big\|_\Spu + C_E T + C_E \int_0^T \| z'(t) \|_{\Spy} \dd t. \] Since the equation for $u$ is a Hilbert-space gradient flow we can apply \cite[Thm.\,3.1]{Brez73OMMS}, which shows that $\frac1h\big(u(h){-}u(0)\big) \to u'(0^+) = -\eps^{-\alpha}\,\mathbb V_\sfu^{-1}\pl ^0_\sfu \calE(0,u(0),z(0))$ for $h\to 0^+$, so that $\eps^\alpha\big\|\frac1h\big(u(h){-}u(0)\big)\big\|_\Spu \to \| \pl ^0_\sfu \calE(0,u(0),z(0))\|_{\Spu^*} \leq \eps^\alpha C_\mathrm{init} \leq C_\mathrm{init}$. Thus, in the limit $h\to 0^+$ we find \[ \textstyle \kappa \int\limits_0^T \|u'(t)\|_\Spu\dd t = \lim\limits_{h\to 0^+} \kappa \int\limits_0^{T\!-h}\! \big\|\frac1h\big(u(t{+}h){-}u(t)\big) \big\|_\Spu \dd t \leq C_\mathrm{init} + C_E T + C_E \int_0^T \| z'(t) \|_{\Spy} \dd t, \] which is the desired result, when recalling $\kappa = \Lambda - C_\sfu$.
\end{proof} The above result is valid for all solutions of the viscous system \eqref{enid}, but it relies on the rather strong assumptions \eqref{structure-diss} and \eqref{eq:E=E1+E2cond}. While the uniform convexity of $u \mapsto \en tuz$ in \eqref{eq:E=E1+E2cond} seems to be fundamental, it is expected that the rather strong assumption that $\disv u $ is the square of a Hilbert space norm, see \eqref{structure-diss}, can be relaxed, but then the solution $u_\eps$ may no longer be uniquely determined for fixed $z_\eps$. In that case it may be helpful to restrict the analysis to specific solution classes satisfying better a priori estimates, e.g.\ to minimizing movements obtained via time-incremental minimization problems as in \cite[Thm.\,3.23]{MRS13} or to solutions obtained as limits of Galerkin approximations as in \cite[Def.\,4.3 \& Thm.\,4.13]{Mielke-Zelik}. We also refer to our delamination model in Section \ref{s:appl-dam} for a derivation of the additional a priori estimate \eqref{est2} in a more difficult case. \Section{Parametrized Balanced-Viscosity solutions} \label{s:4+} In this section we will give the definition of Balanced-Viscosity solution to the rate-independent system $\RIS$ in a \emph{parametrized version}. For this, instead of the viscous solutions $q_\eps:[0,T] \to \Spq$ we study suitable reparametrizations $(\sft_\eps,\sfq_\eps):[0,\mathsf S_\eps] \to [0,T]\ti \Spq$, i.e., $\sfq_\eps(s)=q_\eps(\sft_\eps(s))$, see Section \ref{su:ReJoMFcn}. While quite general reparametrizations are possible, we will perform the vanishing-viscosity limit $\eps\to 0^+$ for the one given in terms of the energy-dissipation arclength $s=\sfs_\eps(t)$ defined in terms of the \RJMF\ $\mename \eps\alpha$ arising from the rescaled joint B-function $ \mfB^\alpha_\eps$, see \eqref{arclength-est1-2}. The $\Gamma$-limit $\mename 0\alpha$ of $\mename \eps\alpha$, which is called the \emph{limiting \RJMF}, will then be used to introduce the concept of \emph{admissible parametrized curves}, see Definition \ref{def:adm-p-c} in Section \ref{ss:4.1bis}. This is the basis of our notion of \emph{parametrized Balanced-Viscosity} ($\pBV$) \emph{solutions}, defined in Section \ref{ss:4.2}. Theorem \ref{thm:existBV} states our main existence result for $\pBV$ solutions, which is based on the convergence in the vanishing-viscosity limit $\eps\to 0^+$. However, we emphasize that the notion of `$\pBV$ solutions' is independent of the limiting procedure. Finally, in Section \ref{ss:6.3-diff-charact} we provide a characterization of (enhanced) $\pBV$ solutions showing that they are indeed solutions of the time-rescaled generalized gradient system \eqref{e:diff-char-intro}. \Subsection{Reparametrization and rescaled joint M-functions} \label{su:ReJoMFcn} This subsection revolves around the concept and the properties of the limiting \RJMF\ $\mename 0\alpha$ that will be introduced in Definition \ref{defM0}. First, we will prove that $\mename 0\alpha$ is the $\Gamma$-limit of the family of M-functions $(\mename \eps\alpha)_\eps$ that appear naturally in the reparametrized version of the energy-dissipation estimate \eqref{enid-ineq} and that are given by a composition of the rescaled joint B-function $\mfB_\eps^\alpha$ and the slopes $\mathscr S_\sfx^*$.
Namely, the \emph{\RJMF s} are defined by \begin{equation} \label{def-Me} \begin{aligned} & \mename \eps \alpha: [0,T] \ti \domq \ti [0,\infty) \ti \mathbf{Q} \to [0,\infty], \\ & \mathfrak{M}_\eps^{\alpha}(t,q,t',q'): = \begin{cases} \mfB_\eps^\alpha(t', u',z', \slov utq, \slov ztq) & \text{ for } \frsubq qtq \neq \emptyset,\\ \infty & \text{otherwise}; \end{cases} \end{aligned} \end{equation} where $ \mfB_\eps^\alpha$ is the rescaled joint B-function from \eqref{eq:def.B.al.eps} associated with the dissipation potentials $\psi_\sfu = \disv u$ and $\psi_{\mathsf{z}} =\calR{+} \disv z$. The basis for the construction of parametrized BV solutions is the reparametrization of the viscous solutions $q_\eps:[0,T]\to \Spq$ in the form $\sfq_\eps(s)=q_\eps(\sft_\eps(s))$ such that the behavior of the function $(\sft_\eps,\sfq_\eps):[0,\sfS_\eps]\to [0,T]\ti \Spq$ is advantageous. In particular, the formation of jumps in $q_\eps$ with $\|q'_\eps(t)\|\approx 1/\eps$ can be modeled by a plateau-like behavior of $\sft_\eps$ with $\sft'_\eps(s)\approx \eps$ and a soft transition of $\sfq_\eps$ with $\|\sfq'_\eps(s)\|\approx 1$. The first usage of such reparametrizations for the vanishing-viscosity limit goes back to \cite{EfeMie06RILS}, but here we stay close to \cite[Sec.\,4.1]{MRS13} in using an `energy-based time reparametrization'. Hence, for a family $(q_\eps)_\eps =(u_\eps, z_\eps)_\eps$ of solutions to \eqref{van-visc-intro} for which the estimates from Lemma \ref{l:1} hold, as well as the additional a priori estimate \eqref{est2} on $\int_0^T\|u'_\eps\|_\Spu \dd t$, we reparametrize the functions $q_\eps$ using the \emph{energy-dissipation arclength} $\mathsf{s}_\eps: [0,T]\to [0,\mathsf{S}_\eps]$ with $\mathsf{S}_\eps:= \mathsf{s}_\eps(T)$ (cf.\ \cite[(4.3)]{MRS13}) defined by \begin{equation} \label{arclength-est1-2} \begin{aligned} \mathsf{s}_\eps(t): = \int_0^t \Big( 1 & + \disve u{\eps^\alpha}( u_\eps'(r)) + \calR(z_\eps'(r)) +\disve z \eps(z_\eps'(r)) \\ & + \frac{\slov u {r}{q_\eps(r)} }{\eps^\alpha} + \frac{\slov z {r}{q_\eps(r)}} \eps + \|u_\eps'(r)\|_{\Spu} \Big) \dd r\,, \end{aligned} \end{equation} so that estimates \eqref{est-quoted-later} and \eqref{est2} yield that $\sup_{\eps>0} \mathsf{S}_\eps \leq C$. Below we consider the reparametrized curves $(\mathsf{t}_\eps, \mathsf{q}_\eps) : [0,\mathsf{S}_\eps] \to [0,T]\ti \Spq $ defined by $\mathsf{t}_\eps: = \mathsf{s}_\eps^{-1}$, $\mathsf{q}_\eps : = q_\eps \circ \sft_\eps$ and show in Section \ref{ss:4.2} that they have an absolutely continuous limit $(\sft,\sfq)$, up to choosing a subsequence. We first remark that the quantities involved in the definition of $\mathsf{s}_\eps$ can be rewritten as \begin{equation} \label{just4clarity} \calR(z_\eps') + \disve u{\eps^\alpha} ( u_\eps') + \disve z\eps ( z_\eps') + \frac{\slov u {t}{q_\eps}}{\eps^\alpha} + \frac{\slov z{t}{q_\eps}} \eps \:=\: \mathfrak{M}_\eps^{\alpha}(t,q_\eps,1,q_\eps').
\end{equation} With this, the energy-dissipation estimate \eqref{enid-ineq} can be rewritten in terms of the parametrized curves $(\mathsf{t}_\eps, \mathsf{q}_\eps) $ in the form (for all $0\leq s_1< s_2 \leq \mathsf S_\eps$) \begin{equation} \label{reparam-enineq} \begin{aligned} & \eneq {\sft_\eps(s_2)}{\sfq_\eps(s_2)} +\int_{s_1}^{s_2} \mathfrak{M}_\eps^\alpha (\sft_\eps(\sigma), \sfq_\eps(\sigma), \sft_\eps'(\sigma), \sfq_\eps'(\sigma)) \,\dd \sigma \\ & \leq \eneq {\sft_\eps(s_1)}{\sfq_\eps(s_1)} +\int_{s_1}^{s_2} \pl_t \eneq {\sft_\eps(\sigma)}{\sfq_\eps(\sigma)} \,\sft'_\eps(\sigma) \,\dd \sigma\,. \end{aligned} \end{equation} Moreover, the definition of $\sfs_\eps$ in \eqref{arclength-est1-2} is equivalent to the normalization condition \begin{equation} \label{normal-cond} \sft_{\eps}'(s) + \mathfrak{M}_\eps^{\alpha}(\sft_\eps(s), \sfq_\eps(s),\sft_\eps'(s),\sfq_\eps'(s)) + \| \sfu_\eps'(s)\|_{\Spu} =1 \qquad \foraa\, s \in (0,\mathsf{S}_\eps)\,. \end{equation} Of course, the reparametrized solutions $\sfq_\eps$ inherit the energy estimate \eqref{est1}, namely \begin{equation} \label{en-est-param} \sup_{s\in [0,\mathsf{S}_\eps]} \mfE (\sfq_\eps(s)) \leq \mathrm e^{2C_\# T} \sup_{\eps\in (0,1)}\calE(0,\sfq_\eps(0))\,. \end{equation} The a priori estimates \eqref{normal-cond} and \eqref{en-est-param} for the reparametrized curves $(\sft_\eps,\sfq_\eps)_\eps $ will be strong enough to ensure their convergence along a subsequence, as $\eps \to 0^+$, to a curve $(\sft,\sfq) : [0,\mathsf{S}]\to [0,T]\ti \Spq $, with $\mathsf{S}= \lim_{\eps \to 0^+}\mathsf{S}_\eps$. The basic properties of $(\sft,\sfq)$ are fixed in the concept of \emph{admissible parametrized curve}, see Definition \ref{def:adm-p-c}. For studying the limit $\eps \to 0^+$, we need to bring into play the limiting \RJMF\ $ \mename{0}{\alpha}$. \begin{definition} \label{defM0} We define $ \mename{0}{\alpha} : [0,T]\ti \domq \ti [0,\infty) \ti \Spq \to [0,\infty] $ via \begin{equation} \label{mename-0} \mename{0}{\alpha}(t,q,t',q') : = \begin{cases} \mfB_0^\alpha(t',u',z', \slov u tq, \slov ztq)\hspace*{-2em} & \hspace*{2em}\text{ for } \frsubq qtq \neq \emptyset, \\[0.3em] 0 & \text{for } t'=0, \, q'=0 \text{ and } \\ & (t,q) \in \overline{\mathrm{dom}(\frname q\calE)}^{\mathrm{w,S}} {\setminus}\mathrm{dom}(\frname q\calE), \\[0.3em] \infty & \text{otherwise}, \end{cases} \end{equation} where $\mfB_0^\alpha$ is defined in Proposition \ref{pr:Mosco.Beps} and $\overline{\mathrm{dom}(\frname q\calE)}^{\mathrm{w,S}} $ is the weak closure of $ \mathrm{dom}(\frname q\calE)$ confined to energy sublevels: \begin{equation} \label{closure-domain-subdifferential} \!\overline{\mathrm{dom}(\frname q\calE)}^{\mathrm{w,S}} \! := \bigset{ (t,q) }{ \exists\, (t_n,q_n)_n \subset \mathrm{dom}(\frname q\calE){:} \ (t_n,q_n) \weakto (t,q), \ \sup_{n} \mfE (q_n) <\infty }\,. \end{equation} \end{definition} It follows from Proposition \ref{pr:Mosco.Beps} that \begin{equation} \label{important-for-later} (t',q') \mapsto \mename{0}{\alpha}(t,q,t',q') \text{ is convex and $1$-homogeneous for all $(t,q) \in [0,\infty) \ti \Spq $}. \end{equation} Relying on Proposition \ref{pr:Mosco.Beps} and Hypothesis \ref{hyp:Sept19}, we are ready to prove the following $\Gamma$-convergence result, which straightforwardly gives that $ \mename{0}{\alpha}$ is (sequentially) lower semicontinuous with respect to the weak topology of $\R\ti\Spq \ti \R\ti\Spq$ along sequences with bounded energy. 
\begin{proposition}[Weak $\Gamma$-convergence of M-functions] \label{pr:Mosco.Meps} The limiting M-function \\ $\mename 0\alpha : [0,T]\ti \domq \ti [0,\infty) \ti \Spq \to [0,\infty]$ is the $\Gamma$-limit of the M-functions $(\mename{\eps}{\alpha})_\eps$, with respect to the weak topology, \emph{along sequences with bounded energy}, namely the following assertions hold: \smallskip \noindent (a) $\Gamma$-$\liminf$ estimate: \begin{subequations} \label{Gamma-convergence-concrete} \begin{equation} \label{Gamma-liminf} \begin{aligned} & \begin{aligned} \Big( (t_\eps,q_\eps,{t_\eps'}, q_\eps')\weakto (t,q,t', q') \text{ in } \R\ti\Spq \ti \R\ti\Spq \text{ as } \eps \to 0^+ \ \text{ with } \sup_{\eps>0} \mfE(q_\eps)< \infty \Big) \end{aligned} \\ &\qquad \Longrightarrow \quad \meq{0}{\alpha} tq{t'}{q'} \leq \liminf_{\eps \to 0^+} \meq{\eps}{\alpha} {t_\eps}{q_\eps}{t_\eps'}{q_\eps'}; \end{aligned} \end{equation} \noindent (b) $\Gamma$-$\limsup$ estimate: \begin{equation} \label{Gamma-limsup} \begin{aligned} & \forall\, (t,q,t',q') \in [0,T]\ti \domq \ti [0,\infty) \ti \Spq \ \exists\, (t_\eps,q_\eps,t_\eps',q_\eps')_\eps \text{ such that } \\ & \quad \text{\upshape\;\ (i)}\quad (t_\eps,q_\eps, t_\eps', q_\eps') \weakto (t,q,t',q') \text{ in } \R\ti\Spq \ti \R\ti\Spq \text{ as } \eps \to 0^+, \\ & \quad \text{\upshape\ (ii)}\quad \sup\nolimits_{\eps>0} \mfE(q_\eps) < \infty, \text{ and } \\ & \quad \text{\upshape(iii)}\quad \meq{0}\alpha tq{t'}{q'} \geq \limsup\nolimits_{\eps \to 0^+} \meq{\eps}\alpha {t_\eps}{q_\eps}{t_\eps'}{q_\eps'}\,. \end{aligned} \end{equation} \end{subequations} \end{proposition} \noindent \begin{proof} Concerning (a), let $ (t_\eps,q_\eps,{t_\eps'}, q_\eps')_\eps$ be a sequence as in \eqref{Gamma-liminf}. Of course we can suppose that $\liminf_{\eps \to 0^+} \meq{\eps}{\alpha} {t_\eps}{q_\eps}{t_\eps'}{q_\eps'}<\infty$, and thus, after extracting a (not relabeled) subsequence realizing the $\liminf$, that $\sup_\eps \meq{\eps}{\alpha} {t_\eps}{q_\eps}{t_\eps'}{q_\eps'}<\infty$. Then, there exists $\bar\eps>0$ such that for all $\eps \in (0,\bar\eps)$ there holds $t_\eps' > 0$, the Fr\'echet subdifferential $\frsubq q{t_\eps}{q_\eps} $ is non-empty, and $\meq {\eps}{\alpha} {t_\eps}{q_\eps}{t_\eps'}{q_\eps'} = \mfB_\eps^\alpha(t_\eps', u_\eps',z_\eps', \slov u{t_\eps}{q_\eps}, \slov z{t_\eps}{q_\eps}) $. In order to apply Proposition \ref{pr:Mosco.Beps} we now need to discuss the boundedness of the slopes $ (\slov u{t_\eps}{q_\eps})_\eps$, $(\slov z{t_\eps}{q_\eps})_\eps$. Indeed, if $t'>0$, then $t_\eps'\geq c>0$ for all $\eps \in (0,\bar\eps)$ (up to choosing a smaller $\bar\eps$), so that, by the definition \eqref{eq:rescaled.B.eps} of $\mfB_\eps^\alpha$, we infer that $ \slov u{t_\eps}{q_\eps} \leq C \eps^\alpha$ and $ \slov z{t_\eps}{q_\eps} \leq C \eps$ for all $\eps \in (0,\bar\eps)$. In the case $t'=0$, suppose e.g.\ that $\liminf_{\eps \to 0}\slov u{t_\eps}{q_\eps}=+\infty$ while $\liminf_{\eps \to 0}\slov z{t_\eps}{q_\eps}<+\infty$. Then, from the coercivity estimate \eqref{eq:1LowBo.Bae} we deduce (up to extracting a not relabeled subsequence) that $u_\eps'\to 0$.
Thus, \[ \begin{aligned} \liminf_{\eps\to 0}\mfB_\eps^\alpha(t_\eps', u_\eps',z_\eps', \slov u{t_\eps}{q_\eps}, \slov z{t_\eps}{q_\eps}) & \geq \liminf_{\eps\to 0}\calB_{\disv z}\big( \frac{t_\eps'}{\eps}, z_\eps', \slov z{t_\eps}{q_\eps}\big) \\ & \geq \mfB_0^\alpha (0,0,z', \slov utq, \slov ztq)\,, \end{aligned} \] with the latter estimate due to Proposition \ref{pr:Mosco.Beps}, Hypothesis \ref{hyp:Sept19}, and the monotonicity of $\mfB^\alpha_0(\tau,q',\sigma_\sfu,\sigma_\sfz)$ in $\sigma_\sfu$ and $\sigma_\sfz$. We may argue similarly in the case $\liminf_{\eps \to 0}\slov u{t_\eps}{q_\eps}<+\infty$ and $\liminf_{\eps \to 0}\slov z{t_\eps}{q_\eps}=+\infty$ and when both limits are finite. The $\Gamma$-$\limsup$ estimate (b) is trivial for all $ (t,q,t',q') \in [0,T]\ti \domq \ti [0,\infty) \ti \Spq $ with $\meq 0\alpha tq{t'}{q'}=\infty$. If $\meq 0\alpha tq{t'}{q'} = \mfB_0^\alpha(t',u',z',\slov utq, \slov ztq)<\infty$, then the $\limsup$ estimate immediately follows via the constant recovery sequence $(t_\eps, q_\eps, t_\eps', q_\eps') \equiv (t,q,t',q')$ with the same arguments as in the proof of Proposition \ref{pr:Mosco.Beps}. Let us now suppose that $(t',q')=(0,0) $ with $ (t,q) \in \overline{\mathrm{dom}(\frname q\calE)}^{\mathrm{w,S}} {\setminus} \mathrm{dom}(\frname q\calE)$, so that $\meq 0\alpha tq{t'}{q'} =0$. We observe that there exists a sequence $(t_n,q_n)_n \subset \mathrm{dom}(\frname q\calE)$ with $(t_n,q_n) \weakto (t,q)$ and $\sup_{n} \mfE(q_n)<\infty$. We will show that for every null sequence $(\eps_k)_{k\in \N}$ there exists a recovery sequence for $(t,q,0,0)$. For this, we first fix $n\in \N$ and associate with $(t_n,q_n, 0, 0)$ the recovery sequence $(t_{\eps_k,n}, q_{\eps_k,n}, t_{\eps_k,n}', q_{\eps_k,n}') = (t_n,q_n,t_{\eps_k,n}', 0)$, where we choose $ t'_{\eps_k,n}>0$ such that \[ t'_{\eps_k,n} \leq \eps_k \, ,\quad \frac{t_{\eps_k,n}'}{\eps_k^\alpha} \slov u{t_n}{q_n} \leq \eps_k \, , \quad \text{and} \quad \frac{t_{\eps_k,n}'}{\eps_k} \slov z{t_n}{q_n} \leq \eps_k\,. \] Setting $n=k$ we obtain the sequence $(\tilde{t}_{\eps_k}, \tilde{q}_{\eps_k}, \tilde{t}_{\eps_k}', \tilde{q}_{\eps_k}')= (t_k,q_k, t'_{\eps_k,k}, 0) \weakto (t,q,0,0)$, which gives (i). By construction we also have $\sup_{k\in \N}\mfE( \tilde{q}_{\eps_k})\leq \sup_{n\in \N}\mfE( q_n) <\infty$, which gives (ii). Moreover, because of $ \tilde{t}_{\eps_k}'> 0$ and $\tilde q'_{\eps_k}=0$ we have \begin{align*} &\meq{\eps_k}\alpha {\tilde{t}_{\eps_k}}{\tilde{q}_{\eps_k}}{\tilde{t}_{\eps_k}'}{\tilde{q}_{\eps_k}'} = \mfB^\alpha_{\eps_k}(\tilde{t}_{\eps_k}', 0, 0, \slov u{t_k}{q_k}, \slov z{t_k}{q_k}) \\ &= \frac{t_{\eps_k,k}'}{\eps_k^\alpha} \slov u{t_k}{q_k} + \frac{t_{\eps_k,k}'}{\eps_k} \slov z{t_k}{q_k} \leq 2\eps_k\to 0 = \meq 0\alpha tq{0}{0}\, . \end{align*} Thus, condition (iii) in \eqref{Gamma-limsup} holds as well. With this, Proposition \ref{pr:Mosco.Meps} is established. \end{proof} For later use we also introduce the \emph{`reduced' \RJMF} \begin{equation} \label{decomposition-M-FUNCTION} \mredname{0}{\alpha} : [0,T]\ti \domq \ti [0,\infty) \ti \Spq \to [0,\infty], \quad \mredq 0\alpha tq{t'}{q'} : = \meq 0\alpha tq{t'}{q'}-\calR(z'). \end{equation} We observe that the dissipation potentials $\psi_\sfu: = \disv u$ and $\psi_{\mathsf{z}}: =\calR{+} \disv z$ have rate-independent parts null and equal to $\calR$, respectively, and that $\mfb_{\psi_{\mathsf{z}}} = \calR + \mfb_{\disv z}$ thanks to \eqref{later-added}.
Thus, from \eqref{mename-0} and Proposition \ref{pr:Mosco.Beps} we infer the following representation formula for $ \mredname{0}{\alpha}$: \begin{subequations} \label{l:partial} \begin{align} \nonumber \text{for } \frsubq qtq \neq \emptyset \text{ we have }\hspace*{-2em} \\ \qquad \nonumber t'>0:\qquad & \mredq 0\alpha tq{t'}{q'} = \begin{cases} 0 & \text{ for } \slov utq=\slov ztq=0, \\ \infty& \text{ otherwise}; \end{cases} \\ \nonumber t'=0, \ \alpha>1{:}\ \ & \mredq 0\alpha tq{0}{q'} =\begin{cases} \mfb_{\disv z} (z',\slov ztq) &\text{for }\slov utq=0, \\ \mfb_{\disv u}(u',\slov utq)& \text{for } \slov utq>0, \ z'=0,\\ \infty& \text{otherwise}; \end{cases} \\ \label{l:partial-1} t'=0, \ \alpha=1{:}\ \ & \mredq 0{\,1\,} tq{0}{q'}= \mfb_{\disv u \oplus \disv z}(q',\slov utq{+}\slov ztq), \\ \nonumber t'=0,\ \alpha<1{:}\ \ & \mredq 0\alpha tq{0}{q'} =\begin{cases} \mfb_{\disv u} (u',\slov utq) &\text{for } \slov ztq=0, \\ \mfb_{\disv z}(z',\slov ztq)& \text{for } \slov ztq>0, \ u'=0,\\ \infty& \text{otherwise}, \end{cases} \\ \nonumber \text{for } \frsubq qtq = \emptyset \text{ we have }\hspace*{-2em} \\ &\hspace*{-6em} \label{l:partial-2} \mredq 0\alpha tq{t'}{q'} = \begin{cases} 0 & \text{for } t'=0, \, q'=0 \text{ and } (t,q) \in \overline{\mathrm{dom}(\frname q\calE)}^{\mathrm{w,S}} {\setminus}\mathrm{dom}(\frname q\calE), \\[0.3em] \infty & \text{otherwise}. \end{cases} \end{align} \end{subequations} The expressions in \eqref{l:partial} reflect the fact that $\mename 0\alpha$ depends on $\alpha$ only through the three cases given by $\alpha \in (0,1)$, $\alpha=1$, or $\alpha>1$. We emphasize that $ \mredname 0\alpha$ depends on $\calR$ as well, namely through $\slovname z$ which is defined via $\conj z$. In particular, for $t'>0$ finiteness of $ \mredq 0\alpha {t}q{t'}{q'}$ enforces that $0=\slov u tq = \slov ztq $ and hence, taking into account Hypothesis \ref{hyp:Sept19}, \begin{equation} \label{station+loc-stab-forced} \left\{ \begin{array}{llc} \text{the stationarity of $u$:} & \exists\, (\mu,\zeta) \in \frsubq qtq\, : & \mu =0 \, , \\[0.3em] \text{the local stability of $z$:} & \exists\, (\widetilde\mu,\widetilde\zeta) \in \frsubq qtq\, : & {-}\widetilde\zeta \in \pl \calR (0)\,. \end{array} \right. \end{equation} In the specific cases of dissipation potentials $\disv u$ and $\disv z$ considered in Example \ref{ex:VVCP}, we even have the explicit expression of the respective contact potentials $\mfb_{\disv u}$ and $\mfb_{\disv z}$, and thus of the (reduced) \RJMF\ $\mredname 0\alpha$. In particular, let us revisit the $p$-homogeneous case: \begin{example}[The $p$-homogeneous case]\slshape \label{ex:p-homog} Suppose that the dissipation potentials $\disv u$ and $\disv z$ are positively $p$-homogeneous with the \emph{same} $p\in (1,\infty)$.
Then, combining \eqref{p-homo-mfb} with \eqref{l:partial} we conclude that for $t'=0$ and $\frsubq qtq \neq \emptyset$ we have (where $\hat{c}_p = p^{1/p} (p')^{1/p'}$) \begin{align} \nonumber \alpha>1{:}\ \ & \mredq 0\alpha tq{0}{q'} =\begin{cases} \hat{c}_p \left( \disv z(z')\right)^{1/p} \left( \slov ztq \right)^{1/p'} &\text{for }\slov utq=0, \\ \hat{c}_p \left( \disv u(u')\right)^{1/p} \left( \slov utq \right)^{1/p'} & \text{for } \slov utq>0 , \ z'=0,\\ \infty& \text{otherwise}; \end{cases} \\ \label{RJMF-p-hom} \alpha=1{:}\ \ & \mredq 0\alpha tq{0}{q'}= \hat{c}_p \left( \disv u(u'){+} \disv z(z') \right)^{1/p} \left(\slov utq {+} \slov ztq \right)^{1/p'} \\ \nonumber \alpha<1{:}\ \ & \mredq 0\alpha tq{0}{q'} =\begin{cases} \hat{c}_p \left( \disv u(u')\right)^{1/p} \left( \slov utq \right)^{1/p'} &\text{for } \slov ztq=0, \\ \hat{c}_p \left( \disv z(z')\right)^{1/p} \left( \slov ztq \right)^{1/p'} & \text{for } \slov ztq>0, \ u'=0,\\ \infty& \text{otherwise}. \end{cases} \end{align} \end{example} The M-functions $ \mename \eps{\alpha}$ enjoy suitable coercivity properties that will play a key role in the compactness arguments for proving the existence of $\BV$ solutions. These estimates are direct consequences of the lower bounds on $\mfB^\alpha_\eps$ derived in Lemma \ref{le:LoBo.Bae} and the definition of $\mename \eps\alpha$. What is important here is the uniformity in $\eps\in [0,1]$. We also emphasize that we are stating a result that focuses on $z'$ and ignores $u'$, which reflects the fact that we always assume the bound on $\|u'_\eps\|_{\rmL^1(0,T;\Spu)}$ whereas for $z'_\eps$ we only have a bound in $\rmL^1(0,T;\Spy)$, but we need the derivative $\sfz'(s)\in \Spz$ at least in points where $\slov z{\sft(s)}{\sfq(s)}>0$. \begin{lemma} \label{new-lemma-Alex} The following estimates hold for all $c>0$ and $\eps\in [0,1]$ with $\varkappa$ from Lemma \ref{le:LoBo.Bae}: \begin{subequations} \label{est-Alex} \begin{align} \label{est-Alex-1} \qquad\alpha\in (0,1):\quad & \slov ztq \geq c \quad &\Longrightarrow \quad \|z'\|_{\Spz}\: \leq \: \frac{\mename \eps\alpha(t,q,t',q')}{\varkappa(c)}, \qquad \\ \label{est-Alex-1-bis} \alpha\geq 1:\quad &\slov utq{+} \slov ztq \geq c &\Longrightarrow \quad \|z'\|_{\Spz} \: \leq \: \frac{\mename \eps{\alpha}(t,q,t',q')}{\varkappa(c)}. \qquad \end{align} \end{subequations} \end{lemma} \noindent The proof of \eqref{est-Alex-1} and \eqref{est-Alex-1-bis} follows directly from the definition of $\mename \eps\alpha$ and the corresponding estimates \eqref{eq:1LowBo.Bae} and \eqref{eq:1LowBo.Bag1} for $\mfB^\alpha_\eps$ in Lemma \ref{le:LoBo.Bae}, respectively. The following result is an immediate consequence of the definition of $\mename \eps\alpha$ and of Proposition \ref{pr:VVCP}(b5), if we recall the definitions of $\mathfrak A_\sfx^{*,0}$ from \eqref{not-empty-mislo}. \begin{lemma} \label{new-lemma-Ricky} For all $\alpha>0$ and all $\eps\in [0,1]$ we have that \begin{equation} \label{coerc-ric} \mename \eps\alpha (t,q,t',q') \geq - \pairing{}{\Spu}{{ \mu }}{ u'} - \pairing{}{\Spz}{\zeta}{z'} \end{equation} for all $(t,q,t',q') \in [0,T]\ti \Spq \ti [0,\infty) \ti \Spq$ and all $\xi = (\mu,\zeta) \in \argminSlo utq, \, \zeta \in \argminSlo ztq$.
\end{lemma} \Subsection{Admissible parametrized curves} \label{ss:4.1bis} The concept of \emph{admissible parametrized curve} is tailored in such a way that it is able to describe limiting curves $(\sft,\sfq):[a,b]\to [0,T]\ti \Spq$ of a family of parametrized viscous curves $(\sft_\eps,\sfq_\eps)_{\eps}$ satisfying \[ \sup_{\eps\in (0,1)} \int_a^b \meq\eps{\alpha} {\sft_\eps(s)}{\sfq_\eps(s)} {\sft'_\eps(s)} {\sfq'_\eps(s)} \dd s \ < \ \infty. \] Since Proposition \ref{pr:Mosco.Meps} guarantees that $\mename0{\alpha}$ is the $\Gamma$-limit of $\mename\eps{\alpha}$, it seems natural that such curves can be characterized by the condition \begin{equation} \label{eq:M0a.sft.sfq} \int_a^b \calR(z'(s)) \dd s + \int_a^b \mredq 0{\alpha} {\sft(s)}{\sfq(s)} {\sft'(s)} {\sfq'(s)} \dd s < \infty. \end{equation} However, this expression is not well-defined, since we are not able to define the derivatives $\sfq'(s)=(\sfu'(s),\sfz'(s))$ almost everywhere. To reformulate \eqref{eq:M0a.sft.sfq} in a proper way, we take advantage of the special form of $\mathfrak{M}^{\alpha,\mathrm{red}}_0$ given in \eqref{l:partial} by observing that $\sfz'(s)$ is only needed on the special sets $\SetG\alpha\sft\sfq$ to be defined below. Hence, condition \eqref{eq:M0a.sft.sfq} can be replaced by \eqref{summability} in Definition \ref{def:adm-p-c} ahead, which relies on the fact that absolutely continuous curves $z$ with values in (the possibly non-reflexive space) $\Spy$ need not be differentiable with respect to time. Therefore, the pointwise derivative $z'$ is replaced by a scalar surrogate, cf.\ \eqref{calR-mder} below, whose definition involves the dissipation potential $\calR$ and generalizes the concept of \emph{metric derivative} from the theory of gradient flows in metric spaces \cite{AGS08}. \begin{enumerate} \item We say that a curve $z: [a,b]\to \Spy$ is $\calR$-absolutely continuous if there exists a nonnegative function $m \in \rmL^1(a,b)$ such that \[ \calR( z(s_2) {-} z(s_1) ) \leq \int_{s_1}^{s_2} m(s) \dd s \qquad \text{for every } a\leq s_1\leq s_2 \leq b, \] and we denote by $ \mathrm{AC}([a,b]; \Spy, \calR)$ the space of $\calR$-absolutely continuous curves. \item For a curve $z \in \mathrm{AC}([a,b]; \Spy, \calR)$ we set \begin{equation} \label{calR-mder} \calR [z'](s): = \lim_{h\to 0} \calR \Big( \frac1h \big( z(s{+}h) {-} z(s)\big) \Big) \qquad \foraa\, s \in (a,b). \end{equation} \end{enumerate} We are now in a position to give our definition of admissible parametrized curve, which adapts \cite[Def.\ 4.1]{MRS13} to the present multi-rate system. We recall that the slope functions $\slovname x$ are lsc according to Hypothesis \ref{hyp:Sept19}. Hence, along continuous curves $(\sft,\sfq):[a,b]\to [0,T]\ti \Spq$ the following sets are relatively open: \begin{equation} \label{setGalpha} \SetG\alpha\sft\sfq : = \begin{cases} \bigset{ s\in [a, b] }{ \slov u{\sft(s)}{\sfq(s)} {+} \slov z{\sft(s)}{\sfq(s)} >0 } & \text{for } \alpha \geq 1, \\ \bigset{ s\in [a, b] }{ \slov z{\sft(s)}{\sfq(s)} >0 } & \text{for } \alpha\in (0,1). \end{cases} \end{equation} The difference between the cases $\alpha \geq 1$ and $\alpha \in (0,1)$ in the definition of the set $ \SetG\alpha\sft\sfq$ is commented on after the following definition.
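Before giving the definition, we add an elementary illustration of the surrogate \eqref{calR-mder}; this example is not needed in the sequel and only assumes, as usual for a rate-independent dissipation, that $\calR$ is positively $1$-homogeneous. Consider the affine segment $z(s) = (1{-}s)\, z_0 + s\, z_1$, $s\in [0,1]$, with $z_0, z_1 \in \Spy$. Since $z(s_2) - z(s_1) = (s_2{-}s_1)(z_1{-}z_0)$, the curve $z$ belongs to $\mathrm{AC}([0,1];\Spy,\calR)$ with $m \equiv \calR(z_1{-}z_0)$, and
\[
\calR[z'](s) = \lim_{h\to 0} \calR\Big( \frac1h \big( z(s{+}h) {-} z(s)\big)\Big) = \calR(z_1{-}z_0) \qquad \text{for all } s \in (0,1),
\]
so that the scalar surrogate is available even when the increment $z_1{-}z_0$ does not belong to the smaller space $\Spz$.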
\begin{definition}[${\mathscr{A}([a,b];[0,T]\ti \Spq)}$] \label{def:adm-p-c} A curve $(\sft,\sfq) = (\sft,\sfu,\sfz): [a,b] \to [0,T]\ti \Spq$ is called an \emph{admissible parametrized curve} if \begin{enumerate} \item $\sft$ is non-decreasing, $\sft \in \AC([a,b];\R)$, $\sfu \in \AC([a,b];\Spu)$ and $\sfz \in \mathrm{AC}([a,b];\Spy,\calR)$; \item $\slov u{\sft}{\sfq} = 0$ and $\slov z {\sft}{\sfq} = 0$ on the set $\bigset{ s \in (a,b) }{ \sft'(s) >0 } $; \item $\sfz$ is locally $\Spz$-absolutely continuous on the open set $ \SetG\alpha {\sft}{\sfq}$, and $\sft$ is constant on every connected component of $\SetG\alpha \sft\sfq$; \item $ \sup_{s\in [a,b]}\mfE (\sfq(s)) \leq E$ for some $E>0$; \item there holds \begin{equation} \label{summability} \int_{a}^{b} \calR [\sfz'](s) \dd s + \int_{\SetG\alpha\sft\sfq} \mredname 0\alpha (\sft(s),\sfu(s),\sfz(s),0,\sfu' (s),\sfz' (s)) \dd s <\infty\,. \end{equation} \end{enumerate} We will denote by $\mathscr{A}([a,b];[0,T]\ti \Spq)$ the collection of all admissible parametrized curves from $[a,b]$ to $[0,T]\ti \Spq$. Furthermore, we say that $(\sft,\sfq) \in \mathscr{A}([a,b];[0,T]\ti \Spq)$ is \begin{itemize} \item[\textbullet] \emph{non-degenerate}, if \begin{equation} \label{non-degeneracy} \sft'(s) + \calR [\sfz'](s) + \| \sfu'(s)\|_\Spu >0 \quad \text{ for a.a.\ $s\in (a,b)$;} \end{equation} \item[\textbullet] \emph{surjective}, if $\sft(a) =0$ and $\sft(b)=T$. \end{itemize} Finally, in the case in which the function $\sft$, defined on the canonical interval $[0,1]$, is constant with $\sft(s) \equiv t$ for some $t\in [0,T]$, we call $\sfq$ an \emph{admissible transition curve between $q_0:=q(0)$ and $q_1: = q(1)$ at time $t$}, and we will use the notation \[ \admtcq t{q_0}{q_1}: = \{ \sfq: [0,1]\to \Spq\, : \ (\sft, \sfq) \in \mathscr{A}([0,1];[0,T]\ti \Spq), \ \sft(s) \equiv t, \ \sfq(0) = q_0, \, \sfq(1) = q_1 \}. \] \end{definition} The requirement that $\sfz$ has to be locally $\Spz$-absolutely continuous on the set $\SetG\alpha \sft\sfq$, and the different definition of $\SetG\alpha \sft\sfq$ in the cases $\alpha \geq 1$ and $\alpha \in (0,1)$, are clearly motivated by properties \eqref{est-Alex} (which, in turn, derive from Lemma \ref{le:LoBo.Bae}). Indeed, in the case $\alpha \in (0,1)$, in view of \eqref{est-Alex-1}, once $\meq 0\alpha {\sft}{\sfq}{\sft'}{\sfq'}$ is estimated and $\slov z{\sft}{\sfq}$ is strictly positive, then $\meq 0\alpha {\sft}{\sfq}{\sft'}{\sfq'}$ provides a control on $\|\sfz'\|_{\Spz}$. Because of this, parametrized curves are required to be absolutely continuous on the set $\slov z{\sft}{\sfq}>0$. In the case $\alpha \geq 1$, in view of estimate \eqref{est-Alex-1-bis}, the $z$-component of admissible parametrized curves is expected to be absolutely continuous on the larger set where $\slov u{\sft}{\sfq}+\slov z{\sft}{\sfq}>0$. Hence, on the one hand, in \eqref{summability} we integrate only over the set $\SetG\alpha\sft\sfq$, because it is in $\SetG\alpha\sft\sfq$ where the pointwise derivative \ $\sfz' \in \Spz$ exists, which makes the term $\mredq 0\alpha{\sft(s)}{\sfq(s)}{0}{\sfq'(s) }$ well defined. On the other hand, the specific form of $\mredname{0}{\alpha}$ in \eqref{l:partial} and the fact that $\mfb_\psi(v,0)=0$ for all $v$ show that $z' \in \Spz$ is only needed on the set $\SetG\alpha\sft\sfq $. 
Hereafter, along an admissible parametrized curve $(\sft,\sfq)$ we shall use the notation \begin{equation} \label{short-hand-M0} \mathfrak{M}_0^\alpha [\sft,\sfq,\sft',\sfq'] (s): = \calR[\sfz'](s) + \indic_{\SetG\alpha\sft\sfq}(s) \mredq 0\alpha{\sft(s)}{\sfq(s)}{0}{\sfq'(s)} \end{equation} with $ \calR[\sfz'] $ from \eqref{calR-mder} and $\indic_{\SetG\alpha\sft\sfq}$ the indicator function of the set $\SetG\alpha\sft\sfq$. Let us stress that the above notation makes sense only along an admissible curve. If the admissible curve $(\sft,\sfq)$ has the additional property $\sfz \in \AC([a,b];\Spz)$ and thus $\sfz'(s)$ is well defined as an element of $\Spz\subset\Spy$ for almost all $s\in (a,b)$, then $ \calR[\sfz'](s)= \calR(\sfz'(s)) $ a.e.\ in $(a,b)$. Hence, for an admissible curve with $\sfz \in \AC([a,b];\Spz)$ we have $ \mathfrak{M}_0^\alpha [\sft,\sfq,\sft',\sfq'] (s) = \mathfrak{M}_0^\alpha (\sft(s),\sfq(s),\sft'(s),\sfq'(s)) \ \foraa s \in (a,b). $ \Subsection{Definition of parametrized Balanced-Viscosity solutions} \label{ss:4.2} We are now in a position to precisely define parametrized Balanced-Viscosity ($\pBV$) solutions to the rate-independent system $\RIS$, see Definition \ref{def:pBV}. At the core of this concept there lies a (parametrized) chain-rule inequality, cf.\ Hypothesis \ref{h:ch-rule-param}, which will be imposed as an additional property of the rate-independent system, while Proposition \ref{prop:better-chain-rule-MOexpl} will provide sufficient conditions for the validity of Hypothesis \ref{h:ch-rule-param}. We will also introduce an \emph{enhanced} version of the $\pBV$ concept, in which we additionally require $z$ to be absolutely continuous with values in $\Spz$. In \cite[Sec.\,4.2]{MRS13} this notion had already been introduced, using a different terminology that might create slight confusion in the present multi-rate context and has thus been changed here. We believe the enhanced concept to be significant as well because, for some examples (cf.\ e.g.\ the applications discussed in Section \ref{s:appl-dam}), the vanishing-viscosity analysis will directly lead to enhanced BV solutions. The definition of $\pBV$ solutions relies on the validity of the following assumption on the rate-independent system $\RIS$. \begin{hypothesis}[Chain rule along admissible parametrized curves] \label{h:ch-rule-param} For every admissible parametrized curve $(\sft,\sfq) \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$ \begin{equation} \label{ch-rule} \begin{aligned} &\text{the map $s\mapsto \eneq {\sft(s)}{\sfq(s)} $ is absolutely continuous on $[a,b]$ and} \\ &\frac{\rmd}{\rmd s} \eneq{\sft(s)}{\sfq(s)} - \pl_t \eneq{\sft(s)}{\sfq(s)} \sft'(s) \geq - \mathfrak{M}_0^\alpha [\sft,\sfq,\sft',\sfq'](s) \ \foraa\, s \in (a,b). \end{aligned} \end{equation} \end{hypothesis} \begin{remark} \label{rmk:ptfw-chain-rule} \slshape In general, the chain-rule inequality \eqref{ch-rule} along a given admissible parametrized curve $(\sft,\sfq)$ does not follow from the chain rule of Hypothesis \ref{h:ch-rule}, because for these curves the pointwise derivative $\sfz'$ exists as an element in $\Spz$ only on the set $\SetG\alpha\sft\sfq$ from \eqref{setGalpha}. That is why Proposition \ref{prop:better-chain-rule-MOexpl} provides a sufficient condition under which Hypothesis \ref{h:ch-rule} ensures the validity of Hypothesis \ref{h:ch-rule-param}, albeit restricted to admissible curves satisfying additionally $\sfz \in \AC ([a,b];\Spz)$.
\end{remark} We are now ready to introduce the precise notion of $\pBV$ solutions. \begin{definition}[$\pBV$ and enhanced $\pBV$ solutions] \label{def:pBV} In addition to Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, and \ref{hyp:Sept19}, let the rate-independent system $\RIS$ satisfy Hypothesis \ref{h:ch-rule-param}. We call a curve $(\sft,\sfq) \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$ a \emph{parametrized Balanced-Viscosity} ($\pBV$) solution to the rate-independent system $\RIS$ if $(\sft,\sfq)$ satisfies the \emph{parametrized energy-dissipation balance} \begin{equation} \label{def-parBV} \begin{aligned} \eneq{\sft(s_2)}{\sfq(s_2)} +\int_{s_1}^{s_2} \mathfrak{M}_0^\alpha [\sft,\sfq, \sft',\sfq'](s) \dd s = \eneq{\sft(s_1)}{\sfq(s_1)} +\int_{s_1}^{s_2} \pl_t \eneq{\sft(s)}{\sfq(s)} \sft'(s) \dd s \end{aligned} \end{equation} for every $ a \leq s_1 \leq s_2\leq b$, where $\mathfrak{M}_0^\alpha$ is defined in \eqref{short-hand-M0}. A $\pBV$ solution $(\sft,\sfq)=(\sft,\sfu,\sfz)$ is called an \emph{enhanced $\pBV$ solution} if, additionally, $\sfz\in \AC ([a,b];\Spz)$. \end{definition} For an enhanced $\pBV$ solution $(\sft,\sfq)$ we have $\sfq \in \AC ([a,b];\Spq)$, since $\sfq \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$ already implies $\sfu \in \AC ([a,b];\Spu)$. As a consequence of the chain-rule inequality \eqref{ch-rule} from Hypothesis \ref{h:ch-rule-param}, we have the following characterization. \begin{lemma}[Characterization of $\pBV$ solutions] \label{l:characterizBV} Let Hypothesis \ref{h:ch-rule-param} hold additionally. Then for an admissible parametrized curve $(\sft,\sfq) \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$, the following three properties are equivalent: \begin{enumerate} \item $(\sft,\sfq) $ is a $\pBV$ solution of the rate-independent system $\RIS$; \item $(\sft,\sfq) $ fulfills the \emph{upper energy estimate} $\leq$ in \eqref{def-parBV} for $s_1=a$ and $s_2=b$; \item $(\sft,\sfq) $ fulfills the pointwise identity for a.a.\ $s\in (a,b)$ \begin{equation} \label{ptw-ident} \begin{aligned} \frac{\rmd}{\rmd s} \eneq{\sft(s)}{\sfq(s)} - \pl_t \eneq{\sft(s)}{\sfq(s)} \sft'(s) =- \mathfrak{M}_0^\alpha [\sft,\sfq,\sft',\sfq'](s) \,. \end{aligned} \end{equation} \end{enumerate} \end{lemma} \noindent The proof is a simple adaptation of the arguments for \cite[Prop.\,5.3]{MRS12} and \cite[Cor.\,3.5]{MRS13} and is thus omitted. \Subsection{Existence results for $\pBV$ solutions} \label{su:Exist.pBV} Our first main result states that any family $(\sft_\eps,\sfu_\eps,\sfz_\eps)_\eps$ obtained by suitably rescaling (cf.\ Remark \ref{rmk:arbitrary-param} ahead) a family of solutions to the viscous system \eqref{van-visc-intro} converges along a subsequence, as $\eps\to 0^+$, to a parametrized Balanced-Viscosity solution of the rate-independent system $\RIS$. \begin{theorem}[Existence of $\pBV$ solutions] \label{thm:existBV} Under Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{hyp:Sept19}, and \ref{h:ch-rule-param}, let $(q_{\eps_k})_k = (u_{\eps_k}, z_{\eps_k})_k \subset \AC ([0,T]; \Spq)$ be a sequence of solutions to the generalized viscous gradient system \eqref{van-visc-intro} with $(\eps_k)_k \subset (0,\infty)$ a null sequence. Suppose that \begin{equation} \label{init-data-cv} q_{\eps_k}(0) \to q_0 \text{ in } \Spq \ \text{ and } \ \eneq 0{q_{\eps_k}(0)} \to \eneq 0{q_0} \qquad \text{as $k\to\infty$}, \end{equation} for some $q_0=(u_0,z_0) \in \domq$.
Let $\sft_{\eps_k}: [0,\sfS]\to [0,T]$ be non-decreasing surjective time-rescalings such that $\sfq_{\eps_k} = (\sfu_\epsk,\sfz_\epsk)$ defined via $\sfq_\epsk(s)=q_{\eps_k}(\sft_{\eps_k}(s))$ satisfies \begin{equation} \label{condition-4-normali} \begin{aligned} & \exists\, C>0 \ \forall\, k \in \N \ \ \foraa s \in (0,\sfS): \\ & \sft_\epsk'(s) + \calR (\sfz_\epsk'(s)) + \mredq {\epsk} \alpha {\sft_\epsk(s)}{\sfq_\epsk(s)} {\sft_\epsk'(s)}{\sfq_\epsk'(s)} + \|\sfu_\epsk'(s)\|_{\Spu} \leq C. \end{aligned} \end{equation} Then, there exist a (not relabeled) subsequence and a curve $(\sft,\sfq) \in \mathscr{A}([0,\sfS]; [0,T]\ti \Spq)$ such that \begin{enumerate} \item \mbox{}\vspace{-1em} \begin{equation} \label{continuity-properties} \begin{aligned} & \sft \in \rmC_{\mathrm{lip}}^0 ([0,\sfS]; [0,T]), \quad \sfq = (\sfu,\sfz) \in \rmC_{\mathrm{weak}}^0([0,\sfS]; \Spw\ti \Spx) , \\ &\sfu \in \rmC_{\mathrm{lip}}^0 ([0,\sfS];\Spu), \quad \sfz \in \rmC_{\mathrm{lip}}^0 ([0,\sfS];\Spy) \cap \rmC^0([0,\sfS];\Spz); \end{aligned} \end{equation} \item the following convergences hold as $k\to\infty$: \begin{subequations} \label{cvs-eps} \begin{align} & \label{cvs-eps-t} \sft_{\eps_k} \to \sft \text{ in } \rmC^0([0,\sfS]), \\ \label{cvs-eps-u-z-added} \mbox{}\quad& \sfu_\epsk \weaksto \sfu \text{ in } W^{1,\infty} (0,\sfS;\Spu), && \sfz_\epsk \to \sfz \text{ in } \rmC^0 ([0,\sfS];\Spz), \\ \label{cvs-eps-u-z} & \sfu_\epsk(s) \weakto \sfu(s) \text{ in } \Spw \ \text{and} && \sfz_\epsk(s) \weakto \sfz(s) \text{ in } \Spx \qquad \text{ for all } s \in [0,\sfS]; \quad \mbox{} \end{align} \end{subequations} \item $(\sft,\sfq)$ fulfills the upper energy-dissipation estimate $ \leq$ in \eqref{def-parBV} on $[0,\sfS]$, hence $(\sft,\sfq)$ is a $\pBV$ solution to the rate-independent system $\RIS$. \end{enumerate} Moreover, $(\sft,\sfu,\sfz)$ is surjective and there hold the additional convergences \begin{subequations} \label{eq:E.M.cvg.PBV} \begin{align} & \label{cvs-eps-energy} \eneq{\sft_{\eps_k}(s)}{\sfq_{\eps_k}(s)} \to \eneq{\sft(s)}{\sfq(s)} \qquad \text{for all } s \in [0,\sfS], \\ & \label{cvs-eps-M} \int_{s_1}^{s_2} \mathfrak{M}_{\eps_k}^\alpha (\sft_{\eps_k}(\sigma) ,\sfq_{\eps_k}(\sigma),\sft'_{\eps_k}(\sigma), \sfq'_{\eps_k}(\sigma)) \dd \sigma \to \int_{s_1}^{s_2} \mathfrak{M}_{0}^\alpha [\sft,\sfq,\sft',\sfq'](\sigma) \dd\sigma \end{align} \end{subequations} for all $0\leq s_1\leq s_2 \leq \sfS$. \end{theorem} We postpone the proof of Theorem \ref{thm:existBV} to Section \ref{ss:8.2}, but point out here that the core of the limit passage in the parametrized energy-dissipation estimate \eqref{reparam-enineq}, leading to \eqref{def-parBV}, lies in the following straightforward consequence of Ioffe's theorem \cite{Ioff77LSIF} (see also \cite[Thm.\,21]{Valadier90}). A `metric version' of Proposition \ref{prop:Ioffe} below was proved in \cite[Lemma 3.1]{MRS09}. \begin{proposition} \label{prop:Ioffe} Let $\Iof$ be a weakly closed subset of $\Spq$, and let $(\mathscr{M}_\eps)_{\eps}, \, \mathscr{M}_0: \R \ti \Iof \ti \R \ti \Spq \to [0,\infty]$ be measurable and weakly lower semicontinuous functionals fulfilling the $\Gamma$-$\liminf$ estimate \begin{equation} \label{Gamma-liminf-weak} \begin{aligned} \Big( (t_\eps,q_\eps,{t_\eps'}, q_\eps')\weakto (t,q,t', q') \text{ in } \R\ti\Iof \ti \R\ti\Spq \text{ as } \eps \to 0^+ \Big) \Longrightarrow \mathscr{M}_0(t,q,t',q') \leq \liminf_{\eps \to 0^+} \mathscr{M}_\eps(t_\eps,q_\eps, t_\eps', q_\eps')\,.
\end{aligned} \end{equation} Suppose that, for $\eps \geq 0$, the functional $\mathscr{M}_\eps(t,q,\cdot,\cdot)$ is convex for every $(t,q) \in \R \ti \Iof$. Let $(\sft_\eps, \sfq_\eps), \, (\sft,\sfq) \in \AC([a,b]; \R\ti\Iof)$ fulfill \begin{equation} \label{ideal-convergences} \sft_\eps(s) \to \sft(s), \quad \sfq_\eps(s) \weakto \sfq(s) \text{ for all } s\in [a,b], \qquad (\sft'_\eps, \sfq_\eps') \weakto (\sft',\sfq') \text{ in } \rmL^1(a,b; \R \ti \Spq)\,. \end{equation} Then, \begin{equation} \label{thesis-Ioffe} \liminf_{\eps \to 0^+} \int_a^b \mathscr{M}_\eps(\sft_\eps(s), \sfq_\eps(s), \sft_\eps'(s),\sfq_\eps'(s)) \dd s \geq \int_a^b \mathscr{M}_0(\sft(s), \sfq(s), \sft'(s),\sfq'(s)) \dd s\,. \end{equation} \end{proposition} \begin{proof} It is sufficient to introduce the functional $\bar{\mathscr{M}} :\R \ti \Iof \ti \R \ti \Spq \ti [0,\infty] \to [0,\infty]$ defined by \[ \bar{\mathscr{M}}(t,q,t',q',\eps): = \left\{ \begin{array}{ll} \mathscr{M}_\eps(t,q,t',q') & \text{if } \eps>0, \\ \mathscr{M}_0(t,q,t',q') & \text{if } \eps=0, \end{array} \right. \] and to observe that $\bar{\mathscr{M}}$ is lower semicontinuous with respect to the weak topology of $\R \ti \Iof \ti \R \ti \Spq \ti [0,\infty]$ and convex in $(t',q')$ for every $(t,q) \in \R \ti \Iof$ and $\eps\geq 0$. Then, by Ioffe's theorem we conclude that \[ \liminf_{\eps \to 0^+} \int_a^b \bar{\mathscr{M}}(\sft_\eps(s), \sfq_\eps(s), \sft_\eps'(s),\sfq_\eps'(s),\eps) \dd s \geq \int_a^b \bar{\mathscr{M}}(\sft(s), \sfq(s), \sft'(s),\sfq'(s),0) \dd s\,, \] i.e., \eqref{thesis-Ioffe} is established. \end{proof} \begin{remark} \label{rmk:non-deg} \slshape Theorem \ref{thm:existBV} does not guarantee that the $\pBV$ solution is \emph{non-degenerate} even if the quantity in \eqref{condition-4-normali} has a uniform positive lower bound. Nonetheless, any (possibly degenerate) solution $(\sft,\sfq)$ can be reparametrized to a \emph{non-degenerate} one $(\widetilde\sft,\widetilde\sfq): [0,\widetilde\sfS]\to [0,T]\ti \Spq$, fulfilling \begin{equation} \label{normalization-gained} \widetilde\sft'(\sigma) + \calR [\widetilde\sfz'](\sigma) + \| \widetilde\sfu'(\sigma)\|_\Spu =1 \qquad \foraa\, \sigma \in (0,\widetilde\sfS). \end{equation} For this, we proceed as in \cite{EfeMie06RILS} and associate with $(\sft,\sfq)$ the rescaling function $\widetilde\sigma$ defined by $\widetilde\sigma(s) = \int_0^s ( \sft'(r) {+} \calR [\sfz'](r) {+} \| \sfu'(r)\|_\Spu) \dd r $ and set $\widetilde\sfS = \widetilde\sigma(\sfS)$. We then define $(\widetilde\sft(\sigma),\widetilde\sfq(\sigma)): = (\sft(s),\sfq(s)) $ for $\sigma = \widetilde\sigma(s)$. The very same calculations as in \cite[Lem.\,4.12]{Miel11DEMF} (or based on the reparametrization result \cite[Lem.\,1.1.4]{AGS08}) yield \eqref{normalization-gained}. \end{remark} Our next result, whose proof is omitted (cf.\ also Remark \ref{rmk:arbitrary-param}), addresses the existence of \emph{enhanced} $\pBV$ solutions. \begin{theorem}[Existence of enhanced $\pBV$ solutions] \label{thm:exist-enh-pBV} Assume Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{hyp:Sept19}, and \ref{h:ch-rule-param}. Suppose that there exist rescaled solutions $(\sft_\epsk,\sfq_\epsk)_k$ to the viscous system \eqref{van-visc-intro}$_{\eps_k}$ such that, in addition to \eqref{condition-4-normali}, there also holds the estimate \begin{equation} \label{condition-4-normali-enhn} \exists\, C>0 \ \forall\, k \in \N \quad \foraa\, s \in (0,\sfS)\,: \qquad \|\sfz_\epsk'(s)\|_{\Spz} \leq C\,.
\end{equation} Then, up to a (not relabeled) subsequence, the curves $(\sft_\epsk,\sfq_\epsk)_k$ converge to an admissible parametrized curve $(\sft,\sfq) \in \mathscr{A}([0,\sfS]; [0,T]\ti \Spq)$ such that \eqref{continuity-properties}, \eqref{cvs-eps}, \eqref{eq:E.M.cvg.PBV} hold and additionally $ \sfz \in \rmC_{\mathrm{lip}}^0 ([0,\sfS];\Spz)$, i.e., $(\sft,\sfq)$ is an \emph{enhanced} $\pBV$ solution to the rate-independent system $\RIS$. \end{theorem} \begin{remark} \label{rmk:arbitrary-param} \slshape In the statement of Theorem \ref{thm:existBV}, the reparametrization $t=\sft_\epsk(s)$ yielding the rescaled solutions $\sfq_\epsk$ can be chosen arbitrarily, provided it guarantees the Lipschitz bound \eqref{condition-4-normali}. Under Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic} and \ref{hyp:1}, \emph{all} viscous solutions $(u_\epsk,z_\epsk)$ satisfy the uniform bound $\|z'_\epsk\|_{\rmL^1(0,T;\Spy)}\leq C$, see \eqref{est1}. If, additionally, $\|u'_\epsk\|_{\rmL^1(0,T;\Spu)}\leq C$ holds (i.e.\ \eqref{est2} from Corollary \ref{l:3.2}), then a reparametrization yielding \eqref{condition-4-normali} is easily obtained, for instance, by using the energy-dissipation arclength in \eqref{arclength-est1-2}. Similarly, under the stronger a priori estimate \begin{equation} \label{enhanced-z-estimate} \exists\, C>0 \ \forall\, k \in \N \, : \qquad \big\| z'_\epsk \big\|_{\rmL^1(0,T;\Spz)} = \int_0^T \|z_\epsk'(t)\|_\Spz \dd t \leq C, \end{equation} one easily obtains rescaled solutions satisfying the stronger Lipschitz bound \eqref{condition-4-normali-enhn}. Hence, one gains enhanced compactness information for the sequence $(\sfz_\epsk)_k$, and the proof of Theorem \ref{thm:existBV} immediately yields a proof of Theorem \ref{thm:exist-enh-pBV}. \end{remark} We conclude this section with some sufficient conditions for the validity of (a stronger form of) the parametrized chain rule in Hypothesis \ref{h:ch-rule-param}. The parametrized chain rule will be derived as a consequence of the non-parametrized chain rule in Hypothesis \ref{h:ch-rule}. \begin{proposition}[Sufficient conditions for parametrized chain rule] \label{prop:better-chain-rule-MOexpl} Assume that \linebreak[3] Hypothesis \ref{h:ch-rule} holds and that the vanishing-viscosity contact potentials associated with $\disv u$ and $\disv z$ satisfy \begin{equation} \label{coercivity-VVCP} \exists\, c_{\mathsf{x}}>0 \, \ \forall\, (v,\eta) \in \mathbf{X}\times \mathbf{X}^*\, : \qquad \mfb_{\disv x}(v,\disv x^*(\eta)) \geq c_{\mathsf{x}} \| v\|_{\mathbf{X}} \| \eta\|_{\mathbf{X}^*} \end{equation} for $\mathsf{x}\in \{ \sfu,\sfz\}$ and $\mathbf{X}\in \{ \Spu,\Spz\}$. Then, the parametrized chain rule \eqref{ch-rule} holds along all admissible curves $ (\sft,\sfq) \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$ with $\sfq \in \AC ([a,b];\Spq)$. In particular, we have \begin{equation} \label{better-chain-rule-MOexpl} \frac{\rmd}{\rmd s} \eneq{\sft}{\sfq}- \pl_t \eneq{\sft}{\sfq} \sft' = \pairing{}{\Spu}{\mu}{\sfu'} + \pairing{}{\Spz}{\zeta}{\sfz'} \geq - \mathfrak{M}_0^\alpha (\sft,\sfq,\sft',\sfq') \quad \aein (a,b) \end{equation} for all measurable selections $ (a,b) \ni s \mapsto (\mu(s),\zeta(s) ) \in \Spu^*\ti \Spz^*$ satisfying $ (\mu(s),\zeta(s) ) \in \argminSlo u{\sft(s)}{\sfq(s)} \ti \argminSlo z{\sft(s)}{\sfq(s)} $ for almost all $s\in (a,b)$. \end{proposition} \noindent The proof will be carried out in Appendix \ref{s:app-CR}.
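For orientation, we add a simple situation in which \eqref{coercivity-VVCP} can be checked by hand; this computation is purely illustrative and assumes additionally that $\Spu$ and $\Spz$ are Hilbert spaces with quadratic viscous potentials $\disv x(v) = \frac12 \| v\|_{\mathbf{X}}^2$, so that $\disv x^*(\eta) = \frac12 \|\eta\|_{\mathbf{X}^*}^2$. Using the $p$-homogeneous formula for the contact potentials (cf.\ \eqref{p-homo-mfb} and Example \ref{ex:p-homog}) with $p=p'=2$ and $\hat{c}_2 = 2$, we obtain
\[
\mfb_{\disv x}\big(v, \disv x^*(\eta)\big) = 2\, \big(\tfrac12 \| v\|_{\mathbf{X}}^2\big)^{1/2} \big(\tfrac12 \|\eta\|_{\mathbf{X}^*}^2\big)^{1/2} = \| v\|_{\mathbf{X}}\, \|\eta\|_{\mathbf{X}^*},
\]
i.e.\ condition \eqref{coercivity-VVCP} holds with $c_{\mathsf{x}} = 1$ in this quadratic case.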
\Subsection{Differential characterization of enhanced $\pBV$ solutions} \label{ss:6.3-diff-charact} The main result of this section is Theorem \ref{thm:diff-charact}, which provides a further characterization of \emph{enhanced $\pBV$ solutions} in terms of solutions of a system of subdifferential inclusions, see \eqref{param-subdif-incl}. This differential form has the very same structure as the viscous system \eqref{DNE-system}, except that the small parameters $\eps^\alpha$ and $\eps$ multiplying the viscous terms are replaced by coefficients $\thn{u}$ and $\thn z$ satisfying the switching conditions \eqref{eq:SwitchCond}. For this, we use the optimality in the energy-dissipation balance. In Lemma \ref{new-lemma-Ricky} we have established the estimate \begin{equation} \label{eq:M.a.0.ineq} \mename 0\alpha (t,q,t',q') \geq - \pairing{}{\Spu}{\mu}{u'} - \pairing{}{\Spz}{\zeta}{z'} \quad \text{for all } (\mu,\zeta) \in \argminSlo utq {\times} \argminSlo ztq, \end{equation} which is valid for all $(t,q,t',q') \in [0,T]\ti \Spq \ti [0,\infty) \ti \Spq$ and which is a generalization of the classical Young--Fenchel inequality $\psi(v)+ \psi^*({-}\xi) \geq - \pairing{}{}{\xi}{v}$. With the first result of this section we will show that, in analogy to the characterization of generalized gradient-flow equations via the energy-dissipation principle, we are able to characterize $\pBV$ solutions via the optimality condition that estimate \eqref{eq:M.a.0.ineq} holds as an equality. Thus, we define the \emph{contact set} $\Ctc_\alpha$ (cf.\ \cite[Def.\,3.6]{MRS2013}) via \begin{align} \label{ctc-set} \Ctc_\alpha: = \Big\{\, (t,q,t',q') \in [0,T] \ti \Spq \ti [0,\infty) \ti \Spq \; \Big|\; & \exists\, (\mu,\zeta) \in \argminSlo utq {\times} \argminSlo ztq: \\ \nonumber & \meq{0}\alpha tq{t'}{q'} = -\pairing{}{\Spu}{\mu}{u'} - \pairing{}{\Spz}{\zeta}{z'} \, \Big\}. \end{align} Proposition \ref{pr:char-eBV} below makes the relation between enhanced $\pBV$ solutions and the contact set $ \Ctc_\alpha$ rigorous. We emphasize here that we need to exploit the stronger version \eqref{better-chain-rule-MOexpl} of the parametrized chain rule from Hypothesis \ref{h:ch-rule-param}, in addition to Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, and \ref{hyp:Sept19}, always tacitly assumed. Recall that a sufficient condition for such a chain rule is provided by Proposition \ref{prop:better-chain-rule-MOexpl}. \begin{proposition}[Enhanced $\pBV$ solutions lie in $\Ctc_\alpha$] \label{pr:char-eBV} Suppose that the parametrized chain rule \eqref{better-chain-rule-MOexpl} holds along all admissible curves $ (\sft,\sfq) \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$ with $\sfq \in \AC ([a,b];\Spq)$. Then, a curve $(\sft,\sfq) \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$ is an enhanced\/ $\pBV$ solution of\/ $\RIS$ if and only if\/ $\sfq \in \AC([a,b];\Spq)$ and $(\sft,\sfq,\sft',\sfq')\in \Ctc_\alpha$ a.e.\ in $(a,b).$ \end{proposition} \begin{proof} Let us consider an admissible parametrized curve $(\sft,\sfq) \in \mathscr{A} ([a,b];[0,T]\ti \Spq)$ with $\sfq \in \AC ([a,b];\Spq)$. By the characterization provided in Lemma \ref{l:characterizBV}, $(\sft,\sfq)$ is a $\pBV$ solution if and only if $ - \mathfrak{M}_0^\alpha (\sft,\sfq,\sft',\sfq') = \frac{\rmd}{\rmd s} \eneq{\sft}{\sfq}- \pl_t \eneq{\sft}{\sfq} \sft' $ almost everywhere in $(a,b)$. 
Combining this with the chain-rule inequality \eqref{better-chain-rule-MOexpl} we in fact conclude that \[ \frac{\rmd}{\rmd s} \eneq{\sft}{\sfq}- \pl_t \eneq{\sft}{\sfq} \sft' = \pairing{}{\Spu}{\mu}{\sfu'} + \pairing{}{\Spz}{\zeta}{\sfz'} = - \mathfrak{M}_0^\alpha (\sft,\sfq,\sft',\sfq') \quad \aein (a,b), \] \emph{for all} measurable selections $ \xi =(\mu,\zeta): (a,b) \to \argminSlo u{\sft(s)}{\sfq(s)} \ti \argminSlo z{\sft(s)}{\sfq(s)} $, hence $(\sft,\sfq,\sft',\sfq')\in \Ctc_\alpha$ a.e.\ in $(a,b).$ The converse implication follows by the same argument. \end{proof} The final step in relating enhanced $\pBV$ solutions to the solutions of the subdifferential system \eqref{param-subdif-incl} is obtained by analyzing the structure of $\Ctc_\alpha$. For this, we exploit the exact form of $\mename0\alpha$ and use the definition of the set $\argminSlo x t q $ in terms of the Fr\'echet subdifferential $\frsubq x t q$, $\mathsf{x}\in \{\mathsf{u}, \mathsf{z}\}$. To formulate this properly, we recall the definition of the rescaled viscosity potentials $\disve x \lambda$ and their subdifferentials $\pl \disve x \lambda$ from \eqref{eq:Def.Vx.la} for $\lambda \in [0,\infty]$. In particular, we have \begin{equation} \label{convention-recall} \pl \disve x \lambda(v)= \pl \disv x (\lambda v) \text{ for all $\lambda\in [0,\infty)$, and } \pl \disve x \infty(v)= \left\{ \begin{array}{ll} \mathbf{X}^* & \text{for $v=0$}, \\ \emptyset & \text{otherwise}. \end{array} \right. \end{equation} Observe that, thanks to \eqref{later-added}, we have $\pl \disv x (0) = \{ 0 \}$ for $\mathsf{x}\in \{\sfu,\sfz\}$. We now consider the system of subdifferential inclusions for the quadruple $(t,q,t',q')= (t,u,z, t',u',z')$ including the two parameters $\thn u, \thn z \in [0,\infty]$: \begin{subequations} \label{static-tq} \begin{align} \label{subdiff-stat.u} \pl \disve u{\thn u } (u') &+\frsubq u{t}{q} \ni 0 && \text{in } \Spu^*, \\ \label{subdiff-stat.z} \qquad \qquad\pl \calR(z') + \pl \disve z{\thn z} ( z') &+\frsubq z{t}{q} \ni 0 && \text{in } \Spz^*, \qquad \qquad \\ \label{switch-1-A} t'\,\frac{\thn u}{1{+} \thn u} &= t'\, \frac{\thn z}{1{+}\thn z } =0 \,. \end{align} \end{subequations} Here we use the usual convention $\infty/(1{+}\infty)=1$ and emphasize that, at this stage, system \eqref{static-tq} is not to be understood as an evolutionary system of subdifferential inclusions. Instead, $(t',q')\in [0,\infty){\times} \Spq$ are treated as independent variables.
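As an illustrative aside (not needed in the sequel), consider the special quadratic Hilbertian case $\disv u(v)=\frac12\|v\|_{\Spu}^2$, identifying $\Spu^*\cong \Spu$, to see how the parameter $\thn u$ acts in \eqref{subdiff-stat.u}: by \eqref{convention-recall} we have $\pl \disve u{\thn u}(v) = \{\thn u\, v\}$ for $\thn u \in [0,\infty)$, so that \eqref{subdiff-stat.u} amounts to
\[
\thn u\, u' + \mu = 0 \quad \text{for some } \mu \in \frsubq u{t}{q},
\]
which reduces to the equilibrium condition $0 \in \frsubq u{t}{q}$ for $\thn u=0$, describes a viscous relation between $u'$ and $\frsubq u{t}{q}$ for $\thn u\in (0,\infty)$, and, by the second part of \eqref{convention-recall}, enforces the blocking constraint $u'=0$ for $\thn u=\infty$. The switching condition \eqref{switch-1-A} then expresses that the latter two behaviors are only admissible when $t'=0$.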
With this we are able to introduce the following subsets of $[0,T]\ti \Spq \ti [0,\infty) \ti \Spq$, called \emph{evolution regimes}, thus providing a basis for the informal discussion at the end of Section \ref{s:EDI}: \begin{equation} \label{regime-sets} \begin{aligned} \rgs Eu & := \bigset{ (t,q,t',q') }{ \exists\, \thn z \in [0,\infty]{:} \text{ \eqref{static-tq} holds with } \thn u =0 }, \\ \rgs Rz & := \bigset{ (t,q,t',q') }{ \exists\, \thn u \in [0,\infty]{:} \text{ \eqref{static-tq} holds with } \thn z =0 }, \\ \rgs Vu & := \bigset{ (t,q,t',q') }{ \exists\, \thn z \in [0,\infty]{:} \text{ \eqref{static-tq} holds with } \thn u \in (0,\infty)} , \\ \rgs Vz & := \bigset{ (t,q,t',q') }{ \exists\, \thn u \in [0,\infty]{:} \text{ \eqref{static-tq} holds with } \thn z \in (0,\infty)} , \\ \rgs V{uz} & : = \bigset{ (t,q,t',q') }{ \text{ \eqref{static-tq} holds with } \thn u = \thn z \in (0,\infty)}, \\ \rgs Bu & := \bigset{ (t,q,t',q') }{ \exists\, \thn z \in [0,\infty]{:} \text{ \eqref{static-tq} holds with }\thn u =\infty} , \\ \rgs Bz & := \bigset{ (t,q,t',q') }{ \exists\, \thn u \in [0,\infty]{:} \text{ \eqref{static-tq} holds with }\thn z =\infty} . \end{aligned} \end{equation} The letters $\mathrm{E}, \, \mathrm{R},\, \mathrm{V},\, \mathrm{B} $ stand for \emph{Equilibrated}, \emph{Rate-independent}, \emph{Viscous}, and \emph{Blocked}, respectively. We will discuss the meaning of the names of the evolution regimes below. It will be efficient to use the notation \[ \rgs Au \rgs Cz := \rgs Au \cap \rgs Cz\quad \text{ for } \mathrm{A \in \{E,V,B\}} \text{ and } \mathrm{C \in \{R,V,B\} }; \] nonetheless, note that the set $ \rgs V{uz} $ is different from (indeed, strictly contained in) $\rgs Vu \rgs Vz $. We also remark that any set involving `V' or `B' is necessarily restricted to the subspace with $t'=0$ because of \eqref{switch-1-A}. With this, we are now in a position to state our result for the contact sets $\Ctc_\alpha$, under the additional condition \eqref{it-is-product} on the product form of the Fr\'echet subdifferential $\pl_q\calE$. Proposition \ref{pr:charact-Ctc-set} below will be proven in Section \ref{su:pr:char-Ctc-set}. \begin{proposition}[$\Ctc_\alpha$ and evolution regimes] \label{pr:charact-Ctc-set} If, in addition, the Fr\'echet subdifferential $\pl_q \calE$ has the product structure \eqref{it-is-product}, then we have the following inclusions for the contact set $ \Ctc_\alpha $: \begin{subequations} \label{eq:CtcSet.Incl} \begin{align} \alpha>1:\quad &\label{eq:CtcSet.Incl.g1} \Ctc_\alpha \ \subset \ \rgs Eu \rgs Rz\ \cup \ \rgs Eu\rgs Vz \ \cup \ \rgs Bz , \\ \alpha=1:\quad &\label{eq:CtcSet.Incl.e1} \Ctc_{1 \,}\ \subset \ \rgs Eu \rgs Rz \ \cup \ \rgs V{uz} \ \cup \ \rgs Bu \rgs Bz, \\ \alpha\in (0,1):\ \ &\label{eq:CtcSet.Incl.l1} \Ctc_\alpha \ \subset \ \rgs Eu\rgs Rz \ \cup \ \rgs Vu \rgs Rz \ \cup \ \rgs Bu \,, \end{align} \end{subequations} where in all cases the three sets on the right-hand side are disjoint. \end{proposition} \begin{remark}\slshape \label{rm:???} In the characterization of (enhanced) $\pBV$ solutions provided by Proposition \ref{pr:char-eBV}, the contact condition $\meq{0}\alpha tq{t'}{q'} = -\pairing{}{\Spu}{\mu}{u'} - \pairing{}{\Spz}{\zeta}{z'}$ holds \emph{for all} $ (\mu,\zeta) \in \argminSlo utq {\times} \argminSlo ztq$. Hence, it seems possible to define a smaller contact set $\widetilde\Sigma_\alpha$ by replacing ``$\exists$'' in \eqref{ctc-set} by ``$\forall$''.
Because of $\widetilde\Sigma_\alpha \subset \Ctc_\alpha$ inclusions \eqref{eq:CtcSet.Incl} would remain true. However, using our larger set $\Ctc_\alpha$ is sufficient to deduce that $\pBV$ solutions satisfy the system of subdifferential inclusions \eqref{param-subdif-incl} ahead. \end{remark} The different evolution regimes characterized by the right-hand sides in \eqref{eq:CtcSet.Incl} can be visualized by considering the three real parameters $(t',\thn u, \thn z)\in [0,\infty)\ti [0,\infty]^2$, since the rate-independent regimes $\rgs Eu$ and $\rgs Rz$ are given by $\thn u=0$ and $\thn z=0$ respectively. Similarly, the viscous regimes $\rgs Vx$, $\mathsf{x} \in \{ \mathsf{u}, \mathsf{z}\}$, are defined via $\thn x\in (0,\infty)$, and the blocking regime $\rgs Bx$ is determined by $\thn x=\infty$. The sets on the right-hand sides in \eqref{eq:CtcSet.Incl} are then defined in terms of the switching conditions \begin{equation} \label{eq:Switch11} \text{\eqref{switch-1-A} holds \quad and } \quad \begin{cases} \thn u=0 \text{ or } \thn z=\infty&\text{ for } \alpha>1,\\ \thn u=\thn z \in [0,\infty] & \text{ for }\alpha=1,\\ \thn u=\infty \text{ or }\thn z =0 & \text{ for }\alpha \in (0,1). \end{cases} \end{equation} The corresponding sets in the $(t',\thn u,\thn z)$ space are depicted in Figure \ref{fig:EqBlViRa}. \begin{figure} \mbox{}\hfill\begin{minipage}{0.38\textwidth} \caption{} \label{fig:EqBlViRa}\raggedright The switching conditions and the different regimes are displayed in the space for $(t',\thn u,\thn z) \in [0,\infty]^3$. For $\sft'>0$ the only admissible regime is given by the intersection $\rgs Eu \rgs Rz=\rgs Eu \cap \rgs Rz$. For $\sft'=0$ the different admissible regimes depend on $\alpha>0$: $\ \ \ \alpha>1$: \ \ $\rgs Eu \cup \rgs Bz$ $\ \ \ \alpha=1$: \ \ $\rgs Eu \rgs Rz \cup \rgs V{uz}\cup \rgs Bu \rgs Bz$ $\alpha\in (0,1)$: $\rgs Vu \rgs Rz \cup \rgs Bu$ \end{minipage} \hfill \begin{minipage}{0.5\textwidth} \newlength{\AMlength} \setlength{\AMlength}{2.95em} \begin{tikzpicture}[scale =1.60] \draw[fill=gray!20] (-1,-0.5)--(0,0)--(0,2)-- (-1,1.5) node[left]{$\rgs Rz$}--(-1,-0.5); \draw[fill=gray!20] (-1,-0.5)--(0,0)--(2,0)-- (1,-0.5) node[below]{$\rgs Eu$}--(-1,-0.5); \draw[->, thick] (-0.4,0)--(2.5,0) node[right]{$\frac{\lambda_\sfz}{1{+}\lambda_\sfz}$}; \draw[->, thick] (0,-0.4)--(0,2.5) node[left]{$\frac{\lambda_\sfu}{1{+}\lambda_\sfu}$}; \draw[->, thick] (0.4,0.2)--(-1.3,-0.65) node[pos=1.1, above]{$\frac{\sft'}{1{+}\sft'}$}; \draw[fill=black] (2,2) circle (0.05); \draw (0.2,2)--(-0.6,2) node[left] {$\rgs Bu $}; \draw (2,0.2)--(2,-0.4) node[below] {$\rgs Bz $}; \node[right] at (2.1,1) {$\left.\rule[-\AMlength]{0pt}{2\AMlength}\right\} \rgs Vu$}; \draw (0.0,0.0)--(1.96,1.96) node[above, rotate=45,pos=0.5] {$\rgs V{uz} $}; \node[above] at (1,2.2) {$\overbrace{\rule{2\AMlength}{0pt}}^{\displaystyle \rgs Vz}$}; \draw[line width=3 , opacity=0.9, color=green!70!blue] (-1,-0.44) -- (0,0.06) -- (0,2)--(1.95,2) node[pos=0.4,below]{$\alpha \in (0,1)$}; \draw[line width=3 , opacity=0.9, color=red] (-1,-0.5) -- (0.0,0.0) --(1.97,1.97) node[pos=0.5,below, rotate=45]{$\alpha =1$}; \draw[line width=3 , opacity=0.9, color=blue] (-0.9,-0.5) -- (0.1,0.0) -- (2,0) node[pos=0.75,above]{$\alpha>1$} --(2,1.95); \fill[color = white] (-1,-0.5) circle (0.1); \draw (-1,-0.5) circle (0.1); \end{tikzpicture} \end{minipage} \hfill\mbox{} \end{figure} The inclusions \eqref{eq:CtcSet.Incl} that relate the contact sets to the different evolution regimes $\rgs Au \rgs Cz$ have a clear and immediate 
interpretation in terms of the evolutionary behavior of an enhanced $\pBV$ solution $(\sft, \sfq)$: \begin{compactitem} \item[\textbullet] $\rgs Eu$ encodes the regime where $u=\sfu(s)$ stays in \emph{equilibria}, which may depend on $s$. Indeed, inserting $\thn u(s)=0$ in \eqref{subdiff-stat.u} leads to the equilibrium condition $0 \in \frsubq u{\sft(s)}{\sfq(s)}$. This means that $\sfu(s)$ follows $\sfz(s)$ that may evolve rate-independently when $t'>0$, and may follow a viscous jump path, or may be blocked, when $\sft'(s)=0$. \item[\textbullet] $\rgs Rz$ denotes the rate-independent evolution for $\sfz(s)$, where $\thn z(s)=0$. The component $\sfu(s)$ either follows staying in equilibria, evolves viscously, or is blocked. \item[\textbullet] In the case $t'>0$ only the rate-independent regime $\rgs Eu\rgs Rz$ is admissible. This is the regime with $\thn u=\thn z=0$ where the viscous dissipation potentials $\disv u$ and $\disv z$ do not come into action. \item[\textbullet] In the regime $\rgs Vx$, the variable $\sfx(s)$ evolves viscously with $\thn x(s)\in (0,\infty)$, and necessarily $\sft'(s)=0$. \item[\textbullet] $\rgs V{uz}$ is the special case occurring only for $\alpha=1$, where $\thn u(s)=\thn z(s)\in (0,\infty)$, i.e.\ both components have a synchronous viscous phase. \item[\textbullet] The blocked regime $\rgs Bx$, occurring when $\sft'(s)=0$, encodes the situation that $\thn x(s)=\infty$, which means that on the given time scale the viscosity is so strong that the $\mathsf{x}$-component cannot move, i.e.\ it is blocked with $\sfx'(s)=0$. \item[\textbullet] $\rgs B{uz} = \rgs Bu \rgs Bz$ means that both components are blocked, namely $\sfq'(s)=0$. This can occur, for instance, if we set $(\sft(s), \sfq(s))=(t_*,q_*)$ for $s\in (s_1,s_2)$. Then, $\thn u(s)=\thn z(s)=\infty$ still gives a trivial, constant solution. Such a behavior may occur after taking a limit like $\eps\to 0^+$, but of course the interval can be cut out by defining a $\pBV$ solution on $[0,\sfS{-}s_2{+}s_1]$. \end{compactitem} We are now in a position to prove a characterization of \emph{enhanced} $\pBV$ solutions in terms of the following system of subdifferential inclusions \begin{equation} \label{param-subdif-incl} \begin{aligned} \pl \disve u{\thn u(s)} ( \sfu'(s)) + \frsub u{\sft(s)}{\sfu(s)}{\sfz(s)} \ni 0 & \quad\text{in } \Spu^*, \\ \pl \calR(\sfz'(s)) + \pl \disve z{\thn z(s)} ( \sfz'(s)) + \frsub z{\sft(s)}{\sfu(s)}{\sfz(s)} \ni 0 &\quad \text{in } \Spz^*, \end{aligned} \end{equation} where the balanced interplay of viscous and rate-independent behavior in the equations for $u$ and $z$, respectively, is determined by the (arclength-dependent) parameters $\thn u(s)$ or $\thn z(s)$. We emphasize that the so-called \emph{switching conditions} for $t'\geq 0$ and $\thn u, \,\thn z \in [0,\infty]$, cf.\ \eqref{eq:SwitchCond} below, are different for the three cases $\alpha>1$, $\alpha=1$, and $\alpha\in (0,1)$. \begin{theorem}[Differential characterization of enhanced $\pBV$ solutions] \label{thm:diff-charact} Assume Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, and \ref{hyp:Sept19} and let the parametrized chain rule \eqref{better-chain-rule-MOexpl} hold. In addition, suppose that the Fr\'echet subdifferential $\pl_q \calE$ has the product structure from \eqref{it-is-product}. Let $(\sft,\sfq) \in \mathscr{A}([0,\sfS];[0,T]\ti \Spq)$ be an admissible parametrized curve with $\sfq \in \AC ([0,\sfS];\Spq)$. 
\begin{enumerate} \item If $(\sft,\sfq):(0,\sfS)\to [0,T]\ti \Spq$ is an enhanced $\pBV$ solution of $\RIS$, then there exist measurable functions $(\thn u,\thn z): (0,\sfS)\to [0,\infty]^2 $ and $\xi=(\mu,\zeta): (0,\sfS)\to \Spu^*\ti \Spz^*$ with \begin{subequations} \label{e:diff-char} \begin{equation} \label{exist-meas-select} \mu(s) \in \frsub u{\sft(s)}{\sfu(s)}{\sfz(s)} \ \text{ and } \ \zeta(s) \in \frsub z{\sft(s)}{\sfu(s)}{\sfz(s)} \quad \foraa\, s \in (0,\sfS) \end{equation} satisfying for almost all $s\in (0,\sfS)$ the subdifferential inclusions \begin{equation} \label{param-subdif-incl-selections} \begin{aligned} -\mu(s) \in \pl \disve u{\thn u(s)} ( \sfu'(s))\hspace{5em} & \quad\text{in } \Spu^*, \\ - \zeta(s) \in \pl \calR(\sfz'(s)) + \pl \disve z{\thn z(s)} ( \sfz'(s)) &\quad \text{in } \Spz^*, \end{aligned} \end{equation} and the switching conditions \begin{equation} \label{eq:SwitchCond} \sft'(s)\,\frac{\thn u(s)}{1{+}\thn u(s) } = 0 = \sft'(s)\,\frac{\thn z(s)} {1{+}\thn z(s)} \quad \text{and } \quad \begin{cases} \thn u(s)\,\frac1{1{+}\thn z(s)} =0 & \text{for }\alpha>1, \\ \thn u(s) = \thn z(s)& \text{for }\alpha=1, \\ \frac1{1{+}\thn u(s)}\, \thn z(s)=0& \text{for } \alpha\in (0,1). \end{cases} \end{equation} \end{subequations} \item Conversely, if there exist measurable functions $(\thn u,\thn z): (0,\sfS)\to [0,\infty]^2 $ and $\xi=(\mu,\zeta): (0,\sfS)\to \Spu^*\ti \Spz^*$ satisfying \eqref{e:diff-char} and, in addition, \begin{equation} \label{hyp-4-chr} \sup_{s \in (0,\mathsf{S})} |\calE(\sft(s),\sfq(s))|<\infty, \quad \text{and} \quad \int_0^{\mathsf{S}} \!\big( \| \mu(s)\|_{\Spu^*} \| \sfu'(s)\|_{\Spu} {+} \| \zeta(s)\|_{\Spz^*} \| \sfz'(s)\|_{\Spz} \big) \dd s <\infty, \end{equation} then $(\sft,\sfq)$ is an enhanced $\pBV$ solution. \end{enumerate} \end{theorem} \begin{proof} Part (1) basically follows from combining the characterization of enhanced $\pBV$ solutions from Proposition \ref{pr:char-eBV} in terms of the contact set, with Proposition \ref{pr:charact-Ctc-set}. Only the measurability of the coefficients $\thn u,\, \thn z: [0,\sfS]\to [0,\infty] $ and of the selections $\xi=(\mu,\zeta):(0,\sfS) \to \Spu^* \ti \Spz^*$ deserves some discussion, which is postponed to Appendix \ref{appendix-measurability}. Let us now carry out the proof of Part (2). After cutting out possible intervals where $(\sft,\sfq)$ may be constant (i.e.\ in the blocking regime $\rgs Bu \rgs Bz$), we may suppose that the admissible parametrized curve $(\sft,\sfq)$ fulfills the non-degeneracy condition \eqref{non-degeneracy}. In what follows, we will use the short-hand notation \begin{equation} \label{short-hand-regimes} (0,\sfS) \cap \rgs Au \rgs Cz: = \{ s \in (0,\sfS)\, : \ (\sft(s),\sfq(s), \sft'(s), \sfq'(s) ) \in \rgs Au \rgs Cz \} \end{equation} for $\mathrm{A \in \{E,V,B\}} $ and $ \mathrm{C \in \{R,V,B\} }$. We will discuss at length the case $\alpha>1$; the very same arguments yield the assertion also in the cases $\alpha=1$ and $\alpha \in (0,1)$. It follows from the switching conditions \eqref{eq:SwitchCond} that the integral $I:=\int_0^\sfS \big( \pairing{}{\Spu}{{-}\mu}{\sfu'}{+} \pairing{}{\Spz}{{-}\zeta}{\sfz'}\big) \dd s$ decomposes as \begin{align} & \label{initial-step} I= I_1+I_2+I_3 \quad \text{with } I_1:= \int_{(0,\sfS) {\cap} \rgs Eu \rgs Rz} \!\! \big( \pairing{}{\Spu}{{-}\mu(s)}{\sfu'(s)}{+} \pairing{}{\Spz}{{-}\zeta(s)}{\sfz'(s)} \big) \dd s\,, \\ & \nonumber I_2:= \int_{(0,\sfS) {\cap} \rgs Eu\rgs Vz} \!\!
\big( \pairing{}{\Spu}{{-}\mu}{\sfu'}{+} \pairing{}{\Spz}{{-}\zeta}{\sfz'} \big) \dd s, \text{ and } I_3:= \int_{(0,\sfS) {\cap} \rgs Bz} \!\!\big( \pairing{}{\Spu}{{-}\mu}{\sfu'}{+} \pairing{}{\Spz}{{-}\zeta}{\sfz'} \big) \dd s\,, \end{align} where we use that the three regimes $ \rgs Eu \rgs Rz$, $ \rgs Eu\rgs Vz$, and $\rgs Bz$ are disjoint. Now, on $(0,\sfS) \cap \rgs Eu \rgs Rz$ we have that $\mu(s) \equiv 0$, while $\zeta(s) \in \pl \calR(\sfz'(s))$, so that \begin{equation*} I_1= \int_{(0,\sfS) {\cap} \rgs Eu \rgs Rz} \calR(\sfz'(s)) \dd s = \int_{(0,\sfS) {\cap} \rgs Eu \rgs Rz} \meq 0\alpha {\sft(s)}{\sfq(s)}{\sft'(s)}{\sfq'(s)} \dd s \end{equation*} where we used \eqref{decomposition-M-FUNCTION} and \eqref{l:partial}, taking into account $\slov u {\sft(s)}{\sfq(s)} = \slov z {\sft(s)}{\sfq(s)} \equiv 0$ on $(0,\sfS) \cap \rgs Eu \rgs Rz$. On $(0,\sfS) \cap \rgs Eu\rgs Vz$ we have $\slov u {\sft(s)}{\sfq(s)} \equiv 0$ and the $z$-equation in \eqref{param-subdif-incl} holds with $\thn z(s)>0$, so that \[ \begin{aligned} I_2 & = \int_{(0,\sfS) {\cap} \rgs Eu\rgs Vz} \frac1{\thn z(s)}\pairing{}{\Spz}{{-}\zeta(s)}{\thn z(s) \sfz'(s)} \dd s \\ & \stackrel{(1)}{=} \int_{(0,\sfS) {\cap} \rgs Eu\rgs Vz} \frac1{\thn z(s)} \left( \calR \big( \thn z (s) \sfz'(s) \big) {+} \disv z \big( \thn z (s) \sfz'(s)\big) {+} \conj z({-}\zeta(s)) \right) \dd s \\ & \stackrel{(2)}{\geq} \int_{(0,\sfS) {\cap} \rgs Eu\rgs Vz} \frac1{\thn z(s)} \left( \calR \big( \thn z (s) \sfz'(s) \big) {+} \disv z \big( \thn z (s) \sfz'(s)\big) {+} \slov z {\sft(s)}{\sfq(s)} \right) \dd s \\ & \stackrel{(3)}{\geq} \int_{(0,\sfS) {\cap} \rgs Eu\rgs Vz} \!\!\left( \calR \big( \sfz'(s) \big) {+} \mfb_{\disv z}(\sfz'(s),\slov z {\sft(s)}{\sfq(s)}) \right) \dd s \stackrel{(4)}{=} \int_{(0,\sfS) {\cap} \rgs Eu\rgs Vz} \!\!\meq 0\alpha {\sft(s)}{\sfq(s)}{\sft'(s)}{\sfq'(s)} \dd s , \end{aligned} \] where {\footnotesize (1)} follows from \eqref{param-subdif-incl-selections} via Fenchel--Moreau conjugation, {\footnotesize (2)} is a consequence of the definition of $\slov z{\sft}{\sfq}$, {\footnotesize (3)} is due to the definition of $ \mfb_{\disv z}$, and {\footnotesize (4)} again ensues from \eqref{decomposition-M-FUNCTION} and \eqref{l:partial}. Finally, with the very same arguments we find that \[ \begin{aligned} I_3 & = \int_{(0,\sfS) {\cap} \rgs Bz} \pairing{}{\Spu}{{-}\mu(s)}{\sfu'(s)} \dd s \ = \ \int_{(0,\sfS) {\cap} \rgs Bz} \frac1{\thn u(s)} \pairing{}{\Spu}{{-}\mu(s)}{\thn u(s)\sfu'(s)} \dd s \\ & \geq \int_{(0,\sfS) {\cap} \rgs Bz} \mfb_{\disv u}(\sfu'(s),\slov u {\sft(s)}{\sfq(s)}) \dd s \ = \ \int_{(0,\sfS) {\cap} \rgs Bz} \meq 0\alpha {\sft(s)}{\sfq(s)}{\sft'(s)}{\sfq'(s)} \dd s\,. \end{aligned} \] Combining the above estimates with \eqref{initial-step} and with the chain rule \eqref{eq:48strong} (which applies thanks to \eqref{hyp-4-chr}), we ultimately conclude that \[ \begin{aligned} \eneq {\sft(0)}{\sfq(0)}+ \!\int_0^\sfS \! \pl_t \calE(\sft(s),\sfq(s))\dd s & \geq \eneq {\sft(\sfS)}{\sfq(\sfS)} + \int_{0}^\sfS \!\big( \pairing{}{\Spu}{{-}\mu(s)}{\sfu'(s)}{+} \pairing{}{\Spz}{{-}\zeta(s)}{\sfz'(s)} \big) \dd s \\ & \geq \eneq {\sft(\sfS)}{\sfq(\sfS)} + \int_0^\sfS \meq 0\alpha {\sft(s)}{\sfq(s)}{\sft'(s)}{\sfq'(s)} \dd s \,, \end{aligned} \] namely $(\sft,\sfq)$ fulfills the upper energy-dissipation estimate. Therefore, by Lemma \ref{l:characterizBV} we conclude that $(\sft,\sfq)$ is an (enhanced) $\pBV$ solution.
\end{proof} \Section{True Balanced-Viscosity solutions} \label{ss:4.3} This section is devoted to the concept of true Balanced-Viscosity ($\BV$) solutions, i.e.\ solutions defined on the original time interval $[0,T]$ instead of via the artificial arc length $s \in [0,\sfS]$. This concept will be introduced in Section \ref{ss:5.1} in Definition \ref{def:trueBV}. The central ingredient in this notion is a Finsler-type transition cost that measures the energy dissipated at jumps of the curve $(u,z)$, see Definition \ref{def:jump}. In Section \ref{ss:5.2} we will gain further insight into the fine properties of true $\BV$ solutions, while Section \ref{su:Exist.BV} states our two existence results, Theorems \ref{thm:exist-trueBV} and \ref{thm:exist-nonpar-enh}, in which $\BV$ solutions to the rate-independent system $\RIS $ are obtained by taking the vanishing-viscosity limit of system \eqref{van-visc-intro} in the real process time, \emph{without reparametrization}. Section \ref{ss:5.3} addresses the non-parametrized counterpart of enhanced $\pBV$ solutions called \emph{enhanced $\BV$ solutions}, and Section \ref{su:Comp.pBV.BV} shows how parametrized and true $\BV$ solutions are related. We start with some notation for functions having well-defined jumps. \begin{notation}[Regulated functions] \label{not:1.1} \slshape Given a Banach space $\mathbf{B}$, we denote by \begin{equation} \label{def:regulated} \begin{aligned} \Reg 0T{\mathbf{B}}: =\Big\{\: f: [0,T] \to \mathbf{B}\; \Big| \; \forall\, t\in [0,T]: \ &\llim ft: = \lim_{s\to t^-} f(s) \text{ exists in }\mathbf{B}, \\ & \rlim ft: = \lim_{r\to t^+}f(r) \text{ exists in }\mathbf{B} \; \Big\} \end{aligned} \end{equation} the space of (everywhere defined) \emph{regulated functions} on $[0,T]$ with values in $\mathbf{B}$, where we use $\llim f0:= f(0)$ and $\rlim fT: = f(T)$. The symbol $\rmB\rmV([0,T];\mathbf{B})$ denotes the space of everywhere defined functions of bounded $\mathbf{B}$-variation; we recall that $\rmB\rmV([0,T];\mathbf{B}) \subset \Reg 0T{\mathbf{B}}$ with continuous embedding. \end{notation} \noindent Note that for $f \in \Reg 0T{\mathbf{B}}$ the three values $f(t^-)$, $f(t)$, $f(t^+)$ may all be different for $t\in (0,T)$, and that distinguishing these values will be crucial for our notion of $\BV$ solutions. For a given $z\in \rmB\rmV([0,T];\Spy)$ we also introduce the $\calR$-variation \begin{equation} \label{R-variation} \Var_{\calR}(z;[a,b]): = \sup\Bigset{ \sum_{i=1}^N \calR(z(t_i){-}z(t_{i-1}))} {N\in \N,\ a=t_0<t_1<\ldots<t_{N}=b } \end{equation} for $ [a,b]\subset [0,T]$, and we observe that \begin{equation} \label{cited-VarR-later} \Var_{\calR}(z;[a,b]) = \int_a^b \calR[z'](t) \dd t \quad \text{ for } z \in \AC ([a,b];\Spy,\calR). \end{equation} We mention in advance that true $\BV$ solutions are curves $q=(u,z)$, with $u\in \rmB\rmV([0,T];\Spu)$ and $z\in\Reg 0T{\Spz} \cap \rmB\rmV([0,T];\Spy)$. For such $q=(u,z)$ we introduce the \emph{jump set} \begin{equation} \label{def:jumpset} \mathrm{J}[q] = \mathrm{J}[u] \cup \mathrm{J}[z] \text{ with } \mathrm{J}[w] : = \bigset{ t \in [0,T] }{ \llim wt \neq w(t) \text{ or } \rlim wt \neq w(t) }; \end{equation} we record that $ \mathrm{J}[q]$ consists of at most countably many points. Note that for $ \mathrm{J}[z]$ the left and the right limits are considered with respect to the norm topology of $\Spz$.
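As an elementary added example of this notation, consider a curve with a single jump at $t_*\in (0,T)$: let $q(t)=q_0$ for $t\in [0,t_*)$, $q(t_*)=q_m$, and $q(t)=q_1$ for $t\in (t_*,T]$, with pairwise distinct $q_0,q_m,q_1\in \Spq$. Then
\[
\llim q{t_*} = q_0, \qquad q(t_*) = q_m, \qquad \rlim q{t_*} = q_1, \qquad \mathrm{J}[q] = \{t_*\},
\]
so that the three values at the jump point are indeed all different; it is precisely this triple that will enter the jump conditions for $\BV$ solutions, cf.\ \eqref{jumpBV} ahead.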
For later use, we finally observe that \begin{equation} \label{4later-use-embed} \rmL^\infty(0,T;\Spx) \cap \rmB\rmV([0,T];\Spy) \subset \Reg 0T{\Spz}, \end{equation} which can be easily checked by exploiting the (compact) embeddings $\Spx\Subset \Spz \subset \Spy$. \Subsection{Definition of true $\BV$ solution} \label{ss:5.1} The (possibly asymmetric) Finsler cost function is obtained by minimizing an `infinitesimal cost', depending on the fixed process time $t\in [0,T]$ and defined in terms of the \RJMF\ $ \mename{0}{\alpha}$, along \emph{admissible transition curves} $\sfq : [0,1] \to \Spq$. From now on, for better clarity we will denote a generic transition curve by $\serifTeta$ in place of $\sfq$. \begin{definition}[Admissible transition curves, Finsler cost] \label{def:jump} For given $t\in [0,T]$ and $q_0=(u_0,z_0), q_1=(u_1,z_1) \in \Spu\ti \Spz$, we define the Finsler cost induced by $\mename 0\alpha$ by \begin{equation} \label{Finsler-cost} \costq{\mename 0\alpha}{t} {q_0}{q_1} : = \inf_{\serifTeta \in \admtcq t{q_0}{q_1}} \int_0^1 \mename 0\alpha [t, \serifTeta, 0, \serifTeta'] \dd r \end{equation} with the short-hand notation $\mename 0\alpha [\cdot,\cdot,\cdot,\cdot]$ from \eqref{short-hand-M0} and $\admtcq t{q_0}{q_1}$ the set of all admissible transition curves at time $t$ between $q_0$ and $q_1$, see Definition \ref{def:adm-p-c}. \end{definition} Thanks to the positive $1$-homogeneity of the functional $\serifTeta' \mapsto \mename 0\alpha [t,\serifTeta,0,\serifTeta']$, we observe that it is not restrictive to suppose that all transition curves are defined on $[0,1]$. We are now ready to define a new variation called the $\mename 0 \alpha$-total variation of a curve $q=(u,z):[0,T]\to \Spq$. It consists, cf.\ \eqref{Finsler-variation} below, of the $\calR$-variation of $z$ as defined in \eqref{R-variation} plus extra contributions at jump points $t_*\in \mathrm{J}[q]$ that may arise through rate-independent or viscous transition costs between $\llim q{t_*}$, $q(t_*)$, and $\rlim q{t_*}$. These extra contributions are given by the Finsler cost \eqref{Finsler-cost}, from which the $\calR$-variation is subtracted to avoid counting it twice in the $\mename 0 \alpha$-variation. The resulting terms are nonnegative because we always have $\costq{\mename 0\alpha}{t} {(u_0,z_0)}{(u_1,z_1)} \geq \calR(z_1{-}z_0)$ since $ \mename 0\alpha [t,q,0,q'] \geq \calR(z')$ (using $\mredq 0\alpha t{q}{0}{q'}\geq 0$). \begin{definition}[$\mename 0\alpha$-variations] \label{def:m.a.Variation} Let $q =(u,z):[0,T] \to \Spq$ with $u \in \rmB\rmV([0,T];\Spu) $ and $z\in \mathrm{R}([0,T];\Spz) \cap \rmB\rmV([0,T];\Spy)$ be a curve with $\sup_{t\in [0,T]} \mfE (q(t)) \leq E <\infty$ and jump set $\mathrm{J}[q]$. For closed subintervals $[a,b]\subset [0,T]$ we define \begin{enumerate} \item the \emph{extra Viscous Jump Variation} of $q$ induced by $\mename 0\alpha$ on $[a,b]$ via \begin{equation} \label{eq:def.EVJC} \begin{aligned} \mathrm{eVJV}_{\mename 0\alpha}(q;[a,b])& : = \big( \costq{\mename0\alpha}{a}{q(a)}{\rlim q a} -\calR(\rlim z a{-} z(a))\big) \\[0.3em] &\ \quad + \!\! \sum_{t \in \mathrm{J}[q]\cap (a,b)} \!\!
\big(\costq{\mename 0\alpha}{t}{\llim q t}{q(t)} -\calR(z(t){-} \llim zt) \\[-0.7em] &\ \qquad \qquad \qquad +\costq{\mename0\alpha}{t}{q(t) }{\rlim q t } -\calR(\rlim z t{-} z(t))\big) \\[0.3em] &\ \quad + \big(\costq{\mename0\alpha}{b}{\llim qb}{q(b)} -\calR(z(b){-} \llim zb)\big) \, ; \end{aligned} \end{equation} \item the $\mename 0\alpha$-total variation \begin{equation} \label{Finsler-variation} \Var_{\mename 0\alpha}(q;[a,b]): = \mathrm{Var}_{\calR}(z;[a,b]) + \mathrm{eVJV}_{\mename 0\alpha}(q;[a,b])\,. \end{equation} \end{enumerate} \end{definition} \noindent With slight abuse of notation, here we will use the symbol $ \Var_{\mename 0\alpha}$ for the $\mename 0\alpha$-total variation, although this is not a standard form of total variation, cf.\ \cite[Rem.\,3.5]{MRS12}. Just like for its parametrized counterpart, our definition of (true) $\BV$ solutions will rely on a suitable chain-rule requirement, enhancing Hypothesis \ref{h:ch-rule} to curves $q=(u,z)$ having only $\mathrm{BV}$-time regularity. For consistency, we will formulate this $\mathrm{BV}$-chain rule as a hypothesis. \begin{hypothesis}[Chain rule in BV] \label{hyp:BV-ch-rule} For every curve $q=(u,z):[0,T]\to \Spq$ with $u \in \rmB\rmV([0,T];\Spu)$ and $z \in \mathrm{R}([0,T];\Spz) \cap \rmB\rmV([0,T];\Spy)$ and satisfying \[ \slov ut{q(t)} + \slov zt{q(t)} = 0 \quad \text{for all } \ t \in [0,T]\setminus \mathrm{J}[q] \] the following chain-rule estimate holds for every closed subinterval $[t_0,t_1] \subset [0,T]$: \begin{equation} \label{BV-ch-rule} \begin{aligned} & \text{the map } t\mapsto \eneq t{q(t)} \text{ belongs to } \rmB\rmV([0,T]) \text{ and} \\ & \eneq {t_1}{q(t_1)} - \eneq {t_0}{q(t_0)} - \int_{t_0}^{t_1}\pl_t \eneq{s}{q(s)} \dd s \geq - \Variq{\mename 0\alpha}q{t_0}{t_1} . \end{aligned} \end{equation} \end{hypothesis} In Lemma \ref{l:nice-implication} in Appendix \ref{s:app-CR} we show that the parametrized chain rule from Hypothesis \ref{h:ch-rule-param} also guarantees the validity of Hypothesis \ref{hyp:BV-ch-rule}. Hence, subsequently we will directly assume Hypothesis \ref{h:ch-rule-param}. Let us now give our definition of $\BV$ solutions $q:[0,T]\to \Spu\ti \Spz$, i.e.\ $\BV$ solutions without parametrization. We sometimes use the term `true $\BV$ solution' only to distinguish $\BV$ solutions from `parametrized $\BV$ solutions'; there is no difference between $\BV$ solutions and true $\BV$ solutions. Definition \ref{def:trueBV} below is a natural extension of the concept of $\BV$ solutions introduced in \cite[Def.\,3.10]{MRS13}, now taking care of the equilibrium condition \eqref{stationary-u} for $u$ corresponding to the regime $\rgs Eu$, the local stability condition \eqref{loc-stab} for $z$ corresponding to the regime $\rgs Rz$, and an energy-dissipation balance \eqref{enid-strictBV}. Hence, all jump behavior is compressed into the definition of the Finsler cost $\costname{\mename 0\alpha}$, the total $\mename 0\alpha$-variation, and the validity of the energy-dissipation balance. \begin{definition}[$\BV$ solutions] \label{def:trueBV} Let the rate-independent system $\RIS$ fulfill Hypothesis \ref{hyp:BV-ch-rule}.
A curve $q=(u,z):[0,T]\to \Spq$ is called a \emph{Balanced-Viscosity solution} to $\RIS$ if it satisfies the following conditions: \begin{subequations} \label{eq:def.BVsol} \begin{itemize} \item[\textbullet] $u \in \rmB\rmV([0,T];\Spu)$ and $z \in \mathrm{R}([0,T];\Spz) \cap \rmB\rmV([0,T];\Spy) $; \item[\textbullet] the stationary equation \begin{equation} \label{stationary-u} \slov ut{q(t)}=0 \quad \text{for all } t \in [0,T]\setminus \mathrm{J}[q]; \end{equation} \item[\textbullet] the local stability condition \begin{equation} \label{loc-stab} \slov zt{q(t)}=0 \quad \text{for all } t \in [0,T]\setminus \mathrm{J}[q]; \end{equation} \item[\textbullet] the energy-dissipation balance \begin{equation} \label{enid-strictBV} \eneq t{q(t)} + \Variq{\mename 0\alpha}qst = \eneq s{q(s)} + \int_s^t \pl_t \eneq r{q(r)} \dd r \quad \text{for } 0\leq s\leq t \leq T\,. \end{equation} \end{itemize} \end{subequations} \end{definition} We postpone to Section \ref{su:Comp.pBV.BV} a result comparing parametrized and true $\BV$ solutions. With the exception of our existence results Theorems \ref{thm:exist-trueBV} and \ref{thm:exist-nonpar-enh}, in the following statements we will not explicitly recall the assumptions of Section \ref{s:setup}; we will only invoke the chain rule from Hyp.\ \ref{h:ch-rule-param}. \Subsection{Characterization and fine properties of $\BV$ solutions} \label{ss:5.2} In the same way as for their parametrized version, thanks to the chain rule \eqref{BV-ch-rule} we have a characterization of $\BV$ solutions in terms of the upper energy estimate $\leq $ in \eqref{enid}, on the \emph{whole} interval $[0,T]$. We also have a second characterization in terms of a simple energy-dissipation balance like for energetic solutions as in \cite{MieThe99MMRI,DM-Toa2002,Miel05ERIS,MieRouBOOK}, combined with jump conditions that balance the different dissipation mechanisms that may be active at a jump point. The proof of Proposition \ref{prop:BV-charact} follows, with minimal changes, from the arguments for \cite[Cor.\,3.14, Thm.\,3.15]{MRS13}, to which the reader is referred. \begin{proposition} \label{prop:BV-charact} Let the rate-independent system $\RIS$ fulfill Hypothesis \ref{h:ch-rule-param}. For a curve $q=(u,z) \in \rmB\rmV([0,T];\Spu) \ti ( \mathrm{R}([0,T];\Spz) {\cap} \rmB\rmV([0,T];\Spy) ) $ fulfilling the stationary equation \eqref{stationary-u} and the local stability condition \eqref{loc-stab}, the following three assertions are equivalent: \begin{enumerate} \item $q$ is a $\BV$ solution of system $\RIS$; \item $q$ fulfills \begin{equation} \label{enineq-sBV0T} \eneq T{q(T)} + \Var_{\mename 0\alpha}(q;[0,T]) \leq \eneq 0{q(0)} + \int_0^T \pl_t \eneq r{q(r)} \dd r; \end{equation} \item $q $ fulfills the $\calR$-energy-dissipation inequality \begin{equation} \label{enineq-ene-st} \eneq t{q(t)} + \Var_{\calR}(q;[s,t]) \leq \eneq s{q(s)} + \int_s^t \pl_t \eneq r{q(r)} \dd r \end{equation} with $\Var_\calR$ from \eqref{R-variation}, \emph{and} the jump conditions at every $t\in \mathrm{J}[q]\, :$ \begin{equation} \label{jumpBV} \begin{aligned} \eneq t{\llim qt} - \eneq t{q(t)} & = \costq{\mename0\alpha}{t}{\llim qt}{q(t)}, \\ \eneq t{q(t)} - \eneq t{\rlim qt}& = \costq{\mename0\alpha}{t}{q(t)}{\rlim qt}. \end{aligned} \end{equation} \end{enumerate} \end{proposition} Conditions \eqref{jumpBV} provide a fine description of the behavior of $\BV$ solutions $(u,z)$ at jumps.
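The following elementary observation may help to illustrate \eqref{jumpBV}; it is a heuristic remark and relies on assumptions that need not be satisfied in general. Suppose that at some $t\in \mathrm{J}[q]$ the straight segment $\serifTeta(r): = (1{-}r)\, q(t) + r\, \rlim qt$, $r\in [0,1]$, is an admissible transition curve in $\admtcq t{q(t)}{\rlim qt}$ and that $\mredq 0\alpha t{\serifTeta(r)}0{\serifTeta'(r)} = 0$ for a.a.\ $r\in (0,1)$, so that $\serifTeta$ is of sliding type in the sense of Definition \ref{def-OT} below. Then, by the additive form $\mename 0\alpha = \calR + \mredname 0\alpha$ (cf.\ \eqref{l:partial}) and the $1$-homogeneity of $\calR$,
\begin{equation*}
  \costq{\mename 0\alpha}{t}{q(t)}{\rlim qt} \,\leq\, \int_0^1 \calR\big(\rlim zt {-} z(t)\big) \dd r \,=\, \calR\big(\rlim zt {-} z(t)\big),
\end{equation*}
and since the opposite inequality always holds (cf.\ the discussion preceding Definition \ref{def:m.a.Variation}), the second condition in \eqref{jumpBV} reduces to the purely rate-independent relation $\eneq t{q(t)} - \eneq t{\rlim qt} = \calR(\rlim zt{-}z(t))$, familiar from energetic solutions. The genuinely new information carried by \eqref{jumpBV} thus concerns those jumps for which no sliding transition is available and viscous contributions enter the cost $\costname{\mename 0\alpha}$.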
However, the $\inf$ in the definition of $\costname{\mename 0\alpha}$ need not be attained, as the functional $\mename 0\alpha$ does not control the norm of the space where we look for the $\tht u$-component of admissible transition curves. Nonetheless, in certain situations (cf.\ the proof of Theorem \ref{th:pBV.v.BVsol} below) the existence of transitions attaining the optimal cost will play a key role. In fact, it will be sufficient to require the existence of these curves in cases in which the Finsler cost equals the energy release, which happens at the jump points of a true $\BV$ solution as in \eqref{jumpBV}. That is why, hereafter, we will refer to such transitions as \emph{optimal jump transitions}, a notion that will be made precise in Definition \ref{def-OT}. Therein we restrict to transition curves, defined on $[0,1]$, connecting points $q_-=(u_-,z_-)$ and $q_+=(u_+,z_+)$ such that the $u$-components $u_-$ and $u_+$ are at equilibrium, and the $z$-components $z_-$ and $z_+$ are locally stable. \begin{definition} \label{def-OT} Given $t\in [0,T]$ and $q_-=(u_-,z_-), \, q_+=(u_+,z_+) \in \Spq$ fulfilling $\slov ut{q_\pm} = \slov zt{q_\pm}=0$, we call an admissible curve $\serifTeta \in \admtcq t{q_-}{q_+} $ an \emph{optimal transition} between $q_-$ and $ q_+ $ at time $t$ if it fulfills \[ \eneq t{q_-} - \eneq t{q_+} = \costq{\mename 0\alpha}{t}{q_-}{q_+} = \mename 0\alpha [t,\serifTeta, 0, \serifTeta'] \quad \aein\, (0,1)\,. \] Furthermore, we say that $\serifTeta = (\db\serifteta u,\db\serifteta z)$ is of \begin{itemize} \item[\textbullet] \emph{sliding type} if $\slov ut{\serifTeta(r)}=\slov zt{\serifTeta(r)}=0$ for all $r\in [0,1]$; \item[\textbullet] \emph{viscous type} if $\slov ut{\serifTeta(r)}+\slov zt{\serifTeta(r)}>0$ for all $r\in (0,1)$. \end{itemize} \end{definition} \noindent Observe that an optimal transition of \emph{viscous} type can be governed by viscosity either in $u$, or in $z$, or in both variables. With the very same argument as for the proof of \cite[Prop.\,3.19]{MRS13}, to which we refer for all details, we can also show that every optimal transition can be decomposed in a canonical way into an (at most) countable collection of \emph{sliding} and \emph{viscous} transitions. We also refer to \cite[Sec.\,2.3]{RiScVe21TSSN} for the concept of so-called \emph{two-speed solutions}, which are defined in terms of slow rate-independent parts connected by jumps which are themselves a concatenation of at most countably many `jump resolution maps'. \Subsection{Existence of $\BV$ solutions} \label{su:Exist.BV} A most interesting feature of $\BV$ solutions, already observed in \cite{MRS13}, is that it is possible to prove their existence by directly taking the vanishing-viscosity limit of the viscous system \eqref{dne-q}, \emph{without} reparametrization. In the following result, we take a slightly different viewpoint and prove that every limit point $q$ (in the sense of pointwise weak convergence) of a sequence of viscous solutions $(q_\epsk)_k = (u_\epsk,z_\epsk)_k$, starting from well-prepared initial data and such that the $\rmB\rmV([0,T];\Spu)$-norm of $(u_\epsk)_k$ is \emph{a priori bounded} (cf.\ \eqref{estBV} below), is in fact a true $\BV$ solution. Indeed, the existence of limit points can be proved, based on the energy estimates from Lemma \ref{l:1} and on \eqref{estBV}, via a standard compactness argument and the Helly Theorem.
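For the reader's convenience, we record the version of the Helly selection principle that underlies this compactness argument; the statement below is standard and is included only for orientation. If $(v_k)_k \subset \rmB\rmV([0,T];\Spu)$ satisfies $\sup_{k\in\N} \| v_k\|_{\rmB\rmV([0,T];\Spu)} < \infty$, then, $\Spu$ being reflexive, there exist a (not relabeled) subsequence and a function $v\in \rmB\rmV([0,T];\Spu)$ such that
\begin{equation*}
  v_k(t) \weakto v(t) \quad \text{in } \Spu \quad \text{for all } t\in [0,T].
\end{equation*}
Applied to $(u_\epsk)_k$, with \eqref{estBV} providing the required bound, and combined with the energy estimates controlling the $z$-components, this is the mechanism producing limit points in the sense of \eqref{ptw-weak-limit} below.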
The statement of Theorem \ref{thm:exist-trueBV} below mirrors that of Theorem \ref{thm:existBV}: \begin{compactitem} \item First, \eqref{estBV} corresponds exactly to the a priori estimate for $\|\sfu_{\epsk}'\|_{\Spu} $ in \eqref{condition-4-normali}, and to estimate \eqref{est2} established in Proposition \ref{l:3.2}. Sufficient conditions for this estimate have been discussed in Section \ref{su:AprioViscSol}; alternatively, in concrete examples this estimate could be verified by direct calculations. \item Secondly, in the same way as with \eqref{eq:E.M.cvg.PBV} for parametrized solutions, with \eqref{cvs-BV-b}--\eqref{cvs-BV-c} ahead we are stating the convergence of the left-hand side terms in the viscous energy-dissipation estimate \eqref{enid-ineq}; in particular, \eqref{cvs-BV-c} ensures the convergence for $\eps_k \to 0^+$ of \begin{equation} \label{expl-me-1} \begin{aligned} & \int_s^t \meq \epsk{\alpha}{r}{q_\epsk(r)}1{q_\epsk'(r)} \dd r \\ & = \int_s^t \left( \disve {u}{\eps_k^\alpha}( u_\epsk'(r)) {+} \calR(z_\epsk'(r)) {+} \disve z\epsk( z_\epsk'(r)) {+} \frac{\slov ur{q_\epsk(r)}}{\epsk^\alpha} {+} \frac{\slov zr{q_\epsk(r)}}\epsk \right) \dd r \end{aligned} \end{equation} to the corresponding terms in the energy-dissipation balance \eqref{enid-strictBV}. We emphasize here that, for \eqref{cvs-BV-c} to hold, it is crucial that the definition of the total variation functional $\Var_{\mename 0\alpha}$, in the general closed subinterval $[s,t] \subset [0,T]$, takes into account the appropriate contributions at the jump points. In particular, we point out that, by \eqref{eq:def.EVJC}, the jumps occurring at the extrema $s$ and $t$ are also taken into account exactly, in the sense that $\costq{\mename0\alpha} s {q(s)}{q(s^+)}=\lim_{\sigma \to s^+} \lim_{\eps_k\to 0} \int_0^\sigma \mename{\eps_k}\alpha(\cdot) \dd r $. \end{compactitem} \begin{theorem}[Convergence to $\BV$ solutions] \label{thm:exist-trueBV} Let the rate-independent system $\RIS$ fulfill Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{hyp:Sept19}, and \ref{h:ch-rule-param}. For any null sequence $(\eps_k)_k$ let $(q_\epsk)_k = (u_{\eps_k},z_{\eps_k})_k \subset \AC ([0,T]; \Spq)$ be a sequence of solutions to the generalized gradient system \eqref{dne-q}, such that convergences \eqref{init-data-cv} to a pair $(u_0,z_0)\in \domq$ hold at the initial time $t=0$, and such that, in addition, \begin{equation} \label{estBV} \widehat S = \sup_k \| u_\epsk\|_{\rmB\rmV([0,T];\Spu)} <\infty. \end{equation} Let $q:[0,T] \to \Spq$ be such that, along a not relabeled subsequence, there holds as $k\to\infty$ \begin{equation} \label{ptw-weak-limit} q_\epsk(t) \weakto q(t) \quad \text{in } \Spq \quad \text{for all } t \in [0,T] \end{equation} (every sequence fulfilling the above conditions possesses at least one limit point in the sense of \eqref{ptw-weak-limit}).
Then, \begin{enumerate} \item $q=(u,z) \in \rmB\rmV([0,T];\Spu) \ti (\Reg 0T{\Spz} \cap \rmB\rmV([0,T];\Spy))$, and $q$ is a true $\BV$ solution to the rate-independent system $\RIS$; \item there hold the additional convergences as $k\to\infty$ \begin{subequations} \label{cvs-BV} \begin{align} & \label{cvs-BV-a} u_\epsk(t)\weakto u(t) \text{ in } \Spw, \qquad z_\epsk(t)\weakto z(t) \text{ in } \Spx && \text{for all } t \in [0,T], \\ \label{cvs-BV-b} & \eneq t{q_\epsk(t)} \to \eneq t{q(t)} \quad \text{for all } t \in [0,T], \\ \label{cvs-BV-c} & \lim_{k\to\infty} \int_s^t \meq \epsk{\alpha}{r}{q_\epsk(r)}1{q_\epsk'(r)} \dd r = \Var_{\mename 0\alpha}(q;[s,t]) && \text{for all $0\leq s \leq t \leq T$.} \end{align} \end{subequations} \end{enumerate} \end{theorem} \noindent The {proof} will be carried out in Section \ref{ss:8.2}. \begin{remark}[Vanishing-viscosity approximation versus $\BV$ solutions] \label{rm:VVAvsBV} We emphasize that the concept of $\BV$ solutions enjoys better closedness properties than defining solutions simply as all the limiting points in the vanishing-viscosity approximation. Such solutions are called `\emph{approximable}' in \cite{Miel11DEMF} and there, in Examples 2.5 and 2.6, it is shown in a simple model with $\Spz=\R$ that there are more $\BV$ solutions than approximable solutions. It is also made apparent that, for systems with $\delta$-dependent energy $\calE_\delta$, approximable solutions $q^\delta:[0,T]\to \R$ may have a limit $q^{\delta_*}$ for $\delta\to \delta_*$ that is no longer an approximable solution, but $q^{\delta_*}$ is still a $\BV$ solution. Thus, $\BV$ solutions seem to have better stability properties, see e.g.\ \cite[Thm.\,4.8]{MRS2013}. \end{remark} \begin{remark}[Existence of $\BV$ solutions by time discretization] \label{rm:BVSol.viaTD}\slshape Another interesting feature of true $\BV$ solutions is that they can be obtained as limits of discrete solutions of the time-incremental scheme \begin{equation} \label{tim-q} q_{\tau,\eps}^n \in \mathop{\mathrm{Argmin}}\limits_{q\in \mathbf{Q}} {\Big\{\tau \Psi_{\eps,\alpha}\Big(\frac{q{-}q_{\tau,\eps}^{n-1}}{\tau} \Big) + \calE(t_\tau^n,q)\Big\}}, \quad n =1,\,\ldots \,, N_\tau \end{equation} with $\Psi_{\eps,\alpha}$ from \eqref{eq:def.Psi.e.a}, as the viscosity parameter $\eps$ \emph{and} the time-step $\tau$ jointly tend to $0$. (Of course, \emph{fixing} $\eps>0$ and letting $\tau\to 0^+$ in \eqref{tim-q} gives rise to solutions $q_\eps:[0,T]\to \Spq$ of the generalized gradient system \eqref{GGS-structure}). This alternative construction of $\BV$ solutions in the joint discrete-to-continuous and vanishing-viscosity limit of the time-incremental scheme for viscous solutions was carefully explored in \cite[Thm.\,4.10]{MRS12} and \cite[Thm.\,3.12]{MRS13}. Following these lines it is possible to show convergence to $\BV$ solutions along (a subsequence of) any sequence $(\tau_k,\eps_k)$ as long as $\tau_k$ tends to $0$ faster than the time scales in our system, i.e.\ \begin{equation} \label{cond-alpha-eps/tau} \lim_{k\to\infty} \frac{\tauk}{\min\{ \eps_k^\alpha,\epsk\}}=0\,. \end{equation} To avoid overburdening the exposition, here we refrain from giving a precise convergence statement, but refer to \cite[Thm.\ 3.12]{MRS13}, which can be adapted to our setup using condition \eqref{cond-alpha-eps/tau}. The same applies to the convergence of time-discrete solutions to {enhanced} $\BV$ solutions introduced below.
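Purely for orientation, let us spell out a single step of \eqref{tim-q} in a toy situation; the following display rests on the simplifying (and by no means necessary) assumptions that $\Spu=\Spz=\R$ and that the dissipation potential from \eqref{eq:def.Psi.e.a} takes the prototypical form $\Psi_{\eps,\alpha}(u',z') = \frac{\eps^\alpha}{2}|u'|^2 + \kappa |z'| + \frac{\eps}{2}|z'|^2$ with $\kappa>0$. Then the incremental problem reads
\begin{equation*}
  (u^n_{\tau,\eps},z^n_{\tau,\eps}) \in \mathop{\mathrm{Argmin}}\limits_{(u,z)\in\R^2} \Big\{ \frac{\eps^\alpha}{2\tau}\,|u{-}u^{n-1}_{\tau,\eps}|^2 + \kappa\,|z{-}z^{n-1}_{\tau,\eps}| + \frac{\eps}{2\tau}\,|z{-}z^{n-1}_{\tau,\eps}|^2 + \calE(t^n_\tau,u,z)\Big\},
\end{equation*}
which displays the three small parameters $\tau$, $\eps^\alpha$, and $\eps$ whose relative decay is constrained by condition \eqref{cond-alpha-eps/tau}.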
\end{remark} \Subsection{Enhanced $\BV$ solutions} \label{ss:5.3} This solution concept is to be compared with the notion introduced in \cite[Def.\ 3.21]{MRS13} and, of course, with enhanced $\pBV$ solutions. In particular, recall that for an enhanced $\pBV$ solution $(\sft,\sfq) = (\sft, \sfu,\sfz)$ we required the additional regularity $\sfz \in \AC ([0,\sfS];\Spz)$. Accordingly, an \emph{enhanced $\BV$ solution} $q=(u,z)$ is required to fulfill $z\in \rmB\rmV([0,T];\Spz)$. Moreover, enhanced $\BV$ solutions enjoy the additional regularity property that at all jump points the left and right limits are connected by optimal transitions with finite length in $\Spu{\ti}\Spz$, such that the total length of the connecting paths $\teta =(\tht u,\tht z)$ is finite. In contrast, for general $\BV$ solutions it is only required that the length of the $\tht u$-component of an optimal jump transition is finite in $\Spu$. \begin{definition}[Enhanced $\BV$ solutions] \label{def:enhanced-strict-BVsols} A curve $q=(u,z):[0,T]\to \Spq$ is called an \emph{enhanced $\BV$ solution} of $\RIS$, if it is a $\BV$ solution and it satisfies the following additional properties: \begin{itemize} \item[\emph{(i)}] $q\in \rmB\rmV([0,T];\Spq) $; \item[\emph{(ii)}] for all $t \in \mathrm{J}[q]$ there exists an optimal jump transition $\teta^t = (\tht u^t,\tht z^t) \in \admtcq t{\llim qt}{\rlim qt} $ such that $\teta^t \in \AC ([0,1];\Spq)$ and $q(t)=\teta^t(\hat r_t)$ for some $\hat r_t \in [0,1]$; \item[\emph{(iii)}] $\sum_{t\in \mathrm{J}[q]} \int_0^1 \|(\teta^t)'(r)\|_{\Spq} \dd r = \sum_{t\in \mathrm{J}[q]} \int_0^1 \left(\|(\tht u^t)'(r)\|_{\Spu} {+} \|(\tht z^t)'(r)\|_{\Spz} \right) \dd r <\infty$. \end{itemize} \end{definition} Our existence result for enhanced $\BV$ solutions can again be proved by our vanishing-viscosity approach without reparametrizing the trajectories, by taking the vanishing-viscosity limit of viscous solutions that satisfy an additional estimate on $\sup_{k \in \N} \| z_\epsk\|_{\rmB\rmV([0,T];\Spz)} $. \begin{theorem}[Convergence of viscous solutions to enhanced $\BV$ solutions] \label{thm:exist-nonpar-enh} Assume Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{hyp:Sept19}, and \ref{h:ch-rule-param}. Let $(q_\epsk)_k \subset \AC ([0,T]; \Spq)$ be a sequence of solutions to the generalized gradient system \eqref{van-visc-intro} such that convergences \eqref{init-data-cv} hold at $t=0$, as well as \begin{equation} \label{estBV-u} \exists\, \widehat S>0 \ \forall\, k \in \N \,: \qquad \| q_\epsk\|_{\rmB\rmV([0,T];\Spq)} \leq \widehat S. \end{equation} Let $q:[0,T]\to \Spq$ be a limit point for $(q_\epsk)_k$ in the sense of \eqref{ptw-weak-limit}. Then, $q$ is an \emph{enhanced $\BV$ solution} of $\RIS$, and the additional convergences \eqref{cvs-BV} hold. \end{theorem} \noindent Since the proof of Theorem \ref{thm:exist-nonpar-enh} follows from combining the argument for Theorem \ref{thm:exist-trueBV} with that developed for \cite[Thm.\,3.22]{MRS13}, it is omitted. \Subsection{Comparing $\pBV$ and true $\BV$ solutions} \label{su:Comp.pBV.BV} In this final subsection we explore the relations between parametrized and true $\BV$ solutions, also in the enhanced case. Indeed, there is a very natural passage from parametrized to true $\BV$ solutions. The converse passage will be obtained by `filling the graph' of a true $\BV$ solution at its jump points, by means of an optimal jump transition, under the \emph{additional} assumption that it exists.
This condition is codified in the following \begin{hypothesis} \label{hyp:OJT} For every $t\in [0,T]$ and $q^-, q^+\in \Spq$ such that $\slov ut{q^\pm} = \slov zt{q^\pm}=0$ and \[ \eneq t{q^-} - \eneq t{q^+} = \costq {\mename 0\alpha} t{q^-}{q^+} \] there exists an \emph{optimal jump transition} $\serifTeta^{\mathrm{opt}} \in \admtcq t{q^-}{q^+}$. \end{hypothesis} \begin{remark} Let us emphasize that Hypothesis \ref{hyp:OJT} plays no role in proving the existence of $\BV$ solutions. It only serves the purpose of showing that a true $\BV$ solution gives rise to a parametrized one. In this connection, let us mention in advance that, in the statement of Theorem \ref{th:pBV.v.BVsol}, Hypothesis \ref{hyp:OJT} will not be required for relating \emph{enhanced} $\BV$ solutions to their parametrized analogues, as the definition of enhanced $\BV$ solutions already encompasses the information that optimal jump transitions exist. \end{remark} We are now ready to state the following relations between true and parametrized $\BV$ solutions. \begin{theorem}[$\pBV$ versus true $\BV$ solutions] \label{th:pBV.v.BVsol} Let $\RIS$ fulfill Hypothesis \ref{h:ch-rule-param}. \newline Then the following statements are true: \begin{enumerate} \item If $(\sft,\sfq):[0,\mathsf{S}]\to [0,T]\ti \Spq$ is a non-degenerate $\pBV$ solution of $\RIS$ with $\sft(0)=0$ and $\sft(\mathsf{S})=T$, then every $q:[0,T]\to \Spq$ satisfying \begin{equation} \label{eq:Project.pBV} q(t) \in \bigset{\sfq(s) }{ \sft(s)=t} \end{equation} is a (true) $\BV$ solution that enjoys, moreover, the following property: for every $t\in \mathrm{J}[q]$ there exists an optimal jump transition $\tetaopt \in \admtcq t{\llim qt}{\rlim qt}$ such that $q(t)=\tetaopt(\hat r)$ for some $\hat r \in [0,1]$. Furthermore, there holds \begin{equation} \label{equality-between-variations} \Variq{\mename 0\alpha}{q}{t_0}{t_1} = \int_{\sfs(t_0)}^{\sfs(t_1)} \mename 0\alpha [\sft, \sfq, \sft', \sfq'](s) \dd s \quad \text{for all } 0 \leq t_0\leq t_1\leq T\,. \end{equation} \item Conversely, assume additionally Hypothesis \ref{hyp:OJT}. Then, for every $\BV$ solution $q:[0,T]\to \Spq$, there exists a non-degenerate, surjective $\pBV$ solution $(\sft,\sfq) \in \mathscr{A}([0,\mathsf{S}];[0,T]\ti \Spq)$ such that \eqref{eq:Project.pBV} and \eqref{equality-between-variations} hold. \item If $(\sft,\sfq):[0,\mathsf{S}]\to [0,T]\ti \Spq$ is a (non-degenerate) enhanced $\pBV$ solution with $\sft(0)=0$ and $\sft(\mathsf{S})=T$, then every $q:[0,T]\to \Spq$ given by \eqref{eq:Project.pBV} is an enhanced $\BV$ solution, and \eqref{equality-between-variations} holds. \item Conversely, for any enhanced $\BV$ solution $q:[0,T]\to \Spq$, there exists a (non-degenerate, surjective) enhanced $\pBV$ solution $(\sft,\sfq) \in \mathscr{A}([0,\mathsf{S}];[0,T]\ti \Spq)$ such that \eqref{eq:Project.pBV} and \eqref{equality-between-variations} hold. \end{enumerate} \end{theorem} \begin{remark}[Greater generality of true $\BV$ solutions]\slshape \label{rm:GeneralityBV} Theorem \ref{th:pBV.v.BVsol} seems to suggest that true $\BV$ solutions are more general than their parametrized analogues. Indeed, while, under the standing assumptions of Section \ref{s:setup}, parametrized solutions always give rise to true $\BV$ ones, the converse passage is only guaranteed under the additional Hypothesis \ref{hyp:OJT}. Hence, the set of true $\BV$ solutions is apparently bigger.
To emphasize this, we have chosen to prove that any limit curve $q$ for a sequence $(q_{\epsk})_k$ of (non-parametrized) viscous solutions is a true $\BV$ solution, as stated in Theorem \ref{thm:exist-trueBV}, by resorting to Theorem \ref{thm:existBV} for parametrized solutions. Namely, in Sec.\ \ref{ss:8.2} we will use that the graphs of a sequence $(q_{\epsk})_k$ of viscous solutions are contained in the image sets of their parametrized counterparts $(\sft_{\epsk}, \sfq_{\epsk})_k$ and apply Theorem \ref{thm:existBV} to the latter curves, guaranteeing their convergence to a $\pBV$ solution $(\sft,\sfq)$. We will then proceed to show that $q$ and $(\sft,\sfq)$ are related by \eqref{eq:Project.pBV} and thus conclude, by Thm.\ \ref{th:pBV.v.BVsol}(1), that $q$ is a true $\BV$ solution. \end{remark} \begin{proof} \STEP{1: From $\pBV$ to $\BV$ solutions.} First, we show that, given a $\pBV$ solution $(\sft,\sfq) = (\sft,\sfu,\sfz)$, formula \eqref{eq:Project.pBV} defines a curve $q=(u,z) \in \rmB\rmV([0,T];\Spu) \ti (\Reg 0T\Spz {\cap} \rmB\rmV([0,T];\Spy))$. Indeed, let $\sfs:[0,T]\to [0,\mathsf{S}] $ be any inverse of $\sft$, with jump set $ \mathrm{J}[\sfs]$. It can be easily checked that, since $(\sft,\sfq) = (\sft,\sfu,\sfz) $ is non-degenerate, \[ t\in \mathrm{J}[q] = \mathrm{J}[u] \cup \mathrm{J}[z] \qquad \Longleftrightarrow \qquad t \in \mathrm{J}[\sfs] \text{ and } \sft(s) \equiv t \text{ for all } s \in [\llim \sfs t, \rlim \sfs t]\,. \] If for $t\in \mathrm{J}[\sfs]$ we have $q(t) = \sfq(s_*)$ for some $s_*\in [\llim \sfs t, \rlim \sfs t]$, then defining $\sfs(t): = s_*$ gives the identity \begin{equation} \label{not-clear-use} q(t) = (u(t), z(t)) = \sfq(\sfs(t)) =( \sfu(\sfs(t)), \sfz(\sfs(t))) \qquad \text{for all } t\in [0,T]. \end{equation} From this, we deduce $u\in \rmB\rmV([0,T];\Spu)$ and $z\in \rmB\rmV([0,T];\Spy)$. Moreover, since $\sup_{t\in [0,T]} \mfE(q(t))\leq E$ for some $E>0$ and the functional $\mfE + \| \cdot\|_\Spu + \| \cdot\|_{\Spy} $ has sublevels bounded in $\Spw\ti \Spx$, we also have $z\in \rmL^\infty(0,T;\Spx)$, which gives $z\in \Reg 0T{\Spz}$ thanks to \eqref{4later-use-embed}. From \eqref{not-clear-use} we easily deduce that \begin{equation} \label{ingred-stim-var-1} \mathrm{Var}_{\calR}(z;[t_0,t_1]) = \int_{\sfs(t_0)}^{\sfs(t_1)} \calR[\sfz'](s) \dd s \qquad \text{for all } 0\leq t_0\leq t_1 \leq T\,. \end{equation} Furthermore, we mimic the argument from the proof of \cite[Prop.\,4.7]{MRS13} and observe that for every $t\in \mathrm{J}[q]$ the curve $\sfq = (\sfu,\sfz): [\llim \sfs t, \rlim \sfs t] \to \Spu \ti \Spz$, reparametrized in such a way that it is defined on the interval $[0,1]$, is an admissible transition curve between $ \llim qt$ and $\rlim qt$. Hence, \begin{equation*} \costq{\mename 0\alpha}{t}{\llim qt}{q(t)} \leq \int_{\llim \sfs t}^{\sfs (t)} \!\!\mename 0\alpha [\sft,\sfq, 0, \sfq'] (s) \dd s,\quad \costq{\mename 0{\alpha\!}}{t}{q(t)}{\rlim qt} \leq \int_{\sfs (t)}^{\rlim \sfs t} \!\! \mename 0\alpha [\sft,\sfq, 0, \sfq'](s) \dd s\,. \end{equation*} Combining this with \eqref{ingred-stim-var-1} we conclude that \begin{equation} \label{est-variations} \Variq{\mename 0\alpha}{q}{t_0}{t_1} \leq \int_{\sfs(t_0)}^{\sfs(t_1)} \mename 0\alpha [\sft, \sfq, \sft', \sfq'](s) \dd s \end{equation} for all $[t_0,t_1]\subset [0,T]$. Ultimately, we infer that $q$ fulfills the energy-dissipation estimate \eqref{enineq-sBV0T}.
In order to show that $q$ complies with the stationary equation \eqref{stationary-u} and the local stability condition \eqref{loc-stab}, we argue in the following way. Recalling the definition of the sets $\mathsf{\mathscr{G}}^\alpha$ from \eqref{setGalpha}, we introduce \[ \mathscr{H}^\alpha[q] : = \begin{cases} \{ t \in [0,T]\, : \ \slov ut{q(t)} = \slov zt{q(t)} =0 \} & \text{if } \alpha \geq 1, \\ \{ t \in [0,T]\, : \ \slov zt{q(t)} =0 \} & \text{if } \alpha \in (0,1). \end{cases} \] Observe that the set $ \mathscr{H}^\alpha[q]$ is dense in $[0,T]$. Indeed, its complement $[0,T]\setminus \mathscr{H}^\alpha[q] = \sft(\SetG \alpha{\sft}{\sfq})$ has null Lebesgue measure, since $\sft$ is constant on each connected component of the open set $\SetG \alpha{\sft}{\sfq}$. Therefore, by the lower semicontinuity properties of $\slovname u$ and $\slovname z$ ensured by Hypothesis \ref{hyp:Sept19}, in the case $\alpha\geq 1$ we immediately conclude \eqref{stationary-u} and \eqref{loc-stab}. For $\alpha \in (0,1)$, the above argument only yields \eqref{loc-stab}; for the validity of \eqref{stationary-u}, we observe that, for any $t\notin \mathrm{J}[q]$, \[ t = \sft (\bar s) \text{ and } q(t) = \sfq (\bar s) \quad \text{for } \bar s \in \overline{\{ s \in [0,\mathsf{S}]\, : \ \sft'(s)>0 \}}\,. \] Then, since $\slov u{\sft}{\sfq} \equiv 0$ on the set $\{ s \in (0,\mathsf{S})\, : \ \sft'(s) >0 \} $ as prescribed by Definition \ref{def:adm-p-c}, we conclude that $\slov u{t}{q(t)}=0$. Since $q$ complies with \eqref{stationary-u}, \eqref{loc-stab}, and \eqref{enineq-sBV0T}, by Proposition \ref{prop:BV-charact} we conclude that it is a true $\BV$ solution. In order to conclude \eqref{equality-between-variations}, we observe that, for all $0\leq t_0\leq t_1\leq T$ and $s_0, s_1 \in [0,\sfS]$ with $s_0\leq s_1$ such that $\sft(s_i) = t_i$ for $i\in \{0,1\}$, there holds \begin{equation} \label{for-eq-var} \begin{aligned} & \Variq{\mename 0\alpha}{q}{t_0}{t_1} \overset{\eqref{enid-strictBV}}{=} \eneq {t_0}{q(t_0)} - \eneq {t_1}{q(t_1)} +\int_{t_0}^{t_1} \pl_t \eneq {r}{q(r)} \dd r \\ & = \eneq {\sft(s_0)}{\sfq(s_0)}- \eneq {\sft(s_1)}{\sfq(s_1)} + \int_{s_0}^{s_1} \pl_t \eneq {\sft(s)}{\sfq(s)} \sft'(s) \dd s \overset{\eqref{def-parBV}}{=} \int_{s_0}^{s_1} \mathfrak{M}_0^\alpha [\sft,\sfq, \sft',\sfq'](s) \dd s\,. \end{aligned} \end{equation} It is immediate to see that the above arguments also yield an enhanced $\BV$ solution from any enhanced $\pBV$ solution. Hence, assertions (1) and (3) are proved. \smallskip \STEP{2: From $\BV$ to $\pBV$ solutions.} First of all, we show that, under the additional Hypothesis \ref{hyp:OJT}, with any true $\BV$ solution $q\in \rmB\rmV([0,T];\Spu) \ti (\Reg 0T\Spz {\cap} \rmB\rmV([0,T];\Spy))$ we can associate a non-degenerate, surjective curve $(\sft,\sfq) = (\sft,\sfu,\sfz) \in \mathscr{A}([0,\mathsf{S}];[0,T]\ti \Spq)$ such that \eqref{eq:Project.pBV} holds and \begin{equation} \label{balance-variations} \Variq{\mename 0\alpha}{q}0{T} = \int_{0}^{\mathsf{S}} \mename 0\alpha [\sft,\sfq,\sft',\sfq'](s) \dd s\,. \end{equation} Indeed, along the lines of \cite[Prop.\,4.7]{MRS13} we introduce the parametrization $\sfs$, defined on $[0,T]$ by \[ \begin{aligned} & \sfs(t): = t+ \Variq{\mename 0\alpha}{q}{0}{t}, \qquad \sfS: = \sfs(T) \quad \text{with } \\ & \mathrm{J}[\sfs]= \mathrm{J}[u] \cup \mathrm{J}[z] =(t_m)_{m\in M} \text{ and $M$ a countable set}. \end{aligned} \] We set $I := \cup_{m\in M} I_m$ with $I_m = (\sfs(t_{m}^-), \sfs(t_{m}^+) )$.
Hence, we define $(\sft,\sfq) = (\sft,\sfu,\sfz)$ on $[0,\sfS]\setminus I$ by $ \sft: = \sfs^{-1} : [0,\sfS]\setminus I \to [0,T] $ and $ \sfq : = q {\circ} \sft $. In order to extend $\sft$ and $\sfq$ to $I$, we need to use the fact that, by Hypothesis \ref{hyp:OJT}, for every $m \in M$ there exists an optimal jump transition $\tetaopt_m \in \admtcq {t_m}{\llim q{t_m}}{\rlim q{t_m}}$, defined on the canonical interval $[0,1]$ and such that $\tetaopt_m(\hat{r}_m) = q(t_m)$ for some $\hat r_m \in [0,1]$. We may then define $\sft$ and $\sfq$ on $I = \cup_{m\in M} I_m$ by \[ \sft(s) \equiv t_m, \qquad \sfq(s) : = \tetaopt_m (\sfr_m(s)) \text{ for } s \in I_m, \quad \text{where } \sfr_m(s) = \tfrac{s - \sfs(t_m^-) }{\sfs(t_m^+)-\sfs(t_m^-)}\,. \] It can be easily checked that $(\sft,\sfq) \in \mathscr{A}([0,\mathsf{S}]; [0,T]\ti \Spq)$. By construction, the curves $q$ and $(\sft,\sfq)$ satisfy \eqref{eq:Project.pBV}. Furthermore, recalling \eqref{ingred-stim-var-1} and the fact that $\tetaopt_m \in \admtcq {t_m}{\llim q{t_m}}{\rlim q{t_m}}$, it is not difficult to check that \eqref{balance-variations} holds. Therefore, since $q$ is a $\BV$ solution, we infer that $(\sft,\sfq) $ is a $\pBV$ solution, and we obtain \eqref{equality-between-variations} by repeating the argument in \eqref{for-eq-var}. This argument also allows us to prove that any enhanced $\BV$ solution gives rise to an enhanced $\pBV$ solution. Hence, the proof of Theorem \ref{th:pBV.v.BVsol} is finished. \end{proof} \Section{Proofs of the main results} \label{s:8} This section focuses on the proofs of our main existence results for $\pBV$ and true $\BV$ solutions, i.e.\ Theorems \ref{thm:existBV} and \ref{thm:exist-trueBV}. They will be carried out in Sections \ref{ss:8.1} and \ref{ss:8.2}, respectively. Moreover, Section \ref{su:pr:char-Ctc-set} provides the proof of Proposition \ref{pr:charact-Ctc-set}. Throughout this section and, in particular, in the statement of the various auxiliary results, we will always tacitly assume the validity of Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{hyp:Sept19}, and of the parametrized chain rule from Hyp.\ \ref{h:ch-rule-param}: recall that, by Lemma \ref{l:nice-implication}, it implies the $\BV$-chain rule from Hypothesis \ref{hyp:BV-ch-rule}. \Subsection{Proof of Theorem \ref{thm:existBV}} \label{ss:8.1} Our first result lays the ground for the vanishing-viscosity analysis of Theorem \ref{thm:existBV} by settling the compactness properties of a sequence of parametrized curves enjoying the a priori estimates \eqref{condition-4-normali}. We have chosen to extract these properties from the proof of Theorem \ref{thm:existBV}, since we believe them to be of independent interest. Prior to stating Proposition \ref{prop:compactness-param}, let us specify the meaning of the third convergence in \eqref{compactness-7-2} below. Indeed, the sequence $(\sfu_k)_k$ is contained in a closed ball $\overline{B}_R \subset \Spu$ by virtue of estimate \eqref{bounds-rescaled-curves} (cf.\ Hypothesis \ref{h:1}). Now, since $\Spu$ is reflexive and separable, it is possible to introduce a distance $d_{\mathrm{weak}}$ inducing the weak topology on $\overline{B}_R$. Hence, convergence in $\rmC^0 ([0,\sfS]; \Spu_{\mathrm{weak}})$ means convergence in $\rmC^0 ([0,\sfS]; (\Spu, d_{\mathrm{weak}}))$.
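One concrete choice of such a distance, recorded here only for definiteness (any other metrization of the weak topology on $\overline{B}_R$ would do), is
\begin{equation*}
  d_{\mathrm{weak}}(u,v) : = \sum_{n\in \N} 2^{-n}\, \frac{|\pairing{}{\Spu}{\ell_n}{u{-}v}|}{1+|\pairing{}{\Spu}{\ell_n}{u{-}v}|} \qquad \text{for } u,v \in \overline{B}_R,
\end{equation*}
where $(\ell_n)_{n\in\N}$ is any sequence which is dense in the unit ball of $\Spu^*$; such a sequence exists because $\Spu^*$ is separable, $\Spu$ being reflexive and separable. On the bounded set $\overline{B}_R$ this distance indeed induces the weak topology.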
\begin{proposition} \label{prop:compactness-param} Let $(\sft_k, \sfq_k)_k \subset \AC([ 0,\sfS]; [0,T]\ti \Spq)$, with $\sft_k$ non-decreasing and $\sfq_k = (\sfu_k, \sfz_k)$, enjoy the following bounds, along a null sequence $(\eps_k)_k$: \begin{equation} \label{bounds-rescaled-curves} \exists\, C_*\geq 1 \ \ \forall\, k \in \N \, : \quad \begin{cases} & \hspace{-0.1cm} \sup_{s\in [0,\mathsf{S}]} \mfE(\sfq_k(s)) \leq C_*, \smallskip \\ & \hspace{-0.1cm} \sft_k'(s) + \calR (\sfz_k'(s)) + \mredq {\epsk} \alpha {\sft_k(s)}{\sfq_k(s)} {\sft_k'(s)}{\sfq_k'(s)} \\ & \qquad \qquad \qquad \qquad + \|\sfu_k'(s)\|_{\Spu} \leq C_* \ \foraa\, s \in (0,\mathsf{S}). \end{cases} \end{equation} Then, there exist an admissible parametrized curve $(\sft, \sfq)=(\sft,\sfu, \sfz) \in \mathscr{A} ([0,\sfS]; [0,T]\ti \Spq)$ with \begin{subequations} \label{compactness-7} \begin{align} \label{compactness-7-1} & \begin{aligned} & \sft \in \rmC_{\mathrm{lip}}^0 ([0,\sfS]; [0,T]), \quad \sfu \in \rmC_{\mathrm{lip}}^0 ([0,\sfS];\Spu), \ \text{ and } \sfz \in \rmC_{\mathrm{lip}}^0 ([0,\sfS];\Spy) \cap \rmC^0([0,\sfS];\Spz), \end{aligned} \intertext{and a (not relabeled) subsequence such that the following convergences hold as $k\to\infty$:} \label{compactness-7-2} & \left\{{\renewcommand{\arraystretch}{1.15} \begin{array}{ll} \sft_{k} \to \sft \text{ in } \rmC^0([0,\sfS]) \quad \text{and} \quad \sft'_{k} \weaksto \sft' \text{ in } \rmL^\infty(0,\sfS), \\ \sfu_k \weaksto \sfu \text{ in } W^{1,\infty} (0,\sfS;\Spu), \\ \sfu_k \to \sfu \text{ in } \rmC^0 ([0,\sfS];\Spu_{\mathrm{weak}}) \\ \sfz_k \to \sfz \text{ in } \rmC^{0} ([0,\sfS];\Spz), \\ \sfu_k(s) \weakto \sfu(s) \text{ in } \Spw \text{ and } \sfz_k(s) \weakto \sfz(s) \text{ in } \Spx \quad \text{ for all } s \in [0,\sfS], \end{array} }\right. \\ \label{compactness-7-3} & \int_{0}^{\mathsf{S}} \mathfrak{M}_{0}^\alpha [\sft,\sfq,\sft',\sfq'](\sigma) \dd\sigma \leq \liminf_{k\to \infty} \int_{0}^{\sfS} \mathfrak{M}_{\eps_k}^\alpha (\sft_{k}(\sigma) ,\sfq_{k}(\sigma),\sft'_{k}(\sigma), \sfq'_{k}(\sigma)) \dd \sigma\,. \end{align} \end{subequations} \end{proposition} \begin{proof} We split the proof into three steps. \STEP{1. Compactness:} From \eqref{bounds-rescaled-curves} we infer the following compactness information. (1.A) By the Ascoli-Arzel\`a Theorem, there exists a non-decreasing $\sft \in W^{1,\infty}(0,\mathsf{S})$ such that $\sft_k\to \sft$ uniformly in $[0,\mathsf{S}]$ and weakly$^*$ in $ W^{1,\infty}(0,\mathsf{S})$. (1.B) Since the sequence $(\sfu_k)_k$ is bounded in $W^{1,\infty}(0,\mathsf{S};\Spu)$ we conclude that there exists $ \sfu $ with the regularity from \eqref{compactness-7-1} such that, along a not relabeled subsequence, the second convergence in \eqref{compactness-7-2} holds for $(\sfu_k)_k$. The convergence in $\rmC^0 ([0,\sfS];\Spu_{\mathrm{weak}})$ follows from an Ascoli-Arzel\`a type theorem, see e.g.\ \cite[Prop.\,3.3.1]{AGS08}. (1.C) From $\sup_{s\in [0,\mathsf{S}]} \mfE(\sfq_k(s)) \leq C_*$ we deduce that there exists a ball \begin{subequations} \label{compactness-Arzela} \begin{equation} \label{comp-ingr-1} \text{$\overline{B}_M^{\Spx}\subset \Spx \Subset \Spz$ such that $\sfz_k(s) \in \overline{B}_M^{\Spx}$ for all $s\in [0,\mathsf{S}]$ and all $k\in \N$.} \end{equation} Using $\Spx\Subset \Spz\subset \Spy$ and the coercivity \eqref{R-coerc} of $\calR$, Ehrling's lemma gives that \[ \forall\, \omega>0 \ \exists\, C_\omega>0 \ \forall\, z\in \overline{B}_M^{\Spx} \quad \| z\|_{\Spz} \leq \omega +C_\omega \calR(z).
\] Hence, defining $\Omega_M(r): = \inf_{\omega>0} ( \omega {+}C_\omega r)$ and noting that $\Omega_M(\lambda r) \leq \lambda \Omega_M(r)$ for all $\lambda \geq 1$, we find \begin{equation} \label{comp-ingr-2} \| \sfz_k(s_1) {-} \sfz_k(s_2) \|_{\Spz} \leq \Omega_M(\calR(\sfz_k(s_1) {-} \sfz_k(s_2) )) \leq C_* \Omega_M(|s_1{-}s_2|) \text{ for all } 0 \leq s_1\leq s_2 \leq \mathsf{S}, \end{equation} \end{subequations} where the last estimate follows from the bound for $\calR(\sfz'_k)$ in \eqref{bounds-rescaled-curves}. We combine the compactness information provided by \eqref{comp-ingr-1} with the equicontinuity estimate \eqref{comp-ingr-2} and again apply \cite[Prop.\,3.3.1]{AGS08} to deduce that there exists $\sfz \in \rmC^0([0,\mathsf{S}];\Spz)$ such that, along a not relabeled subsequence, $(\sfz_k)_k$ converges to $\sfz$ in the sense of \eqref{compactness-7-2}. Let us denote by $\sfq$ the curve $ (\sfu, \sfz)$. \STEP{2. $\sfq$ is an admissible parametrized curve:} Combining the previously found convergences with the first estimate in \eqref{bounds-rescaled-curves}, we obtain $ \sup_{s\in [0,\mathsf{S}]} \mfE(\sfq(s)) \leq C_*$. Using the second estimate in \eqref{bounds-rescaled-curves} and \eqref{R-coerc} we have $\|\sfz(s_2){-}\sfz(s_1)\|_{\Spy} \leq C_*|s_2{-}s_1|/c_\calR$. With \eqref{comp-ingr-2} we also infer that $\sfz \in \rmC_{\mathrm{lip}}^0([0,\sfS]; \Spy)$. We will now show that $\sfz$ is locally absolutely continuous in the set $ \SetG\alpha t{\sfq}$ from \eqref{setGalpha}. Let us first examine the case $\alpha \in (0,1)$. Since the function $s\mapsto \slov zt{\sfq(s)}$ is lower semicontinuous thanks to Hypothesis \ref{hyp:Sept19}, for every $[\varsigma,\beta] \subset \SetG\alpha t {\sfq} $ there exists $ c>0$ such that $\slov zt{\sfq(s)} \geq c$ for all $s\in [\varsigma,\beta]$. This estimate bears two consequences: \begin{compactenum} \item Exploiting the \emph{uniform} convergence of $\sfz_k$ to $\sfz$ and again relying on Hypothesis \ref{hyp:Sept19}, \begin{equation} \label{positivity-duals} \exists\, \bar k \in \N \ \forall\, k \geq \bar k \, \ \forall\, s \in [\varsigma,\beta]\, : \qquad \slov zt{\sfq_k(s)} \geq \frac c2. \end{equation} This implies that, for $k\geq \bar k$, the sets $ \SetG\alpha t{\sfq_k} = \{ s\, : \, \slov zt{\sfq_k(s)}>0 \} $ contain the interval $[\varsigma,\beta]$. \item Since, by \eqref{bounds-rescaled-curves}, $C_* \geq \mredq {\epsk} \alpha {\sft_k(s)}{\sfq_k(s)} {\sft_k'(s)}{\sfq_k'(s)} $ for almost all $s\in (\varsigma,\beta)$, we are in a position to apply estimate \eqref{est-Alex-1} from Lemma \ref{new-lemma-Alex} and deduce that \begin{equation} \label{pavia} \exists\, \overline C>0 \ \exists\, \bar k \in \N \ \forall\, k \geq \bar k \ \foraa s \in (\varsigma,\beta)\, : \qquad \|\sfz_k'(s)\|_{\Spz} \leq \overline{C}\,. \end{equation} \end{compactenum} The discussion of the case $\alpha\geq1$ follows the very same lines: for every $[\varsigma,\beta] \subset \SetG\alpha t{\sfq}$ we find $\tilde{c}>0$ and $\tilde k \in \N$ such that for every $k \geq \tilde k$ we have $ \slov ut{\sfq_k(s)} {+} \slov zt{\sfq_k(s)} \geq \frac{\tilde{c}}2$ for every $s\in [\varsigma,\beta]$. Then, estimate \eqref{pavia} follows from \eqref{est-Alex-1-bis} in Lemma \ref{new-lemma-Alex}. All in all, for all $\alpha >0$ the curves $\sfz_k$ are uniformly $\Spz$-Lipschitz on $[\varsigma,\beta]$.
This entails that $\sfz$ is $\Spz$-Lipschitz on every subinterval $[s_1,s_2] \subset \SetG\alpha t{\sfq}$, and reflexivity of $\Spz$ gives us \begin{equation} \label{conv-teta-z-k-prime} \sfz_k \weaksto \sfz \text{ in } W^{1,\infty} (s_1,s_2;\Spz) \quad \text{for all } [s_1,s_2]\subset \SetG\alpha t{\sfq}. \end{equation} \STEP{3. Proof of \eqref{compactness-7-3}:} In order to conclude that $(\sft,\sfq) \in \mathscr{A} ([0,\sfS]; [0,T]\ti \Spq)$, it remains to show that it fulfills property \eqref{summability}, which will be a consequence of \eqref{compactness-7-3}. By lower semicontinuity we have \begin{equation} \label{lim-pass-disr} \begin{aligned} \liminf_{k\to\infty} \int_0^{\mathsf{S}} \calR(\sfz_k'(s)) \dd s \overset{(1)}{=} \liminf_{k\to\infty} \Varname{\calR}(\sfz_k; [0,\mathsf{S}]) & \geq \Varname{\calR}(\sfz; [0,\mathsf{S}]) \overset{(2)}{=} \int_0^{\mathsf{S}} \calR[\sfz'](s) \dd s, \end{aligned} \end{equation} with $ \overset{(1)}{=} $ and $\overset{(2)}{=} $ due to \eqref{cited-VarR-later}. Furthermore, we have \begin{equation} \label{reduced-lsc} \begin{aligned} \liminf_{k\to\infty} \int_{0}^{\sfS} \mredq {\eps_k}{\alpha}{\sft_{k}} {\sfq_{k}}{\sft'_{k}}{\sfq'_{k}} \dd s & \geq \liminf_{k\to\infty} \int_{(0,\sfS)\cap \SetG\alpha \sft \sfq} \mredq {\eps_k}{\alpha}{\sft_{k}} {\sfq_{k}}{\sft'_{k}}{\sfq'_{k}} \dd s \\ & \overset{(3)}{\geq} \int_{(0,\sfS)\cap \SetG\alpha \sft \sfq} \mredq{0}{\alpha}{\sft}{\sfq}0{\sfq'} \dd s\,. \end{aligned}\vspace{-0.4em} \end{equation} Here, $ \overset{(3)}{\geq} $ follows from Proposition \ref{prop:Ioffe}, applied to the functionals $ \mredname{\epsk}{\alpha}$ and $\mredname 0\alpha$, which we consider restricted to the (weakly closed, by assumption \eqref{h:1.1}) energy sublevel $\Iof = \{ q \in \Spq\, : \ \mfE(q) \leq C_*\}$. Combining \eqref{lim-pass-disr} and \eqref{reduced-lsc}, we infer \eqref{compactness-7-3} and thus conclude the proof of Proposition \ref{prop:compactness-param}. \end{proof} We are now in a position to carry out the \noindent \begin{proof}[Proof of Theorem \ref{thm:existBV}] Let $(\sft_\epsk,\sfq_\epsk)_k $ be a sequence of rescaled viscous trajectories satisfying \eqref{condition-4-normali}. We apply Proposition \ref{prop:compactness-param} and conclude that there exist a limit parametrized curve $(\sft,\sfq) \in \mathscr{A}([0,\sfS];[0,T]\ti \Spq)$, fulfilling \eqref{continuity-properties}, and a (not relabeled) subsequence along which convergences \eqref{cvs-eps} hold. We now show that the curve $(\sft,\sfq)$ fulfills the upper energy-dissipation estimate $\leq $ in \eqref{def-parBV} by passing to the limit as $\epsk\to 0^+$ in \eqref{reparam-enineq} for $s_1=0$ and $s_2= s\in (0,\sfS]$. The key lower semicontinuity estimate \begin{equation} \label{keyLSC-HS} \int_0^s \mathfrak{M}_0^\alpha[\sft, \sfq, \sft',\sfq'](\sigma) \dd \sigma \leq \liminf_{k\to\infty} \int_{0}^{s} \mathfrak{M}_{\epsk}^\alpha (\sft_\epsk(\sigma) ,\sfq_\epsk(\sigma), \sft'_\epsk(\sigma), \sfq'_\epsk(\sigma)) \dd \sigma \quad \text{for all $s\in [0,\sfS]$} \end{equation} follows from \eqref{compactness-7-3} in Proposition \ref{prop:compactness-param}.
Convergences \eqref{cvs-eps}, the lower semicontinuity \eqref{h:1.2} of $\calE$, and the continuity \eqref{h:1.3e} of $\pl_t \calE$ give for all $s\in [0,\sfS]$ that \begin{equation} \label{lsc-en-usc-power} \liminf_{k\to\infty} \eneq {\sft_\epsk (s)}{\sfq_\epsk(s)} \geq \eneq {\sft(s)}{\sfq(s)} \ \text{ and } \ \int_0^{s} \pl_t \eneq {\sft_\epsk}{\sfq_\epsk} \sft_\epsk' \dd \sigma \to \int_0^{s} \pl_t \eneq {\sft}{\sfq} \sft' \dd \sigma\,. \end{equation} For the last convergence we use $\sft'_k \weaksto \sft'$ in $\rmL^\infty(0,\sfS)$ and $|\pl_t \eneq {\sft_\epsk(\sigma)}{\sfq_\epsk(\sigma)} | \leq C_\# \eneq {\sft_\epsk(\sigma)}{\sfq_\epsk(\sigma)} \leq C$ by \eqref{h:1.3d} and \eqref{est1}, which together with \eqref{h:1.3e} gives $\pl_t \eneq {\sft_\epsk}{\sfq_\epsk} \to \pl_t \eneq {\sft}{\sfq} $ strongly in $\rmL^2(0,\sfS)$. Taking into account the convergence of the initial energies guaranteed by \eqref{init-data-cv}, we complete the limit passage in \eqref{reparam-enineq}. Thanks to Lemma \ref{l:characterizBV}, the validity of the upper energy-dissipation estimate in \eqref{def-parBV} ensures that $(\sft,\sfq) = (\sft,\sfu,\sfz)$ is a $\pBV$ solution. The enhanced convergences \eqref{cvs-eps-energy} and \eqref{cvs-eps-M} are a by-product of this limiting procedure. Although the argument is standard, we recap it for the reader's convenience and later use, and introduce the following place-holders for every $s\in [0,\sfS]$: \[ \left\{ \begin{array}{lll} E_{\epsk}^s := \eneq {\sft_\epsk(s)}{\sfq_\epsk(s)}, && E_{0}^s := \eneq {\sft(s)}{\sfq(s)} \\ M^s_\epsk : = \int_{0}^{s} \mathfrak{M}_\epsk^\alpha (\sft_\epsk(\sigma), \sfq_\epsk(\sigma), \sft_\epsk'(\sigma), \sfq_\epsk'(\sigma)) \,\dd \sigma && M^s_0 : = \int_{0}^{s} \mathfrak{M}_0^\alpha [\sft, \sfq, \sft', \sfq'](\sigma) \,\dd \sigma \\ E_{\epsk}^0 := \eneq {\sft_\epsk(0)}{\sfq_\epsk(0)} && E_{0}^0 := \eneq {\sft(0)}{\sfq(0)} \\ P^s_\epsk: = \int_{0}^{s} \pl_t \eneq {\sft_\epsk(\sigma)}{\sfq_\epsk(\sigma)} \,\sft'_\epsk(\sigma) \,\dd \sigma && P^s_0: = \int_{0}^{s} \pl_t \eneq {\sft(\sigma)}{\sfq(\sigma)} \,\sft'(\sigma) \,\dd \sigma\,. \end{array} \right. \] Hence, the parametrized energy-dissipation estimate \eqref{reparam-enineq} rephrases as $ E_{\epsk}^s +M^s_\epsk \leq E_{\epsk}^0+ P^s_\epsk $, and the limiting energy-dissipation balance rewrites as $ E_{0}^s +M^s_0 = E_{0}^0 + P^s_0 $. So far, we have shown that \[ E_{0}^s +M^s_0 \leq \liminf_{k\to\infty} \left( E_{\epsk}^s +M^s_\epsk\right) \leq \limsup_{k\to\infty} (E_{\epsk}^s +M^s_\epsk) \leq \limsup_{k\to\infty} (E_{\epsk}^0+ P^s_\epsk) = E_{0}^0 + P^s_0 = E_{0}^s +M^s_0 \,. \] Since we have $\liminf_{k\to\infty} E_{\epsk}^s \geq E_0^s$ and $\liminf_{k\to\infty} M^s_\epsk \geq M^s_0$, we thus conclude that $\lim_{k\to\infty} E_{\epsk}^s = E_0^s$ and $\lim_{k\to\infty}M^s_\epsk = M^s_0$ for all $s\in [0,\sfS]$, which means \eqref{eq:E.M.cvg.PBV}. Thus, Theorem \ref{thm:existBV} is established. \end{proof} \Subsection{Proof of Theorem \ref{thm:exist-trueBV}} \label{ss:8.2} \begin{proof} We split the argument into three steps. \smallskip \STEP{1. Construction of a suitable $\pBV$ solution.} Let $(q_{\epsk})_k$, $q$ be as in the statement of Theorem \ref{thm:exist-trueBV}. Lemma \ref{l:1} ensures the validity of the basic energy estimates \eqref{est-quoted-later} and \eqref{est1} for the sequence $(q_\epsk)_k = (u_{\epsk},z_\epsk)_k$.
The additional estimate for $(u_\epsk)_k$ in $\rmB\rmV([0,T];\Spu) $ is assumed in \eqref{estBV}, so that the arc-length functions $\sfs_{\epsk}$ from \eqref{arclength-est1-2} fulfill $\sup_{k\in \N} \sfs_\epsk (T) \leq C$. We reparametrize the curves $q_\epsk$ by means of the rescaling functions $\sft_\epsk: = \sfs_\epsk^{-1}$, setting $\sfq_\epsk: = q_\epsk{\circ} \sft_{\eps_k}$. Without loss of generality we may suppose that $(\sft_\epsk,\sfq_\epsk)$ is surjective and defined on a fixed interval $[0,\sfS]$. Now, for the sequence $(\sft_\epsk, \sfq_\epsk)_k$ the a priori estimate \eqref{condition-4-normali} holds. Hence, we are in a position to apply Thm.\ \ref{thm:existBV} to the curves $(\sft_\epsk,\sfq_\epsk)_k$ and we conclude that $(\sft_\epsk,\sfq_\epsk)_k$ converges along a (not relabeled) subsequence to a $\pBV$ solution $(\sft,\sfq): [0,\sfS]\to [0,T]\ti \Spq$. In what follows, we will prove that $q$ is related to the parametrized curve $(\sft,\sfq)$ via \eqref{eq:Project.pBV}.\smallskip \STEP{2. Every limit point $q$ is a true $\BV$ solution:} We first prove that \begin{equation} \label{proj.pBV.to.BV} q(t) \in \{ \sfq(s)\, : \ \sft(s) = t \} \text{ for all } t \in [0,T]. \end{equation} For this we fix $t_*\in [0,T]$ and choose $s_k\in [0,\sfS]$ such that $\sft_\epsk(s_k)=t_*$. After choosing a (not relabeled) subsequence we may assume $s_k\to s_*$. As $(\sft_\epsk,\sfq_\epsk)_k$ converges uniformly to $(\sft,\sfq)$ in $\rmC^0([0,\sfS];\R\ti \Spq_\mathrm{weak})$ we obtain \[ \sft_\epsk(s_k)\to \sft(s_*) \quad \text{and} \quad \sfq_\epsk(s_k)\weakto \sfq(s_*). \] However, by construction we have \[ \sft_\epsk(s_k)=t_* \quad \text{and} \quad \sfq_\epsk(s_k) \overset{\text{\tiny Step 1}}= q_\epsk(\sft_\epsk(s_k)) = q_\epsk(t_*) \weakto q(t_*), \] where the last convergence is the assumption in \eqref{ptw-weak-limit}. Hence we conclude $\sft(s_*)=t_*$ and $\sfq(s_*)=q(t_*)$, which is the desired relation \eqref{proj.pBV.to.BV}. Thanks to \eqref{proj.pBV.to.BV}, we can apply Theorem \ref{th:pBV.v.BVsol}(1), which ensures that $q$ is a true $\BV$ solution.\smallskip \STEP{3. Proof of convergences \eqref{cvs-BV}:} Since $(q_{\eps_k})_k$ is bounded in $\rmL^\infty (0,T;\Spw\ti \Spx)$ by estimate \eqref{est1} and Hypothesis \ref{hyp:1}, the pointwise weak convergence in $\Spq$ improves to the convergences in \eqref{cvs-BV-a}. Next, we observe that for every $ 0 \leq s_0 \leq s_1 \leq \mathsf{S}$ there holds \[ \begin{aligned} \lim_{k\to \infty} \int_{\sft(s_0)}^{\sft(s_1)} \meq \epsk{\alpha}{r}{q_\epsk(r)}1{q_\epsk'(r)} \dd r & = \lim_{k\to \infty} \int_{s_0}^{s_1} \meq \epsk{\alpha}{\sft_\epsk(\sigma)}{\sfq_\epsk(\sigma)}{\sft_\epsk'(\sigma)} {\sfq_\epsk'(\sigma)} \dd \sigma \\ & \overset{(1)}{=} \int_{s_0}^{s_1} \mename 0{\alpha}[\sft, \sfq, \sft', \sfq'](\sigma) \dd \sigma \overset{(2)}{=} \Variq{\mename 0\alpha}{q}{\sft(s_0)}{\sft(s_1)}, \end{aligned} \] with {\footnotesize (1)} due to \eqref{cvs-eps-M} and {\footnotesize (2)} due to \eqref{equality-between-variations}. Hence, \eqref{cvs-BV-c} follows. Finally, the lower semicontinuity of $\calE$ and the continuity \eqref{h:1.3e} of $\pl_t \calE$ give that \[ \begin{aligned} \liminf_{k\to\infty} \eneq t{q_\epsk(t)} \geq \eneq t{q(t)} \ \text{ for all } t\in [0,T] \quad \text{and} \quad \int_0^t \pl_t \eneq s{q_\epsk(s)} \dd s \to \int_0^t \pl_t \eneq s{q(s)} \dd s.
\end{aligned} \] Hence, with arguments similar to those in the proof of Theorem \ref{thm:existBV} (cf.\ the end of Section \ref{ss:8.1}), we conclude \[ \eneq t{q(t)} + \Variq{\mename 0\alpha} q0t = \lim_{k\to\infty} \eneq t{q_\epsk(t)} + \lim_{k\to\infty} \int_{0}^t \meq{\epsk}\alpha r{q_{\epsk}(r)}1{q_{\epsk}'(r)} \dd r \ \text{ for all } t \in [0,T], \] and \eqref{cvs-BV-b} ensues from the previously obtained \eqref{cvs-BV-c}. This finishes the proof of Theorem \ref{thm:exist-trueBV}. \end{proof} \Subsection{Proof of Proposition \ref{pr:charact-Ctc-set}} \label{su:pr:char-Ctc-set} Our task is to show the inclusions \eqref{eq:CtcSet.Incl} for the contact sets $\Ctc_\alpha$ and the flow regimes $\rgs Au \rgs Cz$ for the three different cases for $\alpha$. We rely on the explicit form of $\mename0\alpha=\calR+\mredname 0\alpha $ from \eqref{l:partial}.\smallskip \begin{proof}[Proof of Proposition \ref{pr:charact-Ctc-set}] \STEP{1: The case $t'>0$.} We start by showing that for all $\alpha>0$ we have \[ \Ctc_\alpha^{>0}:= \bigset{ (t,q,t',q')\in \Ctc_\alpha }{ t'>0} = \rgs Eu \rgs Rz = \rgs Eu \cap \rgs Rz . \] Indeed, in the case $t'>0$ we have $\mename 0\alpha(t,q,t',q') <\infty$ if and only if $\slov utq =\slov ztq =0$ and then $\mename 0\alpha(t,q,t',q') = \calR(z')$. From the former we obtain that, in fact, every $(\mu,\zeta)\in \argminSlo utq {\times} \argminSlo ztq$ satisfies $\mu=0$ and $-\zeta \in \pl \calR (0)$. From the contact condition $\calR(z') = \mathfrak{M}_0^\alpha(t,q,t',q') = {-}\pairing{}{\Spz}{\zeta}{z'}$ and the 1-homogeneity of $\calR$ we infer that $-\zeta \in \pl \calR(z')$, see \eqref{eq:subdiff.calR}. Taking into account that $\argminSlo xtq \subset \frsubq xtq$ for $\mathsf{x} \in \{\mathsf{u}, \mathsf{z}\}$, we ultimately infer \begin{equation} \label{EuRz-in-viscosity} \frsubq utq \ni 0, \qquad \pl \calR(z')+ \frsubq ztq \ni 0, \end{equation} namely system \eqref{static-tq} holds with $\thn u = \thn z =0$, i.e.\ $(t,q,t',q') \in \rgs Eu \rgs Rz $. Hence, we have shown $\Ctc_\alpha^{>0} \subset \rgs Eu \rgs Rz$. In fact, by reversing the arguments, the opposite inclusion holds as well.\smallskip \STEP{2. The case $t'=0$.} We define $\Ctc_\alpha^0:= \bigset{(t,q,t',q')\in \Ctc_{\alpha}}{ t'=0}$ and treat the three cases $\alpha=1$, $\alpha>1$, and $\alpha \in (0,1)$, separately.\smallskip \STEP{2.A. $t'=0$ and $\alpha=1$.} We want to show the inclusion \begin{equation} \label{desired-alpha=1} \Ctc_{\alpha=1}^0 \subset \big(\rgs Eu \rgs Rz \cap \{t'=0\}\big) \cup \rgs V{uz} \cup \rgs Bu \rgs Bz\,. \end{equation} From \eqref{l:partial} we have $\mename0\alpha(t,q,0,q')=\calR(z')+\mfb_{\disv u \oplus \disv z}(q',\slov utq{+}\slov ztq)$. \newline Hence, for $\slov utq{+}\slov ztq=0$ we argue as in Step 1 and obtain $(t,q,0,q')\in \rgs Eu \rgs Rz \cap \{t'=0\}$. We may now suppose that $\slov u{t}{q} {+} \slov z{t}{q}>0$ and $q'=(u',z')=0$. Clearly, the contact condition $ \mename 0{\alpha=1}(t,q,t',q') = -\pairing{}{\Spu}{\mu}{u'}- \pairing{}{\Spz}{\zeta}{z'} $ holds for all $(\mu,\zeta) \in \argminSlo utq {\times} \argminSlo ztq$. However, $\slov u{t}{q} {+} \slov z{t}{q}>0$ gives $\big(\{0\}{\times}\pl \calR(0) \big)\cap\frsubq qtq = \emptyset$, and because of $ \argminSlo utq {\times} \argminSlo ztq \subset \frsubq qtq$ we conclude that $(t,q,0,(0,0))$ fulfills system \eqref{static-tq} with $\thn u = \thn z =\infty $. Hence, $(t,q,0,q')=(t,q,0,0) \in \rgs Bu \rgs Bz$ as desired. Suppose now $ \disv z (z') {+} \disv u (u')>0$ in addition to $\slov u{t}{q} {+} \slov z{t}{q}>0$.
According to Proposition \ref{pr:VVCP}(b2) there exists $\ell = \ell(t,q,q')>0$ with \[ \mfb_{\disv u{\oplus}\disv z}(q', \slov u{t}{q} {+} \slov z{t}{q}) = \ell \:\Big(\disv u \big( \frac1\ell \, {u'}\big) {+}\disv z \big( \frac1{\ell}\,{z'}\big) {+} \slov u{t}{q}{+} \slov z{t}{q} \Big) \,. \] Now, $(t,q,0,q') \in \Ctc^0_1$ means that there exists $(\mu,\zeta) \in \argminSlo u{t}{q} {\times} \argminSlo z{t}{q}$ fulfilling the contact condition \begin{align*} \mename01(t,q,0,q')= \calR(z')+ \mfb_{\psi_\sfu{\oplus}\psi_\sfz}(q', \slov u{t}{q} {+} \slov z{t}{q}) = - \pairing{}{\Spu}{\mu}{u'} - \pairing{}{\Spz}{\zeta}{z'}. \end{align*} Moreover, the definition of $\argminSlo xtq$ gives $\slov utq=\disv u^*({-}\mu)$ and $\slov ztq=\conj z({-}\zeta)$. Together with the definition of $\ell$ we find the identity \begin{align*} & \disv u \big( \frac1\ell \, {u'}\big) {+} \calR \big( \frac1{\ell}\,{z'} \big) {+} \disv z \big( \frac1{\ell}\,{z'}\big) {+} \disv u^*({-}\mu){+} \conj z({-}\zeta) \\ &=\frac1\ell \:\mename01(t,q,0,q')= -\frac1\ell\big( \pairing{}{\Spu}{\mu}{u'} {+} \pairing{}{\Spz}{\zeta}{z'} \big) = \pairing{}{\Spu}{{-}\mu}{\frac1{\ell}\,{u'}} {+} \pairing{}{\Spz}{{-}\zeta}{\frac1{\ell} \, {z'} }. \end{align*} Since $\disv u^*\oplus \conj z$ is the Legendre-Fenchel dual of $\disv u \oplus(\calR {+}\disv z)$ we conclude \[ -\mu \in \pl \disv u\big(\frac1\ell\,u'\big) = \pl \disve u{1/\ell}(u') \ \text{ and } \ {-} \zeta \in \pl \calR \big( \frac1\ell\,z' \big) {+} \pl \disv z \big( \frac1\ell\,z' \big) = \pl \calR(z') {+} \pl \disve z{1/\ell}(z'). \] From this we see that, for $(t,q,0,q')$, system \eqref{static-tq} holds with $\thn u = \thn z = 1/\ell \in (0,\infty)$, i.e.\ we have $(t,q,0,q') \in \rgs V{uz}$, and the inclusion \eqref{desired-alpha=1} is established.\smallskip \STEP{2.B. $t'=0$ and $\alpha>1$.} Let us now examine the case $\alpha>1$ and prove that \begin{equation} \label{desired-alpha>1} \Ctc_\alpha^0 \subset \big(\rgs Eu \rgs Rz \cap\{t'=0\}\big) \cup \rgs Eu \rgs Vz \cup \rgs Bz \,. \end{equation} Using the explicit expression for $\mredq 0\alpha tq0{q'}$ in \eqref{l:partial}, we see that $\meq 0\alpha tq0{q'} < \infty$ implies that either (i) $\slov utq=0$ or (ii) $\big(\,\slov utq>0$ and $z'=0\,\big)$. In case (i), which means $\rgs Eu$, the contact condition reads \[ \slov utq=0 \quad \text{and} \quad \exists\, \zeta \in \argminSlo ztq\,{:} \ \ \calR(z')+ \mfb_{\psi_\sfz}(z', \slov ztq) = - \pairing{}{\Spz}{\zeta}{z'} . \] If $\slov ztq=0$, we have $\mfb_{\psi_\sfz}(z', \slov ztq)=0$ and infer $\calR(z')+ \pairing{}{\Spz}{\zeta}{z'}=0$. Moreover, $\slov ztq=0$ implies $\zeta \in \argminSlo ztq=\pl \calR(0)$, and we conclude $\pl \calR(z') + \zeta \ni 0$ by \eqref{eq:subdiff.calR}. Hence, we can choose $\thn z=0$ in \eqref{subdiff-stat.z} and obtain $(t,q,0,q')\in \rgs Eu \rgs Rz$. If $z'=0$ but $\slov ztq>0$, then \eqref{subdiff-stat.z} holds for $\thn z=\infty$ and $(t,q,0,q')\in \rgs Bz$. Finally, if $z'\neq 0$ and $\slov ztq>0$, then the very same discussion as in the last part of Step 2.A provides $\thn z \in (0,\infty)$ such that $ \pl \calR(z') + \pl \disve z{ \thn z} (z') + \frsubq ztq \ni 0 $, which means $(t,q,0,q')\in \rgs Eu \rgs Vz$. The discussion of the case (ii) with $\slov utq>0$ and $z'=0$ proceeds along the same lines, relying on the contact condition \[ \exists\, \mu \in \argminSlo utq: \quad \meq 0\alpha tq0{u',0}=\calR(0)+\mfb_{\psi_\sfu}(u', \slov utq) = - \pairing{}{\Spu}{\mu}{u'} .
\] For $u' \neq 0$ we find $\thn u\in (0,\infty)$ with $\pl \disve u{\thn u}(u') + \mu \ni 0$, which gives $(t,q,0,q')\in \rgs Vu \rgs Bz$. For $u'=0$ we can choose $\thn u=\infty$ such that $\pl \disve u{\infty}(0) + \mu =\Spu^*+\mu \ni 0$. Hence \eqref{static-tq} holds with $\thn z=\infty$ and $\thn u\in (0,\infty]$, i.e.\ $(t,q,0,q')\in \rgs Bz$. Thus, in both cases, (i) and (ii), we conclude \eqref{desired-alpha>1}, and Step 2.B is completed.\smallskip \STEP{2.C. $t'=0$ and $\alpha \in (0,1)$.} This case can be treated in the same way as the case $\alpha>1$ in Step 2.B, but with the roles of $u$ and $z$ interchanged, where $\rgs Eu$ is interchanged with $\rgs Rz$. Thus, in analogy to \eqref{desired-alpha>1} we obtain $\Ctc_\alpha^0 \subset \big(\rgs Eu \rgs Rz \cap\{t'=0\}\big) \cup \rgs Vu \rgs Rz \cup \rgs Bu $. This concludes the proof of Proposition \ref{pr:charact-Ctc-set}. \end{proof} \Section{Application to a model for delamination} \label{s:appl-dam} In this section we discuss the application of our vanishing-viscosity analysis techniques to a PDE system modeling adhesive contact. A previous vanishing-viscosity (and vanishing-inertia, in the momentum balance) analysis was carried out for a delamination model in \cite{Scala14} where, however, the energy balance obtained in the vanishing-viscosity limit only featured defect measures, in place of contributions describing the dissipation of energy at jumps. After introducing the viscous model and discussing its structure as a generalized gradient system in Section \ref{ss:10.-1}, we are going to state the existence of \emph{enhanced} $\BV$ and parametrized solutions to the corresponding rate-independent system $\RIS$ in Theorem \ref{thm:BV-adh-cont}. This result will be proved throughout Sections \ref{ss:10.0}--\ref{su:Dela.ExiApriGener} by showing that the `abstract' Theorems \ref{thm:exist-enh-pBV} and \ref{thm:exist-nonpar-enh} apply. As we will emphasize later on, our analysis crucially relies on the fact that, in the delamination system, the coupling between the displacements and the delamination parameter only occurs through lower-order terms. \Subsection{The `viscous' system for delamination} \label{ss:10.-1} We consider two bodies located in two bounded Lipschitz domains $\Omega^\pm \subset \R^3$ and adhering along a prescribed interface $\GC$, on which some adhesive substance is present. We denote by $\Gamma_\pm$ that part of $\pl\Omega^\pm$ that coincides with $\GC$, see Figure \ref{fig:delam.dom}, thus being able to talk about one-sided boundary conditions. In what follows, for simplicity we will assume that $\GC$ is a `flat' interface, i.e., $ \GC$ is contained in a plane, so that, in particular, $\mathscr{H}^{2}(\GC) =\mathscr{L}^{2}(\GC) >0$. While the generalization to a smooth curved interface is standard, this restriction will allow us to avoid resorting to Laplace-Beltrami operators in the flow rule for the delamination parameter. The state variables in the model are the displacement $u : \Omega \to \R^3$, with $\Omega: = \Omega^+ \cup \Omega^- $, and the delamination variable $z: \GC \to [0,1]$, representing the fraction of fully effective molecular links in the bonding. Therefore, $z(t,x) =1$ ($z(t,x)=0$, respectively) means that the bonding is fully intact (completely broken) at a given time instant $t\in [0,T]$ and in a given material point $x\in \GC$.
We denote by $n^\pm$ the outer unit normal of $\Omega^\pm$ restricted to $\Gamma_\pm$ and by $\JUMP{u}$ the jump of $u$ across $\GC$, namely $\JUMP{u} = u|_{\Gamma_+} - u|_{\Gamma_-}$, but now defined as function on $\GC$. \begin{figure} \begin{minipage}{0.45\textwidth} \begin{tikzpicture} \draw[very thick,black, fill=gray!20] (0,1)-- (4,1)--(4,2)--(3,3)--(1,3)--(0,2)--(0,1); \node[black] at (3,2){$\Omega^+$}; \draw[very thick, red] (-0.03,1) node[left]{\raisebox{0.8em}{$\Gamma_+$}}--(4.03,1); \draw[very thick] (-0.03,0.9)--(4.03,0.9) node[right]{$\GC$}; \draw[very thick,black, fill=gray!20] (0,0.8)-- (4,0.8)--(4,-1.1)--(-1,-1.3) --(0,0.8); \draw[very thick, red] (-0.03,0.8) node[left]{\raisebox{-1.6em}{$\Gamma_-$}}--(4.03,0.8); \node[black] at (3,-0.05){$\Omega^-$}; \draw[blue, very thick] (4,2)-- (3,3) node[pos=0.5, right]{\ $\GDir$}; \draw[blue, very thick] (4,-1.1)--(-1,-1.3) node[pos=0.05, below]{$\GDir$}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.3\textwidth} \caption{} \label{fig:delam.dom} The two domains $\Omega^+$ and $\Omega^-$ touch along the delamination hypersurface $\GC$. \end{minipage} \end{figure} For simplicity, we impose homogeneous Dirichlet boundary conditions $u=0$ on the Dirichlet part $\GDir$ of the boundary $\pl \Omega$, with $\mathscr{H}^2(\GDir)>0$. We consider a given applied traction $f$ on the Neumann part $\GNeu=\pl \Omega \setminus (\GDir{\cup} \GC) $. All in all, we address the following \emph{rate-dependent} PDE system \begin{subequations} \label{PDEadhc} \begin{align} & \label{PDEadhc-a} -\mathrm{div}(\eps^\alpha \bbD e(\dot u) + \bbC e(u))= F && \text{ in } \Omega \ti (0,T), \\ & \label{PDEadhc-b} u=0 && \text{ on } \GDir \ti (0,T), \\ \label{PDEadhc-c} & (\eps^\alpha\bbD e(\dot u) + \bbC e(u))|_{\GNeu} \nu = f && \text{ on } \GNeu \ti (0,T), \\ & \label{PDEadhc-d} (\eps^\alpha \bbD e(\dot u) + \bbC e(u))|_{\GC} n^\pm \pm \gamma (z) \pl \psi(\JUMP{u}) \pm \beta( \JUMP{u} ) \ni 0 && \text{ on } \Gamma_\pm \ti (0,T), \\ & \label{PDEadhc-e} \pl \mathrm{R}(\dot z) + \eps \dot z- \Delta z + \tilde{\phi}(z) + \pl \gamma(z) \psi(\JUMP{u}) \ni 0 && \text{ on } \GC \ti (0,T), \end{align} \end{subequations} where $\dot u$ and $\dot z$ stand for the partial time derivatives of $u$ and $z$. Here, $F$ is a volume force, $\bbD,\, \bbC \in \text{Lin}(\R^{d \ti d}_\mathrm{sym} ) $ the positive definite and symmetric viscosity and elasticity tensors, $\nu$ the exterior unit normal to $\pl (\Omega^+ {\cup} \GC {\cup} \Omega^-)$, and $\mathrm{R}$ is given by \begin{equation} \label{explicit-R-adh} \mathrm{R}(r) = \kappa_+ \max\{r,0\} + \kappa_- \max\{{-}r,0\} \quad \text{ with } \kappa_\pm>0. \end{equation} Hence, healing of the broken molecular links is disfavored, but not totally blocked. Giving up unidirectionality allows for a more straightforward application of our abstract results. Nonetheless, we expect that, at the price of some further technicalities our techniques could be adapted to deal with unidirectionality by means of additional estimates (like for instance in the application of $\BV$ solutions to \emph{unidirectional} damage developed in \cite{KRZ13}). The term $\gamma(z)\pl \psi(\JUMP{u})$, with $\gamma$ and $\psi$ nonnegative functions (we may think of $\gamma(z) = \max\{ z, 0\}$) and $\psi$ convex, in \eqref{PDEadhc-d} derives from the contribution $\gamma(z) \psi(\JUMP{u})$ to the surface energy, cf.\ \eqref{energy-delamination} ahead, which penalizes the constraint $z\JUMP{u} =0$ a.e.\ on $\GC$, typical of \emph{brittle} delamination models. 
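For instance, with the choice $\gamma(z)=\max\{z,0\}$ mentioned above and the quadratic choice $\psi(a)=\tfrac k2|a|^2$, $k>0$ (cf.\ the prototypical examples below), this surface contribution reads \[ \int_{\GC} \gamma(z)\,\psi(\JUMP{u}) \dd x = \frac k2 \int_{\GC} \max\{z,0\}\,|\JUMP{u}|^2 \dd x\,, \] so that, formally, large values of $k$ force $z\,\JUMP{u}\approx 0$ a.e.\ on $\GC$: the displacement jump has to be small wherever the adhesive bonding is still active.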
Indeed, to our knowledge, existence results for brittle models are available only in the case of a rate-independent evolution for $z$, cf.\ e.g.\ \cite{RoScZa09QDP, RosTho12ABDM}. In fact, \eqref{PDEadhc} is rather a model for \emph{contact with adhesion} and will be accordingly referred to in this way. Our assumptions on the constitutive functions $\gamma $, $\psi$ and $\beta$, and on the multivalued operator $\tilde\phi$ (indeed, on the mapping $z\mapsto \tilde\phi(z) -z$), will be specified in \eqref{eq:Del.Ass02General} ahead. We define the operators $\bsC,\bsD : \rmH^1(\Omega;\R^3)\to \rmH^1(\Omega;\R^3)^*$ via \[ \pairing{}{\rmH^1(\Omega)}{\bsC u }{v}: = \int_{\Omega}\bbC e(u): e(v) \dd x, \qquad \pairing{}{\rmH^1(\Omega)}{\bsD u }{v}: = \int_{\Omega}\bbD e(u): e(v) \dd x, \] while we denote by $\bsJ:\rmH^1(\Omega;\R^3)\to \rmL^4(\GC;\R^3); \; u\mapsto \JUMP{u}$ the jump operator, by $\|\bsJ\|$ its operator norm, and by $\bsJ^*$ its adjoint. We denote by $\bsA$ the Laplacian with homogeneous Neumann boundary conditions \[ \bsA: \rmH^1(\GC)\to \rmH^1(\GC)^* \qquad \pairing{}{\rmH^1(\GC)}{\bsA z}{\omega}: = \int_{\GC} \big( \nabla z \nabla \omega + z \omega \big) \dd x\,. \] In particular, we have $\|z\|^2_{\rmH^1(\GC)}= \pairing{}{\rmH^1(\GC)}{\bsA z}{z}$. Finally, we denote by $\ell_u: (0,T)\to \Spu^*$ the functional encompassing the volume and surface forces $F$ and $f$, namely \[ \pairing{}{\rmH^1(\Omega)}{\ell_u(t)}{u}: = \int_{\Omega} F(t) u \dd x +\int_{\GNeu} f(t) u \dd S\,. \] Throughout, we will assume that \begin{equation} \label{bold-force-F} \ell_u \in \rmC^1([0,T]; \rmH^1(\Omega;\R^3)^*)\,. \end{equation} Hence, system \eqref{PDEadhc} takes the form \begin{subequations} \label{eq:DelamSyst} \begin{align} \label{eq:DelamSyst.a} 0& \in \eps^\alpha \bsD \dot u + \bsC u + \bsJ^*\big(\beta(\JUMP{u}) + \gamma(z) \pl \psi(\JUMP{u}) \big) - \ell_u && \text{in } \rmH^1(\Omega;\R^3)^* \\ \label{eq:DelamSyst.b} 0&\in \pl \mathrm{R}(\dot z) + \eps \dot z + \bsA z + \pl \wh\phi(z) + \pl \gamma(z)\psi(\JUMP{u}) && \aein\, \GC \end{align} \end{subequations} almost everywhere in $(0,T)$. In \eqref{eq:DelamSyst}, $\widehat{\beta}$ is a primitive of $\beta$ and $\wh \phi$ is a primitive of the multivalued operator $z \mapsto \tilde\phi(z)-z$. \subsubsection*{\bf Structure as a (generalized) gradient system} First of all, let us specify our assumptions on the constitutive functions $\widehat\beta$, $\gamma$, $\widehat \phi$, and $\psi$: \begin{equation} \label{eq:Del.Ass02General} \left. \begin{aligned} &\psi,\,\wh\beta:\R^3\to [0,\infty) \text{ are lsc and convex with } \psi(0)=\wh\beta(0)=0,\\ &\exists\, C_\psi>0 \ \forall\, a \in \R^3\, : \quad \psi(a)\leq C_\psi (1{+}|a|^2), \\ & \wh \beta \in \rmC^1(\R^3) \text{ and } \beta = \rmD\wh\beta \text{ is globally Lipschitz}, \\ & \gamma \text{ is convex, non-decreasing and $1$-Lipschitz, with } \gamma(0)=0, \\ &\wh\phi:\R \to [0,\infty] \text{ is lsc and $(-\Lambda_\phi)$-convex for some $\Lambda_\phi>0$, with } \wh\phi(z)=\infty \text{ for }z\notin[0,1].\quad \end{aligned} \right\} \end{equation} Hence, $\pl \gamma$ and $\pl \psi$ in \eqref{eq:DelamSyst} are convex analysis subdifferentials, while $\pl \wh \phi :\R \rightrightarrows \R$ is the Fr\'echet subdifferential of $\wh \phi$.
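In the same spirit, let us record the elementary formula for the subdifferential of the $1$-homogeneous potential $\mathrm{R}$ from \eqref{explicit-R-adh}, namely \[ \pl \mathrm{R}(r) = \begin{cases} \{\kappa_+\} & \text{if } r>0, \\ {[-\kappa_-,\kappa_+]} & \text{if } r=0, \\ \{-\kappa_-\} & \text{if } r<0, \end{cases} \] so that, wherever $\dot z=0$, the flow rule \eqref{eq:DelamSyst.b} only requires the remaining terms to lie (up to a sign) in the threshold interval $[-\kappa_-,\kappa_+]$.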
To fix ideas, prototypical choices for $\widehat\beta$, $\gamma$, $\widehat \phi$, and $\psi$ would be: \begin{compactitem} \item[\emph{(i)}] $\widehat\beta$ the Yosida regularization of the indicator function of the cone $C = \{ v \in \R^3\, : \ v \cdot n^+ \leq 0 \} $ (cf.\ also Remark \ref{rmk:why-no-indicator}); \item[\emph{(ii)}] $\gamma(z) = \max\{ z,0\}$; \item[\emph{(iii)}] $\widehat \phi$ encompassing the indicator function $I_{[0,1]}$, which would ensure that $z \in [0,1]$; \item[\emph{(iv)}] $\psi (\JUMP{u}) = \tfrac k2 |\JUMP{u} |^2$ with $k>0$. \end{compactitem} Observe that \eqref{eq:DelamSyst} falls into the class of gradient systems \eqref{dne-q}, with the ambient spaces \begin{subequations} \label{setup-adh-cont} \begin{equation} \label{spaces-adh-cont} \Spu = \rmH_{\GDir}^1(\Omega;\R^3), \quad \Spz = \rmL^2(\GC), \quad \Spy = \rmL^1(\GC) \end{equation} where $\rmH_{\GDir}^1(\Omega;\R^3)$ denotes the space of $\rmH^1$-functions on $\Omega$ fulfilling a homogeneous Dirichlet boundary condition on $\GDir$. By Korn's inequality, the quadratic form associated with the operator $\bsD$ induces on $\Spu$ a norm equivalent to the $\rmH^1$-norm; hereafter, we will in fact use that \[ \|u\|_\Spu^2 := \sideset{_{}}{_{\rmH^1(\Omega)}} {\mathop{\langle \bsD u ,u \rangle}}, \quad \sideset{_{}}{_{\rmH^1(\Omega)}} {\mathop{\langle \bsC u ,u \rangle}} \geq c_{\bsC}\|u\|_\Spu^2 . \] The $1$-homogeneous dissipation potential $\calR: \Spy \to [0,\infty)$ is defined by \begin{equation} \label{diss-pot-adhc} \calR(\dot z): = \int_{\GC} \mathrm{R}(\dot z) \dd x \qquad \text{with } \mathrm{R} \text{ from \eqref{explicit-R-adh}.} \end{equation} The viscous dissipation potentials $\disv u : \Spu \to [0,\infty)$ and $\disv z : \Spz \to [0,\infty)$ are \begin{equation} \label{disv-adhc} \disv u (\dot u) :=\frac12 \sideset{_{}}{_{\rmH^1(\Omega)}} {\mathop{\langle \bsD \dot u ,\dot u \rangle}} \ \qquad \disv z (\dot z): = \int_{\GC} \tfrac12 |\dot z |^2 \dd x. \end{equation} The driving energy functional $\calE : [0,T]\ti \Spu \ti \Spz\to ({-}\infty,+\infty]$ is given by \begin{equation} \begin{aligned} \label{energy-delamination} \ene tuz : = \frac12 \pairing{}{\rmH^1(\Omega)}{\bsC u}u & - \pairing{}{\rmH^1(\Omega)}{\ell_u(t)}{u} + \frac12 \pairing{}{\rmH^1(\GC)}{\bsA z}z + \int_{\GC} \big( \widehat{\beta}(\JUMP{u}) {+} \gamma(z) \psi(\JUMP{u}) {+} \widehat{\phi}(z) \big) \dd x \\ & \quad \text{ if } z \in \rmH^1(\GC) \text{ and } \widehat{\phi}(z) \in \rmL^1(\GC), \end{aligned} \end{equation} and $\infty$ otherwise. \end{subequations} As we will see in Proposition \ref{l:comprehensive}, under the conditions on $\widehat\beta$, $\gamma$, $\widehat \phi$, and $\psi$ specified in \eqref{eq:Del.Ass02General}, $\calE$ complies with the coercivity conditions from Hyp.\ \ref{hyp:1} with the spaces \begin{equation} \label{coercivity-spaces-delam} \Spw = \Spu =\rmH_{\GDir}^1(\Omega;\R^3) \quad \text{and} \quad \Spx =\rmH^1(\GC) \Subset \Spz, \end{equation} and its Fr\'echet subdifferential $\pl_q \calE : [0,T] \ti \Spu \ti \Spz \rightrightarrows \Spu^* \ti \Spz^*$ is given by \begin{equation} \label{Frsub-adh-cont} \begin{aligned} & (\mu,\zeta) \in \pl_q \ene tuz \qquad \text{ if and only if } \\ & \! \! \! \! \! \ \begin{cases} & \! \! \! \mu = \bsC u + \bsJ^*\big( \beta(\JUMP{u}) + \gamma(z)\varrho \big) - \ell_u(t) \text{ for some selection } \GC \ni x \mapsto \varrho(x)\in \pl \psi(\JUMP{u(x)}) \\ & \! \! \!
\zeta = \bsA z +\omega \psi(\JUMP{u}) + \phi \text{ for selections } \GC \ni x \mapsto\omega(x) \in \pl \gamma(z(x)) \text{ and } \GC \ni x \mapsto\phi(x) \in \pl \widehat{\phi}(z(x)) \text{ s.t. } \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \bsA z + \phi \in \rmL^2(\GC) \end{cases} \end{aligned} \end{equation} (indeed, observe that, by the growth properties of $\gamma$ and $\psi$, the term $ \omega \psi(\JUMP{u}) $ is in $\rmL^2(\GC)$ for any selection $\omega \in \pl \gamma$). In particular, here we have used that the Fr\'echet subdifferential of the ($({-}\Lambda_\phi)$-convex) functional $\calF: \Spz \to [0,\infty]$ \begin{subequations} \label{calF-stuff} \begin{align} & \calF(z): = \left\{ \begin{array}{ll}\! \! \! \tfrac12 \pairing{}{\rmH^1(\GC)}{\bsA z}z +\int_{\GC} \wh\phi(z) \dd x & \text{ if $z\in \rmH^1(\GC)$ and $\wh\phi(z) \in \rmL^1(\GC)$}, \\ \! \! \! \infty & \text{else}, \end{array} \right. \intertext{is given by} & \pl \calF(z) = \{ \bsA z + \tilde\phi\, : \, \tilde \phi(x) \in \pl \wh \phi(x) \ \foraa x \in \GC, \ \bsA z + \tilde\phi \in \rmL^2(\GC)\}\,. \end{align} \end{subequations} We also point out for later use that $\frname q\calE$ fulfills the structure condition \eqref{it-is-product}, i.e.\ $ \frsub qtuz =\frsub utuz \ti \frsub ztuz $ for every $(t,u,z) \in [0,T]\ti \Spu \ti \Spz$. \subsubsection*{\bf Existence for the viscous system} As we will check in Proposition \ref{l:comprehensive} ahead, our general existence result, Theorem \ref{th:exist}, applies to the viscous delamination system. Hence, for every pair of initial data $(u_0,z_0) \in \rmH_{\GDir}^1(\Omega;\R^3) \ti \rmH^1(\GC)$ there exists a solution \begin{equation} \label{regularity-viscous-solutions-delam} u \in \rmH^1(0,T;\rmH_{\GDir}^1(\Omega;\R^3)) \text{ and } z\in \rmL^\infty(0,T;\rmH^1(\GC)) \cap \rmH^1(0,T;\rmL^2(\GC)), \end{equation} to the Cauchy problem for system \eqref{eq:DelamSyst}. \Subsection{The vanishing-viscosity limit} \label{ss:10.0} We will now address the vanishing-viscosity limit as $\eps\to 0^+$ of system \eqref{eq:DelamSyst}. Our main result states the convergence of (a selected family of) viscous solutions to an \emph{enhanced} $\BV$ solution to the system $\RIS$ defined by \eqref{setup-adh-cont}, in fact enjoying the additional regularity $z\in \BV ([0,T];\Spx)$ (with $\Spx = \rm\rmH^1(\GC)$). Analogously, we also obtain the existence of parametrized $\BV$ solutions, to which Theorem \ref{thm:diff-charact} applies, providing a characterization in terms of system \eqref{diff-charact-delam} ahead. In fact, we will be able to obtain solutions to the viscous system \eqref{eq:DelamSyst} enjoying estimates, uniform with respect to the viscosity parameter, suitable for the vanishing-viscosity analysis, only by performing calculations on a version of system \eqref{eq:DelamSyst} in which the functions $\widehat\beta$, $\gamma$, $\widehat \phi$, and $\psi$ are suitably smoothened, cf.\ \eqref{eq:DelAss01.b}. 
That is why Theorem \ref{thm:BV-adh-cont} below will state: \begin{itemize} \item[\emph{(i)}] the existence of \emph{qualified} viscous solutions to (the Cauchy problem for) \eqref{eq:DelamSyst}, where by `qualified' we mean enjoying estimates \eqref{qualified-estimates} below; \item[\emph{(ii)}] their convergence (up to a subsequence) to an enhanced $\BV$ solution (we mention that, since the viscous dissipation potentials from \eqref{disv-adhc} are both $2$-homogeneous, the formulas in \eqref{decomposition-M-FUNCTION} and \eqref{RJMF-p-hom} yield an explicit representation formula for the functional $\mename 0\alpha$ involved in the definition of $\BV$ solution); \item[\emph{(iii)}] the convergence of reparametrized (\emph{qualified}) viscous solutions to an enhanced $\pBV$ solution for which the differential characterization from Theorem \ref{thm:diff-charact} holds. \end{itemize} For simplicity, in Theorem \ref{thm:BV-adh-cont} we shall not consider a sequence of initial data $(u_0^\eps,z_0^\eps)_{\eps}$ but confine the statement to the case of \emph{fixed} data $(u_0,z_0)$. We will impose that $(u_0,z_0)$ fulfill the additional `compatibility condition' \eqref{eq:Del.IniCompati}. \begin{theorem} \label{thm:BV-adh-cont} Assume conditions \eqref{bold-force-F} and \eqref{eq:Del.Ass02General}. Let $(u_0,z_0) \in \Spu \times \Spx$ fulfill \begin{equation} \label{eq:Del.IniCompati} u_0\in \rmH_{\GDir}^1(\Omega;\R^3),\qquad \Delta z_0 \in \rmL^2(\GC), \quad \pl\wh\phi(z_0) \cap \rmL^2(\GC) \neq \emptyset. \end{equation} Then, there exists a family \begin{equation} \label{additional-regularity} (u_\eps,z_\eps)_\eps \subset \rmH^1(0,T;\rmH_{\GDir}^1(\Omega;\R^3)) \ti \rmH^1(0,T;\rmH^1(\GC)) \end{equation} solving the Cauchy problem for the viscous delamination system \eqref{eq:DelamSyst} with the initial data $(u_0,z_0)$, and enjoying the following estimate \begin{equation} \label{qualified-estimates} \sup_{\eps>0} \int_0^T \big\{ \|\dot u_\eps\|_{\rmH^1(\Omega)} + \|\dot z_\eps\|_{\rmH^1(\GC)}\big\} \dd t \leq C\,. \end{equation} Moreover, for any null sequence $(\eps_k)_k$ the sequence $(u_{\eps_k},z_{\eps_k})_k $ admits a (not relabeled) subsequence, and there exists a pair \[ (u,z) \in \rmB\rmV([0,T]; \rmH_{\GDir}^1(\Omega;\R^3)) \ti \BV ([0,T];\rmH^1(\GC)), \] such that \begin{enumerate} \item the following convergences hold as $k\to\infty$ \begin{equation} \label{pointwise-cvg-delam} u_{\eps_k}(t) \weakto u(t) \text{ in } \rmH_{\GDir}^1(\Omega;\R^3), \quad z_{\eps_k}(t) \weakto z(t) \text{ in } \rmH^1(\GC) \qquad \text{for all } t \in [0,T]; \end{equation} \item $(u,z)$ is an \emph{enhanced} $\BV$ solution to the delamination system $\RIS$ from \eqref{setup-adh-cont}. \end{enumerate} Finally, reparametrizing the sequence $(u_{\eps_k},z_{\eps_k})_k $ in such a way that the rescaled curves $(\sft_{\eps_k}, \sfu_{\eps_k}, \sfz_{\eps_k})_k$ enjoy estimates \eqref{condition-4-normali} and \eqref{condition-4-normali-enhn}, up to a subsequence we have convergence of $(\sft_{\eps_k}, \sfu_{\eps_k}, \sfz_{\eps_k})_k$, in the sense of \eqref{cvs-eps}, to an enhanced $\pBV$ solution $(\sft,\sfu,\sfz): [0,\mathsf{S}] \to [0,T]\ti \rmH_{\GDir}^1(\Omega;\R^3)\ti \rmH^1(\GC)$ for which the differential characterization from Theorem \ref{thm:diff-charact} holds.
Namely, there exist measurable functions $\thn u,\, \thn z: (0,\sfS)\to [0,\infty] $ satisfying for almost all $s\in (0,\sfS)$ the switching conditions \eqref{eq:SwitchCond} and the subdifferential inclusions \begin{subequations} \label{diff-charact-delam} \begin{align} \label{diff-charact-delam.a} 0& \in \thn u(s) \bsD \dot{\sfu}(s) + \bsC \sfu(s) + \bsJ^*\big(\beta(\JUMP{\sfu(s)}) + \gamma(\sfz(s)) \pl \psi(\JUMP{\sfu(s)}) \big) - \ell_u(\sft(s)) && \text{in } \rmH^1(\Omega;\R^3)^* \\ \label{diff-charact-delam.b} 0&\in \pl \mathrm{R}(\dot{\sfz}(s)) + \thn z(s) \dot{\sfz}(s) + \bsA \sfz(s) + \pl \wh\phi(\sfz(s)) + \pl \gamma(\sfz(s))\psi(\JUMP{\sfu(s)}) && \aein\, \GC \end{align} \end{subequations} (with convention \eqref{convention-recall} in the case $\thn x(s) = \infty$). \end{theorem} \begin{proof} It is sufficient to check that the rate-independent system $\RIS$ from \eqref{setup-adh-cont} complies with the assumptions of Theorems \ref{thm:exist-enh-pBV}, \ref{thm:diff-charact}, and \ref{thm:exist-nonpar-enh}, and that there exist `qualified' viscous solutions enjoying estimates \eqref{qualified-estimates}. More precisely, \begin{enumerate} \item In Proposition \ref{l:comprehensive} ahead we will check that the rate-independent delamination system $\RIS$ complies with Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{hyp:Sept19}, and \ref{h:ch-rule-param} (in fact, the parametrized chain rule \eqref{better-chain-rule-MOexpl} holds). \item We will obtain the existence of viscous solutions enjoying estimates \eqref{qualified-estimates} by working on a smoothened version of system \eqref{eq:DelamSyst}, introduced in Section \ref{su:DelamSmooth} ahead. Therein, we will obtain estimates for the solutions to the regularized viscous system \emph{uniform} with respect to the regularization parameter. Hence, with Proposition \ref{pr:Del.ViscSolImprov} in Section \ref{su:Dela.ExiApriGener} we will conclude the existence of `qualified' solutions for which \eqref{qualified-estimates} holds, and thereby conclude the proof of Theorem \ref{thm:BV-adh-cont}. \end{enumerate} \end{proof} In what follows, we will most often use the place-holders $\Spu$, $\Spz$, ... (cf.\ \eqref{spaces-adh-cont} and \eqref{coercivity-spaces-delam}) for the involved function spaces. \Subsection{Properties of the rate-independent system for delamination} \label{ss:prop-comprehensive} This section is centered around Proposition \ref{l:comprehensive} below, in which we check that the rate-independent system $\RIS$ from \eqref{setup-adh-cont} complies with the `abstract' Hypotheses from Section \ref{s:setup}. In particular, from the following result we gather that Theorem \ref{th:exist} is applicable, yielding the existence of solutions as in \eqref{regularity-viscous-solutions-delam} to the viscous delamination system. \begin{proposition} \label{l:comprehensive} Assume \eqref{bold-force-F} and \eqref{eq:Del.Ass02General}. Then, the delamination system $\RIS$ from \eqref{setup-adh-cont} fulfills Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, \ref{h:closedness}, \ref{hyp:Sept19}, and \ref{h:ch-rule-param}. \end{proposition} \begin{proof} The proof consists of three steps. \noindent \STEP{1. Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, \ref{hyp:1}, and \ref{hyp:Sept19}:} The validity of Hypotheses \ref{hyp:setup}, \ref{hyp:diss-basic}, and \ref{hyp:1} is obvious.
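For instance, for the coercivity required in Hyp.\ \ref{hyp:1} it suffices to observe that, on the effective domain of $\calE$, one has $z\in [0,1]$ a.e.\ in $\GC$, so that all the terms in the surface integral in \eqref{energy-delamination} are nonnegative by \eqref{eq:Del.Ass02General}; hence, by Young's inequality, \[ \ene tuz \geq \frac{c_{\bsC}}2 \|u\|_\Spu^2 - \|\ell_u(t)\|_{\Spu^*}\|u\|_\Spu + \frac12 \|z\|_{\rmH^1(\GC)}^2 \geq \frac{c_{\bsC}}4 \|u\|_\Spu^2 + \frac12 \|z\|_{\rmH^1(\GC)}^2 - \frac1{c_{\bsC}} \|\ell_u(t)\|_{\Spu^*}^2\,, \] which, together with \eqref{bold-force-F}, shows that the sublevels of $\calE(t,\cdot,\cdot)$ are bounded in $\Spu\ti\Spx$, uniformly with respect to $t\in[0,T]$.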
A straightforward calculation shows that the Fr\'echet subdifferential of $\calE$ is given by \eqref{Frsub-adh-cont}, so that the structure condition $\frsubq qtq = \frsubq utq \ti \frsubq ztq$ holds at every $q=(u,z) \in \Spu\times \Spz$. Therefore, by Lemma \ref{l.4.13}, Hypothesis \ref{hyp:Sept19} will be ensured by the validity of Hypothesis \ref{h:closedness}, which we now check. \noindent \STEP{2. Hypothesis \ref{h:closedness}:} Let $(t_n)_n \subset [0,T]$ and $(u_n,z_n)_n\subset \Spu \ti \Spz$ be in the conditions of Hypothesis \ref{h:closedness}, and let $(\mu_n,\zeta_n)_n$, with $\mu_n \in \frsub u{t_n}{u_n}{z_n} $ and $\zeta_n \in \frsub z{t_n}{u_n}{z_n} $, fulfill $\mu_n \weakto \mu $ in $\Spu^*$ and $\zeta_n\weakto \zeta $ in $\Spz^*$. Hence, \[ \begin{aligned} & \mu_n = \bsC u_n +\bsJ^* (\beta(\JUMP{u_n}){+}\gamma(z_n)\varrho_n) -\ell_u(t_n) \qquad \text{with } \varrho_n\in \pl \psi(\JUMP{u_n}) \text{ a.e.\ in } \GC, \\ & \zeta_n = \bsA z_n + \omega_n \psi(\JUMP{u_n}) + \phi_n \text{ for some } \omega_n\in \pl \gamma(z_n) \text{ and } \phi_n \in \pl \widehat{\phi}(z_n) \,. \end{aligned} \] We observe that, by Sobolev embeddings and trace theorems, from the convergences $u_n\weakto u $ in $\Spw$ and $z_n\weakto z$ in $\Spx $ we infer that $\JUMP{u_n}\to \JUMP{u}$ in $\rmL^{q}(\GC;\R^3)$ for all $1\leq q<4$, and $z_n\to z $ in $\rmL^p(\GC)$ for all $1\leq p<\infty$. Furthermore, since $0 \leq z_n \leq 1 $ a.e.\ on $\GC$, we even have $z_n\weaksto z$ in $\rmL^\infty(\GC)$. Since $\gamma$ is Lipschitz, we gather that $\gamma(z_n) \to \gamma(z)$ in $\rmL^p(\GC)$ for all $1\leq p<\infty$, too. By the growth properties of $\psi$, we have that the sequence $(\varrho_n)_n$ with $\varrho_n \in \pl \psi(\JUMP{u_n})$ a.e.\ in $\GC$ is bounded in $\rmL^4(\GC)$ and thus, up to a subsequence, it weakly converges in $\rmL^4(\GC)$ to some $\varrho$. By the strong-weak closedness of the graph of $\pl \psi$ (or, rather, of the maximal monotone operator that $\pl \psi: \R^3 \rightrightarrows \R^3$ induces on $\rmL^{2}(\GC)$), we have that $\varrho \in \pl \psi(\JUMP u)$ a.e.\ in $\GC$. Moreover, we find that $\gamma(z_n)\varrho_n \weakto \gamma(z) \varrho$, for instance in $\rmL^2(\GC)$. Since $\beta $ is Lipschitz, we also have $\beta(\JUMP{u_n})\to \beta(\JUMP{u})$ in $\rmL^{q}(\GC;\R^3)$ for all $ 1\leq q<4$. Also taking into account that $\ell_u \in \rmC^1([0,T];\Spu^*)$, we then conclude the weak limit $\mu$ of the sequence $(\mu_n)_n$ belongs to $\frsubq utq$. Let us now discuss the weak $\Spz$-limit $\zeta$ of the sequence $(\zeta_n)_n$. First of all, from the Lipschitz continuity of $\gamma$ we gather that the sequence $(\omega_n)_n$ is bounded in $\rmL^\infty (\GC)$. Hence, $ (\omega_n \psi(\JUMP{u_n}) )_n$ is bounded in $\rmL^2(\GC)$ and, a fortiori, we gather that also the terms $ (\bsA z_n + \phi_n)_n$ are bounded in $\rmL^2(\GC)$. By the strong-weak closedness of the graph of (the operator induced by) $\pl \gamma$ (on $\rmL^2(\GC)$), we infer that $\omega \in \pl \gamma(z)$ a.e. in $\GC$. Since $\psi$ has at most quadratic growth, from $\JUMP{u_n}\to \JUMP{u}$ in $\rmL^{q}(\GC;\R^3)$ for all $1\leq q<4$ we obtain that $\psi(\JUMP{u_n})\to \psi(\JUMP{u})$ in $\rmL^{q/2}(\GC;\R^3)$ via the dominated convergence theorem. All in all, we have that $\omega_n \psi(\JUMP{u_n}) \weakto \omega \psi(\JUMP{u}) \in \psi(\JUMP{u}) \pl \gamma(z)$ in $\rmL^2(\GC)$. 
Indeed, this weak convergence takes place in $\rmL^2(\GC)$ with arguments similar to those in the previous lines, since $\omega_n \weaksto \omega$ in $\rmL^\infty(\GC)$ and $\psi(\JUMP{u_n}) \to \psi(\JUMP{u})$ in $\rmL^{q/2}(\GC)$ for all $1\leq q<4$. In turn, also taking into account that the sequence $(\bsA z_n)_n$ is itself bounded in $\rmH^1(\GC)^*$, it is immediate to check that there exists $\phi \in \rmH^1(\GC)^*$ such that $\bsA z_n + \phi_n\weakto \bsA z + \phi$ in $\rmL^2(\GC)$. Since the functional $\calF$ from \eqref{calF-stuff} is also $({-\Lambda_\phi})$-convex, its Fr\'echet subdifferential has a strongly-weakly closed graph in $\rmL^2(\GC) \ti \rmL^2(\GC)$, and we thus infer that $\phi \in \pl \wh \phi(z) $ a.e.\ in $\GC$. All in all, we conclude that $\zeta \in \frsubq ztq$. This concludes the proof of Hypothesis \ref{h:closedness}. \par \noindent \STEP{3. Hypothesis \ref{h:ch-rule-param}:} Let us now turn to the parametrized chain rule from Hypothesis \ref{h:ch-rule-param}. Since the viscous dissipation potentials are $2$-homogeneous, the associated vanishing-viscosity contact potentials are given by \eqref{p-homo-mfb} (cf.\ Example \ref{ex:VVCP}) so that, in particular, the coercivity condition \eqref{coercivity-VVCP} holds, and Proposition \ref{prop:better-chain-rule-MOexpl} is applicable. Therefore, Hypothesis \ref{h:ch-rule-param} follows from the chain rule of Hyp.\ \ref{h:ch-rule}. The latter chain-rule property can be verified by resorting to Proposition \ref{prop:ch-ruleApp} ahead. Hence, we need to show that $\calE$ complies with condition \eqref{uniform-subdiff}, which states that the Fr\'echet subdifferential $\pl_q \calE$ can be characterized by a \emph{global} inequality akin to that defining the convex analysis subdifferential: for every $E>0$ and every energy sublevel $\subl E$, there exists an upper semicontinuous function $\varpi^E : [0,T]\ti \subl E \ti \subl E\to [0,\infty]$, with $\varpi^E(t,q,q) =0 $ for every $t \in [0,T]$ and $ q \in \subl E$, such that \begin{equation} \label{modulus-subdif-delam} \eneq t{\hat q} - \eneq tq \geq \pairing{}{\Spq}{\xi}{\hat{q}{-}q} - \varpi^E(t,q,\hat{q}) \| \hat{q}{-} q\|_{\Spq} \ \text{for all } t \in [0,T], \ q,\, \hat{q} \in \subl E \text{ and all } \xi \in \pl_q \calE(t, q)\,. \end{equation} In order to check \eqref{modulus-subdif-delam}, we will resort to a decomposition for the energy functional from \eqref{energy-delamination} as \begin{subequations} \label{decomposition-energy} \begin{align} & \ene tuz = \calE_{\mathrm{elast}}(t,u) + \calF(z) + \calE_{\mathrm{coupl}}(u,z) \intertext{with $\calF$ from \eqref{calF-stuff}, } & \calE_{\mathrm{elast}}(t,u) := \tfrac12 \pairing{}{\rmH^1(\Omega)}{\bsC u}u - \pairing{}{\rmH^1(\Omega)}{\ell_u(t)}{u}, \intertext{while, for later convenience, we encompass the surface term $\int_{\GC} \widehat{\beta}(\JUMP{u}) \dd x$ in the coupling energy} & \calE_{\mathrm{coupl}}(u,z) := \int_{\GC} \left( \widehat{\beta}(\JUMP{u}) {+} \gamma(z) \psi(\JUMP{u}) \right) \dd x \,. \end{align} \end{subequations} Now, $\calE_{\mathrm{elast}}(t,\cdot)$ is convex while $\calF$ is $(-\Lambda_\phi)$-convex. Hence, they both comply with \eqref{modulus-subdif-delam}, and it is sufficient to check its validity for $\calE_{\mathrm{coupl}}$, and indeed only for its second contribution, since $\wh\beta$ is also convex.
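To see the former claim, note that, $\calF$ being $({-}\Lambda_\phi)$-convex, every $\zeta \in \pl \calF(z)$ satisfies the global inequality \[ \calF(\hat z) - \calF(z) \geq \pairing{}{\Spz}{\zeta}{\hat z{-}z} - \frac{\Lambda_\phi}2 \|\hat z{-}z\|_{\Spz}^2 \qquad \text{for all } \hat z \in \Spz, \] so that \eqref{modulus-subdif-delam} holds for $\calF$ with $\varpi^E(t,q,\hat q):= \tfrac{\Lambda_\phi}2 \|\hat z{-}z\|_{\Spz}$, while for the convex functional $\calE_{\mathrm{elast}}(t,\cdot)$ one may simply take $\varpi^E \equiv 0$. We now turn to the coupling energy.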
Indeed, for every $\hat u, \, u \in \Spu$ and $\hat z,\, z \in \Spz$ and for all selections $\GC \ni x \mapsto \varrho(x) \in \pl \psi(\JUMP{u(x)})$ and $ \GC \ni x \mapsto \omega(x) \in \pl \gamma(z(x)) $ there holds \[ \begin{aligned} & \int_{\GC} \big( \gamma(\hat z) \psi(\JUMP{\hat u}) {-} \gamma(z) \psi(\JUMP{u}) \big) \dd x - \int_{\GC} \gamma(z) \varrho \JUMP{\hat u{-}u} \dd x - \int_{\GC}\omega\psi(\JUMP{u}) (\hat{z}{-}z) \dd x \\ & = \int_{\GC} \gamma(\hat z) \big\{ \psi(\JUMP{\hat u}){-}\psi(\JUMP u) \big\} \dd x - \int_{\GC} \gamma(z) \varrho \JUMP{\hat u{-}u} \dd x + \int_{\GC} \big\{ \gamma(\hat z){-}\gamma(z) {-} \omega (\hat{z}{-}z) \big\} \psi(\JUMP u) \dd x \\ & \stackrel{(1)}{\geq} \int_{\GC} \big(\gamma(\hat z){-} \gamma(z) \big) \big( \psi(\JUMP{\hat u}){-}\psi(\JUMP u) \big) \dd x + \int_{\GC} \gamma(z) \big\{ \psi(\JUMP{\hat u}){-}\psi(\JUMP u) {-} \varrho \JUMP{\hat u{-} u} \big\} \dd x \\ & \stackrel{(2)}{\geq} \int_{\GC} \big(\gamma(\hat z){-} \gamma(z) \big) \big( \psi(\JUMP{\hat u}){-}\psi(\JUMP u) \big) \dd x \\ & \stackrel{(3)}{\geq} - \| \hat {z}{-} z\|_{\rmL^2(\GC)} \| \psi(\JUMP{\hat u}){-}\psi(\JUMP u)\|_{\rmL^2(\GC)} \end{aligned} \] where {\footnotesize (1)} \& {\footnotesize (2)} follow from the convexity of $\gamma$ and $\psi$, respectively, whereas {\footnotesize (3)} is due to the $1$-Lipschitz continuity of $\gamma$ combined with the Cauchy-Schwarz inequality. Then, estimate \eqref{modulus-subdif-delam} follows with the function $\varpi^E(t,q,\hat q): = \| \psi(\JUMP{\hat u}){-}\psi(\JUMP u)\|_{\rmL^2(\GC)}$. We have thus checked the validity of \eqref{modulus-subdif-delam} and, a fortiori, of Hypothesis \ref{h:ch-rule-param}. This concludes the proof. \end{proof} \begin{remark}\slshape \label{rmk:why-no-indicator} The Lipschitz continuity of $\beta$ has played a key role in the proof that $\calE$ complies with the closedness condition from Hyp.\ \ref{h:closedness}. In fact, we could allow for a nonsmooth $\widehat \beta$, but with a suitable polynomial growth condition, that would still ensure that the maximal monotone operator induced by $\beta = \pl \wh\beta$ on $\rmL^2(\GC)$ is strongly-weakly closed. However, it would not be possible to check Hypothesis \ref{h:closedness} in the case in which $\beta$ is an unbounded maximal monotone operator, such as the subdifferential of an indicator function. That is why we are not in a position to encompass in our analysis the non-interpenetration constraint between the two bodies $\Omega^+$ and $\Omega^-$. \end{remark} \Subsection{A priori estimates for the smooth semilinear system} \label{su:DelamSmooth} In this section we address a version of the viscous system \eqref{eq:DelamSyst} in which the functions $\widehat\beta$, $\gamma$, $\widehat \phi$, and $\psi$, complying with \eqref{eq:Del.Ass02General}, are also smoothened. Namely, we will additionally suppose that they fulfill \begin{align} \label{eq:DelAss01.b} & \left.\begin{aligned} & \gamma,\,\wh\phi \in \rmC^2(\R;\R), \quad \psi,\,\wh\beta \in \rmC^2(\R^3;\R), \\ & \gamma'',\, \wh\phi'',\, \rmD^2\wh\beta \text{ are bounded},\quad |\rmD\psi(a)|\leq C_\psi^{(1)} \text{ for all }a\in \R^3 . \end{aligned} \right\} \end{align} These conditions will allow us to \emph{rigorously} perform, on the solutions to system \eqref{eq:DelamSyst}, calculations that will ultimately lead to bounds, uniform with respect to the viscosity parameter, suitable for our vanishing-viscosity analysis.
Such estimates will however only depend on the constants occurring in \eqref{eq:Del.Ass02General}, and not on those in \eqref{eq:DelAss01.b}. For these calculations we will crucially make use of the \emph{semilinear} structure of this regularized system and of the fact that the coupling between the displacement equation and the flow rule for the delamination parameter is weak enough to allow us to treat those equations separately. As already mentioned, for all $\eps \in (0,1)$ and all initial data $(u_0,z_0)\in \Spu\ti \Spx$ system \eqref{eq:DelamSyst} admits finite-energy solutions $(u_\eps,z_\eps)$ with the standard time regularity \eqref{regularity-viscous-solutions-delam}. We now aim to derive higher order estimates as well, and to show that these estimates are independent of $\eps$. We will make them as explicit as possible. Let us mention in advance that one crucial argument involves the interpolation between the different norms for the time derivative $\dot z$, namely \begin{equation} \label{eq:Interpol.dotz} \forall \, \dot z \in \Spx:\quad \|\dot z\|_\Spz \leq C_\text{GN} \calR(\dot z)^{1/2} \| \dot z\|_{\Spx}^{1/2}. \end{equation} Indeed, \eqref{eq:Interpol.dotz} follows by combining the lower bound $\calR(v)\geq c_R\|v\|_{\rmL^1}$ with the classical Gagliardo-Nirenberg interpolation $\|v\|_{\rmL^2}^2 \leq C \|v\|_{\rmL^1} \|v\|_{\rmH^1}$. This will allow us to exploit the $\eps$-independent dissipation estimate $\int_0^T \calR(\dot z_\eps)\dd t \leq C$. \medskip \STEP{1. Basic energy and dissipation estimates:} The simple energy-dissipation estimate stemming from the energy balance \eqref{enid.a} (cf.\ Lemma \ref{l:1}), together with $\ell_u \in \rmC^1([0,T];\Spu^*)$, implies that for all $E_0$ there exists $C^{E_0}_1>0$ such that all solutions $(u_\eps,z_\eps)$ of \eqref{eq:DelamSyst} with $\calE(0,u_\eps(0),z_\eps(0))\leq E_0$ satisfy the basic energy estimates \begin{equation} \label{eq:DelamEst01} \int_0^T \!\!\big\{ \calR (\dot z_\eps(t)) + \eps^\alpha\|\dot u_\eps(t)\|_\Spu^2 + \eps \| \dot z_\eps\|_\Spz^2\big\} \dd t \leq C^{E_0}_1 \quad \text{and} \quad \forall\, t\in [0,T]:\ \|u_\eps(t)\|_\Spu + \| z_\eps(t)\|_{\Spx} \leq C^{E_0}_1 . \end{equation} As a consequence of this a priori bound, of the fact that $\bsJ: \Spu \to \rmL^4(\GC;\R^3)$ is a bounded operator, and of upper estimates on $\psi$ via the constants $C_\psi$ and $C^{(1)}_\psi$, we find another constant $C^{E_0}_2$ such that all solutions $(u_\eps,z_\eps)$ of \eqref{eq:DelamSyst} with $\calE(0,u_\eps(0),z_\eps(0))\leq E_0$ satisfy \begin{subequations} \label{eq:Del.psi.est} \begin{align} \label{eq:Del.psi.est.a} &\| \psi(\JUMP{u_\eps(t)})\|_{\rmL^2} \leq C_\psi C^{E_0}_2, \quad \| \rmD\psi(\JUMP{u_\eps(t)})\|_{\rmL^4} \leq C_\psi C^{E_0}_2, \\ \label{eq:Del.psi.est.b} &\| \psi(\JUMP{u_\eps(t)})\|_{\rmL^4}\leq C^{(1)}_\psi C^{E_0}_2, \quad \| \rmD\psi(\JUMP{u_\eps(t)})\|_{\rmL^\infty}\leq C^{(1)}_\psi C^{E_0}_2. \medskip \end{align} \end{subequations} Estimate \eqref{eq:Del.psi.est.b} will in fact only be used for gaining enhanced regularity of the viscous solutions $(u_\eps,z_\eps)$, and not for the vanishing-viscosity analysis. \STEP{2. Estimate for $\dot u_\eps$:} Because of the smoothness of $\wh\beta$ and $\psi$, the displacement equation \eqref{eq:DelamSyst.a} for $u_\eps$ is a semilinear equation with a smooth nonlinearity, if we consider $z_\eps \in \rmH^1(0,T;\Spz)$ as a \emph{given} datum.
Thus, we can use the classical technique of difference quotients to show that $u_\eps \in \rm\rmH^2(0,T;\Spu)$ provided that $\dot u_\eps(0)= \eps^{-\alpha}\bsD^{-1}\big(\bsC u(0)+\bsJ^*(\cdots)-\ell_u(0)\big) \in \Spu$. Hence, it is possible to differentiate \eqref{eq:DelamSyst.a} with respect to time, which yields \begin{equation} \label{eq:DelamEqn.dotu} 0=\eps^\alpha \bsD \ddot u_\eps + \bsC \dot u_\eps + \bsJ^*\Big(\rmD^2 \wh\beta (\JUMP{u_\eps})\JUMP{\dot u_\eps} + \gamma(z_\eps) \rmD^2\psi (\JUMP{u_\eps})\JUMP{\dot u_\eps} + \gamma'(z_\eps)\dot z_\eps \rmD\psi(\JUMP{u_\eps})\Big) - \dot \ell_u(t). \end{equation} We can test \eqref{eq:DelamEqn.dotu} by $\dot u_\eps \in \rm\rmH^1(0,T;\Spu)$ and obtain \begin{align*} 0&=\frac{\eps^\alpha}2 \,\frac{\rmd}{\rmd t}\langle \bsD \dot u_\eps, \dot u_\eps\rangle_\Spu + \langle \bsC \dot u_\eps,\dot u_\eps\rangle_\Spu + \langle \rmD^2 \wh\beta (\JUMP{u_\eps})\JUMP{\dot u_\eps} {+} \gamma(z_\eps) \rmD^2\psi (\JUMP{u_\eps})\JUMP{\dot u_\eps} , \JUMP{\dot u_\eps}\rangle_\Spz\\ &\quad -\langle \dot \ell_u,\dot u_\eps\rangle_\Spu - \langle \gamma'(z_\eps)\dot z_\eps \rmD\psi(\JUMP{u_\eps}) , \JUMP{\dot u_\eps}\rangle_\Spz \end{align*} Here the last duality product in the first line is nonnegative, because $a\mapsto \wh\beta(a) + \gamma(z)\psi(a)$ is convex. The last duality product can be estimated using \eqref{eq:Del.psi.est.a}. Defining $\bftheta^\eps_\Spu$, $\bftheta^\eps_\Spz$, and $\lambda_{\Spu^*}$ via \[ \bftheta^\eps_\Spu (t)^2:=\langle \bsD \dot u_\eps(t),\dot u_\eps(t)\rangle_\Spu \quad \text{and} \quad \bftheta^\eps_\Spz (t)^2:=\| \dot z_\eps(t)\|_{\Spz}^2 , \quad \text{and } \lambda_{\Spu^*}(t) = \|\dot \ell_u(t)\|_{\Spu^*}, \] we have established the estimate \[ \frac{\eps^\alpha}2\,\frac{\rmd}{\rmd t}(\bftheta^\eps_\Spu)^2 + c_{\bsC} (\bftheta^\eps_\Spu)^2 \leq \lambda_{\Spu^*} \bftheta^\eps_\Spu + 1 \, C_\psi C^{E_0}_2 C_{\rmH^1,\rmL^4} \|\bsJ\| \bftheta^\eps_\Spz \bftheta^\eps_\Spu \] where we have also used that $\gamma$ is $1$-Lipschitz continuous, and $ C_{\rm\rmH^1,\rmL^4} $ denotes the constant associated with the continuous embedding $\Spu \subset \rmL^4(\GC;\R^3)$. Using $\frac{\rmd}{\rmd t}(\bftheta^\eps_\Spu)^2= 2 \bftheta^\eps_\Spu \,\dot\bftheta^\eps_\Spu$ we can divide by $ \bftheta^\eps_\Spu\geq 0$ and obtain \begin{equation} \label{eq:Delam.Est02} \eps^\alpha \dot\bftheta^\eps_\Spu + c_{\bsC}\bftheta^\eps_\Spu \leq \lambda_{\Spu^*} + C_\psi C^{E_0}_2 C_{\rm\rmH^1,\rmL^4} \|\bsJ\|\, \bftheta^\eps_\Spz.\medskip \end{equation} Let us mention that the above estimate could be rigorously obtained by replacing $ \bftheta^\eps_\Spu$ by $\sqrt{ (\bftheta^\eps_\Spu)^2 +\delta}$, which satisfies the same estimate, and then letting $\delta \down 0$, cf.\ \cite[Sec.\,4.4]{Miel11DEMF}. \STEP{3. Uniqueness and higher regularity of $\dot z_\eps$:} We first observe that \emph{given} $u_\eps \in \rm\rmH^1([0,T];\Spu)$ and $z_0$ there is a unique solution $z_\eps$ for \eqref{eq:DelamSyst.b}. Indeed, assuming that $z_1$ and $z_2$ are solutions (with $\varrho_j \in \pl\rmR(\dot z_j))$ we set $w=z_1{-}z_2$ and test the difference of the two equations by $\dot w=\dot z_1{-}\dot z_2$, which yields \begin{align} \label{eq:Del.Uniquen} 0=& \pairing{}{\Spz}{ \varrho_1{-}\varrho_2}{\dot z_1{-}\dot z_2} +\eps\|\dot w\|_\Spz^2 + \frac12\,\frac\rmd{\rmd t} \pairing{}{\Spx}{\bsA w}{w} + \pairing{}{\Spz}{G(u_\eps,z_1)-G(u_\eps,z_2)}{\dot w}, \end{align} where we have set $G(u,z)=\wh\phi'(z) + \gamma'(z)\psi(\JUMP u)$. 
By our strengthened assumptions \eqref{eq:DelAss01.b} and Gagliardo-Nirenberg interpolation we have \begin{align*} \| G(u_\eps,z_1)-G(u_\eps,z_2)\|_{\Spz^*} &\leq \|\wh\phi''\|_\infty \| z_1{-}z_2\|_\Spz + \|\gamma'(z_1){-} \gamma'(z_2)\|_{\rmL^4}\|\psi(\JUMP u_\eps) \|_{\rmL^4} \\ & \leq \big( \|\wh\phi''\|_\infty \| z_1{-}z_2\|_\Spz + \| \gamma''\|_{\infty} \|z_1{-}z_2\|_{\rmL^4} C^{(1)}_\psi C^{E_0}_2 \big)\leq C_G \|w\|_\Spz^{1/2} \| w\|_{\Spx}^{1/2}. \end{align*} By using the monotonicity of $\pl\rmR$, the first term in \eqref{eq:Del.Uniquen} is nonnegative. Using $\|w\|_{\Spx}^2 =\pairing{}{\rmH^1(\GC)}{\bsA w}{w} $ we obtain \[ \frac12\,\frac\rmd{\rmd t} \|w\|_{\Spx}^2 + \eps\|\dot w\|_\Spz^2 \leq C_G \|w\|_\Spz^{1/2} \| w\|_{\Spx}^{1/2}\|\dot w\|_\Spz \leq \frac{C_G^2}{4\eps} \|w\|_\Spz\| w\|_{\Spx} + \eps\|\dot w\|_\Spz^2. \] Canceling the terms $\eps\|\dot w\|_\Spz^2$ and using $\|w\|_\Spz\leq \| w\|_{\Spx}$ provides the estimate \begin{equation} \label{constant-CF} \| z_1(t){-}z_2(t)\|_{\Spx} \leq \rme^{{C_G}^2(t-s)/(4\eps)} \| z_1(s){-}z_2(s)\|_{\Spx} \quad \text{ for } 0\leq s\leq t \leq T. \end{equation} We emphasize that this uniqueness result is special and relies strongly on the semilinear structure of the flow rule for $z$ under the strengthened assumption \eqref{eq:DelAss01.b}. It is indeed thanks to \eqref{eq:DelAss01.b} that $G(u,\cdot):\Spx \to \Spz^*$ is globally Lipschitz, and in fact the constant $C_G$ in \eqref{constant-CF} does depend on $ C^{(1)}_\psi $. This uniqueness is central to derive higher regularity as it is now possible to use suitable regularizations such as Galerkin approximations or replacing the nonsmooth function $\rmR$ by a smoothed version $\rmR_\delta$. We do not go into detail here, but refer to \cite{Mielke-Zelik} and \cite[Sec.\,4.4]{Miel11DEMF}. In particular, our problem fits exactly into the abstract setting of \cite[Sec.\,3]{Mielke-Zelik} with $H=\Spz=\rmL^2(\GC)$, $\calB=\bsA$, and $\Phi(t,z)= \int_\Omega \big(\wh\phi(z)+ \gamma(z) \psi(\JUMP{u(t)}\big) \dd x$. Thus, under the additional condition $\bsA z_0\in \Spz$ (or $z_0\in \rm\rmH^2(\GC)$), the unique solution $z_\eps$ with $z_\eps(0)=z_0$ satisfies the following higher regularity properties: \begin{equation} \label{eq:Del.HighRegul} \dot z_\eps \in \rmL^\infty(0,T;\Spx) \quad \text{and} \quad \sqrt {t\,}\, \ddot z_\eps \in \rmL^2(0,T;\Spz). \end{equation} Of course, at this stage we have no control over the dependence on $\eps$ of the corresponding norms.\medskip \STEP{4. Identities not involving $\rmR$:} Surprisingly, there are two identities for the solution $z_\eps$ that are completely independent of $\rmR$, i.e.\ they look like energy estimates for a semilinear parabolic problem: \begin{subequations} \label{eq:Dela.Ident} \begin{align} \label{eq:Dela.Ident.B} \frac\eps2\,\frac\rmd{\rmd t} \| \dot z_\eps\|_\Spz^2 + \| \dot z_\eps\|_{\Spx}^2 + \pairing{}{\Spz}{\rmD^2_z \Phi(u_\eps,z_\eps) \dot z_\eps}{\dot z_\eps} + \pairing{}{\Spz}{ \rmD_z\rmD_u \Phi(u_\eps,z_\eps)\dot u_\eps}{\dot z_\eps} &=0, \\ \label{eq:Dela.Ident.C} \eps \| \ddot z_\eps\|_\Spz^2 + \frac12\,\frac\rmd{\rmd t} \| \dot z_\eps\|_{\Spx}^2 + \pairing{}{\Spz}{\rmD^2_z \Phi(u_\eps,z_\eps) \dot z_\eps}{\ddot z_\eps} + \pairing{}{\Spz}{\rmD_z\rmD_u \Phi(u_\eps,z_\eps)\dot u_\eps}{\ddot z_\eps} &\leq 0. \end{align} \end{subequations} We refer to \cite[Eqn.\,(95) and Lem.\,4.16]{Miel11DEMF} for a rigorous derivation based on the smoothness established in \eqref{eq:Del.HighRegul}. 
Relations \eqref{eq:Dela.Ident} can be formally derived from equation \eqref{eq:DelamSyst.b} by forgetting the nonsmooth term $\pl\rmR$, then differentiating the whole equation with respect to $t$, and finally testing with $\dot z_\eps$ or $\ddot z_\eps$ respectively. Indeed, \eqref{eq:Dela.Ident.C} will not be used below any more, but its relevance is obvious by comparison with \eqref{eq:Del.Uniquen} and for deriving the ($\eps$-dependent) a priori estimate for $\sqrt{t\,}\,\ddot z_\eps$ (via Galerkin approximations). It is the identity \eqref{eq:Dela.Ident.B} that will be crucial for deriving $\eps$-independent a priori estimates. Its origin can be formally understood by looking at general smooth $p$-homogeneous dissipation potentials $\bfPsi$ (i.e.\ fulfilling $\bfPsi(\gamma v)=\gamma^p\bfPsi(v)$ for all $v$ and $\gamma>0$). Then, Euler's formula gives $\langle \rmD \bfPsi(v),v\rangle = p \bfPsi(v) $, and we find the identity \[ \big\langle \frac{\rmd}{\rmd t} \big( \rmD\bfPsi(\dot z)\big),\dot z \big\rangle =\rmD^2\bfPsi(\dot z)[\ddot z,\dot z]= \frac{\rmd}{\rmd t}\big( \langle \rmD\bfPsi(\dot z), \dot z\rangle- \bfPsi(\dot z)\big) = (p{-} 1)\, \frac{\rmd}{\rmd t}\bfPsi(\dot z) . \] The quadratic case $p=2$ was applied above several times. Of course, in the case $p=1$ the potential $\calR$ is nonsmooth. Hence, the proof in \cite[Lem.\,4.16]{Miel11DEMF} is different and uses simple arguments based on the characterization of $\pl \calR$ in the $1$-homogeneous case.\medskip \STEP{5. $\rmL^1$ estimates for $\bftheta^\eps_\Spu$, $\bftheta^\eps_\Spz$, and $\bftheta^\eps_{\Spx}$:} In \eqref{eq:Dela.Ident.B} the coupling term $ \langle \rmD_z\rmD_u \Phi(u_\eps,z_\eps)\dot u_\eps, \dot z_\eps\rangle$ can be estimated via the weaker assumption \eqref{eq:Del.Ass02General}, namely \[ \begin{aligned} \pairing{}{\Spz} {\rmD_z\rmD_u \Phi(u_\eps,z_\eps)\dot u_\eps}{\dot z_\eps} & \leq 1 \| \dot z_\eps\|_\Spz \| \rmD\psi(\JUMP{u_\eps})\|_{\rmL^4} \| \JUMP{\dot u_\eps}\|_{\rmL^4} \\ & \leq C_3 \, \bftheta^\eps_\Spz(t) \,\bftheta^\eps_\Spu(t) \text{ with } C_3:=C_\psi C^{E_0}_2 C_{\rmH^1,\rmL^4} \|\bsJ\|\,, \end{aligned} \] where we exploited the $1$-Lipschitz continuity of $\gamma$ and \eqref{eq:Del.psi.est.a}. Introducing the short-hand notation $\bftheta^\eps_{\Spx}$ via $(\bftheta^\eps_{\Spx}(t))^2=\|\dot z_\eps(t)\|^2_{\Spx} = \pairing{}{\rmH^1(\GC)}{\bsA \dot z_\eps(t)}{\dot z_\eps(t)}$ and exploiting the $({-}\Lambda_\phi)$-convexity of $\wh\phi$ and the convexity of $\gamma$, identity \eqref{eq:Dela.Ident.B} yields \[ \eps \bftheta^\eps_\Spz \dot \bftheta^\eps_\Spz + \big(\bftheta^\eps_{\Spx}\big)^2 \leq \Lambda_\phi \big(\bftheta^\eps_\Spz\big)^2 + C_3 \, \bftheta^\eps_\Spz \,\bftheta^\eps_\Spu\,. \] For the first term on the right-hand side we can now exploit the interpolation \eqref{eq:Interpol.dotz} and after division by $\bftheta^\eps_\Spz\geq 0$ (recall $\bftheta^\eps_\Spz \leq \bftheta^\eps_{\Spx}$) we arrive, together with \eqref{eq:Delam.Est02}, at the differential estimates \begin{subequations} \label{eq:Del.theta.syst} \begin{align} \label{eq:Del.theta.syst.a} \eps^\alpha \dot\bftheta^\eps_\Spu + c_{\bsC}\bftheta^\eps_\Spu &\leq \lambda_{\Spu^*} +C_\text{GN}C_3\, \big(\calR(\dot z_\eps) \bftheta^\eps_{\Spx}\big)^{1/2}, \\ \label{eq:Del.theta.syst.b} \eps \dot \bftheta^\eps_\Spz + \bftheta^\eps_{\Spx} &\leq \Lambda_\phi C_\text{GN} \calR(\dot z_\eps) +C_3 \, \bftheta^\eps_\Spu\,.
\end{align} \end{subequations} We emphasize that all the appearing coefficients, except for the leading factors $\eps^\alpha$ and $\eps$, are independent of $\eps \in (0,1)$ and indeed depend only on $C_\psi$. From the first equation we obtain via the variation-of-constants formula (or Gr\"onwall's lemma) the estimate \[ \bftheta^\eps_\Spu(t)\leq K_\eps(t) \eps^\alpha\bftheta^\eps_\Spu(0) + \!\int_0^t\!\! K_\eps(t{-}s)\big( \lambda_{\Spu^*}(s) {+} C_\text{GN}C_3 \big(\calR(\dot z_\eps(s)) \bftheta^\eps_{\Spx}(s)\big)^{1/2} \big)\dd s \ \text{ with } K_\eps(t)=\frac{\rme^{-c_{\bsC} t/\eps^\alpha}}{\eps^\alpha}. \] Because of $\| K_\eps\|_{\rmL^1} = \int_0^\infty K_\eps(t)\dd t =1/c_{\bsC}$ the $\rmL^1$-convolution estimate leads to \[ I_\Spu:=\int_0^T \!\!\bftheta^\eps_{\Spu}(t) \dd t \leq \frac1{c_{\bsC}} \Big(\eps^\alpha \bftheta^\eps_\Spu(0) + \int_0^T\!\! \lambda_{\Spu^*}(t)\dd t + C_\text{GN}C_3 \int_0^T\!\!\big(\calR(\dot z_\eps(t)) \bftheta^\eps_{\Spx}(t)\big)^{1/2} \dd t \Big) . \] Applying the Cauchy-Schwarz inequality to the last integral and integrating \eqref{eq:Del.theta.syst.b} over $[0,T]$ we obtain the estimates \begin{align*} I_\Spu& \leq \frac1{c_{\bsC}} \Big(\eps^\alpha \bftheta^\eps_\Spu(0) + \int_0^T\!\! \lambda_{\Spu^*}(t)\dd t + C_\text{GN}C_3 \, I_R^{1/2} I_{\Spx}^{1/2} \Big) , \\ I_{\Spx}&:=\int_0^T \!\!\bftheta^\eps_{\Spx}(t) \dd t \leq \eps \bftheta^\eps_\Spz(0) + \Lambda_\phi C_\text{GN} I_R + C_3 I_\Spu, \quad \text{where } I_R:=\int_0^T\!\! \calR(\dot z_\eps(t)) \dd t. \end{align*} From this it is easy to show that there exists a constant $C_*$, which only depends on $C_3 =C_\psi C^{E_0}_2 C_{\rmH^1,\rmL^4} \|\bsJ\|$, $c_{\bsC}$, $C_\text{GN}$, and $\Lambda_\phi$, such that $I_\Spu{+}I_{\Spx}$ can be estimated by $C_*\big(\eps^\alpha \bftheta^\eps_\Spu(0) + \eps \bftheta^\eps_\Spz(0) +\int_0^T\lambda_{\Spu^*}\dd t +I_R\big) $. We have thus proved the following result. \begin{lemma}[Rate-independent a priori estimate in the semilinear case] \label{le:ApriSemiLinCase} Assume \eqref{bold-force-F} and \eqref{eq:Del.Ass02General}. Additionally, let $\widehat\beta$, $\gamma$, $\widehat\phi$, and $\psi$ satisfy \eqref{eq:DelAss01.b} and let the initial data $(u_0,z_0) \in \Spu \times \Spx$ comply with \eqref{eq:Del.IniCompati}. Then there exists a constant $C_*>0$, only depending on the initial data and on the constants $\Lambda_\phi$ and $C_\psi$ from \eqref{eq:Del.Ass02General}, such that the unique solution $(u_\eps,z_\eps)$ of \eqref{eq:DelamSyst} satisfies the a priori estimate \begin{equation} \label{eq:ApriSemLinCase} \int_0^T\!\Big( \| \dot u_\eps\|_\Spu+ \| \dot z_\eps\|_{\Spx} \Big) \dd t \leq C_*\Big( \eps^\alpha\| \dot u_\eps(0)\|_\Spu + \eps \| \dot z_\eps(0)\|_{\Spz} + \int_0^T \!\!\big( \| \dot \ell_u\|_{\Spu^*} + \calR(\dot z_\eps)\big) \dd t \Big) . \end{equation} \end{lemma} \Subsection{Existence and a priori estimates in the general case} \label{su:Dela.ExiApriGener} We now return to the setup of Sections \ref{ss:10.-1} and \ref{ss:10.0}, in which the constitutive functions $\wh \beta$, $\gamma$, $\wh \phi$, and $\psi$ only comply with \eqref{eq:Del.Ass02General}. We exhibit approximations of $\wh \beta$, $\gamma$, $\wh \phi$, and $\psi$ that also satisfy \eqref{eq:DelAss01.b}. For this, we will resort to the following general construction.
\paragraph{\bf Smoothening the Yosida approximation} Following, e.g., the lines of \cite[Sec.\,3]{Gilardi-Rocca}, for a given convex function $\widehat{\chi}: \R^d \to [0,\infty]$ with subdifferential $\chi= \pl \wh\chi: \R^d \rightrightarrows \R^d$, and for a fixed $\delta\in (0,1)$, we define \[ \chi^\delta : = \chi_\delta^{\mathrm{Y}} \star \eta_\delta \] where $ \chi_\delta^{\mathrm{Y}} $ is the Yosida regularization of the maximal monotone operator $\chi$ (we refer to, e.g., \cite{Brez73OMMS}) and \begin{equation} \label{convol-kernel} \eta_\delta(x): = \tfrac1{\delta^{2}} \eta \left( \tfrac x{\delta^2}\right) \qquad \text{with } \left\{ \begin{array}{ll} \eta \in \rmC^\infty(\R^d), \\ \|\eta\|_{1} =1, \\ \mathrm{supp}(\eta) \subset B_1(0). \end{array} \right. \end{equation} Thus, $\chi^\delta \in \rmC^\infty(\R^d)$ and it has been shown in \cite{Gilardi-Rocca} that \begin{subequations} \label{properties-delta-approx} \begin{equation} \label{prop-delta-1} \|\rmD\chi^\delta\|_{\infty} \leq \frac1\delta, \qquad |\chi^\delta(x){-} \chi_\delta^{\mathrm{Y}}(x)| \leq \delta \text{ for all } x \in \R^d\,. \end{equation} Taking into account the properties of the Yosida regularization, we deduce that \begin{equation} \label{prop-delta-1-bis} |\chi^\delta(x)| \leq |\chi^o(x)| +\delta \qquad \text{ with } |\chi^o(x)| = \inf\{|y|\, : \, y \in \chi(x) \}\,. \end{equation} Furthermore, $\chi^\delta $ admits a convex potential $\widehat\chi^\delta$ satisfying, as a consequence of \eqref{prop-delta-1} (below $\wh\chi_\delta^{\mathrm{Y}}$ denotes the Yosida approximation of $\wh \chi$): \begin{equation} \label{prop-delta-2} -\delta |x| \leq \wh{\chi}_\delta^{\mathrm{Y}} (x) -\delta |x| \leq \widehat{\chi}^\delta(x) \leq \wh\chi_\delta^{\mathrm{Y}} (x) +\delta |x| \leq \widehat\chi(x)+\delta|x| \ \text{ and } \ \widehat\chi^\delta(x) \to \widehat\chi(x) \qquad \text{for all } x \in \R^d\,. \end{equation} Finally, the following analogue of Minty's trick holds: given $O \subset \R^m$ and $(v_\delta)_\delta,\, v,\, \eta \in \rmL^2 (O;\R^d)$ such that $v_\delta\weakto v$ and $\chi^\delta(v_\delta) \weakto \eta $ in $ \rmL^2 (O;\R^d)$, there holds \begin{equation} \label{prop-delta-3} \limsup_{\delta\to 0^+} \int_O \chi^\delta(v_\delta) \cdot v_\delta \dd x \leq \int_O \eta \cdot v \quad \Longrightarrow \quad \eta \in \pl \widehat{\chi}(v) \text{ a.e.\ in } O. \end{equation} \end{subequations} We apply this construction to $\gamma$, obtaining a smooth approximation $\gamma^\delta$. The definition of $\wh \beta^\delta$ clearly simplifies, since we have already required that $\wh\beta \in \rmC^1(\R^3)$ with $\beta$ Lipschitz. As for $\wh\phi$, we define \[ \phi^\delta:\R \to \R \qquad \phi^\delta(z): = f^\delta(z) - \frac{\Lambda_\phi}{2}z^2 \] with $f^\delta$ the smoothened Yosida approximation of the convex function $z\mapsto f(z)= \wh\phi(z) +\frac{\Lambda_\phi}{2}z^2 $. It follows from \eqref{prop-delta-1} that $\wh \beta^\delta$, $\gamma^\delta$ and $ \phi^\delta$ comply with \eqref{eq:DelAss01.b}. \paragraph{\bf The construction of $\psi^\delta$.} In smoothening $\psi$ we also have to take care of the linear growth constraint encompassed in \eqref{eq:DelAss01.b}. Hence, we construct $\psi^\delta$ in two steps: \noindent \STEP{1.
Inf-convolution} We define $\psi_\delta^{\mathrm{ic}}: \R^3 \to [0,\infty)$ via inf-convolution with the smooth function $h:\R^3 \to [0,\infty)$, $h(a): = \sqrt{1{+}|a|^2}-1$ by setting \begin{equation} \label{inf-convol-psi} \psi_\delta^{\mathrm{ic}}(a): = \inf_{x\in \R^3} \left(\frac1\delta h(x{-}a)+\psi(x) \right)\,. \end{equation} It turns out that $\psi_\delta^{\mathrm{ic}}$ is convex, of class $\rmC^1$, and since $h(0)=0$ we have that \begin{subequations} \begin{equation} \label{bound-from-above} \psi_\delta^{\mathrm{ic}}(a)\leq \psi(a) \qquad \text{for all } a \in \R^3. \end{equation} Since $h$ is even, we also have $\psi_\delta^{\mathrm{ic}}(a) = \inf_{x\in \R^3} \{ \tfrac1\delta h(x)+\psi(a{-}x)\}$. Hence, recalling that $\psi(0)=0$ we find that \begin{equation} \label{linear-growth-ic} \psi_\delta^{\mathrm{ic}}(a)\leq \frac1\delta h(a) \qquad \text{for all } a \in \R^3, \end{equation} so that, in particular, $\psi_\delta^{\mathrm{ic}}$ has linear growth. Finally, let $a_\delta \in \mathop{\mathrm{Argmin}}\limits_{x\in \R^3} {\{\tfrac1\delta h(x{-}a)+\psi(x)\} }$. Then, $\tfrac1\delta h(a_\delta{-}a) \leq \psi_\delta^{\mathrm{ic}} \leq \psi(a)$, so that $\lim_{\delta \to 0^+}h(a_\delta{-}a) =0$, hence $|a_\delta{-}a| = \sqrt{(h(a_\delta{-}a){+}1)^2{-}1} \longrightarrow 0$ as $\delta \to 0^+$. All in all, we conclude that \begin{equation} \label{bound-from-below} \psi_\delta^{\mathrm{ic}}(a) = \frac1{\delta} h(a_\delta{-}a) +\psi(a_\delta) \geq \psi(a_\delta) \qquad \text{with } a_\delta \to a \text{ as } \delta \to 0^+\,. \end{equation} \end{subequations} \STEP{2. Smoothening} We then define $\psi^\delta \in \rmC^\infty(\R^3; \R)$ via \begin{equation} \label{final-psi-delta} \psi^\delta: = \psi_\delta^{\mathrm{ic}} \star \eta_\delta \qquad \text{with $\eta_\delta$ from \eqref{convol-kernel}.} \end{equation} Clearly, $\psi^\delta$ is also convex. Combining \eqref{prop-delta-2} and \eqref{bound-from-above}, \eqref{linear-growth-ic}, and \eqref{bound-from-below} we gather that \begin{subequations} \label{psi-delta-properties} \begin{equation} \label{psi-delta-also-linear} - \delta |a| \leq \psi(a_\delta) - \delta |a| \leq \psi^\delta(a) \leq \min\big\{ \frac1\delta h(a),\psi(a) \big \} + \delta|a| \qquad \text{with } a_\delta \to a \text{ as } \delta \to 0^+\,. \end{equation} Thus, $\psi^\delta$ has also linear growth. Taking into account that it is convex, from \eqref{psi-delta-also-linear} we easily deduce that \begin{equation} \label{linear-bound-der-psi-delta} |\rmD \psi^\delta(a)| \leq |\pl \psi^\circ(a)| + \delta \qquad \text{for all } a \in \R^3, \end{equation} (where we have again used the notation $ |\pl \psi^\circ(a)| = \inf\{ |\eta|\, : \ \eta \in \pl \psi(a)\}$. \end{subequations} Finally, \begin{equation} \label{converg-psi-delta} \lim_{\delta \to 0^+} \psi^\delta(a) = \psi(a) \qquad \text{for all } a \in \R^3\,. 
\end{equation} The delamination system \eqref{eq:DelamSyst} featuring $\wh \beta^\delta$, $\gamma^\delta$, $\wh \phi^\delta$ and $\psi^\delta$ obviously has a gradient structure in the ambient spaces \eqref{spaces-adh-cont}, with the dissipation potentials from \eqref{diss-pot-adhc} and \eqref{disv-adhc}, and with the driving energy (cf.\ \eqref{decomposition-energy}) \begin{subequations} \begin{align} & \calE^\delta(t,u,z):= \calE_{\mathrm{elast}}(t,u) + \calF^\delta(z) + \calE_{\mathrm{coupl}}^\delta(u,z) \intertext{with $ \calE_{\mathrm{elast}}$ from \eqref{decomposition-energy}, and} & \calF^\delta(z) : = \frac12\pairing{}{\rmH^1(\GC)}{\bsA z}{z} +\int_{\GC} \wh\phi^\delta(z) \dd x \quad \text{if } z \in \rmH^1(\GC), \text{ and $\infty$ else}, \\ & \calE_{\mathrm{coupl}}^\delta(u,z) : = \int_{\GC} \big(\wh\beta^\delta(\JUMP u)+\gamma^\delta(z) \psi^\delta(\JUMP u)\big) \dd x \,. \end{align} \end{subequations} The functionals $\calE^\delta$ indeed Mosco converge, as $\delta \to 0^+$ and with respect to the topology of $\Spu\ti\Spz$, to the energy functional $\calE$ from \eqref{energy-delamination}. We will pass to the limit, as $\delta\to 0^+$, in the corresponding energy-dissipation balance \eqref{enid.a} to prove that the solutions $(u^\eps_\delta,z^\eps_\delta)_\delta$ to the regularized delamination system converge to a solution of the original system \eqref{eq:DelamSyst}, satisfying the basic energy estimate \eqref{eq:DelamEst01} as well as the rate-independent a priori estimate \eqref{eq:ApriSemLinCase}. \begin{proposition}[Existence of viscous solutions with improved estimates] \label{pr:Del.ViscSolImprov} Under assumptions \eqref{eq:Del.Ass02General} for $\wh\beta$, $\psi$, $\gamma$, and $\wh\phi$ and the compatibility conditions \eqref{eq:Del.IniCompati} on the initial data $(u_0,z_0)$, there exists a constant $C_*>0$ such that for all $\eps>0$ there exists a solution $(u_\eps,z_\eps) \in \rmH^1(0,T;\Spu)\ti \rmH^1(0,T;\Spx)$ satisfying the energy estimate \eqref{eq:DelamEst01} with $C_1^{E_0}=C_*$, as well as the improved estimate \[ \int_0^T \big( \|\dot u_\eps\|_\Spu + \|\dot z_\eps\|_{\Spx}\big) \dd t \leq C_*. \] \end{proposition} \begin{proof} Let $(\delta_k)_k$ be a null sequence and, for $\eps>0$ fixed, let $(q^\eps_{\delta_k})_k$ be the corresponding sequence of solutions to the regularized version of system \eqref{eq:DelamSyst}; from now on, we will simply write $(q_k)_k$. Our starting point is the energy-dissipation balance \begin{align} \label{enid.a-delta} & \calE^{\dk}(t,q_k(t)) + \int_s^t \Big( \disve u{\eps^\alpha} (\eps^\alpha u_k'(r)) + \calR(z_k'(r)) + \disve z \eps (\eps\,z_k'(r)) \Big) \dd r \\ \nonumber &\quad + \int_s^t \Big( \frac1{\eps^\alpha} \disv u^*({-} \mu_k (r)) + \frac1\eps \conj z({-}\zeta_k(r))\Big) \dd r = \calE^{\dk} (s,q_k(s)) + \int_s^t\pl_t \calE^{\dk} (r,q_k(r)) \dd r \text{ for all $[s,t]\subset [0,T]$} \end{align} with \[ \begin{aligned} & \mu_k(t) = \bsC u_k(t) + \bsJ^*\big(\beta^{\dk}(\JUMP{u_k(t)})+ \gamma^{\dk}(z_k(t))\rmD \psi^{\dk} (\JUMP{u_k(t)}) \big) - \ell_u(t), \\ & \zeta_k(t) = \bsA z_k(t) +(\gamma^{\dk})'(z_k(t) ) \psi^{\dk}(\JUMP{u_k(t) }) + \phi^{\dk}(z_k(t))\,. \end{aligned} \] Relying on the energy estimate \eqref{eq:DelamEst01} and on well known compactness results, we infer that there exists $q_\eps = (u_\eps,z_\eps) $ such that, along a not relabeled subsequence, \begin{equation} \label{ptwise-q} q_k \weakto q_\eps \text{ in } \rmH^1(0,T;\Spu\ti \Spz) \quad \text{ and } \quad q_k (t) \weakto q_\eps(t) \text{ in } \Spu\ti \Spx \text{ for all } t \in [0,T]\,.
\end{equation} It also follows from estimate \eqref{est-quoted.a} in Lemma \ref{l:1} that there exist $\mu_\eps$ and $\zeta_\eps$ such that, up to a further subsequence, \[ \mu_k \weakto \mu_\eps \text{ in } \rmL^2(0,T;\Spu^*) \quad \text{ and } \quad \zeta_k \weakto \zeta_\eps \text{ in } \rmL^2(0,T;\Spz^*)\,. \] In order to identify the weak limit $\zeta_\eps(t)$ as an element of $\frsub z {tt}{u_\eps(t)}{z_\eps(t)}$ for almost all $t\in (0,T)$, we observe that, by \eqref{prop-delta-1-bis}, $|(\gamma^{\dk})'(z_k ) | \leq \delta + |\pl \gamma^o(z_k) | \leq \delta +1$, taking into account that $\gamma(z) = \max\{ z,0\}$. Therefore, \[ \| (\gamma^{\dk})'(z_k ) \psi^\delta(\JUMP{u_k })\|_{\rmL^2} \stackrel{(1)}{\leq} (\delta{+}1) \left( \| \psi(\JUMP{u_k })\|_{\rmL^2}{+}\delta \| \JUMP{u_k }\|_{\rmL^2} \right) \stackrel{(2)}{\leq} (\delta{+}1) \left( C_\psi^{(2)} \| \JUMP{u_k }\|_{\rmL^4}^2 {+}\delta \| \JUMP{u_k }\|_{\rmL^2}{+}C\right) \] with {\footnotesize (1)} due to \eqref{psi-delta-also-linear} and {\footnotesize (2)} to \eqref{eq:Del.Ass02General}. Since $(u_k)_k$ is bounded in $\rmL^\infty(0,T;\rmH^1(\Omega;\R^3))$, we immediately deduce that $((\gamma^{\dk})'(z_k ) \psi^\delta(\JUMP{u_k }) )_k$ is bounded in $\rmL^\infty(0,T;\rmL^2(\GC))$. A standard argument based on the fact that $z\mapsto \phi^{\delta_k}(z)+\Lambda_\phi z$ is a non-decreasing function then yields a separate estimate in $\rmL^2(0,T;\rmL^2(\GC))$ for both $(\bsA z_k)_k$ and $( \phi^{\dk}(z_k))_k$ so that, up to a subsequence, $\phi^{\dk}(z_k) \weakto \phi$ in $\rmL^2(0,T;\rmL^2(\GC))$ for some $\phi$. Combining this with the fact that $z_k\to z_\eps$ in $\rmL^2(0,T;\rmL^2(\GC))$ we immediately conclude by \eqref{prop-delta-3} that $\phi \in \pl \wh\phi(z_\eps)$ a.e.\ in $(0,T)\ti \GC$. With the same arguments we find that $(\gamma^{\dk})'(z_k )\weaksto \omega$ in $\rmL^\infty((0,T)\ti\GC)$ with $\omega \in \pl \gamma(z_\eps)$ a.e.\ in $(0,T)\ti \GC$. Finally, again applying \eqref{psi-delta-also-linear} to estimate $| \psi^\delta(\JUMP{u_k })| $ and taking into account that $\JUMP{u_k }\to \JUMP{u}$ strongly in $\rmL^\infty(0,T;\rmL^q(\GC))$ for every $1\leq q<4$, with the dominated convergence theorem we conclude that $\psi^\delta(\JUMP{u_k })\to \psi(\JUMP{u_\eps})$, for instance, in $\rmL^{3/2}((0,T)\ti \GC)$. All in all, we find that $(\gamma^{\dk})'(z_k ) \psi^\delta(\JUMP{u_k }) \weakto \omega \psi(\JUMP{u_\eps})$ in $\rmL^{3/2}((0,T)\ti \GC)$. We have thus proved that \[ \zeta_\eps =\bsA z + \omega \psi(\JUMP{u_\eps})+ \phi \qquad \text{with } \omega \in \pl \gamma(z_\eps), \ \phi \in \pl \wh\phi(z_\eps) \ \aein\, (0,T)\ti \GC\,, \] and thus $\zeta_\eps(t) \in \frsub zt{u_\eps(t)}{z_\eps(t)}$. The identification of $\mu_\eps$ as an element of $\frsub u {\cdot}{u_\eps(\cdot)}{z_\eps(\cdot)}$ first of all follows from observing that, by \eqref{ptwise-q}, $ \bsC u_k \weaksto \bsC u$ in $\rmL^\infty(0,T;\Spu^*)$. Moreover, with similar arguments as in the above lines, based on properties \eqref{properties-delta-approx}, we find that $\gamma^{\dk}(z_k) \to \gamma(z_\eps)$ in $\rmL^q((0,T)\ti\GC)$ for all $1\leq q<\infty$ and, recalling that $\beta$ is Lipschitz, that there exists $\widetilde \beta \in \rmL^\infty (0,T;\rmL^4(\GC))$ such that $\beta^{\delta_k}(\JUMP {u_k}) \weakto \widetilde{\beta}$ in $ \rmL^\infty (0,T;\rmL^4(\GC))$. 
Finally, taking into account \eqref{linear-bound-der-psi-delta} and the fact that $\psi$ has quadratic growth we conclude that there exists $\varrho \in \rmL^\infty (0,T;\rmL^4(\GC))$ such that $\rmD\psi^{\delta_k}(\JUMP{u_k}) \weaksto \varrho $ in $\rmL^\infty (0,T;\rmL^4(\GC))$. All in all, we find that \[ \bsJ^*(\beta^{\delta_k}(\JUMP {u_k})+\gamma^{\delta_k}(z_k) \rmD \psi^{\delta_k}(\JUMP {u_k}) ) \weakto \eta =\widetilde{\beta} +\gamma(z_\eps) \varrho \quad \text{in } \rmL^2(0,T;\Spu^*), \] and it remains to show that $\eta = \bsJ^*(\beta(\JUMP {u})+\gamma(z) \rmD \psi(\JUMP {u}) )$. For this, we observe that the functionals $\mathcal{J}^{\delta_k} : \rmL^2(0,T; \Spu{\ti}\Spz) \to \R$ defined by $ \mathcal{J}^{\delta_k} (u,z): = \int_0^T \mathcal{}\int_{\GC} \big(\wh\beta^{\delta_k}(\JUMP u)+\gamma^{\delta_k}(z) \psi^{\delta_k}(\JUMP u)\big) \dd x \dd t, $ clearly fulfilling \[ \rmD_u \mathcal{J}^{\delta_k} (u,z) = \bsJ^*(\beta^{\delta_k}(\JUMP {u})+\gamma^{\delta_k}(z) \rmD \psi^{\delta_k}(\JUMP {u})) \qquad \text{for every $(u,z) \in \rmL^2(0,T; \Spu{\ti}\Spz)$}, \] enjoy the following property: \[ \left\{ \begin{array}{ll} (u_k, z_k) \weakto (u,z) \text{ in } \rmL^2(0,T; \Spu{\ti}\Spz), \\ \rmD_u \mathcal{J}^{\delta_k} (u_k,z_k) \weakto \eta \text{ in } \rmL^2(0,T; \Spu^*{\ti}\Spz^*), \\ \limsup_{k\to\infty} \int_0^T \pairing{}{\Spu}{ \rmD_u \mathcal{J}^{\delta_k} (u_k,z_k)}{u_k} \dd t \leq \int_0^T \pairing{}{\Spu}{ \eta}{u} \dd t \end{array} \right. \quad \Longrightarrow \quad \eta \in \bsJ^*\big(\beta(\JUMP{u})+ \gamma(z)\pl \psi (\JUMP{u}) \big)\,. \] Hence, we need to prove that \[ \limsup_{k\to\infty} \int_0^T \int_{\GC} \big\{ \beta^{\dk}(\JUMP{u_k}) \JUMP{u_k} {+} \gamma^{\delta_k}(z_k) \rmD \psi^{\delta_k}(\JUMP {u_k}) \JUMP{u_k} \big\} \dd x \dd t \leq \int_0^T \pairing{}{\rmH^1(\Omega)}{\eta}{u} \dd t\,. \] This follows from testing the $u$-equation \eqref{eq:DelamSyst.a} at the level $\delta_k$ by $u_k$, taking the limit as $k\to\infty$, and using that, by the convergence arguments in the above lines, the quadruple $(u,z,\tilde \beta,\varrho)$ fulfills the limit equation $ 0 = \eps^\alpha \bsD \dot u_\eps + \bsC u_\eps + +\bsJ^*(\tilde\beta{+}\gamma(z_\eps) \varrho) - \ell_u $ in $\Spu^*$ a.e.\ in $(0,T)$. All in all, we conclude that $ \bsJ^*(\tilde\beta {+} \gamma(z_\eps) \varrho) \in \bsJ^*(\beta(z_\eps){+} \gamma(z_\eps) \pl \psi(\JUMP{u_\eps}))$, so that \[ \mu_\eps \in \bsC u_\eps + \bsJ^*\big(\beta(\JUMP{u_\eps})+ \gamma(z_\eps)\pl \psi (\JUMP{u_\eps}) \big) -\ell_u(t) = \frsub ut{u_\eps}{z_\eps}\,. \] Therefore, passing to the limit as $k\to\infty$ in \eqref{enid.a} we infer that the quadruple $(u_\eps,z_\eps, \mu_\eps, \zeta_\eps)$ fulfills $( \mu_\eps(t), \zeta_\eps(t)) \in \frsubq q t{q_\eps(t)}$ for almost all $t\in (0,T)$, joint with the energy-dissipation upper estimate in \eqref{enid.a}. Now, by Proposition \ref{l:comprehensive} the energy functional $\calE$ from \eqref{energy-delamination} complies with the chain rule of Hypothesis \ref{h:ch-rule}. Hence, by Remark \ref{rmk:GS-used-later} the validity of the energy-dissipation upper estimate is sufficient to conclude that $(u_\eps,z_\eps)$ solve the Cauchy problem for the delamination system \eqref{eq:DelamSyst}. By lower semicontinuity arguments, the a priori estimate \eqref{eq:ApriSemLinCase} is inherited by $(u_\eps,z_\eps)$. This concludes the proof of Proposition \ref{pr:Del.ViscSolImprov} and, ultimately, of Theorem \ref{thm:BV-adh-cont}. \end{proof}
\section{Introduction}

On 22 September 2017, IceCube reported a neutrino event with energy $\sim 290\ \rm TeV$, which was shown to be associated with the blazar TXS 0506+056 \citep{aartsen1807science}. This opened a window for high-energy neutrino astrophysics. The origin of high-energy neutrinos is not clear yet, and tidal disruption events (TDEs) are another possible source. Recently, \cite{stein2021tidal} reported a correlation between a neutrino event with energy $\sim$0.2 PeV detected by IceCube on 1 October 2019 (IceCube-191001A) and a TDE (AT2019dsg) discovered by the Zwicky Transient Facility (ZTF), with the neutrino event lagging the onset of the TDE by 6 months.

AT2019dsg's redshift is $z=0.051$, i.e., the luminosity distance is $D=230$ Mpc. The optical luminosity was observed to decrease from $10^{44.5} \rm erg\ s^{-1}$ to $10^{43.5} \rm erg\ s^{-1}$ \citep{van2020} on a timescale of half a year. AT2019dsg is among the top 10\% of the 40 known optical TDEs in luminosity. The peak radiation is well described by a $10^{14}\rm cm$-sized blackbody photosphere of temperature $10^{4.59}$ K. AT2019dsg is an unusual TDE \citep{2102.11879}; it belongs to neither the typical soft-X-ray TDEs nor the typical optical/UV TDEs because it emits optical/UV radiation as well as X-ray and radio radiation. The Fermi Large Area Telescope (Fermi-LAT) provides an upper limit on the flux of $1.2\times 10^{-11}\rm erg\ cm^{-2}\ s^{-1}$ in the 0.1-800 GeV range. The HAWC observatory also set an upper limit for the period from 30 September to 2 October, $ E^{2}\Phi=3.51\times 10^{-13}(\frac{E}{\rm TeV})^{-0.3}\rm TeV\ cm^{-2}\ s^{-1} $ for 300 GeV-100 TeV \citep{van2020}.

In previous works (e.g., \citealt{wang2016tidal,liu2020}), a jet from the TDE is assumed to accelerate protons and generate neutrinos via p$\gamma$ interactions, which dominate over pp interactions since the density of photons is extremely high. Moreover, in the TDE model by \cite{murase2020high}, high-energy neutrinos and soft gamma-rays may be produced via hadronic interactions in the corona, the radiatively inefficient accretion flow and the hidden sub-relativistic wind.

It is known that, in addition to the radiative outburst, a TDE is also expected to launch ultra-fast and energetic outflows. First, due to the relativistic apsidal precession, after passing the pericenter, the stream collides with the still-falling debris, leading to a collision-induced outflow \citep{lu2020self}. Second, after the debris settles into an accretion disk, the high accretion mode will launch energetic outflows \citep{curd2019grrmhd}. Since the durations of both processes above are of the order of months, the duration over which energetic TDE outflows are launched is also roughly months. For AT2019dsg, the physics of the outflow is estimated via radio emission. The velocities inferred in different models are similar: 0.07c in the outflow--circumnuclear medium (CNM) interaction model \citep{cendes2021radio}, or 0.06--0.1c in the outflow--cloud interaction model \citep{mou2021radio}. However, the outflow energy or kinetic luminosity of the outflows varies considerably among different models. \citet{cendes2021radio} estimated an energy of $4\times 10^{48}$ erg, which is much smaller than the energy budget of the TDE system ($\sim 10^{53}$ erg). \citet{mou2021radio} inferred that the outflow power should be around $10^{44}$ erg s$^{-1}$, which is consistent with numerical simulations \citep{curd2019grrmhd}, and the total energy should be of the order of $10^{51}$ erg if the outflow continues for months.
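To make the quoted energy budget concrete, the total kinetic energy of an outflow of power $L_{\rm kin}$ sustained for a few months, and the implied mass-loss rate, can be evaluated in a few lines. The following is a minimal illustrative sketch (not the original calculation), using round numbers:
\begin{verbatim}
# Sketch (illustration only): total kinetic energy of an outflow of power
# L_kin sustained for a duration T_o, and the implied mass outflow rate
# for a velocity V_o.  Fiducial numbers follow the text.
C_LIGHT = 3.0e10            # cm/s
YEAR    = 3.156e7           # s
M_SUN   = 2.0e33            # g

L_kin = 1.0e44              # erg/s, radio-inferred outflow power
T_o   = 0.5 * YEAR          # ~6 months of outflow launching
V_o   = 0.07 * C_LIGHT      # outflow velocity

E_tot = L_kin * T_o                  # total kinetic energy, erg (~1.6e51)
M_dot = 2.0 * L_kin / V_o**2         # g/s, from L_kin = 0.5 * Mdot * V_o^2

print("E_tot = %.1e erg" % E_tot)
print("M_dot = %.2f Msun/yr" % (M_dot * YEAR / M_SUN))
\end{verbatim}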
When the outflow impacts a cloud, a bow shock (BS) can be produced outside the cloud, which could effectively accelerate particles via diffusive shock acceleration (DSA) processes. The electrons accelerated at BSs give rise to synchrotron emission, which can be detected in the $\sim$GHz radio band \citep{mou2021radio}. The accelerated high-energy protons may be the precursors of the neutrinos, especially considering the high-density cloud near the BS, which is favorable for pp collisions \citep{mou2021years2}.

A basic premise of this scenario is the presence of clouds around the black hole, especially at the distance of $10^{-2}$ pc as inferred from the delay of the neutrino event and the possible outflow speed. It is well known that for active galactic nuclei (AGN), there exists a so-called broad line region (BLR) around the supermassive black hole (SMBH) \citep{antonucci1993unified}, which is frequently referred to as ``clouds''. However, for a quiescent SMBH or a low-luminosity AGN (the case for AT2019dsg), due to the lack of ionizing radiation irradiating the clouds, the existence of the clouds becomes difficult to verify, and the physics of such clouds remains largely unknown. To distinguish them from the BLR cloud concept in AGN, we hereby call the possibly existing clouds around a quiescent or low-luminosity SMBH, located at positions similar to those of the BLR, ``dark clouds''. Transient events may help reveal the physics of dark clouds. For AT2019dsg, in addition to the indirect speculation on the existence of dark clouds via radio emission \citep{mou2021radio}, direct evidence arises from the dusty echo and broad emission line components. First, \cite{van2021establishing} reported that AT2019dsg was detected with a remarkable infrared echo about 30 days after the optical onset, suggesting that clouds should exist at a distance of 0.02 pc from the SMBH (note that clouds may also exist in more inward regions not surveyed by WISE/neoWISE). Second, it is reported that there exist broad emission line components (line widths $>$ 70 \r{A}) of H$\alpha$, H$\beta$, and He\uppercase\expandafter{\romannumeral2} \citep{cannizzaro2021accretion}, implying the existence of material with velocities over several thousand kilometers per second, although their nature is unclear. In our TDE outflow-cloud interaction model, we assume that there exist dark clouds at $\sim 0.01$ pc from the BH, and we simply set up the parameters (covering factor, cloud size and density) of the dark clouds by referring to those of classical BLR clouds.

This paper is organized as follows. We introduce the general physical picture of the model in Sec.~\ref{Pp}; the products (GeV-TeV gamma-rays and PeV neutrinos) from hadronic emission are described in Sec.~\ref{dp}; in Sec.~\ref{cwo}, we compare the calculations with the present observations; the conclusions and discussion are presented in the last section.

\section{Physical picture of outflow-cloud interactions}\label{Pp}

As shown in Fig. \ref{S}, consider that there are dark clouds surrounding the SMBH. The TDE outflows released from the SMBH collide with the dark clouds, forming two shock waves \citep{mckee1975}, i.e., a bow shock (BS) outside the cloud and a cloud shock (CS) sweeping through the cloud. Following \cite{celli2020spectral}, protons may be accelerated to very high energy with a power-law distribution. The high-energy protons may partly propagate into the cloud and interact with matter therein.
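The quoted cloud distances follow from simple light-travel and outflow-travel arguments: a $\sim$30-day infrared echo places dust at roughly $c\,\Delta t$ from the SMBH, while an outflow of $\sim0.07$c reaches $\sim0.01$ pc after the $\sim$6-month neutrino delay. A short numerical check (our own illustration, not part of the original analysis):
\begin{verbatim}
# Light-travel estimate of the dust/cloud distance from the ~30-day infrared
# echo delay, and the outflow travel distance after ~6 months at 0.07c.
C_LIGHT = 3.0e10                 # cm/s
PC      = 3.086e18               # cm
DAY     = 86400.0                # s

r_echo    = C_LIGHT * 30 * DAY / PC           # ~0.025 pc (dusty clouds)
r_outflow = 0.07 * C_LIGHT * 180 * DAY / PC   # ~0.01 pc (neutrino delay)

print("echo-inferred cloud distance ~ %.3f pc" % r_echo)
print("outflow distance after 6 mon ~ %.3f pc" % r_outflow)
\end{verbatim}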
\begin{figure*} \centering \includegraphics[scale=0.8]{9f.eps} \caption{Schematic plot of the TDE outflow--cloud interaction model. A star crosses the Roche radius of the SMBH, whereby the disrupted debris is blown away. The collision-induced outflows hit the clouds, forming shock waves: a bow shock and a cloud shock. The outflow--cloud interactions occur at 0.01 pc from the SMBH. See section \ref{poo} for details. } \label{S} \end{figure*}

\subsection{Dynamics}\label{poo}

We consider a simplified spherically symmetric outflow, with kinetic luminosity $L_{\rm kin}$ and velocity $V_{\rm o}$. We take $L_{\rm kin}=10^{45} \rm erg~s^{-1}$ as the fiducial value \citep{curd2019grrmhd}, which is also close to the constraint given by \cite{mou2021radio}. Following the interpretation of the radio observation \citep{stein2021tidal} by synchrotron radiation from non-thermal electrons in the outflow-CNM model, we take the outflow velocity derived from the model, $V_{\rm o} = 0.07\rm c$ \citep{cendes2021radio}. The duration of the outflow launching is assumed to be $T_{\rm o}\sim 6$ months. Defining $\rho_{\rm o}$ as the density of the outflow material, we write the kinetic luminosity as $L_{\rm kin}=\frac{1}{2}\dot{M_{\rm o}}V_{\rm o}^{2}=2\pi r_{\rm o}^{2}\rho_{\rm o}V_{\rm o}^3$, with $\dot{M_{\rm o}}$ the mass outflow rate. Since the time delay between the neutrino event and the TDE is $t_{\rm delay}\sim 6$ months \citep{van2020}, we assume the typical distance of the dark clouds from the central SMBH is $r_{\rm o}=V_{\rm o}t_{\rm delay} \simeq 0.01$ pc. Thus the number density of the outflow is $n_{\rm o}=\frac{\rho_{\rm o}}{m_{\rm H}}\sim 1.14\times 10^{7}(\frac{L_{\rm kin}}{{{10^{45}\rm erg\ s^{-1}}}})(\frac{V_{\rm o}}{0.07{\rm c}})^{-3}(\frac{r_{\rm o}}{0.01{\rm pc}})^{-2}\rm cm^{-3}$.

The interaction between outflows and clouds drives a CS that sweeps across the cloud. The velocity of the CS is related to the outflow velocity by $V_{\rm c}=\chi^{-0.5}V_{\rm o}$ \citep{mckee1975, mou2020years1}, where $\chi\equiv \frac{n_{\rm c}}{n_{\rm o}}$, with $n_{\rm c}$ the particle density in the cloud. According to the photoionization models of BLRs in AGN, we assume the cloud particle density to be $n_{\rm c} \sim 10^{10} \rm cm^{-3}$ \citep{osterbrock2006astrophysics}. So, here $\chi\simeq 8.8\times 10^{2}$. Let us assume the size of the clouds is typically $r_{\rm c}=10^{14}$cm\footnote{This can be obtained from the column density ($\sim10^{24}{\rm cm^{-2}}$) and the cloud density $ n_{\rm c}\sim10^{10}\rm cm^{-3}$ \citep{osterbrock2006astrophysics}.}. The CS crosses the cloud on a timescale of \begin{equation} T_{\rm cloud}=\frac{r_{\rm c}}{V_{\rm c}}=\frac{r_{\rm c}}{V_{\rm o}} \chi^{0.5}, \end{equation} i.e., $T_{\rm cloud}\sim1(\frac{r_{\rm c}}{10^{14}\rm cm})(\frac{V_{\rm o}}{0.07\rm c})^{-1}(\frac{n_{\rm c}}{10^{10}\rm cm^{-3}})^{0.5}(\frac{n_{\rm o}}{1.14 \times 10^{7}\rm cm^{-3}})^{-0.5}$ month. Note that the cloud could be devastated by the outflow, and the survival timescale of the clouds after the CS crossing is comparable to $T_{\rm cloud}$ \citep{klein1994hydrodynamic}, so $T_{\rm cloud}$ can also be regarded as the survival timescale of the cloud.

\subsection{Particle acceleration and propagation} \label{swa}

Both the BS and the CS may accelerate particles.
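The characteristic numbers of the Dynamics subsection can be verified with a short script. The sketch below is our own illustration using the fiducial parameters; small differences with respect to the quoted values reflect rounding of the physical constants.
\begin{verbatim}
import math

C_LIGHT = 3.0e10; PC = 3.086e18; M_H = 1.67e-24   # cgs constants

L_kin = 1.0e45          # erg/s, outflow kinetic luminosity
V_o   = 0.07 * C_LIGHT  # cm/s, outflow velocity
r_o   = 0.01 * PC       # cm, cloud distance
n_c   = 1.0e10          # cm^-3, cloud density
r_c   = 1.0e14          # cm, cloud size

rho_o = L_kin / (2.0 * math.pi * r_o**2 * V_o**3)  # L_kin = 2 pi r^2 rho V^3
n_o   = rho_o / M_H                                 # ~1.1e7 cm^-3
chi   = n_c / n_o                                   # ~9e2
V_c   = V_o / math.sqrt(chi)                        # cloud-shock speed
T_cl  = r_c / V_c / 86400.0                         # CS crossing time, days

print("n_o = %.2e cm^-3, chi = %.0f" % (n_o, chi))
print("V_c = %.2e cm/s, T_cloud = %.0f days" % (V_c, T_cl))
\end{verbatim}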
According to the DSA mechanism, the acceleration timescale in the BS for a particle with energy $E_{\rm p}$ and charge number $Z$ is \citep{drury1983introduction} \begin{equation} T_{\rm acc,BS}\approx\frac{8}{3}\frac{cE_{\rm p}}{ZeB_{\rm o}V_{\rm o}^2}, \label{AM} \end{equation} where $B_{\rm o}$ is the magnetic field strength in the outflow. For the CS, the acceleration timescale is \begin{equation} T_{\rm acc,CS}\approx\frac{8}{3}\frac{cE_{\rm p}}{ZeB_{\rm c}V_{\rm c}^2} \end{equation} where $B_{\rm c}$ is the magnetic field in the cloud. We will assume $B_{\rm o}= B_{\rm c}=B$ in the nearby region of the outflow--cloud interaction, and we take $B=1$ G.

The particle acceleration is limited by several factors. The first is the particle energy loss due to hadronic interactions, namely the pp interactions. In the BS, the timescale is \begin{equation} t_{\rm pp,BS}=\frac{1}{cn_{\rm o}\sigma_{\rm pp}} ,\label{tpp} \end{equation} whereas in the CS, \begin{equation} t_{\rm pp,CS}=\frac{1}{cn_{\rm c}\sigma_{\rm pp}} .\label{tpp,cs} \end{equation} Here $\sigma_{\rm pp}\simeq 30$ mb is the pp cross section. The other suppression factor is the lifetime of the relevant shocks. The cloud survival timescale is comparable to the CS crossing time, and after the cloud is destroyed, both the BS and the CS end. So the acceleration in either the BS or the CS is only available within a time period of $T_{\rm cloud}$ \citep{klein1994hydrodynamic}. Finally, the maximum energy of the accelerated particles is determined by equating the acceleration time to the shorter of the pp interaction time and the CS crossing time of the cloud. All the timescales are plotted in Fig. \ref{tt}.

For the BS, $T_{\rm cloud} \sim 1$ month is a stronger constraint than the pp energy loss time, due to the low density in the outflow, $t_{\rm pp,BS}=3.1(\frac{n_{\rm o}}{1.14\times 10^{7}{\rm cm^{-3}}})^{-1}$yr. By equating $T_{\rm acc,BS}=T_{\rm cloud}$ we obtain the maximum energy of particles accelerated in the BS, $E_{\rm p,max}\simeq 60(\frac{B}{1{\rm G}})(\frac{V_{\rm o}}{0.07{\rm c}})^{2}(\frac{T_{\rm acc,BS}}{{\rm 1\ month}})$PeV. For the CS, due to the dense cloud, the pp collision time is short, $t_{\rm pp,CS}=1(\frac{n_{\rm c}}{10^{10}\rm cm^{-3}})^{-1}$ day, and is more important in suppressing acceleration. By $T_{\rm acc,CS}=t_{\rm pp,CS}$, one obtains the maximum energy $E_{\rm p,max} \simeq 2.9(\frac{B}{1{\rm G}})(\frac{V_{\rm c}}{7.1\times 10^{7}{\rm cm\ s^{-1}}})^{2}(\frac{T_{\rm acc,CS}}{1 {\rm day}})$TeV. Thus only the BS can accelerate particles up to the PeV scale.

\begin{figure} \centering \includegraphics[scale=0.6]{timescale_10f.eps} \caption{The timescales of particle acceleration (solid) and pp interaction (dotted) in the BS (blue) and CS (red), and the cloud survival timescale (black). } \label{tt} \end{figure}

Note that we neglect the effect of energy loss due to p$\gamma$ interactions between the high-energy particles and the TDE photons. Given the cross section $\sigma_{\rm p\gamma}\sim0.2$ mb on average, and the TDE photon number density at $r_{\rm o}$, $n_{\rm ph}\simeq 10^{9} \rm cm^{-3}$ (see Sec.~\ref{gammarays}), the timescale of $\rm p\gamma$ interactions is relatively large, $t_{\rm p\gamma}\sim 3.2(\frac{n_{\rm ph}}{1\times 10^{9}{\rm cm^{-3}}})^{-1}\rm yr$.
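The maximum energies above follow from balancing the DSA acceleration time against the shortest relevant timescale. The following sketch (ours, order-of-magnitude only; small differences from the quoted values come from rounding conventions) encodes equations (\ref{AM})--(\ref{tpp,cs}):
\begin{verbatim}
# DSA acceleration vs. pp-loss and cloud-survival times, and the implied
# maximum proton energies in the BS and CS (order of magnitude only).
C = 3.0e10; E_CHARGE = 4.803e-10; ERG2EV = 1.0/1.602e-12
SIGMA_PP = 3.0e-26               # cm^2 (~30 mb)

B     = 1.0                      # G
V_o   = 0.07 * C                 # cm/s, BS (outflow) shock speed
V_c   = 6.9e7                    # cm/s, cloud-shock speed (chi^-0.5 * V_o)
n_o, n_c = 1.1e7, 1.0e10         # cm^-3
T_cloud  = 2.6e6                 # s, ~1 month cloud survival

t_pp_BS = 1.0 / (C * n_o * SIGMA_PP)     # ~3 yr
t_pp_CS = 1.0 / (C * n_c * SIGMA_PP)     # ~1 day

# E_max from T_acc = (8/3) c E / (e B V^2) equated with the limiting time
E_max_BS = 3*E_CHARGE*B*V_o**2*T_cloud / (8*C) * ERG2EV   # tens of PeV
E_max_CS = 3*E_CHARGE*B*V_c**2*t_pp_CS / (8*C) * ERG2EV   # a few TeV

print("t_pp(BS) = %.1f yr, t_pp(CS) = %.1f day"
      % (t_pp_BS/3.15e7, t_pp_CS/8.64e4))
print("E_max(BS) ~ %.0f PeV, E_max(CS) ~ %.1f TeV"
      % (E_max_BS/1e15, E_max_CS/1e12))
\end{verbatim}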
In previous works (e.g., \citealt{liu2020}), the $\rm p\gamma$ reaction is important because a site closer to the center is considered, where the photon density is high, $n_{\rm ph}\sim 10^{16}(\frac{L}{10^{43}\rm erg\ s^{-1}})(\frac{r_{\rm o}}{10^{14.5}\rm cm})^{-2}\rm cm^{-3}$ (with $L$ being the TDE luminosity, see below). In our case, neither p$\gamma$ nor pp reactions in the BS consume a significant fraction of the energy of the accelerated particles.

After acceleration in the BS, the high-energy particles may diffuse away from the BS \citep{bultinck2010origin,taylor1961diffusion,kaufman1990explanation}. As suggested in the literature \citep{1997ApJ...478L...5D,2010ApJ...724.1517B,2012ApJ...755..170B,2018arXiv180900601W}, we assume that a significant fraction $F$ of the accelerated particles can effectively reach and enter the cloud, whereas the others propagate away, bypassing the cloud. Basically, the relatively low-energy protons are likely to be advected with the shocked material, while the relatively energetic particles ($\gtrsim 1\,\rm TeV$) tend to diffuse up to the cloud. Besides, these high-energy particles entering the cloud could become more important if the possible advection escape is suppressed under certain magnetic configurations \citep{bosch2012}. The detailed treatment of the particle propagation is beyond the scope of this paper. To evaluate this uncertainty, $F\simeq 0.5$ is invoked in our calculations. For the other high-energy particles that do not propagate into the cloud, no hadronic interactions are expected, given the low density of cold particles outside the cloud.

After entering the cloud, the particles may propagate in the cloud by diffusion. The residence time in the cloud before escaping can be estimated by \begin{equation} \tau_{\rm es}=C_{\rm e}\frac{r_{\rm c}^{2}}{D_{\rm B}}, \label{eqescape} \end{equation} where $D_{\rm B}$ is the Bohm diffusion coefficient, and $C_{\rm e}$ is a correction factor that accounts for the difference between the actual diffusion coefficient and the Bohm diffusion. We take $C_{\rm e}=0.75$. The Bohm diffusion coefficient of protons is given by \citep{kaufman1990explanation} $D_{\rm B}=\frac{r_{\rm g}^{2}\omega_{\rm g}}{16}$, where $r_{\rm g}=\frac{E_{\rm p}}{eB}$ is the cyclotron radius, and $\omega_{\rm g}= \frac{eBc}{E_{\rm p}}$ is the cyclotron frequency. Thus we get $\tau_{\rm es}\sim 1.3(\frac{E_{\rm p}}{7\rm PeV})^{-1}(\frac{B}{1\rm G})$day, a value similar to $t_{\rm pp,CS}$ for $E_{\rm p}\sim7$ PeV.

\section{Hadronic emission}\label{dp}

In the outflow--cloud interaction, the kinetic energy of the outflow will be converted into the BS and the CS. The energy ratio between the CS and the BS is $\chi^{-0.5}\simeq0.034$, so the energy dissipation in the CS can be neglected compared to the BS (see appendix A in \citealt{mou2020years1}). The covering factor of the clouds is $C_{\rm v}\sim 0.1$, and the shock acceleration efficiency, i.e., the fraction of the shock energy converted to accelerated particles, is $\eta \sim 0.1$. Given the kinetic energy of the outflow, the average luminosity of the accelerated particles is, in the BS, \begin{equation} L_{\rm b}=C_{\rm v}\eta L_{\rm kin}, \label{Eb} \end{equation} and in the CS, \begin{equation} L_{\rm c}=C_{\rm v}\chi^{-0.5}\eta L_{\rm kin}. \label{Ec} \end{equation} Plugging in the numbers, $L_{\rm c}\approx 3.4\times 10^{41}\rm erg\ s^{-1}$, and $L_{\rm b}\approx 10^{43}\rm erg\ s^{-1}$ for $\eta=0.1$.
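Equation (\ref{eqescape}) and the luminosities $L_{\rm b}$ and $L_{\rm c}$ can be evaluated directly. The sketch below (our own illustration, fiducial parameters; the escape time agrees with the quoted value only to within rounding of the constants) compares the Bohm residence time of a multi-PeV proton with $t_{\rm pp,CS}$ and lists the energy budget of the accelerated particles:
\begin{verbatim}
# Bohm-diffusion residence time in the cloud vs. pp-loss time, and the
# average luminosities of accelerated particles in the BS and CS.
C = 3.0e10; E_CHARGE = 4.803e-10; EV = 1.602e-12

B, C_e   = 1.0, 0.75            # G, diffusion correction factor
r_c, n_c = 1.0e14, 1.0e10       # cloud size (cm), density (cm^-3)
L_kin    = 1.0e45               # erg/s
C_v, eta = 0.1, 0.1             # covering factor, acceleration efficiency
chi      = 8.8e2                # n_c / n_o

E_p     = 7.0e15 * EV                        # 7 PeV proton, in erg
D_B     = E_p * C / (16.0 * E_CHARGE * B)    # Bohm diffusion coefficient
tau_es  = C_e * r_c**2 / D_B                 # ~1e5 s at several PeV
t_pp_CS = 1.0 / (C * n_c * 3.0e-26)          # ~1e5 s as well

L_b = C_v * eta * L_kin                      # ~1e43 erg/s (BS)
L_c = C_v * eta * L_kin / chi**0.5           # ~3e41 erg/s (CS)

print("tau_es(7 PeV) = %.1e s,  t_pp,CS = %.1e s" % (tau_es, t_pp_CS))
print("L_b = %.1e erg/s, L_c = %.1e erg/s" % (L_b, L_c))
\end{verbatim}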
The luminosity of the CS is so small that we can neglect its contribution to the emission. We assume the accelerated relativistic particles follow a power-law spectrum with spectral index $\Gamma$ and an exponential cut-off at high energy: \begin{equation} \frac{dn(E_{\rm p})}{dE_{\rm p}}=K_{\rm p}E_{\rm p}^{-\Gamma}e^{-\frac{E_{\rm p}}{E_{\rm p,max}}}\label{Ep}. \end{equation} The normalization factor $K_{\rm p}$ can be determined from the particle luminosity $L_{\rm p}=\int E_{\rm p}\frac{dn(E_{\rm p})}{dE_{\rm p}}dE_{\rm p}$. Since the contribution from the CS is neglected, we have $L_{\rm p}=L_{\rm b}$. We will consider a range of $\Gamma$ values, from 2 to 1.5 \citep{celli2020spectral}.

The pp collisions produce neutral and charged pions, \begin{eqnarray} &p+p\to p+p+a\pi^{0}+b(\pi^{+}+\pi^{-}),\\ & p+p\to p+n+\pi^{+}+a\pi^{0}+b(\pi^{+}+\pi^{-}),\label{pp2} \end{eqnarray} where $a\approx b$. The pions decay and generate $\gamma$-rays and leptons: \begin{eqnarray} & \pi^{0}\to 2\gamma\\ & \pi^{+}\to \mu^{+}+\nu_{\mu},~ \mu^{+}\to e^{+}+\nu_{e}+\bar{\nu}_{\mu},\\ & \pi^{-}\to \mu^{-}+\bar{\nu}_{\mu},~ \mu^{-}\to e^{-}+\bar{\nu}_{e}+\nu_{\mu}. \label{n2} \end{eqnarray} The final particles produced by pp collisions in the clouds per unit time can, on average, be given by \citep[e.g.][]{kamae2006,aartsen2014}: \begin{equation} \frac{dn_{\rm f}}{dE_{\rm f}}= 1.5Fcn_{\rm H} \int\frac{d\sigma(E_{\rm p},E_{\rm f})}{dE_{\rm f}}\frac{dn(E_{\rm p})}{dE_{\rm p}}dE_{\rm p},\label{nf} \end{equation} where f$=\gamma, \nu$, etc., represents the type of final particle, $\sigma$ is the inclusive cross section as a function of the final particle's and the proton's energies, and $n_{\rm H}$ is the background number density of protons. The coefficient 1.5 is a correction factor accounting for the contribution of Helium (we assume that the Helium abundance in the BS is similar to that of Galactic cosmic rays \citep{mori1997galactic}). Here the integration is calculated using the cparamlib package\footnote{https://github.com/niklask/cparamlib}. If the particle escape from the cloud is fast, the calculated spectrum above should be multiplied by a factor $\frac{\tau_{\rm es}(E_{\rm p})}{t_{\rm pp,CS}}$ to take into account the reduction of secondary products by escape.
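For the spectral bookkeeping, $K_{\rm p}$ follows from normalizing equation (\ref{Ep}) to $L_{\rm p}=L_{\rm b}$, and the escape correction enters as the factor $\min(1,\tau_{\rm es}/t_{\rm pp,CS})$. The sketch below is our own simplified illustration of this normalization; it does not implement the cparamlib inclusive cross sections used for the actual spectra.
\begin{verbatim}
import numpy as np

def integrate(y, x):               # simple trapezoidal rule
    return np.sum(0.5*(y[1:]+y[:-1])*np.diff(x))

EV = 1.602e-12
Gamma, E_max = 1.7, 60.0e15 * EV   # spectral index, cutoff energy (erg)
L_b = 1.0e43                        # erg/s injected into protons (BS)

E = np.logspace(np.log10(1e9*EV), np.log10(1e18*EV), 2000)  # 1 GeV - 1 EeV
shape = E**(-Gamma) * np.exp(-E/E_max)

# normalize: K_p such that  int E (dn/dE) dE = L_b
K_p   = L_b / integrate(E*shape, E)
dn_dE = K_p * shape                 # injected protons / erg / s

# crude escape suppression for secondaries produced inside the cloud
B, C_e, r_c, n_c  = 1.0, 0.75, 1.0e14, 1.0e10
C_LIGHT, E_CHARGE = 3.0e10, 4.803e-10
tau_es = C_e*r_c**2 * 16.0*E_CHARGE*B / (E*C_LIGHT)  # Bohm, energy-dependent
t_pp   = 1.0/(C_LIGHT*n_c*3.0e-26)
f_supp = np.minimum(1.0, tau_es/t_pp)

frac = integrate(E*dn_dE*f_supp, E) / L_b
print("fraction of injected power retained against escape = %.2f" % frac)
# multiply further by F = 0.5 for the fraction of protons reaching the cloud
\end{verbatim}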
\begin{table*} \centering \caption{Model parameters} \label{123} \begin{tabular}{lcr} \hline Parameters & Descriptions & Fiducial Values \\ \hline $L_{\rm kin}$ & the kinetic luminosity of the outflow & $10^{45}\rm erg\ s^{-1}$ \\ $V_{\rm o}$ & the velocity of the outflow & 0.07c \\ $T_{\rm o}$ & outflow launching duration & 6 months \\ $r_{\rm o}$ & the typical distance of clouds from the SMBH & $0.01\,\rm pc$ \\ $n_{\rm c}$& the particle density of the cloud & $10^{10}\,\rm cm^{-3}$ \\ $r_{\rm c}$ & the typical size of clouds & $10^{14}\,\rm cm$ \\ $B$ & magnetic field strength around the outflow-cloud interaction region & 1 G \\ $C_{\rm v}$ & covering factor of clouds & 0.1 \\ $\eta$ & the fraction of shock energy converted to accelerated particles & 0.1 \\ $F$ & the fraction of accelerated particles that propagate into the cloud & 0.5\\ $C_{\rm e}$ & the correction factor of the diffusion coefficient relative to the Bohm limit & 0.75\\ \hline \end{tabular} \end{table*}

\subsection{Neutrino}

Given the neutrino luminosity and spectrum, we calculate the neutrino event number expected to be detected by IceCube in a time period of $T_{\rm o}$, \begin{equation} N_{\nu}=\frac{T_{\rm o}}{4\pi D^{2}}\int_{0.1\rm PeV}^{1\rm PeV}dE_{\nu}A_{\rm eff}(E_{\nu})\frac{dn_{\nu}}{dE_{\nu}}.\label{n} \end{equation} The detected event IceCube-191001A has a neutrino energy of $>0.2$ PeV, thus we only calculate the sub-PeV neutrino events in the $0.1-1$ PeV range. The real-time effective area of IceCube is described by \citep{blaufuss2021next} \begin{equation} A_{\rm eff}=2.058\times \left(\frac{E_{\nu}}{1\rm TeV}\right)^{0.6611}-32. \end{equation} The number of neutrino events is calculated to be $N_{\nu}\simeq 3.5\times10^{-3}$ for $\Gamma=1.5$, considering particle escape from the cloud (see details in Fig. \ref{NF}).

\begin{figure} \includegraphics[scale=0.57]{11FN.eps} \caption{The energy distribution of the neutrino luminosity. The blue, red and green lines correspond to $\Gamma=1.5$, 1.7, and 2, respectively. The solid (dotted) lines correspond to the cases with (without) the consideration of particle escape from the cloud. } \label{NF} \end{figure}

\subsection{$\gamma$-ray}\label{gammarays}

The intrinsic $\gamma$-ray spectrum accompanying the neutrino emission can also be calculated with equation \ref{nf}, but high-energy $\gamma$-ray photons may be attenuated by interacting with low-energy background photons via $\gamma\gamma\to e^{+}e^{-}$. The background photons may come from the TDE, the host galaxy, the extragalactic background light (EBL), the cosmic microwave background (CMB), and even the radiation in the cloud. Consider the TDE photons first. At a distance $r_{\rm o}$ from the SMBH, the number density of TDE photons around the typical energy $E_{\rm ph}\sim10$ eV is estimated by \begin{equation} n_{\rm ph}=\frac{L}{4\pi r_{\rm o}^{2}cE_{\rm ph}}, \label{n_ph} \end{equation} where $L$ is the TDE radiation luminosity, which is given by \cite{stein2021tidal} for AT2019dsg. The TDE luminosity may evolve with time, approximately following the accretion rate evolution, i.e., $ L\propto \dot{M}\propto \left(\frac{t}{T_{\ast}} \right)^{-5/3}$, where $T_{\ast}\approx 0.1(\frac{R_{\ast}}{1R_{\odot}})^{3/2}(\frac{M_{\ast}}{1M_{\odot}})^{-1/2}$ yr is the minimum orbital period of the disrupted material \citep{evans1989}, depending on the radius $R_{\ast}$ and the mass $M_{\ast}$ of the star. The TDE luminosity decreases to about $6\%$ of the peak luminosity after $t_{\rm delay}=6$ months, i.e., $L\sim10^{43}\rm erg\ s^{-1}$.
Thus, the TDE photon density is estimated as $n_{\rm ph}\sim 10^{9}(\frac{L}{10^{43}\, \rm erg\ s^{-1}})(\frac{r_{\rm o}}{0.01\,\rm pc})^{-2} (\frac{E_{\rm ph}}{10\,\rm eV})^{-1}\,\rm cm^{-3}$. The $\gamma$-rays are emitted from clouds $\sim0.01$ pc away from the SMBH, so the absorption due to the TDE photon field depends on the emergent angle. We calculated the angle-averaged optical depth for the $\gamma$-rays (see appendix J of \citealt{mou2020years1}). The optical depth due to TDE photons is found to be moderate for $\gamma$-rays of tens of GeV, as presented in Fig. \ref{GF}.

Next, the absorption by the host galaxy's background light is considered. The photons are assumed to be isotropic, and we calculate the $\gamma\gamma$ absorption following \cite{aharonian2004}. For the host galaxy background photon field, there are no observations of the host galaxy 2MASS J20570298+1412165 other than the infrared ones \citep{skrutskie2006}, namely J ($1.25\rm \mu m$), H ($1.65\rm \mu m$), and K ($2.16\rm \mu m$). Thus we get the mean luminosity in the J, H, K bands to be about $10^{42} \rm erg\ s^{-1}$. For the spectral profile we adopt the model in \cite{finke2010modeling} (the black line for redshift $z=0$ in Fig. 5 therein), normalized to the infrared luminosity. The size of the host galaxy is typically of kpc scale, and we find only mild absorption beyond TeV (see Fig. \ref{GF}). Furthermore, there would be significant absorption by the EBL and CMB. Considering the spectrum of the EBL as model C in \cite{finke2010modeling}, our calculation result for the EBL and CMB absorption is presented in Fig. \ref{GF}. Both intrinsic and attenuated spectra are plotted for comparison. The absorption significantly changes the emergent spectrum, mainly due to the EBL and CMB absorption.

Notice that we have also considered the absorption in the cloud, since the high-energy $\gamma$-rays are produced in the cloud, but we find the attenuation in the cloud is negligible. Firstly, the shocked cloud is optically thin to the high-energy gamma-rays with respect to the scattering between the thermal electrons and $E_\gamma\sim\rm GeV$ high-energy photons: the optical depth for a GeV photon is ${\tau _{e\gamma }} \simeq {r_{\rm c}}{n_{\rm c}}{\sigma _{\rm KN,GeV}} \simeq 6\times 10^{-4} r_{\rm c,14} n_{\rm c,10}$ with $\sigma _{\rm KN,GeV} \sim \sigma _{\rm T} (\varepsilon_{\gamma}/m_e c^2)^{-1} \simeq \sigma _{\rm T}/1000$, and the optical depth of the Bethe-Heitler process is ${\tau _{\rm BH}} = {r_{\rm c}}{n_{\rm c}}{\sigma _{\rm BH}} \simeq 0.05{r_{\rm c,14}}{n_{\rm c,10}}$, for the parameters adopted in our model, i.e., cloud density $n_{\rm c,10}=n_{\rm c}/10^{10}\,\rm cm^{-3}$ and cloud radius $r_{\rm c,14}=r_{\rm c}/10^{14}\,\rm cm$. In addition to e-$\gamma$ scattering and the Bethe-Heitler process for the high-energy gamma-rays, we next evaluate the $\gamma\gamma$ absorption due to the thermal radiation of the clouds. The shocked cloud emits through free-free radiation with a temperature $T_{\rm c} \approx {m_{\rm p}}V_{\rm c}^2/3k \approx 10^{7}\left( \frac{V_{\rm c}}{7\times 10^7 \, \rm cm/s} \right)^2\,\rm K$, and a luminosity for a single cloud ${L_{\rm X}} \simeq kT_{\rm c} \cdot 4\pi r_{\rm c} ^3 n_{\rm c}/\max ({t_{\rm ff}},{T_{\rm cloud}})\simeq 5\times 10^{37} \,\rm erg/s$, where $t_{\rm ff}\sim 2\times 10^{4} T_{\rm c,7}^{1/2}n_{\rm c,10}^{-1}\,\rm s$ is the cooling timescale of the free-free radiation and $T_{\rm cloud}\sim 1.4 \times 10^{6}\,\rm s$ is the CS crossing timescale.
The optical depth of the high-energy gamma-rays can then be estimated as \begin{equation} {\tau _{\gamma \gamma,c }} \sim {n_{\rm X}}{r_{\rm c}}{\sigma _{\gamma \gamma }} \sim 0.2{n_{\rm X}}{r_{\rm c}}{\sigma _{\rm T}} \sim {10^{ - 4}}n_{\rm X,7}r_{\rm c,14}\, \end{equation} with the most optimistic cross section, where the number density of these thermal photons can be written as ${n_{\rm X}} = \frac{{{L_{\rm X}}}}{{4\pi ck{T_{\rm c}}r_{\rm c}^2}} \simeq 1 \times {10^7}\,\rm c{m^{-3}}$, since the cloud is only moderately optically thin to its own radiation, with the optical depth ${\tau _{e\gamma,{\rm c} }} \simeq {r_{\rm c}}{n_{\rm c}}{\sigma _{\rm T}} \simeq 0.6 r_{\rm c,14} n_{\rm c,10}$ (the contributions of other clouds to the number density can be easily neglected). Therefore, the opacity ($\tau _{e\gamma }, \tau _{\rm BH}, \tau _{\gamma \gamma }$) caused by the cloud for the high-energy gamma-rays can be neglected. In addition, considering the superposition of the free-free emission of $C_{\rm v} r_{\rm o}^2/r_{\rm c}^2 \sim 10^3$ clouds (total luminosity $\sim 10^3 L_{\rm X}\sim 5 \times 10^{40} \,\rm erg/s$), the corresponding total flux is quite low, with a value of $\sim 5 \times 10^{-15}\,\rm erg\, cm^{-2} s^{-1}$ at the keV energy band, so that it is much lower than the observational upper limit of X-rays, even the deep upper limit of $9 \times 10^{-14}\,\rm erg\, cm^{-2} s^{-1}$ (0.3-10 keV) given by \emph{XMM} observations in \cite{stein2021tidal}.

\begin{figure} \centering \includegraphics[scale=0.55]{odf.eps} \includegraphics[scale=0.59]{G13F.eps} \caption{ The $\gamma\gamma$ optical depth (upper panel) and energy distribution (bottom panel) of the $\gamma$-ray emission. {\bf Upper panel:} the blue, yellow and red lines correspond to absorption due to TDE photons, host galaxy background light, and EBL and CMB, respectively. {\bf Bottom panel:} The predicted gamma-ray emission spectra during the outflow-cloud interactions. The blue, red and green lines present the spectrum for $\Gamma=1.5$, 1.7 and 2, respectively. The dotted and solid lines present the intrinsic and attenuated spectra. Also shown are the cumulative upper limits to the $\gamma$-ray flux observed by HAWC (dash-dot black line) and Fermi-LAT (dashed line). } \label{GF} \end{figure}

\subsection{Other radiation}

The BS accelerates both electrons and protons. The leptonic processes of the accelerated electrons also produce radiation. However, the ratio between the energy budgets of electrons and protons could be around $\sim 10^{-2}$ \citep{mou2020years1}, so the radiation luminosity from electrons is at most $\sim 10^{41}\rm erg\ s^{-1}$ for the fast-cooling case, and the corresponding flux is quite low, $\sim 10^{-14}\rm erg\ cm^{-2}\ s^{-1}$, and can be neglected. Moreover, the secondary electrons from $\gamma\gamma$ absorption may generate photons again via inverse Compton scattering, leading to electromagnetic cascades. As shown in Fig.~\ref{GF}, only the EBL and CMB absorption is significant, and it may result in electromagnetic cascades in the intergalactic medium. The deflection of the electrons by the intergalactic magnetic field is expected to spread out the cascade emission, which contributes little to the observed flux.

\section{Results compared with observations}\label{cwo}

We summarize in Table \ref{123} the fiducial values of the model parameters used in the calculation. The neutrino luminosity is presented in Fig. \ref{NF}. According to equation \ref{n}, we get the expected event number of 0.1-1 PeV neutrinos detected by IceCube.
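The evaluation of equation (\ref{n}) can be illustrated as follows. The sketch below is our own illustration (not the calculation behind the quoted numbers): it integrates a placeholder source spectrum with constant $E_\nu^2\,{\rm d}n_\nu/{\rm d}E_\nu$ over the quoted effective-area parametrization in the 0.1--1 PeV window. The effective area is assumed here to be in units of m$^2$, and the resulting $N_\nu$ simply scales linearly with the adopted neutrino luminosity.
\begin{verbatim}
import numpy as np

def integrate(y, x):                       # trapezoidal rule
    return np.sum(0.5*(y[1:]+y[:-1])*np.diff(x))

ERG_PER_TEV = 1.602
D   = 230 * 3.086e24                       # cm, luminosity distance
T_o = 1.58e7                               # s (~6 months)

# placeholder source spectrum: E^2 dn/dE = L_E = const in 0.1-1 PeV
L_E   = 1.0e41                             # erg/s  (illustrative value only)
E     = np.logspace(np.log10(100.), np.log10(1000.), 400) * ERG_PER_TEV  # erg
dn_dE = L_E / E**2                         # neutrinos / erg / s at the source

E_TeV = E / ERG_PER_TEV
A_eff = (2.058*E_TeV**0.6611 - 32.0) * 1.0e4      # m^2 -> cm^2 (assumed units)

N_nu = T_o/(4*np.pi*D**2) * integrate(A_eff*dn_dE, E)
print("N_nu(0.1-1 PeV) = %.1e  (scales linearly with L_E)" % N_nu)
\end{verbatim}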
Without considering the particle escape from the cloud, the expected number is $ 7.1\times10^{-3}$, $ 2.3\times10^{-3}$, $ 7.6\times10^{-4}$, and $3.8\times10^{-4}$ for $\Gamma=1.5$, 1.7, 1.9 and 2, respectively. However, if we consider particle escape, as described by equation \ref{eqescape}, the numbers change to $ 3.5\times10^{-3}$, $1.3\times10^{-3}$, $4\times10^{-4}$, and $1.9\times10^{-4}$, respectively. Thus, for the fiducial model parameters, the expected neutrino event number is somewhat lower than the expected neutrino number of $0.008 \lesssim N_\nu \lesssim 0.76$ for AT2019dsg \citep{stein2021tidal}.

The interactions produce $\gamma$-rays in the GeV to TeV energy bands. Considering the host galaxy distance of 230 Mpc, the photons above $\sim 100$ TeV cannot arrive at the Earth due to the absorption by the CMB and EBL. In addition, the absorption by the host galaxy photon field, based on the infrared observations \citep{skrutskie2006}, is moderate. Finally, the $pp$ processes produce a maximum $\gamma$-ray flux of up to $\sim 10^{-13}\ \rm erg\ cm^{-2}\ s^{-1}$ for $\Gamma =1.5-2$ in the bands of 0.1 GeV - 1 TeV, which is lower than the present gamma-ray observational limits by Fermi/LAT and HAWC (see Fig. \ref{GF}).

\section{Conclusion and discussion}

In this work, we considered the high-speed TDE outflows colliding with clouds to produce high-energy neutrinos and gamma-rays, which can explain the sub-PeV neutrino event in AT2019dsg. The assumed outflow velocity is $V_{\rm o}\sim 0.07\rm c$ and the kinetic luminosity is $L_{\rm kin}\sim 10^{45}\rm erg~s^{-1}$. The outflow-cloud interactions will produce the BS ahead of the clouds. Particle acceleration is efficient in the BS, and the pp process would contribute to the observed high-energy neutrinos and gamma-rays. We assumed an escaping parameter of $F=0.5$, which means half of the accelerated protons escape from the BS region while the rest enter the dense cloud, participating in pp collisions. Here we should point out that the escaping parameter $F$ cannot be constrained well, so its value is quite uncertain, and the factor $F$ may take a much smaller value in more realistic cosmic-ray escape models.

For the fiducial model parameters, the expected neutrino event number would be relatively low compared to observations. In order to reach the observed neutrino number, one has to invoke some challenging model parameters. For instance, (1) considering a higher cloud density or a larger cloud size, the escape time $\tau_{\rm es}$ will be much longer than $t_{\rm pp,CS}$ (see equation \ref{eqescape}) and the interactions of the protons which produce 0.1-1 PeV neutrinos become more efficient, so the expected number of neutrinos would increase by a factor of $\sim 2$ (see Fig.~\ref{NF}); (2) the fraction of energy converted from the outflow to protons relies on the covering factor of the BS, $C_{\rm v}$, and the shock acceleration efficiency $\eta$, so the expected number of neutrinos could increase by a factor of $\sim 10 (C_{\rm v} / 0.3)(\eta / 0.3)$ by taking a larger $C_{\rm v}$ and a larger $\eta$ than the fiducial values listed in Table~\ref{123}. On the contrary, the expected neutrino number could be reduced if a lower cloud column density, a smaller $C_{\rm v}$ and $\eta$, or a softer proton index is adopted.
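Since the expected event number is essentially proportional to the power injected into protons that interact inside the clouds, the parameter dependences discussed above can be condensed into the approximate scaling (our shorthand for the linear dependences stated in the text, not an additional calculation)
\begin{equation*}
N_\nu \approx N_{\nu,\rm fid}\left(\frac{L_{\rm kin}}{10^{45}\,\rm erg\ s^{-1}}\right) \left(\frac{C_{\rm v}}{0.1}\right) \left(\frac{\eta}{0.1}\right) \left(\frac{F}{0.5}\right),
\end{equation*}
where $N_{\nu,\rm fid}$ is the fiducial value for the adopted $\Gamma$ (e.g., $1.3\times10^{-3}$ for $\Gamma=1.7$ with escape), and the escape correction of item (1) can further raise the result by a factor of order two.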
As a result, the predicted neutrino number in our model depends on the uncertain model parameters, and in order to match the observations, some challenging parameter values have to be invoked.

In the above calculations, we anchored two parameters of the outflows, the kinetic luminosity $L_{\rm kin}=10^{45} \rm erg~s^{-1}$ and the velocity of the outflows $V_{\rm o}=0.07\rm c$. Numerical simulations indicate that a TDE can launch a powerful wind with a kinetic luminosity of $10^{44-45}$ erg s$^{-1}$ \citep{curd2019grrmhd}, or even higher \citep{dai2018unified}. AT2019dsg also exhibits radio flares, arising fifty days after the burst and lasting for more than one year \citep{stein2021tidal,cendes2021radio}. Modeling the radio flare with the outflow-CNM (circumnuclear medium) model suggests that the averaged kinetic luminosity is $10^{43}$ erg s$^{-1}$ \citep{stein2021tidal} or even lower \citep{cendes2021radio}. However, if the radio flare originates from the outflow-cloud interaction, which is the same scenario as our current model, the inferred kinetic luminosity may be of the order of $10^{44}$ erg s$^{-1}$ \citep{mou2021radio}. The radio outflow and the delayed neutrino may then be explained by the same physical process. The detected neutrino number is linearly proportional to the kinetic luminosity. For the case of $L_{\rm kin}=10^{44}\rm erg ~s^{-1}$, the modelled neutrino luminosity is presented in Fig. \ref{N44}. The expected neutrino number will be about one order of magnitude lower than the above values obtained in the case of $L_{\rm kin}\sim 10^{45}\rm erg ~s^{-1}$.

The velocity of the outflow is taken as $V_{\rm o}=0.07\rm c$, but this value is still uncertain: the radio observations suggest that the outflow velocity in AT2019dsg is $V_{\rm o}=0.12\rm c$ \citep{stein2021tidal}, 0.07c \citep{cendes2021radio}, or around 0.1c \citep{mou2021radio}. If the outflow velocity is higher, the maximum energy of the accelerated protons in the BS will also be higher accordingly. Then the peak of the neutrino luminosity will move to higher energies. If we integrate the neutrino number from $0.1-1$ PeV, the detected neutrino number would be different. For comparison, we also plotted the neutrino luminosity versus energy for the case of $L_{\rm kin}=10^{44}~\rm{erg~s^{-1}}$, $V_{\rm o}=0.12\rm c$ in Fig. \ref{N44}. Since we only integrate the neutrino number in the range of $0.1-1$ PeV, if $V_{\rm o}$ is about 0.04c and $E_{\rm p,max}\sim 20$ PeV, the neutrino SED will peak at $0.1-1$ PeV, and the expected number of neutrinos will increase by $50\%$.

After the submission of this work, we noticed recent reports on more neutrino events associated with time-variable emission from accreting SMBHs \citep{van2021establishing,reusch2021candidate}, among which AT2019fdr is a TDE candidate in a Narrow-Line Seyfert 1 AGN, in which BLR clouds should exist. Moreover, the neutrino events lag the optical outbursts by half a year to one year \citep{van2021establishing}, consistent with the assumption that clouds exist at a distance of $\sim 10^{-2}$ pc from the central BH if the outflow velocity is of the order of $10^9$ cm s$^{-1}$. The outflow--cloud interaction may also contribute to the high-energy neutrino background \citep{abbasi2021icecube}.

\begin{figure} \centering \includegraphics[scale=0.57]{diss5f.eps} \caption{The neutrino luminosity as a function of neutrino energy for different $L_{\rm kin}$ and $V_{\rm o}$, assuming $\Gamma =1.7$. The red line is the same as in Fig.~\ref{NF}, with fiducial values.
The blue line represents the case of $L_{\rm kin} = 10^{44}\ \rm erg/s$, resulting in an expected number of neutrino events of $1.3\times 10^{-5}$, about one order of magnitude lower than in the fiducial case. When the velocity of the outflow is 0.12c, the maximum energy of the accelerated protons reaches 180 PeV. The green line represents the case of $L_{\rm kin} = 10^{44}\ \rm erg/s$ and $V_{\rm o}= 0.12c$, leading to $E_{\rm p,max} = 180\ \rm PeV$ and an expected number of neutrino events of $9\times10^{-6}$. } \label{N44} \end{figure}

\section*{Acknowledgments}

We are grateful to the referee for the useful suggestions to improve the manuscript. This work is supported by the National Key Research and Development Program of China (Grants No. 2021YFA0718500, 2021YFA0718503), the NSFC (12133007, U1838103, 11622326, 11773008, 11833007, 11703022, 12003007, 11773003, and U1931201), the Fundamental Research Funds for the Central Universities (No. 2020kfyXJJS039, 2042021kf0224), and the China Manned Space Project (CMS-CSST-2021-B11).

\section*{Data Availability}

The data used in this paper were collected from the literature. These X-ray, gamma-ray, and neutrino observational data are publicly available to all researchers.

\bibliographystyle{mnras}
\section{Introduction}

In the last three decades, the acquisition of data on the shape of small heliocentric bodies, by ground and space-based observations \citep{Hudson1995,Hanuvs2013,Hanuvs2017} and by space-mission explorations -- such as the OSIRIS-REx and Hayabusa spacecraft \citep{Yoshikawa2015,Lauretta2017} -- fostered the study of the dynamics around these bodies. This class of objects, which includes asteroids, trans-Neptunian objects, Centaurs, and comets, characteristically has diameters of less than 1000~km \citep{Jorda2016}. Due to their small sizes, these bodies do not have enough mass to reach hydrostatic equilibrium, showing irregular and asymmetric shapes.

The development of space missions was a strong motivation for the search for equilibrium regions around irregular bodies, as accomplished, e.g., by \cite{Scheeres2000}, who obtained stable orbits around the asteroid 433 Eros for the spacecraft NEAR-Shoemaker \citep{Prockter2002}. Other works with such a purpose are \cite{Yu2012,Shang2015,Wang2016,Winter2020,Moura2020}. The discovery of satellites and rings around this class of objects was also a justification for the interest in the stability of systems of irregular bodies \citep{Chapman1995,Merline2002,Braga2014,Ortiz2017}.

When investigating the motion around irregular bodies, it is essential to consider the gravitational field generated by their odd shape. One method used for this is to approximate the irregular shape by a symmetric one -- such as a MacLaurin spheroid or a triaxial ellipsoid -- which allows studying the system theoretically or through low-cost simulations. Another course of action is to decompose the irregular body into a set of regular polyhedra \citep[Polyhedron Shape Model,][]{Werner1994} or mass points \citep[Mascon Model,][]{Geissler1996}. Despite the high level of accuracy, this methodology has a higher computational cost.

In the current and subsequent works, we study the dynamics around a class of objects classified by us as Non-Spherical Symmetric Bodies (NSSBs): contact binaries, triaxial ellipsoids with uniform density, and spherical bodies with a mass anomaly. The motion around NSSBs has already been studied in some articles, such as \cite{Lages2017}, which analysed the stability around contact binaries through a generalized Kepler map technique \citep{Meiss1992,Shevchenko2011}, obtaining chaotic gravitational zones around the central body, similar to those found for symmetrical elongated bodies \citep{Mysen2006,Mysen2007}. Their results are applicable to the asteroids 243~Ida and 25143~Itokawa \citep{Lages2017}. \cite{Lages2018} also use the Kepler map technique to study the chaotic region around cometary nuclei of dumb-bell shape, finding that such a region is responsible for engulfing most of the Hill sphere of Comet 1P/Halley. \cite{Amarante2020} studied the dynamics around 486958~Arrokoth, an object similar to a contact binary, using a Polyhedron Shape Model and found an unstable zone in the equatorial region of the asteroid. \cite{Rollin2021} find that particles in the equatorial plane of 486958~Arrokoth are lost due to the chaotic diffusion of the orbits, which results in collisions or particle ejection. Interestingly, \cite{Rollin2021} also obtain theoretical dumb-bell-shaped objects with certain combinations of mass and spin period that host, not a complete chaotic zone, but a chaotic ring.
The dynamics around triaxial ellipsoids was previously studied by \cite{Scheeres1994,Vantieghem2014}, and in particular by \cite{Winter2019}, who analysed the motion around 136108~Haumea, an ellipsoidal-shaped object. This dwarf planet is particularly interesting due to its complex system, which includes a pair of satellites, Hi’iaka and Namaka, and a ring \citep{Ragozzine2009,Ortiz2017}. The non-axisymmetric terms of the gravitational field of NSSBs create strong resonances between the orbital period of the ring particles and the spin of the central body. \cite{Ortiz2017} propose that the Haumea ring would be associated with the 1:3 resonance. However, \cite{Winter2019}, using Poincar\'e surfaces of section, showed that this resonance is doubled, generating a large chaotic region in the resonance separatrix. Consequently, the ring is not associated with the 1:3 resonance but probably with periodic orbits of the first kind.

10199~Chariklo is another irregular body with a complex system, involving a pair of narrow rings and possibly small satellites \citep{Braga2014,Berard2017}. The shape of Chariklo is still not well known. Observational data suggest triaxial and Jacobi ellipsoid shapes for the object \citep{Leiva2017}. \cite{Sicardy2020} discuss the possibility that Chariklo is a sphere with topographical features of a few kilometres, i.e., an object with a mass anomaly. Assuming a spherical Chariklo with a mass anomaly, \cite{Sicardy2019} and \cite{Sicardy2020b} show that particles inside the corotation radius migrate onto the body, and the outer material is pushed beyond the 1:2 resonance.

Here, we apply some well-known techniques to study the dynamics around a spherical body with a mass anomaly. Relations for the width and location of the spin-orbit resonances, a.k.a. sectoral resonances, are presented. The dependence of the resonances on the central body parameters is analysed. The Poincaré surface of section technique is also applied to the system to analyse the stability of the particles. We anticipate to the reader the existence of a chaotic region near the object with a mass anomaly. The extension of this region is measured, and a fitted equation is obtained as a function of the system parameters.

In Section~\ref{dynamicalsystem}, we present the disturbing function of our case of interest. In Section~\ref{theory}, we follow the prescription of the pendulum model developed by \cite{Winter1997a} and \cite{Murray1999} for the restricted planar 3-body problem (RP3BP) to obtain an analytical recipe for the location and width of the spin-orbit resonances. Section~\ref{secPSS} presents the Poincaré Surface of Section technique \citep{Henon1965a,Henon1965b,Henon1966a,Henon1966b,Henon1969,Jefferys1971}. In Section~\ref{overview}, we identify, through numerical simulations, stable regions and give an overview of the system. In Section~\ref{resoperi}, we use the Poincaré surface of section technique to confront our analytical model and study the spin-orbit resonances in detail. We apply our results to Chariklo in Section~\ref{chariklosection}, exploring the dynamics around the object, in particular in the region of the rings. We address our final comments in Section~\ref{secdiscu}.

\section{Dynamical System} \label{dynamicalsystem}

\begin{figure} \includegraphics[width=1.\columnwidth]{Figures/sistema.png} \caption{Schematic diagram of the trajectory of a particle around a spherical object with a mass anomaly at its equator.
The trajectory is fixed in the rotating frame with the central body's angular velocity $\omega$. $x_0$ is the initial position of the particle, and the red arrow indicates the initial velocity. \label{fig:system}} \end{figure} In the present work, we analyse the dynamics of particles orbiting a hypothetical spherical object of mass $M$ and radius $R$, with a mass anomaly $m_a$ at its equator (Figure~\ref{fig:system}). We assume the object with a uniform mass distribution, where the masses $M$ and $m_a$ have the same bulk density ($\rho=1~{\rm g/cm^3}$). The object is also assumed to rotate with constant angular velocity $\omega$ ($\omega=2\pi/T$, where $T$ is the rotation period) without wobbling motion. For simplicity, we will express our physical quantities in the following units: $GM=1$, while $R=1$ is the distance between the system centre and the mass anomaly. We also define as a unit the Keplerian frequency of the mass anomaly, scaled by the density $\rho$ of the object: \begin{equation} \omega_k=\sqrt{\frac{GM}{R^3}}=\sqrt{\frac{4\pi G\rho}{3}}=1. \end{equation} Two dimensionless parameters will define our dynamic system: the normalized mass anomaly $\mu=m_a/M$ and the rotating rate $\lambda=\omega/\omega_k$. Equations of motion in a frame $Oxy$ rotating with the same period as the central body's spin are given by \citep{Scheeres1996} \begin{equation} \ddot{x}-2\lambda\dot{y}=\lambda^2x+U_x \end{equation} and \begin{equation} \ddot{y}+2\lambda\dot{x}=\lambda^2y+U_y, \end{equation} where $U_x$ and $U_y$ stand for the partial derivatives of the gravitational potential. The potential acting on a particle with position-vector $\vec{r}=x\hat{x}+y\hat{y}$ ($r=|\vec{r}|$) in the rotating frame is obtained by adding the gravitational potential of the spherical portion of the object -- at the centre of the system -- with the gravitational potential of the mass anomaly, located at $\vec{R}=\hat{x}$ \citep{Sicardy2019}: \begin{equation} U(r)=-\frac{1}{r}-\frac{\mu}{|\vec{r}-\hat{x}|}+\lambda^2\mu(\vec{r}\cdot\hat{x}). \label{potential} \end{equation} Note that the potential given in Equation~\ref{potential} differs from that acting on a particle in the RP3BP \citep[][]{Murray1999} by the rotating parameter $\lambda^2$ in the indirect term. While the secondary mass in RP3BP surrounds the central body with Keplerian velocity $\omega_k$, here the mass anomaly rotates with angular velocity $\lambda\omega_k$. We introduced the rotating parameter to correct this difference. Similar to the dynamics of a particle in the RP3BP with a disturbing internal body, we obtain the expansion of the potential $U$ for the lowest order terms in eccentricity ($e$) as: \begin{equation} U=-\frac{1}{r}-\sum_{j=0}^\infty\sum_{m=-\infty}^\infty \mu e^j\left[\alpha F_jb^{(m-j)}_{1/2}(\alpha)+\frac{\lambda^2}{\alpha}f_j\delta_{|m|,1}\right]\cos{\phi}, \label{potentialexp} \end{equation} where $\alpha=1/a<1$ ($a$=semi-major axis of the particle), $b_{1/2}^{(m)}$ is the Laplace coefficient, $f_j$ and $F_j$ are linear operators \citep[Table~\ref{operatorF},][]{Murray1999,Ellis2000}, $\delta_{|m|,1}$ is the Kronecker delta and $\phi$ is a characteristic angle of the system relating the rotation of the central object with the longitudes of the particle. The characteristic angle associated with the sectoral resonances is presented in Section~\ref{theory}. \begin{table*} \centering \caption{The linear operators $f_j$ and $F_j$ for $j\leq 5$. 
The derivative operator D is given by D=d/d$\alpha$.\label{operatorF}} \begin{tabular}{lrl} \hline \hline j & $f_j$ & $F_j$ \\ \hline 1 & ${\rm -\frac{1}{2}}$ & ${\rm \frac{1}{2}\left[\left(-1+2m\right)+\alpha D\right]}$ \\ 2 & ${\rm -\frac{3}{8}}$ & ${\rm \frac{1}{8}\left[\left(2-7m+4m^2\right)+\left(-2+4m\right)\alpha D+\alpha^2D\right]}$ \\ 3 & ${\rm -\frac{1}{3}}$ & ${\rm \frac{1}{48}\left[\left(-6+29m-30m^2+8m^3\right)+\left(6-21m+12m^2\right)\alpha D+\left(-3+6m\right)\alpha^2D+\alpha^3D^3\right]}$ \\ \multirow{2}{*}{4} & \multirow{2}{*}{${\rm -\frac{125}{384}}$} & ${\rm \frac{1}{384}\left[\left(24-146m+211m^2-104m^3+16m^4\right)+\left(-24+116m-120m^2+32m^3\right)\alpha D+\left(12-42m+24m^2\right)\alpha^2D+\right. }$ \\ & & ${\rm \left.+\left(-4+8m\right)\alpha^3D^3+\alpha^4D^4\right]}$ \\ \multirow{2}{*}{5} & \multirow{2}{*}{${\rm -\frac{27}{80}}$} & ${\rm \frac{1}{3840}\left[\left(-120+874m-1595m^2+1110m^3-320m^4+32m^5\right)+\left(120-730m+1055m^2-520m^3+80m^4\right)\alpha D+\right.}$ \\ & & ${\rm \left.\left(-60-290m-300m^2+80m^3\right)\alpha^2D+\left(20-70m+40m^2\right)\alpha^3D^3+\left(5+10m\right)\alpha^4D^4+\alpha^5D^5\right]}$ \\ \hline \end{tabular} \end{table*} In conservative systems, such as those analysed in this work, the Jacobi constant $C_J$ is a conserved quantity used to obtain the Poincar\'e surface of sections. It is expressed here in the units $R^2\omega_k^2$ and is given by \citep{Scheeres1996} \begin{equation} C_J=\lambda^2(x^2+y^2)+2U(x,y)-\dot{x}^2-\dot{y}^2. \label{eq:Cj} \end{equation} \section{Sectoral resonances} \label{theory} At the planar limit, a pair of fundamental frequencies describe the motion of a particle: the synodic and radial epicyclic frequencies. The first, $n-\omega$ ($n$= angular frequency of the particle), corresponds to the frequency of the particle's return to a fixed position on the rotating frame. The second, $\kappa=n-\dot{\varpi}$, is the frequency of the particle's return to its pericentre, being $\dot{\varpi}$ the derivative of the particle's longitude of pericentre. If these frequencies are commensurable, the particle is in a sectoral resonance -- spin-orbit resonance -- with the central body. Once in resonance, the orbital evolution of the particle will be modelled by the energy balance provided by the resonant configuration. Sectoral resonances with real non-spherical bodies were studied in \cite{Borderes2018} and \cite{Winter2019} for the asteroid 4179~Toutatis and the dwarf planet Haumea, respectively. A particle at the centre of a $m$:$(m-j)$ resonance satisfies the resonance condition \citep{Sicardy2019} \begin{equation} m\omega-(m-j)n-j\dot{\varpi}=0, \label{phidot0} \end{equation} where $m$ and $j$ are integers responsible for giving the commensurability of the frequencies. For $j=0$, the particle is in corotation resonance, while for $j=m$, we have the apsidal resonances. Both cases are out of the scope of this work \citep[for details, see][]{Sicardy2019} and here we will focus on resonances with $j\geq1$, where the numerical value of $j$ gives the order of the resonance. When a particle is in a $m$:$(m-j)$ resonance, the characteristic angle $\phi$ -- also called resonant angle -- librates with an amplitude lower than $360^{\circ}$. The angle is given by \begin{equation} \phi=m\omega t-(m-j)\lambda_p-j\varpi, \label{phi} \end{equation} where $\lambda_p$ is the mean longitude of the particle. For simplicity, we ignore variations in the mean longitude of epoch. 
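As a concrete illustration of how trajectories in the rotating frame (and the Jacobi constant used later for the Poincaré surfaces of section) can be computed, the following minimal sketch integrates the planar equations of motion for the potential of equation (\ref{potential}) with a fixed-step RK4 scheme and monitors $C_J$. It is our own illustration rather than the integrator used in this work, and the signs are written so that gravity is attractive and $C_J$ is conserved, which fixes the sign convention adopted for $U$.
\begin{verbatim}
import numpy as np

mu, lam = 1.0e-3, 0.471          # mass anomaly and rotation rate (reference object)

def grad_U(x, y):
    # gradient of U(r) = -1/r - mu/|r - x_hat| + lam^2 mu x
    r3 = (x*x + y*y)**1.5
    d3 = ((x - 1.0)**2 + y*y)**1.5
    Ux = x/r3 + mu*(x - 1.0)/d3 + lam**2*mu
    Uy = y/r3 + mu*y/d3
    return Ux, Uy

def deriv(s):
    x, y, vx, vy = s
    Ux, Uy = grad_U(x, y)
    ax =  2*lam*vy + lam**2*x - Ux      # rotating-frame acceleration
    ay = -2*lam*vx + lam**2*y - Uy
    return np.array([vx, vy, ax, ay])

def jacobi(s):
    x, y, vx, vy = s
    U = -1.0/np.hypot(x, y) - mu/np.hypot(x - 1.0, y) + lam**2*mu*x
    return lam**2*(x*x + y*y) - 2.0*U - (vx*vx + vy*vy)

s = np.array([1.5, 0.0, 0.0, 0.15])     # x0, y0, vx0, vy0 (bound test orbit)
C0, h = jacobi(s), 2.0e-3
for _ in range(100000):                 # fixed-step RK4 integration
    k1 = deriv(s); k2 = deriv(s + 0.5*h*k1)
    k3 = deriv(s + 0.5*h*k2); k4 = deriv(s + h*k3)
    s += h*(k1 + 2*k2 + 2*k3 + k4)/6.0
print("relative Jacobi-constant drift: %.1e" % abs((jacobi(s) - C0)/C0))
\end{verbatim}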
\subsection{Resonance Location} \label{resonancelocation} The angular and radial epicyclic frequencies are given by \citep{Chandrasekhar1942} \begin{equation} n^2=\frac{1}{r}\frac{dU_0}{dr} \label{fn0} \end{equation} and \begin{equation} \kappa^2=\frac{1}{r^3}\frac{d(r^4n^2)}{dr}, \label{fk0} \end{equation} where $U_0$ is the axisymmetric part of the gravitational potential ($j=m=0$). From equation~\ref{potentialexp}, we obtain: \begin{equation} U_0=-\frac{1}{r}-\frac{\mu}{2r}b_{1/2}^{(0)}\left(\alpha\right). \end{equation} Expanding the Laplace coefficient up to second order in $\alpha$, \begin{equation} \frac{1}{2}b_{1/2}^{(0)}\left(\alpha\right)=1+\frac{1}{4}\alpha^2, \end{equation} we obtain the axisymmetric part of the gravitational potential for the spherical body with a mass anomaly: \begin{equation} U_0=-\frac{1}{r}\left(1+\mu+\frac{\mu}{4}\alpha^2\right). \end{equation} Keeping the lowest order terms in $\mu$ in eqs.~\ref{fn0} and \ref{fk0}, we obtain \begin{equation} n^2=\frac{1}{r^3}\left(1+\mu+\frac{3\mu}{4}\alpha^2\right) \label{fn} \end{equation} and \begin{equation} \kappa^2=\frac{1}{r^3}\left(1+\mu-\frac{3\mu}{4}\alpha^2\right). \label{fk} \end{equation} The location of the resonances can be obtained by numerical methods, such as the Newton-Raphson method \citep[see][]{Press1989,Renner2006}, by applying eqs.~\ref{fn} and \ref{fk} in the resonance condition (eq.~\ref{phidot0}). Table~\ref{tab:location} shows the location of the resonances in the ranges $-4\leq m\leq 4$ and $j\leq 5$ (up to fifth-order resonances). The central body is a Chariklo-type body with $\mu=10^{-3}$ and $\lambda=0.471$ \citep[$M=6.3\times 10^{18}$~kg and $T=7.004$~hr,][]{Leiva2017}, defined as our reference object. We assume $\mu=10^{-3}$ as a reference value because it is small enough for the centre of the system to be approximately the physical centre of the spherical portion and large enough for the effects of the mass anomaly to be observed. \begin{table*} \centering \caption{The location of the $m$:$(m-j)$ resonances in the ranges $-4\leq m\leq 4$ and $j\leq 5$. We assumed a central body with parameters based on the centaur Chariklo, with $\lambda=0.471$ and $\mu=10^{-3}$ (reference object). The resonances marked ``inside'' occur within the physical radius of the central body and, therefore, do not exist in the considered system. The resonances marked as ``apsidal'' are out of the scope of this work. 
\label{tab:location} } \begin{tabular}{cccccccccc} \hline \hline j & m $\rightarrow$ & -4 & -3 & -2 & -1 & 1 & 2 & 3 & 4 \\ \hline \multirow{2}{*}{1} & resonance & 5:6 & 4:5 & 3:4 & 2:3 & 1:2 & 1:0 & 2:1 & 3:2 \\[0.1cm] & $a/R$ & 1.909 & 1.993 & 2.156 & 2.612 & apsidal & inside & 1.256 & 1.358 \\ \hline \multirow{2}{*}{2} & resonance & 5:7 & 4:6 & 3:5 & 2:4 & 1:3 & 1:-1 & 2:0 & 3:1 \\[0.1cm] & $a/R$ & 2.156 & 2.313 & 2.612 & 3.423 & inside & apsidal & inside & inside \\ \hline \multirow{2}{*}{3} & resonance & 5:8 & 4:7 & 3:6 & 2:5 & 1:4 & 1:-2 & 2:-1 & 3:0 \\[0.1cm] & $a/R$ & 2.389 & 2.612 & 3.031 & 4.146 & inside & inside & apsidal & inside \\ \hline \multirow{2}{*}{4} & resonance & 4:8 & 3:7 & 2:6 & 1:5 & 1:-3 & 2:-2 & 3:-1 & 4:0 \\[0.1cm] & $a/R$ & 2.612 & 2.895 & 3.423 & 4.811 & inside & inside & inside & apsidal \\ \hline \multirow{2}{*}{5} & resonance & 4:9 & 3:8 & 2:7 & 1:6 & 1:-4 & 2:-3 & 3:-2 & 4:-1 \\[0.1cm] & $a/R$ & 2.825 & 3.164 & 3.793 & 5.433 & inside & inside & inside & inside \\ \hline \end{tabular} \end{table*} \subsection{Resonance Width} In this subsection, we follow the classical approach of the pendulum model, presented in \cite{Winter1997a} and \cite{Murray1999}, to obtain the resonance width for our case of interest. A particle is in a $m$:$(m-j)$ sectoral resonance when its resonant angle $\phi$ librates, which means that the particle oscillates in the rotating frame around the central position of the resonance (eq.~\ref{phidot0}). We can evaluate the maximum amplitude of a resonant particle through the temporal variations of $\phi$: \begin{equation} \dot{\phi}=m\omega-(m-j)n-j\dot{\varpi} \label{phidot} \end{equation} and \begin{equation} \ddot{\phi}=-(m-j)\dot{n}-j\ddot{\varpi}. \label{phiddot} \end{equation} Considering only the lowest order terms in eccentricity ($e$), we obtain using the Lagrange's equations \citep{Murray1999}: \begin{equation} \dot{n}=-3nC_r(m-j)e^j\sin{\phi} \label{ndot} \end{equation} and \begin{equation} \dot{\varpi}=je^{j-2}C_r\cos{\phi}, \label{varpidot} \end{equation} where \begin{equation} C_r=\mu \frac{n}{\alpha}\left[\alpha F_jb_{1/2}^{(m-j)}+\frac{\lambda^2}{\alpha}f_j\delta_{|m|,1}\right]. \label{CR} \end{equation} From equation~\ref{varpidot}, we obtain that the second derivative of $\varpi$ is \begin{equation} \ddot{\varpi}=j(j-2)e^{j-3}\dot{e}C_r\cos{\phi}-je^{j-2}C_r\sin{\phi}\dot{\phi}, \end{equation} where the time variation of eccentricity ($\dot{e}$) obtained through Lagrange's equations is $\dot{e}=-je^{j-1}C_r\sin{\phi}$. It can be shown that \begin{equation} \ddot{\varpi}=j^2e^{2(j-2)}C_r^2\sin{j\phi}-je^{j-2}C_r(m\omega-(m-j)n)\sin{\phi}. \end{equation} Therefore, \begin{equation} \begin{split} \ddot{\phi}=&-j^3e^{2(j-2)}C_r^2\sin{2\phi}+3nC_r(m-j)^2e^j\sin{\phi}+\\ +&j^2e^{j-2}C_r(m\omega-(m-j)n)\sin{\phi}. \label{phiddotb} \end{split} \end{equation} By inspection, we can evaluate the contribution of each term of equation~\ref{phiddotb}. The $C_r$ function is proportional to $\mu$, a value lower than one. In fact, for high values of mass anomaly ($\mu\gtrsim10^{-2}$), we can not assume the centre of mass of the system as the physical centre of the spherical object, and equation~\ref{potentialexp} is no longer applied -- this range of $\mu$ defines another NSSB, the contact binary. Since $\mu<<1$, the term that depends on $C_r$ will dominate those dependent on $C_r^2$, in principle. 
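Before analysing the libration widths order by order, we note that the resonance centres entering this analysis -- the locations listed in Table~\ref{tab:location} -- can be reproduced numerically by applying a standard root-finder to the resonance condition (eq.~\ref{phidot0}) with the frequencies of eqs.~\ref{fn} and \ref{fk}. The short sketch below does this for the reference object; the pair $(m,j)=(3,1)$ is chosen only as an example, and the residual difference with respect to the tabulated value is at the fraction-of-a-per-cent level.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

lam, mu = 0.471, 1.0e-3          # reference object

def freqs(a):
    # angular and radial epicyclic frequencies derived above, with alpha = 1/a
    alpha2 = a**-2
    n = np.sqrt((1.0 + mu + 0.75*mu*alpha2) / a**3)
    kappa = np.sqrt((1.0 + mu - 0.75*mu*alpha2) / a**3)
    return n, kappa

def resonance_condition(a, m, j):
    # m*omega - (m-j)*n - j*varpi_dot, with omega = lam (omega_k = 1)
    # and varpi_dot = n - kappa
    n, kappa = freqs(a)
    return m*lam - (m - j)*n - j*(n - kappa)

# example: the (j = 1, m = 3) entry of Table 2
a_res = brentq(lambda a: resonance_condition(a, 3, 1), 1.05, 10.0)
print(a_res)   # ~1.26 R, close to the 1.256 R listed in the table
\end{verbatim}
We now return to the term-by-term analysis of equation~\ref{phiddotb}.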
For first-order resonances ($j=1$), the first and third terms in equation~\ref{phiddotb} are proportional to $1/e^2$ and $1/e$, respectively -- recall that $e$ is a small value -- and dominate over the second term, proportional to $e$. \subsubsection{Second and higher-order resonances} For second and higher-order resonances, the eccentricity exponents in equation~\ref{phiddotb} are positive, and we can approximate the equation by \begin{equation} \ddot{\phi}+\omega_0^2\sin{\phi}=0, \label{eq8} \end{equation} where $\omega_0^2=3n|C_r|(m-j)^2e^j$. To obtain this result, we have assumed $m\omega-(m-j)n\approx0$, since the particle is in resonance. From equation~\ref{eq8}, we can see that a resonant particle is confined to a pendulum-like motion around an equilibrium position of the resonance. The number of equilibrium positions of an $m$:$(m-j)$ sectoral resonance is $j$. Analogously to the simple pendulum problem, the particle's reduced energy in the rotating frame is \begin{equation} E=\frac{\dot{\phi}^2}{2}+2\omega_0^2\sin^2\frac{\phi}{2}. \end{equation} The maximum possible energy of a librating pendulum ($\dot{\phi}=0$ and $\phi=180$~deg) defines the separatrix between libration and circulation of the resonant angle. That is, the separatrix corresponds to the boundary between bounded and unbounded motions. The energy of such a trajectory is $E=2\omega_0^2$, and the temporal variation of the resonant angle is $\dot{\phi}=\pm2\omega_0\cos(\phi/2)$. Relating $\phi$ and $n$: \begin{equation} dn=\frac{\dot{n}}{\dot{\phi}}d\phi=\pm\sqrt{3n|C_r|e^j}\sin{\frac{\phi}{2}}d\phi, \end{equation} we obtain, by integration, the range of angular frequency in which a particle is in an $m$:$(m-j)$ sectoral resonance: \begin{equation} n=n_0\pm\sqrt{12n|C_r|e^j}\cos{\frac{\phi}{2}} \label{2orplusn}, \end{equation} where $n_0$ is the central angular frequency of the resonance. Therefore, a particle is in a second or higher-order resonance if its semi-major axis meets the relation: \begin{equation} a=a_0\pm\left(\frac{16}{3}\frac{|C_r|}{n}e^j\right)^{1/2}a_0, \label{2orplus} \end{equation} where $a_0$ is the central semi-major axis of the resonance (Section~\ref{resonancelocation}). \subsubsection{First-order resonances} For $m$:$(m-1)$ resonances, none of the terms in equation~\ref{phiddotb} can be disregarded, requiring a different solution from the one obtained above. As an \textit{ansatz}, we assume a solution similar to eq.~\ref{2orplusn}, $n=n_0+k\cos(\phi/2)$, where $k$ is an as-yet-unknown constant. By integrating equation~\ref{phiddotb}, we obtain the kinetic energy of the system \begin{equation} \begin{split} \frac{1}{2}\dot{\phi}^2=&\int \ddot{\phi}d\phi=\frac{C_r^2}{e^2}\left(2\cos^2{\frac{\phi}{2}}+\cos^2{\phi}\right)+\\ -&6nC_r(m-1)^2e\cos^2{\frac{\phi}{2}}+\frac{4}{3}\frac{C_r}{e}(m-1)k\cos^3{\frac{\phi}{2}}, \label{e1} \end{split} \end{equation} where the constant arising from the integration was determined considering $\phi=0$~deg and $\phi=180$~deg. Applying $n=n_0+k\cos(\phi/2)$ to equation~\ref{phidot} and assuming that the particle is exactly at the centre of the resonance ($\phi=0$~deg and $\phi=180$~deg), we find that $m\omega-(m-1)n_0=-C_r/e$. Substituting this into equation~\ref{phidot}, we get \begin{equation} \begin{split} \frac{1}{2}\dot{\phi}^2=&\frac{1}{2}\frac{C_r^2}{e^2}(1+\cos{\phi})^2+\frac{1}{2}(m-1)^2k^2\cos^2{\frac{\phi}{2}}+\\ +&\frac{C_r}{e}(1+\cos{\phi})(m-1)k\cos{\frac{\phi}{2}}.
\label{e2} \end{split} \end{equation} Equating equations~\ref{e1} and~\ref{e2} and assuming $\phi=0$~deg: \begin{equation} (m-1)^2k^2+\frac{4}{3}\frac{C_r}{e}(m-1)k+12nC_r(m-1)^2e=0. \end{equation} Therefore, the boundaries of the angular frequency and semi-major axis within which a particle is in a first-order resonance are, respectively: \begin{equation} n=n_0\pm\sqrt{12|C_r|ne}\left(1+\frac{1}{27(m-1)^2e^3}\frac{|C_r|}{n}\right)^{1/2}-\frac{|C_r|}{3(m-1)e} \end{equation} and \begin{equation} \begin{split} a=&a_0\pm\left(\frac{16}{3}\frac{|C_r|}{n}e^j\right)^{1/2}\left(1+\frac{1}{27(m-1)^2e^3}\frac{|C_r|}{n}\right)^{1/2}a_0\\ +&\frac{2}{9(m-1)e}\frac{|C_r|}{n}a_0. \end{split} \end{equation} \section{Poincar\'e Surfaces of Section} \label{secPSS} The Poincar\'e surface of section technique is usually applied in studies of the RP3BP \citep{Henon1965a,Henon1965b,Henon1966a,Henon1966b,Henon1969,Jefferys1971,Winter1994a,Winter1994b} to analyse the dynamics of the third body, providing information such as the location and size of stable and chaotic regions, including the mean motion resonance regions. In the RP3BP, the problem is considered in a rotating system where the primary and secondary bodies are fixed, and only the third body moves freely. Some works have also adopted the Poincar\'e surface of section to study dynamical systems composed of two bodies, with a non-spherical central object. \cite{Scheeres1996} applied this technique to find periodic orbits around the asteroid 4769~Castalia. This technique was also applied by \cite{Borderes2018} and \cite{Winter2019} to study the region around Toutatis and Haumea, respectively. This work also applies the Poincar\'e surface of section to a two-body problem composed of a massive central body and a massless particle. Instead of the orbital motion between the primary and secondary bodies, it is the rotation of the central body that sets the motion of the rotating frame. The Poincar\'e surface of section applied to the two-body problem with a mass anomaly provides information about stability and resonances. However, in this case, there are spin-orbit resonances instead of mean motion resonances. Poincar\'e surfaces of section are maps generated in phase space through the intersection points of the particle orbits with a fixed section in the system. These maps are generated for fixed values of the Jacobi constant (equation~\ref{eq:Cj}). In Figure~\ref{fig:poincare}, we see an example of this map for a system composed of a massive central body with a mass anomaly. The Poincar\'e surface of section was defined in the plane $y=0$ around our reference object and for the fixed value of the Jacobi constant ${\rm C_J}=2.032~{\rm R^2\omega_k^2}$. We distributed the initial conditions on the $x$-axis. In Figure~\ref{fig:poincare}, the different sets of closed curves, called stability islands, delimit the stable regions of the system. Each stability island is formed by a single quasi-periodic orbit, so named because it does not have a well-defined orbital period. At the centre of the stability islands, we have periodic orbits. The latter always cross the Poincar\'e surface of section at the same points and can be classified into two kinds \citep{Poincare1895}: those not associated with resonances are of the first kind, and those associated with resonances are of the second kind. The point in the centre of all black closed curves is a first kind periodic orbit.
In contrast, the points in the centres of the blue and green islands are the second kind orbits associated with the 1:3 and 2:7 resonances, respectively (Fig.~\ref{fig:poincare}). A single stability island identifies a periodic orbit of the first kind, while one or more stability islands can identify an orbit of the second kind. The number of islands for the second kind orbits is related to the order of the resonance \citep{Winter1997b}. For example, the pair of blue islands in Figure~\ref{fig:poincare} is formed by quasi-periodic orbits that librate around the periodic orbit associated with the 1:3 resonance, a second-order resonance. In the same vein, each particle in the 2:7 resonance -- a fifth-order resonance -- generates five islands on the surface of section. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Figures/poincare.png} \caption{Poincar\'e surface of section for ${\rm C_J}=2.032~{\rm R^2\omega_k^2}$ around an object with $\mu=10^{-3}$ and $\lambda=0.471$. The black islands are quasi-periodic orbits associated with the periodic orbit of first kind. Blue islands are associated with the 1:3 resonance and the green ones with the 2:7 resonance. The red points are chaotic orbits that cross the phase plane irregularly. \label{fig:poincare}} \end{figure} A set of first and second kind orbits belonging to the same resonance usually appears in a continuous Jacobi constant range and defines a family of orbits. The structure and position of these families in the Poincar\'e surface of section evolve as the Jacobi constant varies. In addition to the stability region delimited by the islands, there are also unstable regions, filled with scattered red points in the figure, created by chaotic orbits. These chaotic regions are seen around the stability islands associated with the periodic orbits of the 1:3 and 2:7 resonances. They are associated with the resonance separatrix and do not enter the stable regions, as we can see in Figure~\ref{fig:poincare}. A stable region bounded by quasi-periodic orbits (black curves) separates the two chaotic regions. In the following sections, we use Poincar\'e surfaces of section to explore the stability around bodies with a mass anomaly by varying the central body parameters. \section{System overview} \label{overview} We studied the dynamics around our object by simulating a set of particles with pericentre distance $q$ and eccentricity $e$ in the ranges $1<q/R\leq q_f$ and $0\leq e\leq 0.5$, respectively ($\Delta e=0.05$ and $\Delta q/R=0.01$). $q_f$ is a given value of $q$ for which all particles survive. The particles were simulated for 10,000~orbits. We assumed the parameters $\lambda$ and $\mu$ in the ranges $0.01\leq \lambda\leq1$ and $10^{-6}\leq\mu\leq5\times 10^{-3}$, respectively. Except for the near-Earth asteroids, the vast majority of small heliocentric bodies have $\omega<\omega_k$ \citep{Warner2009}, justifying the fact that we do not focus on cases with $\lambda>1$. \begin{figure} \includegraphics[width=\columnwidth]{Figures/stable.png} \caption{Boundary curves between the chaotic (on the left) and stable (on the right) regions. The solid black line corresponds to the reference object, while the coloured solid and dashed lines are the cases in which we varied the parameters $\lambda$ and $\mu$, respectively. \label{fig:trans}} \end{figure} We verified in all numerical simulations the existence of a chaotic region just outside the central body in which particles collide or are ejected from the system.
Beyond the chaotic region, there is a stable region, and the boundaries between them are shown in Figure~\ref{fig:trans}. Particles with semi-major axis and eccentricity in the region bounded by the curve (on the left side of the figure) will be lost, while those outside the boundary will survive for at least 10,000 orbits. The solid black line corresponds to our reference object, while the solid coloured and dashed lines provide the boundary curves for systems where we vary $\lambda$ and $\mu$, respectively. The successive close encounters of the particle with the mass anomaly are responsible for exchanges of energy and angular momentum, resulting in the variation of the particles' orbital elements. Particles with sufficiently small semi-major axis show orbital evolution with a chaotic, diffusive character \citep[for details, see][]{Rollin2021}. In general, eccentricities in the chaotic region tend to increase until the particle collides with the central body or its orbit becomes hyperbolic. Figure~\ref{fig:rc} shows the trajectory in the rotating frame (a) and eccentricity (b) of a pair of particles initially in circular orbits around a central body with parameters $\lambda=0.471$ and $\mu=5\times 10^{-3}$. The semi-major axes of the innermost (red line) and outermost (blue line) particles are $1.74~R$ and $3.48~R$, respectively. We observe that the eccentricity shows a secular increase for the innermost particle, reaching values up to $0.15$. The particle collides with the central body after about $3.5\omega_k^{-1}$ or $\sim12$ spin periods. The eccentricity shows periodic variations for the outermost particle, and the particle remains stable around the central body. The boundary curves are robust against the final simulation period and are preserved when we extend the simulations to 100,000 orbits. \cite{Lages2017} analyse, through the Lyapunov exponent, the stability of particles around a contact binary, obtaining a chaotic region consistent with ours. Our boundary curve is also consistent with the region where the particles are lost in the numerical simulations for a Chariklo with a mass anomaly performed by \cite{Sicardy2019}. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.85\columnwidth]{Figures/rcr.png}} \subfigure[]{\includegraphics[width=\columnwidth]{Figures/rce.png}} \caption{a) Trajectory in the rotating frame and b) temporal evolution of the eccentricity. The innermost particle (red line) is at $1.74~R$ and the outermost one (blue line) at $3.48~R$, and both are initially in circular orbits. The parameters of the central body are $\lambda=0.471$ and $\mu=5\times 10^{-3}$. \label{fig:rc}} \end{figure} The boundary between the chaotic and stable regions has only a slight dependence on the relative mass anomaly. Although increasing $\mu$ produces only a small expansion of the chaotic region, it produces larger increments in eccentricity, and the particles are lost more quickly. The extent of the chaotic region is mainly affected by the parameter $\lambda$. Decreasing the rotating rate by a factor of 10 more than doubles the chaotic region, a result also obtained by \cite{Lages2017}. In order to crudely evaluate the extent of the chaotic region, we calculate, for a set of systems, the semi-major axis at which a particle in a circular orbit survives for at least 10,000 orbits -- the threshold semi-major axis ($a_t$).
The curve fitted from the numerical results is given by \begin{equation} \frac{a_{t}}{R}=\left[1.298-0.007\mathcal{M}+0.006\mathcal{M}^2+0.674\lambda^{-0.75}\right], \label{trs} \end{equation} where $\mathcal{M}=-\log{\mu}$. Physically, we can interpret the threshold semi-major axis as the minimum semi-major axis, beyond which rings and satellites can exist around a body with a mass anomaly. \begin{figure} \includegraphics[width=\columnwidth]{Figures/tmu.png} \caption{Threshold semi-major axis obtained in selected numerical simulations (markers) and through equation~\ref{trs} (solid lines). The $x$-axis gives the normalized mass anomalies, and the different colours and markers give the rotating rates. \label{fig:fit}} \end{figure} Figure~\ref{fig:fit} shows the threshold semi-major axis obtained in the numerical simulations. The $x$-axis gives the normalized mass anomaly of each simulation, and different colours and markers show the different rotating rates. The solid lines correspond to the curves given by equation~\ref{trs} (the colour of the lines matches the colour of the markers for the same $\lambda$). \begin{figure*} \centering \subfigure[$\mu=10^{-4}$]{\includegraphics[width=1.5\columnwidth]{Figures/q047_mu1e-4.png}} \centering \subfigure[$\mu=10^{-3}$]{\includegraphics[width=1.5\columnwidth]{Figures/q047_mu1e-3.png}} \centering \subfigure[$\mu=5\times 10^{-3}$]{\includegraphics[width=1.5\columnwidth]{Figures/q047_mu5e-3.png}} \caption{Semi-major axis ($a/R$) versus eccentricity ($e$) for systems, with $\lambda=0.471$ and a) $\mu=10^{-4}$, b) $\mu=10^{-3}$, and c) $\mu=5\times 10^{-3}$. Particles with initial $a/R$ and initial $e$ in the left white region have pericentre within the central body and collide. Particles in the grey area collide with the central body or are ejected, and those in the right white one remain in the system for more than 10,000 orbits. The dashed black lines correspond to the corotation radius, and the coloured lines provide the theoretical boundaries of the resonances. Coloured lines not referenced on the label and close to the corotation radius correspond to first order resonances with $|m|>4$. \label{fig:mu}} \end{figure*} \begin{figure*} \centering \subfigure[$\lambda=0.047$]{\includegraphics[width=1.5\columnwidth]{Figures/q004_mu1e-3.png}} \centering \subfigure[$\lambda=0.471$]{\includegraphics[width=1.5\columnwidth]{Figures/q047_mu1e-3.png}} \centering \subfigure[$\lambda=1.000$]{\includegraphics[width=1.5\columnwidth]{Figures/q100_mu1e-3.png}} \caption{Semi-major axis ($a/R$) versus eccentricity ($e$) of systems with $\mu=10^{-3}$ and rotating rate $\lambda$ given in the caption of each panel. Particles with initial conditions in the white region on the left have pericentre within the central body and collide, while those in the grey area show chaotic behaviour. The white region on the right is the stable region. The dashed black line provides the corotation radius, and the coloured lines give the theoretical boundaries of the resonances. Coloured lines not referenced on the label and close to the corotation radius correspond to first order resonances with $|m|>4$. \label{fig:lambda}} \end{figure*} Figure~\ref{fig:mu} shows the position and width of the sectoral resonances, obtained theoretically (Section~\ref{theory}), where each panel corresponds to a different normalized mass anomaly, while the rotating rate is fixed as $\lambda=0.471$. 
The vertical black line in each panel corresponds to the corotation radius $a_c$ of the system ($a_c=\lambda^{-2/3}$), and the coloured lines give the boundaries of the resonances. The white region on the left provides the initial conditions of particles with pericentre within the central body, and the white area on the right is the stable region. The grey area marks the chaotic region. Since the sectoral resonances are spin-orbit resonances, $\mu$ has only a minor effect on their locations, as seen in the figure. However, the resonance width does depend on $\mu$, as an increase in the mass anomaly enhances the gravitational perturbation felt by the particles, allowing larger regions to be connected to the resonance equilibrium points. As we increase the numerical value of $m$, the resonances approach the corotation radius. The first-order resonances with $|m|>4$ overlap for the case with $\mu=10^{-4}$ (Fig.~\ref{fig:mu}a). For $\mu=10^{-3}$ (Fig.~\ref{fig:mu}b), we see that additional first-order resonances, such as 4:5 and 3:4, overlap for high eccentricities, while for $\mu=5\times10^{-3}$ (Fig.~\ref{fig:mu}c) the overlap is intensified, covering the 2:3 resonance. The overlap of first-order resonances is generally responsible for eliminating the stable regions associated with the resonances \citep{Wisom1980,Winter1997a,Winter1997b}. Thus, it should contribute to the chaotic behaviour verified in the systems. It is not by chance that the region surrounding the corotation radius is always chaotic. However, while the overlap helps to carve the chaotic region around the central object, it is not the primary source of chaoticity for the system. Such a fact can be seen in Figure~\ref{fig:lambda}a, where the chaotic region covers an area with no overlap of first-order resonances. As already mentioned, encounters with the mass anomaly produce chaotic diffusion of the orbits, clearing an entire region that extends beyond the corotation radius. In our numerical simulations, we did not find stable particles in internal resonances. Analogously to Figure~\ref{fig:mu}, we present in Figure~\ref{fig:lambda} the resonances and the chaotic and stable regions around an object with a mass anomaly, now keeping $\mu$ constant and varying $\lambda$. As we can see in the figure, this parameter affects both the location and the width of the resonances (Eqs.~\ref{phidot0} and \ref{CR}). When $\lambda$ is increased by one order of magnitude, the resonances move more than four times closer to the body, while the chaotic region approaches it by only a factor of two. So, by changing $\lambda$, we change which resonances will be in the stable region. In the case shown in Figure~\ref{fig:lambda}c, the rotation frequency is equal to the Keplerian one, which places the corotation radius on the surface of the spherical portion of the body. Consequently, the internal resonances and some external ones reside within the central body, with only a few resonances in the stable region. Assuming objects with even faster rotation, we get a narrower chaotic region with fewer resonances outside the object, corresponding to less interesting cases. In the hypothetical case where the spin frequency tends to infinity, there would be neither sectoral resonances nor a chaotic region, since this limit corresponds to a non-rotating spherical object with a ridge at its equator. In Section~\ref{resoperi}, we analyse the evolution of the stable region and the external resonances using Poincar\'e surfaces of section.
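All of the surfaces of section discussed below can be generated along the lines of the construction described in Section~\ref{secPSS}: for a fixed Jacobi constant, particles are started on the $x$-axis and their successive crossings of the $y=0$ plane are recorded. A minimal sketch of this procedure is given below; it reuses the sign convention of the integration sketch above, and the choices of zero initial radial velocity, the positive root for the initial $\dot{y}$, and upward ($\dot{y}>0$) crossings are assumptions made here for illustration.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 0.471, 1.0e-3

def grad_potential(x, y):
    r3 = (x*x + y*y)**1.5
    dx, dy = x - 1.0, y
    d3 = (dx*dx + dy*dy)**1.5
    return (lam**2*x - x/r3 - mu*dx/d3 - lam**2*mu,
            lam**2*y - y/r3 - mu*dy/d3)

def rhs(t, s):
    x, y, vx, vy = s
    gx, gy = grad_potential(x, y)
    return [vx, vy, 2*lam*vy + gx, -2*lam*vx + gy]

def crossing(t, s):           # section plane y = 0
    return s[1]
crossing.direction = 1        # keep only upward (ydot > 0) crossings

def section_points(x0, CJ, t_max=5000.0):
    # start on the x-axis with xdot = 0; |ydot| follows from the Jacobi constant
    vy2 = (lam**2*x0**2 + 2.0/x0 + 2.0*mu/abs(x0 - 1.0)
           - 2.0*lam**2*mu*x0 - CJ)
    if vy2 < 0.0:
        return None           # x0 not accessible at this Jacobi constant
    s0 = [x0, 0.0, 0.0, np.sqrt(vy2)]
    sol = solve_ivp(rhs, (0.0, t_max), s0, events=crossing,
                    rtol=1e-10, atol=1e-12, max_step=0.1)
    pts = sol.y_events[0]
    return pts[:, 0], pts[:, 2]       # (x, xdot) at each crossing

# scan of initial positions at C_J = 2.032 R^2 omega_k^2 (reference example)
maps = [section_points(x0, 2.032) for x0 in np.linspace(3.2, 3.8, 13)]
\end{verbatim}
Plotting the recorded $(x,\dot{x})$ pairs for such a scan should reproduce the qualitative structure of Figure~\ref{fig:poincare}.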
\section{Stable Region} \label{resoperi} In Section~\ref{overview}, we have shown the existence of two distinct regions around a spherical body with a mass anomaly: a chaotic region where particles collide or are ejected, and a stable region, which will be our focus in this section. First, we compare the resonance widths obtained by numerical simulations with those predicted by the analytical model described in Section~\ref{theory}. We then analyse the motion of test particles in the vicinity of external resonances. We put our analytical model to the test using the following methodology: i) for a given resonance, we theoretically calculate its central position (Section~\ref{resonancelocation}) and the Jacobi constant (Equation~\ref{eq:Cj}), initially assuming a circular orbit; ii) we generate Poincar\'e surface of section of a broad region around the central position; iii) by visual inspection, we obtain the position of the stable fixed point of the resonance -- which corresponds to the real central position of the resonance -- and the limits of the widest island surrounding the point -- the width of the resonance; iv) then, we successively increase the eccentricity by $10^{-2}$ and repeat the previous steps until the islands disappear or until we reach $e=0.5$. \begin{figure*} \centering \subfigure[$\lambda=0.471$ and $\mu=10^{-3}$]{\includegraphics[width=1.6\columnwidth]{Figures/map_1T.png}} \centering \subfigure[$\lambda=0.157$ and $\mu=10^{-3}$]{\includegraphics[width=1.6\columnwidth]{Figures/map_3T.png}} \caption{The width of the external sectoral resonances in the stable region for a) an object with $\lambda=0.471$ and $\mu=10^{-3}$ and b) for an object with $\lambda=0.157$ and $\mu=10^{-3}$. The solid and dashed lines give the widths predicted by the analytical model, and the coloured filled regions delimit the obtained numerically widths. The grey region corresponds to the chaotic region near the central body. \label{fig:TSSP}} \end{figure*} Figure~\ref{fig:TSSP} shows the resonance widths obtained theoretically and through the Poincar\'e surfaces of sections, for our reference object and an object with $\lambda=0.157$ and $\mu=10^{-3}$. We found that numerical data agree reasonably well with the analytical model, indicating that the pendulum model with necessary adaptations applies to our system. In general, we obtain that the largest divergences occur for larger eccentricities ($e\gtrsim 0.2$). It is expected, since we assumed first-order approximations in eccentricity in the development of the pendulum model. We verify that the innermost resonances present the largest displacements in the central position for the reference case. These same displacements are verified for the case with $\lambda=0.157$ (Figure~\ref{fig:TSSP}b) in which resonances are at least twice as far from the central body. As a rule, we obtain that displacements depend on the distance $a_t/a_c$ from the resonance to the corotation radius. The central positions we obtained differ by less than 5\% from those theoretically obtained, demonstrating the robustness of the analytical method. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/2_5/P1T.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/2_5/DP_1T.png}} \centering \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/2_5/rotating.png}} \caption{a) Poincar\'e surface of section for $C_J=1.964~{\rm R^2\omega_k^2}$, with $\lambda=0.471$ and $\mu=10^{-3}$. 
We assume initial conditions with $3.15\leq x_0/R\leq 3.84$. The black curves are the periodic and quasi-periodic orbits of first kind, and the orange curves are orbits associated with the 2:5 resonance. Red dots correspond to chaotic orbits. b) Evolution of the 2:5 resonance islands, where the colours of the dots correspond to values of $C_J$ given on the figure's label. c) Central orbit of the 2:5 resonance for $C_J=1.964~{\rm R^2\omega_k^2}$ in the rotating frame. The temporal evolution of the orbit is given by numbers and dots equally spaced in time, while the colour-coding gives the velocity in the rotating frame. \label{fig:25}} \end{figure} After the validity of the analytical model is attested, we turn our attention to the resonance dynamics. Figure~\ref{fig:25}a shows Poincar\'e surface of section of the region around the 2:5 resonance for the reference object and $C_J=1.964~{\rm R^2\omega_k^2}$. In the figure, we can identify four types of motion in the stable region: periodic motion of first kind, quasi-periodic motion associated with the latter, resonant and chaotic motion -- which is not stable despite being within the region defined by us as stable. The single black dot at $x_0/R=3$ corresponds to the orbit classified as periodic of first kind. Periodic orbits in a Poincar\'e surface of section divide the $x$-axis positions into pericentric positions -- at smaller $x$ -- and apocentric positions -- at larger $x$. Seeing the right part of the figure, we have particles with higher initial eccentricity, forming black closed curves surrounding the periodic orbit. They are quasi-periodic orbits and define regions where particles remain indefinitely in stable motion without other effects. The orange islands correspond to orbits associated with the 2:5 resonance, where every single dot in the centre of an island is a stable fixed point of the resonance. All three orange dots in the figure correspond to a single periodic orbit of second kind. Due to energy exchanges between the central body and resonant particles, the latter can remain stable even in the presence of other effects, depending on the system conditions. The red dots show the chaotic zone between the resonant and quasi-periodic orbits. Appendix~\ref{pssrc} shows Poincar\'e surfaces of section of the resonances found around the reference object. As one can see, the chaotic zones at the resonances separatrices are always narrow, showing that the region we named the ``stable region'' has in fact a few very small strips of confined chaotic motion. Moving to the right in Figure~\ref{fig:25}a, there is another region with quasi-periodic orbits that extends up to $x_0/R=3.57$. After this limit, we have the chaotic region, where the red dot at $x_0/R=3.65$ corresponds to a particle that collides with the central body. In Figure~\ref{fig:25}b, we present the evolution of the 2:5 resonance, showing the largest stable orbit of the resonance, an intermediate one, and the central orbit, for different values of the Jacobi constant. Since the 2:5 resonance is a third-order resonance ($j=3$), each initial condition produces three distinct islands in Poincar\'e surfaces of section. Except for the case with $C_J=1.959~{\rm R^2\omega_k^2}$, we see that the three islands shrink and get closer as the value of $C_J$ increases. The Jacobi constant and the eccentricity are inversely proportional, so the latter decreases from right to left in the figure. 
The resonance width decreases with the eccentricity, and the resonant orbits tend towards the periodic orbit of the first kind, explaining why the islands shrink until they disappear. To understand why the largest red island is smaller than the largest blue island (Figure~\ref{fig:25}b), we present in Figure~\ref{fig:25zoom}a the Poincar\'e surface of section for $C_J=1.959~{\rm R^2\omega_k^2}$, and in Figure~\ref{fig:25zoom}b, the theoretical and numerical boundaries of the 2:5 resonance. The red dashed line marks the case with $C_J=1.959~{\rm R^2\omega_k^2}$ and the grey area is the chaotic region. For this value of the Jacobi constant, the resonance is at the edge between the stable and chaotic regions. As a result, the particles most strongly bound to the resonance -- those closer to the stable fixed point -- remain stable (in orange), while particles closer to the resonance boundaries initially follow the pattern expected for resonant particles. However, the latter are in fact exhibiting the stickiness phenomenon: they mimic the resonant behaviour for a while, but are lost from the system at some point. The eccentricity of one of these weakly bound particles is shown in Figure~\ref{fig:25zoom}c by the solid red line, while the eccentricity of the central resonant orbit is the solid orange line. Both particles show a periodic variation in eccentricity. However, the eccentricity of the weakly bound particle also shows an increase, reaching $e\sim 0.145$. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/2_5/P1T2.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/2_5/zoom2_5.png}} \centering \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/2_5/eccentricity.png}} \caption{a) Poincar\'e surface of section for $C_J=1.959~{\rm R^2\omega_k^2}$ where the periodic/quasi-periodic orbits of first kind are in black, the 2:5 resonance islands are in orange, and the particles in the chaotic region are in red. b) Theoretical boundaries of the 2:5 resonance are shown by the solid orange lines. In contrast, the filled orange and grey regions are the numerically obtained 2:5 resonance region and chaotic region, respectively. The red dashed line gives the initial conditions of the simulation with $C_J=1.959~{\rm R^2\omega_k^2}$. c) Eccentricity of a pair of particles: the one that remains in the system is orange, and the unstable one is red.\label{fig:25zoom}} \end{figure} Figure~\ref{fig:25}c shows, in the rotating frame, the periodic orbit of the second kind seen in Figure~\ref{fig:25}a, where the colour-coding gives the velocity. Since the particle is at the stable fixed point of the resonance, the orbit is closed. Also, the orbit is retrograde ($v<0$) because the resonance is beyond the corotation radius. As the central body is symmetric, there will always be at least one axis that divides the orbit into two symmetric parts. For example, for the orbit shown in Figure~\ref{fig:25}c, this axis corresponds to $y=0$. \cite{Sicardy2020} discusses some additional symmetries expected for the trajectory of a particle in an $m$:$(m-j)$ sectoral resonance. The orbit is invariant under a rotation of $360/|m|$~deg, and it has a total of $|m|(j-1)$ self-crossings. For the 2:5 case ($j=3$ and $m=-2$), we see that the orbit is invariant under a rotation of 180~deg and has four self-crossings. A peculiarity of our system is the positions of the particle pericentre and apocentre.
In RP3BP, in which the disturbing body is at $x_0/R=1$ and the particle is initially at $x_0/R<1$, the gravitational effect felt by the particle is weaker (stronger) when it is on the $x$-axis at $x/R>0$ ($x/R<0$). Consequently, the particle starts at the pericentre, the apocentre being in the opposite direction. In our case, we have the opposite scenario. The orbit position where a particle feels the strongest gravitational effect is on the $x$-axis at $x/R>0$ -- where the modulus of gravitational force is the sum of the forces of the two portions of the central body. Thus, a particle initially on the $x$-axis ($x/R>0$) starts at its apocentre (minimum velocity), as we can see from the dot labelled ``1'' in Figure~\ref{fig:25}. \begin{figure*} \begin{minipage}[t]{0.49\linewidth} \vfill \vfill \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_4/P1T.png}} \end{minipage} \hfill \begin{minipage}[t]{0.49\linewidth} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_4/DP_1T.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_4/rotating.png}} \end{minipage} \caption{a) Poincar\'e surface of section for $C_J=2.087~{\rm R^2\omega_k^2}$, with $\lambda=0.471$ and $\mu=10^{-3}$. We assumed initial conditions with $3.70\leq x_0/R\leq 5.97$ and separated the distinct types of orbits by colour: the periodic/quasi-periodic orbits of first kind are in black, the 1:4 resonance orbits are in purple and green and chaotic ones in red. b) Resonance islands for different values of $C_J$. The label on the panel gives the colour of the largest island for each value of $C_J$. c) Central orbit in the rotating frame of one of the families associated with the 1:4 resonance (in green in the top panel) for $C_J=2.087~{\rm R^2\omega_k^2}$. The numbers and colours on the panel provide time evolution and velocity in the rotating frame, respectively. \label{fig:14}} \end{figure*} Figure~\ref{fig:14} shows, from top to bottom, the Poincar\'e surface of section for $C_J=2.087~{\rm R^2\omega_k^2}$, the whole evolution of the islands of the 1:4 resonance and the trajectory of a particle at a stable fixed point of the resonance. As shown in Figure~\ref{fig:14}a, the overview of the resonance neighbourhood is similar to the 2:5 resonance, with a narrow, chaotic region at the resonance boundaries, surrounded by a region with periodic/quasi-periodic orbits. A crucial difference, however, is obtained in the resonance islands. While the 2:5 resonance has three stable fixed points, we obtained in Figure~\ref{fig:14}a six stable points for the 1:4 resonance. To understand the dynamics of the resonance, we colour green (Figure~\ref{fig:14}a) the trajectory of a particle near one of these points. The particle is responsible for forming three islands around three of the stable fixed points (in green in the figure). This fact leads us to conclude that the resonance is the 1:4 (of third-order) and not the 2:8 (of sixth-order) as we would obtain in the ellipsoidal problem -- which we will address in a following publication. The islands produced by the particle have the particularity of being asymmetric in relation to the $x$-axis -- we say that the particle is in asymmetric libration \citep{Beauge1994,Winter1997b}. In Figure~\ref{fig:14}b, we highlight the Poincar\'e surface of section islands of some particles by plotting them in black, intending to show the asymmetric libration. 
Each island produced by a particle in asymmetric libration has a mirror image obtained from the motion of a different particle in asymmetric libration. Closer to the resonance boundaries, we also get ``horseshoe fashion'' orbits encompassing pairs of fixed points of two different trajectories. When we refer to asymmetric resonance or libration, we refer to the symmetry of the trajectory in Poincar\'e surface of section and not in the $xy$-plane. As already mentioned, the trajectory in the $xy$-plane of the resonant particles has a symmetry axis due to the symmetric mass distribution in the central body. For example, in Figure~\ref{fig:14}c, the axis of symmetry would correspond to the axis connecting the point ``6'' to the centre of the system. Several works such as \cite{Message1970}, \cite{Frangakis1973a,Frangakis1973b}, \cite{Message1978}, \cite{Bruno1994}, \cite{Beauge1994} and \cite{Winter1997b} have studied asymmetric periodic orbits in the context of RP3BP, showing that these orbits are characteristics of $1:1+p$ resonances and are obtained only for particles with eccentricities above a critical value. Similar to particles in $m:m-j$ resonance with $m\neq -1$, the ones with eccentricity lower than this threshold value present symmetric libration in Poincar\'e surface of section. We obtained these same results for the case with mass anomaly. In Figure~\ref{fig:14}b, the critical eccentricity is reached somewhere between the Jacobi constants $2.122~{\rm R^2\omega_k^2}$ and $2.152~{\rm R^2\omega_k^2}$. Carrying out a set of Poincar\'e surface of section in this interval, we obtain that the critical eccentricity for the 1:4 resonance is $e\sim0.167$ ($2.136~{\rm R^2\omega_k^2}$). Figure~\ref{fig:14bisec}a shows one island of the 1:4 resonance for the critical eccentricity ($2.136~{\rm R^2\omega_k^2}$, in green), for $2.133~{\rm R^2\omega_k^2}$ (in purple) and $2.139~{\rm R^2\omega_k^2}$ (in blue). For the highest value of Jacobi constant (smallest eccentricity), we see a single stable point in the figure related to a single family of resonant orbits. The critical eccentricity is reached by decreasing the Jacobi constant, and the stable point bifurcates into two points (the stars in the figure). Each of the points gives rise to an independent family of resonant orbits. The $x$-axis, which previously allocated the single stable point, now allocates the unstable equilibrium point after the bifurcation, corresponding to the inflexion position of the ``horseshoe fashion'' orbits. Figure~\ref{fig:14bisec}b shows the trajectories of the stable points given by stars in the top panel. The orbits are mirror versions of each other. The same is obtained for eccentricities in Figure~\ref{fig:14bisec}c, in which the red curve is the mirror version of the black one with respect to time $t\approx4.2\omega_k^{-1}$ (pericentre passage time). As discussed in \cite{Bruno1994}, the bifurcation of the stable points is related to the indirect term of the disturbing function, which differs from zero only for $1:1+p$ resonances (equation~\ref{potentialexp}). \begin{figure} \centering \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_4/bisec.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_4/orb.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_4/ecclim.png}} \caption{a) Poincar\'e surface of section of one island of the 1:4 resonance for $C_J=2.133$,~$2.136$, and~$2.139~{\rm R^2\omega_k^2}$ (in purple, green, and blue, respectively). 
The black and red stars are the stable points obtained after bifurcation. b) Trajectories and c) eccentricities of the stable points given by stars in the top panel, where the colour of the solid lines coincides with the colour of the star for the same stable point.\label{fig:14bisec}} \end{figure} We show another example of particles in asymmetric libration in Figure~\ref{fig:12}. From top to bottom, this figure shows Poincar\'e surface of section of the region of 1:2 resonance for a central body with $\lambda=0.157$ and $\mu=10^{-3}$, the whole evolution of the resonance, and the trajectory of a particle in a stable fixed point of the resonance. As in the reference case, chaotic behaviour is seen only in a narrow region in the separatrices, with a large regular region of periodic/quasi-periodic orbits of first kind around the resonances. For the 1:2 resonance, we have a low critical eccentricity ($e\sim 10^{-2}$), with symmetric libration only in the cases where the resonance islands are tiny. Trajectories of particles in 1:2 resonance are the only ones without self-crossings in the rotating frame, as we can see in the bottom panel. Such fact has implications for the temporal evolution of a ring of particles, as self-crosses increase collisions between particles. In this context, a ring with particles into the 1:2 resonance or in periodic/quasi-periodic orbits of first kind -- which do not show self-crossing either -- should have a lower rate of collisions than a ring with particles in other resonances, disregarding other external effects. The particles shown in Figures~\ref{fig:14}c and \ref{fig:12}c do not start at the apocentre of the orbit because they are not initially with $\dot{x}=0$. A particle around a body with mass anomaly will start at its apocentre only when that condition is met. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_2/P3T.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_2/DP_1T.png}} \centering \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/1_2/rotating.png}} \caption{a) Poincar\'e surface of section for $C_J=0.915~{\rm R^2\omega_k^2}$, with $\lambda=0.157$ and $\mu=10^{-3}$. The non-resonant orbits are in black. Particles in 1:2 resonance and chaotic orbits are in orange and green and red, respectively. b) Poincar\'e surface of section for some particles in 1:2 resonance, with $C_J=0.907$, $0.912$, $0.915$, and $0.918~{\rm R^2\omega_k^2}$. Different colours of the islands involved by the same ``horseshoe fashion'' orbit correspond to different particles. c) Trajectory of a stable fixed point shown in orange in the top panel, where the colour-coding gives the velocity and the numbers and dots, the time evolution of the orbit. \label{fig:12}} \end{figure} \section{Application to the Chariklo system} \label{chariklosection} \cite{Leiva2017} using stellar occultation data, investigated the shape of Chariklo, obtaining four distinct shapes models for the object: a sphere, a MacLaurin spheroid, a triaxial ellipsoid, and a Jacobi ellipsoid. According to \cite{Sicardy2019}, observational data suggest the presence of topographic features of typical heights of 5~km in the spherical solution. This fact places Chariklo as a possible body with a mass anomaly. In this section, we briefly study the dynamics around Chariklo, in particular in the region of the ring. The rings have orbital radii of 391~km and 405~km, with radial widths of 7~km and 3~km, respectively \citep{Berard2017}. 
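The rotating rate adopted for Chariklo in what follows can be checked directly from its spin period and from the bulk density assumed throughout ($\rho=1~{\rm g/cm^3}$); a short numerical check of these values is:
\begin{verbatim}
import numpy as np

G, rho = 6.674e-11, 1000.0                 # SI units; rho = 1 g/cm^3
omega_k = np.sqrt(4.0*np.pi*G*rho/3.0)     # density-scaled Keplerian frequency
omega = 2.0*np.pi/(7.004*3600.0)           # Chariklo spin, T = 7.004 hr
print(omega/omega_k)                       # lambda ~ 0.471
\end{verbatim}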
We performed numerical simulations adopting the spherical Chariklo given by \cite{Leiva2017}, $\lambda=0.471$, with a mass anomaly of $\mu=7\times10^{-6}=(5~{\rm km}/(2\times129~{\rm km}))^3$. Figure~\ref{fig:chgeneral} shows the width of the resonances and the location of the chaotic region. The vertical dashed line gives the corotation radius and the central location of the rings by the vertical dotted lines. We obtained a threshold semi-major axis of $a_t/R=2.5$ in the numerical simulation. This result is in good agreement with our adjusted equation (eq.~\ref{trs}) which returns $a_t/R=2.6$. The 1:2 resonance is the only first-order resonance beyond the chaotic region. The region beyond the chaotic one is essentially stable, hosting the rings and possibly moons, depending on their eccentricity. \begin{figure*} \centering \includegraphics[width=1.5\columnwidth]{Figures/chariklo/ae_M_mu_T.png} \caption{Semi-major axis versus eccentricity for Chariklo system, where coloured lines place the sectoral resonances, and the grey area corresponds to the chaotic region. Coloured lines not referenced on the label, between $1.4-1.8$, correspond to first order resonances with $|m|>4$. A vertical dashed line at $a/R\approx1.7$ gives the corotation radius, while vertical dotted lines give the central location of the rings.\label{fig:chgeneral}} \end{figure*} \cite{Leiva2017} proposes that the inner ring is associated with the 1:3 spin-orbit resonance. Therefore, we studied this resonance in detail, as it is close to both rings. Figure~\ref{fig:ch13}a shows Poincar\'e surface of section for the largest Jacobi constant obtained by us for the 1:3 resonance ($C_J=2.038~{\rm R^2\omega_k^2}$). For this value of $C_J$, the resonance has not reached the critical eccentricity, and we obtain only a single symmetric periodic orbit. A narrow, chaotic region surrounds the islands of resonance, but the whole set is surrounded by a stable region associated with periodic/quasi-periodic orbits of first kind. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.90\columnwidth]{Figures/chariklo/P1_3_1T.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/chariklo/gir.png}} \subfigure[]{\includegraphics[width=0.88\columnwidth]{Figures/chariklo/radius.png}} \caption{a) Poincar\'e surface of section of Chariklo system for $C_J=2.038~{\rm R^2\omega_k^2}$. We show different orbits by different colours: the non-resonance orbits are black, the 1:3 resonant orbits are blue, and the chaotic ones are red. b) motion in the rotating frame for $y>0$ and c) radial variation of periodic orbits shown in panel a). The orbits of the first and second kind are given by black and blue lines, respectively, and the green regions correspond to the positions of Chariklo rings. \label{fig:ch13}} \end{figure} Figure~\ref{fig:ch13}b shows the motion in the rotating frame of the periodic orbits given in Figure~\ref{fig:ch13}a, in which the colour of the orbits matches those given in the top panel, and the green regions correspond to the location of the rings. For clarity, we only show the portions of the orbits with $y>0$. We show the radial variation of the orbits in Figure~\ref{fig:ch13}c. The 1:3 resonance orbit has one self-crossing at $y=0$ and a period of almost $6.4\omega_k^{-1}$. In contrast, the trajectory of the first kind follows the Chariklo shape, with a period of almost $3.2\omega_k^{-1}$. As one can see in the figure, both orbits are initially in the inner ring -- near its outer edge. 
However, only the first kind of periodic orbit remains within the ring throughout the simulation, while the resonant orbit crosses the ring edges and reaches the outer ring. The difference in radial variation is due to the different nature of the orbits. Periodic orbits of first kind correspond to nearly circular orbits, while those of second kind are intrinsically eccentric, explaining why the latter has a significantly larger radial variation. Here, when we refer to eccentricity, we are referring to osculating elements defined in the context of the classical 2-body problem. We refer the reader to the work of \cite{Ribeiro2021} for a detailed discussion regarding the orbital elements in the context of NSSBs. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/chariklo/ae_M_mu_Tzoom.png} \caption{Diagram of the semi-major axis versus eccentricity. The green regions show the range of values that corresponds to the location of the rings. The blue line shows the orbital elements obtained for the central orbit of the 1:3 resonance, and the blue filled regions give the boundaries of the resonance. The black line gives the periodic orbits of first kind. \label{fig:chzoom}} \end{figure} The results discussed in the last paragraph lead us to question whether the inner ring is associated with the 1:3 resonance. To verify this, we present in Figure~\ref{fig:chzoom} a diagram with the semi-major axis versus eccentricity for a range of values corresponding to the rings (in green). We also show the orbital elements obtained for the periodic orbits of first and second kinds (black and blue lines, respectively). The region filled in blue shows the 1:3 resonance boundaries obtained in the Poincar\'e surface of sections. The largest possible eccentricity for a particle to remain within the boundaries of the inner ring is $e=9\times10^{-3}$, which is smaller than the smallest eccentricity obtained for the resonant orbits ($e=10^{-2}$). In addition, the resonant periodic orbits and the ring are displaced, indicating that the ring is not confined by such resonance. Meanwhile, periodic orbits of first kind cover a broad region and encompass both rings. The entire region shown in Figure~\ref{fig:chzoom}, which is not associated with the 1:3 resonance (blue region), is composed of periodic/quasi-periodic orbits of first kind, including the ring region. Therefore, we conclude that Chariklo rings are associated with first kind orbits and not with the 1:3 resonance, as proposed by \cite{Leiva2017}. Similar results were obtained by \cite{Winter2019} for the Haumea ring. \section{Conclusions and discussion} \label{secdiscu} In this paper, we have attempted to perform a general analysis of the dynamics of particles around a spherical body with a mass anomaly. For this, we used well-known techniques of the 3-body problem study, varying the parameters of the central object. We can summarise our overall results as follows: \begin{itemize} \item The pendulum model with the necessary adaptations and the Poincar\'e surface of section proved to apply to the mass anomaly problem. We verified a strong agreement between the results by comparing both techniques. \item There is a chaotic region near the central object where particles collide or are ejected due to chaotic diffusion caused by successive close-encounters with the mass anomaly. \cite{Mysen2006,Mysen2007} and \cite{Lages2017} also obtains chaotic regions near the central object for elongated bodies and contact binaries, respectively. 
\item For the set of parameters analysed by us, the chaotic region extends beyond the corotation radius. This fact indicates a lack of stable internal sectoral and corotation resonances in the mass anomaly system. \item The locations of the resonances are mainly set by the mass of the spherical portion and the spin period. In contrast, the masses of the spherical and anomalous portions of the body, together with the spin period, determine the width of the resonances. \item Beyond the chaotic region, there is a region where the motion of the particles is dynamically stable. In such a region, chaotic behaviour appears only in narrow strips at the separatrices of the resonances. \item The behaviour of the particles in the external sectoral resonances is similar to that obtained for the mean motion resonances in the RP3BP \citep{Winter1997a,Winter1997b}. As in the RP3BP, we verify the existence of asymmetric periodic orbits associated with $1:1+p$ resonances. \end{itemize} Although objects with the shape assumed in this work are unknown so far, the completely irregular shapes known for some asteroids lead us to speculate that such a class of objects might exist. We emphasize that bodies with a mass anomaly are perfectly reasonable outcomes of the impact of a satellite that spirals towards the central body due to tidal dissipation, or of a low-velocity collision between two objects with partial accretion \citep{Leinhardt2011}. \cite{Sicardy2019} discuss the possibility of Chariklo having a spherical shape with a topographic feature with $\mu\sim10^{-5}$, which makes the Centaur a first candidate to belong to the class of mass anomaly objects. We studied the dynamics around a Chariklo with a mass anomaly and found that 1:3 resonant particles present radial variations too large for the radial extension of the inner ring. On the other hand, particles in periodic/quasi-periodic first kind orbits show radial motions that match the extension of the two rings of Chariklo. Consequently, the rings must be associated with these orbits and not with orbits of the second kind, as proposed by \cite{Leiva2017}. With the constant increase in data on small heliocentric bodies, we believe that objects with shapes similar to bodies with mass anomalies may soon be detected. It is essential to point out that, in the current study, we limited ourselves to analysing the dynamics of an isolated particle around an NSSB, disregarding effects associated with the ring particles, such as collisions between them and local viscous and self-gravity effects. We also disregarded external effects that modulate the dynamics of small particles, such as solar radiation pressure and Poynting-Robertson drag. Nevertheless, the location and width of the resonances and of the chaotic region are general results and should remain almost unchanged in the presence of other effects. Therefore, our work presents some tools and first general results for studies on the dynamics of mass anomaly systems. \section*{Acknowledgements} This study was financed in part by the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior - Brasil (CAPES) - Finance Code 001, Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) - Proc.~2016/24561-0 and Proc.~2018/23568-6, and Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq) - Proc.~305210/2018-1 and Proc.~313043/2020-5. Finally, we thank the anonymous reviewer for the comments that significantly improved our work.
\section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} In our project, we focus on NLP-based hybrid recommendation systems. Our data come from the Yelp dataset. For our hybrid recommendation system, we have two major components: the first part embeds the reviews with the Bert model and word2vec models; the second part is the implementation of an item-based collaborative filtering algorithm to compute the similarity of reviews under different categories of restaurants. In the end, with the help of the similarity scores, we are able to recommend to users the restaurants that best match their recorded reviews. The coding work is split into several parts: selecting samples and data cleaning, preprocessing, embedding, computing similarity, and computing prediction and error. Due to the size of the data, each part generates one or more json files as milestones to reduce memory pressure and the communication between parts. \section{Methods} The first step is selecting valid samples from the data sets. We found that restaurants that received only a few reviews, users who wrote only a few reviews, and reviews containing only a few words provide limited information for the NLP-based hybrid recommendation system. A valid sample of data can be selected by setting thresholds based on the above conditions. The second step is to prepare the review data with data cleaning and preprocessing. We also want to see how different types of preprocessing and embedding affect the result, so we generated two data sets after the preprocessing step: one with lemmatization and one without lemmatization. The data sets are then split into train and test data. For the first component, we applied four embedding models for comparison: the word2vec-google-news-300 model, our own trained word2vec model, the glove-wiki-gigaword-300 model, and the Bert model. With the embedded vectors, we then calculate the similarity between two groups of restaurant reviews. Finally, we used the similarity and ratings from other users to compute the predicted ratings and errors of the improved algorithm, and compared them to the result of the original item-filtering algorithm to show the improvement. For the second component, we compared the original item CF and item CF with reviews to demonstrate the advantage of our system. For the original item CF, the main goal is to match a user's rated items to other similar items and then use the ratings of those similar items to predict the rating of the current item for that user. \begin{tabular}{ |p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}| } \hline \multicolumn{5}{|c|}{Users Ratings Over Items} \\ \hline & Item 1 & Item 2 & Item 3 & Item 4\\ \hline User 1 & 1 & 3 & N/A & 4\\ \hline User 2 & 4 & ? & 2 & 3\\ \hline User 3 & N/A & 5 & 5 & 4 \\ \hline User 4 & 5 & 2 & 5 & N/A \\ \hline \end{tabular} \captionof{table}{Example data set} For example, suppose we have a data set as in Table 1 above and we want to predict the rating of Item 2 by User 2. We use the following weight function to calculate the weight of each pair of items. $$w_{i, j} = \frac{\sum_{u \in U}(r_{u, i} - \bar{r_i})(r_{u, j} - \bar{r_j})}{\sqrt{\sum_{u \in U}(r_{u, i} - \bar{r_i})^2}\sqrt{\sum_{u \in U}(r_{u, j} - \bar{r_j})^2}}$$ After we calculate the weights of (Item 1, Item 2), (Item 2, Item 3), and (Item 2, Item 4), we select the K largest weights to obtain the K most important neighbors of Item 2. Lastly, we use the following formula to calculate the predicted rating of Item 2 by User 2.
$$P_{u, i} = \frac{\sum_{n\in N}r_{u, n}w_{i,n}}{\sum_{n\in N}|w_{i,n}|}$$ For Item CF with reviews, instead of selecting the items with the K largest weights as neighbors, we use the mean cosine similarity of reviews written by the same users to select neighbors. For example, for Item 1 and Item 2, only User 1 and User 4 rated both items. In this case, we track the review embeddings written by User 1 and User 4 for both Item 1 and Item 2. Then, we calculate the cosine similarity of the two reviews written by the same user. In this case, Item 1 and Item 2 have two cosine similarity scores, one from User 1 and the other from User 4. We take the mean of these two cosine similarity scores as the similarity score of Item 1 and Item 2. After we calculate the similarities between all pairs of items, we select the items with the K highest similarity scores as neighbors and carry out the same Item CF process. \section{Experiment} \subsection{Selecting Samples} Firstly, the original data was downloaded from \url{https://www.yelp.com/dataset}, the official website of Yelp. There are six data sets, all of them in JSON format. For this project, we only need the data about businesses, users, and reviews. The size of the reviews data set is about 6.5 GB, and there are more than 965 million words in the reviews. If each word were embedded as a 300-dimensional vector, the word embeddings alone would take over 700 GB, which is unrealistic for NLP processing. Therefore, we decided to narrow down the restaurant location to a certain region and take 125k samples. We checked the number of restaurants in each state from the business data set and found that the number of restaurants in Massachusetts is the largest (more than 30k) compared to other states, so we decided to focus only on restaurants from this state. The item-based filtering algorithm and our modified version work better, and their differences become visible, when the sampled users rate more restaurants; otherwise there is not enough review information to analyze. After trying different thresholds, we keep only users with at least 35 reviews in MA and restaurants with at least 150 reviews. Also, each review has at least 20 words after data cleaning. There are about 128k reviews that satisfy those conditions, and 125k of them are randomly selected. With the data set ready, we split it into two data sets: one for training and the other for testing. The split ratio between the train data set and the test data set is 4:1. \subsection{Preprocessing} A significant challenge for information extraction is that the vocabulary size is considerably large. We observed that people tend to use informal language while writing reviews, which involves using a massive amount of non-alphabetic characters or misspelled words. Therefore, we preprocess the text by changing all alphabetic characters to lowercase, restoring abbreviations that contain non-alphabetic characters, and removing all non-alphabetic characters and common English stop words using the "nltk" library. After the first round of preprocessing, however, we found that the number of unique words was 84,511, which is too large to train a word2vec model and do the embedding. \subsection{Misspelled word correction} Next, we performed misspelled-word correction, correcting or removing misspelled words to reduce the vocabulary size.
We observed that one of the most common spelling error types is that people like to repeat some characters consecutively in the same word to express a tone of emphasis, such as "goooodddd", "beauuutifffuuuul", "wooooonderrrrrfullll", etc. We also found that in nearly all English words, a character is consecutively duplicated at most twice, and each such duplication has length at most two. With this observation, we first removed the excessive consecutive duplications of characters within a word. This removal reduces the Damerau-Levenshtein edit distance of most misspelled words, rather than correcting them outright. Meanwhile, the big-O time complexity of this process is linear in the length of each word. Considering the size of the total sample, it is still time consuming to apply it to all words. Therefore, we used the "pyspellchecker" library for Python to perform spelling error detection first, which checks whether a word is in the provided dictionary of correct English words. If so, we do not need to do anything about it. Otherwise, we apply this algorithm to the misspelled word and prepare it for the next processing step. Besides spelling error detection, "pyspellchecker" can provide a suggested correction for a misspelled word. This function is rather limited: it cannot detect or suggest corrections for cognitive errors, and it can only provide a suggestion for misspelled words within a Damerau-Levenshtein edit distance of two. The misspelled words that could not be corrected had to be removed to reduce the vocabulary size and avoid overfitting. With the above algorithm and the correction function from "pyspellchecker", we reduced the vocabulary size to 73,061. \subsection{Lemmatization} Meanwhile, we also performed lemmatization over the sample, which groups the inflected forms of a word, changes them into the base form, and analyzes them as a single form. We used the "nltk" library to perform this process. With lemmatization, the total vocabulary size went down to 67,233. However, lemmatization can be lossy. Different forms of a word can deliver different meanings depending on the context, and people like to express their feelings with different forms of a word, so lemmatization can lose this semantic information. \subsection{Word Embedding} Our first approach for word embedding is to use the word2vec model, working mainly with the "gensim" library in Python. We selected two pre-trained models provided by "gensim": "word2vec-google-news-300" and "glove-wiki-gigaword-300". The "gensim" library also allowed us to train our own word2vec model on our data set. Because the "word2vec-google-news-300" model only provides 300-dimensional embedding vectors, we set the vector size of our own model to 300 for consistency. We also use Bert to encode sentences directly, through the sentence_transformers package in Python, from which we load the pre-trained 'bert-base-nli-mean-tokens' model to avoid a long training time. The pretrained Bert model uses a maximum sequence length of 128 and mean pooling to generate a 768-dimensional sentence embedding. By using the pretrained Bert model, we reduce the processing time from days to less than 30 minutes.
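To make the embedding step concrete, a minimal Python sketch of how the four models can be loaded is given below. The model identifiers are the gensim and sentence_transformers names quoted above; the review variables are toy placeholders rather than our actual pipeline, and the Word2Vec call assumes the gensim 4.x interface (where the dimensionality argument is vector_size).
\begin{verbatim}
import gensim.downloader as api
from gensim.models import Word2Vec
from sentence_transformers import SentenceTransformer

# Toy placeholder reviews; in the project these are the cleaned Yelp reviews.
tokenized_reviews = [["great", "ramen", "friendly", "staff"],
                     ["slow", "service", "but", "tasty", "food"]]
raw_reviews = [" ".join(tokens) for tokens in tokenized_reviews]

# Pre-trained word-level models (both give 300-dimensional vectors).
google_w2v = api.load("word2vec-google-news-300")
glove = api.load("glove-wiki-gigaword-300")

# Our own word2vec model; min_count is lowered for this toy corpus.
# Note that gensim's default min_count=5 silently drops low-frequency
# words, which likely explains why our trained vocabulary is smaller
# than the preprocessed vocabulary.
own_w2v = Word2Vec(sentences=tokenized_reviews, vector_size=300,
                   window=5, min_count=1, workers=4)

# Pre-trained Bert sentence encoder: mean pooling, 768-dimensional output.
bert = SentenceTransformer("bert-base-nli-mean-tokens")
review_vectors = bert.encode(raw_reviews)  # shape: (num_reviews, 768)
\end{verbatim}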
\subsection{Computing Similarity} There are three word embedding models in this project: the word2vec-google-news-300 model (Google), our own trained word2vec model (Own W2V), and the glove-wiki-gigaword-300 model (GloVe); and one sentence embedding model: the Bert model (Bert). The way of computing the similarity between two reviews differs depending on the chosen embedding method. For the Bert model, the similarity between two review sentences is simply the cosine similarity of their sentence embedding vectors. For the first three models, we first embed all words in every review into vectors. Then we apply the Facebook InferSent model, a BiLSTM with max pooling, to extract the 512 most important features of each sentence as a sentence vector. Finally, we calculate the cosine similarity between sentence vectors as the similarity between two sentences. \subsection{Item CF} We calculated the RMSE of the original Item CF as the baseline score and compared it with the RMSE scores of Item CF with reviews to see how much we can improve. For the original Item CF, the model is basically the same as described above. We compared the RMSE scores of three neighbor-selection methods. The first one selects the top K neighbors with K = 5, 10, 15, 20, 25, and 30. The second one keeps all items as neighbors when calculating the RMSE score. The third one deletes all pairs with negative weights, so only non-negative weights are used for prediction. After comparing the RMSE scores of all three methods, the best RMSE is 1.082271. For Item CF with reviews, we first calculated the neighbors of each test case. To reduce the running time, we limited the candidates to the restaurants the user has rated, which reduced the running time from over 6 hours to less than 1 hour. Then, we selected the restaurants with the 10 highest similarity scores. For all neighbors of each test case, we calculated their weights with respect to the test restaurant and then predicted the corresponding rating. The whole process was repeated eight times: for each embedding method (Bert, Google, Glove, Own W2V), we calculated the RMSE for two cases, one for reviews with lemmatization and the other for reviews without lemmatization. All hyperparameters used in Item CF remain the same. By controlling variables, we compared the performance of the different embedding methods and examined whether lemmatization emphasizes the key features of reviews. \section{Result \& Discussion} According to the resulting RMSEs in Table 2, we can see that Bert captures the most important features and generates the most relevant similar neighbors for Item CF, judging by the resulting RMSE of each method. Lemmatization helps Bert, Glove, and Own W2V to capture more important features and thus reduces the RMSE scores of Item CF. Only for the Google embedding method does encoding without lemmatization help to better predict ratings in Item CF. We think that this is due to the loss of some helpful features in the reviews caused by the different pretrained vocabulary. For our trained models, such as Bert and Word2Vec, we can see that lemmatization helps to customize them more to our own training reviews.
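For reference, the neighbor-selection and prediction procedure evaluated here can be summarized in the following Python sketch. It is only an illustrative reading of our method, not the exact project code: the variable names are hypothetical, the item-item weight follows the Pearson-style formula from the Methods section (computed over the users who rated both items), and the review embeddings are assumed to be precomputed, e.g., with the Bert encoder above.
\begin{verbatim}
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_rating(R, review_emb, u, i, k=10):
    """Item CF with reviews (illustrative sketch).
    R          : (n_users, n_items) rating matrix, np.nan where unrated
    review_emb : dict {(user, item): embedding of that user's review}
    u, i       : target user index and item index
    k          : number of neighbors kept
    """
    n_users, n_items = R.shape
    candidates = [j for j in range(n_items)
                  if j != i and not np.isnan(R[u, j])]

    # Neighbor selection: mean cosine similarity of the reviews written
    # by the users who rated both item i and candidate item j.
    sims = {}
    for j in candidates:
        common = [v for v in range(n_users)
                  if not np.isnan(R[v, i]) and not np.isnan(R[v, j])
                  and (v, i) in review_emb and (v, j) in review_emb]
        if common:
            sims[j] = np.mean([cosine(review_emb[(v, i)], review_emb[(v, j)])
                               for v in common])
    neighbors = sorted(sims, key=sims.get, reverse=True)[:k]

    # Pearson-style item-item weight over the users who rated both items.
    def weight(i, n):
        common = [v for v in range(n_users)
                  if not np.isnan(R[v, i]) and not np.isnan(R[v, n])]
        if len(common) < 2:
            return 0.0
        ri, rn = R[common, i], R[common, n]
        num = np.sum((ri - ri.mean()) * (rn - rn.mean()))
        den = (np.sqrt(np.sum((ri - ri.mean()) ** 2))
               * np.sqrt(np.sum((rn - rn.mean()) ** 2)))
        return num / den if den > 0 else 0.0

    w = np.array([weight(i, n) for n in neighbors])
    r = np.array([R[u, n] for n in neighbors])
    denom = np.abs(w).sum()
    return (r * w).sum() / denom if denom > 0 else np.nanmean(R[u])
\end{verbatim}
Replacing the review-based neighbor-selection step with a plain top-K ranking of the Pearson weights recovers the original Item CF baseline used for comparison.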
\begin{tabular}{ |p{2.5cm}||p{2.5cm}|p{1.3cm}| } \hline Embedding & Item CF with Review & Original Item CF \\ \hline Bert with lemma & 0.931494 & \multirow{10}{1.3cm}{1.082271}\\ \cline{1-2} Bert without lemma & 0.939019 & \\ \cline{1-2} Glove with lemma & 0.947504 &\\ \cline{1-2} Glove without lemma & 0.947704 &\\ \cline{1-2} Google with lemma & 0.948165 &\\ \cline{1-2} Google without lemma & 0.945793 &\\ \cline{1-2} Own W2V 300d with lemma & 0.947019 &\\ \cline{1-2} Own W2V 300d without lemma & 0.948520 &\\ \cline{1-2} Own W2V 200d with lemma & 0.948401 &\\ \cline{1-2} Own W2V 200d without lemma & 0.949589&\\ \hline \end{tabular} \captionof{table}{RMSE of Item CF with reviews for each embedding method, compared with the original Item CF.} The main remaining problem is that our current Item CF cannot properly predict extreme ratings such as 1 and 5. In most cases, our Item CF with reviews predicts ratings between 3.7 and 4.1 for those pairs. Eliminating all ratings of 5 from the test cases dramatically reduces the RMSE to 0.79. We think this is due to the limitation of our computational power, which prevents us from involving more data in the process. For the candidates of Item CF with reviews, we only select candidates from the businesses rated by the user within the 100,000 training reviews instead of all businesses. Even with the reduced size of our data, we currently still need to train our models for hours. With more powerful computational resources, we think that we could involve more samples in training and more neighbors in the Item CF prediction, which should further improve the performance of Item CF with reviews. Also, we can try to combine the result of the original Item CF and that of Item CF with reviews by assigning a different weight to the result of each model. For example, since the original Item CF predicts extreme ratings better than our current Item CF with reviews, we may use some weighting and switching methods. \section{Conclusion} During the preprocessing stage, we reduced the vocabulary size by 11,450. However, this does not match our original expectation. One of the reasons is that the default dictionary of the "pyspellchecker" library is not rich enough for slang. Our next step will be to try the Urban Dictionary API, which provides a much richer dictionary for slang. Also, we noticed that our own word2vec model only covered 19,417 words, which is much smaller than our vocabulary. One plausible explanation is that gensim skips words with low frequency. Regarding the results from the different embedding methods, we found that the BERT embedding has only an insignificant advantage compared with the word2vec models. Apart from the lack of training, we believe that the word2vec models were affected by the preprocessing procedure applied to the data set. With better spelling correction techniques, we believe that the difference between BERT and word2vec could become more significant. In Item CF with reviews, we extracted semantic information linking users to restaurants, which provides richer information than the rating system in the original Item CF. Due to the limitation in computational resources, we could not train our system for more epochs, and the RMSE results indicate that our model has not fully converged. With access to more powerful resources, we are confident that the performance of our system can be improved. In conclusion, compared with the original Item CF, our hybrid system with semantic analysis of users' comments can extract more information about user preferences and provide better recommendations.
\section{Related Links} Link to our demo video on YouTube \\ \url{https://www.youtube.com/watch?v=rjlxWL2sHas} Link to our GitHub repository\\ \url{https://github.com/bigshawne/CSCI_544_Final_Proj_Recommendation_System_Based_On_NLP} \bibliographystyle{acl}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{INTRODUCTION} The evolution of materials technology in the field of infrared (IR) detection experienced a rapid turnaround in the late nineties, when traditional bulk materials such as HgCdTe and InSb were replaced by quantum-engineered structures, as the former exhibited limited applicability due to non-uniform growth defects and their inability to cover the full IR range \cite{plis2014inas,rogalski2018antimonide,martyniuk2014new,martyniuk2014barrier}. Since then, type-II superlattices (T2SL) composed of InAs/(In,Ga)Sb heterostructures have gained a lot of attention for their thickness-dependent bandgap tunability, suppressed Auger recombination, and high quantum efficiencies. This has enabled them to become the latest industry-standard technology for next-generation IR detectors and focal plane arrays (FPA) \cite{plis2014inas,mukherjee2021carrier,rogalski2018antimonide,ting2020long,zavala2020antimonide,martyniuk2014new}. Subsequently, one of the major breakthroughs en route to achieving a higher operating temperature and background-limited IR detection came in the mid-2000s, when the concept of barrier-based detectors was first introduced. This later laid out an ideal platform to explore several novel device architectures of the form nBn, pBp, pBiBn etc., through ``band-diagram engineering'' \cite{rogalski2017inas,nguyen2007type,nguyen2008band,plis2014inas,zavala2020antimonide}. \\ \indent Despite numerous advantages, T2SL systems suffer from an inability to produce perfect hole barriers due to the strong adherence of the hole minibands to the bottom of the GaSb valence bands and the lack of tunability with varying GaSb layer thicknesses \cite{razeghi2010band,nguyen2007dark,nguyen2009minority,zavala2020antimonide,lang2013electronic}. Moreover, the spatial separation of carrier localizations in these structures leads to smaller absorption, affecting the photoresponse \cite{ting2020long}. To overcome this, complex structures like M, W, and N were later introduced by inserting an additional high-bandgap AlSb layer into the original T2SL layout, which offered higher degrees of freedom to achieve a full tailorability of the miniband alignment and better wavefunction engineering \cite{ting2020long,lang2013electronic,nguyen2008band,maimon2006n}.\\ \indent The placement of the AlSb layer determines the nomenclature of these structures, based on the shape formed on joining the conduction band edges \cite{ting2020long,nguyen2008band}. In M-superlattice (MSL) structures, the AlSb layer is inserted at the center of the GaSb layer, which splits the single hole quantum well into two and makes the valence band more sensitive to the GaSb width \cite{lang2013electronic,ting2020long,nguyen2007type,nguyen2007dark}. Furthermore, due to the AlSb layer, the center of the hole wavefunction in the GaSb layer shifts closer to the center of the electron wavefunction in the InAs layer, which effectively increases the overlap between them, thereby facilitating higher absorption \cite{ting2020long,lang2013electronic}. In addition, MSLs allow fine tuning of the band alignment between the absorber and the barrier and strongly impede the tunneling transport of carriers, thereby suppressing their contribution to the SRH dark current \cite{nguyen2007dark,nguyen2007type,nguyen2008band,nguyen2009minority}.
However, a unified study of the key roles played by the AlSb layer width in modulating the bandgap, density of states (DOS) effective masses, valence miniband offsets, interband overlaps, and spectral transport is still scarce in the literature and thus forms the basis of this study.\\ \indent This motivates us to perform a rigorous theoretical investigation of MSL systems and present a thorough comparison with conventional T2SL systems \cite{li2010intrinsic,livneh2012k}. We show that the bandgap of such systems can be accurately predicted by incorporating an appropriate interface model that takes into account the effects of strain due to lattice mismatch and microscopic interface asymmetry (MIA) corresponding to the alternate arrangement of interface materials \cite{li2010intrinsic,szmulowicz1996numerically,szmulowicz2004effect,hong2009applicability,krebs1996giant,mlinar2005nonsymmetrized}. In addition, we emphasize the non-trivial roles played by the inserted AlSb layer, in tandem with the original T2SL material layers, in regulating the miniband edges and carrier effective masses towards an efficient engineering of carrier transport to optimize the dark current. In subsequent explorations, we compare finite-period MSL and T2SL systems in terms of the tunneling transmission and local density of states (LDOS) features evaluated from the Keldysh non-equilibrium Green's function (NEGF) approach and highlight key differences based on the nature of carrier localization in the available subband states \cite{priyadarshi2018superlattice,mukherjee2018improved,mukherjee2021carrier,tibaldi2021modeling,bertazzi2020nonequilibrium}. The strong interband overlap around the interface region and the signature of spatially continuous broad current spectra in MSL structures are respectively indicative of enhanced absorption and phonon-assisted non-coherent miniband transport of carriers. Finally, MSL structures are shown to demonstrate excellent and precise band-tunability features through carefully introduced design guidelines, which make them suitable for use as carrier-blocking unipolar or bipolar barriers.\\ \indent The rest of the paper is organized as follows. In Sec. \ref{sec_theory}, we describe the $\bf{k.p}$ and NEGF theory applied for modeling the electronic band properties and miniband characteristics of MSLs. We discuss the results in Sec. \ref{sec_result}, which is divided into three subsections. Section \ref{eg_kp} discusses the band structure properties evaluated via the $\bf{k.p}$ method and explains the effect of varying layer widths on the key band parameters. In Sec. \ref{comp}, a comparative study between a finite MSL and a finite T2SL absorber is presented in terms of the local density of states (LDOS) and spectral tunneling transmission evaluated via the NEGF approach. Finally, Sec. \ref{Barrier} depicts the potential of MSL structures to be used as both electron and hole barriers with respect to T2SL based absorbers. The paper is concluded in Sec. \ref{conclu}. \section{THEORETICAL APPROACH} \label{sec_theory} \subsection{ $\bf{k.p}$ method} \label{sec_1} The electronic band structure of the non-common atom (NCA) interface InAs/GaSb based T2SL is commonly calculated by the k.p based envelope function approximation (EFA) approach \cite{szmulowicz1996numerically,szmulowicz1998numerically}.
To account for the strain and interface effects, an additional layer of InSb is inserted at the interface of InAs and GaSb \cite{delmas2019comprehensive,mukherjee2021carrier,livneh2012k}. \begin{figure}[!htbp] \centering {\includegraphics[height=0.2\textwidth,width=0.4\textwidth]{MSL_structure.png}\label{M Structur}} \caption{Energy band alignment of the 6.1{\AA} family of compound semiconductors (InAs, GaSb and AlSb): the arrangement of materials shown here forms an M-shaped structure within a typical unit cell of the periodic SL. In the absence of the AlSb layers, the structure becomes a conventional InAs/GaSb T2SL.} \label{Eb} \end{figure} However, the EFA model is incapable of distinguishing the CA and NCA type interfaces, leading to an overestimation of the bandgap \cite{li2010intrinsic}. Therefore, to investigate these NCA superlattices, we combine the model proposed by Krebs and Voisin with the EFA model, which distinguishes the interface chemical bonds stacked in the forward and backward directions \cite{krebs1996giant,li2010intrinsic}. In order to take the MIA effect into consideration within the $\bf{k.p}$ framework, the two distinct interfaces, GaSb-on-InAs and InAs-on-GaSb, are considered \cite{li2010intrinsic,hong2009applicability,krebs1996giant,mlinar2005nonsymmetrized,szmulowicz2004effect}. The explicit Hamiltonians for both the interface and the strain are added to the original $\bf{k.p}$ Hamiltonian. The total Hamiltonian of the system can thus be written as $H(k)=H_{k}+H_{S}+H_{IF}+H_{strain}$, where $k$ is the in-plane wave vector, $H_{k}$ and $H_S$ are respectively the $\bf{k}$-dependent and spin-dependent terms, $H_{strain}$ stands for the substrate-induced strain effects due to the lattice mismatch \cite{li2010intrinsic,livneh2012k,delmas2019comprehensive,becer2019modeling}, and $H_{IF}$ is the interface term which accounts for the NCA interface MIA effects, given by \begin{equation} H_{4}^{I}=H_{XY}^{AB/BA}\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \end{equation} \begin{equation} H_{IF}=\begin{bmatrix} H_{4}^{I} & 0 \\ 0 & H_{4}^{I} \end{bmatrix}. \end{equation} Here, $H_{IF}$ is added at each interface through the delta interface strength potentials, i.e., $H_{XY}^{BA}=490$~meV and $H_{XY}^{AB}=870$~meV, respectively denoting the InSb-like (GaSb-on-InAs) and GaAs-like (InAs-on-GaSb) strength potentials at the interface \cite{li2010intrinsic,livneh2012k,lang2011interface,le2019simulation}. For the MSL, the GaSb layer is treated as unstrained and the other layers, i.e., InAs and AlSb, as strained to attain the GaSb lattice constant \cite{lang2011interface}. The strain Hamiltonian contains the Pikus-Bir deformation potentials ($b$, $a_c$, $a_v$) \cite{livneh2012k,lang2011interface} and the strain terms, given by \begin{equation} \centering \epsilon_{xx}=\epsilon_{yy}=\frac{a_{GaSb}-a}{a}, \end{equation} \begin{equation} \centering \epsilon_{zz}=-2\frac{C_{12}}{C_{11}}\epsilon_{xx}, \end{equation} where $a_{GaSb}$ is the lattice constant of GaSb, $a$ is the lattice constant of the layer for which the strain parameters are to be calculated, $\epsilon_{xx}$, $\epsilon_{yy}$, and $\epsilon_{zz}$ are the strains along the x, y, and z directions, and $C_{11}$, $C_{12}$ are the elastic stiffness coefficients \cite{livneh2012k}. The interband momentum matrix (Kane) parameter, ideally taken to be the same throughout the lattice, is assumed to be a thickness-weighted average of the Kane energies of InAs, GaSb, and AlSb \cite{lang2011interface}.
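\\ \indent As a quick numerical illustration of the strain terms entering $H_{strain}$, the in-plane and out-of-plane strains of the two strained layers follow directly from the lattice constants and elastic stiffness coefficients listed later in Table \ref{table1}. The short Python sketch below is purely illustrative and only evaluates the two relations above.
\begin{verbatim}
# Strain of InAs and AlSb grown pseudomorphically on a GaSb-matched
# lattice (GaSb treated as unstrained). Lattice constants in Angstrom,
# elastic stiffness coefficients in 10^11 dyne/cm^2 (Table I values).
a_GaSb = 6.0959

layers = {
    #         a        C11     C12
    "InAs": (6.0584,  8.329,  4.526),
    "AlSb": (6.1297,  8.769,  4.341),
}

for name, (a, c11, c12) in layers.items():
    eps_xx = (a_GaSb - a) / a              # in-plane (biaxial) strain
    eps_zz = -2.0 * (c12 / c11) * eps_xx   # out-of-plane relaxation
    print(f"{name}: eps_xx = {eps_xx:+.3%}, eps_zz = {eps_zz:+.3%}")
\end{verbatim}
With these numbers, InAs is under a tensile in-plane strain of about $+0.62\%$ and AlSb under a compressive strain of about $-0.55\%$, consistent with their lattice constants lying on either side of that of GaSb.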
Also, a finite difference discretization technique is employed with the periodic boundary conditions to solve the slow varying envelope functions \cite{mukherjee2021carrier,galeriu2005k}. The schematic of the MSL structure as shown in Fig. \ref{Eb}, contains the additional AlSb layers at the center of each GaSb layer throughout the lattice. The interface matrix ($H_{IF}$) is only considered at the NCA interfaces, and the other material parameters for InAs, GaSb, and AlSb taken for the simulation of MSL electronic band structure are provided in Table \ref{table1}. \begin{table} \centering \caption{ Material parameters of InAs, GaSb and AlSb used for the $\bf{k.p}$ electronic band structure calculation at a temperature of $77K$ \cite{becer2019modeling,livneh2012k,delmas2019comprehensive,qiao2012electronic,mukherjee2021carrier,vurgaftman2001band}} \begin{tabular}{|c|c|c|c|} \hline Parameters & InAs & GaSb & AlSb \\ \hline Lattice constant({\AA})& 6.0584& 6.0959 & 6.1297\\ \hline Energy band gap at 0K ($eV$)& 0.418 & 0.814 & 2.386\\ \hline Elastic stiffness constant ($C_{11}$) & 8.329 & 8.842 & 8.769\\ ($10^{11} dyne/c{m^2}$) & & &\\ \hline Elastic stiffness constant ($C_{12}$) & 4.526 & 4.026 & 4.341\\ ($10^{11} dyne/c{m^2}$) & & & \\ \hline Deformation potential $ac$ ($eV$) & -5.08 & -7.5 & -4.5\\ \hline Deformation potential $av$ ($eV$) & 1 & 0.8 & 1.4\\ \hline Deformation potential $b$ ($eV$) & -1.8 & -2 & -1.35\\ \hline Varshini Parameter $\alpha $ $[meV/K$]& 0.276 & 0.417&0.42\\ \hline Varshini Parameter $\beta$ [$K$]&93&140&140\\ \hline Effective mass electron ($m_e^*$)& 0.022&0.0412& 0.14\\ \hline Luttinger parameter $\gamma1$ & 19.4& 11.84 & 4.15\\ \hline Luttinger parameter $\gamma2$ & 8.545 & 4.25\ & 1.28\\ \hline Luttinger parameter $\gamma3$ & 9.17 & 5.01 & 1.75\\ \hline Interband mixing parameter Ep [$eV$] & 21.5 & 22.4 & 18.7\\ \hline Spin orbit splitting (SO) [$eV$] & 0.38& 0.76 & 0.65\\ \hline Valence band offset (VBO) [$eV$] & -0.56 & 0 & -0.38\\ \hline \end{tabular} \label{table1} \end{table} \subsection{Keldysh NEGF method} \label{sec_2} In quantum-confined finite SL structures, the LDOS and electronic transmission function offer a concrete and qualitative understanding on the quantum mechanical nature of the carrier localization profile \cite{aeberhard2018photocarrier,miniband_spie} and spectral current flow under different biasing conditions. In this work, we employ the quantum transport based NEGF formalism \cite{dattaLNE,DattaQT} as a mathematical tool to numerically compute these parameters, incorporating the concerned device physics models \cite{Henrickson,Aeberhard_jce,QCLnegf,AeberhardPRB2008,PRL_nanotube,myTED,myPRA,mukherjee2018improved,priyadarshi2018superlattice,mukherjee2021carrier}. 
The retarded Green's function ($G$) with the proper information of self energies corresponding to the macroscopic contacts and scattering phenomena along the longitudinal energy (E) is defined as \cite{Cavassilas_NEGF,akhavan_effectivemass,akhavanted,Kolek_effmass,foreman1993effective}, \begin{equation} G(z,z',E)=[E^+I-H-U-\sum_{j}\Sigma^C_{j}(E)-\Sigma^S(E)]^{-1}, \label{green} \end{equation} where, $E^+=E+i\eta^+$ where $\eta^+$ is a small positive number, $z,z'$ are the position indices, $I$ denotes the Identity matrix, $H$ represents the 1-D tight-binding Hamiltonian matrix of the SL in real space and $U$ is the potential profile calculated from the self-consistent NEGF-Poisson solver, $\Sigma^C_j$ is the self-energy of the $j^{th}$ contact (where $j\in R$ (Right), $L$ (Left)), which are calculated from the corresponding broadening functions ($\Gamma_{1,2}$), and $\Sigma^S$ is the scattering self-energy evaluated using the self-consistent Born approach \cite{Henrickson,Aeberhard_jce,QCLnegf,AeberhardPRB2008,PRL_nanotube,myTED,myPRA,mukherjee2018improved,priyadarshi2018superlattice,mukherjee2021carrier}. We calculate the LDOS as the diagonal elements of the spectral function, given by \cite{mukherjee2021carrier,DattaQT} \begin{equation} \mathcal{A}(z,z',E)=i \left[ G(z,z',E)-G^{\dagger}(z,z',E) \right], \label{LDOS_eq} \end{equation} for the entire energy ($E$) range of interest. In the ballistic transport limit, the electronic transmission probability is calculated as \cite{dattaLNE,DattaQT} \begin{equation} T(E)=Re[Tr(\Gamma_{1}G\Gamma_{2}G^{\dagger})]. \label{trans_eq} \end{equation} However, in practical devices, scattering effects caused due to the fluctuations at the atomic level, have a deep impact on the carrier transport. These effects destroy the system coherence and are included in the simulation setup through an additional self-energy coupled to the electron and hole correlation functions ($G^{n,p}$) which are given by \begin{equation} \begin{split} &G^n(E)=G(E)\Sigma^{in}(E)G^{\dagger}(E),\\ &G^p(E)=G(E)\Sigma^{out}(E)G^{\dagger}(E), \end{split} \label{GnGp} \end{equation} where $\Sigma^{in}(E)=\sum_{j=L,R}\Gamma^C_j(E)f_j(E-\mu_j)+\Sigma^{in}_S(E)$ and $\Sigma^{out}(E)=\sum_{j=L,R}\Gamma^C_j(E)\left(1-f_j(E-\mu_j)\right)+\Sigma^{out}_S(E)$. In particular, $f_j(E-\mu_j)$ is the equilibrium Fermi function corresponding to the $j^{th}$ contact with the electrochemical potential $\mu_j$ and $\Sigma_S^{in(out)}$ signifies the in (out) scattering functions related to the scattering event $S$. \\ \indent The scattering matrices corresponding to the momentum and phase relaxed elastic scattering events are considered as $\Sigma_S^{in(out)}=DG^{n(p)}$, where the scaling factor D signifies the strength of scattering which in our simulation is taken as $D= 10^{-5} eV^2$. In contrast, the high temperature energy relaxing inelastic scattering processes are associated with the absorption and emission of phonons. The scattering rates in such cases are determined from the first order self-consistent Born's approximation (SCBA), given by \begin{equation} \Sigma_S^{in}(E)=D_0(N_{w}+1)G^{n}(E+\hbar*w)+ D_0N_{w}G^{n}(E-\hbar*w), \end{equation} \begin{equation} \Sigma_S^{out}(E)=D_0(N_{w}+1)G^{p}(E-\hbar*w)+D_0(N_{w})G^{p}(E+\hbar*w) \end{equation} where, $N_{w}=(\exp(\frac{\hbar*w}{K_{B}T})-1)^{-1}$ represents equilibrium phonon occupation number at temperature T with frequency $w$. Here, the inelastic scattering coefficient $D_{0}$ is taken as $10^{-2}$ $eV^2$. 
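\\ \indent To make Eqs.~\eqref{green}--\eqref{trans_eq} concrete, a minimal single-band, ballistic realization of this machinery for a one-dimensional effective-mass profile is sketched below. It is purely illustrative: it assumes a uniform effective mass, flat contacts attached through analytic surface Green's functions, no Poisson self-consistency, and no scattering, and the numerical values are placeholders rather than the material parameters used in this work.
\begin{verbatim}
import numpy as np

HB2_2M0 = 3.81  # hbar^2/(2 m0) in eV*Angstrom^2

def surface_gf(E, eps0, t, eta=1e-6):
    # Retarded surface Green's function of a semi-infinite 1D
    # tight-binding lead with on-site energy eps0 and hopping t.
    z = E + 1j * eta - eps0
    return (z - np.sqrt(z - 2 * t) * np.sqrt(z + 2 * t)) / (2 * t ** 2)

def ballistic_negf(E, V, m_eff, a):
    # Transmission T(E) and LDOS(z, E) of a 1D effective-mass profile V (eV).
    n = len(V)
    t = HB2_2M0 / (m_eff * a ** 2)          # finite-difference hopping (eV)
    onsite = np.asarray(V) + 2.0 * t
    H = np.diag(onsite) - t * (np.eye(n, k=1) + np.eye(n, k=-1))
    sig_L = np.zeros((n, n), complex)
    sig_R = np.zeros((n, n), complex)
    sig_L[0, 0] = t ** 2 * surface_gf(E, onsite[0], t)
    sig_R[-1, -1] = t ** 2 * surface_gf(E, onsite[-1], t)
    G = np.linalg.inv((E + 1e-9j) * np.eye(n) - H - sig_L - sig_R)
    gam_L = 1j * (sig_L - sig_L.conj().T)
    gam_R = 1j * (sig_R - sig_R.conj().T)
    T = np.trace(gam_L @ G @ gam_R @ G.conj().T).real   # ballistic transmission
    ldos = -G.diagonal().imag / np.pi                   # diagonal of A/(2*pi)
    return T, ldos

# Toy ten-period well/barrier profile on a 2 Angstrom grid (placeholders).
a_grid = 2.0
profile = ([0.0] * 10 + [0.30] * 10) * 10   # 20 A wells / 20 A barriers
energies = np.linspace(0.01, 0.60, 300)
T_of_E = [ballistic_negf(E, profile, m_eff=0.03, a=a_grid)[0]
          for E in energies]
\end{verbatim}
In this coherent limit the transmission directly exposes the miniband structure of the profile; once the inelastic phonon self-energies introduced above are switched on, this simple picture is modified.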
In such cases, \eqref{trans_eq} no longer remains valid and the spectral current flowing from point $z_{i}$ to $z_{i+1}$ is then given by \begin{equation} I^{sp}_{e(h)}(E)=\frac{iq}{\pi \hbar}\left[ H_{i,i+1}G^{n(p)}_{i+1,i}(E)\\-H_{i+1,i}G^{n(p)}_{i,i+1}(E) \right]. \label{Ieqn} \end{equation} A detailed description of the carrier scattering and their modeling in the NEGF framework can be found in Refs. \cite{scattering_negf,nikonov2009scattering,DattaQT}. In our study, we consider both the elastic and inelastic models of scattering based on their importance and validity. \section{RESULTS AND DISCUSSIONS} \label{sec_result} \subsection{Electronic band properties} \label{eg_kp} The peak absorption properties and the spectral range of operation are the most crucial features of any IR detector setup which can be determined from the knowledge of the band structure of the absorber material. In particular, the bandgap, bandwidth and the other key parameters derived from the band structure hold much significance owing to their direct connection with the aforementioned detector parameters.\\ \indent We, therefore, start the discussion of this section by first evaluating the band structure properties of a MSL structure and compare the same with that of a T2SL having almost similar bandgap and electron effective mass for the sake of a comprehensive understanding about their key differences \cite{lang2013electronic}. In order to achieve the correct bandgap values in these broken band alignment structures having interfaces formed by NCA, the appropriate interface treatment is required to account for the MIA effects \cite{li2010intrinsic}. For this, we have considered the interface matrix at the junction of InAs and GaSb layers, as discussed in section \ref {sec_1}. In order to validate the k.p model with interface consideration for MSL, we calculate the band gap for some MSL configurations, and found that the obtained bandgaps with the MIA effects are in close agreement with the experimental values as shown in Fig. \ref{Bg}. Next, employing the same model, we calculate the electronic band structures of 6ML/20ML T2SL and 7ML/6ML/3ML/6ML MSL structures having almost similar bandgaps and plot the obtained $E-k$ dispersion results at T=77K in Figs. \ref{E1} and \ref{E2}. We consider a supercell of three periods to capture the impact of the alternative interface potentials \cite{li2010intrinsic}. Here, due to the double degeneracy \cite{becer2019modeling}, each state splits into two states. The obtained effective masses from the curvature of dispersion plots for T2SL are $m^*_e=0.0466m_{0}$ and $m_h^*=0.258m_{0}$, similarly for MSL, $m_e^*=0.0588m_{0}$ and $m_h^*=0.26m_{0}$.\\ \begin{figure}[!htbp] \centering {\includegraphics[height=0.2\textwidth,width=0.45\textwidth]{exp_final_results.PNG}\label{E3}} \caption{Comparison between measured and theoretically predicted bandgaps of eight MSL samples. Without MIA effects the bandgaps were largely overestimated, whereas with the inclusion of MIA effects, they are obtained in close proximity to the experimental values. 
}\label{Bg} \end{figure} \begin{figure}[!htbp] \centering \subfigure[]{\includegraphics[height=0.225\textwidth,width=0.225\textwidth]{ek_dispersion_T2SL.PNG}\label{E1}} \quad \subfigure[]{\includegraphics[height=0.225\textwidth,width=0.225\textwidth]{ek_dispersion_MSL.PNG}\label{E2}} \caption{Electronic band structure of T2SL and MSL at T = 77 K using the 8-band $k.p$ method: in-plane and out-of-plane E-k dispersions within the 1st Brillouin zone are calculated for (a) 6ML/20ML T2SL and (b) 7ML/6ML/3ML/6ML MSL using periodic boundary conditions with the $H_{IF}$ matrix added at the interfaces. The obtained plots depict strong anisotropy between the in-plane and out-of-plane directions for both structures, which have almost similar bandgaps.} \label{Ekp} \end{figure} \indent In T2SL, the electrons (holes) are spatially confined in the InAs (GaSb) layers and their eigenenergies vary with the width of that layer \cite{lang2013electronic,ting2020long,delmas2019comprehensive}. It is evident that the change in the conduction band energy level is prominent and sensitive to the thickness of the InAs layer, leading to a robust control of the conduction band offset \cite{lang2013electronic,ting2020long}. However, the variation of the GaSb layer width does not provide adequate control over the tuning of the valence band due to the large heavy hole effective mass \cite{lang2013electronic,nguyen2008band,haugan2004band,mukherjee2021carrier}. Therefore, the T2SL system strongly suffers from poor valence band tunability, which is overcome in MSL structures by inserting a thin AlSb layer within the GaSb layer \cite{lang2013electronic,ting2020long}. \\ \indent The insertion of AlSb divides the GaSb hole quantum well into two quantum wells, which leads to a reduction of the individual well widths. This suggests that both the GaSb and AlSb layers have a major role to play in tuning the valence band maximum (VB\textsubscript{max}) and thereby tailoring the bandgap. Figure \ref{bg1} depicts the variation of the bandgap ($E_g$) with respect to $d_{GaSb}$ and $d_{AlSb}$ in a 2D color plot with a constant $d_{InAs}=12$ML. Similarly, Fig. \ref{bg2} shows the $E_g$ variation with $d_{InAs}$ and $d_{AlSb}$ at a constant $d_{GaSb}=5$ML. The variations of the conduction band minimum (CB\textsubscript{min}) and VB\textsubscript{max} are also shown with respect to the same parameters in the insets of Fig. \ref{bg1} and \ref{bg2}. The presence of the additional AlSb barrier reduces the interaction between the electrons confined in adjacent InAs layers, and as a consequence CB\textsubscript{min} moves upwards due to the reduced conduction band splitting \cite{haugan2004band}, as evidenced in Fig. \ref{bg1}. Moreover, the electron wavefunction becomes more localized in the InAs wells, which effectively gives rise to a higher electron effective mass in the MSL \cite{lang2013electronic,nguyen2008band,haugan2004band}, as shown in Fig. \ref{m_1}. In Fig. \ref{bg1}, VB\textsubscript{max} shifts up (down) with the increase in $d_{GaSb}$ ($d_{AlSb}$) when $d_{AlSb}$ ($d_{GaSb}$) is kept constant. Furthermore, it is observed that as the AlSb width goes up, VB\textsubscript{max} can be pushed further down even when the GaSb thickness remains constant. For instance, at $d_{AlSb}=5ML$ and GaSb varying from 3ML to 10ML, the lowest value attained by VB\textsubscript{max} is found to be around $-0.2 eV$, whereas at $d_{AlSb}=1ML$ this value reaches only $-0.125 eV$.
$d_{InAs}$ has more pronounced effect on CB\textsubscript{min} than $d_{AlSb}$, therefore, CB\textsubscript{min} shown in the inset of Fig. \ref{bg2}, shifts downwards with the increase in $d_{InAs}$, and there is almost no change with $d_{AlSb}$. \begin{figure}[!htbp] \centering \subfigure[]{\includegraphics[height=0.25\textwidth,width=0.4\textwidth]{GaSb_AlSb_band.PNG}\label{bg1}} \quad \quad \subfigure[]{\includegraphics[height=0.25\textwidth,width=0.4\textwidth]{InAs_AlSb_band.PNG}\label{bg2}} \caption {Bandgap and band offsets of MSL with reference to zero energy level at $T=77K$ in 2D color plots: variation of $E_{g}$ with respect to widths of (a) $d_{AlSb}$ and $d_{GaSb}$ when $d_{InAs}$ is kept constant, and (b) $d_{AlSb}$ and $d_{InAs}$ at constant $d_{GaSb}$. In the former case, VB\textsubscript{max} and CB\textsubscript{min} shift in opposite directions with the increase in AlSb thickness, and CB\textsubscript{min} is less sensitive with the change in AlSb thickness unlike VB\textsubscript{max}. The latter case portrays the inverse effect of InAs width on CB\textsubscript{min}, while AlSb layer width has direct impact on VB\textsubscript{max}, offering a superior tuning of band edge alignments.}\label{Eg} \end{figure} \begin{figure}[!htbp] \centering \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{m_5.PNG}\label{Eg_m1}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{m_6.PNG}\label{Eg_m2}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{m_3.PNG}\label{Eg_m3}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{m_4.PNG}\label{Eg_m4}} \quad \caption{MSL electron and hole effective masses with 2D color plots at $T=77K$. (a) and (b) Varying $d_{GaSb}$ and $d_{AlSb}$ while keeping $d_{InAs}$ as constant. (c) and (d) Varying $d_{InAs}$ and $d_{AlSb}$ while keeping GaSb as constant. AlSb barrier makes electron wavefunction more localised in InAs wells, therefore $m^*_e$ increases. $m^*_h$ increases marginally with AlSb at lower widths of InAs and GaSb, but rises sharply at higher widths of InAs and GaSb. $m^*_e$ and $m^*_h$ increase with AlSb thickness, therefore the tunnelling probability of carriers will be less in these MSL configurations.} \label{m_1} \end{figure} It appears in Fig. \ref{bg2} that at a constant $d_{AlSb}$ and increasing $d_{InAs}$ from 6ML to 18ML, VB\textsubscript{max} remains unchanged, which is similar to the trend noticed in T2SL \cite{delmas2019comprehensive,mukherjee2021carrier}. Here, VB\textsubscript{max} moves from -0.08 $eV$ to -0.115$eV$ as $d_{AlSb}$ is increased from 1ML to 5ML for all values of $d_{InAs}$ under consideration. This suggests that VB\textsubscript{max} changes sharply with both $d_{AlSb}$ and $d_{GaSb}$, however, remains nearly invariant with $d_{InAs}$. Furthermore, at a constant $d_{AlSb}$ and with increasing $d_{InAs}$, as CB\textsubscript{min} is pulled down with VB\textsubscript{max} remains unchanged, the effective bandgap decreases as observed in Fig. \ref{bg2}. Such a wide tunable range in valence band edge is key to design hole blocking barriers in hetero-structure based photodetectors to suppress the SRH processes in the depletion region which in turn alleviates dark current \cite{nguyen2007dark,zavala2020antimonide,rodriguez2007n}. The obtained bandgap values in Fig. \ref{bg1} and Fig. 
\ref{bg2} corresponds to the wavelength range between 3$\mu m$ to 12$\mu m$.\\ \indent Next, we turn our attention towards the DOS effective masses of electrons and holes which are calculated from the band structure data using the relation $m_{e(h)}^*=\left( m_{e(h)\parallel}^*\right)^{2/3} \left( m_{e(h)\perp}^*\right)^{1/3}$ and are plotted in 2D color plots with respect to the variation in the layer thicknesses. It is seen from Fig. \ref{Eg_m1} and Fig. \ref{Eg_m3} that the obtained electron effective masses of MSL are higher than that of T2SL for a similar wavelength range \cite{mukherjee2021carrier,delmas2019comprehensive} due to the strong electron localization in InAs layer is fuelled by the additional AlSb layer \cite{lang2013electronic,nguyen2008band,haugan2004band}. In particular, when we consider the bandgap range from $0.1 eV$ to $0.3 eV$, the maximum electron mass obtained in case of T2SL is approximately $0.034m{0}$ \cite{mukherjee2021carrier,delmas2019comprehensive}, while in MSL, for the same range, the electron masses are varying from $0.06m_{0}$ to $0.08m_{0}$. It is also evident that higher electron effective masses can be achieved even with thin GaSb layer, as they are more sensitive to $d_{AlSb}$ \cite{lang2013electronic,haugan2004band}. From Fig. \ref{Eg_m2} and Fig. \ref{Eg_m4}, it is observed that the rise in hole effective masses with the increase in AlSb width is prominent only at higher GaSb and InAs thicknesses. The acquired higher electron masses are important for the p-$\pi$-M-n type structures, in which, an additional MSL is inserted between the $\pi$ and n region of traditional p-$\pi$-n structure to restrict the carrier transport due to diffusion and tunneling at depletion region to lessen the dark current \cite{nguyen2007dark,nguyen2007type,delmas2019comprehensive}. \subsection{Comparison of finite MSL and T2SL absorber} \label{comp} Our next step is to optimize the absorber layer configuration for better optical properties through the enhancement of interband carrier overlap \cite{ting2020long}. Due to the spatial separation of carriers, T2SL has less oscillator strength, which leads to weaker absorption \cite{ting2020long,razeghi2010band}. Whereas, the optical properties of MSL are expected to be better than T2SL, and even comparable to short period T2SL \cite{taalat2013influence,lang2013electronic}. To model finite superlattice structures in the quantum mechanical framework, we implement single band NEGF approach using the effective mass Hamiltonian and on the basis of obtained properties, we compare the properties of few-period MSL and T2SL structures \cite{mukherjee2021carrier,priyadarshi2018superlattice,akhavan_effectivemass,akhavan2016superlattice,aeberhard2018photocarrier}. The parameters used for the simulation are provided in Table \ref{negftable}. 
\begin{table} \centering \caption{ Material parameters used in the NEGF simulation \cite{livneh2012k,vurgaftman2001band}} \begin{tabular}{|l|l|l|l|} \hline Parameters & InAs & GaSb & AlSb \\ \hline Electron effective mass ($m_e^*$) & 0.023 & 0.041 & 0.14 \\ \hline Heavy hole effective mass ($m_h^*$) & 0.4 & 0.4 & 0.9\\ \hline VBO [$eV$] @300K & -0.50 & 0 & -0.44\\ VBO [$eV$] @77K & -0.56 & 0 & -0.38\\ \hline \end{tabular} \label{negftable} \end{table} \begin{figure}[!htbp] \centering \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{Transmission_T2SL.PNG}\label{N1}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{LDOS_F_T2SL_1.PNG}\label{N2}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{Trans_f_MSL_1.PNG}\label{N3}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{LDOS_F_MSL_1.PNG}\label{N4}} \quad \caption{Transmission function and miniband formation at $T=77K$: (a) Electron and hole transmission probabilities of ten period 6ML/20ML T2SL. (b) LDOS of ten period 6ML/20ML T2SL. (c) Electron and hole transmission probabilities of a 10 period 7ML/6ML/3ML/6ML MSL. (d) LDOS of ten period 7ML/6ML/3ML/6ML MSL. Transmission probabilities are plotted with respect to energy, and LDOS plotted in a grey scale 2-D plot in the position and energy space. The transmission probability in MSL is less than T2SL in valence band. The conduction band bandwidth of MSL is less than T2SL, also there is a formation of two quantum wells in GaSb due to the AlSb layer in MSL.} \label{NEGF1} \end{figure} Figure \ref{N1} and \ref{N3} depict the electron and heavy hole transmission probabilities in the ballistic limit for the same configuration of T2SL and MSL used in k.p calculations, respectively \cite{priyadarshi2018superlattice,mukherjee2018improved}. To obtain absolute transmission, the conduction and valence band edges of the contacts are assumed at their respective lowest and highest values. In Fig. \ref{N2} and Fig. \ref{N4}, LDOS obtained via \eqref{LDOS_eq} within the elastic limit of scattering are shown for both T2SL and MSL \cite{priyadarshi2018superlattice,mukherjee2021carrier}. The electron (hole) localization in InAs (GaSb) layer is fairly evident from the LDOS plots. In particular, the hole confinement in the two adjacent GaSb quantum wells formed due to the insertion of AlSb, can be distinctly seen in MSL. In addition, the first conduction mini bandwidth observed in MSL is lower than that of T2SL, which implies for higher localization of electrons in MSL. The bandgaps obtained through the NEGF simulation for the two structures are similar and in well agreement with the $k.p$ results, discussed in Sec. \ref{eg_kp}.\\ \indent In Fig. \ref{s1} and Fig. \ref{s2}, we plot the total number of states in the first conduction ($A_{C1}$) and heavy hole ($A_{HH1}$) bands for T2SL and MSL, respectively, as a function of position by integrating the LDOS over the energy range of interest \cite{mukherjee2021carrier,priyadarshi2018superlattice}. In continuation, we further plot the spatial product ($A_{C1}$*$A_{HH1}$) of them in Fig. \ref{s3} and Fig. \ref{s4} to understand the nature of interband overlap responsible for carrier transition \cite{mukherjee2021carrier}. 
\begin{figure}[!htbp] \centering \quad \subfigure[]{\includegraphics[height=0.19\textwidth,width=0.20\textwidth]{states_T2SL.PNG}\label{s1}} \quad \subfigure[]{\includegraphics[height=0.19\textwidth,width=0.2\textwidth]{states_MSL.PNG}\label{s2}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{interband_T2SL.PNG}\label{s3}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{interband_MSL.PNG}\label{s4}} \quad \caption{ Available states in conduction and valence band and their spatial product at $T=77K$. (a) and (b) number of available states in conduction band and valence band with respect to the position for T2SL and MSL. (c) and (d) The spatial product of number of available states in C1 and HH1 for T2SL and MSL. For MSL, insertion of a thin AlSb layer in the middle of each GaSb hole quantum well pushes the HH1 wave function out for stronger overlap with the C1 wave function, therefore the interband overlap is higher in MSL, which indicates more absorption.} \label{states} \end{figure} \begin{figure}[!htbp] {\includegraphics[height=0.2\textwidth,width=0.45\textwidth]{LDOS_T2SL_MSL.png} \caption{ 1D-DOS for first conduction and first heavy hole band for 6ML/20ML T2SL (red) and 7ML/6ML/3ML/6ML MSL (blue) at $T=77K$. The higher hole densities in T2SL suggests higher probability of auger recombination than MSL.}\label{l1}} \end{figure} This spatial product in MSL is found to be quite higher than that in T2SL especially at the interface region, which clearly indicates a stronger overlap between C1 and HH1 wavefunctions. The AlSb layer in MSL pushes the HH1 wavefunction towards the newly created wells. As a consequence, the centre of HH1 wavefunction is shifted more towards the center of C1 wavefunction which leads to an enhancement in the interband overlap, and hence provides higher absorption in MSL \cite{ting2020long}. In Fig. \ref{l1}, we plot the 1D-DOS calculated by integrating the LDOS over the entire T2SL and MSL lengths \cite{mukherjee2021carrier}. It is noticed that the 1D-DOS for VB holes are higher in T2SL than in MSL. Therefore, by comparing the maximum hole density values in T2SL and MSL, we predict that Auger recombination will be less in MSL, causing lesser dark current \cite{flatte1998auger}. Also, higher electron localization in MSL is an indicative for reduced tunneling dark current in MSL absorber.\\ \indent Having evaluated and compared the LDOS properties of MSL and T2SL structures, it is now customary to examine the nature of carrier transport through these structures in order to gain a qualitative insight on the dark current at any given operating point and establish its connection to the LDOS. In doing so, we plot the normalized spectral current, given by \eqref{Ieqn}, of electrons and holes with respect to the position at an applied voltage of $0.015V$ and $T=300K$ in Fig. \ref{c1} and Fig. \ref{c2} for the T2SL and MSL structures, respectively. For a fair comparison, we maintain a nearly similar electric field corresponding to the applied bias of $0.015V$ across both these structures by considering ten periods of 6ML/20ML T2SL and twelve periods of 7ML/6ML/3ML/6ML MSL. The contacts are assumed to be carrier selective with one for the electron injection and the other for the hole. 
We also bring in the non-coherent transport features in our simulation model by including the self-consistent inelastic electron-phonon scattering model \cite{scattering_negf} with a phonon energy of $30meV$ \cite{deacon2005high,li2010intrinsic} to look for possible broadening of the current spectra, although they are less likely in the near-equilibrium regime of transport considered here. The bright stripes observed in both the figures are distinctly indicative of the miniband transport of electrons and holes for both the T2SL and MSL. However, a careful observation reveals that the T2SL current spectrum is broader than the MSL, especially near the contacts, which fairly justifies the strong localization in MSL as discussed earlier. Furthermore, the amount of broadening observed around the contacts with inelastic scattering largely vanishes when only the ballistic and elastic scattering effects are considered. This clearly points towards the existence of phonon-actuated energy relaxation processes of carriers occurring at higher temperatures. These processes become predominant at higher built-in and applied field and play a key role in tailoring the carrier transport. Therefore, this study holds much significance in the context of understanding the miniband and phonon-mediated hopping transport of carriers through heterostructures and can be extremely useful to predict the dark and photo current of T2SL systems. \begin{figure}[!htbp] \centering \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{T2SL_dark_C.PNG}\label{c1}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{MSL_dark_C.PNG}\label{c2}} \quad \caption{Spatially and energetically resolved normalized dark current for ten periods of 6ML/20ML T2SL and twelve periods of 7ML/6ML/3ML/6ML MSL with the inclusion of optical phonons with energy $30meV$ at $T=300K$ with the applied voltage of 0.015 V (a) Dark current spectrum in conduction and valence bands of T2SL (b) Dark current spectrum in conduction and valence bands of MSL. The dark current spectrum at lead-device interfaces reflects the spectrum of states from which the carriers injects from the contacts. } \label{cur} \end{figure} \subsection{MSL as unipolar or bipolar barrier} \label{Barrier} \begin{figure}[!htbp] \centering \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{offset_CB.PNG}\label{b1}} \quad \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{offset_VB.PNG}\label{b2}} \subfigure[]{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{offset_CB_VB.PNG} \label{b3}} \caption{CB and VB offset tuning using MSL in 2D color LDOS plot at $T=77K$: unipolar (a) electron barrier ($B_e$) having CB offset of 0.27$eV$ using 5ML/6ML/3ML/6ML MSL and (b) hole barrier ($B_h$) with 0.13$eV$ VB offset using 10ML/3ML/3ML/3ML MSL are shown with respect to a 9ML/9ML T2SL absorber ($A_b$). (c) Bipolar barrier using $B_{e}A_bB_{h}$ design having both electron and hole blocking barriers. Specific band offsets can be attained by the appropriate tuning of thicknesses of multiple layers. } \label{bar} \end{figure} Photodetectors suffer from noise-inducing currents such as generation currents related to SRH processes in the depletion region, current due to thermal generation of carriers in the absorber region and their diffusion to the contact layers, and surface currents \cite{maimon2006n,rakovska2000room}. 
The SRH current, which dominates at lower temperatures through activated midgap traps, is minimized by utilizing barrier-based structures such as nBn, XBp, and XBn. The barrier blocks the majority carrier flow to inhibit the SRH processes in the depletion region \cite{zavala2020antimonide,rodriguez2007n}. These barriers are usually made of bulk materials, which have limited applicability owing to their inability to provide specific conduction and valence band tunability and to their large dark currents at elevated temperatures \cite{zavala2020antimonide,ting2020long}. \\ \indent The design of barriers in the photodetector is crucial, as the barrier height and width together determine the ability to block the thermal excitation of majority carriers from the contact layers and the potential to impede electron tunneling through the barrier \cite{maimon2006n,zavala2020antimonide,nguyen2007type,nguyen2007dark,nguyen2009minority}. Here, we construct MSL-based barriers, as the insertion of the AlSb layer provides an additional degree of freedom to design and control the band offsets as required. Moreover, MSL structures, owing to their high effective mass as discussed in Sec. \ref{eg_kp}, are more resistant to diffusion and tunneling transport in the depletion region \cite{nguyen2009minority,nguyen2007dark,nguyen2007type,lang2013electronic}.\\ \indent In this work, we present three configurations for $XB_{e}A_{b}$, $XB_{h}A_{b}$ and $XB_{e}A_{b}B_{h}X$ structures, where $B_{e}$ ($B_{h}$) is the electron (hole) barrier layer made of MSL, $A_{b}$ is the T2SL absorber layer and $X$ is the contact layer, which can be composed of T2SL or bulk material. The band offsets pertaining to the electron and hole barriers with respect to a 9ML/9ML T2SL absorber ($A_b$), as obtained from the LDOS calculated via NEGF at 77$K$, are plotted in Fig. \ref{b1} and Fig. \ref{b2}, respectively. Here, $B_e$ and $B_h$ are respectively modeled using 5ML/6ML/3ML/6ML and 10ML/3ML/3ML/3ML MSL configurations. It is noted from Fig. \ref{b1} that the $B_e$ layer provides a conduction band offset of approximately 0.27 $eV$ and a nearly zero valence band discontinuity with respect to $A_{b}$, making it an ideal unipolar electron barrier that blocks the majority-carrier electrons from the contacts and allows the minority holes to pass through, functioning similarly to a pn-junction space-charge region \cite{zavala2020antimonide,maimon2006n}. These barriers are usually intrinsic or have a doping similar to that of the absorber region. Therefore, most of the depletion region lies within them, resulting in a reduced SRH dark current owing to their high bandgap \cite{zavala2020antimonide,maimon2006n}. Similarly, $B_h$ in Fig. \ref{b2} offers a valence band offset of 0.13$eV$ with respect to $A_b$, while having a negligible conduction band discontinuity. Such a unipolar hole barrier opposes the flow of majority holes without affecting the minority electron flow. Combining these two, one can design a $B_{e}A_{b}B_{h}$ bipolar barrier structure as depicted in Fig. \ref{b3}, where $B_{e}$ ($B_{h}$) is sandwiched between the p-type (n-type) contact and the absorber layer, which blocks the minority electron (hole) diffusion current from the p-type (n-type) contact \cite{gautam2013band,zavala2020antimonide,maimon2006n}.
\\ \indent The design strategy to achieve such MSL barriers, although seemingly quite challenging from an engineering perspective, follows from a definite physics-based guideline demonstrated earlier while discussing the band structure in Sec. \ref{eg_kp}. With reference to the 9ML/9ML T2SL, the AlSb layer in MSL splits the GaSb hole quantum well into two and reduces their effective width \cite{lang2013electronic}. This pushes VB\textsubscript{max} of MSL down with respect to the T2SL, giving rise to a VB offset for the configuration 10ML/3ML/3ML/3ML shown in Fig.\ref{b2}. In this case, for a zero CB offset, the thickness of the InAs layer in MSL should be kept slightly larger (10ML) than in T2SL to compensate for the rise of CB\textsubscript{min} caused by the presence of the AlSb electron barrier layer. Similarly, lowering the InAs thickness in MSL pushes CB\textsubscript{min} up and gives rise to a CB offset for the configuration 5ML/6ML/3ML/6ML, as shown in Fig. \ref{b1}. To ensure zero VB offset in this case, one should increase the thickness of the two GaSb layers (6ML each) to pull VB\textsubscript{max} up and compensate for the downshift in VB\textsubscript{max} due to AlSb. This discussion, as a whole, should serve as a predictive and robust guideline to design efficient barriers using complex T2SL structures. \section{Conclusion} \label{conclu} This study provided comprehensive design guidelines for utilizing M-structured superlattices for both the absorber and barrier layers through proper band engineering and discussed their potential benefits over conventional T2SL structures. Our detailed calculations carefully took into account the effects of both strain and microscopic interface asymmetry, primarily to estimate the bandgap and density-of-states effective mass and their variation with respect to the thicknesses of the constituent material layers. In contrast, for practical finite-period structures, the local density-of-states and the spectral tunneling transmission and current, calculated using the Keldysh non-equilibrium Green's function approach with the inclusion of non-coherent scattering processes, offered deep insights into the qualitative aspects of miniband and localization engineering via structural variation. Our key results demonstrated how to achieve a wide infrared spectral range, reduce tunneling dark currents, induce strong interband wavefunction overlaps at the interfaces for adequate absorption, and attain excellent band tunability to facilitate unipolar or bipolar current-blocking barriers. This study, therefore, exemplifies the utilization of the 6.1{\AA} material library to its full potential through the demonstration of band engineering in M-structured superlattices and sets up the right platform to possibly replace other complex superlattice systems for targeted applications. \section*{Acknowledgments} The authors acknowledge the funding from the PMRF PhD scheme of the Ministry of Education, Government of India. This work is also supported by the ISRO-IIT Bombay Space Technology Cell. The research and development work undertaken in the project under the Visvesvaraya Ph.D Scheme of the Ministry of Electronics and Information Technology (MEITY), Government of India, is implemented by Digital India Corporation (formerly Media Lab Asia).
\section{Introduction} \label{sec:intro} Few-shot object detection (FSOD) \cite{yan2019meta,wang2020frustratingly,xiao2020few,wu2020multi,zhu2021semantic} aims at detecting objects beyond the base training set with only a few support examples per class. It has received increasing attention from the robotics community due to its vital role in autonomous exploration, since robots are often expected to detect novel objects in an unknown environment, while only a few examples can be provided online. For example, in a rescue mission shown in \fref{fig:1} (a), the robots are required to detect uncommon objects such as drills, ropes, helmets, and vents. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{fig1.pdf} \caption{Representative images from robots' exploration and a performance comparison of state-of-the-art methods \cite{fan2020few,xiao2020few,wu2020multi,wang2020frustratingly,faster} and the proposed AirDet. Solid lines denote results without fine-tuning and dashed lines indicate results fine-tuned on few-shot data. Without further updating, AirDet can outperform prior work. Besides, unlike the fine-tuned models, which hit a bottleneck on small objects, AirDet sets an outstanding new level.} \label{fig:1} \end{figure} Despite its recent promising developments, most existing methods \cite{kang2019few,sun2021fsce,zhang2021accurate,wang2019meta,wang2020frustratingly,wu2020multi,qiao2021defrcn,cao2021nips,li2021few,fan2021generalized} require a careful \textit{offline} fine-tuning stage on novel images before inference. However, the requirement of the fine-tuning process is infeasible for robotic \textit{online} applications, since (\textbf{1}) new object categories can be dynamically added during exploration, thus re-fine-tuning the model for novel classes with limited onboard computational resources is extremely inefficient for time-starved tasks such as search and rescue \cite{tariq2018dronaid,farooq2018ground,wang2020visual,chen_tro}. (\textbf{2}) to save human effort, only very few samples can be provided online\footnote{Since online annotation is needed during mission execution, only 1-5 samples can be provided in most robotic applications, which is the main focus of this paper.}, thus the fine-tuning stage \cite{kang2019few,sun2021fsce,zhang2021accurate,wang2019meta,wang2020frustratingly,wu2020multi,qiao2021defrcn,fan2021generalized,li2021few} needs careful \textit{offline} hyper-parameter tuning to avoid over-fitting, which is infeasible for \textit{online} exploration, and (\textbf{3}) fine-tuned models usually perform well in in-domain tests \cite{wang2020frustratingly,wu2020multi,xiao2020few,faster,qiao2021defrcn,li2021few}, while suffering in cross-domain tests, which is unfavourable for robotic applications. Therefore, we often expect a few-shot detector that is able to perform inference without fine-tuning, such as \cite{fan2020few}. However, the performance of \cite{fan2020few} is still severely hampered in the challenging robotics domain due to (\textbf{1}) ineffective multi-scale detection; (\textbf{2}) ineffective feature aggregation from multiple support images; and (\textbf{3}) inaccurate bounding box location prediction. Surprisingly, in this paper, we find that all three problems can be effectively solved by learning \textit{class-agnostic relation} with support images.
We name the new architecture AirDet, which can produce promising results even without cumbersome fine-tuning, as shown in \fref{fig:1}, and which, to the best of our knowledge, is the first feasible few-shot detection model for autonomous robotic exploration. Specifically, the following three modules are proposed based on \textit{class-agnostic relation}. \myparagraph{Support-guided Cross-Scale Fusion (SCS) for Object Proposal} One reason for performance degradation in multi-scale detection is that the region proposals are not effective for small-scale objects, even though some existing works adopt multi-scale features from query images \cite{zhang2021accurate,zhu2021semantic}. We argue that the proposal network should also include cross-scale information from the support images. To this end, we present a novel SCS module, which explicitly extracts multi-scale features from cross-scale relations between support and query images. \myparagraph{Global-Local Relation (GLR) for Shots Aggregation} Most prior works \cite{fan2020few,yan2019meta,wu2020multi} simply average the multi-shot support features to obtain a class prototype for the detection head. However, this cannot fully exploit the little but valuable information from every support image. Instead, we construct a shots aggregation module by learning the relationship among the multiple support examples, which achieves significant improvements with more shots. \myparagraph{Prototype Relation Embedding (PRE) for Location Regression} Some existing works \cite{fan2020few,zhang2021accurate} introduced a relation network \cite{sung2018learning} into the classification branch; however, the location regression branch is often neglected. To address this, we introduce cross-correlation between the proposals from SCS and the support features from GLR into the regression branch. This results in the PRE module, which explicitly utilizes support images for precise object localization. In summary, AirDet is a fully relation-based few-shot object detector, which can be applied directly to the novel classes without fine-tuning. It surprisingly produces comparable or even better results than exhaustively fine-tuned SOTA methods \cite{wang2020frustratingly,faster,xiao2020few,wu2020multi,fan2020few}, as shown in \fref{fig:1} (b). Besides, as shown in \fref{fig:1} (c), AirDet maintains high robustness on small objects due to the SCS module, which fully takes advantage of the multi-scale support features. Note that in this paper, fine-tuning is undesired because it cannot satisfy the online responsiveness requirement for robots, but it can still improve the performance of AirDet. \section{Related Works} \subsection{General Object Detection} The task of object detection \cite{faster, yolo, liu2016ssd, RCNN,fast,mask} is to find all the pre-defined objects in an image, predicting their categories and locations; it is one of the core problems in the field of computer vision. Object detection algorithms are mainly divided into two-stage approaches \cite{RCNN,fast,faster,mask} and one-stage approaches \cite{liu2016ssd,yolo,yolo2,yolo3}. R-CNN \cite{RCNN} and its variants \cite{RCNN,fast,faster,mask} serve as the foundation of the former branch; among them, Faster R-CNN \cite{faster} used a region proposal network (RPN) to generate class-agnostic proposals from dense anchors, which greatly improved the speed of object detection based on R-CNN \cite{fast}.
On the other hand, the YOLO series \cite{yolo,yolo2,yolo3} falls into the second branch, which tackles object detection as an end-to-end regression problem. Besides, the well-known SSD series \cite{liu2016ssd,li2017fssd} proposes to utilize pre-defined bounding boxes to adapt to various object scales, inspired by \cite{faster}. One shortcoming of the above methods is that they require abundant labeled data for training. Moreover, the types and number of object categories are fixed after training (80 classes in COCO, for instance), which is not suitable for a robot's autonomous exploration, where unseen, novel objects often appear online. \subsection{Few-shot Object Detection} Trained with abundant data for base classes, few-shot object detectors can learn to generalize using only a few labeled novel image shots. The two main branches in FSOD are meta-learning-based approaches \cite{yan2019meta,xiao2020few,wu2020multi,fan2020few,han2021query} and transfer-learning-based approaches \cite{wang2020frustratingly,zhu2021semantic,sun2021fsce,wu2021universal,qiao2021defrcn}. Transfer-learning approaches seek the best learning strategy for general object detectors \cite{faster} on a few novel images. Wang \textit{et al.}~ \cite{wang2020frustratingly} proposed to fine-tune only the last layer with a cosine similarity-based classifier. Using a manually defined positive refinement branch, MPSR \cite{wu2020multi} mitigated the scale scarcity issue. Recent works have introduced semantic relations between novel and base classes \cite{zhu2021semantic} and contrastive proposal encoding \cite{sun2021fsce}. Aiming at training meta-models on episodes of individual tasks, meta-learning approaches \cite{yan2019meta,xiao2020few,fan2020few,Hu2021CVPR,Zhang2021CVPR,han2021query} generally contain two branches, one for extracting support information and the other for detection on the query image. Among them, Meta R-CNN \cite{yan2019meta} and FSDet \cite{xiao2020few} target support-guided query channel attention. With a novel attention RPN and a multi-relation classifier, A-RPN \cite{fan2020few} has set the current SOTA. Very recent works also cover support-query mutual guidance \cite{Zhang2021CVPR}, context information aggregation \cite{Hu2021CVPR}, and heterogeneous graph convolutional networks on proposals \cite{han2021query}. \subsection{Relation Network for Few-shot Learning} In few-shot image classification, the relation network \cite{sung2018learning}, also known as learning to compare, has been introduced to train a classifier by modeling the class-agnostic relation between a query image and the support images. Once trained and provided with a few novel support images, the model can perform inference on novel query images without further updating. For few-shot object detection, such a relation has so far only been utilized in the classification branch, and only in very few works. For example, Fan \textit{et al.}~ proposed a multi-relation classification network, which consists of global, local, and patch relation branches \cite{fan2020few}. Zhang \textit{et al.}~ leveraged the general relation network \cite{sung2018learning} architecture to build multi-level proposal scoring and support weighting modules \cite{Zhang2021CVPR}. In this work, we thoroughly explore such relations in few-shot detection and propose a fully relation-based architecture.
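To make the learning-to-compare idea concrete, the following is a minimal PyTorch-style sketch of a relation head that scores query-support embedding pairs; the module name, layer sizes, and pairing scheme are our own illustrative assumptions rather than the exact architecture of \cite{sung2018learning}.

\begin{verbatim}
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    # Learning to compare: score the relation between query and support embeddings.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim),
                                 nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, query, support):
        # query: (Q, dim), support: (S, dim) -> relation scores of shape (Q, S)
        q = query.unsqueeze(1).expand(-1, support.size(0), -1)
        s = support.unsqueeze(0).expand(query.size(0), -1, -1)
        return self.net(torch.cat([q, s], dim=-1)).squeeze(-1)
\end{verbatim}

Because such a head only compares embeddings, it is class-agnostic: once trained on base classes, it can score novel support-query pairs without any parameter update.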
\subsection{Multi-Scale Feature Extraction} Multi-scale features have been extensively exploited for multi-scale objects in general object detection \cite{liu2016ssd,Shen2017dsod,Kong2016HyperNet,yolo2,lin2017fpn,li2017fssd}. For example, FSSD \cite{li2017fssd} proposed to fuse multi-scale features and implement detection on the fused feature map. Lin \textit{et al.}~ constructed the feature pyramid network (FPN) \cite{lin2017fpn}, which builds a top-down architecture and employs multi-scale feature maps for detection. For few-shot detection, the standard FPN \cite{lin2017fpn} has been widely adopted in prior transfer-learning-based methods \cite{wang2020frustratingly,zhu2021semantic,sun2021fsce,wu2020multi}. In meta-learning, an existing meta-learner \cite{Zhang2021CVPR} employs all scales from the FPN and implements detection on each scale in parallel, which is computationally inefficient. \section{Preliminary} In few-shot object detection \cite{yan2019meta,xiao2020few,wu2020multi,Deng2009imagenet}, the classes are divided into $B$ base classes $\mathcal{C}_{\rm{b}}$ and $N$ novel ones $\mathcal{C}_{\rm{n}}$, satisfying $\mathcal{C}_{\rm{b}}\cap\mathcal{C}_{\rm{n}}=\varnothing$. The objective is to train a model that can detect the novel classes in $\mathcal{C}_{\rm{n}}$ given only $k$-shot labeled samples for $\mathcal{C}_{\rm{n}}$ and abundant images from the base classes $\mathcal{C}_{\rm{b}}$. During training, we adopt the episodic paradigm \cite{yan2019meta}. Basically, images from the base classes $\mathcal{C}_{\rm{b}}$ are split into query images $\mathbf{Q}_{{\rm{b}}}$ and support images $\mathbf{S}_{{\rm{b}}}$. Given all support images $\mathbf{S}_{{\rm{b}}}$, the model learns to detect objects in the query images $\mathbf{Q}_{{\rm{b}}}$. During testing, the model detects objects in novel query images $\mathbf{Q}_{{\rm{n}}}$ given only a few (1-5) labeled novel support images $\mathbf{S}_{{\rm{n}}}$. \noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: Most existing methods \cite{wang2020frustratingly,zhu2021semantic,sun2021fsce,yan2019meta,xiao2020few,wu2020multi,wu2021universal,cao2021nips,qiao2021defrcn} have to be fine-tuned on $\mathbf{S}_{{\rm{n}}}$ due to their class-specific model design, while AirDet can be applied directly to $\mathbf{Q}_{{\rm{n}}}$ by providing $\mathbf{S}_{{\rm{n}}}$, without fine-tuning. \section{Methodology} \begin{figure*}[!t] \centering \includegraphics[width=1\textwidth]{main_fig.pdf} \caption{The pipeline of the autonomous exploration task and the framework of AirDet. During exploration, a few prior raw images that potentially contain novel objects (helmet) are sent to a human user first. Provided with online annotated few-shot data, the robot explorer is able to detect those objects by observing its surrounding environment. AirDet includes four modules, \textit{i.e.}, the shared backbone, the support-guided cross-scale (SCS) feature fusion module for region proposals, the global-local relation (GLR) module for shots aggregation, and the relation-based detection head, which are visualized in different colors. } \label{fig:main} \end{figure*} Since only a few shots are given during testing, the information from the support images is scarce but valuable. We believe that the major limitation of the existing algorithms is that such information from the support images is not fully exploited. Therefore, we propose to learn \textit{class-agnostic relation} with the support images in all the modules of AirDet.
As exhibited in \fref{fig:main}, the structure of AirDet is simple: except for the shared backbones, it only consists of three modules, \textit{i.e.}, a support-guided cross-scale fusion (SCS) module for region proposals, a global-local relation (GLR) module for shots aggregation, and a relation-based detection head, containing a prototype relation embedding (PRE) module for location regression and a multi-relation classifier \cite{fan2020few}. We next introduce two kinds of \textit{class-agnostic relation}, which will be used by the three modules. \subsection{Class-Agnostic Relation} To exploit the relation between two features from different aspects, we define two relation modules, \textit{i.e.}, the spatial relation $\mathcal{R}_{\rm{s}}(\cdot, \cdot)$ and the channel relation $\mathcal{R}_{\rm{c}}(\cdot, \cdot)$. \myparagraph{1. Spatial Relation:} Object features from the same category are often correlated along the spatial dimension; thus, we define the spatial relation feature $\mathcal{R}_{\rm{s}}$ in \eqref{eqn:inner}, leveraging regular and depth-wise convolutions. \begin{equation}\label{eqn:inner} \mathcal{R}_{\rm{s}}(\mathbf{A}, \mathbf{B}) = \mathbf{A} \odot \mathrm{MLP}\Big(\mathrm{Flatten}\big(\mathrm{Conv}(\mathbf{B})\big)\Big), \end{equation} where the inputs $\mathbf{A}, \mathbf{B}\in\mathbb{R}^{C\times W\times H}$ denote two general tensors, $\mathrm{Flatten}$ flattens the features in the spatial (image) domain, and $\mathrm{MLP}$ denotes a multilayer perceptron (MLP), so that $\rm{MLP}\Big(\rm{Flatten}\big(\rm{Conv}(\mathbf{B})\big)\Big) \in \mathbb{R}^{C\times 1\times 1}$. $\odot$ indicates depth-wise convolution \cite{fan2020few}. Note that we use convolution to calculate correlation since both operators are composed of inner products. \myparagraph{2. Channel Relation:} Inspired by the phenomenon that features of different classes are often stored in different channels \cite{li2019siamrpn}, we propose a simple but effective channel relation $\mathcal{R}_{\rm{c}}(\cdot, \cdot)$ in \eqref{eqn:channel} to extract the cross-class relation features. \begin{equation}\label{eqn:channel} \mathcal{R}_{\rm{c}}(\mathbf{A}, \mathbf{B}) = \mathrm{{Conv}}\big(\mathrm{Cat}(\mathbf{A}, \mathbf{B})\big) + \mathrm{Cat}\big(\mathrm{{Conv}}(\mathbf{A}), \mathrm{{Conv}}(\mathbf{B})\big), \end{equation} where $\mathrm{Cat}(\cdot, \cdot)$ concatenates features along the channel dimension. \noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: The two simple but effective \textit{class-agnostic relation} learners are fundamental building blocks of AirDet, which, to the best of our knowledge, is the first attempt towards a fully relation-based structure in few-shot detection. \subsection{Support-guided Cross-Scale Fusion (SCS) for Object Proposal} As mentioned earlier, existing works generate object proposals using only single-scale information from query images \cite{kang2019few,xiao2020few,wang2019meta,wu2020multi}, while such a strategy may not be effective for small-scale novel objects. In contrast, we propose support-guided cross-scale fusion (SCS) in AirDet to introduce multi-scale features and exploit the relation between query and support images for region proposals. As shown in \fref{fig:scs} (a), SCS takes support and query features from different backbone blocks (ResNet2, 3, and 4 blocks) as input. We first apply the \textit{spatial relation}, where the query and support features from the same backbone block serve as $\mathbf{A}$ and $\mathbf{B}$ in \eqref{eqn:inner}, respectively.
Then we use the \textit{channel relation} to fuse the ResNet2 and ResNet3 block features, which serve as $\mathbf{A}$ and $\mathbf{B}$ in \eqref{eqn:channel}, respectively. The fused channel relation feature is later merged with the spatial relation feature from the ResNet4 block. The final merged feature is sent to the region proposal network (RPN) \cite{faster} to generate region proposals. \begin{figure}[ht] \centering \includegraphics[width=1\columnwidth]{SCS_GLR.pdf} \caption{Network architecture of SCS for region proposals and GLR for shots aggregation.} \label{fig:scs} \end{figure} \subsection{Global-Local Relation (GLR) for Shots Aggregation} In prior attempts \cite{yan2019meta,kang2019few,xiao2020few,wu2020multi}, the support object features from multiple shots are usually averaged to represent the class prototype, which is then used for regression and classification. Although this can be effective with fine-tuning, we argue that a simple average cannot fully utilize the information from the few-shot data. To this end, we build a global-local relation (GLR) module in \fref{fig:scs} (b), which leverages the features from every shot to construct the final prototype. Suppose the $k$-shot deepest support features are $\phi({\mathbf{s}^i})$,~$i=1,\cdots,k$; our final class prototype $\mathbf{e}$ can then be expressed as a weighted average of the features: \begin{equation} \mathbf{e}=\sum_{i=1}^{k}(\phi({\mathbf{s}^i})\otimes\mathbf{M}^i), \end{equation} where $\otimes$ is the element-wise multiplication, and $\mathbf{M}^i$ is a confidence map: \begin{equation} \mathbf{M}^{i}=\mathrm{SoftMax}\Big(\mathrm{MLP}\big(\mathbf{f}^{i}\big)\Big), \end{equation} where $\mathbf{f}^{i}$ is the output from the channel relation extractor: \begin{equation}\label{eq:shot-relation} \mathbf{f}^{i} = \mathcal{R}_{\rm{c}}\left(\mathrm{Conv}(\phi(\mathbf{s}^i)), \frac{1}{k}\sum_{i=1}^{k}\mathrm{Conv}(\phi(\mathbf{s}^i))\right). \end{equation} Note that to include both ``global" (all shots) and ``local" (single shot) features, the inputs of the channel relation extractor in \eqref{eq:shot-relation} include both the feature from that shot and the features averaged over all shots. \noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: Unlike prior work \cite{yan2019meta,kang2019few,xiao2020few,wu2020multi} relying on fine-tuning with more support data for performance gain, AirDet can extract a stronger prototype to achieve improvements with more shots without fine-tuning. \subsection{Prototype Relation Embedding (PRE) for Location Regression} It has been demonstrated that a multi-relation network \cite{fan2020few} is effective for the classification branch. Inspired by its success, we further build a prototype relation embedding (PRE) network for the location regression branch. Given a prototype exemplar $\mathbf{e}\in\mathbb{R}^{C\times a\times a}$, we utilize the spatial relation \eqref{eqn:inner} to embed information from the exemplar into the proposal features $\mathbf{p}^j$,~$j= 1,2,\cdots,p$ as: \begin{equation} \mathbf{l}^j = \mathbf{p}^j + \mathcal{R}_{\rm{s}}(\mathbf{p}^j, \mathbf{e}), \end{equation} where we take a $3\times 3$ convolution layer in \eqref{eqn:inner} for spatial feature extraction. The proposal features $\mathbf{l}^j$ are then used for bounding box regression through an MLP module following Faster R-CNN \cite{faster}. An illustrative sketch of these relation-based operations is given below.
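As a purely illustrative aid, the following PyTorch-style sketch instantiates the spatial relation \eqref{eqn:inner}, the channel relation \eqref{eqn:channel}, and the PRE embedding; the module names, channel sizes, the fixed support spatial size, and the choice of $1\times1$ convolutions in the channel relation are our own assumptions and not the released AirDet implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class SpatialRelation(nn.Module):
    # Sketch of the spatial relation: R_s(A, B) = A (depth-wise conv) MLP(Flatten(Conv(B))).
    # Assumes the support feature B has a fixed spatial size a x a.
    def __init__(self, channels, a=7):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(nn.Linear(channels * a * a, channels),
                                 nn.ReLU(),
                                 nn.Linear(channels, channels))

    def forward(self, A, B):
        # A: (N, C, H, W) query/proposal features; B: (N, C, a, a) support features.
        n, c = A.shape[:2]
        kernel = self.mlp(torch.flatten(self.conv(B), 1))   # (N, C), one scalar per channel
        # A 1x1 depth-wise convolution with these kernels reduces to per-channel scaling.
        return A * kernel.view(n, c, 1, 1)

class ChannelRelation(nn.Module):
    # Sketch of the channel relation: R_c(A, B) = Conv(Cat(A, B)) + Cat(Conv(A), Conv(B)).
    def __init__(self, channels):
        super().__init__()
        self.conv_cat = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)
        self.conv_a = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv_b = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, A, B):
        cat = torch.cat([A, B], dim=1)
        return self.conv_cat(cat) + torch.cat([self.conv_a(A), self.conv_b(B)], dim=1)

# PRE for location regression: l_j = p_j + R_s(p_j, e), with e the class prototype.
spatial = SpatialRelation(channels=256, a=7)
p = torch.randn(100, 256, 7, 7)                        # proposal features (illustrative)
e = torch.randn(1, 256, 7, 7).expand(100, -1, -1, -1)  # prototype broadcast to all proposals
l = p + spatial(p, e)                                  # features fed to the regression MLP
\end{verbatim}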
\noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: The class-related feature $\mathbf{l}^j$ contains information from the support objects, which turns out to be more effective for location regression even if the objects have never been seen in the training set. \section{Experiments} \subsection{Implementation}\label{sec:Imple} We adopt the training pipeline from \cite{fan2020few}. To maintain a fair comparison with other methods \cite{wang2020frustratingly,xiao2020few,wu2020multi,faster}, we mainly adopt ResNet101 \cite{He2016res} pre-trained on ImageNet \cite{Deng2009imagenet} as the backbone. The performance of other backbones is presented in \appref{sec:backbone}. For a fair comparison with \cite{wang2020frustratingly,xiao2020few,wu2020multi,fan2020few,faster}, we utilized their official implementations, support examples, and models (if provided) in all the experiments. AirDet and the baseline \cite{fan2020few} take the \textbf{same} supports in all the settings. We use 4 NVIDIA GeForce Titan-X Pascal GPUs for the experiments. The detailed configuration of AirDet can be found in \appref{sec:config} and the source code. \noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: To save human effort, only very few support examples (1-5 samples per class) can be provided during online exploration. Therefore, we mainly focus on $k=1, 2, 3, 5$-shot evaluation. Since the objects encountered during exploration are usually unseen, we only test novel classes throughout the experiments. \subsection{In-domain Evaluation}\label{sec:indomain} We first present the in-domain evaluation on the COCO benchmark \cite{lin2014microsoft}, where the models are trained and tested on the same dataset. Following prior works \cite{yan2019meta,kang2019few,wu2020multi,fan2020few,wang2020frustratingly,xiao2020few,wu2021universal,cao2021nips,fan2021generalized,sun2021fsce,zhu2021semantic}, the 80 classes are split into 60 non-VOC base classes and 20 novel classes. During training, the base class images from COCO trainval2014 are considered available. With few-shot samples per novel class, the models are evaluated on 5,000 images from the COCO val2014 dataset. \begin{table}[!t] \centering \setlength{\tabcolsep}{0.2mm} \fontsize{5.5}{6.5}\selectfont \caption{Performance comparison on the COCO validation dataset. In each setting, \red{red} and \green{green} fonts denote the best and second-best performance, respectively. AirDet achieves a significant performance gain over the baseline without fine-tuning. With fine-tuning, AirDet sets a new SOTA performance.
$^\dag$We randomly sampled 3-5 different groups of support examples and reported the average performance and their standard deviation.} \begin{threeparttable} \begin{tabular}{cc|ccc|ccc|ccc|ccc} \toprul \multicolumn{2}{c|}{Shots} & \multicolumn{3}{c|}{1} & \multicolumn{3}{c|}{2} & \multicolumn{3}{c|}{3} & \multicolumn{3}{c}{5} \\ \midrule Method & Fine-tune & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\ \multirow{2}{*}{A-RPN \cite{fan2020few}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & 4.32 & 7.62 & 4.3 & 4.67 & 8.83 & 4.49 & 5.28 & 9.95 & 5.05 & 6.08 & 11.17 & 5.88 \\ & & $\pm$0.7 &$\pm$1.3 & $\pm$0.7 & $\pm$0.3 & $\pm$0.5 & $\pm$0.3 & $\pm$0.6 & $\pm$0.8 & $\pm$0.6 & $\pm$0.3 & $\pm$0.4 & $\pm$0.3 \\ \cmidrule{1-14} \multirow{2}{*}{\textbf{AirDet (Ours)}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \red{\textbf{5.97}} & \red{\textbf{10.52}} & \red{\textbf{5.98}} & \red{\textbf{6.58}} & \red{\textbf{12.02}} & \red{\textbf{6.33}} & \red{\textbf{7.00}} & \red{\textbf{12.95}} & \red{\textbf{6.71}} & \red{\textbf{7.76}} & \red{\textbf{14.28}} & \red{\textbf{ 7.31}} \\ & & \textbf{$\pm$0.4} &\textbf{$\pm$0.9} &\textbf{ $\pm$0.2} & \textbf{$\pm$0.2} & \textbf{$\pm$0.4} & \textbf{$\pm$0.2} & \textbf{$\pm$0.5} & \textbf{$\pm$0.8} & $\pm$0.7& \textbf{$\pm$0.3} & \textbf{$\pm$0.4} & $\pm$0.4 \\ \midrule FRCN \cite{faster} & \checkmark & 3.26 & 6.66 & 3.04 & 3.73 & 7.79 & 3.22 & 4.59 & 9.52 & 4.07 & 5.32 & 11.20 & 4.54 \\ TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & \checkmark & 2.78 & 5.39 & 2.36 & 4.14 & 7.98 & 4.01 & 6.33 & 12.10 & 5.94 & 7.92 & 15.58 & 7.29 \\ TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & \checkmark & 3.09 & 5.24 & 3.21 & 4.21 & 7.70 & 4.35 & 6.05 & 11.48 & 5.93 & 7.61 & 14.56 & 7.17 \\ FSDetView \cite{xiao2020few} & \checkmark & 2.20 & 6.20 & 0.90 & 3.40 & 10.00 & 1.50 & 5.20 & 14.70 & 2.10 & 8.20 & \red{21.60} & 4.70 \\ MPSR \cite{wu2020multi} & \checkmark & 3.34 & 6.11 & 3.25 & 5.41 & 9.68 & 5.52 & 5.70 & 10.54 & 5.50 & 7.20 & 13.55 & 6.89 \\ A-RPN \cite{fan2020few} & \checkmark & {4.59} & {8.85} & {4.37} & {6.15} & {12.05} & {5.76} & {8.24} & {15.52} & {7.92} & {9.02} & 17.29 & {8.53} \\ W. Zhang \textit{et al.}~ \cite{zhang2021hallucination} & \checkmark & {4.40} & {7.50} & {4.90} & {5.60} & {9.90} & {5.90} & {7.20} & {13.30} & 7.40 & - & - & - \\ FADI \cite{cao2021nips} & \checkmark & \green{5.70} & \green{10.40} & \green{6.00} & \green{7.00} & \green{13.01} & \green{7.00} & \green{8.60} & \green{15.80} & \green{8.30} & \green{10.10} & 18.60 & \red{11.90} \\ \textbf{AirDet (Ours)} & \checkmark & \textbf{\red{6.10}} & \textbf{\red{11.40}} & \textbf{\red{6.04}} & \textbf{\red{8.73}} & \textbf{\red{16.24}} & \textbf{\red{8.35}} & \textbf{\red{9.95}} & \textbf{\red{19.39}} & \textbf{\red{9.09}} & \textbf{\red{10.81}} &\green{\textbf{20.75}} & \textbf{\green{10.27}} \\ \bottomrul \end{tabular}\label{tab:coco}% \end{threeparttable} \end{table}% \myparagraph{Overall Performance} As shown in \tref{tab:coco}, AirDet achieves significant performance gain on the baseline \cite{fan2020few}. AirDet without fine-tuning amazingly also achieves comparable or even better results than many fine-tuned methods. With fine-tuning, AirDet outperformed existing SOTAs \cite{fan2020few,wang2020frustratingly,xiao2020few,wu2020multi,faster,cao2021nips,zhang2021hallucination}. 
Since the results without fine-tuning may be sensitive to support images, we report the averaged performance, and the standard deviation on 3-5 randomly sampled support images, where we surprisingly find AirDet more robust to the variance of support images compared with the baseline \cite{fan2020few}. \myparagraph{Multi-scale Objects} \begin{table*}[!t] \centering \setlength{\tabcolsep}{0.2mm} \fontsize{5.5}{6.5}\selectfont \caption{Performance evaluation on multi-scale objects from COCO. Highest-ranking and second-best scores are marked out with \red{red} and \green{green}, respectively. Without fine-tuning, AirDet can avoid over-fitting and shows robustness on small-scale objects. By virtue of the SCS module, AirDet can achieve higher results than those with FPN.} \begin{threeparttable} \begin{tabular}{ccc|ccc|ccc|ccc|ccc} \toprul \multicolumn{3}{c|}{Shots} & \multicolumn{3}{c|}{1} & \multicolumn{3}{c|}{2} & \multicolumn{3}{c|}{3} & \multicolumn{3}{c}{5} \\ \midrule Method & FPN & Fine-tune & AP$_s$ & AP$_m$ & AP$_l$ & AP$_s$ & AP$_m$ & AP$_l$ & AP$_s$ & AP$_m$ & AP$_l$ & AP$_s$ & AP$_m$ & AP$_l$ \\ \multirow{2}{*}{A-RPN \cite{fan2020few}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \multirow{2}{*}{\text{\ding{55}}} & 2.43 & 5.00 & 6.74 & 2.67 & 5.01 & 7.18 & 3.42 & 6.15 & 8.77 & 3.54 & 6.73 & 9.97 \\ & & & $\pm$0.4 &$\pm$1.0 & $\pm$1.1 & $\pm$0.3 & $\pm$0.3 & $\pm$0.4 & $\pm$0.2 & $\pm$0.5 & $\pm$0.8 & $\pm$0.3 & $\pm$0.03 & $\pm$0.2 \\ \midrule \multirow{2}{*}{\textbf{AirDet (Ours)}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \multirow{2}{*}{\text{\ding{55}}} & \red{\textbf{2.85}} & \red{\textbf{6.33}} & \red{\textbf{9.00}} & \red{\textbf{4.00}} & \red{\textbf{6.84}} & \red{\textbf{9.94}} & \red{\textbf{4.13}} & \red{\textbf{7.95}} & \red{\textbf{11.30}} & \red{\textbf{4.22}} & \red{\textbf{8.24}} & \red{\textbf{12.90}} \\ & & & \textbf{$\pm$0.3} &\textbf{$\pm$0.7} &\textbf{ $\pm$0.8} & \textbf{$\pm$0.3} & \textbf{$\pm$0.1} & \textbf{$\pm$0.3} & \textbf{$\pm$0.1} & \textbf{$\pm$0.5} & $\pm$0.9 & \textbf{$\pm$0.2} & $\pm$0.04 & $\pm$0.5 \\ \midrule FRCN \cite{faster} & \checkmark & \checkmark & 1.05 & 3.68 & 5.41 & 0.94 & 4.39 & 6.42 & 1.12 & 5.11 & 7.83 & 1.99 & 5.30 & 8.84 \\ TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & \checkmark & \checkmark & 1.06 & 2.71 & 4.38 & 1.17 & 4.02 & 7.05 & 1.97 & 5.48 & 11.09 & 2.40 & 6.86 & 12.86 \\ TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & \checkmark & \checkmark & 1.07 & 2.78 & 5.12 & 1.64 & 4.12 & 7.27 & 2.34 & 5.48 & 10.43 & 2.82 & 6.70 & 12.21 \\ FSDetView \cite{wang2020frustratingly} & \text{\ding{55}} & \checkmark & 0.70 & 2.70 & 3.70 & 0.60 & 4.00 & 4.20 & 1.80 & 5.10 & 8.00 & \green{3.00} & 9.70 & 12.30 \\ MPSR \cite{wu2020multi} & \checkmark & \checkmark & 1.23 & 3.82 & 5.58 & 1.89 & 5.69 & 8.73 & 0.86 & 4.60 & 9.96 & 1.62 & 6.78 & 11.66 \\ A-RPN \cite{fan2020few} & \text{\ding{55}} & \checkmark & \green{1.74} & \green{5.17} & \green{6.96} & \green{2.20} & \green{7.55} & \green{10.49} & \green{2.72} & \green{9.51} & \green{14.74} & 2.92 & \green{10.67} & \green{16.08} \\ \textbf{AirDet (Ours)} & \text{\ding{55}} & \checkmark & \red{\textbf{3.05}} & \red{\textbf{6.40}} & \red{\textbf{10.03}} & \red{\textbf{4.00}} & \red{\textbf{9.65}} & \red{\textbf{13.91}} & \red{\textbf{3.46}} & \red{\textbf{11.44}} & \red{\textbf{16.04}} & \red{\textbf{3.27}} & \red{\textbf{11.20}} & \red{\textbf{18.64}} \\ \bottomrul \end{tabular}\label{tab:coco_scale}% \end{threeparttable} \end{table*}% We next report the performance of methods 
\cite{wang2020frustratingly,xiao2020few,wu2020multi,fan2020few,faster} and AirDet on multi-scale objects in \tref{tab:coco_scale}. Thanks to SCS, AirDet achieves the highest performance for multi-scale objects among all the SOTAs. Especially for small objects, given 5-shots, AirDet can achieve a surprising \textbf{4.22} AP$_s$, nearly doubling the fine-tuned methods with multi-scale FPN features \cite{wang2020frustratingly,wu2020multi}. \myparagraph{Comparison of 10-Shot} \begin{table*}[!t] \centering \setlength{\tabcolsep}{0.1mm} \fontsize{5}{6.5}\selectfont \caption{Performance comparison with 10-shot on COCO validation dataset. \red{Red} and \green{green} fonts indicate best and second-best scores, respectively. AirDet achieves comparable results without fine-tuning and outperforms most methods with fine-tuning, which strongly demonstrates its effectiveness.} \begin{threeparttable} \begin{tabular}{ccccccccccccccc} \toprule[1pt] Method & Venue & Fine-tune & AP & AP$_{50}$ & AP$_{75}$ & AP$_s$ & AP$_m$ & AP$_l$ & AR$_{1}$ & AR$_{10}$ & AR$_{100}$ & AR$_s$ & AR$_m$ & AR$_l$ \\ \midrule LSTD \cite{chen2018lstd} & AAAI 2018 & \checkmark & 3.2 & 8.1 & 2.1 & 0.9 & 2.0 & 6.5 & 7.8 & 10.4 & 10.4 & 1.1 & 5.6 & 19.6 \\ MetaDet \cite{wang2019meta} & ICCV 2019 & \checkmark & 7.1 & 14.6 & 6.1 & 1.0 & 4.1 & 12.2 & 11.9 & 15.1 & 15.5 & 1.7 & 9.7 & 30.1 \\ FSRW \cite{kang2019few} & ICCV 2019 & \checkmark & 5.6 & 12.3 & 4.6 & 0.9 & 3.5 & 10.5 & 10.1 & 14.3 & 14.4 & 1.5 & 8.4 & 28.2 \\ Meta RCNN \cite{yan2019meta}& ICCV 2019 & \checkmark & 8.7 & 19.1 & 6.6 & 2.3 & 7.7 & 14.0 & 12.6 & 17.8 & 17.9 & 7.8 & 15.6 & 27.2 \\ TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & ICML 2020 & \checkmark & 9.1 & 17.3 & 8.5 & - & - & - & - & - & - & - & - & - \\ TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & ICML 2020 & \checkmark & 9.1 & 17.1 & 8.8 & - & - & - & - & - & - & - & - & - \\ FSDetView \cite{xiao2020few}& ECCV 2020 & \checkmark & 12.5 & \red{27.3} & 9.8 & 2.5 & 13.8 & 19.9 & 20.0 & 25.5 & 25.7 & 7.5 & 27.6 & 38.9 \\ MPSR \cite{wu2020multi} & ECCV 2020 & \checkmark & 9.8 & 17.9 & 9.7 & 3.3 & 9.2 & 16.1 & 15.7 & 21.2 & 21.2 & 4.6 & 19.6 & 34.3 \\ A-RPN \cite{fan2020few} & CVPR 2020 & \checkmark & 11.1 & 20.4 & 10.6 & - & - & - & - & - & - & - & - & - \\ SRR-FSD \cite{zhu2021semantic}& CVPR 2021 & \checkmark & 11.3 & 23.0 & 9.8 & - & - & - & - & - & - & - & - & - \\ FSCE \cite{sun2021fsce} & CVPR 2021 & \checkmark & 11.9 & - & 10.5 & - & - & - & - & - & - & - & - & - \\ DCNet \cite{Hu2021CVPR} & CVPR 2021 & \checkmark & \green{12.8} & 23.4 & 11.2 & 4.3 & \green{13.8} & \green{21.0} & 18.1 & 26.7 & 25.6 & 7.9 & 24.5 & 36.7 \\ Y. 
Li \textit{et al.}~ \cite{li2021few} & CVPR 2021 & \checkmark & 11.3 & 20.3 & - & - & - & - & - & - & - & - & - & - \\ FADI \cite{cao2021nips} & NIPS 2021 & \checkmark & 12.2 & 22.7 & \green{11.9} & - & - & - & - & - & - & - & - & - \\ QA-FewDet \cite{han2021query} & ICCV 2021 & \checkmark & 11.6 & 23.9 & 9.8 & - & - & - & - & - & - & - & - & - \\ FSOD$^{up}$ \cite{wu2021universal} & ICCV 2021 & \checkmark & 11.0 & - & 10.7 & \red{4.5} & 11.2 & 17.3 & - & - & - & - & - & - \\ \midrule \textbf{AirDet} & \textbf{Ours} & \text{\ding{55}} & 8.7 & 15.3 & 8.8 & 4.3 & 9.7 & 14.8 & \green{\textbf{19.1}} & \red{\textbf{33.8}} & \red{\textbf{34.8}} & \red{\textbf{13.0}} & \red{\textbf{37.4}} & \green{\textbf{52.9}} \\ \textbf{AirDet} & \textbf{Ours} & \checkmark & \red{\textbf{13.0}} & \green{\textbf{23.9}} & \red{\textbf{12.4}} & \red{\textbf{4.5}} & \red{\textbf{15.2}} & \red{\textbf{22.8}} & \red{\textbf{20.5}} & \green{\textbf{33.7}} & \green{\textbf{34.4}} & \green{\textbf{9.6}} & \green{\textbf{36.4}} & \red{\textbf{55.0}} \\ \bottomrule[1pt] \end{tabular}\label{tab:10shot}% \end{threeparttable} \end{table*}% For a more thorough comparison, we present the 10-shot evaluation on the COCO validation dataset in \tref{tab:10shot}. Without fine-tuning, AirDet can surprisingly achieve comparable performance against recent work \cite{wu2020multi,wang2020frustratingly,yan2019meta}, while all of them require a careful fine-tuning stage. Moreover, our fine-tuned model outperforms most prior methods \cite{li2021few,cao2021nips,han2021query,wu2021universal,Hu2021CVPR,sun2021fsce,zhu2021semantic,fan2020few,wu2020multi,xiao2020few,wang2020frustratingly,yan2019meta,kang2019few,wang2019meta,chen2018lstd} in most metrics, especially average recall rate (AR). Besides, the performance superiority on small objects (AP$_s$ and AR$_s$) further demonstrates the effectiveness of AirDet on multi-scale, especially the small-scale objects. \myparagraph{Efficiency Comparison} We report the fine-tuning and inference time of AirDet, and the SOTA methods \cite{fan2020few,wang2020frustratingly,xiao2020few,wu2020multi} in a setting of 3-shot one class in \tref{tab:time}, in which the official code and implementation with ResNet101 as the backbone are adopted. Without fine-tuning, AirDet can make direct inferences on novel objects with a comparable speed, while the others methods \cite{fan2020few,xiao2020few,wu2020multi,wang2020frustratingly} require a fine-tuning time of about 3-30 minutes, which cannot meet the requirements of online exploration. Note that the fine-tuning time is measured on TITAN X GPU, while such computational power is often unavailable on robots. \noindent\textbf{Remark}~\refstepcounter{RNum}\textbf{\theRNum}: Many methods \cite{li2021few,cao2021nips,han2021query,wu2021universal,Hu2021CVPR,sun2021fsce,zhu2021semantic,fan2020few,wu2020multi,xiao2020few,wang2020frustratingly,yan2019meta,kang2019few,wang2019meta,chen2018lstd} also require an offline process to fine-tune hyper-parameters for different shots. While such \textit{off-line} tuning is infeasible for robotic \textit{online} exploration. Instead, AirDet can adopt \textbf{the same} base-trained model without fine-tuning for implementation. \begin{table}[!t] \centering \setlength{\tabcolsep}{0.1mm} \caption{Efficiency comparison with official source code. 
We adopt the pre-trained models provided by \cite{wang2020frustratingly}, so their fine-tuning time is unavailable.} \fontsize{5}{7.5}\selectfont \begin{tabular}{cccccccc} \toprule Method & \textbf{AirDet} & \multicolumn{1}{l}{A-RPN} \cite{fan2020few} & \multicolumn{1}{l}{FSDet} \cite{xiao2020few}& \multicolumn{1}{l}{MPSR} \cite{wu2020multi} & \multicolumn{1}{l}{TFA$_{\mathrm{fc}}$} \cite{wang2020frustratingly} & \multicolumn{1}{l}{TFA$_{\mathrm{cos}}$} \cite{wang2020frustratingly} & \multicolumn{1}{l}{FRCN$_{\mathrm{ft}}$} \cite{wang2020frustratingly} \\ \midrule Fine-tuning (min) & \textbf{0} & 21 & 11 & 3 & - & - & - \\ Inference (s/img) & \textbf{0.081} & 0.076 & 0.202 & 0.109 & 0.085 & 0.094 & 0.091 \\ \bottomrule \end{tabular}% \label{tab:time}% \end{table}% \begin{table}[!t] \centering \setlength{\tabcolsep}{0.2mm} \fontsize{5.5}{6.5}\selectfont \caption{Cross-domain performance on VOC-2012 validation dataset. \red{Red} and \green{green} fonts denote the first and second place, respectively. AirDet has been demonstrated strong generalization capability, maintaining obvious superiority against others.} \begin{threeparttable} \begin{tabular}{cc|ccc|ccc|ccc|ccc} \toprul \multicolumn{2}{c|}{Shots} & \multicolumn{3}{c|}{1} & \multicolumn{3}{c|}{2} & \multicolumn{3}{c|}{3} & \multicolumn{3}{c}{5} \\ \midrule Method & Fine-tune & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\ \multirow{2}{*}{A-RPN \cite{fan2020few}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & 10.45 & 18.10 & 10.32 & 13.10 & 22.60 & 13.17 & 14.05 & 24.08 & 14.24 & 14.87 & 25.03 & 15.26 \\ & & $\pm$0.1 &$\pm$0.1 & $\pm$0.1 & $\pm$0.2 & $\pm$0.4 & $\pm$0.2 & $\pm$0.2 & $\pm$0.2 & $\pm$0.2 & $\pm$0.08 & $\pm$0.07 & $\pm$0.1 \\ \midrule \multirow{2}{*}{\textbf{AirDet (Ours)}}$\dag$ & \multirow{2}{*}{\text{\ding{55}}} & \red{\textbf{11.92}} & \red{\textbf{21.33}} & \red{\textbf{11.56}} & \red{\textbf{15.80}} & \red{\textbf{26.80}} & \red{\textbf{16.08}} & \red{\textbf{16.89}} & \red{\textbf{28.61}} & \red{\textbf{17.36}} & \red{\textbf{17.83}} & \red{\textbf{29.78}} & \red{\textbf{ 18.38}} \\ & & \textbf{$\pm$0.06} &\textbf{$\pm$0.08} &\textbf{ $\pm$0.08} & \textbf{$\pm$0.08} & \textbf{$\pm$0.3} & \textbf{$\pm$0.05} & \textbf{$\pm$0.1} & \textbf{$\pm$0.1} & \textbf{$\pm$0.1} & \textbf{$\pm$0.03} & \textbf{$\pm$0.03} & \textbf{$\pm$0.1} \\ \midrule FRCN \cite{faster} & \checkmark & \multicolumn{1}{c}{4.49} & \multicolumn{1}{c}{9.44} & \multicolumn{1}{c|}{3.85} & \multicolumn{1}{c}{5.20} & \multicolumn{1}{c}{11.92} & \multicolumn{1}{c|}{3.84} & \multicolumn{1}{c}{6.50} & \multicolumn{1}{c}{14.39} & \multicolumn{1}{c|}{5.11} & \multicolumn{1}{c}{6.55} & \multicolumn{1}{c}{14.48} & \multicolumn{1}{c}{5.09} \\ TFA$_{\mathrm{cos}}$ \cite{wang2020frustratingly} & \checkmark & \multicolumn{1}{c}{4.66} & \multicolumn{1}{c}{7.97} & \multicolumn{1}{c|}{5.14} & \multicolumn{1}{c}{6.59} & \multicolumn{1}{c}{11.91} & \multicolumn{1}{c|}{6.49} & \multicolumn{1}{c}{8.78} & \multicolumn{1}{c}{17.09} & \multicolumn{1}{c|}{8.15} & \multicolumn{1}{c}{10.46} & \multicolumn{1}{c}{20.93} & \multicolumn{1}{c}{9.53} \\ TFA$_{\mathrm{fc}}$ \cite{wang2020frustratingly} & \checkmark & \multicolumn{1}{c}{4.40} & \multicolumn{1}{c}{8.60} & \multicolumn{1}{c|}{4.21} & \multicolumn{1}{c}{7.02} & \multicolumn{1}{c}{13.80} & \multicolumn{1}{c|}{6.21} & \multicolumn{1}{c}{9.24} & \multicolumn{1}{c}{18.48} & \multicolumn{1}{c|}{8.03} & \multicolumn{1}{c}{11.11} & \multicolumn{1}{c}{22.83} & 
\multicolumn{1}{c}{9.78} \\ FSDetView \cite{xiao2020few} & \checkmark & 4.80 & 14.10 & 1.40 & 3.70 & 11.60 & 0.60 & 6.60 & 22.00 & 1.20 & 10.80 & 26.50 & 5.50 \\ MPSR \cite{wu2020multi} & \checkmark & \multicolumn{1}{c}{6.01} & \multicolumn{1}{c}{11.23} & \multicolumn{1}{c|}{5.74} & \multicolumn{1}{c}{8.20} & \multicolumn{1}{c}{15.08} & \multicolumn{1}{c|}{8.22} & \multicolumn{1}{c}{10.08} & \multicolumn{1}{c}{18.29} & \multicolumn{1}{c|}{9.99} & \multicolumn{1}{c}{11.49} & \multicolumn{1}{c}{21.33} & \multicolumn{1}{c}{11.06} \\ A-RPN \cite{fan2020few} & \checkmark & \green{9.49} & \green{17.41} & \green{9.42} & \green{12.71} & \green{23.66} & \green{12.44} & \green{14.89} & \green{26.30} & \green{14.76} & \green{15.09} & \green{28.08} & \green{14.17} \\ \textbf{AirDet (Ours)} & \checkmark & \textbf{\red{13.33}} & \textbf{\red{24.64}} & \textbf{\red{12.68}} & \textbf{\red{17.51}} & \textbf{\red{30.35}} & \textbf{\red{17.61}} & \textbf{\red{17.68}} & \textbf{\red{32.05}} & \textbf{\red{17.34}} & \textbf{\red{18.27}} & \textbf{\red{33.02}} & \textbf{\red{17.69}} \\ \bottomrule[1.2pt] \end{tabular}\label{tab:voc}% \end{threeparttable} \end{table}% \subsection{Cross-domain Evaluation}\label{sec:cross} Robots are often deployed to novel environments that have never been seen during training, thus cross-domain test is crucial for robotic applications. In this section, we adopt the same model trained on COCO, while test on PASCAL VOC \cite{everingham2010pascal} and LVIS \cite{gupta2019lvis} to evaluate the model generalization capability. \myparagraph{PASCAL VOC} We report the overall performance on PASCAL VOC-2012 \cite{everingham2010pascal} for all methods in \tref{tab:voc}. In the cross-domain setting, even without fine-tuning, AirDet achieves better performance than methods \cite{wu2020multi,fan2020few,faster,xiao2020few,wang2020frustratingly} that perform relatively well in in-domain test. This means AirDet has a much stronger generalization capability than most fine-tuned prior methods. \myparagraph{LVIS} We randomly sample LVIS \cite{gupta2019lvis} to form 4 splits of classes, each of which contains 16 different classes. To provide valid evaluation, the classes that have 20 to 200 images are taken for the test. More details can be found in \appref{sec:lvis}. The averaged performance with 5-shot without fine-tuning is presented in \tref{tab:lvis-cross}, where AirDet outperforms the baseline \cite{fan2020few} in every split under all metrics. Since the novel categories in the 4 LVIS splits are more (64 classes in total) and rarer (many of them are uncommon) than the VOC 20 classes, the superiority of AirDet in \tref{tab:lvis-cross} highly demonstrate its robustness under class variance. \begin{table}[!t] \setlength{\tabcolsep}{.6mm} \centering \fontsize{5.5}{6.5}\selectfont \caption{Cross-domain performance of A-RPN \cite{fan2020few} and AirDet on LVIS dataset. 
We report the results for 5-shot without fine-tuning on 4 random splits.} \begin{tabular}{c|cccc|cccc|cccc|cccc} \toprul \multicolumn{1}{c|}{Split} & \multicolumn{4}{c|}{1} & \multicolumn{4}{c|}{2} & \multicolumn{4}{c|}{3} & \multicolumn{4}{c}{4} \\ \midrule Metrict & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ & AP & AP$_{50}$ & AP$_{75}$ & AR$_{10}$ \\ \textbf{AirDet} & \textbf{6.71} & \textbf{12.31} & \textbf{6.51} & \textbf{27.57} & \textbf{9.35} & \textbf{14.23} & \textbf{9.98} & \textbf{25.42} & \textbf{9.09} & \textbf{15.64} & \textbf{8.82} & \textbf{34.64} & \textbf{11.07} & \textbf{16.90} & \textbf{12.30} & \textbf{25.76} \\ A-RPN & 5.49 & 10.04 & 5.27 & 26.59 & 8.85 & 13.41 & 9.46 & 24.45 & 7.49 & 12.34 & 8.13 & 33.85 & 10.80 & 15.46 & 12.24 & 25.05 \\ \bottomrul \end{tabular}% \label{tab:lvis-cross}% \end{table}% \subsection{Ablation Study and Deep Visualization}\label{sec:abla} In this section, we address the effectiveness of the proposed three modules via quantitative results and qualitative visualization using Grad-Cam \cite{gradcam}. \myparagraph{Quantitative Evaluation} We report the overall performance on 3-shot and 5-shot for the baseline \cite{fan2020few} and AirDet by enabling the three modules, respectively. It can be seen in \tref{tab:ABLA} that AirDet outperforms the baseline in all cases. With the modules enabled one by one, the results get gradually higher, which strongly demonstrates the necessity and effectiveness of SCS, GLR, and PRE. \begin{table}[!t] \centering \setlength{\tabcolsep}{0.2mm} \fontsize{5.5}{6.5}\selectfont \caption{Ablation study of the three modules, \textit{i.e.}, PRE, GLR, and SCS in AirDet. With each module enabled, the performance is improved step by step on our baseline. With the full modules, AirDet can amazingly achieve up to \textbf{35\%} higher results.} \begin{tabular}{ccc|cccccc|cccccc} \toprul \multicolumn{3}{c|}{Module} & \multicolumn{5}{c}{3} & & \multicolumn{6}{c}{5} \\ \midrule PRE & GLR & SCS & \multicolumn{1}{c}{AP} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{50}$} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{75}$} & $\Delta\%$ & \multicolumn{1}{c}{AP} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{50}$} & $\Delta\%$ & \multicolumn{1}{c}{AP$_{75}$} & $\Delta\%$ \\ \multicolumn{3}{c|}{Baseline \cite{fan2020few} } & 4.80 & 0.00 & 9.24 & 0.00 & 4.49 & 0.00 & 5.73 & 0.00 & 10.68 & 0.00 & 5.53 & 0.00 \\ \checkmark & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & 5.15 & +7.29 & 10.11 & +9.41 & 4.71 & +4.90 & 5.94 & +3.66 & 11.54 & +8.05 & 5.34 & -3.43 \\ \checkmark & \checkmark & \multicolumn{1}{c|}{} & 5.59 & +16.46 & 10.61 & +14.83 & 5.12 & +14.03 & 6.44 & +12.39 & 12.08 & +13.11 & 6.06 & +9.58 \\ \midrule \checkmark & \checkmark & \checkmark & \textbf{6.50} & \textbf{+35.41} & \textbf{12.30} & \textbf{+33.12} & \textbf{6.11} & \textbf{+36.08} & \textbf{7.27} & \textbf{+26.78} & \textbf{13.63} & \textbf{+27.62} & \textbf{6.71} & \textbf{+21.34} \\ \bottomrul \end{tabular}% \label{tab:ABLA}% \end{table}% \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{deep.pdf} \caption{Deep visualization comparison between AirDet and baseline \cite{fan2020few}. In (a), By virtue of SCS, AirDet is capable of finding given support objects effectively. 
In (b), with similar proposals (\textcolor[rgb]{1,0,0}{red} boxes), AirDet can focus on the entire object (aeroplane) and notice the most representative parts (dog), resulting in more precise regression box and correct classification results. More examples are presented in \appref{sec:more_deep}.} \label{fig:deep_rpn} \end{figure} \myparagraph{How effective is SCS?} Given 2-shot per class, we first take the highest ranking proposal from RPN \cite{faster} to backpropagate the objectiveness score and resize the gradient map to the original image. \fref{fig:deep_rpn} (a) exhibits the heat map from both AirDet and the baseline. We observe that AirDet generally concentrates on objects more precisely than the baseline. Moreover, AirDet can focus better on objects belonging to the support class and is not distracted by other objects (2nd and 3rd row). This means that AirDet can generate novel object proposals more effectively. \myparagraph{How effective is GLR and detection head?} In \fref{fig:deep_rpn} (b), we observe that with similar proposal boxes, AirDet head can better focus on the entire object, \textit{e.g.}, aeroplane is detected with a precise regression box, \textit{e.g.}, the dog is correctly classified with high score. This again demonstrates the effectiveness of our GLR and detection head. \subsection{Real-World Test}\label{sec:real} \begin{table*}[!t] \centering \setlength{\tabcolsep}{0.6mm} \fontsize{5.5}{6.5}\selectfont \caption{3-shot real-world exploration test of AirDet and baseline \cite{fan2020few}. AirDet can be directly applied without fine-tuning and performs considerably more robust than the baseline by virtue of the newly proposed SCS, GLR, and PRE modules.} \begin{tabular}{ccc|cc|cc|cc|cc|cc} \toprul \multicolumn{13}{c}{Real-world Exploration Test} \\ \midrule \multicolumn{1}{l}{Test/\#Frames} & \multicolumn{2}{c|}{1/\#248} & \multicolumn{2}{c|}{2/\#146} & \multicolumn{2}{c|}{3/\#127} & \multicolumn{2}{c|}{4/\#41} & \multicolumn{2}{c|}{5/\#248} & \multicolumn{2}{c}{6/\#46} \\ \midrule Metric & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ \\ \textbf{AirDet (Ours)} & \textbf{17.10} & \textbf{54.10} & \textbf{17.90} & \textbf{47.40} & \textbf{24.00} & \textbf{57.50} & \textbf{26.94} & \textbf{48.20} & \textbf{11.28} & \textbf{38.17} & \textbf{20.40} & \textbf{70.63} \\ A-RPN \cite{fan2020few} & 13.56 & 40.40 & 14.30 & 38.80 & 20.20 & 47.20 & 22.41 & 40.14 & 6.75 & 24.10 & 14.70 & 59.38 \\ \midrule Test/\#Frames & \multicolumn{2}{c|}{7/\#212} & \multicolumn{2}{c|}{8/\#259} & \multicolumn{2}{c|}{9/\#683} & \multicolumn{2}{c|}{10/\#827} & \multicolumn{2}{c|}{11/\#732} & \multicolumn{2}{c}{12/\#50} \\ \midrule Metric & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ & AP & AP$_{50}$ \\ \textbf{AirDet (Ours)} & \textbf{5.90} & \textbf{16.00} & \textbf{15.26} & \textbf{43.31} & \textbf{7.63} &\textbf{27.88} & \textbf{13.55} & \textbf{23.92} & \textbf{15.74} & \textbf{34.43} & \textbf{21.45} & \textbf{45.83} \\ A-RPN \cite{fan2020few} & 2.39 & 7.60 & 11.27 & 25.24 & 6.16 & 23.40 & 8.10 & 14.85 & 11.54 & 27.28 & 18.20 & 33.98 \\ \bottomrul \end{tabular}% \label{tab:subt}% \end{table*}% Real-world tests are conducted for AirDet and our baseline \cite{fan2020few} with 12 sequences that were collected from the DARPA Subterranean (SubT) challenge \cite{subtchallenge}. 
Due to the requirement of \textit{online} response during the mission, the models can only be evaluated \textbf{without fine-tuning}, which makes existing methods \cite{li2021few,cao2021nips,han2021query,wu2021universal,Hu2021CVPR,sun2021fsce,zhu2021semantic,fan2020few,wu2020multi,xiao2020few,wang2020frustratingly,yan2019meta,kang2019few,wang2019meta,chen2018lstd} impractical. The environments of the SubT challenge also pose extra difficulties, \textit{e.g.}, a lack of lighting, thick smoke, dripping water, and cluttered or irregularly shaped surroundings. To test the generalization capabilities, we adopt the same models of AirDet and the baseline as those evaluated in \sref{sec:indomain} and \sref{sec:cross}. The 3-shot performance on each test sequence is exhibited in \tref{tab:subt}, where AirDet proves better. The robot is equipped with an NVIDIA Jetson AGX Xavier, where our method runs at 1-2 FPS without TensorRT acceleration or other optimizations. \begin{table}[t] \centering \setlength{\tabcolsep}{2mm} \fontsize{6}{6}\selectfont \caption{Per-class results of the real-world tests. We report the instance number of each novel class along with the 3-shot AP results from AirDet and A-RPN \cite{fan2020few}. Compared with the baseline, AirDet achieves higher results for all classes.} \begin{tabular}{cccccccc} \toprule[1.2pt] Class & Backpack & Helmet & Rope & Drill & Vent & Extinguisher & Survivor \\ \midrule Instances & 626 & 674 & 723 & 587 & 498 & 1386 & 205 \\ AirDet & \textbf{32.3} & \textbf{9.7} & \textbf{13.9} & \textbf{10.8} & \textbf{16.2} & \textbf{10.5} & \textbf{10.7} \\ Baseline \cite{fan2020few} & 26.6 & 9.7 & 6 & 9 & 14.4 & 5.6 & 9.1 \\ \bottomrule[1.2pt] \end{tabular}% \label{tab:subt_cls}% \end{table}% In \tref{tab:subt_cls}, we present the number of instances and the performance on each novel class. To our excitement, AirDet shows smaller variance and higher precision across the different classes. We also present the support images and representative detected objects in \fref{fig:subt}. Note that AirDet can detect the novel objects accurately in the query images even if they have distinct scales and different illumination conditions from the supports. We attribute this capability to the carefully designed SCS module in AirDet. More visualizations are presented in \appref{sec:quali}. The robustness and strong generalization capability of AirDet in the real-world tests demonstrate its promising prospects and feasibility for autonomous exploration. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{SUBT.pdf} \caption{The provided support images and examples of detection results in the real-world tests. AirDet is robust to distinct object scales and different illumination conditions.} \label{fig:subt} \end{figure} \section{Limitation and Future Work} Despite the promising prospect and outstanding performance, AirDet still has several limitations. (1) Since abundant base classes are needed for generalization, AirDet requires a relatively large base dataset for training before inference on novel classes. (2) AirDet relies on the quality of the support images to work well without fine-tuning. This is because the few provided support images are the only information for the unseen classes. (3) We observe that the failure cases of AirDet are mainly due to false classification, resulting in a high result variance among different classes in COCO and VOC.
(4) Since SCS and the detection head run in loops for multiple novel classes, the efficiency of AirDet will suffer from a large number of novel classes. We provide quantitative results for limitations (1), (2), and (3) in \appref{sec:de_limi}.
\section{Conclusion} This paper presents a new few-shot detector, AirDet, which consists of three newly proposed \textit{class-agnostic relation}-based modules and is free of fine-tuning. Specifically, with the proposed spatial and channel relations, we construct support-guided cross-scale feature fusion for region proposals, a global-local relation network for shots aggregation, and prototype relation embedding for precise localization. With the strong capability to extract \textit{class-agnostic relation}, AirDet can perform comparably to, or even better than, exhaustively fine-tuned methods in both in-domain and cross-domain evaluation. AirDet is also tested on real-world data with a robotic platform, where its feasibility for autonomous exploration is demonstrated. \\ \par\noindent \myparagraph{Acknowledgement} This work was sponsored by ONR grant \#N0014-19-1-2266 and ARL DCIST CRA award W911NF-17-2-0181. The work was done when Bowen Li and Pranay Reddy were interns at The Robotics Institute, Carnegie Mellon University. The authors would like to thank all members of the Team Explorer for providing data collected from the DARPA Subterranean Challenge. \bibliographystyle{splncs04}
\section{Introduction} \label{sec:introduction} The new generation of galaxy surveys is mapping large swaths of the observable Universe using a variety of objects, and with increasing completeness \cite{Abbetal,Amendola:2012ys,Ivezic:2008fe,Benitez:2014ibt}. These surveys allow us to infer cosmological information from the positions of galaxies and other tracers of large-scale structure mainly through measurements of their clustering. In particular, when combining two or more tracers, we are able to measure not only their auto-spectra (or, equivalently, their auto-correlations), but also their cross-spectra. And as we increase the diversity of tracers in our surveys, the relative importance of the cross-spectra grows as well: given $N$ tracers there are $N$ auto-spectra, and $N(N-1)/2$ cross-spectra. A further advantage of multiple tracers is the fact that some physical parameters can be measured with an accuracy that is not limited by cosmic variance -- i.e., we are not necessarily constrained only by the survey volume, and can improve the measurements of some physical parameters by increasing the numbers of tracers in the same volume \cite{Seljak:2008xr,McDonald:2008sh}. As shown by \cite{abramo2013multitracer}, the independent degrees of freedom measured by galaxy surveys are split into two branches: on one hand, the total clustering of the survey (a single degree of freedom), which includes observables such as the shape of the matter power spectrum, is severely constrained by cosmic variance: no matter how many tracers we observe, we are limited by the volume of the survey, which imposes a lower bound on the uncertainties in that total clustering. On the other hand, the {\em ratios} of power spectra of the tracers (or {\em relative} clusterings) are independent of the total clustering, as well as of each other, and their covariance is not limited by cosmic variance: it is always possible to beat down the noise in those variables by detecting larger numbers of tracers in any given survey volume. However, in this argument it is often assumed that the auto-power spectra and the cross-spectra are manifestations of the same basic degrees of freedom: the biases, the matter growth rate, the amplitude and shape of the matter power spectrum, etc. In practice, this is equivalent to assuming that $P_{ij} = v_i v_j$, and taking these $N$ parameters $v_i$ as the fundamental degrees of freedom. In fact, in linear theory the information in the cross-spectra is degenerate with that already contained in the auto-spectra -- for a review see, e.g., \cite{Desjacques2016}. A linear biasing model implies that the {\em observed} power spectra (the data covariance in Fourier space), in redshift space, is given by: \begin{equation} \label{Eq:ObsPower} P_{ij} ( \vec{k} ) = b_i b_j P_{m} ( \vec{k} ) + \frac{\delta_{ij}}{\bar{n}_i} \; \to \; (b_i + f \mu^2)(b_j + f \mu^2) P_m(k) + \frac{\delta_{ij}}{\bar{n}_i} \; , \end{equation} where $b_i$ are the biases of the tracers $i=1,2,\ldots,N$, $f$ is the matter growth rate, $\mu = \hat{r}\cdot \hat{k}$ is the cosine of the angle between the Fourier mode and the line of sight, and $\bar{n}_i$ are the number densities of the tracers. In the second part of the expression above we also used the flat-sky approximation to indicate the bias model explicitly -- but this assumption is irrelevant for the arguments presented in this paper.
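For concreteness, a minimal Python sketch of Eq.~\eqref{Eq:ObsPower}, using purely illustrative values for the biases, number densities, growth rate and matter power spectrum, reads:
\begin{verbatim}
import numpy as np

# Illustrative (assumed) values for two tracers at one mode (k, mu)
b    = np.array([1.2, 2.0])      # linear biases b_i
nbar = np.array([1e-3, 5e-4])    # number densities [h^3 Mpc^-3]
f    = 0.8                       # matter growth rate
Pm   = 1.0e4                     # matter power spectrum P_m(k) [h^-3 Mpc^3]
mu   = 0.5                       # cosine of the angle with the line of sight

# Observed power spectra: rank-one signal plus Poisson shot noise
B = b + f * mu**2                # redshift-space effective biases
P_obs = np.outer(B, B) * Pm + np.diag(1.0 / nbar)
print(P_obs)
\end{verbatim}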
Although the assumption that the degrees of freedom in the cross-spectra are contained in the auto-spectra is approximately correct in the linear regime for a deterministic biasing model, it misses many important features such as the one-halo term, exclusion effects, stochasticity, as well as deviations from the ideal Poisson shot noise model \cite{Baldauf:2013hka,Assassi2014,mirbabayi2015biased,Desjacques2016,Schmittfull:2018yuk}. Extensions of the linear and deterministic biasing model motivated by perturbation theory introduce additional dependencies in the tracer density contrasts, typically of the form: \begin{equation} \label{eq:deltai} \delta_i \to B_i \delta_m + \epsilon_i^P + \epsilon_i^S \; , \end{equation} where $B_i=b_i + f \mu^2$, and $\epsilon_i^P$ is a shot noise stochastic term that, under the assumption of Poisson statistics, obeys $\langle \epsilon_i^P \epsilon_j^P \rangle = \delta_{ij}/\bar{n}_i$. The last term in Eq. \eqref{eq:deltai}, $\epsilon_i^S$, collects all additional stochastic terms and non-linearities (e.g., the dependencies on $\delta_m^2$). A more general relation between the matter density field and the tracers, similar to Eq. \eqref{eq:deltai}, was in fact used by Gil-Mar\'{\i}n et al. \cite{2010MNRAS.407..772G} to model how the multi-tracer approach could improve measurements of the matter growth through RSDs. In some cases, discarding these stochastic terms, especially in the cross-spectra, can lead to systematics in the measurements of parameters such as $f_{NL}$ \cite{Ginzburg}. Recently, Mergulh\~ao et al. \cite{2021arXiv210811363M} showed that splitting a halo population in two according to mass allows us to measure these bias parameters with higher precision, which implies that, in practice, the multi-tracer approach can leverage the larger number of parameters that comes with considering more tracers. Let's assume, for simplicity, that shot noise is exactly Poissonian and is uncorrelated with either $\delta_m$ or $\epsilon_i^S$. The power spectrum corresponding to the pair $\langle \delta_i \delta_j \rangle$, after shot noise subtraction, takes the form: \begin{equation} \label{eq:deltai2} \hat{P}_{ij} \to B_i B_j P_{m} + \langle \epsilon_i^S \epsilon_j^S \rangle \; , \end{equation} where we assumed that the stochastic terms are uncorrelated with the density contrast. Even if we assume the simplest stochastic model, $\langle \epsilon_i^S \epsilon_j^S \rangle = \delta_{ij} S_i$, we still find that the cross-spectra carry additional information with respect to the auto-spectra, in the sense: \begin{equation} \label{eq:AutoCross} \hat{P}_{ii} \hat{P}_{jj} - \hat{P}_{ij}^{2} \to \hat{P}_{ii} S_j + S_i \hat{P}_{jj} + S_i S_j \; . \end{equation} In this paper we show that, with multiple tracers, the accuracy with which we can measure the {\em irreducible} information in the cross-spectra, in the sense of Eq. \eqref{eq:AutoCross}, is not constrained by cosmic variance. Moreover, the uncertainties in those irreducible degrees of freedom of the cross-spectra fall even {\em faster} with the number density of tracers, when compared with the ratios of auto-spectra.
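As a quick numerical illustration of this point (a sketch with illustrative values for the effective biases and stochastic amplitudes, not tied to any particular survey), one can verify that a diagonal stochastic contribution makes the combination $\hat{P}_{ii} \hat{P}_{jj} - \hat{P}_{ij}^{2}$ non-zero:
\begin{verbatim}
import numpy as np

# Illustrative values: effective biases B_i = b_i + f*mu^2 and diagonal
# stochastic terms S_i added to the auto-spectra (Poisson noise subtracted)
B  = np.array([1.4, 2.1])
S  = np.array([300.0, 150.0])    # [h^-3 Mpc^3]
Pm = 1.0e4                       # matter power spectrum [h^-3 Mpc^3]

Phat = np.outer(B, B) * Pm + np.diag(S)
irreducible = Phat[0, 0] * Phat[1, 1] - Phat[0, 1]**2
# equals B1^2 Pm S2 + B2^2 Pm S1 + S1 S2, and vanishes when S1 = S2 = 0
print(irreducible)
\end{verbatim}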
Therefore, cosmic variance cancellation is not only a feature that allows us to measure bias, redshift-space distortion (RSD) parameters \cite{McDonald:2008sh} or primordial non-Gaussianities \cite{Seljak:2008xr} with increased accuracy, but it also opens the way to measure some of the parameters in the perturbative bias expansion to much higher accuracy by using multiple tracers and their cross-correlations. We start, in Section 2, by deriving several useful results related to the multi-tracer Fisher matrix, with and without cross-spectra as independent degrees of freedom. We also present a general expression for the Fisher matrix with any data covariance, and show that its inverse is exactly the covariance matrix that we expect under the Gaussian approximation. Then, in Section 3 we show how to maximize the information of a galaxy survey, in terms of the optimal number of tracers, by using two different summary statistics for the total amount of Fisher information in that survey. Finally, in Section 4 we show that the extra degrees of freedom that arise from the cross-spectra are not constrained by cosmic variance, and can be measured with arbitrary accuracy (at least in principle). We conclude in Section 5. \section{Fisher matrix for two-point functions} Let's say that we have many samples of some measurements $f_i$, from which we wish to estimate physical parameters through the quadratic form (the ``correlations'') $q_{ij} \to \langle f_i f_j \rangle$. We define the data covariance as: \begin{equation} \label{Eq:DataCov} C_{ij} = \langle f_i f_j \rangle = q_{ij} + s_{ij}\; , \end{equation} where $s_{ij}$ is the noise (assumed symmetric under $i \leftrightarrow j$). Under the hypothesis of Gaussianity for the data $f_i$, one can easily show by using Wick's theorem that the parameter covariance is given by: \begin{equation} \label{Eq:DCov} {\rm Cov}(q_{ij}, q_{i'j'}) = C_{ii'} C_{jj'} + C_{ij'} C_{ji'} \; . \end{equation} Since by construction $q_{ij} = q_{ji}$, this set of parameters will count the correlations twice. For this reason, we introduce the following notation for all the non-equivalent pairs $\{i,j\}$: \begin{equation} \label{Eq:DoubleCount} q_{[ij]} = q_{ij} \quad , \; {\rm for} \; i \leq j \; . \end{equation} Clearly, if $i=1,2,\ldots,N$, then the number of non-equivalent pairs is $N_p = N(N+1)/2$. Notice that with this notation the parameter covariance is still given by the same expression as above, i.e.: \begin{equation} \label{Eq:DCov2} {\rm Cov}(q_{[ij]}, q_{[i'j']}) = C_{ii'} C_{jj'} + C_{ij'} C_{ji'} \; . \end{equation} However, as opposed to the $((N,N) \times (N,N))$ array of Eq. \eqref{Eq:DCov}, in terms of the individual pairs $[ij]$ the expression above is an $N_p \times N_p$ matrix. With this notation it is straightforward to show that the inverse of the parameter covariance, also known as the Fisher matrix, is given by: \begin{equation} \label{eqn:FishMatGen} F[q_{[ij]},q_{[i'j']}] = F_{[ij],[i'j']} = \left( 1 - \frac12 \delta^{ij} \right) \left( 1 - \frac12 \delta^{i'j'} \right) \, \left( C_{ii'}^{-1} C_{jj'}^{-1} + C_{ij'}^{-1} C_{ji'}^{-1} \right) \, . \end{equation} where $C_{ij}^{-1}$ is the inverse of the data covariance, i.e., $\sum_j C_{ij}^{-1} C_{ji'} = \delta_{ii'} $. In order to show that the covariance of Eq. \eqref{Eq:DCov} is indeed the inverse of the Fisher matrix of Eq.
\eqref{eqn:FishMatGen} we need the identity: \begin{eqnarray} \nonumber 2 \sum_{[mn]} C_{im}^{-1} C_{mi'} \, C_{jn}^{-1} C_{nj'} &=& \sum_{mn} C_{im}^{-1} C_{mi'} \, C_{jn}^{-1} C_{nj'} + \sum_{m} C_{im}^{-1} C_{mi'} \, C_{jm}^{-1} C_{mj'} \\ \nonumber &=& \delta_{ii'} \, \delta_{jj'} + \sum_{m} C_{im}^{-1} C_{mi'} \, C_{jm}^{-1} C_{mj'} \; , \label{eqn:usefulid} \end{eqnarray} where, following the notation introduced in Eq. \eqref{Eq:DoubleCount}, the sum $\sum_{[mn]}$ is limited to the indices $m \leq n$. With this result it is then trivial to show that, as expected: \begin{equation} \label{Eq:FishCov} \sum_{[mn]} F_{[ij],[mn]} \, {\rm Cov}_{[mn],[i'j']} = \delta_{[ij],[i'j']} \, . \end{equation} We now present two key results for the covariance and Fisher matrices of the 2-point functions of Gaussian variables, which we will employ later. The first identity concerns the determinants of the Fisher matrix and of the parameter covariance (which are, of course, the inverse of each other). It is possible to show that: \begin{equation} \det \left( {\rm Cov}_{[ij],[i'j']} \right) = 2^{N} \left( \det C \right)^{N+1} \; , \end{equation} and therefore that: \begin{equation} \label{Eq:detF} \det \left( F_{[ij],[i'j']} \right) = 2^{-N} \left( \det C \right)^{-N-1} = 2^{-N} \left( \det C^{-1} \right)^{N+1} \; . \end{equation} We stress the fact that $F_{[ij],[i'j']}$ is an $N_p \times N_p$ matrix, while $C$ is an $N\times N$ matrix. The second identity is the fact that the ``grand sum'' of the Fisher matrix is proportional to the square of the grand sum of the inverse data covariance, namely: \begin{equation} \label{eq:TrF} \sum_{[ij]} \sum_{[i'j']} F_{[ij],[i'j']} = \frac12 \left( \sum_{ij} C^{-1}_{ij} \right)^2 \; . \end{equation} We will use Eqs. \eqref{Eq:detF} and \eqref{eq:TrF} in Section 3, when we optimize the number of tracers in a survey. \subsection{Fisher matrix for the power spectra} \label{sec:Fisher} The fundamental degrees of freedom in a survey are the positions of the tracers (galaxies, halos or other point-like objects that follow the underlying matter distribution). When we measure the number densities of tracer species $i$, $n_i(\vec{x})$, over some volume around the position $\vec{x}$, that number reflects some mean density of those tracers, $\bar{n}_i(\vec{x})$, as well as the fluctuations $\delta n_i = n_i - \bar{n}_i$. From these observables we compute the main object that carries information about cosmology, the data (or ``pixel'') covariance: \begin{equation} \label{Eq:DefCorrFun} C_{ij} (\vec{x},\vec{y}) = \langle \delta n_i (\vec{x}) \, \delta n_j (\vec{y}) \rangle = \bar{n}_i (\vec{x}) \, \bar{n}_j (\vec{y}) \, \xi_{ij}(\vec{x},\vec{y}) + \bar{n}_i (\vec{x}) \delta_{ij} \delta_D (\vec{x} - \vec{y} ) \; , \end{equation} where $\xi_{ij}(\vec{x},\vec{y}) $ is the 2-point correlation function, and the last term is shot noise, which we assume here to follow Poisson statistics. The multi-tracer 2-point correlation function is generally assumed to be related to the matter correlation function, $\xi^{(m)}(\vec{x},\vec{y})$, through some knowable relations such as tracer bias, redshift-space distortions, etc. In real space (i.e., excluding redshift-space distortion), the matter two-point correlation function can be written in terms of the matter power spectrum as: \begin{equation} \xi^{(m)}(\vec{x},\vec{y}) = \xi^{(m)}(|\vec{x} - \vec{y}|) = \int \frac{d^3 k}{(2\pi)^3} \, e^{-i \vec{k} \cdot (\vec{x}-\vec{y})} \, P^{(m)}(k) \; . 
\end{equation} We can also work directly in Fourier space, and derive the Fisher and covariance matrices for the power spectra. In that case it is more convenient to work with the density contrasts for the tracers, $\delta_i = (n_i-\bar{n}_i)/\bar{n}_i$. The Fourier mode $\vec{k}$ of the density contrast can be expressed as: \begin{equation} \label{eqn:dof} d_i^a (\vec{k}) = \{ \tilde\delta_i (\vec{k}) \, , \, \tilde\delta_{i}^{*} (\vec{k}) \} \; , \end{equation} where $i=1,2,\ldots,N$ denotes the tracer, and $a=1,2$ stand for the Fourier mode and its complex conjugate, respectively. The data covariance is then: \begin{equation} \label{eqn:expval} \langle d_i^a (\vec{k}) d_j^b (\vec{k}{}') \rangle = D^{ab} \, C_{ij} (\vec{k},\vec{k}{}') = C^{ab}_{ij} (\vec{k},\vec{k}{}')\; , \end{equation} where $D^{ab} = 1-\delta^{ab}$. The data covariance in Fourier space is then simply the observed power spectrum for those tracers, including shot noise if it is an auto-spectrum: \begin{equation} \label{eqn:expval2} C_{ij} (\vec{k},\vec{k}{}') = \delta_{\vec{k} \, \vec{k}{}'} \left( P_{ij} + \frac{\delta_{ij}}{ \bar{n}_i} \right) \; , \end{equation} where in the continuum limit we have $\delta_{\vec{k} \vec{k}{}'} \to (2\pi)^3 \delta_D (\vec{k} - \vec{k}{}')$, but for simplicity here we can consider this to be a Kronecker delta-function, up to a constant. One of the ways in which we can derive the Fisher matrix is through the Hessian of the log-likelihood. Given a set of parameters $\theta^\mu$, the Fisher matrix is given by the generalized trace \cite{1997ApJ...480...22T}: \begin{equation} \label{eqn:FishMat} F_{\mu\nu} = \frac{1}{4} \sum_k V \tilde{V}_k \sum_{iji'j'} \sum_{aba'b'} \frac{\partial \, C^{ab}_{ij}}{\partial \theta^\mu} \left[ C^{ba'}_{ji'} \right]^{-1} \frac{\partial \, C^{a'b'}_{i'j'}}{\partial \theta^\nu} \left[ C^{b'a}_{j'i} \right]^{-1} \; , \end{equation} where $V$ is the survey volume, $\tilde{V}_k$ is the volume in Fourier space of the bandpowers (Fourier bins) $k$, and the additional factor of $1/2$ in Eq. (\ref{eqn:FishMat}) is due to the fact that our degrees of freedom include the Fourier modes twice. Notice that this Fisher matrix is diagonal in $\vec{k}$ due to the diagonal nature of the data covariance, Eq. \eqref{eqn:expval2}, and for simplicity for the remainder of this Section we will omit the Fourier space indices. Using the fact that the data covariance is separable, $\left[ C^{ab}_{ij}\right]^{-1} = \left[ D^{ab}\right]^{-1} C_{ij}^{-1}$, and using that $\left[ D^{ab}\right]^{-1} = D^{ab}$, we obtain $\sum_{abcd} D^{ab} D^{bc} D^{cd} D^{da} = \sum_{ac} \delta_{ac} \delta_{ca} = 2$. Hence: \begin{equation} \label{eqn:FishMat2} F_{\mu\nu} = \frac12 \sum_k V \tilde{V}_k \sum_{iji'j'} \frac{\partial \, C_{ij}}{\partial \theta^\mu} C_{ji'}^{-1} \frac{\partial \, C_{i'j'}}{\partial \theta^\nu} C_{j'i}^{-1} \; , \end{equation} In Fourier space the inverse of the data covariance has a trivial expression, namely: \begin{equation} \label{Eq:InvCovPk} C_{ij}^{-1} = \bar{n}_i \, \delta_{ij} - \bar{n}_i \, \frac{P_{ij}}{1+{\cal{P}}} \, \bar{n}_j \; , \end{equation} with ${\cal{P}} = \sum_i \bar{n}_i \, P_{ii}$ -- in fact, the denominator in the second term is precisely $\det C = 1 + \cal{P}$. We now set the parameters $\theta^\mu$ to be the auto- and cross-spectra of the tracers evaluated at some bandpower, $P_{ij}(k)$. Just as was the case in our basic example of the previous Section, these spectra are symmetric, $P_{ij}=P_{ji}$. 
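Since Eq.~\eqref{Eq:InvCovPk} will be used repeatedly below, it is useful to note that it can be checked numerically. The following short sketch (with illustrative biases and number densities, and assuming the deterministic model $P_{ij} = B_i B_j P_m$ so that the signal term is rank one) compares the closed-form inverse against a brute-force matrix inversion:
\begin{verbatim}
import numpy as np

# Illustrative values for three tracers at a single bandpower
nbar = np.array([1e-3, 5e-4, 2e-4])   # number densities [h^3 Mpc^-3]
B    = np.array([1.3, 1.8, 2.5])      # effective biases B_i = b_i + f*mu^2
Pm   = 1.0e4                          # matter power spectrum [h^-3 Mpc^3]

Pij  = np.outer(B, B) * Pm            # deterministic tracer spectra P_ij
C    = Pij + np.diag(1.0 / nbar)      # data covariance: signal + shot noise
calP = np.sum(nbar * np.diag(Pij))    # total clustering SNR

# Closed-form inverse: nbar_i delta_ij - nbar_i P_ij nbar_j / (1 + calP)
Cinv = np.diag(nbar) - np.outer(nbar, nbar) * Pij / (1.0 + calP)
print(np.allclose(Cinv, np.linalg.inv(C)))   # -> True
\end{verbatim}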
In order to avoid double-counting these degrees of freedom we define the non-degenerate auto- and cross-spectra as: \begin{equation} \label{Eq:sympow} P_{[ij]} = P_{ij} \quad , \; {\rm for} \; i \leq j \; . \end{equation} Therefore, the spectra with any index can be expressed as: \begin{equation} \label{Eq:ndpow} P_{ij} = P_{[ij]} + P_{[ji]} - \delta_{ij} P_{ii} \; , \end{equation} and a similar expression for the data covariance. We can now evaluate the partial derivatives assuming that the parameters are the non-degenerate spectra: \begin{equation} \label{Eq:InvCovk} \frac{\partial C_{ij}(k)}{\partial P_{[i'j']}(k')} = \delta_{k,k'} \left[ \delta_{ii'}\delta_{jj'} + \delta_{ij'}\delta_{ji'} - \delta_{ij}\delta_{ji'}\delta_{i'j'}\delta_{j'i} \right] \, \equiv \, \delta_{k,k'} \; \delta_{[ij],[i'j']}\; . \end{equation} Substituting this identity into Eq. (\ref{eqn:FishMat2}) results in a Fisher matrix which is diagonal in the bandpowers, and which can be expressed for each mode $k$ as: \begin{equation} \label{eqn:FishMat3} F[P^{[ij]},P^{[i'j']}] = F^{[ij],[i'j']} = V \tilde{V}_k \, \left( 1 - \frac12 \delta^{ij} \right) \left( 1 - \frac12 \delta^{i'j'} \right) \, \left( C_{ii'}^{-1} C_{jj'}^{-1} + C_{ij'}^{-1} C_{ji'}^{-1} \right) \, . \end{equation} It is immediately obvious that the results of the previous section apply here, so the inverse of this Fisher matrix is the usual expression for the covariance of the spectra: \begin{equation} \label{eq:speccov} {\rm Cov} [P_{[ij]},P_{[i'j']}] = {\rm Cov}_{[ij],[i'j']} = \frac{1}{V \tilde{V}_k }\, \left( C_{ii'} C_{jj'} + C_{ij'} C_{ji'} \right) \, . \end{equation} \subsection{Example: two tracers} As an example, we write explicit expressions for the case when we have two tracers. The covariance matrix becomes: \begin{equation} \label{eq:CovPk} {\rm Cov} [P_{[ij]},P_{[i'j']}] = \frac{1}{V \tilde{V}_k } \, \left( \begin{array}{ccccc} 2 C_{11}^2 & \quad & 2 C_{11} C_{12} &\quad & 2 C_{12}^2 \\ {} & {} & {} & {} & \\ 2 C_{11} C_{12} &\quad & C_{11} C_{22} + C_{12}^2 &\quad & 2 C_{12} C_{22} \\ {} & {} & {} & {} & \\ 2 C_{12}^2 &\quad & 2 C_{12} C_{22} &\quad & 2 C_{22}^2 \end{array} \right) \, , \end{equation} where we have ordered the degrees of freedom as $\{ P_{11}, P_{12},P_{22} \} $. The Fisher matrix in that case, derived directly from Eq. \eqref{eqn:FishMat3}, is given by: \begin{eqnarray} \label{eq:FishPk} F [P_{[ij]},P_{[i'j']}] &=& V \tilde{V}_k \, \left( \begin{array}{ccccc} \frac12 C_{11}^{-2} & \quad & C_{11}^{-1} C_{12}^{-1} & \quad & \frac12 C_{12}^{-2} \\ {} & {} & {} & {} & \\ C_{11}^{-1} C_{12}^{-1} & \quad & C_{11}^{-1} C_{22}^{-1} + C_{12}^{-2} & \quad & C_{12}^{-1} C_{22}^{-1} \\ {} & {} & {} & {} & \\ \frac12 C_{12}^{-2} & \quad & C_{12}^{-1} C_{22}^{-1} & \quad & \frac12 C_{22}^{-2} \end{array} \right) \, . \end{eqnarray} In the particular case of the power spectra, the inverse of the data covariance is given by Eq. \eqref{Eq:InvCovPk}. \section{Fisher information and the optimal number of tracers} The results above can serve as a guide to a first attempt at organizing the data of galaxy surveys: is it worth splitting a galaxy population into sub-types with different properties, or do we risk degrading the discriminating power of our survey by doing so?
Although the precise answer depends on the nature of the tracers, as well as the kinds of parameters we are trying to constrain (in that respect see also \cite{2010MNRAS.407..772G}), there are some general trends that can be inferred in terms of the summary statistics of the Fisher matrix. Before we proceed any further, it is useful to express the spectra in terms of signal-to-noise ratios (SNR), where the noise is given by shot noise (assumed Poissonian): \begin{equation} \label{eq:defCalP} {\cal{P}}_{ij} = \sqrt{\bar{n}_i \, \bar{n}_j} \, P_{ij} \; . \end{equation} We define the total clustering (or total SNR) as $ {\cal{P}} = \sum_i {\cal{P}}_{ii} = \sum_i \bar{n}_i \, b_i^2 \, P^{(m)}$, and this quantity can be regarded as being approximately constant for a given survey. When two tracer species $i$ and $j$ are joined to form a composite tracer, their numbers are combined, $n_{i+j}= n_i + n_j$, but the number densities and linear biases are constrained by the relation $\bar{n}_{i+j} b_{i+j} = \bar{n}_{i} b_{i} + \bar{n}_{j} b_{j} $. Since ${\cal{P}} \sim \sum_i \bar{n}_i b_i^2$, by joining or splitting tracers we can slightly increase or decrease the total SNR. The Fisher and covariance matrices can also be expressed in terms of signal and noise. For generic degrees of freedom $X_\mu$ ($\mu=1,2,\ldots,n$) we have: \begin{eqnarray} \label{eq:defCalF} {\cal{F}}[X_\mu,X_\nu] &=& X_\mu \, F [X_\mu,X_\nu] \, X_\nu = F[\log X_\mu , \log X_\nu ] \\ \label{eq:defCalC} {\cal{C}}[X_\mu,X_\nu] &=& \frac{{\rm Cov}[X_\mu,X_\nu]}{X_\mu X_\nu} = {\rm Cov}[\log X_\mu, \log X_\nu] \; . \end{eqnarray} Clearly, these Fisher and covariance matrices refer to the relative uncertainties in the parameters $X_\mu$. All our summary statistics will be derived in terms of these relative uncertainties -- or, equivalently, in terms of SNR. There are arbitrarily many summary statistics that one can build from the Fisher matrix, and there is no single expression that can claim to capture the total information \cite{Bayes}. Although in principle we should be guided by invariants such as the trace or the determinant of the matrix, depending on the application, one summary statistics may be more suitable. The determinant of the Fisher matrix (whose square root is known as the Jeffreys prior \cite{Jeffrey}) is evidently a convenient summary statistics, and in the context of cosmology, when the Fisher matrix is projected into some sub-space, it is also called ``figure of merit'' (FoM) \cite{2006astro.ph..9591A}. In our context we define the FoM as: \begin{equation} \label{eq:FoM} \Delta = \det {\cal{F}} [X_\mu,X_\nu] \; . \end{equation} The FoM corresponds to the inverse volume of an ellipsoid in $n$ dimensions, defined by the 68\% confidence limit of the corresponding multivariate Gaussian distribution. The smaller the volume in parameter space, the greater the discriminating power. Another useful summary statistics is a generalization of the $\chi^2$, given by the grand sum of the Fisher matrix. Let's take the usual $\chi^2$: \begin{equation} \label{eq:chi2} \chi^2 = \sum_{\mu\nu} \left( X_\mu - \bar{X}_\mu \right) \, F[X_\mu,X_\nu] \, \left( X_\nu - \bar{X}_\nu \right) = \sum_{\mu\nu} \left( 1 - \frac{\bar{X}_\mu}{X_\mu} \right) \, {\cal{F}} [X_\mu,X_\nu] \, \left( 1 - \frac{\bar{X}_\nu}{X_\nu} \right) \; , \end{equation} where $\bar{X}_\mu$ are the fiducial values of the parameters $X_\mu$.
Now, make all $X_\mu$ equal to a certain fraction of the $\bar{X}_\mu$, such that the terms multiplying ${\cal{F}}$ in the sum above reduce to a constant. We then define the grand sum of the Fisher matrix as: \begin{equation} \label{eq:defTotTrace} \Xi = \sum_{\mu\nu} \, {\cal{F}} [X_\mu,X_\nu] = \sum_{\mu\nu} \, X_\mu \, F [X_\mu,X_\nu] \, X_\nu \; . \end{equation} \subsection{Fisher summary statistics for the auto-spectra} Before we examine the Fisher matrix in full generality, it is instructive to consider the case when the cross-spectra are {\em not} independent degrees of freedom, but are in fact constrained in terms of the auto-spectra, $P_{ij}^2 = P_{ii} P_{jj}$. This corresponds to assuming that the non-linear and stochastic terms $\epsilon_i^S \to 0$ in Eq. \eqref{eq:deltai}, and that shot noise can be perfectly subtracted. In that case the Fisher matrix is given by \cite{abramo2012full,abramo2013multitracer}: \begin{equation} \label{eq:FishMT} {\cal{F}}[{\cal{P}}_{ii},{\cal{P}}_{jj}] = \frac{V \tilde{V}_k}{4} \frac{\delta_{ij} {\cal{P}}_{ii} {\cal{P}} (1+{\cal{P}}) + {\cal{P}}_{ii} {\cal{P}}_{jj} (1-{\cal{P}})}{(1+{\cal{P}})^2} \; . \end{equation} It is worth stressing here that including only the auto-spectra in the Fisher matrix does not mean that we are discarding measurements of the cross-spectra: it simply means that the degrees of freedom in the cross-spectra are assumed to be redundant with those already found in the auto-spectra. It is straightforward to show that the FoM associated with the Fisher matrix of Eq. \eqref{eq:FishMT} is given by: \begin{equation} \label{eq:detFauto} \Delta = \left( \frac{V \tilde{V}_k}{4} \right)^N \, \frac{2 \, {\cal{P}}^N }{(1+{\cal{P}})^{N+1}} \, \prod_i {\cal{P}}_{ii} \; . \end{equation} We can now ask what happens as we change the number of tracers -- either by splitting one tracer into two or more sub-types, or by combining many tracers into a single one. However, notice that the determinant of Eq. \eqref{eq:detFauto} carries a phase-space volume factor for each degree of freedom, $(V\tilde{V}_k)^N$. Moreover, when projecting into a final set of parameters, those phase space volumes will tend to be compensated by the larger number of parameters from each tracer species. This discussion indicates that what we ought to use as a summary statistics is the Fisher information per unit phase space volume, which in this case (where we ignore the information from the cross-spectra) results in the renormalized FoM: \begin{equation} \label{eq:detFautoRenorm} \Delta_{Ph} = \frac{\Delta}{ \left( V \tilde{V}_k\right)^N} = \frac{1}{4^N} \, \frac{2 \, {\cal{P}}^N}{(1+{\cal{P}})^{N+1}} \, \prod_i {\cal{P}}_{ii} \; . \end{equation} In order to maximize the Fisher information it is clear that we must first maximize the clustering SNR ${\cal{P}}$, given the constraint that the total number of objects is kept fixed -- this is indeed what we expect from a survey where we observe some total number of objects, which we can subdivide into one or more species of tracers. But for a fixed number of tracers ($N$), what is the optimal way to draw the lines between the tracers? The answer clearly depends on the way that bias varies with the number density. Let's assume that the bias is given by a power-law in terms of the number density: $b_i^2 \sim \bar{n}_i^{-\gamma}$.
The total SNR is therefore given by: \begin{equation} \label{Eq:Pofgamma} {\cal{P}} = \sum_i {\cal{P}}_{ii} = \sum_i \bar{n}_i \, b_i^2 \, P^{(m)} = \bar{n}_T \, b_T^2 \, P^{(m)} \sum_i w_i^{1-\gamma} \; , \end{equation} where $\bar{n}_T = \sum_i \bar{n}_i$, $b_i^2 = b_T^2 (\bar{n}_i/\bar{n}_T)^{-\gamma}$, and we defined the weights: \begin{equation} \label{Eq:weights} w_i = \frac{\bar{n}_i}{\bar{n}_T} \; . \end{equation} Extremizing the total clustering SNR subject to the constraint that the total number of objects is fixed ($\sum_i w_i = 1$) leads to an ``equipartition'' between all the tracers, i.e., $w_i = 1/N$. This extremal value is a maximum only if $\gamma (1-\gamma) > 0$, i.e., $0 < \gamma < 1$ -- otherwise equipartition yields a minimum. Substituting $w_i=1/N$ back into Eq. \eqref{Eq:Pofgamma} we obtain that ${\cal{P}} \to \bar{n}_T b_T^2 P^{(m)} N^\gamma$, which grows with the number of tracers if the power-law $\gamma$ is in the range where equipartition yields a maximum of the clustering SNR. Interestingly, the halo mass function and halo bias fits found by Tinker et al. \cite{2008ApJ...688..709T} indicate that, for the halo masses of cosmological interest ($10^{10} \lesssim M[h^{-1} M_\odot] \lesssim 10^{15}$), this index is $0.1 \lesssim \gamma \lesssim 0.5 $. Therefore, for tracers which behave similarly to dark matter halos, the configuration which maximizes the total SNR ${\cal{P}}$ has the populations divided in equal numbers, with as many sub-species of tracers as practically possible. If, on the other hand, the tracers behave in a completely different way, such that $\gamma <0$ or $\gamma > 1$, then the maximum of ${\cal{P}}$ is obtained by taking one of the $w_i \to 1$, and all others to zero -- i.e., in such a situation it would be optimal to join all the tracers into a single population. For mixed tracers, such as galaxies, separated by properties such as stellar mass, luminosity, morphology, etc., as long as the parameter used to separate the populations generates a dependence of the bias on number density which follows the same trends as the dark matter halos, then the main conclusion remains the same. Notice that equipartition in terms of the number densities, which maximizes ${\cal{P}}$, also implies equipartition of that total clustering SNR amongst all tracers: since $w_i = 1/N$, we have that $w_i^{1-\gamma} = N^{\gamma-1}$, and therefore ${\cal{P}}_{ii} = {\cal{P}}/N$. Moreover, equipartition of the clustering also happens to maximize the product $\prod_i {\cal{P}}_{ii}$, which implies that the FoM of Eq. \eqref{eq:detFautoRenorm} is maximized when $w_i = 1/N$, and ${\cal{P}}_{ii} = {\cal{P}}/N$. It is more enlightening to express the FoM at its maximum value in terms of the total SNR ${\cal{P}}$. Substituting the optimal configuration (i.e., equipartition) into Eq. \eqref{eq:detFautoRenorm} we obtain: \begin{equation} \label{eq:MaxdetFauto} \Delta_{Ph}^{\rm max} = \frac{1}{4^N} \frac{2 \, {\cal{P}}^N}{(1+{\cal{P}})^{N+1}} \, \left( \frac{{\cal{P}}}{N} \right)^N \; . \end{equation} While in this expression the total SNR ${\cal{P}}$ still depends on the number of tracers through a relation such as ${\cal{P}} \sim N^\gamma$, we can find an approximate expression for the number of tracers that maximizes the FoM, assuming that ${\cal{P}}$ varies slowly with $N$ -- i.e., in the limit that $\gamma$ is small. We have, in that limit: \begin{equation} \label{eq:NmaxDet} \left.
\frac{d \Delta_{Ph}^{\rm max}}{dN} \right|_{{\cal{P}}} = 0 \quad \Rightarrow \quad N_0^{\rm max} = \frac{1}{4\, e} \, \frac{{\cal{P}}^2}{1+{\cal{P}}} \; , \end{equation} where $e$ is Euler's number. Clearly, as the total clustering strength ${\cal{P}}$ increases, so does the optimal number of tracers that we should use in order to maximize the information from our survey. In a more realistic situation, we cannot hold the total clustering SNR ${\cal{P}}$ constant as we change the number of species of tracers. In Fig. \ref{fig:Deltas} (left panel) we show the FoM as a function of $N$, assuming that ${\cal{P}} = P_0 N^\gamma$ --- here $P_0=\bar{n}_T b_T^2 P^{(m)} $ is the baseline clustering SNR\footnote{E.g., a low-redshift survey with $\bar{n}_T = 2 \times 10^{-3} \, h^3$ Mpc$^{-3}$ and a mean bias $b_T =1.5$, at a reference scale of $k_0=0.1 \, h$ Mpc$^{-1}$, and considering a spectrum $P^{(m)} (k_0)=10^4 \, h^{-3}$ Mpc$^3$, yields $P_0 =45$.}. The solid, dashed and thin-dashed lines denote the power-law values $\gamma=0$, 0.05 and 0.1, while the colors refer to the values of $P_0$ which make $N_0^{\rm max}=$ 2 (red, $P_0=22.2$), 4 (orange, $P_0=44$), 6 (green, $P_0=65.7$) and 8 (blue, $P_0=87.5$), according to Eq. \eqref{eq:NmaxDet} -- i.e., assuming that $\gamma \to 0$. This plot shows that the higher the clustering SNR $\cal{P}$ is, the higher is also the optimal value for the number of tracers, $N^{\rm max}$. The plot also shows that, for a fixed value of the baseline clustering $P_0$, the optimal number of tracers grows slightly for higher values of $\gamma$ (as denoted by the shift to higher values of $N$ of the dashed lines with respect to the solid lines). \begin{figure} \centering \includegraphics[width=0.48\textwidth]{DeltaN_auto.pdf} \includegraphics[width=0.48\textwidth]{DeltaN_cross.pdf} \caption{Figure of Merit (FoM) as a function of the number of tracers, $N$, assuming that ${\cal{P}} = P_0 N^\gamma$. Left panel: FoM considering only the auto-spectra, Eq. \eqref{eq:detFautoRenorm}, for the power law indices $\gamma=0$ (solid), 0.05 (dashed), and 0.1 (thin-dashed lines). The different colors refer to different values of $P_0$ that make $N_0^{\rm max}=2$ (red), 4 (orange), 6 (green) and 8 (blue lines) -- see Eq. \eqref{eq:NmaxDet}. Right panel: FoM considering the auto- and cross-spectra, Eq. \eqref{Eq:DetFCGen}, for the power law indices $\gamma=0$, 0.05, 0.1 and 0.2, which are respectively denoted by the solid, dashed, thin-dashed and dotted lines. As in the left panel, the red, orange, green and blue lines refer to the values of $P_0$ that make $N_0^{\rm max}=2$, 4, 6 and 8 -- in this instance, see Eq. \eqref{eq:NMaxDet2}.} \label{fig:Deltas} \end{figure} As an alternative to the determinant of the Fisher matrix, we could also consider the grand sum defined in Eq. \eqref{eq:defTotTrace} as a summary statistics for the total Fisher information. In the case where the auto-spectra contain all the information, that grand sum is obtained directly from Eq. \eqref{eq:FishMT}: \begin{equation} \label{eq:trFauto} \Xi = \frac{V \tilde{V}_k}{2} \, \frac{{\cal{P}}^{2}}{(1+{\cal{P}})^{2}} \; . \end{equation} Interestingly, this summary statistics does not carry an explicit dependency on the number of tracers: for a fixed volume and bandwidth, the total information depends only on the total clustering SNR ${\cal{P}}$. Since maximizing that SNR also maximizes the grand sum, this leads us back to the same conclusion that equipartition is the optimal solution.
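The behaviour of Eq.~\eqref{eq:MaxdetFauto} is easy to explore numerically. The minimal sketch below (with an illustrative baseline SNR $P_0$) evaluates the FoM at equipartition as a function of $N$ and compares the integer that maximizes it with the approximation of Eq.~\eqref{eq:NmaxDet}:
\begin{verbatim}
import numpy as np

def fom_auto_max(N, P0, gamma=0.0):
    # FoM at equipartition (auto-spectra only), with calP = P0 * N**gamma
    calP = P0 * N**gamma
    return 2.0 * calP**N / (1.0 + calP)**(N + 1) * (calP / N)**N / 4.0**N

P0 = 44.0                           # illustrative baseline clustering SNR
N  = np.arange(1, 21)
N_best = N[np.argmax(fom_auto_max(N, P0))]

# Approximation valid in the gamma -> 0 limit
N_approx = P0**2 / (4.0 * np.e * (1.0 + P0))
print(N_best, N_approx)             # both close to 4 for P0 = 44
\end{verbatim}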
\subsection{Fisher summary statistics: including the cross-spectra} We now include the independent information from the cross-spectra in our analysis. In that case, the determinant of the Fisher matrix can be derived directly from Eq. \eqref{Eq:detF}. Here we assume, {\em a posteriori}, that for the fiducial model we have ${\cal{P}}_{12}^2 \to {\cal{P}}_{11} {\cal{P}}_{22}$. This means that in our expressions we assume that the non-linear and stochastic terms $\epsilon_i$ are relatively small -- a good approximation on large scales. The resulting FoM is then given by: \begin{eqnarray} \label{Eq:DetFCGen} \Delta = \det \left( {\cal{F}}_{[ij],[i'j']} \right) &=& 2^{-N} \left( V \tilde{V}_k \right)^{N(N+1)/2} \frac{\prod_{i} {\cal{P}}_{ii}^2}{\left( 1+{\cal{P}} \right)^{N+1}} \; . \end{eqnarray} This expression should be compared with Eq. \eqref{eq:detFauto}, in the case when the cross-spectra are not independent degrees of freedom. Once again, there are multiplicative factors of the phase space volume for each one of the degrees of freedom -- in this case, the $N(N+1)/2$ auto- and cross-spectra. The renormalized FoM then becomes: \begin{eqnarray} \label{Eq:DetFCGenNorm} \Delta_{Ph} &=& 2^{-N} \frac{\prod_{i} {\cal{P}}_{ii}^2}{\left( 1+{\cal{P}} \right)^{N+1}} \; . \end{eqnarray} Just as in the previous Section, maximizing the FoM leads to the equipartition of power between the tracers, ${\cal{P}}_{ii} \to {\cal{P}}/N$ -- and the same conditions still apply regarding the dependence of the biases of the tracers on the number densities, which we assume to be $b_i^2 \sim \bar{n}_i^{-\gamma}$, with $0<\gamma <1$ so that the extremum of the FoM is in fact a maximum. Finally, knowing that the equipartition of power maximizes the FoM $\Delta_{Ph}$, we can ask whether or not it is beneficial to split a sample into sub-types of tracers. Let's again consider the FoM at its maximum, i.e., substituting ${\cal{P}}_{ii} = {\cal{P}}/N$ into Eq. \eqref{Eq:DetFCGenNorm}: \begin{eqnarray} \label{Eq:DetFCGenMax} \Delta_{Ph}^{\rm max} = \frac{1}{2^{N} \, N^{2N} } \frac{{\cal{P}}^{2N}}{ \left( 1+ {\cal{P}}\right)^{N+1}} \; . \end{eqnarray} In the right panel of Fig. \ref{fig:Deltas} we show the FoM of Eq. \eqref{Eq:DetFCGenMax} as a function of $N$, again assuming that ${\cal{P}} = P_0 N^\gamma$. In this case we have plotted the FoM for the values $\gamma=0$ (solid), 0.05 (dashed), 0.1 (thin-dashed), and 0.2 (dotted lines), and the colors refer to the values of $P_0$ which make $N_0^{\rm max}=$ 2 (red, $P_0=60$), 4 (orange, $P_0=237$), 6 (green, $P_0=533$) and 8 (blue, $P_0=947$), according to Eq. \eqref{eq:NMaxDet2}. As was the case when we eliminated the information in the cross-spectra (left panel of Fig. \ref{fig:Deltas}), we see that higher values of the clustering SNR $\cal{P}$ lead to higher values for the optimal number of tracer species. However, there are some important differences when we include the cross-spectra. First, for similar values of the baseline SNR $P_0$, the optimal number of tracers when we disregard the degrees of freedom of the cross-spectra is higher than the optimal number of tracers when we take into account those degrees of freedom: e.g., compare the green lines in the left plot of Fig. \ref{fig:Deltas} (for $P_0 = 65.7$) with the red lines in the plot on the right (for $P_0 = 60$).
For this level of baseline clustering, we would arrive at an optimal number of $N\sim 8-10$ tracers if we neglected the cross-spectra, and an optimal number of $N\sim 2-3$ tracers if we fully incorporate those degrees of freedom. Another way of putting this is to notice that, when including the cross-spectra, the baseline clustering SNR needs to be significantly higher for the same optimal number of tracers compared with the case where the cross-spectra are not independent degrees of freedom: e.g., for $N_0^{\rm max}=4$ we only need $P_0=44$ when the cross-spectra are discarded, while that value grows to $P_0=237$ when they are included. The second difference is that, when we include the cross-spectra, the FoM becomes less sensitive to the power-law index $\gamma$, as can be seen by the difference between the solid, dashed and dotted lines of the plot in the right panel of Fig. \ref{fig:Deltas}, which is much less pronounced than in the case when we discarded the degrees of freedom in the cross-spectra (left panel of the same figure). Just as we did in the previous Section, an approximate expression for the optimal number of tracers can be found by taking the derivative of the FoM while assuming that the clustering SNR is kept fixed: \begin{eqnarray} \label{eq:NMaxDet2} \left. \frac{d \, \Delta_{Ph}^{\rm max}}{dN} \right|_{{\cal{P}}} = 0 \quad \Rightarrow \quad N_0^{\rm max} = \frac{1}{\sqrt{2} \, e} \, \frac{{\cal{P}}}{ \sqrt{1+ {\cal{P}}}} \; . \end{eqnarray} Since in the case where the cross-spectra are included the FoM is not very sensitive to $\gamma$ (see Fig. \ref{fig:Deltas}), we can take the limit $\gamma \to 0$ and substitute the maximal number found in Eq. \eqref{eq:NMaxDet2} into Eq. \eqref{Eq:DetFCGenMax}, to express the FoM as a function of the total SNR ${\cal{P}}$. The result is shown in Fig. \ref{fig:totfish}, for $N=1$, 2, 3, and 4. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Total_Fisher_Info.pdf} \caption{Figure of Merit (FoM) for the Fisher information at its maximum, $\Delta_{Ph}^{\rm max}$, as a function of the total power ${\cal{P}}$, for 1 (blue), 2 (green), 3 (orange), and 4 (red) tracers. Here we assumed that ${\cal{P}}$ does not vary significantly with $N$ -- see the text.} \label{fig:totfish} \end{figure} The other summary statistics that we can employ is the grand sum of the relative Fisher matrix, Eq. \eqref{eq:defTotTrace}. We have, for the relative Fisher matrix of the auto- and cross-spectra: \begin{equation} \label{eq:TrF2} \Xi = \sum_{[ij]} \sum_{[i'j']} {\cal{F}}_{[ij],[i'j']} = \frac12 \, V \tilde{V}_k \, \left( N - \frac{1}{1+{\cal{P}}} \sum_{ij} {\cal{P}}_{ij} \right)^2 \; . \end{equation} We can again extremize this functional assuming that $b_i^2 \sim \bar{n}_i^{-\gamma}$, subject to the constraint $\sum_i w_i = \sum_i \bar{n}_i/\bar{n}_T = 1$. The result is the same as before: the weights are all the same, meaning equipartition of the number densities and SNRs, $w_i = 1/N$ and ${\cal{P}}_{ij} ={\cal{P}}_{ii} = {\cal{P}}/N$. Substituting this extremal solution back into Eq. \eqref{eq:TrF2} we obtain: \begin{eqnarray} \label{Eq:DetFCGenMax2} \Xi^{\rm max} &=& \frac12 \, V \tilde{V}_k \, \left( N - \frac{1}{1+{\cal{P}}} N^2 \frac{{\cal{P}}}{N} \right)^2 \\ \nonumber &=& \frac12 \, V \tilde{V}_k \, \frac{N^2}{\left( 1+{\cal{P}}\right)^2} \; .
\end{eqnarray} Now, recall that ${\cal{P}} \sim N^\gamma$, with $0.1 \lesssim \gamma \lesssim 0.5$, and then it becomes clear that the grand sum Fisher information always grows when we split the tracers into more sub-types. A caveat is in order with regard to the conclusions that were drawn above. The precise behaviours of the FoM and the grand sum Fisher information depend on the nature of the relationship between the number densities of the tracers, $\bar{n}_i$, and their biases, $b_i$. In the derivation above we assumed that $b_i^2 \sim \bar{n}_i^{-\gamma}$ for all tracers, with $0 < \gamma < 1$, regardless of how the tracers are split. But this is not strictly valid even for halos: at $z=0$ the fit by Tinker for halo bias at low ($ M_h \lesssim 10^{12} \, h^{-1} \, M_\odot$) and high ($ M_h \gtrsim 10^{15} \, h^{-1} \, M_\odot$) masses yields $\gamma \sim 0.1-0.3$, while in the intermediate mass range $\gamma \sim 0.4-0.5$ \cite{2008ApJ...688..709T}. For galaxies, quasars and Ly-$\alpha$ systems the way in which we can break up the tracers into sub-types, according to luminosity, stellar mass or other properties, could be even less consistent with the hypothesis of a simple power-law relationship between number densities and biases. If that turns out to be the case in a given galaxy survey, then maximizing the information may be significantly more complex than in this simplified model. \section{The independent degrees of freedom: diagonalizing the Fisher matrix} If the cross-correlations do not carry any additional information, then, as shown by \cite{abramo2013multitracer}, the independent degrees of freedom which diagonalize the Fisher matrix of Eq. \eqref{eq:FishMT} are given by the total clustering power, ${\cal{P}}$, and the ratios of their clusterings. In the case of two tracers, the diagonalized Fisher matrix for the degrees of freedom $\{ {\cal{P}} \, , \log ({\cal{P}}_{11} / {\cal{P}}_{22}) \} $ is given by: \begin{eqnarray} \label{eq:FishDiag} F [{\cal{P}} \, , \, \log ({\cal{P}}_{11} / {\cal{P}}_{22})] &=& V \tilde{V}_k \, \left( \begin{array}{cc} \frac12 \frac{1}{(1+{\cal{P}})^2} & 0 \\ 0 & \frac14 \frac{{\cal{P}}_{11} \, {\cal{P}}_{22} }{1+{\cal{P}}} \end{array} \right) \, . \end{eqnarray} This expression makes it clear that, at least in the Gaussian approximation, measurements of the total clustering strength ${\cal{P}}$ are limited by cosmic variance: even if the total clustering becomes arbitrarily large, ${\cal{P}} \to \infty$, its relative uncertainty is still limited by the volume of the survey and the Fourier-space volume of the bandwidth, $\sigma^2({\cal{P}})/{\cal{P}}^2 \to 2/(V \tilde{V}_k)$. The ratios of spectra, on the other hand, can be measured with arbitrarily large accuracy (at least in principle), as long as we keep increasing the number densities of the two tracers. However, it is only on very large scales that one can realistically expect that the auto- and cross-spectra are degenerate, $P_{ij}^2 = P_{ii} P_{jj}$. Indeed, as argued in the Introduction, on small scales the cross-spectra may carry information about additional physical dependencies that are not directly available through the auto-correlations. But more importantly, cross-correlations and cross-spectra constitute different observables, which can be estimated from the data in different ways, in order to optimize the amount of information that is extracted from the survey.
Therefore, the question arises as to what the independent degrees of freedom are when the cross-spectra are regarded as carrying irreducible degrees of freedom, which are not degenerate with the auto-spectra. To be specific, the problem we wish to solve, in the particular case of two tracers, is how to diagonalize the Fisher matrix of Eq. \eqref{eq:FishPk}. A straightforward calculation shows that the three degrees of freedom which diagonalize that Fisher matrix are: \begin{eqnarray} \label{eq:Q1} {\cal{Q}}_1 &=& {\cal{P}} \left[ 1 + \frac12 \log \left( \frac{{\cal{P}}_{11}^2 + {\cal{P}}_{22}^2 + 2 {\cal{P}}_{12}^2}{{\cal{P}}^2} \right) \right] \\ \label{eq:Q2} {\cal{Q}}_2 &=& \log \left( \frac{{\cal{P}}_{11}^2 + {\cal{P}}_{12}^2 }{{\cal{P}}_{22}^2 + {\cal{P}}_{12}^2} \right) \\ \label{eq:Q3} {\cal{Q}}_3 &=& \frac{{\cal{P}}_{12}^2 - {\cal{P}}_{11} {\cal{P}}_{22}}{{\cal{P}}^2 } \; . \end{eqnarray} It is clear that, in the limit that ${\cal{P}}_{12}^2 \to {\cal{P}}_{11} {\cal{P}}_{22}$, the first two degrees of freedom reduce to ${\cal{Q}}_1 \to {\cal{P}}$ and to ${\cal{Q}}_2 \to \log {\cal{P}}_{11}/{\cal{P}}_{22}$, while the third one effectively disappears, ${\cal{Q}}_3 \to 0$. For that reason, it is useful to express the irreducible degree of freedom in the cross-spectrum in terms of a dimensionless quantity $\epsilon_{12}$: \begin{equation} {\cal{P}}_{12}^2 = {\cal{P}}_{11} {\cal{P}}_{22} (1 + \epsilon_{12} ) \quad \Leftrightarrow \quad {\cal{Q}}_3 = \frac{{\cal{P}}_{11} {\cal{P}}_{22}}{{\cal{P}}^2} \, \epsilon_{12} \, . \end{equation} Computing the Jacobian for the transformation from $\{ {\cal{P}}_{11} ,{\cal{P}}_{12} ,{\cal{P}}_{22} \} \to \{ {\cal{Q}}_1 , {\cal{Q}}_2 , {\cal{Q}}_3\} $ and using it to project the Fisher matrix of Eq. \eqref{eq:FishPk} into the new degrees of freedom yields the result: \begin{eqnarray} \label{eq:DiagFishPk} F [{\cal{Q}}_i , {\cal{Q}}_j]&=& V \tilde{V}_k \, \left( \begin{array}{ccccc} \frac12 \frac{1}{(1+{\cal{P}})^2} & \quad & 0 & \quad & 0 \\ {} & {} & {} & {} & \\ 0 & \quad & \frac14 \frac{{\cal{P}}_{11} \, {\cal{P}}_{22} }{1+{\cal{P}}} & \quad & 0 \\ {} & {} & {} & {} & \\ 0 & \quad & 0 & \quad & \frac12 (1+{\cal{P}})^2 \end{array} \right) \, . \end{eqnarray} Notice that our {\em fiducial model} is such that $P_{12}^2 \to P_{11} P_{22}$, or $\epsilon_{12} \to 0$, but we only take this limit {\em after} transforming to the new degrees of freedom. Some facts are immediately clear: first, the two generalized degrees of freedom ${\cal{Q}}_1$ and ${\cal{Q}}_2$ have the same Fisher information as the simplified degrees of freedom ${\cal{P}}$ and $\log ({\cal{P}}_{11}/{\cal{P}}_{22})$ -- see Eq. \eqref{eq:FishDiag}. But more importantly, just as it happens with the relative clustering strength ${\cal{Q}}_2$, the irreducible information in the cross-spectrum, encapsulated in ${\cal{Q}}_3$, also has a signal-to-noise ratio (SNR) that grows arbitrarily with the number density of the tracers, i.e.: \begin{equation} \label{Eq:SigmaQ3} \frac{{\cal{Q}}_3^2}{\sigma^2({\cal{Q}}_3)} = \frac{V \tilde{V}_k}{2} (1+{\cal{P}})^2 \left( \frac{{\cal{P}}_{12}^2 - {\cal{P}}_{11} {\cal{P}}_{22}}{{\cal{P}}^2} \right)^2 = \frac{V \tilde{V}_k}{2} (1+{\cal{P}})^2 \left( \frac{ {\cal{P}}_{11} {\cal{P}}_{22}}{{\cal{P}}^2} \right)^2 \epsilon_{12}^2 \, . \end{equation} Fig. \ref{fig:SNR} shows the behavior of the SNRs of the three independent degrees of freedom in some typical scenarios.
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{SNRs.pdf} \caption{Signal-to-noise ratios ${\cal{Q}}_i/\sigma({\cal{Q}}_i)$, for the three independent degrees of freedom in the case of two tracers -- see Eqs. \eqref{eq:Q1}-\eqref{eq:Q3}. The blue, green and red curves correspond to the SNRs of ${\cal{Q}}_1$, ${\cal{Q}}_2$ and ${\cal{Q}}_3$. The thin, dashed lines correspond to the scenario with $\bar{n}_1 = 10^{-4} \, h^{3} \, {\rm Mpc}^{-3}$, while the thick, solid lines correspond to the case where $\bar{n}_1 = 10^{-3} \, h^{3} \, {\rm Mpc}^{-3}$, both as a function of $\bar{n}_2$. For these examples we considered a volume $V=10^9 \, h^{-3} \, {\rm Mpc}^3$, a spherical bandpower at $k= (0.1 \pm 0.0025) \, h \, {\rm Mpc}^{-1}$, and a value of the power spectrum of $P^{(m)} (k) = 10^4 \, h^{-3} \, {\rm Mpc}^3$.} \label{fig:SNR} \end{figure} Taking the high SNR limit, ${\cal{P}}_1 = \bar{n}_1 P_{11} \gg 1$ and ${\cal{P}}_2 = \bar{n}_2 P_{22} \gg 1$, we see that the accuracies in the measurements of the three independent variables scale as: \begin{eqnarray} \label{eq:SNR1} \frac{{\cal{Q}}_1^2}{\sigma^2({\cal{Q}}_1)} &\sim & \left( \frac{\bar{n}_1 \, \bar{n}_2}{\bar{n}_1 + \bar{n}_2} \right)^0 \to \left( \frac{\bar{n}_T}{4} \right)^0 \\ \label{eq:SNR2} \frac{{\cal{Q}}_2^2}{\sigma^2({\cal{Q}}_2)} &\sim & \left( \frac{\bar{n}_1 \, \bar{n}_2}{\bar{n}_1 + \bar{n}_2} \right)^1 \to \left( \frac{\bar{n}_T}{4} \right)^1 \\ \label{eq:SNR3} \frac{{\cal{Q}}_3^2}{\sigma^2({\cal{Q}}_3)} &\sim & \left( \frac{\bar{n}_1 \, \bar{n}_2}{\bar{n}_1 + \bar{n}_2} \right)^2 \to \left( \frac{\bar{n}_T}{4} \right)^2 \; , \end{eqnarray} where, for two tracers, $\bar{n}_T = \bar{n}_1 + \bar{n}_2$, and the expressions on the right-hand sides result from using the optimal ``equipartition'' configuration that maximizes the SNR, namely $\bar{n}_1=\bar{n}_2=\bar{n}_T/2$. The first limit, Eq. \eqref{eq:SNR1}, expresses cosmic variance: the accuracy in measurements of the matter power spectrum is fundamentally limited by the volume where we measure the density perturbations and by the width of the bandpower, even if we can count on arbitrarily large numbers of tracers to determine the density field inside that particular volume. The second limit, Eq. \eqref{eq:SNR2}, means that ratios of power spectra of different tracers can be measured with an accuracy that is only limited by the numbers of tracers that we have, and scale with the number density \cite{abramo2013multitracer}. Finally, the third limit, Eq. \eqref{eq:SNR3}, tells us that the irreducible degrees of freedom of the cross-correlations scale even faster with the tracer densities, compared with the ratios of tracers. Therefore, in the limit of high number of tracers, these degrees of freedom can be determined with extremely high accuracy, and are even less constrained by cosmic variance. On the other hand, in the limit of small number densities, the situation is reversed. In that case the Fisher matrix of Eq. \eqref{eq:DiagFishPk} tells us that the SNR of the degrees of freedom scale as $(\bar{n}_1 + \bar{n}_2)^2 $ for ${\cal{Q}}_1$, as $\bar{n}_1 \, \bar{n}_2 $ for ${\cal{Q}}_2$, and as $ \bar{n}_1^2 \, \bar{n}_2^2/(\bar{n}_1 + \bar{n}_2)^4 $ for ${\cal{Q}}_3$. Therefore, in the limit of very sparse tracers it becomes increasingly difficult to measure the ratios of spectra, and even harder to determine the irreducible degrees of freedom in the cross-spectra. 
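The scalings of Eqs.~\eqref{eq:SNR1}--\eqref{eq:SNR3} are simple to verify numerically from the diagonal Fisher matrix of Eq.~\eqref{eq:DiagFishPk}. The sketch below (per unit phase-space volume, with illustrative biases and a small illustrative value of $\epsilon_{12}$ for the fiducial model) estimates the effective power-law index of ${\cal{Q}}_i^2/\sigma^2({\cal{Q}}_i)$ with respect to the common number density of the two tracers:
\begin{verbatim}
import numpy as np

def snr2(nbar, b1=1.3, b2=2.0, Pm=1.0e4, eps12=0.05):
    # Q_i^2 / sigma^2(Q_i) per unit phase-space volume (V*Vtilde_k = 1),
    # for two tracers with common number density nbar (illustrative values)
    P11, P22 = nbar * b1**2 * Pm, nbar * b2**2 * Pm
    calP = P11 + P22
    Q = np.array([calP, np.log(P11 / P22), P11 * P22 * eps12 / calP**2])
    F = np.array([0.5 / (1.0 + calP)**2,            # Q_1: total clustering
                  0.25 * P11 * P22 / (1.0 + calP),  # Q_2: ratio of spectra
                  0.5 * (1.0 + calP)**2])           # Q_3: cross-spectrum d.o.f.
    return Q**2 * F

# Effective slopes d ln(SNR^2) / d ln(nbar) in the high-density regime
n_lo, n_hi = 1e-3, 1e-2
slopes = np.log(snr2(n_hi) / snr2(n_lo)) / np.log(n_hi / n_lo)
print(slopes)   # approximately [0, 1, 2]
\end{verbatim}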
These results put the issue of the cancellation of cosmic variance into a new light: some degrees of freedom profit even more from a denser sampling than others. Ratios of spectra (green lines in Fig. \ref{fig:SNR}), which allow us to measure, e.g., redshift-space distortions and primordial non-Gaussianities, as well as many parameters in the bias expansion \cite{2021arXiv210811363M}, start to become more interesting when the number densities of the tracers reach $\bar{n} \gtrsim 10^{-4} \, h^{3} \, {\rm Mpc}^{-3}$, but they saturate at an accuracy of $\sim 1-2\%$ unless both tracers have number densities $\bar{n} \gtrsim 10^{-2} \, h^{3} \, {\rm Mpc}^{-3}$, which does not seem realistic. On the other hand, the independent degrees of freedom in the cross-spectrum (red lines in Fig. \ref{fig:SNR}) start to become detectable for $\bar{n} \gtrsim 10^{-4} \, h^{3} \, {\rm Mpc}^{-3}$, but their accuracy grows even faster with the number density, such that we can reasonably achieve accuracies of $\sim 0.3\%$ when the two tracers have number densities of $\bar{n} \sim 10^{-3} \, h^{3} \, {\rm Mpc}^{-3}$. Finally, we note that we were unable to determine an analytic expression that generalizes to $N$ tracers the independent degrees of freedom ${\cal{Q}}_i$ when we include the cross-spectra. Nevertheless, a numerical study of the eigenvalues of the Fisher matrix seems to indicate that the irreducible degrees of freedom in the cross-spectra indeed scale in the manner shown in Eq. \eqref{eq:SNR3}. We will return to this issue in a future paper. \section{Conclusions} This paper conveys two main results. Firstly, we showed how to use two summary statistics of the Fisher matrix in order to optimize the number of tracer species in a survey. We employed the following proxies for the total Fisher information: the determinant (or Figure of Merit), and the grand sum of the Fisher matrix. By assuming a simple power-law relation between bias and number density we have shown that, for either of these two summary statistics, with or without the explicit inclusion of the independent degrees of freedom of the cross-spectra, the configuration that optimizes the total Fisher information is the one in which the tracers are divided into equal samples: $\bar{n}_i = \bar{n}_T/N$, where $\bar{n}_T = \sum_{i=1}^N \bar{n}_i$ is the total number density of tracers in the survey. Moreover, in general, the higher the total clustering SNR ${\cal{P}} = \sum_i \bar{n}_{i} P_{ii}$, the higher the optimal number of tracers that one should employ in order to maximize that Fisher information. We have also shown in this paper that the information in the irreducible degrees of freedom of the cross-spectra, expressed in terms of ${\cal{P}}_{ij}^2 - {\cal{P}}_{ii}{\cal{P}}_{jj}$, can be measured with an accuracy that is not limited by cosmic variance. This result is analogous to the well-known fact that ratios of spectra are partially immune to cosmic variance; however, while the accuracy of spectral ratios increases with $\sim \sqrt{\bar{n}_T}$, the accuracy of the degrees of freedom of the cross-spectra grows with $\sim \bar{n}_T$. This means that physical parameters which are manifested in the combination ${\cal{P}}_{ij}^2 - {\cal{P}}_{ii}{\cal{P}}_{jj}$ can be measured, at least in principle, to exquisite precision. In particular, stochastic and non-linear terms in the perturbative bias expansion, such as those shown in Eq. \eqref{eq:AutoCross}, can benefit from this windfall of the multi-tracer analysis.
\acknowledgments We thank Henrique Rubira, Thiago Mergulh\~ao and Rodrigo Voivodic for useful comments. We also acknowledge the financial support of FAPESP (R.A.), CNPq (R.A \& I.L.T.) and CAPES (J.V.D.F.). \nocite{*} \bibliographystyle{JHEP}
\section{Introduction}\label{sec:intro}} \IEEEPARstart{D}{iscriminative} feature extraction from 3D meshes is fundamentally important for computer graphics \cite{hanocka2019meshcnn,ranjan2018generating} and computer vision \cite{dai2017scannet,scenenn-3dv16,Matterport3D}. Its success is of great value to multiple emerging technologies, including autonomous driving, robotics and virtual/augmented reality. Considering the impressive performance of convolutional feature learning on homogeneous grid data, i.e.~images and videos \cite{he2016deep,krizhevsky2012imagenet,liu2016ssd,long2015fully,redmon2016you,ren2015faster,ronneberger2015u,simonyan2014very}, researchers are also seeking alternative convolutional neural networks for 3D-mesh feature learning~\cite{verma2018feastnet,hanocka2019meshcnn,schult2020dualconvmesh}. Currently this is a major research topic in geometric deep learning, which focuses on feature encoding of generic heterogeneous data in non-Euclidean space~\cite{bronstein2017geometric}. Compared to 3D point clouds, the geodesic connections of 3D meshes, which constitute edges and facets on top of vertices, hold key information about object surfaces and scene topology \cite{yi2016scalable,dai2017scannet}. Concurrently, mesh facets can also carry higher-resolution texture information. This variety and the heterogeneous nature of mesh data primitives (\textit{i.e.~}vertices, facets, textures, etc.) make adaptation of deep learning to 3D meshes much more challenging~\cite{lei2020spherical}. Besides, the rich extra information of mesh data casts a large memory footprint on GPU devices and raises practical concerns for the training of deep neural networks, especially in large-scale scenarios \cite{dai2017scannet,Matterport3D,scenenn-3dv16,song2015sun,armeni20163d}. Applications in computer graphics largely focus on small-scale mesh data. We find a few mesh-based neural networks that learn features for \textit{shape analysis} \cite{boscaini2016learning,hanocka2019meshcnn,monti2017geometric,ranjan2018generating,xie2015deepshape}. Generally, these methods handle small shape meshes as graphs and learn mesh representations using graph convolutions. Their architectures are either non-hierarchical, adopting a single network resolution, or they reduce the network resolution slowly with inefficient mesh decimation algorithms \cite{garland1999quadric,garland1997surface,rossignac1993multi,zhou2018open3d}. Whereas non-hierarchical networks and slow resolution reduction are acceptable for small-scale problems, they become impractical for large-scale mesh data. \begin{figure*}[!t] \centering \includegraphics[width=0.96\textwidth]{Figs/Lib_demo_usage.pdf} \vspace{-3mm} \caption{(a) An example of building a simple hierarchical mesh network for shape classification using the mesh convolutions and poolings in Picasso. The network comprises two hierarchical layers, and uses batch size 3 in this example. It accepts batch input as a tuple of $({\bf V}, {\bf F}, {\bf H}_G^0)$, where ${\bf V}$ denotes concatenated vertices, ${\bf F}$ are facets, and ${\bf H}_G^0$ denotes facet geometric features of shapes. In this illustration, we decimate the input meshes by reducing their number of vertices by $N_r$. (b) Example configuration of the standard initial layer which considers the input features to comprise both geometric features, ${\bf H}_G^0$, and texture features, ${\bf H}_C^0$.
We discuss further details of this figure in the overview of Picasso in \S~\ref{sec:overview}.} \label{fig:picasso_demo_uage} \vspace{-2mm} \end{figure*} For large-scale 3D problems, \textit{e.g.~} semantic \textit{scene parsing}, most techniques only deal with (voxelized) point clouds \cite{choy20194d,thomas2019kpconv,graham2017submanifold,lei2020spherical,wu2019pointconv,qi2017pointnetplusplus,su2018splatnet}. The absence of efficient modular operations for mesh processing in modern libraries such as Tensorflow \cite{abadi2016tensorflow} and Pytorch \cite{paszke2019pytorch} requires substantial effort to apply deep learning to large-scale mesh inputs. This is currently a major hindrance to effective neural modeling of scene surfaces~\cite{huang2019texturenet,schult2020dualconvmesh}. Point cloud methods are unable to directly benefit from the pre-existing neighborhood information in the data. On the other hand, robust geometrics and fine facet textures of meshes provide appealing alternatives to point coordinates $(x,y,z)$ and colors $(r,g,b)$ as input features. These alternatives remain largely under-exploited in practical large-scale 3D problems due to the absence of appropriate operations in contemporary deep learning libraries. With this work, we aim to advance geometric deep learning over 3D meshes for shape analysis in graphics and scene parsing in vision. We introduce \textit{Picasso}\footnote{Paying homage to Pablo Picasso for cubism in paintings.}, a collection of deep learning modules that provides convenient modular operations for efficient mesh decimation, (un-)pooling and convolution. Additionally, we propose a generic neural network for efficiently learning discriminative features for synthetic/real, small/large-scale, watertight/unstructured meshes. We construct our mesh-based neural network with the proposed Picasso modules. Our network processes facet geometrics and textures instead of vertex coordinates and colors as input, while also capitalizing on the pre-existing geodesic neighborhood of meshes. In Picasso, we introduce GPU-accelerated mesh decimation that simplifies a \textit{batch} of heterogeneous meshes on-the-fly for hierarchical feature learning. It performs all computations in parallel on GPU, except for the vertex clustering. It allows control over the decimated mesh resolution using a desired number of vertices. We define the (un)pooling operations based on vertex clusters recorded during the decimation. They are required to produce features for newly-created neurons when the network resolution is altered. In addition, three novel convolution types are presented for context aggregation on meshes, namely \textit{facet2vertex}, \textit{vertex2facet}, and \textit{facet2facet}. The mesh convolutions exploit a von Mises-Fisher (vMF) mixture \cite{gopal2014mises} and Barycentric interpolation for fuzzy modelling. Figure~\ref{fig:picasso_demo_uage} illustrates an example mesh-based neural network for shape classification that can be built using Picasso modules. We additionally propose a generic neural network for 3D meshes, PicassoNet-\Romannum{2}, by incorporating a series of significant improvements over the original PicassoNet~\cite{lei2021picasso}. These improvements pertain to skip connections, dual convolutions, network depth and the ability to process true mesh data. 
Dual convolutions, which explore both geodesic and Euclidean neighborhoods, can be computationally intractable for dense data due to the search required to establish the Euclidean neighborhood~\cite{lei2020spherical}. We demonstrate that the Euclidean neighborhood does not contribute significantly to feature learning from high-resolution meshes. Leveraging this insight, PicassoNet-\Romannum{2} applies dual convolutions only at the low-resolution layers. Our network is also able to process both small-scale and large-scale 3D meshes as intact samples. We evaluate PicassoNet-\Romannum{2} on the ShapeNetCore \cite{chang2015shapenet} dataset, along with SHREC \cite{lian2011shape}, CUBE \cite{hanocka2019meshcnn}, COSEG \cite{wang2012active}, HUMAN \cite{maron2017convolutional} and FAUST \cite{bogo2014faust} for shape analysis. We also evaluate it on the large-scale S3DIS \cite{armeni20163d} and ScanNet \cite{dai2017scannet} datasets for real-world scene parsing, achieving highly competitive results in all cases. This article is a significant extension of our preliminary work presented in IEEE CVPR 2021~\cite{lei2021picasso}. Below, we summarize the major enhancements beyond the preliminary conference work. \begin{itemize} \item \textbf{Fuzzy modeling:} Since normals represent directional distributions on the surface of a unit sphere, we replace the Gaussian mixture in \cite{lei2021picasso} with the vMF mixture for better fuzzy modelling. Additionally, we train the centers (mean directions) of each mixture component while fixing its concentration parameter. We also remove the Barycentric interpolation of the vertex2facet convolution in \cite{lei2021picasso} for computational and memory gains. \vspace{0.5mm} \item \textbf{Improved efficiency without trading off efficacy:} We establish the passive role of dual convolutions in high-resolution mesh feature learning. In PicassoNet-\Romannum{2}, we address this to gain a significant computational advantage over \cite{lei2021picasso} while maintaining the performance. We further improve the network architecture through better design choices for skip connections and sub-network blocks. \vspace{0.5mm} \item \textbf{Rendered mesh as input:} The network in \cite{lei2021picasso} can only process point coordinates and colors as input features, whereas PicassoNet-\Romannum{2} can also handle rendered meshes as inputs. We incorporate all required functionalities in our network, including reconfiguration of the initial convolutional layer (see Fig.~\ref{fig:picasso_demo_uage}). \vspace{0.5mm} \item \textbf{Extensive evaluation:} We evaluate PicassoNet-\Romannum{2} for shape analysis and scene parsing on a wide variety of standard benchmarks. It achieves highly competitive performance on all datasets. We also provide extensive ablation studies for analysis. We release the latest Picasso and PicassoNet-\Romannum{2} on GitHub\footnote{\href{https://github.com/EnyaHermite/Picasso}{https://github.com/EnyaHermite/Picasso}} for the broader research community. \item \textbf{Pytorch extension:} In \cite{lei2021picasso}, Picasso was originally implemented in Tensorflow. With this work, we make it available in both Tensorflow and Pytorch due to the growing popularity of Pytorch. We emphasize that the Picasso version released with this article incorporates not only the newly introduced modules for heterogeneous mesh processing, but also compatible modular operations for heterogeneous point cloud processing. 
We include the point cloud modules by improving on our previous contributions to heterogeneous applications~\cite{lei2020spherical,lei2020seggcn}. As a whole, Picasso enables convenient building of neural networks to process 3D meshes and point clouds of arbitrary sizes. \end{itemize} \section{Related Work}\label{sec:references} Among the existing feature learning approaches for 3D data, the majority considers point clouds as input, while only a few works deal with meshes despite their added benefit of pre-existing neighborhood connections. This discrepancy can be largely attributed to the absence of modular implementations that provide operations for hierarchical deep learning with fast mesh decimation and network reduction as well as (un)poolings. \vspace{-2mm} \subsection{Convolution on 3D Point Clouds}\label{subsec:convolution_pointcloud} Applying voxel-grid kernels to dense volumetric representations is the most straightforward way of transferring CNNs from images to point clouds \cite{wu20153d,maturana2015voxnet,huang2016point,sedaghat2016orientation,zeng20163dmatch,zhang2017deepcontext}. However, the practical potential of these methods is limited by their cubically growing memory and computational requirements. Different strategies have been introduced to incorporate sparsity into the dense volumetric CNNs \cite{EngelckeICRA2017,graham20183d,li2016fpnn,hua2018pointwise,choy20194d,riegler2017octnet}, among which SparseConvNets \cite{graham20183d,choy20194d,tang2020searching} are currently the best performing architectures. Several approaches also explore similar regular-grid kernels for transformed input representations of point clouds, such as TangentConv \cite{tatarchenko2018tangent}, SplatNet \cite{su2018splatnet} and PCNN~\cite{atzmon2018point}. Since PointNet \cite{qi2017pointnet}, permutation-invariant networks have learned features from point clouds using multilayer perceptrons followed by max pooling~\cite{klokov2017escape,li2018so,qi2017pointnet,qi2017pointnetplusplus,rethage2018fully,shen2018mining,wu2019pointconv}, showing that the spatial coordinates ($x,y,z$) of points can be effective input features to the network. Graph-based neural networks allow the convolutions to be conducted in either the spectral or the spatial domain. However, applying spectral convolutions to point cloud processing is complicated because they demand the graph Laplacians of different input samples to be pre-aligned \cite{yi2017syncspeccnn}. As a pioneering work in the spatial domain, ECC \cite{simonovsky2017dynamic} exploits dynamic filters \cite{de2016dynamic} to generate the graph convolution parameters for point cloud analysis. Subsequent works also explored more effective kernel and filter parameterizations \cite{groh2018flex,li2018pointcnn,wang2019attention,wu2019pointconv,xu2018spidercnn}. The discrete kernels \cite{lei2019octree,lei2020seggcn,lei2020spherical,thomas2019kpconv,xu2021paconv} are efficient alternatives to those dynamic kernels as they define the filter parameters directly, avoiding the necessity of indirect filter generation within the network. The spherical kernels \cite{lei2020seggcn,lei2020spherical} that separate depth-wise and point-wise computations are advantageous in terms of memory and runtime, while KPConv \cite{thomas2019kpconv} is reported to be more competitive than SparseConvNets. Recently, researchers have also started to adapt transformers~\cite{vaswani2017attention} to point cloud processing~\cite{zhao2021point}. 
Although many existing approaches use spatial coordinates $(x,y,z)$ of points as geometric input features (\textit{e.g.~} \cite{qi2017pointnetplusplus,wu2019pointconv,li2018pointcnn,thomas2019kpconv,lei2020seggcn,wang2019attention,xu2018spidercnn}), such coordinates are particularly inconvenient when data cropping and transformations are involved in pre-processing or data augmentation. Our work instead allows mesh structures as the input modality. Using relative geometric attributes of mesh facets as input features naturally mitigates this problem. \vspace{-1mm} \subsection{Convolution on 3D Meshes}\label{subsec:convolution_mesh} Multiple approaches exist that employ convolution on meshes to learn features for small-scale shape analysis. The convolutions are generally performed on local planar patches defined in hand-crafted coordinate systems \cite{boscaini2016learning,masci2015geodesic,monti2017geometric}. These methods either establish the coordinate system using geodesic level sets \cite{masci2015geodesic} or surface normals and principal curvatures \cite{boscaini2016learning,monti2017geometric}. For improved correspondence matching, Verma \textit{et~al.~} \cite{verma2018feastnet} replaced the previous hand-crafted local patches with a learnable mapping between graph neighborhoods and filter weights. To reconstruct human facial expressions, Ranjan \emph{et al.}~\cite{ranjan2018generating} exploited the spectral graph convolutions \cite{defferrard2016convolutional} with hierarchical mesh-based autoencoders. Whereas other methods focus on learning vertex-wise features, MeshCNN~\cite{hanocka2019meshcnn} introduces a convolutional operation that learns edge-wise features for semantic labelling on a mesh. The recent PD-MeshNet~\cite{milano2020primal} further extracts facet-wise representations by defining convolution on the primal-dual graphs of an input mesh. It reduces network resolution using the graph edge contraction method provided by Pytorch Geometric \cite{fey2019fast}. Currently, only a small number of mesh-based convolutional networks exist for large-scale scene parsing in the real world. TextureNet \cite{huang2019texturenet} parameterizes the room surface into local planar patches in the 4-RoSy field such that standard CNNs \cite{krizhevsky2012imagenet} can be applied to extract high-resolution texture information from mesh facets. Schult~\textit{et~al.~} \cite{schult2020dualconvmesh} applied the spatial graph convolutions of dynamic filters \cite{li2018pointcnn,simonovsky2017dynamic,wang2018dynamic,wu2019pointconv} to the union of neighborhoods in both geodesic and Euclidean domains for vertex-wise feature learning. VMNet \cite{hu2021vmnet} combines the SparseConvNet \cite{tang2020searching} with graph convolutional networks to learn merged features from point clouds and meshes. Generally, previous methods explore a mesh as an edge-based graph and define the graph convolutions based on its geodesic connections \cite{milano2020primal,verma2018feastnet,schult2020dualconvmesh,ranjan2018generating}. We instead propose convolutions on the mesh structure itself, following its elementary geometric components, i.e.~vertices and facets. To promote this more natural perspective, we also contribute computation- and memory-optimized CUDA implementations for the forward and backward propagations of all the mesh convolutions we present in this work. 
\vspace{-1mm} \subsection{Mesh Decimation}\label{subec:decimation_mesh} Hierarchical neural networks induce multi-scale feature extraction by allowing convolutions to be applied on increasing receptive fields of the input data. Although farthest point sampling (FPS) is widely used to construct hierarchical architectures for point clouds \cite{lei2020spherical,qi2017pointnetplusplus,wu2019pointconv}, it is inapplicable to mesh processing because of its inability to track vertex connections. Fortunately, the graphics research community has contributed effective methods for mesh simplification, such as Vertex Clustering (VC) \cite{rossignac1993multi} and Quadric Error Metrics (QEM)~\cite{garland1999quadric,garland1997surface}. The two methods are suitable choices for mesh-based neural networks \cite{ranjan2018generating,schult2020dualconvmesh,hu2021vmnet} to establish hierarchical architectures. Compared to VC, the QEM method is better at reducing mesh resolution while retaining most of the geometric information, leading to superior performance \cite{schult2020dualconvmesh}. Specifically, QEM simplifies a mesh via iterative contractions of vertex pairs, where the optimal vertex pair for contraction has to be re-determined after each iteration. The popular geometric processing library Open3D \cite{zhou2018open3d} offers simplification functions for both VC and QEM. However, the CPU-based implementation is inefficient and also not amenable to operations required for deep learning, \textit{e.g.~} batch processing. Though QEM is superior to VC in performance, its iterative progressive strategy makes it impossible to deploy on GPUs as parallel processes. In this work, we introduce a fast mesh decimation technique based on the QEM algorithm \cite{garland1997surface}. Compatible with deep learning, our method can process a batch of heterogeneous meshes on-the-fly. In contrast to \cite{garland1997surface}, it sorts all the vertex pairs only once according to their quadric errors, and groups the vertices to be contracted into \textit{disjoint} clusters. Except for the grouping process, all other computations in our method are accelerated via parallel GPU computing. \begin{figure*}[!t] \centering \includegraphics[width=0.98 \textwidth]{Figs/VCluster.pdf}\\ \vspace{-3mm} \caption{Illustration of the vertex clustering process. (a) An input mesh with twelve edges (vertex pairs). We sort the vertex pairs in ascending order according to their quadric errors. (b) Then, we initialize the clusters as $\{c,d\},\{a,g\},\{e,f\}$ using the disjoint vertex pairs $(c,d),(a,g),(e,f)$, shown in red, green and blue. (c) We group the remaining vertex $b$ to the vertex cluster $\{a,g\}$ because $(a,b)$ holds the smallest quadric error among all pairs containing $b$, \textit{i.e.~} $(a,b),(b,g),(b,c)$. Finally, the vertex clusters become $\{c,d\},\{a,b,g\},\{e,f\}$. (d) We construct the decimated mesh by applying vertex contraction to each cluster. The target position of contraction is computed as the average location of all vertices in the cluster.} \label{fig:vertex_cluster} \vspace{-2mm} \end{figure*} \section{GPU-Accelerated Mesh Decimation} To explore hierarchical neural networks on 3D meshes, there is a need for an efficient mesh decimation technique that suits deep learning with on-the-fly network reduction. The QEM algorithm \cite{garland1997surface} is effective at simplifying meshes while retaining high decimation quality. 
However, it applies contractions to vertex pairs iteratively, selecting the pair with the globally optimal quadric error at each step. The implicit dependencies between the iterative contractions make this method unsuitable for parallel acceleration\footnote{We summarise the QEM method in Algorithm~A.1 of the supplementary where its iterative dependency is clear from \textit{lines 7--13}.}. Hence, we propose an enhancement of QEM to enable parallel computing with GPUs. In our method, we do not allow inter-dependent iterative contractions. Instead, we group the vertices into multiple disjoint clusters under a reasonable compromise on the quadric error cost. We control the clustering process using the expected number of vertices in the decimated mesh rather than the number of edges or facets. Due to the disjointness of vertex clusters, their contractions are independent of each other and can be executed in parallel. We provide a toy example in Fig.~\ref{fig:vertex_cluster} to illustrate our procedure of vertex clustering. In our method, we establish the vertex pairs for candidate contraction using the end-vertices of mesh geodesic edges only. To prioritize the vertex pairs that contribute to lower quadric errors, we arrange them in ascending order of quadric error. Each vertex cluster is then initialized as a disjoint vertex pair in the ascending order of the candidates. We summarize our mesh simplification procedure as Algorithm~\ref{alg:Mesh_decimate_ours}, which reduces the number of mesh vertices by nearly half per iteration. To decimate a mesh to an arbitrary number of vertices, we allow the core algorithm to be iterated a flexible number (${\geqslant}1$) of times. In Algorithm~\ref{alg:Mesh_decimate_ours}, we present the decimation method for a single input mesh for clarity. Our decimation function implementation processes `mini-batches' of multiple meshes. We execute the vertex clustering (\textit{lines 5--16}) on CPU while all the other operations that require heavy computations are performed on GPU. The clustering process has a time complexity of $\mathcal{O}(|\mathcal{E}|)$, where $|\mathcal{E}|$ is the number of edges of the input mesh. The routine penalties and consistency checks of mesh decimation are excluded in our method to favor runtime efficiency. We compare the runtime of QEM and the proposed decimation algorithm in Fig.~B.1 of the supplementary, where our method is much faster. \begin{algorithm}[t] \caption{The GPU-accelerated mesh simplification} \label{alg:Mesh_decimate_ours} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE mesh $\mathcal{T}^i{=}(\mathcal{V}^i,\mathcal{F}^i)$; number of vertices to remove $N_r$. \hspace{-4mm} \ENSURE decimated mesh ${\mathcal{T}^o}{=}(\mathcal{V}^o,\mathcal{F}^o)$. \vspace{1mm} \STATE establish a vertex pair $({\bf v}_i$, ${\bf v}_j)$ for each edge. \STATE compute the quadric cost of contracting each pair. \STATE sort all pairs in ascending order of their quadric costs. \STATE set $n_r=0$, and $p({\bf v}_i)=\FALSE, \forall~{\bf v}_i \in \mathcal{V}^i$. \FOR{each pair (${\bf v}_i$, ${\bf v}_j$)} \IF{$p({\bf v}_i)=\FALSE$ \AND $p({\bf v}_j)=\FALSE$ \AND $n_r<N_r$} \STATE (a) initialize $\{{\bf v}_i, {\bf v}_j\}$ as a new cluster. \STATE (b) set $n_r=n_r+1$, $p({\bf v}_i)=\TRUE$, $p({\bf v}_j)=\TRUE$. \ENDIF \ENDFOR \FOR{each pair (${\bf v}_i$, ${\bf v}_j$)} \IF{($p({\bf v}_i)=\FALSE$ \OR $p({\bf v}_j)=\FALSE$) \AND $n_r<N_r$} \STATE (a) place ${\bf v}_i$, ${\bf v}_j$ in the same cluster. 
\STATE (b) set $p({\bf v}_i)=\TRUE$, $p({\bf v}_j)=\TRUE$. \ENDIF \ENDFOR \FOR{each cluster $\{{\bf v}_i, {\bf v}_j, \dots\}$} \STATE (a) compute the average position $\bar{\bf v}$ of the cluster. \STATE (b) contract the cluster to $\bar{\bf v}$. \ENDFOR \STATE return \end{algorithmic} \footnotetext{We compute $\bar{\bf v}$ as the average position of all vertices in a cluster.} \end{algorithm} In our implementation, we also record the vertex clustering information with a parameter \textit{VCluster}, and the vertex mapping between input and output meshes with a parameter \textit{IOmap}. They are both vectors of the same size as the number of vertices $|\mathcal{V}^i|$ in the input mesh $\mathcal{T}^i=(\mathcal{V}^i,\mathcal{F}^i)$. Our decimation function yields these two parameters along with the decimated mesh, as they are required by the (un)pooling computations. \vspace{1mm} \noindent \textbf{(Un)poolings:} The clustering and mapping information encoded in the vectors \textit{VCluster} and \textit{IOmap} greatly facilitates the (un)pooling computations. Considering each cluster as a local region or neighborhood, common pooling operations such as `sum'/`average'/`max'/`median'/`weighted' can be directly defined. We provide max($\cdot$) and average($\cdot$) poolings to down-sample the features. For unpooling, all vertices in a cluster replicate the features of the representative vertex that the cluster is contracted to in the decimated mesh. Consider the input and output meshes in Fig.~\ref{fig:vertex_cluster} as an example. We compute the feature of vertex `$1$' in the decimated mesh as $h^1=\max(h^a,h^b,h^g)$ under max pooling, while creating the features of $\{a,b,g\}$ as $h^a=h^b=h^g=h^1$ in unpooling. In addition to the mesh decimation and (un)poolings, we also introduce convolutional operations that are more compatible with feature learning on triangular meshes than the previous graph convolutions \cite{schult2020dualconvmesh}. \vspace{-3mm} \section{Mesh Convolutions} To discuss the proposed mesh convolutions, we first briefly introduce a few notions. We represent a triangular mesh as $\mathcal{T}=(\mathcal{V},\mathcal{F})$, which is a tuple of vertices and facets. Each vertex has spatial coordinates ${\bf x}=(x,y,z)$, and may have additional features like texture ${\bf c}=(r,g,b)$, normal and intensity. Let the area and normal of a facet be $A$ and ${\bf n}=(n_x,n_y,n_z)$. For a rendered mesh, we further denote the texture size of a facet as $K\times3$, where $K$ relates to the texture resolution and `3' indicates the $(r,g,b)$ values. We allow $K$ to vary according to the area of facets. This is elaborated further in \S~\ref{subsec:facet2facet_conv}. \vspace{-2mm} \subsection{Facet2vertex Convolution}\label{subsec:facet2vertex_conv} We compute features of each vertex by aggregating context information from adjacent facets, rather than neighboring vertices. This avoids transforming a mesh into a graph for context propagation. The facet normal is directional data residing on the surface of a unit sphere. We define our kernel by associating filter weights to different positions on the sphere, while the kernel size relates to the number of different positions. Typically, normals of real-world meshes, especially indoor meshes, are distributed as distinctive pattern clusters on the unit sphere, resulting from human construction preferences\footnote{To illustrate, we provide an example of the mesh normal distribution of S3DIS~\cite{armeni20163d} in Fig.~C.1 of the supplementary.}. 
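Before detailing the kernel, we pause for a concrete illustration of the cluster-driven (un)pooling of the preceding section. The following minimal PyTorch sketch uses our own function names and a \textit{VCluster}-style assignment vector, not the released Picasso interface; the toy values mirror the clusters of Fig.~\ref{fig:vertex_cluster}.
\begin{verbatim}
import torch

def cluster_max_pool(feats, vcluster, num_out_vertices):
    # feats:    [N_in, C] input vertex features
    # vcluster: [N_in]    cluster id (decimated vertex) of each input vertex
    index = vcluster.unsqueeze(1).expand(-1, feats.size(1))
    pooled = torch.zeros(num_out_vertices, feats.size(1), dtype=feats.dtype)
    # max-reduce all features that fall into the same cluster
    return pooled.scatter_reduce(0, index, feats, reduce="amax",
                                 include_self=False)

def cluster_unpool(pooled, vcluster):
    # each vertex of a cluster replicates the feature of its decimated vertex
    return pooled[vcluster]

# toy example: vertices a..g with clusters {c,d}, {a,b,g}, {e,f}
feats = torch.randn(7, 4)
vcluster = torch.tensor([1, 1, 0, 0, 2, 2, 1])  # a,b->1, c,d->0, e,f->2, g->1
pooled = cluster_max_pool(feats, vcluster, num_out_vertices=3)
restored = cluster_unpool(pooled, vcluster)
\end{verbatim}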
We exploit the von Mises-Fisher (vMF) mixture~\cite{gopal2014mises} to cluster the normals, and associate each filter of our kernel with a vMF component. Let the total number of components in the vMF mixture be $T$, their mean directions be $\{{\boldsymbol \mu}_t\}$, and their concentration parameters be $\kappa=1/\tau$. Based on its normal ${\bf n}_i$, we compute the fuzzy coefficients $\{\pi_{it}\}$ of each facet ${\bf f}_i$ as \begin{equation}\label{eq:fuzzy_coeff} \pi_{it} = \frac{\exp(\kappa{\boldsymbol {\mu}}_t^\intercal{\bf n}_i)}{\sum_{s=1}^T\exp(\kappa{\boldsymbol {\mu}}_s^\intercal{\bf n}_i)}. \end{equation} We fix the parameter $\tau$ to 0.1, which results in $\kappa=10$. This setting is motivated by the definition of the contrastive loss in SimCLR~\cite{chen2020simple}. The softmax function is a straightforward choice to normalize these fuzzy coefficients. We keep the mean directions $\{{\boldsymbol \mu}_t\}$ as learnable parameters of the vMF mixture. They are initialized randomly and trained together with the filter weights of the network. Following previous works \cite{chollet2017xception,lei2020spherical}, we define the facet2vertex convolution in a depth-wise separable manner to save computations. Let the filter weights in the kernel be $\{w_t\}$, the adjacent facets of vertex ${\bf v}$ be $\mathcal{N}(\bf v)$, and the associated features of those facets be $\{h_{i}^f|{\bf f}_i\in\mathcal{N}({\bf v})\}$. The feature of vertex ${\bf v}$ is computed as \begin{align} &g^v =\frac{1}{|\mathcal{N}({\bf v})|}\sum_{{\bf f}_i\in\mathcal{N}({\bf v})}\Big(\sum_{t=1}^{T}\pi_{it} {w}_{t}\Big){h^f_{i}}, \\ \label{eq:F2V_Conv} & h^v = \text{activation}(g^v). \end{align} We use ReLU \cite{nair2010rectified} as the activation function. The bias term in the feature computation is omitted for simplicity. Since we model the fuzzy coefficients based on facet normals, the resulting facet2vertex convolution is scale- and translation-invariant but not rotation-invariant. \subsection{Vertex2facet Convolution}\label{subsec:vertex2facet_conv} We aggregate the features of each facet from its vertices. The vertex2facet convolution exploits the depth-wise separable strategy as well. Let $\{{\bf v}_1,{\bf v}_2,{\bf v}_3\}$ be the three vertices of triangular facet $\bf f$, and the features of those vertices be $\{h^v_1,h^v_2,h^v_3\}$. We define a kernel composed of three filters, which are associated with the three vertices and have filter weights $\{w_1,w_2,w_3\}$. We compute the feature of facet ${\bf f}$ as \begin{align} g^f=\sum_{k=1}^3 {w_kh^v_k}. \label{eq:V2F_Conv} \end{align} The Barycentric interpolation in \cite{lei2021picasso} is no longer retained in the vertex2facet convolution, as it makes only a minor contribution to feature extraction while requiring additional computations. Point cloud convolutions propagate local information from points to points \cite{qi2017pointnet,wu2019pointconv,lei2020spherical}. We induce a vertex2vertex convolution to achieve similar propagation by combining the vertex2facet and facet2vertex convolutions. Figure~\ref{fig:v2v_conv} illustrates the notion of facet2vertex, vertex2facet, vertex2vertex, and facet2facet convolutions. \begin{figure}[!t] \centering \includegraphics[width=0.49\textwidth]{Figs/MeshConvolutions.pdf} \vspace{-3mm} \caption{Mesh convolutions introduced in Picasso. (a) The facet2vertex convolution propagates features from the adjacent facets of a vertex to the vertex itself. 
(b) The vertex2facet convolution computes the features of a facet based on its three vertices. (c) The facet2facet convolution calculates features of a rendered facet based on the vertices and interpolated points in the facet. For simplicity, we show only three interpolated points on the rendered facet. It corresponds to a setting of $\gamma=1$ and $K=6$ following Eq.~(\ref{equ:interpolate_num}). (d) The vertex2vertex convolution is composed of a vertex2facet convolution followed by a facet2vertex convolution. We apply batch normalization to both vertex and facet features. } \label{fig:v2v_conv} \vspace{-2mm} \end{figure} \subsection{Facet2facet Convolution}\label{subsec:facet2facet_conv} The facet2facet convolution is applicable to textured meshes. We calculate the features of each facet based on the textures of all points on the facet. Let $\{{\bf h}^f_k\in\mathbb{R}^{C_{i}}\}$ be the input features of all points on a facet, and the associated Barycentric coordinates of each point be $\{{\boldsymbol\xi}_k=[\xi_{k1},\xi_{k2},\xi_{k3}]^\intercal|\xi_{k1}+\xi_{k2}+\xi_{k3}=1,~\xi_{k1},\xi_{k2},\xi_{k3}\geqslant0\}$. A facet of texture resolution $K$ leads to $|\{{\bf h}^f_k\}|=|\{{\boldsymbol\xi}_k\}|=K$. We define a convolutional kernel as $\{{\bf w}_1,{\bf w}_2,{\bf w}_3\in\mathbb{R}^{C_{i}}\}$. The fuzzy scheme \cite{lei2020seggcn} is incorporated into the facet2facet convolution using the Barycentric coordinates. Finally, we compute the feature of facet ${\bf f}$ as \begin{align} g^f&=\frac{1}{K}\sum_k {(\xi_{k1}{\bf w}_1+\xi_{k2}{\bf w}_2+\xi_{k3}{\bf w}_3)^\intercal{\bf h}^f_k}. \label{eq:F2F_Conv} \end{align} We note that the facet2facet convolution is only required at the first convolution layer for extracting texture features from the raw mesh input. We prepare fine textures for the mesh data using Barycentric interpolation~\cite{coxeter1961introduction} in the experiments. The texture resolution $K$ of a facet ${\bf f}$ is determined by its area $A$, and it is computed as \begin{equation}\label{equ:interpolate_num} K=\frac{(\gamma+1)(\gamma+2)}{2},~\text{where}~ \gamma=\left\lfloor \frac{\alpha(A-A_{\min})}{A_{\max}-A_{\min}} \right\rfloor + \beta. \end{equation} We use $\gamma$ to represent the number of interpolated points on each edge. In Eq.~\eqref{equ:interpolate_num}, $A_{\min}, A_{\max}$ are the minimum and maximum facet areas of the mesh, while $\alpha,\beta\in\mathbb{Z}_{\scaleto{\geqslant0\mathstrut}{5pt}}$ are hyper-parameters. For all settings in the group of $\alpha{=}0$, we let the texture resolution $K$ be constant across all mesh facets. One special case in this group is when the mesh is provided with vertex colors but its facets are not rendered. In that case, $K$ corresponds to $(\alpha,\beta){=}(0,1)$ and takes the value $K=3$. In this situation, we can alternatively implement the facet2facet convolution with a simple $1\times 1$ convolution, by formulating the texture features of each facet as a concatenation of its vertex colors, \textit{i.e.~} $[{\bf c}_1,{\bf c}_2,{\bf c}_3]$. \subsection{Geometric Facet Features}\label{subsec:mesh_facet_geometry} Let us denote the coordinates of the facet vertices by ${\bf x}_1, {\bf x}_2, {\bf x}_3$, the edge lengths of the facet as ${\boldsymbol \ell}=(\ell_1,\ell_2,\ell_3)$ and the facet normal as ${\bf n}$. 
We compute the inner angles of a facet ${\boldsymbol \theta}=(\theta_1,\theta_2,\theta_3)$ as \begin{equation}\label{equ:inner_angles} \begin{aligned} \theta_1 &= \frac{\langle{\bf x}_2-{\bf x}_1, {\bf x}_3-{\bf x}_1\rangle}{\ell_1\ell_3}, \\ \theta_2 &= \frac{\langle{\bf x}_1-{\bf x}_2, {\bf x}_3-{\bf x}_2\rangle}{\ell_1\ell_2}, \\ \theta_3 &= \frac{\langle{\bf x}_1-{\bf x}_3, {\bf x}_2-{\bf x}_3\rangle}{\ell_2\ell_3}. \end{aligned} \end{equation} We form the input feature representation composed of mesh facet geometry as $[{\boldsymbol \ell}, {\boldsymbol \theta}, {\bf n}]$ for shapes. For real-world surface data with an aligned gravitational axis (\textit{e.g.~} the $z$-axis), we form their facet geometrics as $[{\boldsymbol \ell}, {\boldsymbol \theta}, {\bf n}, {\bf h}]$, where ${\bf h}$ concatenates the heights of the three vertices, \textit{e.g.~} ${\bf h}=[z_1,z_2,z_3]$. The use of our facet geometric representation is empirically supported in \S~\ref{subsec:input_feats_ablation}. For network training, we allow the direction of facet normals to be arbitrary (\textit{i.e.~} $\pm{\bf n}$). This randomness serves as an effective form of data augmentation. We note that the standard input features to our network in \S~\ref{sec:network_Pi2} include both facet geometrics and facet textures. However, when the mesh is not textured, we employ only facet geometrics as the input features. \section{Picasso overview}\label{sec:overview} We combine the operations proposed in this work for mesh processing with our previously proposed operations for 3D point cloud processing in \cite{lei2020spherical,lei2020seggcn} into \textit{Picasso}. The previous point cloud operations are improved to handle point clouds of heterogeneous sizes in addition to homogeneous arrays. This improvement has led to seamless integration of our mesh and point cloud operations. Through Picasso, we make geometric deep learning over 3D data accessible to the broader research community. We allow easy integration of the contributed modular operations in the 3D domain with modern deep learning blocks/layers such as ResNet \cite{he2016deep}, DenseNet \cite{huang2017densely} and Inception \cite{Szegedy2015googLeNet}. Figure~\ref{fig:picasso_overview} provides an overview of the major modules in Picasso. To differentiate this article's contribution from \cite{lei2020spherical,lei2020seggcn}, the figure colorizes only the novel operations introduced in this work. These include CUDA-accelerated mesh decimation, pooling, unpooling, and different mesh convolutions. We additionally incorporate a module for GPU-based voxelization of point clouds and meshes in Picasso. The module allows mesh decimation with voxelized vertex clustering to be performed on-the-fly. Picasso is supported by both Tensorflow \cite{abadi2016tensorflow} and Pytorch \cite{paszke2019pytorch} for different user preferences. We release the code at \href{https://github.com/EnyaHermite/Picasso}{https://github.com/EnyaHermite/Picasso}. To build a deep convolutional block for feature learning in Picasso, multiple vertex2vertex convolutions can be cascaded within a network layer of the same mesh resolution, similar to the usage of CNN kernels. In Fig.~\ref{fig:picasso_demo_uage}(left), an example is shown of constructing a simple hierarchical mesh network using mesh convolutions and poolings. The example network is sequentially composed of an initial convolutional layer, a max pooling layer, one convolutional block, a global pooling and an arbitrary classifier. 
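For concreteness, this composition can be sketched as below. The sketch makes several assumptions: the class names and signatures are ours rather than the released Picasso API, and the mesh convolution, decimation and pooling operations are stubbed with simple per-element layers so that the code runs end-to-end. Batch handling via concatenation is described next.
\begin{verbatim}
import torch
import torch.nn as nn

class MeshOpStub(nn.Module):
    # Stand-in for a Picasso mesh convolution (e.g., facet2vertex or
    # vertex2vertex); here simply a per-element linear layer + ReLU.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.lin = nn.Linear(c_in, c_out)
    def forward(self, feats):
        return torch.relu(self.lin(feats))

class SimpleMeshClassifier(nn.Module):
    # initial conv -> (decimation + max pooling) -> conv block
    #              -> global pooling -> classifier
    def __init__(self, c_in=9, num_classes=30):
        super().__init__()
        self.init_conv = MeshOpStub(c_in, 32)
        self.block = nn.Sequential(MeshOpStub(32, 64), MeshOpStub(64, 64))
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, feats, sample_ids):
        # feats:      [N, c_in] concatenated input features of the batch
        # sample_ids: [N]       which shape each element belongs to
        h = self.init_conv(feats)
        # in the real network, mesh decimation + max mesh pooling occur here
        h = self.block(h)
        # global average pooling per shape, then classification
        num_samples = int(sample_ids.max()) + 1
        pooled = torch.zeros(num_samples, h.size(1),
                             dtype=h.dtype).index_add(0, sample_ids, h)
        counts = torch.bincount(sample_ids, minlength=num_samples).clamp(min=1)
        return self.classifier(pooled / counts.unsqueeze(1).to(h.dtype))
\end{verbatim}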
Assume the network uses batch size 3 for training. Let $\mathcal{T}_1{=}(\mathcal{V}_1,\mathcal{F}_1)$, $\mathcal{T}_2{=}(\mathcal{V}_2,\mathcal{F}_2)$, $\mathcal{T}_3{=}(\mathcal{V}_3,\mathcal{F}_3)$ be different shapes in a batch, and $\mathcal{H}^0_1,\mathcal{H}^0_2,\mathcal{H}^0_3$ be the input features of $\mathcal{T}_1, \mathcal{T}_2, \mathcal{T}_3$, respectively. The standard input features $\mathcal{H}^0$ of a mesh comprise both facet geometrics $\mathcal{H}_G^0$ and facet textures $\mathcal{H}_C^0$. As the shape meshes provided in the example do not contain textures, the input features are simplified to $\mathcal{H}^0{=}\mathcal{H}_G^0$. In Picasso, we customize the network to accept multiple meshes via concatenation. Therefore, the shapes in the batch input are represented as a tuple of $({\bf V}, {\bf F}, {\bf H}^0)$, where \begin{equation} {\bf V}{=\hspace{-1mm}} \begin{bmatrix} \mathcal{V}_1\vspace{.5mm}\\ \mathcal{V}_2\vspace{.5mm}\\ \mathcal{V}_3\vspace{.5mm}\\ \end{bmatrix}{\hspace{-.8mm},\hspace{1.5mm}} {\bf F}{=\hspace{-1mm}} \begin{bmatrix} \mathcal{F}_1{+}0\phantom{+|\mathcal{V}_1|+}\vspace{.5mm}\\ \mathcal{F}_2{+}|\mathcal{V}_1|\phantom{+|\mathcal{V}_2|}\vspace{.5mm}\\ \mathcal{F}_3{+}|\mathcal{V}_1|{+}|\mathcal{V}_2|\\ \end{bmatrix}{\hspace{-.8mm},\hspace{1.5mm}} {\bf H}^0{=}{\bf H}_G^0{=\hspace{-1mm}} \begin{bmatrix} \mathcal{H}_{G,1}^0\vspace{.5mm}\\ \mathcal{H}_{G,2}^0\vspace{.5mm}\\ \mathcal{H}_{G,3}^0\\ \end{bmatrix}{\hspace{-.8mm}.} \end{equation} For the facet concatenations in ${\bf F}$, we follow the 0-indexing convention. Without textures, the initial layer of the shape example consists of a $1{\times}1$ convolution followed by a facet2vertex convolution. However, the standard initial layer covers both geometrics ${\bf H}_G^0$ and textures ${\bf H}_C^0$ as input features. We show its configurations in Fig.~\ref{fig:picasso_demo_uage}(right). To pool the features, the mesh has to be decimated first such that the pooling operation can proceed. We exploit max pooling in the example, while the convolutional block comprises two vertex2vertex convolutions. Global pooling induces a single representation for each sample such that the final classification can be applied. We give a basic example network in Fig.~\ref{fig:picasso_demo_uage} to provide a clear overview of Picasso. Next, we build our proposed network, which also offers a more advanced example of using the Picasso modules. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{Figs/Lib_overview.pdf} \vspace{-3mm} \caption{Overview of the major deep learning modules included in Picasso. We only colorize the novel modules proposed in this work. Picasso allows feature learning for both heterogeneous 3D meshes and heterogeneous 3D point clouds.} \label{fig:picasso_overview} \vspace{-2mm} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=0.96\textwidth]{Figs/Pi2.pdf} \vspace{-3mm} \caption{PicassoNet-\Romannum{2} for large-scale semantic parsing of complete scenes (\textbf{top}), and its convolution blocks (\textbf{bottom}). The network consists of six mesh resolutions including the input $\mathcal{T}^{0\sim5}$. The output channels are respectively $32,64,96,128,192,256$ in the encoder and $128,128,96,96,96$ in the decoder. The pooling strides are $4,3,3,2,2$, which can be different for shape analysis. The figure depicts the prediction of semantics for mesh vertices; a vertex2facet convolution can be inserted before the final classifier for facet-based predictions. 
For classification, the decoder is replaced by a global mesh pooling. The bottom row shows (i) the `Initial Convolution' which propagates input features from facets to vertices; (ii) the `MeshEncoderBlock' that is exploited in high-resolution layers for feature extraction, along with the `DualEncoderBlock' that applies to low-resolution layers such that feature extraction can go beyond disconnected components of the mesh, and (iii) the `DecoderBlock' for feature upsampling from low-resolution meshes to high-resolution meshes. In the `Initial Convolution', the facet2facet convolution is not applicable if textures are not provided. We apply `MeshEncoderBlock' to $\mathcal{T}^1,\mathcal{T}^2$ and `DualEncoderBlock' to $\mathcal{T}^3,\mathcal{T}^4,\mathcal{T}^5$. They repeat $m{=}2$ and $m{=}4$ times, respectively. } \label{fig:Pi2} \vspace{-2mm} \end{figure*} \vspace{-2mm} \section{PicassoNet-II}\label{sec:network_Pi2} Besides extending Picasso beyond the preliminary work in \cite{lei2021picasso}, this article also considerably enhances PicassoNet~\cite{lei2021picasso} to introduce a more effective network, PicassoNet-\Romannum{2}. Compared to \cite{lei2021picasso}, PicassoNet-\Romannum{2} is deeper yet faster for geometric feature learning over 3D meshes. We show its configuration in Fig.~\ref{fig:Pi2}(top), which also includes a decoder part (boxed) for dense parsing. For classification, the decoder is replaced with an average global pooling layer. PicassoNet-\Romannum{2} takes \textit{intact} meshes rather than mesh crops as input samples. We apply strided mesh decimation by specifying the expected vertex size using a stride parameter. This removes the constraint of fixing vertex sizes across different samples. As meshes are not always guaranteed to form a connected graph (\textit{e.g.~} after decimation), PicassoNet-\Romannum{2} exploits point cloud convolution to extract features across disconnected components of the mesh, similar to \cite{lei2021picasso}. The Euclidean neighborhood in point cloud convolution allows a larger context to be established such that feature learning can go beyond geodesic connections. Whereas the previous methods \cite{lei2021picasso,schult2020dualconvmesh} explored the Euclidean neighborhood extensively in every network layer regardless of the mesh resolution, PicassoNet-\Romannum{2} employs it only at the coarse layers of low mesh resolutions. We validate in \S~\ref{subsec:dual_layers_ablation} that point cloud convolution in the Euclidean domain is unnecessary for high-resolution layers since the meshes are already well-connected. Selective application of convolutions in PicassoNet-\Romannum{2} results in a considerable computational gain without compromising performance. Our network uses range search \cite{preparata2012computational} to construct the neighborhood, and fuzzy spherical convolution \cite{lei2020seggcn} to learn features in the Euclidean domain. It exploits two types of encoder blocks to extract features from the meshes of different resolutions. One is the \textit{mesh} encoder block, which comprises a repetitive building unit that uses only two vertex2vertex convolutions. The other is the \textit{dual} encoder block, whose repetitive building unit is two vertex2vertex convolutions accompanied by one spherical convolution. We use identical feature channels for the mesh and point cloud convolutions in the dual encoder blocks. PicassoNet-\Romannum{2} employs mesh encoder blocks in high-resolution layers and dual encoder blocks in the low-resolution layers. 
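The range search that builds these Euclidean neighborhoods can be pictured with the naive reference sketch below (brute-force pairwise distances; the function name and layout are ours, and the released implementation relies on GPU-accelerated neighborhood queries instead).
\begin{verbatim}
import torch

def radius_neighbors(vertices, radius):
    # vertices: [N, 3] positions of the decimated-mesh vertices
    # returns, for every vertex, the indices of all vertices within `radius`
    dists = torch.cdist(vertices, vertices)      # [N, N] pairwise distances
    mask = dists <= radius
    return [torch.nonzero(row, as_tuple=False).flatten() for row in mask]

# e.g., Euclidean neighborhoods at a coarse resolution with radius 0.4
pts = torch.rand(128, 3)
neigh = radius_neighbors(pts, radius=0.4)
\end{verbatim}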
Our network inherits the primary building units of PicassoNet. However, it applies skip connections to every building unit of the encoder block. Besides, it leverages the concatenation-style skip connection of DenseNet~\cite{huang2017densely}, instead of the addition-style skip connection of ResNet~\cite{he2016deep}\footnote{The DenseNet-style connection is slightly less efficient but usually produces 0.5\% higher results.}. Figure~\ref{fig:Pi2}(bottom) depicts the key blocks of PicassoNet-\Romannum{2}, including its initial convolution, mesh and dual encoder blocks, as well as the decoder blocks. We employ max mesh pooling to down-sample the network features. PicassoNet-\Romannum{2} takes facet textures and geometrics as input features. This differs from PicassoNet~\cite{lei2021picasso}, which follows point cloud networks in expecting vertex coordinates and colors as input features. To propagate the input features from facets to vertices, we build the initial layer of PicassoNet-\Romannum{2} using a $1{\times}1$ convolution with a parallel facet2facet convolution, followed by a feature fusion of addition `$\textcircled{+}$' and a facet2vertex convolution. The $1{\times}1$ convolution is used for extracting geometric features while the facet2facet convolution is for learning texture representations. We adopt the `$\textcircled{+}$' feature fusion for its simplicity. It can be replaced by \textit{e.g.~} attentional fusion \cite{vaswani2017attention} for better performance at the expense of a higher computational cost. For the decoder, we use the $1{\times}1$ convolutions and mesh unpooling of PicassoNet~\cite{lei2021picasso} to upsample features. PicassoNet-\Romannum{2} applies batch normalization to all of its modular convolutions. The proposed network repeats the building unit of its dual encoder blocks $m{=}4$ times, resulting in a deeper network than PicassoNet. For classification, we replace the single $1{\times}1$ convolution for final predictions in dense parsing with two fully connected (FC) layers. Also, a dropout \cite{srivastava2014dropout} of rate 0.2 is applied after the first FC layer. \section{Experiments}\label{sec:experiment} We establish the effectiveness of PicassoNet-\Romannum{2} by evaluating it for (i)~shape analysis with synthetic meshes and (ii)~semantic scene parsing using real-world meshes. We use the ShapeNetCore \cite{chang2015shapenet} dataset, along with SHREC \cite{lian2011shape}, CUBE \cite{hanocka2019meshcnn}, COSEG \cite{wang2012active}, HUMAN \cite{maron2017convolutional} and FAUST \cite{bogo2014faust} for shape analysis. For real-world scene parsing, we employ the large-scale datasets S3DIS \cite{armeni20163d} and ScanNet \cite{dai2017scannet}. Each dataset is discussed with its related experiments. For all shape analysis experiments in \S~\ref{sec:synthetic_data}, we normalize the scales of all shape meshes, but keep the original scales of real-world scene surfaces in \S~\ref{sec:realistic_data}. This is because scale information is more important for real-world scenes. We employ the proposed facet geometrics $[{\boldsymbol \ell}, {\boldsymbol \theta}, {\bf n}]$ as default input features for synthetic shapes, and $[{\boldsymbol \ell}, {\boldsymbol \theta}, {\bf n}, {\bf h}]$ as default geometric features for the real-world data. Additionally, the real-world data also provides facet textures for the experiments. 
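For reference, the facet geometrics of a single facet can be computed with the following minimal NumPy sketch. It follows the edge-length, inner-angle (Eq.~(\ref{equ:inner_angles})), normal and height definitions of \S~\ref{subsec:mesh_facet_geometry}; the function and variable names are ours, and the batched, GPU-side computation in the released code differs.
\begin{verbatim}
import numpy as np

def facet_geometrics(x1, x2, x3, with_height=True):
    # edge lengths l = (|x2-x1|, |x3-x2|, |x3-x1|)
    ell = np.array([np.linalg.norm(x2 - x1),
                    np.linalg.norm(x3 - x2),
                    np.linalg.norm(x3 - x1)])
    # inner-angle terms, written exactly as in the text
    theta = np.array([
        np.dot(x2 - x1, x3 - x1) / (ell[0] * ell[2]),
        np.dot(x1 - x2, x3 - x2) / (ell[0] * ell[1]),
        np.dot(x1 - x3, x2 - x3) / (ell[1] * ell[2])])
    # unit facet normal (its sign may be flipped at random during training)
    n = np.cross(x2 - x1, x3 - x1)
    n = n / np.linalg.norm(n)
    feats = [ell, theta, n]
    if with_height:          # [l, theta, n, h] for gravity-aligned scene data
        feats.append(np.array([x1[2], x2[2], x3[2]]))
    return np.concatenate(feats)

geom = facet_geometrics(np.array([0., 0., 0.]),
                        np.array([1., 0., 0.]),
                        np.array([0., 1., 1.]))
\end{verbatim}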
\vspace{1mm} \noindent{\textbf{Data Augmentation:}} We apply standard geometric transformations, \textit{e.g.~} random flipping, scaling and shifting, to the mesh vertices. We perform random rotations about the gravitational axis for aligned data in ShapeNetCore, ScanNet and S3DIS, and free rotations about all axes for the other datasets. We also randomly drop the vertices and facets of meshes to obtain more training data. When textures are available, we apply color shifting, jittering, and contrast changes to augment the data further, following \cite{choy20194d}. \vspace{1mm} \noindent{\textbf{Network Configuration:}} PicassoNet-\Romannum{2} contains 6 hierarchical layers of mesh resolutions from $\mathcal{T}^0$ to $\mathcal{T}^5$, constructed using mesh decimation. We set different decimation strides for different datasets, discussed in the respective experiments. Our network uses dual encoder blocks only at coarse resolutions $\mathcal{T}^3,\mathcal{T}^4,\mathcal{T}^5$. The range search radii for point-based convolutions of $\mathcal{T}^3$ to $\mathcal{T}^5$ are $0.2$, $0.4$, $0.8$, respectively. We train the network with the Adam optimizer \cite{kingma2015adam} and exponential learning-rate decay. The initial learning rate is set to 0.001, with a decay rate of 0.5. \begin{table*}[!t] \centering \caption{Shape analysis performance of our network on the synthetic datasets. } \label{tab:shape_analysis} \begin{adjustbox}{width=0.9\textwidth} { \begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline \multirow{3}{*}{Method}& \multicolumn{4}{c|}{Classification} & \multicolumn{4}{c|}{Semantic Labelling} & Correspondence \\ \cline{2-4}\cline{5-10} & \multirow{2}{*}{ShapeNetCore}& \multicolumn{2}{c|}{SHREC} & \multirow{2}{*}{CUBE} & \multicolumn{3}{c|}{COSEG} & \multirow{2}{*}{HUMAN} & \multirow{2}{*}{FAUST} \\ \cline{3-4}\cline{6-8} & & Split 16 & Split 10 && aliens & chairs & vases & & \\ \hline GI \cite{sinha2016deep} & --& 96.6& 88.6& -- & --& --& --& --&--\\ GWCNN \cite{ezuz2017gwcnn} & --& 96.6& 90.3& -- & --& --& --& --&--\\ PointNet++ \cite{qi2017pointnetplusplus} & --& --& --& 64.3 & --& --& --& --&--\\ MeshCNN \cite{hanocka2019meshcnn} & --& 98.6& 91.0& 92.2 & 96.3& 93.0& 92.4& 85.4&--\\ PD-MeshNet \cite{milano2020primal} & --& 99.7& 99.1& 94.4& 98.2& 97.2& 95.4 & 85.6&--\\ GCNN \cite{masci2015geodesic}& --& --& -- & --& --& --& --& --& 65.4\\ ACNN \cite{boscaini2016learning}& --& --&-- & --& --& --& --& --& 63.0 \\ MoNet \cite{monti2017geometric} & --& --& -- & --& --& --& --& --& 90.0\\ PointContrast \cite{xie2020pointcontrast} & 85.1&--& --& -- & --& --& --& --& --\\ \hline PicassoNet-\Romannum{2} (Prop.) &\textbf{87.6}&\textbf{100.0} & \textbf{100.0}& \textbf{100.0} & \textbf{98.8} & \textbf{99.6} & \textbf{96.0} & \textbf{91.4} & \textbf{100.0}\\ \hline \end{tabular} } \end{adjustbox} \end{table*} \vspace{-2mm} \subsection{Shape Analysis} \label{sec:synthetic_data} We evaluate our network performance on shape classification and facet labelling tasks using synthetic data. The input meshes are decimated with strides 1, 3, 2, 2, 2 on ShapeNetCore and FAUST, and with strides 1, 1.5, 1.5, 1.5, 1.5 on the other datasets due to their limited numbers of input vertices. Here, the first stride `1' indicates that $\mathcal{T}^0$ is not decimated, i.e.~$\mathcal{T}^1$ is identical to $\mathcal{T}^0$. Therefore, the first pooling operation in PicassoNet-\Romannum{2} is not applied. We train the network using batch size 6 for FAUST, 32 for ShapeNetCore, and 64 for others. 
A weight decay of $10^{-5}$ is applied to all datasets other than ShapeNetCore, owing to their limited training samples. \vspace{-1mm} \subsubsection{Classification} \noindent\textbf{ShapeNetCore:} The ShapeNetCore dataset \cite{chang2015shapenet} is a large-scale and information-rich repository of 3D models collected from online resources. It contains around 51,000 shapes of 55 common objects. We follow the original standard split to evaluate the performance of PicassoNet-\Romannum{2} for shape classification. Specifically, the split specifies $80\%$ of the samples for training and $20\%$ for testing. Table \ref{tab:shape_analysis} shows that our network outperforms the sparse residual network of PointContrast \cite{xie2020pointcontrast} by 2.5\%. This indicates the desirability of processing mesh data with PicassoNet-\Romannum{2} for shape analysis. We prepare the input meshes to our network by uniformly sampling 3,000 points on the raw mesh, and triangulating them using the algorithm provided by \cite{pointcloud2mesh}. We note that these meshes are not ideal and actual watertight meshes should result in even better performance of our network. \vspace{1mm} \noindent\textbf{SHREC:} The SHREC dataset \cite{lian2011shape, hanocka2019meshcnn} contains 600 watertight meshes from 30 classes, with 20 samples in each class. Shape classification is defined on splits 16 and 10 of the dataset. The split number here indicates the number of training samples per class. Following the setup in \cite{hanocka2019meshcnn}, we report the average results over three randomly generated sets. Table \ref{tab:shape_analysis} shows the excellent performance of PicassoNet-\Romannum{2}. \vspace{1mm} \noindent\textbf{CUBE:} The CUBE Engraving dataset \cite{hanocka2019meshcnn} includes 22 object categories with 200 mesh samples per class. Those samples are created by insetting the MPEG-7 binary shapes \cite{latecki2000shape} at random locations on a cube. Each cube consists of about 250 vertices and 500 facets. Table~\ref{tab:shape_analysis} shows that our network achieves 100\% accuracy on this dataset. \vspace{-1mm} \subsubsection{Semantic Labelling} \noindent\textbf{COSEG:} The COSEG dataset~\cite{wang2012active} defines semantic labelling tasks over three independent categories, \textit{i.e.~} \textit{aliens}, \textit{chairs} and \textit{vases}. The alien category contains 169 training samples, 29 test samples and 4 part labels. The chair category contains 337 training samples, 60 test samples and 3 part labels. The vase category contains 252 training samples, 45 test samples and 4 part labels. We follow \cite{milano2020primal} and evaluate our network under semantic facet labelling. Table \ref{tab:shape_analysis} reports the consistently superior performance of PicassoNet-\Romannum{2}. \vspace{1mm} \noindent\textbf{HUMAN:} The HUMAN dataset~\cite{maron2017convolutional} defines semantic facet labelling as segmenting the human body into 8 parts, which include \textit{head}, \textit{hand}, \textit{forearm}, \textit{upperarm}, \textit{body}, \textit{thigh}, \textit{leg} and \textit{foot}. It contains 381 training samples and 18 test samples. Each mesh sample is composed of 750 vertices and 1,500 facets. Table \ref{tab:shape_analysis} suggests that the segmentation result of PicassoNet-\Romannum{2} outperforms the previous methods by a large margin. 
\vspace{-1mm} \subsubsection{3D Manifold Correspondence}\label{subsec:correspondence} \noindent\textbf{FAUST:} The FAUST dataset~\cite{bogo2014faust} is widely used for correspondence matching of 3D manifold meshes \cite{masci2015geodesic,boscaini2016learning,monti2017geometric}. It consists of 10 different subjects with 10 different poses each, resulting in 100 watertight meshes with exact ground-truth correspondence. Each shape is represented as a mesh with 6,890 vertices and 13,776 facets. The convention is to utilize the first pose of the first subject (i.e.~the zeroth scan `000') as the reference, the first 80 shapes for training and the remaining 20 shapes for testing. We follow MoNet~\cite{monti2017geometric} and formulate the correspondence task as a multi-class labelling problem. Similar to its configurations in semantic labelling, the proposed network accomplishes this correspondence labelling with the softmax function. Specifically, the number of classes is defined as 6,890, \textit{i.e.~} the number of vertices in the reference mesh. We report the matching accuracy of different methods for correspondences without geodesic error in Table \ref{tab:shape_analysis}. The feature representation of our network achieves 100\% accuracy, which is considerably better than the other methods. \vspace{-2mm} \subsection{Real-world Datasets}\label{sec:realistic_data} Real-world scene surfaces have heterogeneous vertex and facet sizes, and varying scales. We decimate the input meshes using strides $4$, $3$, $3$, $2$, $2$, respectively, to construct network layers of mesh resolutions from $\mathcal{T}^1$ to $\mathcal{T}^5$. The network is trained with batch size 16. \begin{table*}[!t] \caption{Performance of PicassoNet-\Romannum{2} on the fifth fold (Area 5) of the S3DIS dataset. The results are obtained by taking each \textit{complete} scene as input. Our network outperforms the previous best approaches, MinkowskiNet and KPConv, by a significant margin. 
}\label{tab:s3dis_seg_review} \vspace{-1.5mm} \begin{adjustbox}{width=1\textwidth} {\Huge\begin{tabular}{c|l|ccc|ccccccccccccc} \hline &Method& OA& mAcc & mIoU & ceiling & floor & wall & beam & column & window & door & table & chair & sofa & bookcase & board & clutter \\ \hline \multirow{11}{*}{\rotatebox[origin=c]{90}{Area 5}} & PointNet \cite{qi2017pointnet}& - &49.0 &41.1 &88.8 &97.3 &69.8 &0.1 &3.9 &46.3 &10.8 &58.9 &52.6 &5.9 &40.3 &26.4 &33.2\\ &SEGCloud \cite{tchapmi2017segcloud} & - &57.4 &48.9 &90.1 &96.1 &69.9 &0.0 &18.4 &38.4 &23.1 &70.4 &75.9 &40.9 &58.4 &13.0 &41.6\\ &Tangent-Conv \cite{tatarchenko2018tangent}& 82.5 &62.2 &52.8 &- &- &- &- &- &- &- &- &- &- &- &- &-\\ &SPG \cite{landrieu2017large} & 86.4 &66.5 &58.0 &89.4 &96.9 &78.1 &0.0 &\textbf{42.8} &48.9 &61.6&75.4 &84.7 &52.6 &69.8 &2.1 &52.2\\ &PointCNN \cite{li2018pointcnn}& 85.9& 63.9& 57.3& 92.3& 98.2 &79.4& 0.0 &17.6& 22.8& 62.1& 74.4& 80.6& 31.7& 66.7& 62.1& 56.7\\ &SSP+SPG \cite{landrieu2019point}& 87.9 & 68.2 &61.7&- &- &- &- &- &- &- &- &- &- &- &- &-\\ &GACNet \cite{wang2019attention}& 87.8 & - &62.9&92.3 &98.3 &81.9 &0.0 &20.4 &59.1 &40.9 &78.5 &85.8 &61.7 &70.8 &74.7 &52.8\\ &SPH3D-GCN \cite{lei2020spherical}& 87.7 &65.9 &59.5 &93.3 &97.1 &81.1 &0.0 &33.2 &45.8 &43.8 &79.7 &86.9 &33.2 &71.5 &54.1 &53.7\\ &SegGCN \cite{lei2020seggcn} & 88.2 &70.4 &63.6 &93.7 &\textbf{98.6} &80.6 &0.0 &28.5 &42.6 &\textbf{74.5} &80.9 &88.7 &69.0 &71.3 &44.4 &54.3\\ & MinkowskiNet \cite{choy20194d} & -& 71.7&65.3 &- &- &- &- &- &- &- &- &- &- &- &- &-\\ &KPConv \cite{thomas2019kpconv}& - & 72.8 &67.1 &92.8& 97.3& 82.4 &0.0& 23.9& 58.0& 69.0& 81.5& \textbf{91.0}& \textbf{75.4}& \textbf{75.3}& 66.7& 58.9\\ &DCM-Net \cite{schult2020dualconvmesh} & - & 71.2 & 64.0 &92.1 &96.8 &78.6 &0.0 &21.6 &\textbf{61.7} &54.6 &78.9 &88.7 &68.1 &72.3 &66.5 &52.4\\ \hline &PicassoNet-\Romannum{2} (Prop.) 
& \textbf{90.4} &\textbf{75.7} &\textbf{69.8} &\textbf{94.4}&98.1 &\textbf{85.1} &0.0 &33.8 &59.5 &80.9 &\textbf{82.8} &90.0 &79.3 &74.9 &\textbf{70.3}&\textbf{58.1}\\ \hline \end{tabular}} \end{adjustbox} \end{table*} \begin{table*}[!t] \centering \caption{Semantic vertex labelling results on the test set of ScanNet.} \label{tab:scannet_test} \vspace{-1.5mm} \begin{adjustbox}{width=1\textwidth} {\Huge\begin{tabular}{l|c|cccccccccccccccccccc} \hline Method & mIoU & floor &wall &chair &sofa &table& door& cab& bed &desk &toil &sink &wind& pic &bkshf &curt &show &cntr &fridg& bath &other\\ \hline SPLATNET$_{\text{3D}}$ \cite{su2018splatnet}& 39.3&92.7&69.9&65.6&51.0&38.3&19.7&31.1&51.1&32.8&59.3&27.1&26.7&0.0&60.6&40.5&24.9&24.5&0.1&47.2&22.7\\ Tangent-Conv~\cite{tatarchenko2018tangent}& 43.8&91.8&63.3&64.5&56.2&42.7&27.9&36.9&64.6&28.2&61.9&48.7&35.2&14.7&47.4&25.8&29.4&35.3&28.3&43.7&29.8\\ PointCNN~\cite{li2018pointcnn} &45.8&94.4&70.9&71.5&54.5&45.6&31.9&32.1&61.1&32.8&75.5&48.4&47.5&16.4&35.6&37.6&22.9&29.9&21.6&57.7&28.5\\ PointConv~\cite{wu2019pointconv}& 55.6&94.4&76.2&73.9&63.9&50.5&44.5&47.2&64.0&41.8&82.7&54.0&51.5&18.5&57.4&43.3&57.5&43.0&46.4&63.6&37.2\\ SPH3D-GCN~\cite{lei2020spherical}& 61.0&93.5&77.3&79.2&70.5&54.9&50.7&53.2&77.2&57.0&85.9&60.2&53.4&4.6&48.9&64.3&70.2&40.4&51.0&85.8&41.4\\ KPConv~\cite{thomas2019kpconv}& 68.4&93.5&81.9&81.4&78.5&61.4&59.4&64.7&75.8&60.5&88.2&69.0&63.2&18.1&78.4&77.2&80.5&47.3&58.7&84.7&45.0 \\ SegGCN~\cite{lei2020seggcn}& 58.9&93.6&77.1&78.9&70.0&56.3&48.4&51.4&73.1&57.3&87.4&59.4&49.3&6.1&53.9&46.7&50.7&44.8&50.1&83.3&39.6\\ MinkowskiNet \cite{choy20194d} & \textbf{73.6}&95.1&\textbf{85.2}&\textbf{84.0}&77.2&\textbf{68.3}&\textbf{64.3}&70.9&\textbf{81.8}&\textbf{66.0}&87.4&67.5&\textbf{72.7}&28.6&\textbf{83.2}&\textbf{85.3}&\textbf{89.3}&\textbf{52.1}&\textbf{73.1}&\textbf{85.9}&\textbf{54.4} \\ DCM-Net~\cite{schult2020dualconvmesh} & 65.8&94.1&80.3&81.3&72.7&56.8&52.4&61.9&70.2&49.4&82.6&67.5&63.7&\textbf{29.8}&80.6&69.3&82.1&46.8&51.0&77.8&44.9\\ \hline PicassoNet-\Romannum{2} (Prop.) & 69.6&\textbf{95.6}&84.8&83.7&\textbf{79.9}&61.9&61.5&\textbf{70.9}&79.0&54.3&\textbf{90.8}&\textbf{70.3}&70.0&25.0&78.7&81.5&79.0&45.9&55.1&70.4&52.9 \\ \hline \end{tabular}} \end{adjustbox} \end{table*} \vspace{1mm} \noindent\textbf{S3DIS.} The Stanford 3D Indoor Spaces (S3DIS) dataset~\cite{armeni20163d} is a large-scale real-world dataset. It has sparse 3D meshes and dense 3D point clouds of 6 large-scale indoor areas. The data was collected, using the Matterport scanner, from three different buildings in Stanford University campus. The semantic labelling task on this dataset is defined to classify 13 classes, namely \emph{ceiling, floor, wall, beam, column, window, door, table, chair, sofa, bookcase, board}, and \emph{clutter}. We follow the standard training/testing protocol where Area 5 is used as the test set and the remaining 5 Areas as the training set \cite{landrieu2017large,li2018pointcnn,qi2017pointnet,tchapmi2017segcloud,wang2019graph}. Performance of each method is evaluated for Overall Accuracy (OA), mean Accuracy of all classes (mAcc), Intersection Over Union of each class (IoU) and their average over all classes (i.e.~mIoU). mIoU is normally considered the most reliable among these metrics. DCM-Net \cite{schult2020dualconvmesh} prepared its training meshes and labels based on the original meshes with over-tessellation and interpolation. 
In contrast, we generate the scene meshes by triangulating the labelled point cloud provided in the dataset. Specifically, we voxelize the raw point cloud using a voxel size of 0.03 (3$cm$), and triangulate the voxelized points into meshes using the algorithm from \cite{pointcloud2mesh}. We guarantee all of the created meshes to be \textit{edge-manifold}. In this experiment, we utilize the default facet geometrics together with rendered facets of texture resolutions determined by $(\alpha,\beta)=(3,3)$ as input features to PicassoNet-\Romannum{2}. We train and test the network using complete scenes as input samples. It can be noticed from Table~\ref{tab:s3dis_seg_review} that our method significantly outperforms the previous methods. The average inference time of PicassoNet-\Romannum{2} is 60ms across the 68 (voxelized) test samples in Area~5, using a single NVIDIA 3090 GPU. The final results reported in Table~\ref{tab:s3dis_seg_review} are computed on the original point cloud. We transfer the voxelized predictions to dense predictions using nearest-neighbor search. \vspace{2mm} \noindent\textbf{ScanNet.} The ScanNet dataset~\cite{dai2017scannet} comprises reconstructed room meshes from RGB-D video frames, and has rich annotations for semantic vertex labelling. It includes 1,613 meshes in total, among which 1,213 scenes are used for training and 300 scenes for validation. We ignore the 100 test samples in our experiment as their labels are unavailable. The dataset contains 40 class labels, while 20 are recommended for performance evaluation. We train and test our network with complete scene samples. Our network takes the voxelized mesh of grid size 2$cm$ as input. Yet, for better performance, we generate those inputs by voxelizing the raw meshes on-the-fly after data augmentation has been applied. This leaves PicassoNet-\Romannum{2} unable to benefit from rendered textures, since on-the-fly rendering of large-scale data is computationally intractable. We therefore use only vertex colors to form the facet textures. Our result on the validation set of ScanNet is 71.9\%, which is 3.6\% higher than DCM-Net and is very competitive with the 72.2\% of the top performer MinkowskiNet. MinkowskiNet has 29.8M training parameters \cite{xie2020pointcontrast}, while our network produces similar results using just 3.7M parameters. We report the results of our network on the test benchmark of ScanNet in Table~\ref{tab:scannet_test}, which validates that PicassoNet-\Romannum{2} is very competitive with the top performer. We note that PicassoNet-\Romannum{2} takes 58 ms on average to process each (voxelized) mesh on a single NVIDIA RTX 3090 GPU. \section{Further Analysis} \label{sec:ablation} In this section, we provide further results to analyze the proposed approach. \vspace{-2mm} \subsection{Varying the Number of vMF Components} We study the influence of the number of components in the vMF mixture, \textit{e.g.~} $T=27, 18, 9$, on the performance of PicassoNet-\Romannum{2}. We use S3DIS as well as ScanNet for this analysis; for ScanNet, all analysis experiments are conducted on the validation set. Table~\ref{tab:diff_vMF_size} summarizes the segmentation results. It can be noticed that, generally, a larger $T$ leads to more accurate predictions because it introduces more filter parameters. By default, PicassoNet-\Romannum{2} selects $T$ as 27.
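To make the role of $T$ more concrete, the following minimal sketch (Python/NumPy, not the Picasso API) shows how $T$ vMF components can turn unit direction vectors into soft assignment weights; since each component is paired with its own filter parameters, a larger $T$ increases the parameter count. All names and values here are illustrative assumptions.
\begin{verbatim}
# Generic sketch (not the Picasso API) of how T von Mises-Fisher (vMF)
# components produce soft assignment weights for unit direction vectors.
# Component means `mu` and concentrations `kappa` are illustrative.
import numpy as np

def vmf_mixture_weights(x, mu, kappa):
    """Soft assignment of unit vectors x (N, 3) to T vMF components.

    mu:    (T, 3) unit mean directions of the components.
    kappa: (T,)   concentration parameters.
    Returns (N, T) weights that sum to 1 over the T components.
    """
    # Log of the unnormalized vMF density: kappa_t * <mu_t, x_i>.
    logits = kappa[None, :] * (x @ mu.T)          # (N, T)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=1, keepdims=True)

# Example: a larger T gives a finer soft partition of the directions,
# and each component carries its own filter parameters.
rng = np.random.default_rng(0)
normals = rng.normal(size=(5, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
T = 27
mu = rng.normal(size=(T, 3))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)
kappa = np.full(T, 10.0)
print(vmf_mixture_weights(normals, mu, kappa).shape)  # (5, 27)
\end{verbatim}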
\vspace{-2mm} \subsection{Performance with Different Input Features}\label{subsec:input_feats_ablation} We also analyze the performance of PicassoNet-\Romannum{2} for different facet features, including geometrics and textures. Firstly, we fix the facet textures as `plain' vertex colors, \textit{i.e.~}$(\alpha,\beta)=(0,1)$, and compare the network performance for two different facet geometrics. One is composed of vertex coordinates, \textit{i.e.~}$[{\bf x}_1, {\bf x}_2, {\bf x}_3]$, while the other is the proposed $[{\boldsymbol \ell}, {\boldsymbol \theta}, {\bf n}, {\bf h}]$ in \S~\ref{subsec:mesh_facet_geometry}. ScanNet is used in this experiment. Our results in Table~\ref{tab:diff_input_feats} suggest that the proposed facet geometrics is a better alternative to the raw vertex coordinates. Further, to compare the network performance for `plain' and `rendered' facet textures, we render the mesh facets of the S3DIS dataset by setting $(\alpha,\beta)=(3,3)$ in Eq.~(\ref{equ:interpolate_num}) and fix the geometric features as $[{\boldsymbol \ell}, {\boldsymbol \theta}, {\bf n}, {\bf h}]$. These results are also summarized in Table~\ref{tab:diff_input_feats}. It can be noticed that finer textures indeed boost the network performance, as expected. \begin{table}[t] \centering \caption{Performance of PicassoNet-\Romannum{2} by using \\different numbers of components in the vMF mixture. }\label{tab:diff_vMF_size} \vspace{-1.5mm} \begin{tabular}{l|c|c|c|c|c} \hline Dataset & \multicolumn{3}{c|}{S3DIS Area 5} & \multicolumn{2}{c}{ScanNet} \\ \hline Number $T$ & 27 & 18 & 9 & 27 & 18 \\ \hline mIoU & 69.4 & 68.8 & 68.4 & 71.9& 71.8 \\ \hline \end{tabular} \vspace{-2mm} \end{table} \begin{table}[t] \centering \caption{Performance of PicassoNet-\Romannum{2} by taking different\\ geometric and texture features for the mesh facets.}\label{tab:diff_input_feats} \vspace{-1.5mm} \begin{tabular}{p{18mm}|c|c|c|c} \hline Dataset & \multicolumn{2}{c|}{ScanNet} & \multicolumn{2}{c}{S3DIS} \\ \hline Textures~$(\alpha,\beta)$& \multicolumn{2}{c|}{$(0,1)$} & $(0,1)$ & $(3,3)$ \\ \hline Geometrics & $[{\bf x}_1,{\bf x}_2,{\bf x}_3]$ & $[{\boldsymbol \ell},{\boldsymbol \theta},{\bf n},{\bf h}]$ & \multicolumn{2}{c}{$[{\boldsymbol \ell},{\boldsymbol \theta},{\bf n},{\bf h}]$} \\ \hline mIoU & 71.0 &71.9 & 69.4 & 69.8\\ \hline \end{tabular} \vspace{-3mm} \end{table} \vspace{-2mm} \subsection{Is Dual Convolution Always Necessary?}\label{subsec:dual_layers_ablation} It is known that point cloud convolutions can be time-consuming because of neighborhood search, which imposes a significant computational burden when processing dense data~\cite{lei2020spherical}. The DCM-Net and the original PicassoNet~\cite{lei2021picasso} use dual convolutions in every layer of their networks. In comparison, PicassoNet-\Romannum{2} utilizes dual convolutions only in its encoder blocks of coarse resolutions~$\mathcal{T}^{3\sim5}$. To consolidate our pruning of point-based convolutions for PicassoNet-\Romannum{2}, we empirically evaluate whether dual convolution is necessary for every layer. Specifically, we alter PicassoNet-\Romannum{2} by either adding point cloud convolutions to its encoders of resolution~$\mathcal{T}^2$, or removing its existing point cloud convolution from the encoder of resolution~$\mathcal{T}^3$. We test the performance of these variants, and report their results as well as other details in Table~\ref{tab:nLayer_dual}. The ScanNet dataset is utilized in this experiment.
From our findings, we can conclude that point-based convolutions can be eliminated from the high-resolution layers of the network without affecting the network performance. This led us to our eventual configuration of PicassoNet-\Romannum{2}, which is both effective and efficient. The inference time reported in the table is for voxelized meshes of grid size 2$cm$. To further confirm our finding, we also conducted a similar experiment on the HUMAN dataset. The results in Table~\ref{tab:exp_human_dual} validate our insight about the passive role of dual convolutions for dense meshes. \begin{table}[!t] \centering \caption{Performance and runtime of the network while adding or removing \\ point cloud convolutions in PicassoNet-\Romannum{2}. The list of NN search radii 0.1,~0.2,~0.4,~0.8 in parentheses denotes the radii of neighborhood search for point cloud convolutions from resolution $\mathcal{T}^2$ to $\mathcal{T}^5$. The dual levels of $(\mathcal{T}^3, \mathcal{T}^4, \mathcal{T}^5)$ in the third column correspond to the actual configurations of PicassoNet-\Romannum{2}.} \label{tab:nLayer_dual} \vspace{-1.5mm} \begin{adjustbox}{width=0.47\textwidth}{ \begin{tabular}{l|c|c|c} \hline Config & adding & \textbf{used} & reducing \\ \hline Dual Levels & $(\mathcal{T}^2, \mathcal{T}^3, \mathcal{T}^4, \mathcal{T}^5)$ & $(\mathcal{T}^3, \mathcal{T}^4, \mathcal{T}^5)$& $(\mathcal{T}^4, \mathcal{T}^5)$ \\ \hline NN search radii & (0.1,~0.2,~0.4,~0.8)& (0.2,~0.4,~0.8)& (0.4,~0.8) \\ \hline Inference time (ms) & 93 & 58 & 52 \\ \hline mIoU & 71.7 & 71.9 & 71.0 \\ \hline \end{tabular}} \end{adjustbox} \end{table} \begin{table}[!t] \centering \caption{Network performance on the HUMAN dataset while \\adding or removing point cloud convolutions in PicassoNet-\Romannum{2}.} \label{tab:exp_human_dual} \vspace{-1.5mm} \begin{adjustbox}{width=0.48\textwidth} { \begin{tabular}{l|c|c|c|c|c} \hline Dual & $(\mathcal{T}^2, \mathcal{T}^3, \mathcal{T}^4, \mathcal{T}^5)$ & $(\mathcal{T}^3, \mathcal{T}^4, \mathcal{T}^5)$& $(\mathcal{T}^4, \mathcal{T}^5)$ & $(\mathcal{T}^5)$ & None\\ \hline Radii & (0.1,~0.2,~0.4,~0.8) & (0.2,~0.4,~0.8)& (0.4,~0.8) & (0.8) & N.A. \\ \hline Acc&91.3& 91.4 & 91.0 & 90.7 & 89.2 \\ \hline \end{tabular} } \end{adjustbox} \end{table} \vspace{-2mm} \subsection{2D Embedding of the Shape Features} We visualize the shape features learned by PicassoNet-\Romannum{2} for the test samples of ShapeNetCore \cite{chang2015shapenet} by showing their 2D embeddings in Fig.~\ref{fig:tSNE_2Dembed_ShapeNetCore}, using the t-SNE technique \cite{van2008visualizing} for dimension reduction. For the feature representations, we use the 256-dimensional output of global pooling in the classification network. From Fig.~\ref{fig:tSNE_2Dembed_ShapeNetCore}, it is clear that the shapes of most classes are distinctly represented, such as \textit{car}, \textit{bus}, \textit{guitar}, \textit{knife}, \textit{vessel}, \textit{rifle}, \textit{faucet}, \textit{airplane}, \textit{chair}, \textit{sofa}, \textit{table}, etc. We also note that some classes lie much closer to each other, and their closeness is well-justified by their shapes and semantics. For instance, buses are close to cars and pistols are close to rifles.
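The embedding itself can be reproduced with standard tools; a minimal sketch is given below, assuming the 256-dimensional pooled features and class labels of the ShapeNetCore test samples have been exported to NumPy arrays (file names are placeholders).
\begin{verbatim}
# Minimal sketch of producing the 2D embedding: t-SNE on the 256-d global
# pooling features of the classification network. File names are placeholders.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

features = np.load("shapenetcore_test_features.npy")  # (N, 256), assumed export
labels = np.load("shapenetcore_test_labels.npy")      # (N,), integer class ids

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)

plt.figure(figsize=(8, 8))
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=2, cmap="tab20")
plt.axis("off")
plt.savefig("tSNE_2Dembed_ShapeNetCore.png", dpi=300)
\end{verbatim}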
\begin{figure}[!t] \centering \hspace{-1mm}\includegraphics[width=0.49\textwidth]{Figs/Top55_names.pdf} \\ \vspace{-2mm} \caption{2D embedding of the shape feature representations learned by PicassoNet-\Romannum{2} for the test samples of ShapeNetCore.} \label{fig:tSNE_2Dembed_ShapeNetCore} \end{figure} \vspace{-2mm} \subsection{Semantic Parsing Visualization} As representative examples, we visualize the semantic parsing results of PicassoNet-\Romannum{2} for shapes of human bodies \cite{maron2017convolutional} and surfaces of real-world scenes \cite{dai2017scannet} in Fig.~\ref{fig:sem_vis}. The network predicts most of the body parts and scene objects correctly. However, we see that segmenting parts and objects near boundaries sometimes causes minor issues for our network. Nevertheless, such errors remain minor and do not occur too frequently. Also notice that one of the test samples of human bodies has an incorrect ground-truth label for the right leg. Such ground-truth problems can result in a lower accuracy value for an accurate technique like ours. This also indicates that, instead of the highest prediction performance on a single dataset, highly competitive results across multiple datasets are sometimes preferable in this domain. Our PicassoNet-\Romannum{2} is able to achieve that. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Figs/sem_vis.pdf} \vspace{-2mm} \caption{The ground truth and our predictions for the shapes of human bodies and real-world scene surfaces. The black colors in the ground truth of textured surfaces indicate unlabelled objects.} \label{fig:sem_vis} \vspace{-2mm} \end{figure} \vspace{-3mm} \section{Conclusion} We made two major contributions towards hierarchical neural modeling of heterogeneous 3D meshes. First, we presented Picasso, a modular implementation of multiple desired operations for geometric feature learning over 3D meshes. Picasso introduces novel mesh-amenable convolutional operations, mesh (un)poolings and GPU-accelerated mesh decimation. This article considerably enhances our preliminary version of Picasso by incorporating fuzzy modeling modules and improved efficiency. Moreover, we also release a PyTorch version of Picasso with this article, alongside the existing TensorFlow support. The second major contribution of this article is our network, PicassoNet-\Romannum{2}. Enabled by the upgraded Picasso, our network is able to effectively process facet signals, including primitive geometrics and textures, as inputs. It also takes advantage of a new insight provided in this article regarding the passive role of dual convolutions in high-resolution mesh feature learning. Leveraging that, PicassoNet-\Romannum{2} learns geometric features over 3D shapes and scene surfaces efficiently. Through extensive experiments, we established the highly competitive performance of PicassoNet-\Romannum{2} for shape analysis and scene parsing. \vspace{-3mm} \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi Professor Ajmal Mian is the recipient of an Australian Research Council Future Fellowship (project number FT210100268) funded by the Australian Government. Dr.~Naveed Akhtar is the recipient of an Office of National Intelligence Postdoctoral Grant (project number NIPG-2021-001) funded by the Australian Government. \ifCLASSOPTIONcaptionsoff \newpage \fi \vspace{-3mm}
\section{Introduction} In the context of open-source tools and multi-messenger astronomy, it is essential to provide the community with reliable and accessible analysis tools to reproduce state-of-the-art scientific results and to facilitate the incorporation of those tools in custom analysis chains for future work. We present two open-source code frameworks that combine multi-instrument data from gamma-ray and neutrino telescopes and that perform a global analysis aimed at constraining the nature of dark matter (DM). \newpage \section{Analysis workflow} The workflow for the \texttt{gLike}\footnote{\href{https://github.com/javierrico/gLike}{https://github.com/javierrico/gLike}}~\citep{javier_rico_2021_4601451} and \texttt{LklCom}\footnote{\href{https://github.com/TjarkMiener/likelihood_combiner}{https://github.com/TjarkMiener/likelihood\_combiner}}~\citep{tjark_miener_2021_4597500} tools is depicted in Fig.~\ref{fig:UMLWorkflow}. \texttt{gLike} is a self-contained framework, which takes as input high-level gamma-ray astronomical data in a format interfaceable with the one proposed by the \texttt{GADF}\footnote{\href{https://gamma-astro-data-formats.readthedocs.io/}{https://gamma-astro-data-formats.readthedocs.io/}} initiative~\citep{deil_christoph_2018_1409831} and numerically maximizes the joint likelihood functions. The obtained likelihood curves as a function of the DM self-annihilation cross-section $ \langle\sigma v\rangle $ can be stored in an intermediate txt data format and later reused as input. \texttt{LklCom} is fed either with these curves stored as txt files or with a single hdf5 file obtained by merging the txt files via its i/o tools. Various gamma-ray and neutrino telescopes can adopt the data format for the likelihood curves in their analysis chains and use the two presented analysis tools to perform a combined analysis. The tremendous advantage of this procedure is that there is no need to share sensitive low-level data among the participating instruments. \texttt{LklCom} features several matplotlib-based plotting functions to visualize the final output products generated by either tool. \articlefigure[width=1.0\textwidth]{X0-006_f1.eps}{fig:UMLWorkflow}{Pseudo-UML workflow from \texttt{gLike} and \texttt{LklCom}.} \section{Results} In order to verify the agreement of the independently implemented code frameworks, mock data is used and analyzed with the same analysis settings. In Fig.~\ref{fig:Comparison}, we overlay the \texttt{LklCom} DM limits plot with the DM limits and error bands obtained with \texttt{gLike} as red dashed contour lines, which shows accurate agreement. \articlefigure[width=0.6\textwidth]{X0-006_f2.eps}{fig:Comparison}{Comparison of \texttt{gLike} and \texttt{LklCom} with mock data.} \noindent The latest scientific results (see Fig.~\ref{fig:Armand}) of a global indirect DM search in the gamma-ray band, carried out with the presented tools, are reported in~\citep{Armand:2021}, where the major gamma-ray observatories \textit{Fermi}-LAT, HAWC, H.E.S.S., MAGIC, and VERITAS jointly analyzed observations of 20 dwarf spheroidal galaxies (dSphs). This work sets the most constraining and robust upper limits on the self-annihilation cross-section $ \langle\sigma v\rangle $ of weakly interacting massive particle DM towards dSphs over the widest DM mass range, extending from 5 GeV to 100 TeV.
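As a rough illustration of the combination step performed by these tools, the following minimal sketch (Python/NumPy) interpolates per-instrument likelihood curves, tabulated as $-2\ln\mathcal{L}$ versus $\langle\sigma v\rangle$, onto a common grid, sums them, and scans for the one-sided 95\% confidence-level upper limit. It illustrates the procedure only; the file layout and threshold handling are assumptions and not the actual \texttt{gLike}/\texttt{LklCom} interfaces.
\begin{verbatim}
# Generic sketch of the combination step: per-instrument curves of
# -2*ln(L) versus <sigma v> are interpolated onto a common grid, summed,
# and scanned for the one-sided 95% CL upper limit. File names and the
# two-column txt layout are assumptions, not the gLike/LklCom API.
import glob
import numpy as np

sv_grid = np.logspace(-26, -22, 400)          # common <sigma v> grid [cm^3/s]
total_ts = np.zeros_like(sv_grid)

for fname in glob.glob("curves/*_lkl.txt"):   # one txt file per instrument
    sv, minus2lnL = np.loadtxt(fname, unpack=True)
    total_ts += np.interp(sv_grid, sv, minus2lnL)

total_ts -= total_ts.min()                    # -2*Delta ln(L) w.r.t. best fit
# One-sided 95% CL upper limit: first crossing of 2.71 above the minimum.
best = np.argmin(total_ts)
above = np.where((np.arange(sv_grid.size) > best) & (total_ts > 2.71))[0]
if above.size:
    print("95% CL upper limit on <sigma v>: %.3e cm^3/s" % sv_grid[above[0]])
\end{verbatim}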
\articlefigure[width=1.0\textwidth]{X0-006_f3.eps}{fig:Armand}{[Taken from ~\citep{Armand:2021}] Upper limits at 95\% confidence level on $ \langle\sigma v\rangle $ as a function of the DM mass for the annihilation channels $ b\bar{b} $ (left) and $ \tau^{+}\tau^{-} $ (right) from the combined analysis of dSph observations by \textit{Fermi}-LAT, HAWC, H.E.S.S., MAGIC, and VERITAS performed by \texttt{gLike} and \texttt{LklCom}.} \section{Conclusions and outlook} This contribution presents two open-source software tools aiming to help the astronomical community to unify the search for DM and derive global DM constraints from multi-instrument and multi-messenger observations. \acknowledgements TM acknowledges support from PID2019-104114RB-C32. CN, JR, and DN acknowledges partial support from The European Science Cluster of Astronomy \& Particle Physics ESFRI Research Infrastructures funded by the European Union’s Horizon 2020 research and innovation program under Grant Agreement no. 824064. This work had support from the ERDF under the Spanish Ministerio de Ciencia e Innovaci\'{o}n (MICINN, grant PID2019-107847RB-C41), and from the CERCA program of the Generalitat de Catalunya. \\ \\
\section{Introduction} Keyword spotting is the task of detecting predefined words in continuous speech. It is of importance for human-computer interaction, and is widely used in various smart devices and voice retrieval systems. In recent years, due to the rapid development of artificial intelligence technology, keyword spotting has also achieved promising developments in many scenarios, such as smart speakers and voice assistants. In the past few decades, researchers have proposed various techniques to improve the performance of keyword spotting systems. The first is the family of Query-by-Example (QbyE) methods \cite{vasudev2015query} \cite{2017Query} \cite{zhang2009unsupervised} \cite{barakat2012improved}, which utilize keyword speech samples to obtain a set of feature templates. In the detection phase, a feature template is extracted from the test speech sample and matched with the keyword feature templates via pairwise template similarity. If the similarity exceeds the threshold, it is considered a hit. The second is the family of large vocabulary continuous speech recognition (LVCSR) \cite{weintraub1995lvcsr} \cite{chen2013quantifying} \cite{chen2013using} based methods, which are widely used in audio retrieval tasks. These methods transcribe speech to text and index it for keyword information. In order to improve the recall rate, some improved methods \cite{thambiratnam2005dynamic} \cite{cardillo2002phonetic} introduce lattices to store multiple decoded sequences as well as position information. In the VKW challenge, the organizer provides 1505 hours of training data and 15 hours of fine-tuning data. The F1 score and the actual term-weighted value (ATWV) are used to evaluate system performance. The challenge mainly focuses on evaluating the accuracy and the recall rate of the keyword spotting system. Accordingly, we choose the LVCSR-based method for this challenge. We introduce the BBS-KWS system, which aims to improve the accuracy and the recall rate of keyword spotting throughout the entire system. Specifically, the equipped ASR module transcribes input speech features into text representations, and the KWS module uses both the posterior probability of the ASR module and the transcribed text to query keyword candidates paired with the corresponding scores. Since the acoustic model has taken the custom Chinese characters into consideration, it is unnecessary to retrain the model for different keywords. Our contributions are summarized as follows: (1) We introduce a big backbone network and syllable modeling units to improve the cross-domain performance. (2) We introduce a keyword biasing mechanism to improve the recall rate of keywords in the speech recognition stage. (3) In the KWS module, we utilize a series of methods to improve the recall rate of keyword detection, including multi-stage matching and fuzzy matching. Inspired by \cite{park2020improved} and \cite{xie2020self}, we use semi-supervised learning on the CN-Celeb dataset \cite{fan2020cn} to solve the problem of data sparsity and the domain mismatch between the training set and the validation set. Specifically, we first train a model on the labeled data and then use it as the initial teacher model. We generate pseudo-labels for the unlabeled CN-Celeb dataset via the teacher model, and then use the pseudo-labeled data to fine-tune the model. These steps are repeated for multiple rounds, greatly improving the performance of the new model on the validation set.
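A rough sketch of this iterative teacher-student procedure (detailed in Section 3) is given below; \texttt{train}, \texttt{fine\_tune}, and \texttt{transcribe} are caller-supplied placeholders standing in for the actual ASR training and decoding pipeline rather than functions of our released code.
\begin{verbatim}
# Sketch of the iterative pseudo-labeling procedure described above.
# `train`, `fine_tune` and `transcribe` are caller-supplied callables
# standing in for the actual ASR training and decoding pipeline.
def semi_supervised_rounds(train, fine_tune, transcribe,
                           labeled_data, vkw_data, unlabeled_audio, rounds=4):
    m0 = train(labeled_data)      # well-trained initial model M0
    teacher = m0                  # (1) set the initial teacher M = M0
    for _ in range(rounds):
        # (2) generate pseudo-labels M(U) for the unlabeled audio U
        pseudo = [(utt, transcribe(teacher, utt)) for utt in unlabeled_audio]
        # (3) mix M(U) with the VKW data and fine-tune M0 to obtain M'
        teacher = fine_tune(m0, pseudo + vkw_data)   # (4) set M = M', repeat
    return teacher
\end{verbatim}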
This paper is organized as follows: In Section 2, we describe the structure of the overall BBS-KWS system, including the ASR module and the KWS module. In Section 3, we introduce the experimental setup, the semi-supervised learning and the experimental results on the VKW challenge. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{BBS-KWS.pdf} \caption{The pipeline of the BBS-KWS system.} \label{fig:BBS} \end{figure} \section{BBS-KWS Model} The BBS-KWS system consists of an ASR module and a KWS module; a schematic diagram of the system is provided in Figure~\ref{fig:BBS}. In the whole keyword spotting process, the ASR module converts the speech features into N-best hypotheses, and the KWS module continuously matches each candidate keyword against the N-best candidate sequences. Once a keyword hits, the system uses the outputs of the ASR acoustic model to calculate the CTC forward score as the confidence score. \subsection{ASR Module} The acoustic model converts the input audio features into the probability distribution over the modeling units, and is the most important part of the entire system. In our system, the N-best candidate sequences are generated by the acoustic model. \begin{equation} loss = {\lambda} loss_{ctc} + (1-{\lambda})loss_{att} \label{eq1} \end{equation} The acoustic model is based on a hybrid CTC/Attention structure \cite{graves2006connectionist} \cite{kim2017joint} \cite{watanabe2017hybrid}. The input of the model is 80-dimensional filterbank features. The output of the encoder is used to calculate the CTC objective, and the output of the decoder together with the ground-truth label is utilized to obtain the CE loss. During training, the loss functions of the two branches are linearly combined in a certain proportion, as shown in Eq.~(\ref{eq1}), where $\lambda$ denotes the relative weight of the two losses. Different from the training process, the inference phase of BBS-KWS only uses the CTC branch to generate N-best hypotheses: it applies prefix beam search, uses a 4-gram language model (LM) for shallow fusion, and finally obtains the N-best sequences. \subsubsection{Big backbone} In recent years, it has been common to use deeper models to improve performance. For example, researchers have investigated the Transformer \cite{vaswani2017attention} \cite{dong2018speech} \cite{mohamed2019transformers} model in the field of speech recognition. They usually adopt several layers of CNN plus multiple stacked transformer sublayers. However, it is hard to further improve the model accuracy by increasing the depth of the network. Therefore, the depth of the transformer is generally in the range of 6-24, which is smaller than the number of layers typically used in the image domain. To achieve a trade-off between accuracy and training efficiency, the BBS-KWS system does not extend the depth of the model, but expands the width of the conformer network \cite{gulati2020conformer}. Specifically, the encoder adopts the conformer structure with a depth of 12, while the decoder has 6 identical layers. The attention dimension is increased from 256 to 512, and the number of attention heads is also increased from 4 to 8. The width of the feed-forward layer is 2048. \subsubsection{Keyword biasing} Due to factors such as context information and the frequency of tokens in the training set, rarely-used words are usually underestimated, leading to their absence from the N-best paths.
This makes keyword spotting a challenging task, as rarely-used words cannot reach the final keyword determination process. To solve this issue, we utilize keyword biasing technology to perform keyword matching in real time during the process of generating N-best hypotheses. If a certain keyword is matched, the corresponding candidate sentence is awarded an additional score. \begin{equation} W(keyword) = -{\alpha}LM(keyword) + {\beta} \label{eq2} \end{equation} We introduce a language model and use the predicted scores to adaptively assign weights to words. The N-gram language model counts the word frequencies in the training data. By negating the language model score, low-frequency keywords are assigned high weights. Long keywords, in particular, are segmented into several parts before calculating the weights, thus avoiding the situation where some keywords cannot be matched because of their excessive length. In Eq.~(\ref{eq2}), $LM$ is the N-gram language model, and $\alpha$ and $\beta$ are the parameters of the affine transformation. \subsubsection{Syllable modeling units} Another problem in the VKW task is that there is little in-domain data matching the test set. It is extremely difficult to effectively fine-tune such a large model on such a small amount of data. To solve this problem, we follow \cite{zhang2019investigation} \cite{2019Char} \cite{zou2018comparable} to introduce syllable modeling units, which are smaller than character units. At the same time, in order to maintain accuracy at the expected character level, the character-level modeling unit is also retained. The CTC classifier considers syllables as modeling units, and the decoder deals with character units. Therefore, the entire loss function is a linear combination of the syllable-based CTC loss and the character-based CE loss. The specific structure is shown in Figure~\ref{fig:p2}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{hybrid_syllable_char.pdf} \caption{Hybrid syllable-character model.} \label{fig:p2} \end{figure} \subsection{KWS Module} The KWS module aims to detect whether the keywords exist in the speech according to the results of the ASR module, and can be divided into two steps: matching and scoring. In the matching stage, the BBS-KWS system uses fuzzy matching and multi-stage matching to improve the recall rate of the keywords. In the scoring stage, the BBS-KWS system uses the CTC algorithm to calculate the confidence score of the keywords.
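As a rough illustration of the keyword biasing award in Eq.~(\ref{eq2}), the following minimal sketch computes the bias weight for a matched keyword. The LM scoring and keyword segmentation calls are placeholders, the default $(\alpha,\beta)=(1,4)$ follows the decoding configuration reported later, and averaging the weights over the segments of a long keyword is an assumption made for illustration.
\begin{verbatim}
# Sketch of the biasing weight in Eq. (2): rare keywords (low LM score)
# receive a larger award when matched during prefix beam search.
# `lm_logscore` and `segment` are caller-supplied placeholders for the
# 4-gram LM score and the keyword segmentation routines.
def bias_weight(keyword, lm_logscore, segment, alpha=1.0, beta=4.0):
    parts = segment(keyword) or [keyword]    # long keywords split into parts
    weights = [-alpha * lm_logscore(p) + beta for p in parts]
    return sum(weights) / len(weights)       # averaging is an assumption
\end{verbatim}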
\subsubsection{Keyword matching and scoring} \begin{table*}[th] \caption{Comparison of methods} \label{tab:example} \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{Methods} & \multicolumn{3}{|c|}{F1} & \multicolumn{3}{|c|}{ATWV} \\ \cline{2-7} ~ & lgv & liv & stv & lgv & liv & stv \\ \hline Chain model baseline&0.6781&0.6565&0.7006 &0.5171&0.6027&0.5644\\ \hline Conformer with character modeling unit&0.7646&0.7899&0.8154 &0.5534&0.6479&0.6222\\ + LM &0.7988&0.8249&0.839 &0.6191&0.7183&0.6997\\ + Length normalization&0.8236&0.849&0.8579 &0.6703&0.7784&0.7711\\ + N-best matching&0.8247&0.8476&0.8531 &0.7019&0.8066&0.7933\\ + SSL round 1&0.8543&0.886&0.8969 &0.7192&0.8271&0.8129\\ + keyword biasing&0.857&0.8813&0.8898 &0.7587&0.8457&0.8356\\ + SSL round 4&0.8644&0.8897&0.8893 &0.7677&0.8538&0.8335\\ + Fuzzy matching&0.8681&0.8937&0.8987 &0.8078&0.8693&0.8579\\ \textbf{+ Syllable modeling units} &\textbf{0.8886}&\textbf{0.9059}&\textbf{0.91 }&\textbf{0.82}&\textbf{0.8809}&\textbf{0.8683}\\ \hline \textbf{Ensemble} &\textbf{0.8839}&\textbf{0.8973}&\textbf{0.8965} &\textbf{0.8495}&\textbf{0.9024}&\textbf{0.8895}\\ \hline \end{tabular} \end{table*} The KWS module queries keywords from the N-best results decoded by the acoustic model. If the keyword appears more than once in the N-best sequences, it is considered a hit and passed to the scoring step. \textbf{Keyword Matching:} During the matching process, BBS-KWS introduces a multi-stage matching strategy and a fuzzy matching strategy to improve the recall rate. Multi-stage matching uses the model with hybrid syllable and character modeling units for syllable matching and character matching. Keywords are matched in both the syllable N-best sequences and the character N-best sequences. At the same time, fuzzy matching is used to improve the recall rate of keyword spotting. The dimsim \footnote{\url{https://github.com/System-T/DimSim.git}} library is used to calculate the pronunciation similarity between the decoded vocabulary and the keywords. If the distance is less than a certain threshold, the keyword is also considered a hit. \textbf{Keyword Scoring:} Once the keyword hits, it is passed to the subsequent scoring stage. The CTC classifier outputs the probability distribution of each frame, and the scoring stage uses the CTC peak information to obtain the position offset of the keyword in the speech and to compute the probability of the keyword path. \begin{equation} S(kw) = {\sum}_{\pi:\beta(\pi)=kw}\rho(\pi) \label{eq3} \end{equation} In Eq.~(\ref{eq3}), \emph{kw} stands for the keyword, $\beta$ is the CTC path compression algorithm, $\pi$ is the keyword's state path of the CTC prefix beam search, and $\rho(\cdot)$ calculates the path score. In particular, if multi-stage matching is used and the syllable keyword and the character keyword hit at the same time, the result with the higher score is retained. BBS-KWS applies length normalization to the confidence score, as shown in Eq.~(\ref{eq4}). \begin{equation} Score(kw) = {S}(kw) / length(kw) \label{eq4} \end{equation} \section{Experiments And Results} \subsection{Experimental Setup} The training data consists of the following three parts: (1) the 1505-hour training set; (2) the 15-hour fine-tuning dataset collected from long video, short video, and live broadcast scenarios; (3) 560 hours of unlabeled data randomly selected from the CN-Celeb dataset for semi-supervised learning.
We use the SpecAug method \cite{park2019specaugment} during the training process and apply speed augmentation with coefficients (0.9, 1.0, 1.1) to the training data. The text corpus for the language model consists of two parts: (1) the labeled data provided to the contestants; (2) text collected from public websites, including Wikipedia, Weibo, Douban, and Netease News. The language model is trained on 40M sentences. The training data and the text corpus used in the restricted and unrestricted tracks are the same. BBS-KWS adopts the conformer structure; the attention dimension is 512, and the number of attention heads is 8. We used a 12-layer encoder and a 6-layer decoder. The $\lambda$ in Eq.~(\ref{eq1}) is 0.9. During decoding, ($\alpha$,${\beta}$) in Eq.~(\ref{eq2}) is (1, 4), and the beam size is 10. The threshold of dimsim is 0.5. The system uses the keyword F1 score and the actual term-weighted value (ATWV) to measure system performance. Among them, F1 reflects both the accuracy and the recall rate of the system. ATWV mainly evaluates the average TWV value of the system over the keywords, which reflects the system's detection performance on keywords with different frequencies. The final submitted system is an ensemble of three well-trained models. \subsection{Semi-supervised Learning} In order to solve the problem of domain mismatch between the training set and the validation set, the CN-Celeb dataset \cite{fan2020cn} is used for semi-supervised learning (SSL). We randomly select 560 hours of CN-Celeb data for semi-supervised learning. The method we use is inspired by noisy student training (NST). Formally, the selected unlabeled data and the well-trained teacher model are denoted as $U$ and $M_0$, respectively. The semi-supervised learning algorithm is summarized as follows: (1) Set the initial teacher model $M=M_{0}$. (2) Generate pseudo-labels $M(U)$ via the teacher model $M$. (3) Mix $M(U)$ and the VKW data to fine-tune the model $M_{0}$ and get a new model $M'$. (4) Set $M=M'$ and repeat steps (2) and (3) until convergence. \subsection{Experimental Results} Table 1 shows the performance of the BBS-KWS system in three scenarios: long video (lgv), short video (stv), and live broadcast (liv). We can draw the following conclusions from the table. First, the language model, length normalization and syllable modeling units brought significant improvements. Second, compared to the chain baseline model, the F1 score of the BBS-KWS system is improved from (67.81\%, 65.65\%, 70.06\%) to (88.39\%, 89.73\%, 89.65\%) in the three scenarios, respectively, and the ATWV score of the BBS-KWS system is improved from (51.71\%, 60.27\%, 56.44\%) to (84.95\%, 90.24\%, 88.95\%). Overall, BBS-KWS achieves a 31\% relative F1 increase and a 56\% ATWV increase. The best single system achieves a 32\% relative F1 increase and a 52\% ATWV increase. Furthermore, the semi-supervised learning uses the CN-Celeb data to increase the diversity of the training data, thereby improving the performance of the model. This experiment is repeated four times, and the accuracy of the pseudo-labels is improved through multiple rounds of iteration. As can be seen from Table 1, SSL brings a great improvement. \section{Conclusions} The BBS-KWS system exploits a hybrid CTC/Attention acoustic model, combined with a big backbone, syllable modeling units, and keyword biasing technology to improve the performance of the ASR module.
In the KWS module, it uses fuzzy matching and multi-stage matching methods, achieving promising performance. We also adopt semi-supervised learning to further improve the robustness of the system. In the VKW task, the BBS-KWS system achieves significant gains over the baseline, with a 31\% relative F1 increase and a 56\% ATWV increase. \bibliographystyle{plain}
\section{Introduction} The purpose of gesture recognition is to distinguish users' gestures in videos. Gesture recognition is given a video sequence as input, unlike an image in ImageNet \cite{deng2009imagenet}. Thus, simply using spatial features of video frames cannot effectively achieve gesture recognition. Spatiotemporal networks exploit temporal features as well as spatial features for gesture recognition \cite{feichtenhofer2017spatiotemporal, taylor2010convolutional, tran2015learning}. Furthermore, multi-modal networks leverage a variety of modalities to improve the network to contain more information \cite{nishida2015multimodal, zhu2017multimodal}. Also, studies have been conducted using spatiotemporal features drawn from several modalities at once \cite{neverova2015moddrop}. Despite utilizing temporal features, existing studies have limitations in extracting efficient spatiotemporal features. Based on this intuition, we attempt to extract efficient temporal features from video sequences with skeleton sequences. Skeleton sequences and video sequences are deeply related to each other, and they are characterized by sharing time-critical points. We propose a skeleton-based keyframe selection module to extract the critical time points from the video data and propose a network that can maintain the keyframe features and the temporal-attention features well, as can be found in Fig. \ref{1}. \begin{figure*}[t] \centerline{\includegraphics[width=0.98\textwidth]{Overview_1.pdf}} \caption{An overview of the proposed network architecture for efficient gesture recognition. The inputs are video frames and skeleton data, and the output is the predicted class of gestures in the video. Our network can be largely divided into three stages. (1) Extract the keyframes from the video using skeleton sequences. (2) Feed the keyframe features from the keyframe pathway to the temporal-attention pathway. (3) Extract the spatiotemporal features from the temporal-attention features and join it into the keyframe pathway. Step (2) through (3) operate continuously and repeatedly. } \label{1} \end{figure*} There are also studies that use different modalities to efficiently extract temporal features. Gao \textit{et al.} \cite{gao2020listen} conducted a study using audio sequences to find important parts within video clips which contain corresponding audio. They can efficiently select critical moments within a long untrimmed video to extract time-critical features. In addition, Song \textit{et al.} \cite{song2020gesture} uses audio sequences as a detector for gestures. The detector that uses audio data determines which part of the video to proceed with for gesture recognition. Likewise, we do not simply fuse modalities \cite{miao2017multimodal}, but exploit skeleton sequences to extract keyframes within video sequences. We leverage attention-based long short-term memory (LSTM) to select the keyframes of the skeleton sequences, and use the frames as the keyframes of the video sequences. Furthermore, we prove that utilizing skeleton data as a keyframe selection module can achieve higher performance than simply fusing the modalities. We also propose a bi-directional consecutively connected two-pathway network (BCCN) that efficiently fuses keyframe features and spatiotemporal features. By keeping the keyframe features in the two pathways, both pathways are able to extract effective features for gesture recognition, and the activation maps are also obtained more clearly. 
Our contributions can be summarized as follows: \begin{itemize} \item We experimentally show how the skeleton-based keyframe selection module helps achieve good keyframe features for gesture recognition. \item We propose a BCCN that can contain keyframe features and spatiotemporal features well during the learning process to classify gestures efficiently. \item The pathways of the proposed network teach each other to maintain spatiotemporal characteristics of both pathways for fusing spatial semantics and temporal semantics effectively. \end{itemize} \section{Related Work} In this section, we consider the following topics: temporal semantics, multimodal network, multitask learning, skeleton data, and two-stream networks. \noindent\textbf{Temporal Semantics} Gestures consist of a series of human poses, so they cannot be thought of without temporal semantics. Therefore, it is common to use video sequences for gesture recognition. However, video sequences contain many parts that are unnecessary to recognize the gestures. For example, video sequences can include basic human postures that are not related to behavior or contain the movements of objects other than humans. In addition, adjacent frames are likely to have many overlapping features because human body positions are likely to be similar to each other. In this cases, the network learns unnecessary information, which leads to poor learning performance. Therefore, efficient handling of temporal semantics is considered an important issue for improving performance. Neverova \textit{et al.} \cite{neverova2015moddrop} uses various time scales to process temporal information, helping to perceive behaviors insensitive to the speed of the gestures. However, it requires diversifying the time scales constantly to get better performance. In comparison, Feichtenhofer \textit{et al.} \cite{feichtenhofer2019slowfast} proposed a method to follow a two-pathway network. The two pathways that make up the network extract features, focusing on the spatial and temporal characteristics, respectively. Unlike Neverova's work, it is efficient in that it extracts specific features separately -- the temporal features of the video sequences and spatial features of the frames. However, this network can be improved because it only extracts spatial features of the starting frames from each video batch. \noindent\textbf{Multimodal Networks} In addition to simply handling time information well, there are ways to increase gesture recognition performance by efficiently combining and accepting different kinds of data. For example, a person does not only use their vision to grasp the behavior of others. For example, when you see a person waving their hand forward or backward, it is efficient to use other senses, such as hearing, to determine whether the person is asking you to come or to go. Even if a person is doing the same thing, when they say ``Come this way,'' the action becomes ``come this way.'' But when they say ``Go that way,'' the action becomes ``go that way.'' Therefore, the multimodal network takes the approach that multiple behaviors can be classified well if other senses are available. Multimodal networks do not use one type of data, but rather two or more types of data. Multimodal networks increase behavioral recognition performance, but due to the large size of the data, the performance of the hardware is greatly affected. This is why the way to combine different types of data is important. 
The modality fusion method varies the timing and feature combining method to handle the data efficiently \cite{gadzicki2020early, miao2017multimodal, snoek2005early, wang2020deep}. Recently, a method has been proposed to effectively extract features from the data using other types of data \cite{gao2020listen, song2020gesture}. We discuss in Section 4 that it is better to extract effective features from other data using one type of data rather than simple fusion methods in the field of gesture recognition. \noindent\textbf{Multitask Learning} Multitask learning is a learning method that improves the performance of all tasks by allowing a system to learn highly correlated tasks simultaneously \cite{caruana1997multitask, crawshaw2020multi, ruder2017overview}. The network can be learned efficiently because it shares learned representations, which helps to obtain generalized features. Various sharing methods are being studied \cite{kumar2012learning, meyerson2017beyond}. There are three issues in multitask learning: when to share, what to share, and how to share. Various studies have been conducted to tackle these issues \cite{argyriou2008convex, han2016multi, liu2015multi}. Luvizon \textit{et al.} \cite{luvizon20182d} utilizes a multitask CNN to perform simultaneous appearance and pose recognition, combining them for action recognition. By performing similar tasks using one network, the CNN was able to learn the generalized shared presentation well. Likewise, we focus on extracting generalized features while obtaining the spatiotemporal features. \noindent\textbf{Skeleton Data} Skeleton data contains information about human skeletons and joints, which is important in determining human gestures. Skeleton data can be extracted using RGB-D sensors, which consist of cameras and infrared sensors, or using deep learning \cite{fang2017rmpe, kim2015real, toshev2014deeppose}, such as OpenPose \cite{cao2019openpose} or AlphaPose \cite{xiu2018pose}. As such, skeleton data are basically extracted from image frames. This is why image sequences and skeleton sequences have similar time features. There are some studies on recognizing gestures with skeleton data only \cite{kim2016weighted, shi2019skeleton}. Shi \textit{et al.} \cite{shi2019skeleton} proposed a method of sequentially applying spatial attention modules and temporal attention modules to extract features to proceed with gesture recognition using selected features. They have shown that gesture recognition is possible with skeleton data alone, but there are limitations in distinguishing similar behaviors; that's why we decided to utilize skeleton data to select keyframes of gestures. \noindent\textbf{Two-Stream Network} Multimodal networks use multistream networks that process each modality separately. The network can be learned efficiently because each modality has different characteristics and each type of data can fit efficiently. In addition, the pathways can be divided into spatial and temporal pathways. In the spatial pathway, they focus on the spatial features of objects that can be drawn from each frame, and in the temporal pathway, they focus on the temporal features of motion through the frames. To fuse the processed features effectively, we use a lateral connection. Feichtenhofer \textit{et al.} \cite{feichtenhofer2016convolutional} proposed a way to fuse optical flow with video sequences to achieve better gesture recognition results. 
Through a lateral connection and fusion between the pathways, information about the optical flow and video sequences is fused, helping the network distinguish the gestures well. We constructed a network with two pathways -- the keyframe pathway and the temporal-attention pathway. We also propose the keyframe to temporal attention (KTT) unit and the temporal attention to keyframe (TTK) unit for efficient exchanges of information, which are discussed in Section 3. \section{The Proposed Method} Our goal is to achieve efficient gesture recognition using skeleton and video data. We first propose a module using skeleton data to pull the keyframe out of the video (Section 3.1). Also, we present a network that can effectively deliver the keyframe feature (Section 3.2). \subsection{Skeleton-Based Keyframe Selection Module} As can be found in Fig. \ref{2}, the keyframe selection module uses long short-term memory and attention mechanisms to extract keyframes. First, the skeleton sequence enters the long short-term memory. Each step updates the hidden state of the long short-term memory, and the updated hidden value ($h_t$) passes through a multilayer perceptron to form a query vector. A key vector is formed through a $1 \times 1$ convolution over the skeleton sequence. The attention scores are obtained by multiplying the matrix of query vectors with the key vectors. The keyframe of the skeleton sequence is selected by applying the softmax function and then the maximum value function to the attention scores. \begin{figure}[t] \centerline{\includegraphics[width=0.49\textwidth]{key-frame_eps_final.png}} \caption{Structure of the keyframe selection module using skeleton data. } \label{2} \end{figure} Two criteria must be satisfied to utilize the skeleton-based keyframe selection module. 1) Skeleton data contains sufficient information for action recognition. 2) Keyframes in skeleton data and video sequences should be similar. For the first prerequisite, as demonstrated by Shi \textit{et al.} \cite{shi2019skeleton}, skeleton data contains sufficient information for behavior recognition, since behavior recognition is possible with skeleton data alone. Experiments conducted to prove the second prerequisite are discussed in Section 4.1. \subsection{Proposed Network} The goal of the BCCN is for the two pathways that make up the network to learn from each other so that they better focus on the regions needed to recognize gestures. Rather than simply drawing out temporal and spatial features, each characteristic complements the other to achieve better gesture recognition results. \noindent\textbf{BCCN} The architecture of the proposed BCCN can be found in Fig. \ref{3}. It has a two-pathway network structure connected continuously in lateral directions. The first pathway contains keyframes selected by the previously proposed keyframe selection module, and the second pathway contains general video sequences. Features are learned by a 3D convolution layer in each pathway, and the network provides features to each pathway by sequentially applying the KTT unit and the TTK unit. \begin{figure*}[t] \centerline{\includegraphics[width=.98\textwidth]{BCCN_eps_final.png}} \caption{The architecture of the proposed network BCCN. The BCCN consists of three units: the skeleton-based keyframe selection module, the keyframe to temporal attention (KTT) unit, and the temporal attention to keyframe (TTK) unit. The units help the network to maintain efficient spatiotemporal features.
} \label{3} \end{figure*} The features obtained from the first pathway are mainly focused on spatial characteristics. Since the first pathway contains the keyframe extracted from the video, we extract the keyframe spatial feature on the basis that it carries much of the information that is important for gesture recognition. The extracted keyframe features are provided to the second pathway via the KTT unit to help the network determine gestures by weighting the keyframe features. In the second pathway, the number of input frames is increased, and the extracted features are more focused on temporal semantics. The extracted temporal-attention features are supplied to the first pathway through the TTK unit. The TTK unit also considers spatial aspects of the temporal-attention features that are less dependent on the size of the features. \noindent\textbf{KTT unit} The KTT unit, as can be found in Fig. \ref{4}, helps the network focus better on keyframe features. The unit inflates the keyframe features, which are selected through the keyframe selection module, and supplies them to the second pathway. The keyframe features in the first pathway have a small dimension on the time axis but a large channel dimension, and contain a variety of spatial information. Therefore, to supply the keyframe features from the first pathway to the second pathway, the channels must be reduced and the features amplified over time. To this end, the channels are first aligned through a 3D convolution layer, and the features are then inflated over time to unify the time steps. \begin{figure}[t] \centerline{\includegraphics[width=0.49\textwidth]{KTT.PNG}} \caption{The architecture of the keyframe to temporal attention (KTT) unit. The unit inflates the keyframe features to give weights to the features through the network. } \label{4} \end{figure} By adding the keyframe features to each time step of the temporal-attention features, the influence of the keyframe features can be enhanced. It is necessary to focus more on the spatial features to carefully distinguish similar behaviors. Therefore, strengthening the keyframe features through the KTT unit serves to better distinguish similar behaviors. The second pathway, enhanced through the KTT unit, is followed by the TTK unit. \noindent\textbf{TTK unit} The TTK unit, as can be found in Fig. \ref{5}, serves to reinforce the spatial features by extracting spatiotemporal information from the temporal-attention features drawn from the second pathway and feeding it to the first pathway. Because the temporal-attention features in the second pathway focus on temporal information, they have a large dimension on the time axis but fewer channels. Therefore, to supply the spatiotemporal features from the second pathway to the first pathway, the channels must be increased and the features compressed over time. For this purpose, the channel and time dimensions are matched through the pyramidal 3D convolution layer, and the result is supplied to the first pathway at each time step. \begin{figure}[t] \centerline{\includegraphics[width=0.49\textwidth]{TTK.PNG}} \caption{The architecture of the temporal attention to keyframe (TTK) unit. The unit applies the pyramidal 3D convolution layer to extract size-insensitive spatiotemporal features.
} \label{5} \end{figure} The time dimension of the keyframe features can also be aligned with the time dimension of the temporal-attention features, which flow along the second pathway, by using a basic 3D convolution layer. However, the reason for applying the pyramidal 3D convolution layer is to make the spatiotemporal features insensitive to the size of the features when they are extracted. For good gesture recognition, all of the features must be well recognized regardless of their size. In other words, the network should work regardless of the size of the behavior or objects. To this end, varying the kernel size varies the receptive field, which in turn varies the size of the regions from which the spatiotemporal features are extracted. In other words, the spatiotemporal features, made insensitive to differences in size through the TTK unit, improve the performance of gesture recognition. The features from the two pathways that pass through the KTT unit and the TTK unit know where to look through each other's information. In particular, since the second pathway develops from the existing temporal-attention features, it is possible to focus on the important part and extract better temporal features for that part. \section{Experiments} \noindent\textbf{Datasets} We conducted training on three main datasets: Chalearn \cite{escalera2014chalearn}, ETRI-Activity3D \cite{jang2020etri}, and Toyota Smarthome \cite{das2019toyota}. These datasets contain Italian gestures, gestures in the indoor environment of an apartment, and gestures in the real-life environment of the elderly, respectively. We used all of the Chalearn dataset and 14 classes from the ETRI-Activity3D and Toyota Smarthome datasets. Each dataset covers different age groups, so the speed of the actions differs, and each video has a different duration. It is important to select spatiotemporal features efficiently to distinguish gestures accordingly. Using the video and skeleton data present in each dataset, we conducted the following experiments. If a video contained multiple gestures, it was split into multiple video clips before the experiments. \noindent\textbf{Implementation Details} We conducted the experiments using PyTorch. For fast learning, we resized the frame inputs for each image sequence to 178 $\times$ 120. For skeleton data, experiments were conducted using only the \textit{x}-axis, \textit{y}-axis, and \textit{z}-axis values for each joint. The skeleton-based keyframe selection module used 1,024 hidden units and 512 key dimensions. We trained the network for 150 epochs. We used a momentum of 0.9 and a weight decay of $10^{-4}$. \noindent\textbf{Evaluation} We measured the recognition accuracy for each gesture for evaluation. If the top-predicted class was the same as the actual label, the prediction was counted as correct. We also checked the activation map to see whether the part that should be activated to determine the actual label was indeed activated, judging that better activation indicates more efficient network learning. \noindent\textbf{Accuracy and Computational Load Trade-off} There is a trade-off between accuracy and computational load. For good performance, the network needs to grow, and as the network grows, so does the computational load. Therefore, the goal was to obtain a large increase in accuracy at the cost of only a small increase in computational load.
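For concreteness, the sketch below shows a minimal PyTorch implementation of the skeleton-based keyframe selector of Section 3.1, configured with the 1,024 hidden units and 512 key dimensions listed above; the joint layout (25 joints with 3 coordinates per frame) and the exact layer shapes are illustrative assumptions rather than our released code.
\begin{verbatim}
# Minimal PyTorch sketch of the skeleton-based keyframe selector (Section 3.1).
# Input layout (25 joints x 3 coordinates per frame) and layer shapes are
# assumptions made for illustration.
import torch
import torch.nn as nn

class KeyframeSelector(nn.Module):
    def __init__(self, joint_dim=75, hidden=1024, key_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(joint_dim, hidden, batch_first=True)
        self.query_mlp = nn.Sequential(nn.Linear(hidden, key_dim), nn.ReLU(),
                                       nn.Linear(key_dim, key_dim))
        # 1x1 convolution over the sequence gives one key vector per frame.
        self.key_conv = nn.Conv1d(joint_dim, key_dim, kernel_size=1)

    def forward(self, skeleton):                 # skeleton: (B, T, joint_dim)
        _, (h_t, _) = self.lstm(skeleton)        # final hidden state h_t
        query = self.query_mlp(h_t[-1])          # (B, key_dim)
        keys = self.key_conv(skeleton.transpose(1, 2))            # (B, key_dim, T)
        scores = torch.bmm(query.unsqueeze(1), keys).squeeze(1)   # (B, T)
        attn = torch.softmax(scores, dim=1)
        return attn.argmax(dim=1)                # keyframe index per clip

# Usage sketch: the returned indices pick the corresponding RGB frames as the
# input of the keyframe pathway.
\end{verbatim}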
\subsection{Prerequisite Experiment} First, we experimented by diversifying the video frames used to classify gestures to prove that keyframes drawn using skeleton data can be used as keyframes for videos. The experiment we designed is shown in Fig. \ref{6}. This experiment identifies changes of the gesture recognition performance depending on input video frames. \begin{figure} \centering \subfigure[Using the starting frame as an input]{\label{6.1}\includegraphics[width=0.49\textwidth]{pre_1_final.png}} \subfigure[Using the keyframe as an input]{\label{6.2}\includegraphics[width=0.49\textwidth]{pre_2_final.png}} \caption{An overview of prerequisite experiment} \label{6} \end{figure} In Fig. \ref{6.1}, the input takes the skeleton sequence and the starting frame of the image sequence. The skeleton sequence extends the time dimension and the channel dimension widely to form a single frame, the features of which are extracted through the spatial attention module and temporal attention module. These extracted skeleton features and extracted features from the starting frame are fused and judged through a fully connected layer. By comparison, in Fig. \ref{6.2}, instead of the starting frame of the image sequence, the keyframe of the video drawn through the keyframe selection module is introduced. The skeleton features and extracted features from the keyframe are fused and judged. The performance difference between the two experiments can prove that keyframes drawn using skeleton data can be used as keyframes of videos. \noindent\textbf{Impact of skeleton-based keyframe selection module} Table \ref{table:Prerequisite} represents the measurements of accuracy of the underlying network using only skeleton sequences and that of putting the starting frame and keyframe together as an input. Compared to the results tested using only skeleton sequences, we can see a significant increase in performance when image frames are put together. This shows that recognizing gestures with only \textit{x}-axis, \textit{y}-axis, and \textit{z}-axis values of each joint is insufficient in determining similar behaviors. It is necessary to use the spatial features in the image to achieve better results. Furthermore, the results of the difference in frame input show that the keyframe drawn using skeleton data plays an important role. This means that the keyframe spatial feature contains more important information in recognizing gestures than the spatial feature held by a typical starting frame. It is advantageous to use the skeleton-based keyframe selection module, which can achieve outstanding performance. \begin{table}[t] \renewcommand*{\arraystretch}{1.4} \begin{center} \resizebox{0.45\textwidth}{!}{ \begin{tabular} {ccccccccccc} \hline\hline & Network && Accuracy ($\%$) && Difference ($\%p$) &\\ \hline & Skeleton only && 64.72 && +0 &\\ & Skeleton + starting frame && 67.44 && +2.72 &\\ & \textbf{Skeleton + keyframe} && \textbf{69.65} && \textbf{+4.93} &\\ \hline\hline \end{tabular}} \end{center} \caption{Recognition accuracy from a prerequisite experiment} \label{table:Prerequisite} \end{table} \subsection{Fusion Method} We experimented by diversifying our method of utilizing skeleton data according to two methods: 1) After constructing the skeleton's frame using skeleton data in the same way as Section 4.1, we proceeded with gesture recognition by drawing features from those frames and concatenating them with features in the video sequence. 
2) We construct the first pathway using the skeleton-based keyframe selection module and then proceed with gesture recognition. As shown in Fig. \ref{7.1}, for the first method, skeleton sequences, high-frame-rate image sequences, and low-frame-rate image sequences enter the network to form three pathways. On the other hand, in Fig. \ref{7.2}, the network consists of two pathways, with a high-frame-rate image sequence and a keyframe image sequence entering the network. The performance difference between the two experiments allows us to identify the more effective fusion method.
\begin{figure} \centering \subfigure[Utilizing skeleton features in a separate pathway]{\label{7.1}\includegraphics[width=0.49\textwidth]{fusion_1_final.png}} \subfigure[Utilizing skeleton data as a keyframe selector]{\label{7.2}\includegraphics[width=0.49\textwidth]{fusion_2_final.png}} \caption{Experiments to determine the influence of fusion methods} \label{7} \end{figure}
We measured gesture recognition accuracy to evaluate the performance of each method. We fine-tuned our network on each dataset and compared it to the following networks. \begin{itemize} \item C2D \cite{he2016deep}: This is a ResNet-50 frame-based model. We took a model pretrained on ImageNet and continued training from it. Three-dimensional video data was processed using pooling in the time-axis direction. \item I3D \cite{carreira2017quo}: Similarly, we used a pretrained model from ImageNet, copying the parameters three times along the time axis and scaling their values by $1 \over 3$ (see the code sketch below). \item Slowfast \cite{feichtenhofer2019slowfast}: This is a two-stream, state-of-the-art network. We exploited the high-frame-rate image sequence and the low-frame-rate image sequence for gesture recognition. \end{itemize} We tested the above networks, and we conducted experiments using the slowfast network as the base network for exploiting skeleton data. Experimental results from each fusion method demonstrate that it is better to select keyframes in a video from skeleton data than to draw features from the skeleton data directly.
\noindent\textbf{Effect of varying fusion methods} Table \ref{2_t} reports the results of testing the above networks. The C2D, I3D, and slowfast networks all used video sequences only. We also ran tests by adding skeleton features to the slowfast network and by using the skeleton-based keyframe selection module. Compared to the C2D and I3D models, we can confirm that slowfast networks perform very well because they extract spatial and temporal properties effectively. In the low-frame-rate pathway, they extract the spatial features for each frame. In the high-frame-rate pathway, they extract the temporal features for the video sequences. By separating and extracting each characteristic, the network achieves good results, as it prevents different features from containing overlapping information.
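As referenced in the network list above, the I3D initialization can be illustrated with a small sketch. The helper below is an illustrative sketch of this inflation (the exact implementation may differ): a pretrained 2D convolution kernel is repeated three times along the temporal axis and divided by three, so that the inflated 3D filter produces the same response as the 2D filter on a temporally constant input.
\begin{verbatim}
# Sketch of inflating a pretrained 2D conv into a 3D conv (I3D-style):
# repeat the kernel along the time axis and scale by 1/time_kernel.
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, time_kernel: int = 3) -> nn.Conv3d:
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(time_kernel, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(time_kernel // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        # (out, in, kH, kW) -> (out, in, T, kH, kW), divided by T
        w3d = conv2d.weight.unsqueeze(2).repeat(1, 1, time_kernel, 1, 1) / time_kernel
        conv3d.weight.copy_(w3d)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# e.g. inflate the first layer of an ImageNet-pretrained ResNet-50:
#   resnet = torchvision.models.resnet50(pretrained=True)
#   conv1_3d = inflate_conv2d(resnet.conv1)
\end{verbatim}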
\begin{table}[t] \renewcommand*{\arraystretch}{1.4} \begin{center} \resizebox{0.45\textwidth}{!}{ \begin{tabular} {ccccccccccc} \hline\hline & Network && Accuracy ($\%$) && Difference ($\%p$) &\\ \hline & C2D \cite{he2016deep} && 80.338 && -5.011 &\\ & I3D \cite{carreira2017quo} && 82.673 && -2.676 &\\ & Slowfast \cite{feichtenhofer2019slowfast} && 85.349 && +0 &\\ & Slowfast + skeleton feature && 85.354 && +0.005 &\\ & \textbf{Slowfast + keyframe selection module} && \textbf{86.151} && \textbf{+0.802} &\\ \hline\hline \end{tabular}} \end{center} \caption{Recognition accuracy according to fusion methods} \label{2_t} \end{table}
The method of adding a new pathway using skeleton features did not show a significant increase in performance. This is because the information carried by the skeleton features largely overlaps with that carried by the video sequences. Because skeleton data itself is obtained from a video sequence, it is difficult for skeleton features to contain important information beyond what the video data already provides. However, we can confirm that when the skeleton pathway is added, the network still slightly outperforms the baseline. When keyframes are selected using skeleton sequences, the network achieves better performance than conventional networks. This implies that, compared to conventional networks, the network selects frames that are effective for recognizing gestures and delivers their spatial features effectively. However, there remains room for improvement in letting the high-frame-rate pathway decide where to look in order to obtain efficient temporal features.
\subsection{Gesture Recognition}
We proposed the BCCN for effective gesture recognition and conducted comparative experiments with state-of-the-art networks. We checked the recognition accuracy of the gestures and the activation map of the last layer of the network. The gesture recognition accuracy was tested by fine-tuning our network on each dataset, just like the previous experiments. We used GradCAM \cite{selvaraju2017grad} to check the activation map of the layer. If the activated part overlapped with the part needed to determine the actual behavior, it was judged as a good activation, and if not, as a bad activation. Since the activation allows us to determine how efficiently the network has learned, it is important to obtain good activation for good gesture recognition.
\noindent\textbf{Impact of the proposed network} Table \ref{3_t} reports the results for the existing networks and the proposed network. The proposed network performed better than the slowfast network, which was previously considered state-of-the-art, as well as the other networks. This means that the keyframe drawn by the BCCN contained more information, and that this information was well communicated through the network. We obtained better results in most classes, especially when classifying small motions.
\begin{table}[t] \renewcommand*{\arraystretch}{1.4} \begin{center} \resizebox{0.45\textwidth}{!}{ \begin{tabular} {ccccccccccc} \hline\hline & Network && Accuracy ($\%$) &\\ \hline & C2D && 80.338 &\\ & I3D && 82.673 &\\ & Slowfast && 85.349 &\\ & Slowfast + skeleton features && 85.354 &\\ & \textbf{Slowfast + keyframe selection module}&& \textbf{86.151} &\\ & \textbf{Ours (BCCN)} && \textbf{87.775} &\\ \hline\hline \end{tabular}} \end{center} \caption{Comparison of recognition accuracy for different networks} \label{3_t} \end{table}
\begin{figure}[t] \centerline{\includegraphics[width=0.30\textwidth]{graph_3.PNG}} \caption{Computational load analysis for the networks} \label{4_t} \end{figure}
The BCCN was able to achieve good performance without significantly increasing the number of parameters. As shown in Fig. \ref{4_t}, the BCCN outperformed the existing networks with only a small increase in the number of parameters. We confirm that our method is relatively superior in the trade-off between accuracy and computational load. As shown in Table \ref{5_t}, we tested the effectiveness of the three units comprising the BCCN: the skeleton-based keyframe selection module, the KTT unit, and the TTK unit. As previously demonstrated in several experiments, when we added the skeleton-based keyframe selection module, we could see that the network was good at extracting important spatial features. However, this keyframe feature alone did not effectively capture the temporal semantics. The use of the spatiotemporal features obtained from each pathway through the KTT and TTK units is an important factor in gesture recognition. This means that extracting the spatial features of the keyframe is important, which in turn is important for extracting spatiotemporal semantics.
\begin{table}[t] \renewcommand*{\arraystretch}{1.4} \begin{center} \resizebox{0.45\textwidth}{!}{ \begin{tabular} {ccccccccccc} \hline\hline \multirow{2}{*}{Network} & \multirow{2}{*}{Keyframe selection} & \multirow{2}{*}{KTT unit} & \multirow{2}{*}{TTK unit} & \multicolumn{3}{c}{Higher is better} \\ & & & & Accuracy(\%) & Spatial & Temporal\\ \hline Slowfast & & & & 85.349 & Bad & Common\\ Ours & $\bigcirc$ & & & 82.995 & Common & Bad\\ Ours & $\bigcirc$ & & $\bigcirc$ & 86.224 & Common & Common\\ Ours & $\bigcirc$ & $\bigcirc$ & & 86.356 & Common & Good\\ \textbf{Ours (BCCN)} & \textbf{$\bigcirc$} & \textbf{$\bigcirc$} & \textbf{$\bigcirc$} & \textbf{87.775} & \textbf{Good} & \textbf{Good}\\ \hline\hline \end{tabular}} \end{center} \caption{Ablation study} \label{5_t} \end{table}
The left part of each figure shown in Fig. \ref{8} is the activation map for the pathway containing keyframes, and the right part of each figure is the activation map for the temporal-attention pathway. The keyframe activation map is relatively well activated for keyframe spatial features and carries much information about where to look when recognizing human gestures. The activation map of the temporal-attention pathway is relatively well activated for temporal semantics. It shows significant activation at the timings that are important for gesture recognition, as well as for the temporal information within the frames. Fig. \ref{8.1} shows the activation maps of the base slowfast network, where neither the spatial nor the temporal activation maps capture hand movements well. Fig. \ref{8.2} shows that when the keyframe selection module is applied, the network focuses better on hand movements than the base network.
In addition, the temporal activation map can also be effective when hand movements are important. The BCCN, shown in Fig. \ref{8.3}, was successfully activated on both the spatial and temporal activation maps compared to the other networks. This means that the BCCN successfully learned the information necessary for recognizing gestures and was efficiently trained to focus on the important parts. In particular, when looking at the temporal activation maps, we can see that the spatiotemporal features are well activated, helping to recognize gestures. Even when obtaining temporal information, the BCCN could focus on the spatial parts and obtain better temporal features.
\begin{figure*}[t] \centering \subfigure[Slowfast network]{\label{8.1}\includegraphics[width=.33\textwidth, height=16cm]{slowfast_res.png}} \subfigure[Slowfast network + keyframe selection module]{\label{8.2}\includegraphics[width=.33\textwidth, height=16cm]{module_res.png}} \subfigure[BCCN]{\label{8.3}\includegraphics[width=.33\textwidth, height=16cm]{proposed_res.png}} \caption{The activation map from each network for each pathway.} \label{8} \end{figure*}
\section{Conclusion}
We introduced a method to leverage skeleton data for effective gesture recognition. The skeleton-based keyframe selection module is applicable to all networks, and with a small increase in computational load, it can achieve a superior performance increase over using the skeleton feature alone. In addition, we proposed the BCCN, a novel network that can effectively convey the features of the keyframe extracted through the skeleton-based keyframe selection module. The KTT unit, which effectively preserves the keyframe feature in the network, and the TTK unit, which makes the spatiotemporal features insensitive to the size of the features, were able to learn the features efficiently. Both units allow the two pathways that make up the BCCN to obtain better information from each other. As future work, we plan to expand the modalities. Each modality has its own characteristics that affect how efficiently features can be extracted. Focusing on specific features through such networks could help us perceive features more thoroughly, similar to human perception.
{\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{sec:intro} The studies related to black hole shadows have a long history if we trace their research trajectory. The earliest study was the deflection of light in a strong gravitational field, which was confirmed by the observation of the solar eclipse in 1919 and the subsequent developments associated with the gravitational lensing~\cite{Gott:1984ef,Blandford:1991xc,Bartelmann:1999yn,Lewis:2006fu,Bartelmann:2010fz}. The term ``shadow'' is currently the most common word but it was once called by several interesting names, such as the escape cone~\cite{Synge:1966okc}, the apparent boundary~\cite{Bardeen:1973tla}, and the critical curve~\cite{Gralla:2019xty}, etc., see Ref.~\cite{Perlick:2021aok} for a more detailed review. In 2000 it was proposed~\cite{Falcke:1999pj} that the shadow of Sagittarius A$^*$ with a thin accretion flow could be observable at submillimeter wavelengths and expected to image event horizons in the future. About two decades later, the image of the supermassive black hole located at the center of the Messier 87 galaxy (M87$^*$) was announced~\cite{EventHorizonTelescope:2019dse,EventHorizonTelescope:2019uob,EventHorizonTelescope:2019jan,EventHorizonTelescope:2019ths,EventHorizonTelescope:2019pgp,EventHorizonTelescope:2019ggy} and further the first polarized images of this black hole were released~\cite{EventHorizonTelescope:2021btj} by the Event Horizon Telescope (EHT). These achievements are inspiring great attentions in studying \cite{Bozza:2010xqn,Chen:2021gwy,Rahaman:2021web,Jafarzade:2020ova,Guo:2020nci,Bambhaniya:2021ugr,Wei:2013kza,Zhang:2021hit,Gan:2021xdl,Cunha:2019hzj,Peng:2020wun,Lima:2021las,Peng:2021osd,Churilova:2021tgn} various aspects of the shadow and observational appearance of black holes. One of the M87$^*$ features that could be observed is that a thick band of light outlines a dark area, that is, the shadow. Based on the general relativistic magnetohydrodynamics (GRMHD) simulations, the appearance of the M87$^*$ describes~\cite{EventHorizonTelescope:2019dse} the turbulent, thermal, and magnetized disks orbiting the Kerr black hole. Furthermore, a black hole is expected to show its shadow caused by the gravitational light bending and photon capture at its event horizon when there are transparent emissions near the black hole. The shadow radius is related to the photon ring which is a geometric property of a spacetime. It is natural to expect that the photon ring should not depend on the accretion surrounding the black hole but be determined by the black hole geometry, and that one should observe the same photon ring under various accretion models. Recently, a regularizing procedure, i.e. the introduction of a length scale or bounce parameter has been applied~\cite{Franzin:2021vnj} to the Reissner-Nordstr\"om (RN) black hole in order to generate a candidate spacetime labeled ``charged black-bounce'', ``black-bounce-Reissner-Nordstr\"om geometry'', or ``Reissner-Nordstr\"om-Simpson-Visser (RN-SV) spacetime''. This geometry has three properties: (i) It is globally free from curvature singularities; (ii) It passes all weak-field observational tests; (iii) It smoothly interpolates between regular black holes and charged traversable wormholes. 
\noindent Alternatively, the charged black-bounce can be constructed when an electromagnetic charge is introduced to a family of candidate spacetimes labeled ``black-bounce''~\cite{Simpson:2018tsi,Simpson:2019cer,Lobo:2020kxn,Lobo:2020ffi,Mazza:2021rgq}, where the black-bounce can be derived from the Schwarzschild black hole in terms of the regularizing procedure. The black-bounce without charge, interpolating between regular black holes and traversable wormholes, shares~\cite{Guerrero:2021ues} the same critical impact parameter as the Schwarzschild black hole because a length scale or bounce parameter in the black-bounce family does not change the critical impact parameter. We expect that the charged black-bounce shares the same critical impact parameter as the RN black hole, that is, the regularizing procedure does not alter the critical impact parameter whether it is applied to the Schwarzschild black hole or to the RN black hole. Moreover, we discuss the photon rings and shadows of charged black-bounces due to their close association with~\cite{Okyay:2021nnh,Bronnikov:2021liv} critical impact parameters. In this paper, we investigate the observational appearance of charged black-bounces, focusing on photon rings and shadows. We show the images of the appearance of charged black-bounces under various illumination conditions. These images can help us to test the strong gravitational field around a compact object described by the charged black-bounce.
The outline of this paper is as follows. In Sec.~\ref{sec:geometry} we briefly review the properties of a charged black-bounce geometry and then determine the boundaries of critical impact parameters. Next, we trace the light rays coming from the region near a charged black-bounce for the classification of light rays in Sec.~\ref{sec:light bending}. We investigate the appearance of the region near a charged black-bounce when emissions come from a thin disk accretion and a spherically symmetric infalling accretion in Sec.~\ref{sec:appearance}. Our results are discussed and compared with astronomical observations in Sec.~\ref{sec:compare}. Finally, we give our concluding remarks in Sec.~\ref{sec:con}.
\section{The field sources and null geodesics of Reissner-Nordstr\"om-Simpson-Visser spacetimes} \label{sec:geometry}
\subsection{Field sources for Einstein's equations}
We start with the ``regularizing procedure'' recently proposed in Ref.~\cite{Franzin:2021vnj}, in which the radial coordinate $r$ in the RN metric is replaced by $\sqrt{x^2+a^2}$, where the parameter $a$ is associated with the Planck length. The charged black-bounce is a one-parameter modification of the Reissner-Nordstr\"om black hole of General Relativity, and it is called the black-bounce-Reissner-Nordstr\"om (BB-RN) spacetime, the charged black-bounce spacetime, or the Reissner-Nordstr\"om-Simpson-Visser (RN-SV) spacetime. It can be obtained as an exact solution to the Einstein equation sourced by a combination of a minimally coupled phantom scalar field and a nonlinear electrodynamics field. The action reads \cite{Bronnikov:2021uta}, \begin{eqnarray} S=\int\sqrt{-g}\,d^4x \left( \mathcal{R} + 2 \epsilon \partial^\mu\phi \partial_\mu\phi - 2V(\phi) - \mathcal{L(F)} \right),\label{action} \end{eqnarray} where $\mathcal{L(F)}$ is the gauge-invariant Lagrangian density of nonlinear electrodynamics with $\mathcal{F}\equiv F_{\mu\nu}F^{\mu\nu}$ and $\epsilon=-1$ for a phantom scalar field.
The Lagrangian density and the potential of the uncharged scalar field $\phi(x)$ take the following forms, \begin{eqnarray} \mathcal{L(F)} = \frac{12 Ma^2}{5 (2q^2/\mathcal{F})^{5/4}} + \frac {2Q^2 \big[3(2q^2/\mathcal{F})^{1/2}-4a^2\big]}{3(2q^2/\mathcal{F})^{3/2}}, \label{V-fin} \end{eqnarray} and \begin{eqnarray} V(\phi) = \frac {2 \cos^6 \phi}{15 a^4} (6M a \sec \phi - 5Q^2), \end{eqnarray} where $M$ is the mass of the black-bounce, $q$ is the magnetic charge of the free nonlinear electrodynamics field, and $Q$ is the electric charge parameter. Varying Eq.~(\ref{action}) with respect to the metric yields Einstein's equation, \begin{eqnarray} G_{\mu\nu}=T_{\mu\nu}, \end{eqnarray} where the energy-momentum tensor, $T_{\mu\nu}=-T_{\mu\nu}[\phi]-T_{\mu\nu}[\mathcal{F}]$, is the combination of the energy-momentum tensor of the scalar field and that of the nonlinear electromagnetic field. One can obtain the RN-SV metric by following the standard procedure in the four-dimensional spacetime, \begin{eqnarray} ds^2=-A(x)dt^2+A^{-1}(x)dx^2+r^2(x)d\Omega^2, \qquad r(x)\equiv\sqrt{x^2+a^2}, \end{eqnarray} with \begin{eqnarray} A(x)=1-\frac{2M}{r(x)}+\frac{Q^2}{r^2(x)}. \end{eqnarray} The radial coordinate extends over the entire real domain, $x\in(-\infty,+\infty)$, and $d\Omega^2$ is the line element of a unit 2-sphere. This procedure is a smooth transformation\footnote{Note that this procedure is not a coordinate transformation~\cite{Simpson:2021vxo}, since it leaves $dx$ undisturbed and gives the metric components an explicit $x$-dependence.} which makes it evident that the charged black-bounce is a globally regular spacetime. The charged black-bounce recovers the RN geometry when the length scale $a$ approaches zero, whilst it recovers~\cite{Morris:1988tu,Boonserm:2018orb} the Morris-Thorne wormhole when $M\rightarrow0$ and $Q\rightarrow0$. Furthermore, its horizons are located at \begin{eqnarray} x_{\rm h}=S_1\sqrt{\left( M+S_2\sqrt{M^2-Q^2}\right) ^2-a^2}, \end{eqnarray} where $S_1,S_2=\pm1$; $S_1=1$ corresponds to our universe while $S_1=-1$ to its copy, and $S_2 =1$ indicates the outer horizon while $S_2 =-1$ the inner horizon. Due to the introduction of the bounce parameter, the geometry and the corresponding structure of event horizons are significantly deformed. The constraints on the three parameters $(M, Q, a)$ for the different geometric types are as follows: \begin{itemize} \item $|Q|< M$ and $a< M\pm\sqrt{M^2-Q^2}$: this case corresponds to a charged regular black hole with a standard outer (inner) horizon. \item $|Q|< M$ and $a= M \pm\sqrt{M^2-Q^2}$, or $|Q|= M$ and $a=M$: this case corresponds to a non-traversable wormhole since the geometry possesses an event horizon at the throat. \item $|Q|< M$ and $a> M \pm\sqrt{M^2-Q^2}$, or $|Q|= M$ and $a>M$: in this geometry, first the inner and then the outer horizons disappear, so this case corresponds to a traversable wormhole. \item $|Q|=M$ and $ a< M$: this case corresponds to an extreme black hole with an extremal horizon at $x_{\rm h}=\sqrt{M^2-a^2}$. \item $|Q|>M$: there are no horizons in this geometry, and this case corresponds to a traversable wormhole. \end{itemize}
\subsection{Null geodesics and impact parameters}
A freely falling massless particle moving along a null geodesic satisfies the equation, \begin{eqnarray} g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=0, \end{eqnarray} where the dot stands for the derivative with respect to the affine parameter $\lambda$.
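As a quick numerical illustration of the horizon structure classified above (this snippet is only an illustrative check, not part of the derivation), the following sketch evaluates the horizon radii in our universe ($S_1=1$) for given parameters $(M,Q,a)$; for instance, with $M=1$, $Q=0.1$, and $a=0.5$ only the outer horizon survives, at $x_{\rm h}\approx 1.93$, so the geometry is a charged regular black hole.
\begin{verbatim}
# Sketch: horizon radii x_h of the charged black-bounce for parameters (M, Q, a),
# using the expression above with S1 = +1 (our universe).
import numpy as np

def horizons(M, Q, a):
    if abs(Q) > M:                 # no horizons: traversable wormhole
        return []
    xs = []
    for S2 in (+1, -1):            # S2 = +1: outer horizon, S2 = -1: inner horizon
        val = (M + S2 * np.sqrt(M**2 - Q**2))**2 - a**2
        if val >= 0.0:             # keep only real horizon radii
            xs.append(float(np.sqrt(val)))
    return xs

print(horizons(1.0, 0.1, 0.5))     # -> [1.931...]: only the outer horizon exists
\end{verbatim}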
Without loss of generality, let us pay attention to the orbits in the equatorial hyperplane ($\theta=\pi/2$). Due to Killing symmetries in this spacetime, one has the conserved energy and conserved angular momentum, \begin{eqnarray} E=A(x)\dot{t}, \qquad L=r^2(x)\dot{\varphi}.\label{conengamom} \end{eqnarray} The impact parameter is defined by the ratio, \begin{eqnarray} b\equiv L/E,\label{imppar} \end{eqnarray} which is only relevant to the trajectory of the null geodesic. By re-parameterizing the affine parameter, we obtain the null geodesic equation as follows, \begin{eqnarray} \label{eq:geo} \dot{x}^2+V_{\rm eff}(x)=\frac{1}{b^2} , \end{eqnarray} with the effective potential, \begin{eqnarray} \label{eq:Veff} V_{\rm eff}(x)=\frac{A(x)}{r^2(x)}. \end{eqnarray} A particle moving along a geodesic with $b<b_c$ will fall into the event horizon of a charged black-bounce. Here the critical impact parameter $b_c$ in this spacetime is charge dependent, that is, \begin{eqnarray} b_c&=&\frac{r(x_{\rm ph})}{\sqrt{A(x_{\rm ph})}} \nonumber\\ &=& \frac{3 \mathcal{A}^{1/2}+\mathcal{B}}{\left( 9 M^2-8 M \left( 3 \mathcal{A}^{1/2}+\mathcal{B}\right)^{1/2} +6 \mathcal{A}^{1/2}+\mathcal{B}\right)^{1/2} }, \label{IP} \end{eqnarray} where $x_{\rm ph}$ is the photon sphere radius, $\mathcal{A}\equiv 9 M^4-8 M^2 Q^2$, and $\mathcal{B}\equiv 9M^2-4Q^2$. Based on Eq.~(\ref{IP}), we plot in Fig.~\ref{Ipact} the critical impact parameter with respect to the charge and mass. The critical impact parameter decreases continuously with increasing charge and reaches its minimum when $|Q|=M$. It is natural to have the maximum critical impact parameter when $|Q|=0$, where this maximum value equals~\cite{Gralla:2019xty} the critical impact parameter of Schwarzschild black holes. We note that the charged black-bounce shares\footnote{In black-bounce spacetimes, all black-bounce solutions have~\cite{Guerrero:2021ues} the same critical impact parameter as the Schwarzschild black hole has. Here the fact that the length scale $a$ does not appear in Eq.~(\ref{IP}) keeps the critical impact parameter unchanged between RN black holes and charged black-bounces.} the same critical impact parameter as the RN black hole. According to this property, we have the following two boundaries of the critical impact parameter in the charged black-bounce spacetime, \begin{eqnarray} b^{\rm min}_{c}=4M,\label{bcmin} \end{eqnarray} \begin{eqnarray} b^{\rm max}_{c}=3\sqrt{3}M.\label{bcmax} \end{eqnarray}
\begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{Ipact.pdf} \caption{The critical impact parameter as a function of $M$ and $Q$ in a charged black-bounce.} \label{Ipact} \end{figure}
\section{Rays tracing and light bending near a charged black-bounce} \label{sec:light bending}
A key procedure we have to follow before we capture the appearance of emissions is to trace the light rays coming from the region near a charged black-bounce. The total number of orbits is defined~\cite{Gralla:2019xty} as $n(b)\equiv \theta/(2\pi)$, where $\theta$ is the total change of azimuth angle.
It measures the number of times the null geodesics cross an equatorial plane and can be classified into three types: \begin{enumerate} \item Direct emission: $n < \frac{3}{4}$, which means a null geodesic intersects the equatorial plane only once; \item Lensed: $\frac{3}{4} < n < \frac{5}{4}$, which means a null geodesic intersects the equatorial plane twice; \item Photon ring: $n > \frac{5}{4}$, which means a null geodesic intersects the equatorial plane at least three times. \end{enumerate} The three types of light rays are plotted in Fig.~\ref{orbit} with the setting of $M=1$, $Q=0.1$, and $a=0.5$, in which we show the fractional number of orbits as a function of impact parameter $b$. The singularity of the total number appears at $b=b_c\approx 5.1875$, which belongs of course to the range depicted by Eqs.~(\ref{bcmin}) and (\ref{bcmax}), i.e., $b_c\in (4, 3\sqrt{3})$. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{Orbit.pdf} \caption{The number of orbits as a function of impact parameter $b$. The direct ($n<3/4$), lensed ($3/4<n<5/4$), and photon ring ($n>5/4$) trajectories are colored in black, gold, and red, respectively.} \label{orbit} \end{figure} We plot Fig.~\ref{Rays} in order to give a clearer picture of the photon trajectories for our rays tracing. If the right (East) of Fig.~\ref{Rays} is regarded as the ``north pole direction'' as used in Ref.~\cite{Gralla:2019xty}, we notice that the black curves cross the equatorial plane only once, the gold ones do at least twice, and the red ones do at least three times. In addition, it is worth noting that the light rays with $b=b_c$ spiral towards an unstable circular orbit on the photon sphere, see the green dashed curve around the black solid disk. The light rays with $b<b_c$ fall into the event horizon and only the light rays with $b>b_c$ can be captured by the observer. In the black-bounce spacetime ($Q=0$) there is another way~\cite{Simpson:2018tsi} that turns its geometry into a wormhole when $a>2M$. In such a wormhole case some rays with $b<b_c$, called~\cite{Guerrero:2021ues} the retro-orbits, contribute to the luminosity in the observer's screen. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{Rays01Q05L.png} \caption{The photon trajectories near the charged black-bounce (shown as a black solid disk) in $(r, \theta)$ Euclidean polar coordinates. The spacing of the impact parameter for plotting is chosen to be 1/10, 1/100, and 1/1000 in the direct (black), lensed (gold), and photon ring (red) bands, respectively.} \label{Rays} \end{figure} \section{Observational appearances of a charged black-bounce} \label{sec:appearance} With all the configurations in place we now focus our attention on the appearance of a charged black-bounce under various illumination conditions. An observer at the north pole in a face-on orientation to the equatorial plane will receive the isotropic emissions from the accretion disk lying in the equatorial plane. The observed intensity at frequency $\nu^{\prime}$ and the specific intensity of the emission at frequency $\nu$ are given \cite{Gralla:2019xty} by \begin{eqnarray} I^{\rm obs}_{\nu^{\prime}}=g^3I^{\rm em}_{\nu}, \qquad g=[A(x)]^{1/2}. 
\end{eqnarray} Integrating over all frequencies yields $I^{\rm obs}=g^4I^{\rm em}$, and summing all intensities from each intersection with the disk gives the total observed intensity, \begin{eqnarray} I^{\rm obs}(b)=\sum_mg^4I^{\rm em}|_{x=x_m(b)}, \end{eqnarray} where $x_m(b)$ is the radial coordinate of the $m$-th intersection with the disk plane hit by the light ray with impact parameter $b$ and it is also called the {\em transfer function}. \subsection{Thin disk accretions} The first model of accretion disks we consider is the emission from the innermost stable circular orbit (ISCO) whose intensity of emission satisfies \cite{Li:2021riw} the following rule, \begin{eqnarray} I^{\rm em}(x) = \begin{cases} \Big({x - (x_{\rm isco} - 1)}\Big)^{-2}, &\;\;\; x> x_{\rm isco},\\ 0, & \;\;\; x\leq x_{\rm isco}, \end{cases} \end{eqnarray} where $x_{\rm isco}$ is the radius of the innermost stable circular orbit. \begin{figure*}[htbp] \centering \includegraphics[width=5.9cm,height=5.0cm]{intenem01Q0a.png} \includegraphics[width=5.9cm,height=5.0cm]{intenob01Q0a.png} \includegraphics[width=5.9cm,height=5.0cm]{01Q0a.png} \includegraphics[width=5.9cm,height=5.0cm]{intenem01Q05a.png} \includegraphics[width=5.9cm,height=5.0cm]{intenob01Q05a.png} \includegraphics[width=5.9cm,height=5.0cm]{01Q05a.png} \includegraphics[width=5.9cm,height=5.0cm]{intenem01Q07L.png} \includegraphics[width=5.9cm,height=5.0cm]{intenob01Q07L.png} \includegraphics[width=5.9cm,height=5.0cm]{01Q07a.png} \caption{The observational appearance of emission near the charged black-bounce surrounded by a thin disk accretion, where the observer is located at the north pole, facing on the orientation to the equatorial plane. From left to right, the panels show the emitted intensity, observed intensity, and optical appearance, respectively. From top to bottom, the charge and bounce parameter are set to be $Q=0.1$ and $a=0$ (RN case, top), $Q=0.1$ and $a=0.5$ (middle), and $Q=0.1$ and $a=0.7$ (bottom), respectively.} \label{fig:thin} \end{figure*} In Fig.~\ref{fig:thin} we show the appearance of the region near a charged black-bounce surrounded by a thin disk accretion. There are two common features in the three cases. One feature is that the gravitational redshift effect does not significantly reduce the observed intensity because the emitted peak is outside the photon orbit in each case and the direct emission dominates the luminosity in the observational appearance. The other feature is that the photon rings have the same size for different values of the bounce parameter, which further confirms that the charged black-bounce has the same critical impact parameter with that of RN black holes. In the top row (RN case with $M=1$, $Q=0.1$, and $a=0$), the emitted intensity is sharply peaked near $x\approx5.98$ (see the left panel) which is outside the photon orbit at $x\approx2.99$. The middle row of Fig.~\ref{fig:thin} depicts the charged black-bounce with $M=1$, $Q=0.1$, and $a=0.5$. The emitted intensity peaks at $x\approx5.96$ which is also outside the photon orbit at $x\approx2.95$. And the bottom row of Fig.~\ref{fig:thin} presents the charged black-bounce with $M=1$, $Q=0.1$, and $a=0.7$. The emitted intensity and photon orbit are located at $x\approx5.58$ and $x\approx2.78$, respectively. The contributions from the photon ring and lensing ring to the total observed intensity are negligible because they appear in the innermost layer as a thin and faint ring (see the right panel). 
Thus, the main contribution to the total luminosity in the observational appearance comes from the direct emission that presents a wide and bright ring. Moreover, the observational appearance is also affected by the charge of black-bounces. In the appearance image shown in Fig.~\ref{fig:thin}, we already know that the bounce parameters have no observable effects on the rings and shadows. The appearance is the same for different values of bounce parameters. In the case of the same bounce parameter but different charges, from $0.1$ to $0.9$, we can observe the effect of charges on the shadow and appearance in more detail in Fig.~\ref{fig:thin1}. That is, there are two remarkable features in its appearance. One is that the increasing in charge leads to the decreasing of the radii of photon rings and shadows, and the other is that the increasing in charge leads to the increasing of the observed intensity, which is manifested in the appearance of brighter and wider rings. \begin{figure*}[htbp] \centering \includegraphics[width=5.9cm,height=5.0cm]{01Q.png} \includegraphics[width=5.9cm,height=5.0cm]{02Q.png} \includegraphics[width=5.9cm,height=5.0cm]{03Q.png} \includegraphics[width=5.9cm,height=5.0cm]{04Q.png} \includegraphics[width=5.9cm,height=5.0cm]{05Q.png} \includegraphics[width=5.9cm,height=5.0cm]{06Q.png} \includegraphics[width=5.9cm,height=5.0cm]{07Q.png} \includegraphics[width=5.9cm,height=5.0cm]{08Q.png} \includegraphics[width=5.9cm,height=5.0cm]{09Q.png} \caption{The observational appearance of emission near the charged black-bounce surrounded by a thin disk accretion for the choices of the bounce parameter $a=0.5$ and the charge $Q=0.1, 0.2, ..., 0.9$, respectively.} \label{fig:thin1} \end{figure*} \subsection{Spherically symmetric infalling accretions} In this subsection we investigate a spherically symmetric free-falling accretion. For this model the intersections of light rays and the accretion appear in the whole space surrounded by the spherical accretion. The observed intensity at the photon frequency $\nu_{\rm obs}$ can be obtained \cite{Bambi:2013nla} by integrating along the photon path with the impact parameter $b_\gamma$, \begin{eqnarray} I(\nu_{\rm obs},b_\gamma) = \int_\gamma g^3 j(\nu_e) dl_{\rm prop}, \label{eq:intensitysp} \end{eqnarray} where $j(\nu_e)$ is the emissivity per unit volume, $\nu_e$ is the photon frequency, and $dl_{\rm prop}$ is the infinitesimal proper length in the rest frame of emitter. The redshift is given by \begin{eqnarray} g = \frac{k_\mu u^\mu_{\rm obs}}{k_\nu u^\nu_e}, \end{eqnarray} where $k^\mu$ is the 4-velocity of photons and $u^\mu_{\rm obs}$ is the 4-velocity of the distant observer. In a static and spherically symmetric spacetime, the accretion falls with the 4-velocity $u^\mu_e$, \begin{eqnarray} u^t_e=\frac{1}{A(x)}, \qquad u^x_e=-\sqrt{1-A(x)}, \qquad u^{\theta}_e=u^{\varphi}_e=0, \end{eqnarray} and the 4-velocity of photons is \begin{eqnarray} k_t = \frac{1}{b}, \qquad k_x = \pm \frac{1}{b}\sqrt{\frac{1}{A(x)}\left( \frac{1}{A(x)} - \frac{b^2}{x^2}\right) }. \end{eqnarray} The proper distance along a photon path $\gamma$ reduces to \begin{eqnarray} dl_\gamma = k_\mu u^\mu_e d\lambda = \frac{k_t}{g |k_x|}dx. 
\end{eqnarray} For a simple model of the monochromatic emission, the specific emissivity can be determined~\cite{Bambi:2013nla,Saurabh:2020zqg,Zeng:2020dco,Qin:2020xzu} by the delta function with a $1/x^2$ radial profile, \begin{eqnarray} j(\nu_e) \propto \frac{\delta(\nu_e - \nu_*)}{x^2}, \end{eqnarray} where $\nu_*$ is the emitted frequency in the rest frame. Then integrating Eq.~(\ref{eq:intensitysp}) over all the observed frequencies, we derive the total observed photon flux, \begin{eqnarray} F_{\rm obs}(b_\gamma) \propto \int_\gamma \frac{g^3}{x^2} \frac{k_t}{|k_x|} dx. \end{eqnarray} The observed intensity and the observational appearance of images for a charged black-bounce are shown in Figs.~\ref{SphericalOB0.1Q} and \ref{SphericalOB0.5Q}, where $M=1$, $Q=0.1$ and $a=0.5$ are set in Fig.~\ref{SphericalOB0.1Q}, and $M=1$, $Q=0.5$ and $a=0.5$ are set in Fig.~\ref{SphericalOB0.5Q}. As in the thin disk accretion, there is a peak in the observed intensity of a spherically symmetric infalling accretion (see the top left panels of Figs.~\ref{SphericalOB0.1Q} and \ref{SphericalOB0.5Q}). With the increasing of the impact parameter, the observed intensity peaks sharply at $b=b_c$ and then gradually decreases. If comparing a spherically symmetric infalling accretion with a thin disk accretion, we find that the former has a very wide luminosity which can well be presented in the optical appearance. We can clearly see from the top right panels of Figs.~\ref{SphericalOB0.1Q} and \ref{SphericalOB0.5Q} that the black circle at the center is surrounded by a very thick and bright appearance. In the bottom rows of Figs.~\ref{SphericalOB0.1Q} and \ref{SphericalOB0.5Q} we take the field of view to 1/10 of the images in the top rows for a clear view of the photon ring. The photon ring located at $b\approx5.19$ in the case of $Q=0.1$ is shown in Fig.~\ref{SphericalOB0.1Q} and at $b\approx4.97$ in the case of $Q=0.5$ is shown in Fig.~\ref{SphericalOB0.5Q}. Furthermore, a large charge increases the intensity of incoming light but decreases the apparent size of the shadow. The peak of the observed intensity in the case of $Q=0.5$ is higher than that in the case of $Q=0.1$, see the bottom left panels in Figs.~\ref{SphericalOB0.1Q} and \ref{SphericalOB0.5Q}. The apparent size of shadows is reduced from 5.19 to 4.97 with the reduction of charge from 0.5 to 0.1. The difference of shadows is not obvious in the top rows of Figs.~\ref{SphericalOB0.1Q} and \ref{SphericalOB0.5Q} because the optical appearances are generated in a large field of view, but such a difference can be clearly seen if we look at the images in 1/10 of the field of view, see the bottom rows. The features of photon rings in a spherically symmetric infalling accretion are same as those in a thin disk accretion discussed in the previous subsection, so that the photon ring is independent of the types of the two accretions. \begin{figure*}[htbp] \centering \includegraphics[width=6.9cm,height=5.6cm]{intenspherical01Q05L60fov.png} \includegraphics[width=6.9cm]{SphericalBH01fov6005.png} \includegraphics[width=6.9cm,height=5.6cm]{intenspherical01Q05L6fov.png} \includegraphics[width=6.9cm]{SphericalBH01fov605.png} \caption{The observational appearance of emission near the charged black-bounce with $M=1$, $Q=0.1$ and $a=0.5$ surrounded by a spherically symmetric infalling accretion. From left to right, the panels show the observed intensity and optical appearance, respectively. 
The bottom row of the figure shows an image with 1/10 of the field of view of the top row.} \label{SphericalOB0.1Q} \end{figure*}
\begin{figure*}[htbp] \centering \includegraphics[width=6.9cm,height=5.6cm]{intenspherical05Q05L60fov.png} \includegraphics[width=6.9cm]{SphericalBH05fov6005.png} \includegraphics[width=6.9cm,height=5.6cm]{intenspherical05Q05L6fov.png} \includegraphics[width=6.9cm]{SphericalBH05fov605.png} \caption{The observational appearance of emission near the charged black-bounce with $M=1$, $Q=0.5$ and $a=0.5$ surrounded by a spherically symmetric infalling accretion. From left to right, the panels show the observed intensity and optical appearance, respectively. The bottom row of the figure shows an image with 1/10 of the field of view of the top row.} \label{SphericalOB0.5Q} \end{figure*}
\section{Discussions and comparisons with astronomical observations} \label{sec:compare}
\subsection{Constraints on the shadow size and charge from the EHT observations}
The variation of shadow radii for a nonrotating object with one (electric) charge or for a rotating object with two charges (one electric and the other the angular momentum) has been studied~\cite{EventHorizonTelescope:2021dqv}, where the electric charge gives the same constraint on shadows as the angular momentum does in the Kerr black hole, i.e., increasing the charge reduces the apparent size of the shadow. This is clearly consistent with our results. On the other hand, the charge is constrained by the reconstructed shadow size of the latest EHT observations. For RN black holes, the bound from the EHT M87$^*$ observations was given~\cite{EventHorizonTelescope:2021dqv} by \begin{eqnarray} 0<Q\lesssim0.9. \end{eqnarray} Recently, different charge bounds have been reported~\cite{Vagnozzi:2022moj} for different types of geometries, such as singular black holes, regular black holes, and wormholes. We emphasize that the RN-SV solution is compatible with the various charge bounds in both theoretical and observational aspects. One reason is that the bounce parameter has no observable effect on the photon ring, and the other is that the RN-SV solution theoretically shares the same critical impact parameter as the RN solution.
\subsection{Photon rings and shadows of accreting black holes}
The question of whether the sizes of photon rings and shadows are affected by accretion details has been debated. It was claimed~\cite{Gralla:2019xty} that the size of shadows depends on emission details. However, for a simple model --- the Schwarzschild black hole surrounded by a thin disk accretion --- it was shown~\cite{Narayan:2019imo} that the size of shadows is an intrinsic signature of the spacetime geometry and is hardly affected by accretions. That is, Ref.~\cite{Narayan:2019imo} provides one counterexample to Ref.~\cite{Gralla:2019xty}. We noticed in Sec. IV that the photon ring remains unchanged in the thin disk and spherical accretions, which provides evidence that the photon ring is an intrinsic property of the spacetime geometry. In addition, one recent work \cite{Chael:2021rjo} supports our opinion and that of Ref.~\cite{Gralla:2019xty}, where the photon ring is an intrinsic property but the size of shadows depends on the details of the emission region for a thin disk accretion.
\section{Concluding remarks} \label{sec:con}
In the present work we have studied the photon ring, shadow, and optical appearance of a charged black-bounce under various illumination conditions.
The introduction of a charge term in a black-bounce spacetime decreases the critical impact parameter, but the introduction of a length scale in the regularizing procedure does not change the critical impact parameter. We determine the upper and lower limits of the critical impact parameter on the premise of preserving the regularity in charged black-bounces interpolating between regular black holes and charged traversable wormholes. We also notice that the charged black-bounces and RN black holes share the same critical impact parameter. The relation between the impact parameter and the charge is reflected more clearly in the observational appearance of the region near a charged black-bounce when there are emissions from a thin disk accretion and a spherically symmetric infalling accretion. We find that a large charge increases the intensity of incoming light but decreases the apparent size of shadows, and that the photon rings remain unchanged in the two different accretion models. These results are consistent with the recent observations, which confirms that the photon ring is an intrinsic property of the spacetime geometry.
\section*{Acknowledgments}
The authors would like to thank the anonymous referee for the helpful comments that greatly improved this work. This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 11675081 and 12175108.
\section{Introduction}
Dance and music are intimately related. They share a movement form: in dance, movements are articulated visually as body motion, whereas in music, movements manifest themselves in an auditory and allegorical manner. These movements, whether visual in dance or auditory in music, evoke emotions and feelings of intentionality. Looking at this intimate connection between dance and music from an artificial intelligence perspective, an intriguing question is whether a computational model can generate a coherent and meaningful dance sequence for a given piece of music. This is an ambitious task that is useful for simulation and behavior understanding, and would ultimately benefit the vast community of dancers and musicians.
\begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{Figures/Problem.png} \end{center} \caption{The proposed MDOT-Net generates a matching dance sequence given a music input. We model different genres of music and dance styles (e.g. cha cha and tango) by distributions lying within \emph{Music Spaces} and an \emph{Articulated Pose Manifold}, where the authenticity of the generated dance distributions is evaluated by an \emph{Optimal Transport Distance} and the harmony between the music and the generated dance is measured by a \emph{Gromov-Wasserstein Distance}.} \label{fig:problem} \end{figure}
In this paper, we present a framework for generating 3D dance choreographies from music sequences. There are multiple challenges to be addressed for this task. 1) The generated dance motions have to be realistic and adhere to the idiosyncratic distinctions of the dance style. For example, a generated waltz sequence should reflect stylistic elements that are recognizable (even to the non-expert observer). 2) Dance choreography is inherently diverse and multiple choreographic interpretations for the same musical piece are ubiquitous. An adequate computational model would have to generate diverse and multimodal dance kinematics. 3) Choreographing for a piece of music has to take into consideration the rhythmic articulation, melody, theme and variation to achieve an organic unity, a challenging feat even for a professional human expert. The generated dance should be intricately bonded with the music input through a shared intentionality of the movement form, and reflect the melodic styles and rhythmic articulation.
Earlier works typically adopted a similarity retrieval approach \cite{shiratori2006dancing,fan2011example,lee2013music} which lacks creativity. The sequence-to-sequence modeling approach in \cite{tang2018dance} is also limited to a single output and unable to generate diverse dances for a given piece of input music. A recent class of works turned to Generative Adversarial Networks (GANs) \cite{lee2019dancing,ren2020self} to enable multimodal generation and enhance diversity. However, both \cite{lee2019dancing,ren2020self} focused on 2D choreographies, which lack the dynamic richness and pose realism of 3D choreographies. Furthermore, tuning the adversarial training schemes in these GAN approaches is an arduous task \cite{artetxe2018robust}. Discriminating the music and dance correspondence by mapping to a common embedding space also tends to be inadequate, resulting in a lack of coherence between the dance and music. To address these challenges and overcome the shortcomings of existing methods, we leverage optimal transport (OT) theory \cite{villani2008optimal} and propose a Music-to-Dance with Optimal Transport Network (MDOT-Net).
As illustrated in Figure~\ref{fig:problem}, for a given music input, MDOT-Net generates diverse dance sequences that correspond to trajectories over an articulated pose manifold. Directly working with dance distributions supported on the articulated pose manifold is advantageous for the realism of the dance poses, allowing subtle stylistic distinctions and nuances to be reflected. We evaluate the optimal transport distance between the generated and data distributions on this manifold. This offers several advantages such as the capability to handle non-overlapping distributions (a major issue for the Jensen-Shannon divergence) \cite{arjovsky2017wasserstein} and a non-divergent generator loss obtained by reframing the adversarial training as an optimization problem \cite{genevay2018learning,salimans2018improving}. Since dance and music exist in different domains, it is difficult to quantitatively gauge their differences directly. Therefore, we propose a Gromov-Wasserstein distance \cite{memoli2011gromov,bunne2019learning} to compare distributions over different domains (music space and articulated pose manifold). The Gromov-Wasserstein distance compares distributions in \emph{relational} terms. The intuition is that for a matching pair of dance and music $(D,M)$, a generated dance sequence $D'$ is likely a good match for music sequence $M'$ if $D'$ is close to $D$ and $M'$ is close to $M$. Similar to how the optimal transport distance facilitates adversarial training, the Gromov-Wasserstein distance enables a more efficient approach to assessing the music and dance correspondence.
Our contributions are summarized as follows. 1) We develop a novel optimal transport framework for music-to-dance generation. The authenticity of the generated dance is measured via an optimal transport distance on the manifold of articulated poses. 2) A Gromov-Wasserstein distance is incorporated to facilitate learning cross-modal generation from the music space to the articulated pose manifold through a \emph{relational} rather than absolute measure of the music and dance similarity. 3) Our MDOT-Net can generate realistic and diverse 3D dance sequences faithful to the rhythm and melody of a given music input.
\section{Related Works}
\textbf{Optimal Transport for Generative Modeling} \quad Optimal transport \cite{villani2008optimal} defines a metric distance for probability distributions over arbitrary spaces. The generative modeling problem is reframed as finding an optimal transport for aligning the model distribution and the data distribution. However, solving the optimal transport problem is expensive, and this computational burden presented major hurdles for employing optimal transport for generative modeling. The Wasserstein GAN \cite{arjovsky2017wasserstein} turned to the dual optimal transport problem and proposed a discriminator approximating 1-Lipschitz functions for GAN training. An alternative line of work was pursued in \cite{cuturi2013sinkhorn,genevay2016stochastic}, in which the introduction of an entropic regularization term reduces the computational cost. The regularized primal optimal transport problem is amenable to backpropagation training \cite{genevay2018learning,salimans2018improving}. \cite{memoli2011gromov} generalizes optimal transport for comparing distributions supported on different spaces, introducing the Gromov-Wasserstein distance as a notion of distance between intra-domain distances.
The Gromov-Wasserstein distance is a promising metric for learning cross-domain correspondences \cite{bunne2019learning} such as unsupervised language translation \cite{alvarez2018gromov} or graph matching \cite{xu2019gromov}.
\textbf{Dance Generation} \quad Earlier works generally utilised similarity retrieval \cite{shiratori2006dancing,fan2011example,lee2013music}. A major drawback is that the synthesized choreography appears rigid and lacks creativity, simply arranging the dance moves in the training data with unnatural transitions. \cite{tang2018dance} employs Long Short-Term Memory (LSTM) networks in a sequence-to-sequence modeling framework that generates motion features from encoded musical features. However, as an L2 loss is used for the dance sequences, the synthesized motions are unrealistic and tend to incur motion freezing for longer sequences. Another shortcoming is the inability to generate diverse dance sequences. \cite{huang2021dance} proposes curriculum learning with an L1 loss on the dance sequences to alleviate the motion freezing issue. It also introduces a noise vector on top of the encoded musical feature to enable multimodal generation. An alternative approach utilises GANs to synthesize multimodal dances. \cite{ren2020self} adopts a GAN framework with both a local and a global discriminator to measure discrepancies between dance sequences. \cite{lee2019dancing} proposes a two-phase framework by first learning a decomposition of dance sequences into basic dance motion units and subsequently composing these dance units into a dance sequence with a GAN. These works focused on 2D poses, losing the geometric richness and realism of 3D motion. Important cues such as the dancer's position cannot be clearly put into perspective, and the invariance of bone lengths across frames is not enforced. More recently, \cite{valle2021transflower,li2021ai,wu2021dual} explored transformer-based models for music-to-dance generation, with \cite{wu2021dual} proposing a dual learning framework that concurrently learns music composition conditioned on dance inputs.
\begin{figure*}[ht] \begin{center} \includegraphics[width=1.00\linewidth]{Figures/Framework.png} \end{center} \caption{Overview: An \emph{Encoder} maps the extracted \emph{Musical Features} and \emph{Noise Vector} into latent vectors $\mathbf{h}_1,\cdots,\mathbf{h}_N$. Each $\mathbf{h}_i$ corresponds to an inter-beat sequence, comprising a hierarchical representation of the global musical feature and local beat-level features. A \emph{GRU decoder} generates a dance sequence from $\mathbf{h}_i$ for each beat event. The authenticity of the generated sequences is measured by an \emph{Optimal Transport Distance}, and the music-to-dance matching is measured by a \emph{Gromov-Wasserstein Distance}.} \label{fig:framework} \end{figure*}
\section{Our Approach}
\noindent\textbf{Music Input Preprocessing} \quad The music waveform is sampled at 48 kHz. We do not use the raw waveform as input as it would be too computationally expensive with too much redundancy. Following existing works \cite{tang2018dance,ren2020self}, we adopt a similar procedure of extracting Mel-frequency cepstral coefficients (MFCC) and MFCC delta features, which constitute low-level sound features \cite{muller2015fundamentals}. To incorporate additional high-level musical information, we further extract chroma features, which correspond to pitch and melody, as well as beats, which relate to rhythm.
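A minimal sketch of this preprocessing step is given below, assuming the librosa library; the number of MFCC coefficients and the hop length are placeholder choices rather than values specified by our method.
\begin{verbatim}
# Sketch of the music preprocessing: MFCC + MFCC-delta (low-level), chroma
# (pitch/melody) and beat events (rhythm).  n_mfcc and hop_length are placeholders.
import librosa

def extract_music_features(path, sr=48000, n_mfcc=20, hop_length=512):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop_length)
    mfcc_delta = librosa.feature.delta(mfcc)        # temporal derivative of the MFCCs
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop_length)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop_length)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr, hop_length=hop_length)
    return mfcc, mfcc_delta, chroma, beat_times
\end{verbatim}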
\noindent\textbf{Pose Preprocessing}\quad We first perform an inverse kinematics fitting to obtain the joint orientation parameters for each dance pose in the Skinned Multi-Person Linear (SMPL) model \cite{SMPL:2015}. Key advantages of this over a 3D joint position representation include: 1) bone length invariance and rotational degrees of freedom are inherently built into this manifold; 2) we can easily normalize the bone lengths across performers and the global orientation of a dance sequence.
\subsection{Formal Problem Statement}
We denote the dance distribution as $\nu$ and the music distribution as $\xi$. As illustrated in Figure~\ref{fig:framework}, the generator with parameters $\theta$ learns a parametric mapping $g_{\theta}$ that maps an input music sequence $y$ (sampled from $\xi$) and noise vector $z$ (sampled from a Gaussian distribution $\psi$) to a generated dance sequence $\tilde{x}$. This gives the model distribution of generated dances $\mu_{\theta}$\footnote{Formally, $\mu_{\theta}$ is the push forward probability distribution of $(\xi,\psi)$ under the generator mapping $g_{\theta}$, \emph{i.e.} $\mu_\theta\myeq{g_{\theta_{\#}}(\xi,\psi)}$.}. A crucial aspect of our MDOT-Net is the introduction of an optimal transport distance $OT(\mu_{\theta},\nu)$ and a Gromov-Wasserstein distance $GW(\mu_{\theta},\xi)$ as objective functions to facilitate learning of the generator parameters $\theta$, \begin{equation}\label{eqn:problem_statement} \begin{aligned} &\arg\min_{\theta} \;OT (\mu_{\theta},\nu) + GW (\mu_{\theta},\xi). \end{aligned} \end{equation} In what follows, we present the definitions of $OT (\mu_{\theta},\nu)$ and $GW (\mu_{\theta},\xi)$ and algorithms for computing them.
\subsection{Optimal Transport Distance}
For the generated dance distribution $\mu_{\theta}$ and the data dance distribution $\nu$, we consider each as a discrete distribution with $m$ samples. We have $\mu_{\theta}=\frac{1}{m}\sum_{i=1}^m\delta_{\tilde{x}_i}$, $\nu=\frac{1}{m}\sum_{j=1}^m\delta_{x_j}$ where $\delta$ denotes the Dirac delta distribution. The optimal transport distance between $\mu_{\theta}$ and $\nu$ is obtained via the following optimal transport problem \cite{kantorovitch1958translocation}: \begin{equation}\label{eqn:ot_discrete} OT_c (\mu_{\theta},\nu)=\min_{\gamma \in \Gamma} \sum_{i,j} \gamma_{ij}c(\tilde{x}_i,x_j). \end{equation} Intuitively, the optimal transport distance is the minimum total cost of matching $\mu_{\theta}$ with $\nu$, where $c(\tilde{x},x)$ denotes the unit cost of moving a generated sequence $\tilde{x}$ to a data sequence $x$. This is optimized over the set $\Gamma$ of all possible transport plans: \begin{equation} \label{eqn:transport_plan} \Gamma=\left\{\gamma\in\mathbb{R}^{m\times m}_+\mid\forall{i}\sum_{j}\gamma_{ij}=1,\forall{j}\sum_{i}\gamma_{ij}=1 \right\}. \end{equation}
\noindent\textbf{Cost function $c$} \quad A $T$-frame dance sequence is given by $x=(\mathbf{p}_1,\cdots,\mathbf{p}_T)\in\mathcal{X}^T$ where each pose comprises the joint rotations, \emph{i.e.} $\mathbf{p}_i=(R_{i,1},\cdots,R_{i,J})$ \footnote{We disregard global translation of the pose.}. We propose a cost function measuring the squared geodesic distance on the $SO(3)$ rotation manifold: \begin{equation}\label{eqn:rotation_cost} \begin{aligned} c_R(\tilde{x},x) &= \sum_{i=1}^T \sum_{j=1}^J \text{geodesic}_{SO(3)}(\tilde{R}_{i,j},R_{i,j})^2\\ &=\sum_{i=1}^T \sum_{j=1}^J \left\lvert \arccos \left[\frac{\text{Tr}(\tilde{R}_{i,j}^TR_{i,j})-1}{2}\right] \right\rvert^2.
\end{aligned} \end{equation} \noindent\textbf{Solving for Equation~\ref{eqn:ot_discrete}} \quad For distributions supported on a Euclidean space $\mathbb{R}^d$ with the L1 cost function $c(\tilde{x},x)=|\tilde{x}-x|$, the optimal transport distance is known as the Wasserstein distance. This can be solved through 1-Lipschitz functions in the dual formulation as proposed in Wasserstein GAN \cite{arjovsky2017wasserstein}. Our case is more complicated in that the distributions are over a manifold with a squared geodesic cost function. To solve for this, we follow \cite{genevay2016stochastic} in introducing a regularization term \begin{equation}\label{eqn:ot_regularized} \begin{aligned} OT_{c,\epsilon}(\mu_{\theta},\nu) =\min_{\gamma \in \Gamma}\sum_{i=1}^m\sum_{j=1}^m \gamma_{ij}c(\tilde{x}_i,x_j)+\epsilon I(\gamma) \end{aligned} \end{equation} where $I(\gamma) = \sum_{i,j}\gamma_{ij}\log\gamma_{ij}$ denotes the negative entropy of the transport plan $\gamma$. This regularization transforms the primal Kantorovich problem into a convex optimization problem, and Eqn~\ref{eqn:ot_regularized} admits a solution of the form $\gamma^*=\text{diag}(\mathbf{a})K \text{diag}(\mathbf{b})$ where $K_{ij}\myeq{\exp(-c(\tilde{x}_i,x_j)/\epsilon)}$. We compute the optimal transport distance with Algorithm~\ref{algo:OT}. This algorithm has the crucial advantage of being amenable to backpropagation \cite{genevay2018learning} and of converging quickly.
\begin{algorithm} \caption{OT Distance for batch of $m$ samples with Sinkhorn-Knopp algorithm} \label{algo:OT} \begin{algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \Require gen. dance sequences $\widetilde{\mathbf{X}}=\{\tilde{x}_i\}_{i=1}^m$ \Require data dance sequences $\mathbf{X}=\{x_j\}_{j=1}^m$ \renewcommand{\algorithmicrequire}{\textbf{Hyperparameters:}} \Require regularization $\epsilon$, Sinkhorn iterations $L$ \State Dance Cost Matrix $C_{ij}=c_R(\tilde{x}_i,x_j)$ from Eqn~\ref{eqn:rotation_cost} \State $K = \exp(-C/\epsilon)$ \State $\mathbf{b}^{(0)}=\mathbb{1}_m$ where $\mathbb{1}_m=(1,\cdots,1)^T\in\mathbb{R}^m$ \For{$\ell = 1:L$} \State $\mathbf{a}^{(\ell)} = \mathbb{1}_m \oslash K\mathbf{b}^{(\ell-1)}$, $\mathbf{b}^{(\ell)} = \mathbb{1}_m \oslash K^T\mathbf{a}^{(\ell)}$ \State $\oslash$ denotes component-wise division \EndFor \Ensure $OT_{\epsilon} (\widetilde{\mathbf{X}},\mathbf{X}) = \sum_{i,j}C_{ij}a_{i}^{(L)}K_{ij}b_{j}^{(L)}$ \end{algorithmic} \end{algorithm}
Following \cite{salimans2018improving}, we sample two independent mini-batches of data and generated sequence pairs $(\widetilde{\mathbf{X}}, \mathbf{X}), (\widetilde{\mathbf{X}}', \mathbf{X}')$ in order to compute the following unbiased optimal transport distance $\overline{OT}_{\epsilon}$ \begin{equation} \small \begin{aligned} \overline{OT}_{\epsilon}&=OT_{\epsilon}(\widetilde{\mathbf{X}},\mathbf{X})+OT_{\epsilon}(\widetilde{\mathbf{X}}',\mathbf{X})+OT_{\epsilon}(\widetilde{\mathbf{X}},\mathbf{X}')\\ &+OT_{\epsilon}(\widetilde{\mathbf{X}}',\mathbf{X}')-2OT_{\epsilon}(\widetilde{\mathbf{X}},\widetilde{\mathbf{X}}')-2OT_{\epsilon}(\mathbf{X},\mathbf{X}'). \label{eqn:ot_unbiased} \end{aligned} \end{equation} \subsection{Gromov-Wasserstein distance} Comparing the similarity of the dance distribution $\mu_{\theta}$ and the music distribution $\xi$ poses a cross-domain learning problem. The Gromov-Wasserstein distance is defined as a \emph{relational} distance between the respective costs within each distribution.
The cost function for the dance pose manifold is defined in Eqn~\ref{eqn:rotation_cost}. For music distributions, we learn an embedding $f$ and define the cost as the L1 distance in this embedding space \begin{equation} \label{eqn:music_cost} \begin{aligned} d(y_i,y_j) = \lVert f(y_i)-f(y_j)\rVert_1. \end{aligned} \end{equation} The Gromov-Wasserstein distance for our task is given by \begin{flalign} \label{eqn:Gromov-Wasserstein} &\Pi=\left\{\pi\in\mathbb{R}^{m\times m}_+\mid\forall{i}\sum_{j}\pi_{ij}=1,\forall{j}\sum_{i}\pi_{ij}=1 \right\}\\ &GW(\mu_{\theta},\xi)=\min_{\pi\in\Pi}\sum_{i,j,k,l}\lvert c_R(\tilde{x}_i, \tilde{x}_k) - d(y_j, y_l) \rvert^2 \pi_{ij} \pi_{kl}. \nonumber \end{flalign} \noindent\textbf{Solving for Eqn~\ref{eqn:Gromov-Wasserstein}} \quad This may be solved via entropic regularization and projected gradient descent as in Algorithm~\ref{algo:GW}.
\begin{algorithm}[H] \caption{GW Distance for 2 independent batches of $m$ samples} \label{algo:GW} \begin{algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \Require gen. dance sequences $\widetilde{\mathbf{X}}=\{\tilde{x}_i\}_{i=1}^m,\widetilde{\mathbf{X}}'=\{\tilde{x}'_i\}_{i=1}^m$ \Require music sequences $\mathbf{Y}=\{y_i\}_{i=1}^m,\mathbf{Y}'=\{y'_i\}_{i=1}^m$ \renewcommand{\algorithmicrequire}{\textbf{Hyperparameters:}} \Require regularization $\varepsilon$, projection iterations $M$, Sinkhorn iterations $L$ \renewcommand{\algorithmicrequire}{\textbf{Initialize:}} \Require $\pi^{(0)}_{ij} = \frac{1}{m} \;\forall{i,j}$ \State Dance Cost Matrix $C_{ij}=c_R(\tilde{x}_i,\tilde{x}'_j)$ from Eqn~\ref{eqn:rotation_cost} \State Music Cost Matrix $D_{ij} = d(y_i,y'_j)$ from Eqn~\ref{eqn:music_cost} \For{$l = 1:M$} \State $E = \frac{1}{m}D^2 \mathbb{1}_m \mathbb{1}_m^T + \frac{1}{m} \mathbb{1}_m \mathbb{1}_m^T C^2 - 2D \pi^{(l-1)} C^T$ \State $K = \exp(-E/\varepsilon)$ \State $\mathbf{b}^{(0)} = \mathbb{1}_m$ \For{$\ell = 1:L$} \State $\mathbf{a}^{(\ell)} = \mathbb{1}_m \oslash K\mathbf{b}^{(\ell-1)}$, $\mathbf{b}^{(\ell)} = \mathbb{1}_m \oslash K^T\mathbf{a}^{(\ell)}$ \EndFor \State $\pi^{(l)}=\text{diag}(\mathbf{a}^{(L)})K\text{diag}(\mathbf{b}^{(L)})$ \EndFor \Ensure \small$GW_{\varepsilon}(\widetilde{\mathbf{X}},\widetilde{\mathbf{X}}',\mathbf{Y},\mathbf{Y}') = \displaystyle \sum_{i,j,k,l}\lvert C_{ik}-D_{jl}\rvert^2\pi_{ij}^{(M)}\pi_{kl}^{(M)}$ \end{algorithmic} \end{algorithm}
\subsection{Algorithmic Pipeline} An overview of our MDOT-Net is presented in Figure~\ref{fig:framework}. We generate an inter-beat dance sequence directly instead of synthesizing frame by frame, since this improves temporal smoothness and also facilitates modeling nuances in dance motions. Our encoder network, as illustrated in Figure~\ref{fig:encoder}, processes the musical feature $y$ and a noise vector $\mathbf{z}$ into latent vectors $\mathbf{h}_1,\cdots,\mathbf{h}_N$ where $N$ denotes the total number of beat events in the music input $y$. Each $\mathbf{h}_i$ consists of a hierarchical representation of the global musical feature and local beat-level features. Each $\mathbf{h}_i$ is decoded via a GRU network into a dance sequence $\tilde{x}_i$ consisting of 10 articulated poses. For post-processing, we employ spherical interpolation to fix the frame rate at 25 frames per second. The training procedure is summarized in Algorithm~\ref{algo:pipeline}. Due to space limitations, a discussion of further motivations of our optimal-transport-based cross-domain sequence-to-sequence learning is presented in the supplementary text.
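For concreteness, the rotation cost of Eqn~\ref{eqn:rotation_cost} and the Sinkhorn iteration of Algorithm~\ref{algo:OT} translate into a few lines of differentiable tensor code. The sketch below is a minimal PyTorch version; the tensor layout (a batch of sequences as $m\times T\times J\times 3\times 3$ rotation matrices) and the function names are our own assumptions, and the mini-batch debiasing of Eqn~\ref{eqn:ot_unbiased} is omitted.
\begin{verbatim}
import torch

def so3_geodesic_cost(X_gen, X_data):
    # X_gen, X_data: (m, T, J, 3, 3) joint rotation matrices of two batches.
    m = X_gen.shape[0]
    Xg = X_gen.reshape(m, 1, -1, 3, 3)
    Xd = X_data.reshape(1, m, -1, 3, 3)
    # trace(R_gen^T R_data) for every pair of sequences, frame and joint
    tr = torch.einsum('...ij,...ij->...', Xg, Xd)
    ang = torch.acos(((tr - 1.0) / 2.0).clamp(-1.0, 1.0))
    return (ang ** 2).sum(dim=-1)          # (m, m) cost matrix C

def sinkhorn_ot(C, eps=0.1, iters=30):
    # Entropy-regularized OT distance, following Algorithm 1.
    m = C.shape[0]
    K = torch.exp(-C / eps)                # Gibbs kernel
    b = torch.ones(m, dtype=C.dtype, device=C.device)
    for _ in range(iters):
        a = 1.0 / (K @ b)                  # a = 1_m ./ (K b)
        b = 1.0 / (K.t() @ a)              # b = 1_m ./ (K^T a)
    # transport plan gamma = diag(a) K diag(b); return <gamma, C>
    return torch.sum(C * a[:, None] * K * b[None, :])
\end{verbatim}
The inner Sinkhorn loop of Algorithm~\ref{algo:GW} is the same iteration applied to the matrix $E$; in practice, a log-domain implementation may be preferable when the cost entries are large.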
\begin{figure}[ht] \begin{center} \includegraphics[width=1.00\linewidth]{Figures/Encoder.png} \end{center} \caption{Architecture details of the encoder and decoder} \label{fig:encoder} \end{figure}
\begin{algorithm}[H] \caption{Overall Algorithmic Pipeline} \label{algo:pipeline} \begin{algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \Require music dataset with data distribution $\xi$ \Require dance dataset with data distribution $\nu$ \renewcommand{\algorithmicrequire}{\textbf{Hyperparameters:}} \Require regularization parameters $\epsilon, \varepsilon$, batch size $m$, learning rate $\alpha$, training epochs $T$, generator to critic update ratio $n_{\text{gen}}$, noise vector distribution $\psi$ \renewcommand{\algorithmicrequire}{\textbf{Initialize:}} \Require generator parameters $\theta_0$, music embedding $f_0$. \For{$l = 1:T$} \State Sample 2 independent mini-batches of music-dance pairs from data and noise vectors from $\psi$, $(\mathbf{X},\mathbf{Y},\mathbf{Z}),(\mathbf{X}',\mathbf{Y}',\mathbf{Z}')$ \State Generate dance sequences as $\widetilde{\mathbf{X}}=g_{\theta}(\mathbf{Y},\mathbf{Z})$ and $\widetilde{\mathbf{X}}'=g_{\theta}(\mathbf{Y}',\mathbf{Z}')$. \State Compute $\overline{OT}_{\epsilon}$ with Algorithm~\ref{algo:OT} and Equation~\ref{eqn:ot_unbiased}. \State Compute $GW_{\varepsilon}$ with Algorithm~\ref{algo:GW}. \If{$l \mod (n_{\text{gen}}) > 0$} \State $\theta \leftarrow \theta-\alpha\nabla_{\theta}\overline{OT}_{\epsilon}-\alpha\nabla_{\theta}GW_{\varepsilon}$ \Else \State $f \leftarrow f+\alpha\nabla_{f}GW_{\varepsilon}$ \EndIf \EndFor \Ensure Generator network parameters $\theta$ \end{algorithmic} \end{algorithm}
\section{Experimental Results} \subsection{Dataset and Implementation Details} We adopt the public dataset of \cite{tang2018dance}, comprising four dance styles: waltz, tango, cha-cha, and rumba. The dances were performed by professional dancers and captured as 3D motion capture data via Vicon MoCap devices. We perform an inverse kinematics fitting to re-parameterize the dance sequences as SMPL \cite{SMPL:2015} parameters, consisting of 3D orientations for 24 joints. We implement MDOT-Net in PyTorch \cite{paszke2019pytorch}. Additional libraries include Librosa \cite{mcfee2015librosa} for music processing and the Python Optimal Transport Toolbox \cite{flamary2017pot}. The hyperparameters are as follows: the regularization parameters are set to $\epsilon=0.1$, $\varepsilon=0.5$; the mini-batch size is set to $m=128$; the learning rate is set to $\alpha=0.001$; the number of Sinkhorn iterations is set to $L=30$; the number of projection iterations is set to $M=20$; the generator to critic update ratio is set to $n_{\text{gen}}=10$. The RMSprop optimizer is used. Convergence occurs around 150 epochs. \subsection{Baselines and Ablation Studies} \noindent\textbf{Baselines}: As this is a relatively novel task, few methods have been developed for generating 3D dance sequences from music. Existing methods \cite{ren2020self,huang2021dance} focus on 2D generation, making it difficult to visualize the full dynamic richness of dance motions. As such, we adapted them for 3D generation for a fair comparison. \noindent\textbf{Ablation Studies}: We further perform two ablation studies to validate the efficacy of the optimal transport distance and the Gromov-Wasserstein distance. \textbf{1) WGAN}: Here we replace the optimal transport and Gromov-Wasserstein objectives with WGAN discriminators.
The discriminator for dance sequences is adapted from AGCN \cite{shi2019two} (the state of the art for modeling skeleton-based motion). A second discriminator serves to determine if the dance sequence matches the input music. \textbf{2) Remove GW}: We remove the Gromov-Wasserstein objective to investigate its effectiveness in establishing music and dance correspondence.
\begin{figure*}[ht] \centering \resizebox{\textwidth}{!} { \begin{tabular}{|l||c||c|} \hline Method & Click $\downarrow$ & Tango dance sequences shown at 1 second intervals \\ \hline Ground Truth & \animategraphics[height=0.92cm]{5}{Figures/GT/}{0}{39} & \includegraphics[height=0.92cm]{Figures/GT/Combined.png} \\ \hline \cite{tang2018dance} & \animategraphics[height=0.92cm]{5}{Figures/Tang2018/}{0}{39} & \includegraphics[height=0.92cm]{Figures/Tang2018/Combined.png} \\ \hline \cite{ren2020self} & \animategraphics[height=0.92cm]{5}{Figures/Ren2020/}{0}{39} & \includegraphics[height=0.92cm]{Figures/Ren2020/Combined.png} \\ \hline \cite{huang2021dance} & \animategraphics[height=0.92cm]{5}{Figures/Huang2021/}{0}{39} & \includegraphics[height=0.92cm]{Figures/Huang2021/Combined.png} \\ \hline Ablation: WGAN & \animategraphics[height=0.92cm]{5}{Figures/WGAN/}{0}{39} & \includegraphics[height=0.92cm]{Figures/WGAN/Combined.png} \\ \hline Ablation: Remove GW & \animategraphics[height=0.92cm]{5}{Figures/Remove_GW/}{0}{39} & \includegraphics[height=0.92cm]{Figures/Remove_GW/Combined.png} \\ \hline MDOT-Net (Ours) & \animategraphics[height=0.92cm]{5}{Figures/MDOT-Net/}{0}{39} & \includegraphics[height=0.92cm]{Figures/MDOT-Net/Combined.png} \\ \hline \end{tabular} } \caption{Visualization of sample generated tango dance sequences. Dance animations will be played in Adobe Acrobat Reader upon clicking.} \label{fig:realism_results} \end{figure*}
\subsection{Evaluation: Realism and Consistency} \label{sec:realism} Sample generated dance sequences for tango are illustrated in Figure~\ref{fig:realism_results}. We observe that \cite{tang2018dance} tends to converge to a mean pose and lacks dynamical variation. Comparatively, GAN-based methods, including \cite{ren2020self} and our WGAN ablation, demonstrate more variation in the range of dance motions, but often lack naturalness and appear unrealistic, especially in turning dance motions. Our MDOT-Net generates the most realistic-looking dance motions, which are also consistent with the tango style. \noindent\textbf{User Study} \quad A single-blind user study involving 8 dancers is conducted to judge the authenticity of the generated dance sequences; the results are tabulated in Table~\ref{tab:realism}. For each dance style, 5 sample generated sequences of 20 seconds duration are ranked according to two criteria, namely 1) the naturalness and realism of the motion and 2) adherence to the specific dance style. The rankings echo our qualitative observations above, with the samples generated by our proposed approach being preferred over previous methods. This demonstrates the effectiveness of the optimal transport objective in generating realistic and consistent dance sequences.
\begin{table}[ht] \resizebox{1\linewidth}{!} { \begin{tabular}{|l|c|c|c|} \hline Method & Realism & Consistency & FID \\ \hline \cite{tang2018dance} & 5.9 & 5.8 & 105.5 $\pm 17.2$ \\ \hline \cite{ren2020self} & 4.6 & 4.8 & 67.0 $\pm 9.3$ \\ \hline \cite{huang2021dance} & 3.3 & 3.1 & 43.4 $\pm 7.5$ \\ \hline Ablation: WGAN & 3.8 & 4.0 & 49.3 $\pm 5.2$ \\ \hline Ablation: Remove GW & 2.3 & 2.1 & 32.4 $\pm 4.8$ \\ \hline MDOT-Net (Ours) & \textbf{1.2} & \textbf{1.3} & \textbf{25.6} $\pm 3.3$ \\ \hline \end{tabular}} \caption{Results for Realism and Consistency (average user ranking, lower is better) and FID} \label{tab:realism} \end{table}
\noindent\textbf{FID} \quad We randomly sample data dance sequences ranging from 100 frames to 250 frames and employ a pre-trained AGCN \cite{shi2019two} for feature extraction to evaluate the Fr\'{e}chet Inception Distance \cite{heusel2017gans}. The better FID result suggests that the optimal transport objective is more effective than existing methods and the WGAN in matching the model distribution with the data distribution. This could be explained by the optimal transport objective not being prone to the instability issues of adversarial training.
\begin{table}[h] \resizebox{1\linewidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|} \hline & & \multicolumn{5}{c|}{Multimodality} \\ \cline{3-7} \multirow{-2}{*}{Method} & \multirow{-2}{*}{Diversity} & Rumba & Cha Cha & Tango & Waltz & Average \\ \hline Ground Truth & 63.5 & - & - & - & - & - \\ \hline \cite{tang2018dance} & 18.2 & 13.2 & 16.7 & 14.8 & 10.7 & 13.9 \\ \hline \cite{ren2020self} & 35.4 & 32.1 & 30.5 & 27.4 & 25.9 & 29.0 \\ \hline \cite{huang2021dance} & 34.7 & 28.9 & 28.5 & 31.9 & 34.4 & 30.9 \\ \hline Ablation: WGAN & 41.5 & 38.9 & 46.5 & 33.5 & 35.4 & 38.6 \\ \hline Ablation: Remove GW & 58.7 & 52.3 & 55.4 & 46.8 & \textbf{50.7} & 51.3 \\ \hline MDOT-Net (Ours) & \textbf{60.1} & \textbf{55.6} & \textbf{59.7} & \textbf{49.4} & 48.8 & \textbf{53.4} \\ \hline \end{tabular}} \caption{Results for Diversity (dance generated via different music) and Multimodality (dance generated on the same music)} \label{tab:diversity} \end{table}
\subsection{Evaluation: Diversity and Multimodality} \label{sec:diversity} We adopt the terminology of \cite{lee2019dancing}: \emph{diversity} refers to variations over the entire ensemble of dances generated from different music inputs, whereas \emph{multimodality} pertains to dances generated from the same music. For quantitative evaluation, we employ a perceptual similarity metric \cite{zhang2018unreasonable}. For diversity, we generate 20 dance sequences conditioned on different music inputs. For multimodality, 5 dance sequences are conditioned on the same music. The pairwise feature distance is evaluated for each collection, and experiments are averaged over 20 independent trials. The results reported in Table~\ref{tab:diversity} demonstrate significantly improved diversity and multimodality over existing methods and the WGAN ablation setting, indicating that MDOT-Net is more effective in preventing mode collapse.
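To clarify the protocol, both scores reduce to an averaged pairwise distance between perceptual features of the generated sequences; a minimal sketch is given below, where \texttt{perceptual\_features} stands for the pretrained feature extractor of the perceptual similarity metric \cite{zhang2018unreasonable} and is an assumed interface rather than part of our released code.
\begin{verbatim}
import itertools
import numpy as np

def average_pairwise_distance(sequences, perceptual_features):
    # Diversity / multimodality score: mean pairwise distance between
    # the perceptual features of a collection of dance sequences.
    feats = [perceptual_features(s) for s in sequences]
    dists = [np.linalg.norm(f1 - f2)
             for f1, f2 in itertools.combinations(feats, 2)]
    return float(np.mean(dists))
\end{verbatim}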
\subsection{Evaluation: Cohesion and Unity with Music} \label{sec:music} \begin{table}[t] \resizebox{1\linewidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Beats Matching (\%)} & \multicolumn{2}{c|}{Music-Dance Matching Ranking} \\ \cline{2-7} \multirow{-2}{*}{Method} & Rumba & Cha Cha & Tango & Waltz & \begin{tabular}[c]{@{}l@{}} Music from Dataset \end{tabular} & New Music \\ \hline Ground Truth & 65.1 & 68.4 & 62.4 & 72.3 & 1.2 & - \\ \hline \cite{tang2018dance} & 13.7 & 11.2 & 14.2 & 16.9 & 6.9 & 5.9 \\ \hline \cite{ren2020self} & 44.7 & 40.2 & 39.9 & 50.7 & 5.6 & 4.7 \\ \hline \cite{huang2021dance} & 52.3 & 46.5 & 52.6 & 58.2 & 3.6 & 1.9 \\ \hline Ablation: WGAN & 49.8 & 49.4 & 55.7 & 58.8 & 3.0 & 3.8 \\ \hline Ablation: Remove GW & 54.3 & 60.2 & 60.5 & 64.3 & 3.8 & 2.8 \\ \hline MDOT-Net (Ours) & \textbf{63.2} & \textbf{65.4} & \textbf{64.7} & \textbf{73.1} & \textbf{1.9} & \textbf{1.2} \\ \hline \end{tabular}} \caption{Results for beats matching and user preference} \label{tab:music} \end{table}
\noindent\textbf{Beats Matching} \quad We evaluate the consistency of rhythmic articulation through beats matching. A dance beat is a local minimum of the mean joint speeds, and it matches a music beat if the two events are within $\pm2$ frames. The beats matching ratio measures the number of matched dance beats against the total number of dance beats. \noindent\textbf{Music-Dance Harmony} \quad Evaluating the coherence of music and dance is a rather subjective task, so we again conduct a user study. 10 dance sequences (of 20 seconds duration) are ranked by 8 users according to the perceived harmony with the music. The consistent preference for MDOT-Net justifies the effectiveness of the Gromov-Wasserstein distance.
\section{Conclusion} In this work, we propose the MDOT-Net framework for generating 3D dance sequences conditioned on music. Through an optimal transport objective for matching the model and data dance distributions, as well as a Gromov-Wasserstein objective for aligning the music and dance, MDOT-Net proves capable of generating realistic dances that are consistent with the dance style, display diversity, and match the music. Extensive experiments demonstrate the effectiveness of the optimal transport and Gromov-Wasserstein objectives. For future work, we will generalize this cross-domain sequence-to-sequence generation framework to more applications. \section*{Acknowledgements} This work is supported in part by the Ministry of Education Academic Research Fund Tier-1 Project Grant (RG94/20) in Singapore, and in part by the NSERC Discovery Grant (RGPIN-2019-04575) and the UAHJIC Grants in Canada. \clearpage \small{ \bibliographystyle{ijcai22}
\section{Introduction} Our main result is the following. \begin{theorem}\label{thm:general_three_col} For any planar convex body $C$ there is a positive integer $m=m(C)$ such that any finite point set $P$ in the plane can be three-colored in a way that there is no translate of $C$ containing at least $m$ points of $P$, all of the same color. \end{theorem} This result closes a long line of research about coloring points with respect to planar range spaces that consist of translates of a fixed set, a problem that was proposed by Pach over forty years ago \cite{Pach80}. In general, a pair $(P, \Sc)$, where $P$ is a set of points in the plane and $\Sc$ is a family of subsets of the plane, called the \emph{range space}, defines a \emph{primal} hypergraph $\Hc(P,\Sc)$ whose vertex set is $P$, and for each $S\in\Sc$ we add the edge $S\cap P$ to the hypergraph. Given any hypergraph $\Hc$, a planar realization of $\Hc$ is defined as a pair $(P, \Sc)$ for which $\Hc(P,\Sc)$ is isomorphic to $\Hc$. If $\Hc$ can be realized with some pair $(P, \Sc)$ where $\Sc$ is from some family $\Fc$, then we say that $\Hc$ is realizable with $\Fc$. The dual of the hypergraph $\Hc(P,\Sc)$, where the elements of the range space $\Sc$ are the vertices and the points $P$ define the edges, is known as the \emph{dual} hypergraph and is denoted by $\Hc(\Sc,P)$. If $\Hc=\Hc(\Sc,P)$ where $\Sc$ is from some family $\Fc$, then we say that $\Hc$ has a dual realization with $\Fc$. Pach observed \cite{Pach80,surveycd} that if $\Fc$ is the family of translates of some set, then $\Hc$ has a dual realization with $\Fc$ if and only if $\Hc$ has a (primal) realization with $\Fc$. Pach proposed to study the chromatic number of hypergraphs realizable with different geometric families $\Fc$. It is important to distinguish between two types of hypergraph colorings that we will use, the \emph{proper} coloring and the \emph{polychromatic} coloring. \begin{definition} A hypergraph is \emph{properly $k$-colorable} if its vertices can be colored with $k$ colors such that each edge contains points from at least two color classes. Such a coloring is called a \emph{proper $k$-coloring}. If a hypergraph has a proper $k$-coloring but not a proper $(k-1)$-coloring, then it is called \emph{$k$-chromatic}. A hypergraph is \emph{polychromatic $k$-colorable} if its vertices can be colored with $k$ colors such that each edge contains points from each color class. Such a coloring is called a \emph{polychromatic $k$-coloring}. \end{definition} Note that for a polychromatic $k$-coloring to exist, it is necessary that each edge of the underlying hypergraph has at least $k$ vertices. More generally, we say that a hypergraph is \emph{$m$-heavy} if each of its edges has at least $m$ vertices. The main question that Pach raised can be rephrased as follows. \begin{question} For which planar families $\Fc$ is there an $m_k=m(\Fc,k)$ such that any $m_k$-heavy hypergraph realizable with $\Fc$ has a proper/polychromatic $k$-coloring? \end{question} Initially, this question has been mainly studied for polychromatic $k$-colorings (known in case of a dual range space as \emph{cover-decomposition} problem), and it was shown that such an $m_k$ exists if $\Fc$ is the family of translates of some convex polygon \cite{Pach86,TT07,PT10}, or the family of all halfplanes \cite{wcf2,MR2844088}, or the homothetic\footnote{A \emph{homothetic copy}, or \emph{homothet}, is a scaled and translated (but non-rotated) copy of a set. 
We always require the scaling factor to be positive. Note that this is sometimes called a positive homothet.} copies of a triangle \cite{octants} or of a square \cite{homotsquare}, while it was also shown that not even $m_2$ exists if $\Fc$ is the family of translates of some appropriate concave polygon \cite{MR2364757,MR2679054} or any body\footnote{By \emph{body}, we always mean a compact subset of the plane with a non-empty interior, though our results (and most of the results mentioned) also hold for sets that are unbounded, or that contain an arbitrary part of their boundary, and are thus neither open, nor closed. This is because a realization of a hypergraph can be perturbed slightly to move the points off from the boundaries of the sets realizing the respective edges of the hypergraph.} with a smooth boundary \cite{unsplittable}. It was also shown that there is no $m_k$ for proper $k$-colorings if $\Fc$ is the family of all lines \cite{MR2364757} or all axis-parallel rectangles \cite{Chen}; for these families, the same holds in the case of dual realizations \cite{MR2364757,PT08}. For homothets of convex polygons other than triangles, it is known that there is no $m_2$ for dual realizations \cite{kovacs}, unlike for primal realizations. Higher dimensional variants \cite{octants,CKMU13} and improved bounds for $m_k$ have also been studied \cite{Alou,MR2812512,MR3151767,MR3216669,MR3126347,CKMPUV20}. For other results, see also the decade-old survey \cite{surveycd}, or the up-to-date website \url{https://coge.elte.hu/cogezoo.html}. If $\Fc$ is the family of translates or homothets of some planar convex body, it is an easy consequence of the properties of generalized Delaunay-triangulations and the Four Color Theorem that any hypergraph realizable with $\Fc$ is proper 4-colorable if every edge contains at least two vertices. We have recently shown that this cannot be improved for homothets. \begin{theorem}[Dam\'asdi, Pálvölgyi \cite{fourchromatic}] Let $C$ be any convex body in the plane that has two parallel supporting lines such that $C$ is strictly convex in some neighborhood of the two points of tangency. For any positive integer $m$, there exists a 4-chromatic $m$-uniform hypergraph that is realizable with homothets of $C$. \end{theorem} For translates, we recall the following result. \begin{theorem}[Pach, Pálvölgyi \cite{unsplittable}]\label{thm:unsplittable} Let $C$ be any convex body in the plane that has two parallel supporting lines such that $C$ is strictly convex in some neighborhood of the two points of tangency.\footnote{This condition can be relaxed to require only one smooth neighborhood on the boundary. Since this is not the main topic of our paper, we just give a sketch of the construction in Appendix \ref{sec:halfdisk}.} For any positive integer $m$, there exists a 3-chromatic $m$-uniform hypergraph that is realizable with translates of $C$. \end{theorem} This left only the following question open: Is there for any planar convex body $C$ a positive integer $m$ such that no 4-chromatic $m$-uniform hypergraph is realizable with translates of $C$? Our Theorem \ref{thm:general_three_col} answers this question affirmatively for all $C$ by showing that all realizable $m$-heavy hypergraphs are three-colorable for some $m$.
This has been hitherto known to hold only when $C$ is a polygon (in which case 2 colors suffice \cite{PT10}, and 3 colors are known to be enough even for homothets \cite{3propercol}) and for pseudodisk families that intersect in a common point \cite{MR4012917} (which generalizes the case when $C$ is unbounded, in which case 2 colors suffice \cite{unsplittable}). Note that the extended abstract of our first proof attempt appeared recently in the proceedings of EuroComb 2021 \cite{threechromaticdisk}. That proof, however, only worked when $C$ was a disk, and while the generalization to other convex bodies with a smooth boundary seemed feasible, we saw no way to extend it to arbitrary convex bodies. The proof of Theorem \ref{thm:general_three_col} relies on a surprising connection to two other famous results: the solution of the two-dimensional case of the Illumination conjecture \cite{MR76368}, and a recent solution of the Erdős-Sands-Sauer-Woodrow conjecture by Bousquet, Lochet and Thomassé~\cite{esswproof}. In fact, we need a generalization of the latter result, which we prove with the addition of one more trick to their method; this may be of independent interest.\\ The rest of the paper is organized as follows.\\ In Section \ref{sec:tools} we present the three main ingredients of our proof: \begin{itemize} \item the Union Lemma (Section \ref{sec:unionlemma}), \item the Erdős-Sands-Sauer-Woodrow conjecture (Section \ref{sec:essw})---the proof of our generalization of the Bousquet-Lochet-Thomassé theorem can be found in Appendix \ref{app:essw}, \item the Illumination conjecture (Section \ref{sec:illum}), which is a theorem of Levi in the plane. \end{itemize} In Section \ref{sec:proof} we give the detailed proof of Theorem \ref{thm:general_three_col}.\\ In Section \ref{sec:overview} we give an overview of the computational steps of our algorithm, showing that a three-coloring can be found in randomized polynomial time.\\ Finally, in Section \ref{sec:open}, we pose some problems left open. \section{Tools}\label{sec:tools} \subsection{Union Lemma}\label{sec:unionlemma} Polychromatic colorability is a much stronger property than proper colorability. Any polychromatic $k$-colorable hypergraph is proper $2$-colorable. We generalize this trivial observation to the following statement about unions of polychromatic $k$-colorable hypergraphs. \begin{lemma}[Union Lemma]\label{lem:combine} Let $\Hc_1=(V,E_1),\dots, \Hc_{k-1}=(V,E_{k-1})$ be hypergraphs on a common vertex set $V$. If $\Hc_1,\dots, \Hc_{k-1}$ are polychromatic $k$-colorable, then the hypergraph $\bigcup\limits_{i=1}^{k-1} \Hc_i=(V,\bigcup\limits_{i=1}^{k-1} E_i)$ is proper $k$-colorable. \end{lemma} \begin{proof} Let $c_i:V\rightarrow \{1,\ldots,k\}$ be a polychromatic $k$-coloring of $\Hc_i$. Choose $c(v)\in \{1,\ldots,k\}$ such that it differs from each $c_i(v)$; this is possible since there are only $k-1$ forbidden values. We claim that $c$ is a proper $k$-coloring of $\bigcup\limits_{i=1}^{k-1} \Hc_i$. To prove this, it is enough to show that for every edge $H\in\Hc_i$ and for every color $j\in\{1,\ldots,k\}$, there is a $v\in H$ such that $c(v)\ne j$. Since $c_i$ is a polychromatic $k$-coloring of $\Hc_i$, we can pick $v\in H$ for which $c_i(v)=j$; then $c(v)\ne j$ by the choice of $c$. This finishes the proof. \end{proof} Lemma \ref{lem:combine} is sharp in the sense that for every $k$ there are $k-1$ hypergraphs such that each is polychromatic $k$-colorable but their union is not properly $(k-1)$-colorable.\\ We will apply the Union Lemma combined with the theorem below.
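Before stating it, we note that the coloring $c$ constructed in the proof of the Union Lemma is completely explicit; the following short Python sketch (with colorings represented as lists of integers in $\{0,\dots,k-1\}$, a representation chosen only for illustration) returns a proper $k$-coloring from the given polychromatic $k$-colorings.
\begin{verbatim}
def union_coloring(colorings, k):
    # colorings: the k-1 polychromatic k-colorings c_1,...,c_{k-1},
    # each a list assigning a color in {0,...,k-1} to every vertex.
    n = len(colorings[0])
    c = []
    for v in range(n):
        forbidden = {col[v] for col in colorings}   # at most k-1 values
        c.append(next(j for j in range(k) if j not in forbidden))
    return c
\end{verbatim}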
A \emph{pseudoline arrangement} is a collection of simple curves, each of which splits $\mathbb R^2$ into two unbounded parts, such that any two curves intersect at most once. A \emph{pseudohalfplane} is the region on one side of a pseudoline in such an arrangement. For hypergraphs realizable by pseudohalfplanes the following was proved, generalizing a result of Smorodinsky and Yuditsky \cite{MR2844088} about halfplanes. \begin{theorem}[Keszegh-P\'alv\"olgyi \cite{abafree}]\label{thm:pseudohalfplane} Any $(2k-1)$-heavy hypergraph realizable by pseudohalfplanes is polychromatic $k$-colorable, i.e., given a finite set of points and a pseudohalfplane arrangement in the plane, the points can be $k$-colored such that every pseudohalfplane that contains at least $2k-1$ points contains all $k$ colors. \end{theorem} Combining Theorem \ref{thm:pseudohalfplane} with Lemma \ref{lem:combine} for $k=3$, we obtain the following. \begin{corollary}\label{cor:pseudohalfplane} Any $5$-heavy hypergraph realizable by two pseudohalfplane families is proper $3$-colorable, i.e., given a finite set of points and two different pseudohalfplane arrangements in the plane, the points can be $3$-colored such that every pseudohalfplane that contains at least $5$ points contains two differently colored points. \end{corollary} \subsection{Erdős-Sands-Sauer-Woodrow conjecture}\label{sec:essw} Given a quasi order\footnote{A quasi order $\prec$ is a reflexive and transitive relation, but it is not required to be antisymmetric, so $p\prec q\prec p$ is allowed, unlike for partial orders.} $\prec$ on a set $V$, we interpret it as a digraph $D=(V,A)$, where the vertex set is $V$ and a pair $(x,y)$ defines an arc in $A$ if $x \prec y$. The \emph{closed in-neighborhood} of a vertex $x\in V$ is $N^-(x)=\{x\}\cup \{y|(y,x)\in A \}$. Similarly the \emph{closed out-neighborhood} of a vertex $x$ is $N^+(x)=\{x\}\cup \{y|(x,y)\in A \}$. We extend this to subsets $S\subset V$ as $N^-(S) = \bigcup\limits_{ x\in S } N^-(x)$ and $N^+(S) = \bigcup\limits_{ x\in S } N^+(x)$. A set of vertices $S$ such that $N^+(S) = V$ is said to be \emph{dominating}. For $A,B\subset V$ we will also say that \emph{$A$ dominates $B$} if $B\subset N^+(A)$. A \emph{complete multidigraph} is a digraph where parallel edges are allowed and in which there is at least one arc between each pair of distinct vertices. Let $D$ be a complete multidigraph whose arcs are the disjoint union of $k$ quasi orders $\prec_1, \dots , \prec_k$ (parallel arcs are allowed). Define $N^-_i(x)$ (resp.\ $N^+_i(x)$) as the closed in-neighborhood (resp.\ out-neighborhood) of the digraph induced by $\prec_i$. Proving a conjecture of Erdős, and of Sands, Sauer and Woodrow \cite{sandssauer}, Bousquet, Lochet and Thomassé recently showed the following. \begin{theorem}[Bousquet, Lochet, Thomassé~\cite{esswproof}]\label{thm:multi_essw_old} For every $k$, there exists an integer $f(k)$ such that if $D$ is a complete multidigraph whose arcs are the union of $k$ quasi orders, then $D$ has a dominating set of size at most $f(k)$. \end{theorem} We show the following generalization of Theorem \ref{thm:multi_essw_old}.
\begin{theorem}\label{thm:multi_essw_new} For every pair of positive integers $k$ and $l$, there exists an integer $f(k,l)$ such that if $D=(V,A)$ is a complete multidigraph whose arcs are the union of $k$ quasi orders $\prec_1,\dots, \prec_k$, then $V$ contains a family of pairwise disjoint subsets $S_{i}^j$ for $i\in [k]$, $j\in [l]$ with the following properties: \begin{itemize} \item $|\bigcup\limits_{i,j}S_{i}^j|\le f(k,l)$ \item For each vertex $v\in V\setminus \bigcup\limits_{i,j}S_{i}^j$ there is an $i\in [k]$ such that for each $j\in [l]$ there is an edge of $\prec_i$ from a vertex of $S_{i}^j$ to $v$. \end{itemize} \end{theorem} Note that disjointness is the real difficulty here; without it, the theorem would follow trivially from repeated applications of Theorem \ref{thm:multi_essw_old}. We saw no way to derive Theorem \ref{thm:multi_essw_new} from Theorem \ref{thm:multi_essw_old}, but with an extra modification the proof goes through. The full proof of Theorem \ref{thm:multi_essw_new} can be found in Appendix \ref{app:essw}. \subsection{Hadwiger's Illumination conjecture and pseudolines}\label{sec:illum} Hadwiger's Illumination conjecture has a number of equivalent formulations and names.\footnote{These include names such as Levi–Hadwiger Conjecture, Gohberg–Markus Covering Conjecture, Hadwiger Covering Conjecture, Boltyanski–Hadwiger Illumination Conjecture.} For a recent survey, see \cite{MR3816868}. We will use the following version of the conjecture. Let $\mathbb{S}^{d-1}$ denote the unit sphere in $\mathbb R^d$. For a convex body $C$, let $\partial C$ denote the boundary of $C$ and let $int(C)$ denote its interior. A direction $u\in \mathbb{S}^{d-1}$ \emph{illuminates} $b\in \partial C$ if $\{b+\lambda u:\lambda>0 \}\cap int (C)\ne \emptyset$. \begin{conjecture} The boundary of any convex body in $\mathbb{R}^d$ can be illuminated by $2^d$ or fewer directions. Furthermore, the $2^d$ lights are necessary if and only if the body is a parallelepiped. \end{conjecture} The conjecture is open in general. The $d=2$ case was settled by Levi \cite{MR76368} in 1955. For $d=3$ the best result is due to Papadoperakis \cite{MR1689273}, who showed that 16 lights are enough. In what follows, we make an interesting connection between the Illumination conjecture and pseudolines. Roughly speaking, we show that the Illumination conjecture implies that for any convex body in the plane the boundary can be broken into three parts such that the translates of each part behave similarly to pseudolines, i.e., we get three pseudoline arrangements from the translates of the three parts. To put this into precise terms, we need some technical definitions and statements. Fix a body $C$ and an injective parametrization of $\partial C$, $\gamma:[0,1]\rightarrow \partial C$, that follows $\partial C$ counterclockwise. For each point $p$ of $\partial C$ there is a set of possible tangents touching at $p$. Let $g(p)\subset \mathbb{S}^1$ denote the Gauss image of $p$, i.e., $g(p)$ is the set of unit outer normals of the tangent lines touching at $p$. Note that $g(p)$ is an arc of $\mathbb{S}^1$ and $g(p)$ is a proper subset of $\mathbb{S}^1$. Let $g_+:\partial C\rightarrow\mathbb{S}^1$ be the function that assigns to $p$ the counterclockwise last element of $g(p)$. (See Figure \ref{fig:gauss_tan} left.) Similarly let $g_-$ be the function that assigns to $p$ the clockwise last element of $g(p)$. Thus, $g(p)$ is the arc from $g_-(p)$ to $g_+(p)$. Let $|g(p)|$ denote the length of $g(p)$.
\begin{figure}[!ht] \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(5.824158146704215,1.6939909822621093) rectangle (10.8,4.4022061705050906); \draw [shift={(7.,2.)},line width=1.0] plot[domain=0.5235987755982986:1.5707963267948966,variable=\t]({1.*2.*cos(\t r)+0.*2.*sin(\t r)},{0.*2.*cos(\t r)+1.*2.*sin(\t r)}); \draw [shift={(7.,4.)},line width=1.0] plot[domain=4.71238898038469:5.759586531581288,variable=\t]({1.*2.*cos(\t r)+0.*2.*sin(\t r)},{0.*2.*cos(\t r)+1.*2.*sin(\t r)}); \draw [shift={(7.,3.)},line width=1.0] plot[domain=1.5707963267948966:4.71238898038469,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)}); \draw [line width=1.0,domain=5.824158146704215:10.274171485061972] plot(\x,{(--18.124355652982153-1.7320508075688785*\x)/1.}); \draw [line width=1.0,domain=5.824158146704215:10.274171485061972] plot(\x,{(-12.12435565298215--1.7320508075688785*\x)/1.}); \draw [->,line width=1.pt] (8.732050807568879,3.) -- (9.832456454322593,3.6353194963710407); \draw [->,line width=1.pt] (8.732050807568879,3.) -- (9.844045808452828,2.3579893869021333); \draw (8.15,3.25) node[anchor=north west] {$p$}; \draw (9.7,2.95) node[anchor=north west] {$g_-(p)$}; \draw (9.7,3.8) node[anchor=north west] {$g_+(p)$}; \end{tikzpicture} ~~~~~~~~ \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm] \clip(1.010405779360095,0.7779725023481115) rectangle (9.945938145792228,4.969084292744436); \draw (4.75,3.7) node[anchor=north west] {$p$}; \draw (4.9,1.4) node[anchor=north west] {$q$}; \draw [line width=1.0,dash pattern=on 3pt off 3pt] (5.52700500582115,3.969615338937309)-- (5.701093102682993,1.3323544637799518); \draw [line width=1.0,dash pattern=on 3pt off 3pt] (4.691398477047223,3.8957218914504184)-- (4.86141088560568,1.3202036253261604); \draw [line width=1.0] (5.516440449569205,1.8865088084802843)-- (4.995922845299596,1.0326578708773406); \draw [line width=1.0] (8.392512263686044,1.2568699665550929)-- (8.913029867955656,2.1107209041580446); \draw [line width=1.0] (3.0820370497082816,4.613210333135324)-- (4.995922845299596,4.032657870877341); \draw [line width=1.0] (8.39251226368604,4.256869966555094)-- (6.478626468094716,4.837422428813083); \draw [shift={(3.495922845299593,2.532657870877341)},line width=1.0] plot[domain=-0.30951591373703113:0.7853981633974475,variable=\t]({1.*2.1213203435596446*cos(\t r)+0.*2.1213203435596446*sin(\t r)},{0.*2.1213203435596446*cos(\t r)+1.*2.1213203435596446*sin(\t r)}); \draw [shift={(3.495922845299593,2.532657870877341)},line width=1.0] plot[domain=1.7671635199760698:3.9269908169872414,variable=\t]({1.*2.121320343559645*cos(\t r)+0.*2.121320343559645*sin(\t r)},{0.*2.121320343559645*cos(\t r)+1.*2.121320343559645*sin(\t r)}); \draw [shift={(6.892512263686043,2.756869966555093)},line width=1.0] plot[domain=-0.3095159137370276:0.7853981633974495,variable=\t]({1.*2.121320343559643*cos(\t r)+0.*2.121320343559643*sin(\t r)},{0.*2.121320343559643*cos(\t r)+1.*2.121320343559643*sin(\t r)}); \draw [shift={(6.892512263686043,2.756869966555093)},line width=1.0] plot[domain=1.7671635199760762:3.9269908169872414,variable=\t]({1.*2.1213203435596544*cos(\t r)+0.*2.1213203435596544*sin(\t r)},{0.*2.1213203435596544*cos(\t r)+1.*2.1213203435596544*sin(\t r)}); \draw (8.676673546001679,4.6) node[anchor=north west] {$J_2$}; \draw (0.8,4.6) node[anchor=north west] {$J_1$}; \end{tikzpicture} \caption{Extremal tangents at a boundary point (on the left) and parallel tangents on two 
intersecting translates (on the right).} \label{fig:gauss_tan} \end{figure} \begin{obs}\label{obs:continuity} $g_+\circ \gamma$ is continuous from the right and $g_-\circ \gamma$ is continuous from the left. \end{obs} For $t_1<t_2$ let $\gamma_{[t_1,t_2]}$ denote the restriction of $\gamma$ to the interval $[t_1,t_2]$. For $t_1>t_2$ let $\gamma_{[t_1,t_2]}$ denote the concatenation of $\gamma_{[t_1,1]}$ and $\gamma_{[0,t_2]}$. When it leads to no confusion, we identify $\gamma_{[t_1,t_2]}$ with its image, which is a connected part of the boundary $\partial C$. For such a $J=\gamma_{[t_1,t_2]}$, let $g(J)=\bigcup\limits_{p\in J}g(p)$. Clearly, $g(J)$ is an arc of $\mathbb{S}^1$ from $g_-(\gamma(t_1))$ to $g_+(\gamma(t_2))$; let $|g(J)|$ denote the length of this arc. \begin{lemma} Let $C$ be a convex body and assume that $J$ is a connected part of $\partial C$ such that $|g(J)|<\pi$. Then there are no two translates of $J$ that intersect in more than one point. \end{lemma} \begin{proof} Suppose $J$ has two translates $J_1$ and $J_2$ that intersect in two points, $p$ and $q$. Then both $J_1$ and $J_2$ have a tangent that is parallel to the segment $pq$. (See Figure \ref{fig:gauss_tan} right.) This shows that $J$ has two different tangents parallel to $pq$, and therefore $|g(J)|\ge \pi$. \end{proof} \begin{lemma}\label{lemma:our_illumination} For a convex body $C$, which is not a parallelogram, and an injective parametrization $\gamma$ of $\partial C$, we can pick $0\le t_1<t_2<t_3\le 1$ such that $|g(\gamma_{[t_1,t_2]})|,|g(\gamma_{[t_2,t_3]})|$ and $|g(\gamma_{[t_3,t_1]})|$ are each strictly smaller than $\pi$. \end{lemma} \begin{proof} We use the 2-dimensional case of the Illumination conjecture (proved by Levi \cite{MR76368}). If $C$ is not a parallelogram, we can pick three directions, $u_1,u_2$ and $u_3$, that illuminate $C$. Pick $t_1$ such that $\gamma(t_1)$ is illuminated by both $u_1$ and $u_2$. To see why this is possible, suppose that the parts illuminated by $u_1$ and $u_2$ are disjoint. Each light illuminates a connected, open-ended part of the boundary. So in this case there are two disjoint parts of the boundary that are not illuminated. If $u_3$ illuminates both, then it illuminates everything that is illuminated by $u_1$ or everything that is illuminated by $u_2$. But two lights are never enough for illumination, a contradiction. Using the same argument, pick $t_2$ and $t_3$ such that $\gamma(t_2)$ is illuminated by both $u_2$ and $u_3$ and $\gamma(t_3)$ is illuminated by both $u_3$ and $u_1$. Note that $u_1$ illuminates exactly those points $p$ for which $g_+(p)<u_1+\pi/2$ and $g_-(p)>u_1-\pi/2$. Therefore, $|g(\gamma_{[t_3,t_1]})|<u_1+\pi/2-(u_1-\pi/2)=\pi$. Similarly $|g(\gamma_{[t_1,t_2]})|<\pi$ and $|g(\gamma_{[t_2,t_3]})|<\pi$. \end{proof} Observation \ref{obs:continuity} and Lemma \ref{lemma:our_illumination} immediately imply the following statement. \begin{lemma}\label{lemma:our_illumination_epsilon} For a convex body $C$, which is not a parallelogram, and an injective parametrization $\gamma$ of $\partial C$, we can pick $0\le t_1<t_2<t_3\le 1$ and $\varepsilon>0$ such that $|g(\gamma_{[t_1-\varepsilon,t_2+\varepsilon]})|$, $|g(\gamma_{[t_2-\varepsilon,t_3+\varepsilon]})|$ and $|g(\gamma_{[t_3-\varepsilon,t_1+\varepsilon]})|$ are each strictly smaller than $\pi$.
\end{lemma} \section{Proof of Theorem \ref{thm:general_three_col}}\label{sec:proof} \subsection{Quasi orderings on planar point sets} Cones provide a natural way to define quasi orderings on point sets (see \cite{TT07} for an example where this idea was used). A \emph{cone} is a closed region in the plane that is bounded by two rays that emanate from the origin. For a cone $K$ let $-K$ denote the cone that is the reflection of $K$ across the origin and let $p+K$ denote the translate of $K$ by the vector $p$. \begin{obs}\label{obs:cones} For any $p,q\in \mathbb{R}^2$ and cone $K$, the following are equivalent (see Figure \ref{fig:basic_cones}): \begin{itemize} \item $p\in q+K$ \item $q \in p+(-K)$ \item $p+K\subset q+K$ \end{itemize} \end{obs} \begin{figure}[!ht] \centering \definecolor{zzttqq}{rgb}{0.6,0.2,0.} \scalebox{0.7}{ \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm] \clip(0.7905167827637798,-0.6536209763473118) rectangle (28.955457063301157,10.602349828485595); \fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (21.914471376040254,4.093030278329455) -- (23.914471376040254,7.093030278329451) -- (24.914471376040254,4.093030278329455) -- cycle; \fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (18.,2.) -- (20.,5.) -- (21.,2.) -- cycle; \fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (2.,2.) -- (4.,5.) -- (5.,2.) -- cycle; \fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (8.339322178085318,5.977538969999748) -- (6.339322178085318,2.977538969999748) -- (5.339322178085318,5.977538969999748) -- cycle; \draw [line width=1.0] (21.914471376040254,4.093030278329455)-- (23.914471376040254,7.093030278329451); \draw [line width=1.0] (23.914471376040254,7.093030278329451)-- (25.914471376040254,10.093030278329444); \draw [line width=1.0] (21.914471376040254,4.093030278329455)-- (24.914471376040254,4.093030278329455); \draw [line width=1.0] (24.914471376040254,4.093030278329455)-- (27.914471376040247,4.093030278329455); \draw [line width=1.0] (18.,2.)-- (20.,5.); \draw [line width=1.0] (20.,5.)-- (22.,8.); \draw [line width=1.0] (18.,2.)-- (21.,2.); \draw [line width=1.0] (21.,2.)-- (24.,2.); \draw (16.899061667902384,2.598103922826639) node[anchor=north west] {$q$}; \draw (20.801131546911115,4.799271546882852) node[anchor=north west] {$p$}; \draw [line width=1.0] (2.,2.)-- (4.,5.); \draw [line width=1.0] (4.,5.)-- (6.,8.); \draw [line width=1.0] (2.,2.)-- (5.,2.); \draw [line width=1.0] (5.,2.)-- (8.,2.); \draw [line width=1.0] (8.339322178085318,5.977538969999748)-- (6.339322178085318,2.977538969999748); \draw [line width=1.0] (6.339322178085318,2.977538969999748)-- (4.339322178085318,-0.022461030000251903); \draw [line width=1.0] (8.339322178085318,5.977538969999748)-- (5.339322178085318,5.977538969999748); \draw [line width=1.0] (5.339322178085318,5.977538969999748)-- (2.339322178085318,5.977538969999748); \draw (8.844789225333082,6.550200338745749) node[anchor=north west] {$p$}; \draw (0.9405963934948848,2.6481304597370077) node[anchor=north west] {$q$}; \begin{scriptsize} \draw [fill=black] (21.914471376040254,4.093030278329455) circle (2.5pt); \draw [fill=black] (18.,2.) circle (2.5pt); \draw [fill=black] (2.,2.) 
circle (2.5pt); \draw [fill=black] (8.339322178085318,5.977538969999748) circle (2.5pt); \end{scriptsize} \end{tikzpicture}}\caption{Basic properties of cones.} \label{fig:basic_cones} \end{figure} For a cone $K$ let $\prec_K$ denote the quasi ordering on the points of the plane where a point $p$ is bigger than a point $q$ if and only if $p+K$ contains $q$, i.e., when interpreted as a digraph, $qp$ is an edge of $\prec_K$. By Observation \ref{obs:cones}, this ordering is indeed transitive. \begin{figure}[!ht] \centering \definecolor{qqttcc}{rgb}{0.,0.2,0.8} \definecolor{yqqqqq}{rgb}{0.5019607843137255,0.,0.} \definecolor{qqwuqq}{rgb}{0.,0.39215686274509803,0.} \definecolor{qqttzz}{rgb}{0.,0.2,0.6} \scalebox{0.7}{ \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.9cm,y=0.9cm] \clip(2.8081291197505673,2.5852872443375357) rectangle (19.480030573405248,6.726256287901889); \draw [shift={(14.,6.)},line width=1.0,color=qqttzz,fill=qqttzz,fill opacity=1.0] (0,0) -- (-135.:0.5401263969866527) arc (-135.:-71.56505117707799:0.5401263969866527) -- cycle; \draw [shift={(15.,3.)},line width=1.0,color=qqwuqq,fill=qqwuqq,fill opacity=1.0] (0,0) -- (108.43494882292202:0.5401263969866527) arc (108.43494882292202:161.56505117707798:0.5401263969866527) -- cycle; \draw [shift={(12.,4.)},line width=1.0,color=yqqqqq,fill=yqqqqq,fill opacity=1.0] (0,0) -- (-18.43494882292201:0.5401263969866527) arc (-18.43494882292201:45.:0.5401263969866527) -- cycle; \draw [shift={(3.,4.)},line width=1.0,color=yqqqqq,fill=yqqqqq,fill opacity=1.0] (0,0) -- (-18.43494882292201:0.5401263969866527) arc (-18.43494882292201:45.:0.5401263969866527) -- cycle; \draw [line width=1.0] (12.,4.)-- (14.,6.); \draw [line width=1.0] (14.,6.)-- (15.,3.); \draw [line width=1.0] (15.,3.)-- (12.,4.); \draw [line width=1.0,color=qqttcc] (16.4144675126188,5.222364386041088)-- (16.,4.); \draw [line width=1.0,color=qqttcc] (17.39966534681431,6.059782545107268)-- (17.8,4.2); \draw [line width=1.0,color=qqttcc] (17.39966534681431,6.059782545107268)-- (17.,3.); \draw [line width=1.0,color=qqttcc] (18.2124535600256,5.444033898735077)-- (17.8,4.2); \draw [line width=1.0,color=qqttzz] (17.8,4.2)-- (17.,3.); \draw [line width=1.0,color=qqwuqq] (19.185336421293663,3.732252661820378)-- (17.39966534681431,6.059782545107268); \draw [line width=1.0,color=qqwuqq] (19.185336421293663,3.732252661820378)-- (18.2124535600256,5.444033898735077); \draw [line width=1.0,color=qqwuqq] (17.8,4.2)-- (16.4144675126188,5.222364386041088); \draw [line width=1.0,color=qqwuqq] (17.,3.)-- (16.,4.); \draw [line width=1.0,color=yqqqqq] (16.4144675126188,5.222364386041088)-- (17.39966534681431,6.059782545107268); \draw [line width=1.0,color=yqqqqq] (16.,4.)-- (18.2124535600256,5.444033898735077); \draw [line width=1.0,color=yqqqqq] (17.8,4.2)-- (19.185336421293663,3.732252661820378); \draw [line width=1.0,color=yqqqqq] (17.,3.)-- (19.185336421293663,3.732252661820378); \draw [line width=1.0,color=yqqqqq] (16.,4.)-- (17.8,4.2); \draw [line width=1.0,color=qqwuqq] (18.2124535600256,5.444033898735077)-- (17.39966534681431,6.059782545107268); \draw [line width=1.0,color=qqwuqq] (16.4144675126188,5.222364386041088)-- (19.185336421293663,3.732252661820378); \draw [line width=1.0,color=qqttcc] (17.,3.)-- (18.2124535600256,5.444033898735077); \draw [line width=1.0,color=yqqqqq] (16.4144675126188,5.222364386041088)-- (18.2124535600256,5.444033898735077); \draw [line width=1.0,color=qqttcc] (16.,4.)-- (17.39966534681431,6.059782545107268); \draw [line width=1.0,color=yqqqqq] 
(16.,4.)-- (19.185336421293663,3.732252661820378); \draw [line width=1.0,color=qqttcc] (17.,3.)-- (16.4144675126188,5.222364386041088); \draw [line width=1.0] (3.,4.)-- (5.,6.); \draw [line width=1.0] (6.,3.)-- (3.,4.); \draw [line width=1.0,color=yqqqqq] (8.399665346814308,6.059782545107264)-- (7.414467512618802,5.222364386041083); \draw [line width=1.0,color=yqqqqq] (7.804180545110583,5.553620463659098) -- (7.802122827418463,5.764536527101339); \draw [line width=1.0,color=yqqqqq] (7.804180545110583,5.553620463659098) -- (8.012010032014645,5.517610404047007); \draw [line width=1.0,color=yqqqqq] (9.212453560025601,5.444033898735072)-- (7.414467512618802,5.222364386041083); \draw [line width=1.0,color=yqqqqq] (8.17944361442798,5.316676508181941) -- (8.293633375274837,5.494019448661143); \draw [line width=1.0,color=yqqqqq] (8.17944361442798,5.316676508181941) -- (8.333287697369565,5.172378836115012); \draw [line width=1.0,color=yqqqqq] (8.8,4.2)-- (7.,4.); \draw [line width=1.0,color=yqqqqq] (7.765794289841775,4.085088254426863) -- (7.882105905312236,4.26104685218987); \draw [line width=1.0,color=yqqqqq] (7.765794289841775,4.085088254426863) -- (7.917894094687763,3.938953147810129); \draw [line width=1.0,color=yqqqqq] (10.185336421293664,3.732252661820378)-- (8.8,4.2); \draw [line width=1.0,color=yqqqqq] (9.36473230652311,4.009322826499032) -- (9.544504005353444,4.119649415858658); \draw [line width=1.0,color=yqqqqq] (9.36473230652311,4.009322826499032) -- (9.440832415940221,3.81260324596172); \draw [line width=1.0,color=yqqqqq] (10.185336421293664,3.732252661820378)-- (7.,4.); \draw [line width=1.0,color=yqqqqq] (8.458111127749135,3.8774366906380453) -- (8.606240642320259,4.027594830387427); \draw [line width=1.0,color=yqqqqq] (8.458111127749135,3.8774366906380453) -- (8.579095778973405,3.704657831432951); \draw [line width=1.0,color=yqqqqq] (10.185336421293664,3.732252661820378)-- (8.,3.); \draw [line width=1.0,color=yqqqqq] (8.96463306203684,3.323224891359415) -- (9.041186483185905,3.5197685092421795); \draw [line width=1.0,color=yqqqqq] (8.96463306203684,3.323224891359415) -- (9.144149938107761,3.2124841525781975); \begin{scriptsize} \draw [fill=black] (16.4144675126188,5.222364386041088) circle (2.5pt); \draw [fill=black] (16.,4.) circle (2.5pt); \draw [fill=black] (17.,3.) circle (2.5pt); \draw [fill=black] (19.185336421293663,3.732252661820378) circle (2.5pt); \draw [fill=black] (17.8,4.2) circle (2.5pt); \draw [fill=black] (18.2124535600256,5.444033898735077) circle (2.5pt); \draw [fill=black] (17.39966534681431,6.059782545107268) circle (2.5pt); \draw [fill=black] (7.414467512618802,5.222364386041083) circle (2.5pt); \draw [fill=black] (7.,4.) circle (2.5pt); \draw [fill=black] (8.,3.) circle (2.5pt); \draw [fill=black] (10.185336421293664,3.732252661820378) circle (2.5pt); \draw [fill=black] (8.8,4.2) circle (2.5pt); \draw [fill=black] (9.212453560025601,5.444033898735072) circle (2.5pt); \draw [fill=black] (8.399665346814308,6.059782545107264) circle (2.5pt); \end{scriptsize} \end{tikzpicture}}\caption{Quasi orderings on a point set.} \label{fig:ordering} \end{figure} Suppose the cones $K_1, K_2, K_3$ correspond to the three corners of a triangle, in other words the cones $K_1,-K_3,K_2,-K_1,K_3,-K_2$ partition the plane around the origin in this order. Then we will say that $K_1, K_2, K_3$ is a \emph{set of tri-partition} cones. In this case the intersection of any translates of $K_1, K_2, K_3$ forms a (sometimes degenerate) triangle. 
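The quasi orders $\prec_K$ are also easy to handle computationally; the following small Python sketch (where each cone is assumed to be given by two linearly independent boundary direction vectors spanning an angle less than $\pi$) decides whether a point $p$ is bigger than a point $q$ in $\prec_K$, i.e., whether $q\in p+K$. With a set of tri-partition cones, any two distinct points pass this test for at least one of the three cones and one of the two orderings of the pair, which is exactly the observation below.
\begin{verbatim}
import numpy as np

def in_cone(w, u, v):
    # True iff w = alpha*u + beta*v with alpha, beta >= 0, where u and v
    # are linearly independent direction vectors of the two boundary rays.
    alpha, beta = np.linalg.solve(np.column_stack((u, v)), w)
    return alpha >= 0 and beta >= 0

def bigger(p, q, u, v):
    # p is bigger than q in the quasi order of the cone K spanned by u, v,
    # i.e. q lies in the translated cone p + K.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return in_cone(q - p, np.asarray(u, float), np.asarray(v, float))
\end{verbatim}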
\begin{obs} Let $K_1,K_2,K_3$ be a set of tri-partition cones and let $P$ be a planar point set. Then any two distinct points of $P$ are comparable in either $\prec_{K_1}$, $\prec_{K_2}$ or $\prec_{K_3}$. (See Figure \ref{fig:ordering}.) \end{obs} In other words, when interpreted as digraphs, the union of $\prec_{K_1}$, $\prec_{K_2}$ and $\prec_{K_3}$ forms a complete multidigraph on $P$. As a warm-up for the proof of Theorem \ref{thm:general_three_col}, we show the following theorem. \begin{theorem}\label{thm:three_cones} There exists a positive integer $m$ such that for any point set $P$, and any set of tri-partition cones $K_1,K_2,K_3$, we can three-color $P$ such that no translate of $K_1$, $K_2$ or $K_3$ that contains at least $m$ points of $P$ is monochromatic. \end{theorem} \begin{proof} We set $m$ to be $f(3,2)+13$ according to Theorem \ref{thm:multi_essw_new}. Consider the three quasi orders $\prec_{K_1}$, $\prec_{K_2}$ and $\prec_{K_3}$. Their union gives a complete multidigraph on $P$, hence we can apply Theorem \ref{thm:multi_essw_new} with $k=3$ and $l=2$, resulting in subsets $S_i^j$ for $i\in[3],j\in [2]$. Let $S=\bigcup\limits_{i\in [3],j\in[2]}S_i^j$. For each point $p\in P\setminus S$ there is an $i$ such that $\prec_{K_i}$ has an edge from a vertex of $S_i^1$ to $p$ and an edge from a vertex of $S_i^2$ to $p$. Let $P_1,P_2,P_3$ be the partition of $P\setminus S$ according to this $i$ value. We start by coloring the points of $S$. Color the points of $S_1^1\cup S_2^1 \cup S_3^1$ with the first color and color the points of $S_1^2\cup S_2^2\cup S_3^2$ with the second color. Any translate of $K_1$, $K_2$ or $K_3$ that contains at least $f(3,2)+13$ points of $P$ must contain $5$ points from one of $P_1$, $P_2$ or $P_3$ by the pigeonhole principle. (Note that the cone might contain all points of $S$.) Therefore, it is enough to show that for each $i\in [3]$ the points of $P_i$ can be three-colored such that no translate of $K_1$, $K_2$, or $K_3$ that contains at least $5$ points of $P_i$ is monochromatic. Consider $P_1$; the proof is the same for $P_2$ and $P_3$. Take a translate of $K_1$ and suppose that it contains a point $p$ of $P_1$. By Theorem \ref{thm:multi_essw_new}, there is an edge of $\prec_{K_1}$ from a vertex of $S_1^1$ to $p$ and another edge from a vertex of $S_1^2$ to $p$. Thus any such translate, which contains $p+K_1$ by Observation \ref{obs:cones}, contains a point from $S_1^1$ and another point from $S_1^2$, and hence it cannot be monochromatic. Therefore, we only have to consider the translates of $K_2$ and $K_3$. Two translates of a cone intersect at most once on their boundary. Hence, the translates of $K_2$ form a pseudohalfplane arrangement, and so do the translates of $K_3$. Therefore, by Corollary \ref{cor:pseudohalfplane}, there is a proper three-coloring for the translates of $K_2$ and $K_3$ together. \end{proof} \begin{remark} From Theorem \ref{thm:three_cones}, it follows using standard methods (see Section \ref{sec:proofend}) that Theorem \ref{thm:general_three_col} holds for triangles. This was of course known before, even for two-colorings of homothetic copies of triangles. Our proof cannot be modified for homothets, but a two-coloring would follow if instead of Corollary \ref{cor:pseudohalfplane} we applied a more careful analysis for the two cones. \end{remark} \subsection{Proof of Theorem \ref{thm:general_three_col}}\label{sec:proofend} If $C$ is a parallelogram, then our proof method fails.
Luckily, translates of parallelograms (and other symmetric polygons) were the first for which it was shown that even two colors are enough \cite{Pach86}; in fact, by now we know that two colors are enough even for homothets of parallelograms \cite{homotsquare}. So from now on we assume that $C$ is not a parallelogram. The proof of Theorem \ref{thm:general_three_col} relies on the same ideas as we used for Theorem \ref{thm:three_cones}. We partition $P$ into several parts, and for each part $P_i$, we divide the translates of $C$ into three families such that two of the families each form a pseudohalfplane arrangement over $P_i$, while the third family will only contain translates that are automatically non-monochromatic. Then Corollary \ref{cor:pseudohalfplane} provides us with a proper three-coloring. As in the proof of Theorem \ref{thm:three_cones}, this is not done directly. First, we divide the plane using a grid, and then in each small square we will use Theorem \ref{thm:multi_essw_new} to discard some of the translates of $C$ at the cost of a bounded number of points.\\ The first step of the proof is a classic divide-and-conquer idea \cite{Pach86}. We choose a constant $r=r(C)$ depending only on $C$ and divide the plane into a grid of squares of side length $r$. Since each translate of $C$ intersects some bounded number of squares, by the pigeonhole principle we can find for any positive integer $m$ another integer $m'$ such that the following holds: each translate $\hat C$ of $C$ that contains at least $m'$ points intersects a square $Q$ such that $\hat C\cap Q$ contains at least $m$ points. For example, choosing $m'=m(\mathrm{diam}(C)/r+2)^2$ is sufficient, where $\mathrm{diam}(C)$ denotes the diameter of $C$. Therefore, it is enough to show the following localized version of Theorem \ref{thm:general_three_col}, since applying it separately for the points in each square of the grid provides a proper three-coloring of the whole point set.
\begin{theorem}\label{thm:local_three_col} There is a positive integer $m$ such that for any convex body $C$ there is a positive real $r$ such that any finite point set $P$ in the plane that lies in a square of side length $r$ can be three-colored in a way that there is no translate of $C$ containing at least $m$ points of $P$, all of the same color. \end{theorem}
We will show that $m$ can be chosen to be $f(3,2)+13$ according to Theorem \ref{thm:multi_essw_new}, independently of $C$.
\begin{proof} We pick $r$ the following way. First, we fix an injective parametrization $\gamma$ of $\partial C$ and then fix $t_1,t_2,t_3$ and $\varepsilon$ according to Lemma \ref{lemma:our_illumination_epsilon}. Let $\ell_1,\ell_2,\ell_3$ be the tangents of $C$ touching at $\gamma(t_1),\gamma(t_2)$ and $\gamma(t_3)$. Let $K_{1,2}$, $K_{2,3}$, $K_{3,1}$ be the set of tri-partition cones bordered by $\ell_1,\ell_2,\ell_3$, such that $K_{i,i+1}$ is bordered by $\ell_i$ on its counterclockwise side, and by $\ell_{i+1}$ on its clockwise side (see Figure \ref{fig:cone_in_C} left, and note that we always treat $3+1$ as 1 in the subscript). For a translate $\hat{C}$ of $C$ we will denote by $\hat{\gamma}$ the translated parametrization of $\partial \hat{C}$, i.e., $\hat{\gamma}(t)=\gamma(t)+v$ if $\hat{C}$ was translated by vector $v$. Our aim is to choose $r$ small enough to satisfy the following two properties for each $i\in [3]$.
\begin{enumerate}[label=(\Alph*)] \item Let $\hat C$ be a translate of $C$, and $Q$ be a square of side length $r$ such that $\partial \hat C\cap Q\subset \hat{\gamma}_{[t_i+\varepsilon/2,t_{i+1}-\varepsilon/2]}$ (see Figure \ref{fig:cone_in_C} right). Then for any translate $K$ of $K_{i,i+1}$ whose apex is in $Q\cap \hat C$, we have $K\cap Q\subset \hat C$. (I.e., $r$ is small with respect to $C$.) \item Let $\hat C$ be a translate of $C$, and $Q$ be a square of side length $r$ such that $\hat{\gamma}_{[t_i-\varepsilon/2,t_{i+1}+\varepsilon/2]}$ intersects $Q$. Then $\partial \hat C\cap Q\subset \hat{\gamma}_{[t_i-\varepsilon,t_{i+1}+\varepsilon]}$. (I.e., $r$ is small compared to $\varepsilon$.) \end{enumerate} \begin{figure}[!ht] \centering \definecolor{zzttqq}{rgb}{0.6,0.2,0.} \definecolor{uuuuuu}{rgb}{0.26666666666666666,0.26666666666666666,0.26666666666666666} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm] \clip(2.1640924696902344,-3.291941380065454) rectangle (16.64624177595318,6.606761736183365); \fill[line width=1.0,fill=black,fill opacity=0.25] (3.768827753322032,4.392669166650977) -- (4.123243589875512,3.4575812484314694) -- (4.758793204447675,4.5339787755868235) -- cycle; \fill[line width=1.0,fill=black,fill opacity=0.25] (14.058734708892569,5.861470654848949) -- (13.068769257766968,5.720161045913108) -- (13.374479808664983,5.132227738178157) -- cycle; \fill[line width=1.0,fill=black,fill opacity=0.25] (6.332889037089297,-2.3723296870708763) -- (7.017143937316888,-1.6430867704000818) -- (5.978473200535815,-1.4372417688513646) -- cycle; \draw [shift={(7.958515351695592,2.108914472950761)},line width=1.0] plot[domain=2.6956780077804776:4.321854967035546,variable=\t]({1.*3.1083274241025274*cos(\t r)+0.*3.1083274241025274*sin(\t r)},{0.*3.1083274241025274*cos(\t r)+1.*3.1083274241025274*sin(\t r)}); \draw [shift={(7.261346221122771,2.5938329918446867)},line width=1.0] plot[domain=0.13035761915140343:2.755875028289039,variable=\t]({1.*2.2743120841793814*cos(\t r)+0.*2.2743120841793814*sin(\t r)},{0.*2.2743120841793814*cos(\t r)+1.*2.2743120841793814*sin(\t r)}); \draw [shift={(6.496593035223344,2.298949087855251)},line width=1.0] plot[domain=-1.4801162709845777:0.19311405339801058,variable=\t]({1.*3.0769654110024027*cos(\t r)+0.*3.0769654110024027*sin(\t r)},{0.*3.0769654110024027*cos(\t r)+1.*3.0769654110024027*sin(\t r)}); \draw [line width=1.0,domain=2.1640924696902344:16.64624177595318] plot(\x,{(--12.223776958212898--0.4526542136088514*\x)/3.17113631658728}); \draw [line width=1.0,domain=2.1640924696902344:16.64624177595318] plot(\x,{(--18.39532881276564-3.3853951579956414*\x)/1.2831281782249193}); \draw [line width=1.0,domain=2.1640924696902344:16.64624177595318] plot(\x,{(-21.960768293888048--2.565850114616926*\x)/2.407559228948587}); \draw (9.4,6.6) node[anchor=north west] {$\ell_1$}; \draw (4.6,-0.1) node[anchor=north west] {$\ell_2$}; \draw (6.6,2.6) node[anchor=north west] {$C$}; \draw (10.94582130433904,2.4) node[anchor=north west] {$\ell_3$}; \draw [shift={(3.768827753322032,4.392669166650977)},line width=1.0,fill=black,fill opacity=0.25] plot[domain=-1.2085070485393068:0.14178417369315438,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)}); \draw [shift={(6.332889037089297,-2.3723296870708763)},line width=1.0,fill=black,fill opacity=0.25] plot[domain=0.817214862644781:1.9330856050504859,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)}); \draw 
[shift={(14.058734708892569,5.861470654848949)},line width=1.0,fill=black,fill opacity=0.4000000059604645] plot[domain=3.283376827282948:3.958807516234576,variable=\t]({1.*1.*cos(\t r)+0.*1.*sin(\t r)},{0.*1.*cos(\t r)+1.*1.*sin(\t r)}); \draw (13.6,5.5) node[anchor=north west] {$K_{3,1}$}; \draw (2.203321433221302,4.026165982141844) node[anchor=north west] {$K_{1,2}$}; \draw (6.7,-1.9) node[anchor=north west] {$K_{2,3}$}; \begin{scriptsize} \draw [fill=uuuuuu] (6.939964069909312,4.845323380259829) circle (2.0pt); \draw [fill=uuuuuu] (8.740448266037884,0.19352042754604953) circle (2.0pt); \draw [fill=uuuuuu] (5.051955931546951,1.0072740086553358) circle (2.0pt); \end{scriptsize} \end{tikzpicture} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.8cm,y=0.8cm] \clip(-0.5212593802625312,0.9024160297185335) rectangle (7.098126520651556,7.480250043437565); \fill[line width=1.0,fill=black,fill opacity=0.30000001192092896] (2.9139611807128176,4.440100887949994) -- (3.068078600743505,3.0862602098415906) -- (4.272853164612676,4.54034726918462) -- cycle; \fill[line width=1.0,color=zzttqq,fill=zzttqq,fill opacity=0.10000000149011612] (2.12382,3.74) -- (3.54248,3.74) -- (3.54248,5.15866) -- (2.12382,5.15866) -- cycle; \draw [shift={(4.663491963072474,3.1523141871657336)},line width=1.0] plot[domain=2.63100772848181:3.9408911121618377,variable=\t]({1.*2.759430143068236*cos(\t r)+0.*2.759430143068236*sin(\t r)},{0.*2.759430143068236*cos(\t r)+1.*2.759430143068236*sin(\t r)}); \draw [shift={(4.858950201988104,2.01321086543119)},line width=1.0] plot[domain=1.0014831356942346:2.3788491897615827,variable=\t]({1.*3.6008052563532615*cos(\t r)+0.*3.6008052563532615*sin(\t r)},{0.*3.6008052563532615*cos(\t r)+1.*3.6008052563532615*sin(\t r)}); \draw [line width=1.0,domain=-0.8212593802625312:7.098126520651556] plot(\x,{(--5.714971243081739-2.5455417413714536*\x)/0.2897773217368045}); \draw [line width=1.0,domain=-0.8212593802625312:7.098126520651556] plot(\x,{(--15.596206877223619--0.21851199420715073*\x)/2.962044052434135}); \draw [shift={(2.9139611807128176,4.440100887949994)},line width=1.0,fill=black,fill opacity=0.30000001192092896] plot[domain=-1.4574470824511945:0.07363728921063928,variable=\t]({1.*1.3625845885147592*cos(\t r)+0.*1.3625845885147592*sin(\t r)},{0.*1.3625845885147592*cos(\t r)+1.*1.3625845885147592*sin(\t r)}); \draw [line width=1.0] (2.9139611807128176,4.440100887949994)-- (3.068078600743505,3.0862602098415906); \draw [line width=1.0] (4.272853164612676,4.54034726918462)-- (2.9139611807128176,4.440100887949994); \draw [line width=1.0] (3.1733300410036582,2.161681526748409)-- (3.068078600743505,3.0862602098415906); \draw [line width=1.0] (4.272853164612676,4.54034726918462)-- (5.154440683911443,4.6053825770666235); \draw (4.3,5.9) node[anchor=north west,rotate=50] {$\hat{\gamma}(t_1)$}; \draw (0.6,3.183055506852521) node[anchor=north west] {$\hat{\gamma}(t_2)$}; \draw (6.487561621261355,6.433415706547415) node[anchor=north west] {$\ell_1$}; \draw (1.3,1.6925588406940881) node[anchor=north west] {$\ell_2$}; \draw [line width=1.0,color=zzttqq] (2.12382,3.74)-- (3.54248,3.74); \draw [line width=1.0,color=zzttqq] (3.54248,3.74)-- (3.54248,5.15866); \draw [line width=1.0,color=zzttqq] (3.54248,5.15866)-- (2.12382,5.15866); \draw [line width=1.0,color=zzttqq] (2.12382,5.15866)-- (2.12382,3.74); \draw (-0.7,3.7) node[anchor=north west] {$\hat{\gamma}(t_2-\varepsilon/2)$}; \draw (3.6,5.9) node[anchor=north west,rotate=50] {$\hat{\gamma}(t_1+\varepsilon/2)$}; \draw 
(4.35,3.919324944352469) node[anchor=north west] {$K$}; \begin{scriptsize} \draw [fill=black] (1.9217694985931228,2.840204202956866) circle (2.0pt); \draw [fill=black] (4.594036229290453,5.60425793853547) circle (2.0pt); \draw [fill=black] (1.9127284810063392,3.370843316127941) circle (2.0pt); \draw [fill=black] (4.030305286656739,5.517372120064994) circle (2.0pt); \end{scriptsize} \end{tikzpicture}
\caption{Selecting the cones (on the left) and Property (A) (on the right).} \label{fig:cone_in_C} \end{figure}
We show that an $r$ satisfying properties (A) and (B) can be found for $i=1$. The argument is the same for $i=2$ and $i=3$, and we can take the smallest among the three resulting $r$-s. First, consider property (A). Since the sides of $K$ are parallel to $\ell_1$ and $\ell_2$, the portion of $K$ that lies ``above'' the segment $\overline{\hat{\gamma}(t_1)\hat{\gamma}(t_2)}$ is in $\hat{C}$. Hence, if we choose $r$ small enough so that $Q$ cannot intersect $\overline{\hat{\gamma}(t_1)\hat{\gamma}(t_2)}$, then property (A) is satisfied. For example, choosing $r$ to be smaller than $\frac{1}{\sqrt{2}}$ times the distance of the segments $\overline{\hat{\gamma}(t_1)\hat{\gamma}(t_2)}$ and $\overline{\hat{\gamma}(t_1+\varepsilon/2)\hat{\gamma}(t_2-\varepsilon/2)}$ works. Using that $\gamma$ is a continuous function on a compact set, we can pick $r$ such that property (B) is satisfied. Therefore, there is an $r$ satisfying properties (A) and (B). \bigskip
The next step is a subdivision of the point set $P$ using Theorem \ref{thm:multi_essw_new}, as we did in the proof of Theorem \ref{thm:three_cones}. The beginning of our argument is exactly the same. Apply Theorem \ref{thm:multi_essw_new} for the graph given by the union of $\prec_{K_{1,2}}$, $\prec_{K_{2,3}}$ and $\prec_{K_{3,1}}$. By Observation \ref{obs:cones}, this is indeed a complete multidigraph on $P$. We apply Theorem \ref{thm:multi_essw_new} with $k=3$ and $l=2$, resulting in subsets $S_i^j$ for $i\in[3],j\in [2]$. Let $S=\bigcup\limits_{i\in [3],j\in[2]}S_i^j$. For each point $p\in P\setminus S$ there is an $i$ such that $\prec_{K_{i,i+1}}$ has an edge from a vertex of $S_{i,1}$ to $p$ and an edge from a vertex of $S_{i,2}$ to $p$. Let $P_1,P_2,P_3$ be the partition of $P\setminus S$ according to this $i$ value. We start by coloring the points of $S$. Color the points of $S_{1,1}\cup S_{2,1} \cup S_{3,1}$ with the first color and color the points of $S_{1,2}\cup S_{2,2}\cup S_{3,2}$ with the second color. Note that $m$ is at least $f(3,2)+13$. Any translate of $C$ that contains $f(3,2)+13$ points of $P$ must contain $5$ points from either $P_1,P_2$ or $P_3$. (Note that the translate might contain all points of $S$.) Therefore, it is enough to show that for each $i\in [3]$ the points of $P_i$ can be colored with three colors such that no translate of $C$ that contains at least $5$ points of $P_i$ is monochromatic.\\ Consider $P_1$; the proof is the same for $P_2$ and $P_3$. We divide the translates of $C$ that intersect $Q$ into four groups. Let $\mathcal{C}_0$ denote the translates where $\hat{C}\cap Q=Q$. Let $\mathcal{C}_1$ denote the translates for which $\partial \hat{C}\cap Q\subset \hat{\gamma}_{[t_1+\varepsilon/2,t_{2}-\varepsilon/2]}$. Let $\mathcal{C}_2$ denote the translates for which $\partial \hat{C}\cap Q\cap \hat{\gamma}_{[t_2-\varepsilon/2,t_{3}]}\ne \emptyset$. Let $\mathcal{C}_3$ denote the remaining translates for which $\partial \hat{C}\cap Q\cap \hat{\gamma}_{[t_3,t_{1}+\varepsilon/2]}\ne \emptyset$.
We do not need to worry about the translates in $\mathcal{C}_0$, as $Q$ itself will not be monochromatic. Take a translate $\hat C$ from $\mathcal{C}_1$ and suppose that it contains a point $p\in P_1$. By Theorem \ref{thm:multi_essw_new}, there is an edge of $\prec_{K_{1,2}}$ from a vertex of $S_{1,1}$ to $p$ and another edge from a vertex of $S_{1,2}$ to $p$. I.e., the cone $p+K_{1,2}$ contains a point from $S_{1,1}$ and another point from $S_{1,2}$, and hence it is not monochromatic. From property (A) we know that every point in $(p+K_{1,2})\cap P$ is also in $\hat C$. Therefore, $\hat C$ is not monochromatic. Now consider the translates in $\mathcal{C}_2$. From property (B) we know that for these translates we have $\partial \hat C\cap Q\subset \hat{\gamma}_{[t_2-\varepsilon,t_3+\varepsilon]}$. By the definition of $t_1,t_2$ and $t_3$, we know that this implies that any two translates from $\mathcal{C}_2$ intersect at most once on their boundary within $Q$, i.e., they behave as pseudohalfplanes. To turn the translates in $\mathcal{C}_2$ into a pseudohalfplane arrangement as defined earlier, we can do as follows. For a translate $\hat{C}$, replace it with the convex set whose boundary is $\hat{\gamma}_{[t_2-\varepsilon,t_3+\varepsilon]}$ extended from its endpoints with two rays orthogonal to the segment $\overline{\hat{\gamma}(t_2-\varepsilon)\hat{\gamma}(t_3+\varepsilon)}$. This new family provides the same intersection pattern in $Q$ and forms a pseudohalfplane arrangement. We can do the same with the translates in $\mathcal{C}_3$. Therefore, by Corollary \ref{cor:pseudohalfplane} there is a proper three-coloring for the translates in $\mathcal{C}_2\cup \mathcal{C}_3$. \end{proof} \section{Overview of the computational complexity of the algorithm}\label{sec:overview} In this section we show that given a point set $P$ and a convex set $C$, we can determine some $m=m(C)$ and calculate a three-coloring of $P$ efficiently if $C$ is given in a natural way, for example, if $C$ is a disk. Our algorithm is randomized and its running time is a polynomial of the number of points, $n=|P|$. \begin{itemize} \item First, we need to fix three points on the boundary, $\tau_1,\tau_2,\tau_3\subset \partial C$ such that Lemma \ref{lemma:our_illumination_epsilon} is satisfied with $\tau_i=\gamma(t_i)$ for some $t_i$ and $\varepsilon>0$ for each $i$. Note that we do not need to fix a complete parametrization $\gamma$ of $\partial C$ or $\varepsilon>0$; instead, it is enough to choose some points $\tau_i^{\scalebox{0.6}{$--$}}$ and $\tau_i^{\scalebox{0.6}{$++$}}$ that satisfy the conclusion of Lemma \ref{lemma:our_illumination_epsilon} if we assume $\tau_i^{\scalebox{0.6}{$--$}}=\gamma(t_i-\varepsilon)$ and $\tau_i^{\scalebox{0.6}{$++$}}=\gamma(t_i+\varepsilon)$ for each $i$. If $C$ has a smooth boundary, like a disk, we can pick $\tau_1,\tau_2,\tau_3$ to be the touching points of an equilateral triangle with $C$ inscribed in it. If the boundary of $C$ contains vertex-type sharp turns, the complexity of finding these turns depends on how $C$ is given, but for any reasonable input method, this should be straight-forward. After that, one can follow closely the steps of the proof of the Illumination conjecture in the plane to get an algorithm, but apparently, this has not yet been studied in detail. 
\item To pick $r$, the side length of the squares of the grid, we can fix some arbitrary points $\tau_i^{\scalebox{0.6}{$-$}}$ between $\tau_i^{\scalebox{0.6}{$--$}}$ and $\tau_i$, and points $\tau_i^{\scalebox{0.6}{$+$}}$ between $\tau_i$ and $\tau_i^{\scalebox{0.6}{$++$}}$, to play the roles of $\gamma(t_i-\varepsilon/2)$ and $\gamma(t_i+\varepsilon/2)$, respectively, for each $i$. It is sufficient to pick $r$ so that $r\sqrt{2}$, the diameter of the square of side length $r$, is less than \begin{itemize} \item the distance of $\tau_i^{\scalebox{0.6}{$+$}}$ and $\tau_{i+1}^{\scalebox{0.6}{$-$}}$ from the segment $\overline{\tau_i\tau_{i+1}}$, \item the distance of $\tau_i^{\scalebox{0.6}{$-$}}$ from $\tau_i^{\scalebox{0.6}{$--$}}$, and \item the distance of $\tau_i^{\scalebox{0.6}{$+$}}$ from $\tau_i^{\scalebox{0.6}{$++$}}$, \end{itemize} for each $i$, to guarantee that properties (A) and (B) are satisfied. \item Set $m=f(3,2)+13$, which is an absolute constant given by Theorem \ref{thm:multi_essw_new}. We need to construct the complete multidigraph given by the tri-partition cones determined by $\tau_1,\tau_2,\tau_3$, which needs a comparison for each pair of points. To obtain the subsets $S_i^j\subset P$ for $i\in[3],j\in [2]$, where $P$ is the set of points that are contained in a square of side length $r$, we randomly sample the required number of points from each of the constantly many $T_{j_1,\dots, j_i}$ according to the probability distributions $w_{j_1,\dots, j_i}$ given by Lemma \ref{lemma:prob_dist}. These probability distributions can be computed by LP. With high probability, all the $S_i^j$-s will be disjoint---otherwise, we can resample until we obtain disjoint sets. \item To find the three-coloring for the two pseudohalfplane arrangements, given by Corollary \ref{cor:pseudohalfplane}, it is enough to determine the two-coloring given by Theorem \ref{thm:pseudohalfplane} for one pseudohalfplane arrangement. While not mentioned explicitly in \cite{abafree}, the polychromatic $k$-coloring can be found in polynomial time if we know the hypergraph determined by the range space, as this hypergraph can only have a polynomial number of edges, and the coloring algorithm only needs to check some simple relations among a constant number of vertices and edges. \item Finally, to compute a suitable $m'$ for Theorem \ref{thm:general_three_col} from the $m$ of Theorem \ref{thm:local_three_col}, it is enough to know any upper bound $B$ for the diameter of $C$, and let $m'=m(B/r+2)^2$. \end{itemize} \section{Open questions}\label{sec:open} It is a natural question whether there is a universal $m$ that works for all convex bodies in Theorem \ref{thm:general_three_col}, like in Theorem \ref{thm:local_three_col}. This would follow if we could choose $r$ to be a universal constant. While the $r$ given by our algorithm can depend on $C$, we can apply an appropriate affine transformation to $C$ before choosing $r$; this does not change the hypergraphs that can be realized with the range space determined by the translates of $C$. To ensure that properties (A) and (B) are satisfied would require further study of the Illumination conjecture. Our bound for $m$ is quite large, even for the unit disk, both in Theorems \ref{thm:general_three_col} and \ref{thm:local_three_col}, which is mainly due to the fact that $f(3,2)$ given by Theorem \ref{thm:multi_essw_new} is huge. 
It has been conjectured that in Theorem \ref{thm:multi_essw_old} the optimal value is $f(3)=3$, and a similarly small number seems realistic for $f(3,2)$ as well. While Theorem \ref{thm:general_three_col} closed the last question left open for primal hypergraphs realizable by translates of planar bodies, the respective problem is still open in higher dimensions. While it is not hard to show that some hypergraphs with high chromatic number often used in constructions can be easily realized by unit balls in $\mathbb{R}^5$, we do not know whether the chromatic number is bounded or not in $\mathbb{R}^3$. From our Union Lemma (Lemma \ref{lem:combine}) it follows that to establish boundedness, it would be enough to find a polychromatic $k$-coloring for pseudohalfspaces, whatever this word means.
\subsubsection{Video / Text Features} Eight video features and five text features are used; see \cref{tab:feature_intro}. \input{table/feature_intro.tex}
\section{Experiments of video description datasets} \rebuttal{ \subsection{The original performance} The original performance of the state-of-the-art is reported in \cref{tab:supplementary_compare_all}; higher is better. \input{table/Supplementary_all_result} }
\subsection{The \emph{Med r} scores} The \emph{Med r} scores of the state-of-the-art are reported in \cref{tab:compair_all_supp}; smaller is better. \input{table/Supplementary_tab6_MedR}
\section{Experiments of TRECVID AVS 2016-2020} The infAP scores of the 20 queries of TV20 are shown in \cref{tab:TV20_case_study}. CLIP-FT and CLIP2Video achieve excellent results on the video description datasets, while on TRECVID their average infAP is 0.172 and 0.180, respectively, which is 21.0\% and 17.2\% lower than LAFF. We find that for the 10 queries containing actions, LAFF outperforms or is on par with CLIP-FT and CLIP2Video. This shows that CLIP-series models pre-trained only on large-scale image-text datasets cannot completely solve the video-text retrieval problem. Features containing temporal information, such as \textit{ircsn}, \textit{c3d}, etc., are still necessary for large-scale retrieval scenarios. \input{table/TV20_case_study}
\subsubsection{Dataset} Our ablation study is conducted on MSR-VTT \cite{xu2016msr}, which has 10k videos in total, each associated with 20 captions. We adopt the official data split: 6,513 videos for training, 497 videos for validation and the remaining 2,990 videos for testing. In order to distinguish this data split from other customized splits, \textit{e.g.}~ JSFusion~\cite{yu2018joint}, we term the split \xredit{\textbf{MV-test3k}}.
\textbf{On Combining Diverse Video/Text Features}. We investigate how LAFF responds when diverse video/text features are gradually added. For ease of lateral comparison, we include as baselines the following two models: W2VV++ \cite{LiXirong2019W2VVPP}, which simply uses vector concatenation, and SEA \cite{LiXirong2020SEA}, which learns cross-modal similarities per text feature.
\begin{figure}[htp]
\sbox\twosubbox{%
\resizebox{\dimexpr\textwidth-1em}{!}{%
\includegraphics[height=3cm]{fig/model_adjust_txt_encoder.pdf}
\includegraphics[height=3cm]{fig/model_adjust_vis_encoder.pdf}%
}%
}
\setlength{\twosubht}{\ht\twosubbox}
\centering
\subcaptionbox{\label{fig:model_adjust_txt_encoder} Text feature fusion}{%
\includegraphics[height=\twosubht]{fig/model_adjust_txt_encoder.pdf}%
}\quad
\subcaptionbox{\label{fig:model_adjust_vis_encoder} Video feature fusion}{%
\includegraphics[height=\twosubht]{fig/model_adjust_vis_encoder.pdf}%
}
\caption{\textbf{Performance curves of three distinct models, \textit{i.e.}~ W2VV++, SEA and LAFF}, \textit{w.r.t.}~ (a) text feature fusion, with \{\emph{rx101},\emph{re152}\} as video features, and (b) video feature fusion, with \{\textit{bow,w2v,gru}\} as text features. LAFF is both effective and stable for fusing diverse features. Data: MV-test3k.}
\label{Figure:model_adjust_txt_vis_encoder}
\end{figure}
Given the many video and text features investigated in this work, a complete enumeration of video-text feature combinations is impractical. We choose to reduce the computation by only varying the features at one end, with features at the other end fixed.
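As an illustrative reference for what such an attention-based fusion looks like (this is not the exact LAFF implementation), the following PyTorch-style sketch projects each feature into a common space, lets a shared attention layer produce one softmax-normalised weight per feature, and returns the weighted sum together with the weights; all names and dimensions are illustrative only.
\begin{verbatim}
import torch
import torch.nn as nn

class AttentionalFeatureFusion(nn.Module):
    """Illustrative LAFF-style fusion of k heterogeneous features."""
    def __init__(self, feature_dims, common_dim=256):
        super().__init__()
        # per-feature linear projection into a common embedding space
        self.projections = nn.ModuleList(
            [nn.Linear(d, common_dim) for d in feature_dims])
        # shared attention layer: one scalar weight per feature
        self.attention = nn.Linear(common_dim, 1, bias=False)

    def forward(self, features):
        # features: list of tensors, each of shape (batch, feature_dims[i])
        projected = torch.stack(
            [torch.tanh(p(f)) for p, f in zip(self.projections, features)], dim=1)
        weights = torch.softmax(self.attention(projected).squeeze(-1), dim=1)
        fused = (weights.unsqueeze(-1) * projected).sum(dim=1)
        return fused, weights  # weights indicate per-feature contributions
\end{verbatim}
A block of this kind would be instantiated separately on the video side and on the text side, with the resulting embeddings compared by cosine similarity.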
\cref{fig:model_adjust_txt_encoder} shows the performance curves of W2VV++, SEA and LAFF \textit{w.r.t.}~ text features, with \{\emph{rx101}, \emph{re152}\} as their common video features. The performance of all three models improves at the earlier steps when few features are fused. There is a noticeable drop in the performance curve of W2VV++ when \emph{bert} is included. LAFF is more effective and more stable. Similar results can be observed from \cref{fig:model_adjust_vis_encoder}, which shows the performance curves of the three models \textit{w.r.t.}~ video features. The above results justify the effectiveness of LAFF for combining diverse video/text features.
\textbf{Comparing Feature Fusion Blocks}. We compare the three feature fusion blocks by replacing LAFF in Fig. \ref{Figure:overall_framework} with MHSA and Attention-free, respectively. For a fairer comparison, we also apply the multi-loss trick on MHSA by optimizing losses for different heads, denoted as MHSA(multi-loss). Moreover, we include as a baseline a method that uses the simple feature concatenation strategy, as previously adopted in W2VV++ \cite{LiXirong2019W2VVPP}. The performance of text-to-video retrieval with specific feature fusion blocks is reported in \cref{tab:four_method_with_fixed_video_feature}. LAFF performs the best, followed by Attention-free, the concatenation baseline and MHSA. Attention-free, while being extremely simple, is more effective than MHSA for combining increasing numbers of text features, with its mAP increasing from 0.264 to 0.321 to 0.326. The superior performance of LAFF against Attention-free (0.358 \emph{versus} 0.326) justifies the necessity of the attentional layer. \input{table/tab4_compare_fusion_blocks}
\textbf{LAFF Weights for Model Interpretability and Feature Selection}. \cref{Figure:weigth_exp} visualizes the LAFF weights of videos and their associated captions selected from the MV-test3k test set. We observe that 3D-CNN features receive more weight when the video content contains more motion, see \cref{Figure:weigth_exp}(b). For each feature, its weight averaged over samples reflects its contribution to the retrieval performance. The weights of text features in descending order are \textit{clip} (64.3\%), \textit{bow} (15.7\%), \textit{gru} (9.5\%), \textit{w2v} (6.5\%), \textit{bert} (4.0\%). For video features, the order is \textit{clip} (38.0\%), \textit{x3d} (16.8\%), \textit{ircsn} (13.3\%), \textit{tf} (10.9\%), \textit{rx101} (7.0\%), \textit{wsl} (6.6\%), \textit{c3d} (5.1\%), \textit{re152} (1.4\%). We re-train our model with the top-3 ranked video/text features. Compared to the full setup (mAP of 0.358), the reduced model obtains an mAP of 0.353, meaning a relatively small performance loss of 1.4\%. Hence, the LAFF weights are helpful for feature selection.
\begin{figure}[tbh!] \centering \includegraphics[width=1\textwidth]{fig/weight_exp.pdf} \caption{\textbf{Visualization of LAFF weights per feature}, with samples from the MV-test3k test set. Green, brown, and blue indicate text features, 2D video features and 3D video features, respectively. Best viewed in color.} \label{Figure:weigth_exp} \end{figure}
\textbf{Combined Loss \emph{versus} Single Loss}. As \cref{tab:single_vs_multi_loss} shows, LAFF trained with the combined loss produces a relative improvement of over 10\% in terms of mAP, when compared to its single-loss counterpart. \input{table/CombinedLoss_and_SpaceNum}
\textbf{The Effect of the Number of Common Spaces}.
Concerning the number of common spaces $h$, we try different values, \textit{i.e.}~ \{1, 2, 4, 8, 16\}. As shown in \cref{tab:model_adjust_space}, the performance improves as $h$ increases, with the peak performance reached at $h=8$. While using a larger $h$ is beneficial, the relatively small gap between LAFF($h$=1) and LAFF($h$=8) suggests that the good performance of LAFF-based video retrieval is largely contributed by the LAFF block rather than the multi-space similarity. To reveal how different the embedding spaces are from each other, we compute the Jaccard index between the top-5 video retrieval results of the individual spaces \textit{w.r.t.}~ a specific query caption. The inter-space Jaccard index is lower than $0.5$, suggesting sufficient divergence. Nevertheless, whether videos/captions have been separated along different axes needs further investigation.
\subsection{Common Setups}
\textbf{Implementation Details}. Eight video features and five text features are used; see \cref{tab:feature_intro}. The margin $\alpha$ in the loss is set to 0.2 according to VSE++~\cite{bmvc_FaghriFKF18}. We perform SGD-based training, with a mini-batch size of 128 and RMSProp as the optimizer. The learning rate is initially set to $10^{-4}$, decayed by a factor of 0.99 per epoch. Following \cite{JoulinMJV16}, we halve the learning rate if the validation performance does not increase in three consecutive epochs. Early stopping occurs when no validation performance increase is achieved in ten consecutive epochs. The dropout rate of the \emph{Linear} layers is set to 0.2. All experiments were done with PyTorch (1.7.1) \cite{PaszkePytorch19} on an Nvidia GEFORCE GTX 2080Ti GPU.
\textbf{Evaluation Criteria}. We report three standard rank-based metrics: Recall at Rank N (R@N, N=1, 5, 10), Median rank (Med r), and mean Average Precision (mAP) for assessing the overall ranking quality. \input{table/table_feats.tex}
\subsection{Ablation Study} \label{ssec:eval-abla} \input{ablation-study}
\subsection{Comparison with SOTA on Video Description Datasets} \label{ssec:sota_on_video_description_data} \input{experiments-on-video-description}
\subsection{Comparison with SOTA on TRECVID AVS 2016-2020} \input{experiments-on-trecvid}
\subsubsection{Network Architecture} We now detail the usage of LAFF for text-to-video retrieval. A straightforward solution is to substitute LAFF for the $fusion$ functions in \cref{eq:xdef}. As such, we have a single configuration of how the video/text features are combined. However, due to the high complexity of the video and text contents, we hypothesize that the single configuration is suboptimal for cross-modal representation and matching. Borrowing the multi-head idea of MHSA, we consider multi-head LAFF. In particular, we deploy $h$ pairs of LAFFs, where each pair of LAFFs jointly determines a latent common space for video-text matching. Specifically, a pair of LAFFs, denoted as $\langle LAFF_{v,i}, LAFF_{t,i}\rangle$, aggregates the video/text features into a $d$-dimensional cross-modal embedding vector $e_i(x)$/$e_i(q)$, \textit{i.e.}~ \begin{equation} \label{eq:paired_laffs} \left\{ \begin{array}{rl} e_i(x) &= LAFF_{v,i}(x) \\ e_i(q) &= LAFF_{t,i}(q) \\ s_i(x,q) &= similarity(e_i(x), e_i(q)) \end{array} \right. \end{equation} where $similarity$ is the widely used cosine similarity. Accordingly, we compute the final video-text similarity as the mean of the $h$ individual similarities, \begin{equation} s(x,q) = \frac{1}{h}\sum_{i=1}^h s_i(x,q).
\end{equation} The overall architecture is illustrated in Fig. \ref{Figure:overall_framework}. In order to make the amount of trainable parameters invariant with respect to $h$, we set $d=\frac{d_0}{h}$, where $d_0$ is a constant empirically set to 2,048. As such, the multi-head version of LAFF is not an ensemble. We use $h=8$, unless otherwise stated. \textbf{LAFF for multi-level feature fusion}. So far we presume the features to be fused are already at the video level. In fact, for its high flexibility, LAFF can be extended with ease to a multi-level variant to deal with the situation wherein different frame-level and video-level features coexist. Fig. \ref{fig:LAFF_multi_level} shows this variant, which we term \emph{LAFF-ml}. LAFF-ml works in a bottom-up manner, where a set of specific frame-level features are aggregated via a specific LAFF block to produce a video-level feature. Suppose there are two different frame-level features, \textit{e.g.}~ \emph{clip} and \emph{rx101}. Each will have its own LAFF block. The (resultant) different video features are then fused via a video-level LAFF block. \begin{figure}[tbh!] \centering \includegraphics[width=0.7\textwidth]{fig/fig3_laff_ml.pdf} \caption{\textbf{LAFF-ml for multi-level feature fusion}. Frame-level LAFF is applied per feature, \textit{e.g.}~ \emph{clip} or \emph{rx101}. The outputs of the frame-level LAFF blocks are later combined (with other video-level features, \textit{e.g.}~ \emph{x3d}) by a video-level LAFF.} \label{fig:LAFF_multi_level} \end{figure} \subsubsection{Network Training} Following the good practice of the previous work, we adopt as our base loss function the triplet ranking loss with hard-negative mining \cite{bmvc_FaghriFKF18}. For a specific sentence $q$ in a given training batch, let $x_+$ and $x_-$ be videos relevant and irrelevant \textit{w.r.t.}~ $q$, and $x^*_-$ be the hard negative that violates the ranking constraint the most. We have \begin{equation} \label{eq:base-loss} \left\{ \begin{array}{ll} x^*_- &=\operatorname{argmax}_{x_-} (s(x_-,q) - s(x_+,q)) \\ \operatorname{loss}(q) &=\max (0, \alpha+s(x^*_-,q) - s(x_+, q)), \end{array} \right. \end{equation} where $\alpha$ is a positive hyper-parameter controlling the margin of the ranking loss. As \cite{LiXirong2020SEA} has documented, when training a cross-modal network that produces multiple similarities, combining losses per similarity gives better results than using a single loss with the combined similarity. Hence, we follow this strategy, computing $\operatorname{loss}_i(q)$, namely the loss in the $i$-th space by substituting $s_i$ for $s$ in \cref{eq:base-loss}. The network is trained to minimize a combined loss $\sum_{i=1}^h \operatorname{loss}_i(q)$. \subsection{The LAFF Block} \input{laff} \subsection{Paired LAFFs for Text-to-Video Retrieval} \input{laff-for-t2vr} \section{Introduction} \input{intro} \section{Related Work} \label{sec:related} \input{related} \section{A New Baseline} \label{sec:method} \input{method} \section{Experiments} \label{sec:experimet} \input{experiments} \section{Conclusions} \label{sec:conclusion} For video retrieval by text, we propose LAFF, an extremely simple feature fusion block. LAFF is more effective than Multi-head Self-Attention, yet with much fewer parameters. Moreover, the attentional weights produced by LAFF can be used to explain the contribution of the individual video/text features for cross-modal matching. 
Consequently, the weights can be used for feature selection for building a more compact video retrieval model. Our LAFF-based video retrieval model surpasses the state-of-the-art on MSR-VTT, MSVD, TGIF, VATEX and TRECVID AVS 2016-2020. Given the increasing availability of (deep) video/text features, we believe our work opens up a promising avenue for further research. \medskip \textbf{Acknowledgments}. This work was supported by NSFC (No. 62172420, No. 62072463), BJNSF (No. 4202033), and Public Computing Cloud, Renmin University of China. \clearpage \bibliographystyle{splncs04}
\section{Introduction} Multi-label text classification techniques enable predictions of treatable risk factors in patients, aiding in better life expectancy and quality of life~\cite{aubert2019patterns}. The goal of multi-label learning is to predict a subset of labels for an unseen instance from a given label set while considering label correlations~\cite{zhang2013review}. One of the known challenges with multi-label classification is the long-tailed distribution of labels. In general, with multi-label problems, a small subset of the labels is associated with a large number of instances, and a significant fraction of the labels is associated with a small number of instances (as shown in Figure~\ref{fig:freq}). There are some examples of studies that focus on exploiting label structure~\cite{zhang2018deep} and label co-occurrence patterns~\cite{kurata2016improved}. However, especially in studies relating to medical text, the focus is on improving the overall performance of the model rather than on individual tail-end labels~\cite{moons2020comparison,amin2019mlt}. There are also examples of studies, such as Wei and Li (2019)~\cite{wei2019does}, which demonstrate that tail-end labels have minimal impact on the overall performance. However, prediction of infrequent labels in order to understand all aspects of a patient's prognosis is as crucial as predicting frequent labels~\cite{flegel2018we}. The knowledge gained from one or more infrequent labels can impact the course of medical decisions, treatment plans and patient care. This research explores the opportunity to improve predictions of tail-end labels using transformers for medical domain-specific tasks by exploiting models pre-trained on health data. We consider the option of using three variations of concatenated language models: multi-CNNText, multi-BioMed-Transformers and CNNText with Transformers. We show that concatenated BioMed-Transformers improve tail-end predictions compared to other neural networks and single transformers. In addition to improving the tail-end performance, we demonstrate that concatenated domain-specific transformer models are a solution for handling long text and text from multiple sources. For short or truncated electronic health records (EHRs), medical domain-specific transformer models outperform state-of-the-art (SOTA) methods for many classification tasks, including predicting medical codes and named entity recognition~\cite{yogarajan2021trans,domains,gu2020domain}. However, given that most transformer models are limited to a maximum sequence length of 512 tokens, with some exceptions, there is still a gap in alternative solutions for long documents. Transformer models such as Longformer~\cite{beltagy1904longformer} and TransformerXL~\cite{dai2019transformer} can handle longer sequences and perform better than other language models for long documents. Unfortunately, these models require considerable amounts of memory and processing time. In contrast, concatenated domain-specific transformers require fewer resources. We also present new SOTA results using TransformerXL for predicting medical codes. We compare these results directly with the most recently published SOTA (November 2021)~\cite{liueffective} for the exact same multi-label text classification problem. We compare concatenated domain-specific transformer models with standard language models for increasingly larger multi-label problems with 30, 42, 50, 73, 158 and 923 labels.
The multi-label problems considered in this paper are: predicting ICD-9 codes for ICD-9 hierarchy levels, most frequent 50 ICD-9 codes, cardiovascular disease, COVID-19 patient shielding (introduced in Yogarajan et al (2021)~\cite{yogarajan2021predicting}) and systemic fungal or bacterial disease. The contributions of this work are: \begin{enumerate} \item analyse the effectiveness of using concatenated domain-specific language models, multi-CNNText, multi-BioMed-Transformers and CNNText with Transformers, for predicting medical codes from EHRs for multiple document lengths, multi-sources of texts and number of labels; \item show that concatenated domain-specific transformers improve F1 scores of infrequent labels; \item show improvements in overall micro and macro F1 scores and achieve such improvements with fewer resources; \item present new SOTA results for predicting medical codes from EHRs. \end{enumerate} \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth,height=2.2cm]{level3_freq.png} \hfill \includegraphics[width=0.46\textwidth,height=2.18cm]{fun_freq.png} \caption{Percentage frequency of labels for ICD-9 level-3 codes with 923 labels (left) and systemic fungal or bacterial infection with 73 labels (right) for MIMIC-III data. The labels are ordered from most frequent (left) to least frequent (right) for each plot. The threshold for tail-end labels with \% Freq of occurrences $< 1\%$ is indicated for reference. } \label{fig:freq} \vspace{-1.5em} \end{figure} \section{Related Work} In the last two to three years, there have been considerable advancements in transformer models, which have shown substantial improvements in many NLP tasks, including BioNLP tasks~\cite{gu2020domain,yang2020clinical}. With minimum effort, transfer learning of pre-trained models by fine-tuning on downstream supervised tasks achieves very good results~\cite{amin2020exploring,amin2019mlt}. Examples of BioNLP tasks where transformers have shown performance improvements include named entity recognition, question answering, relation extraction, and clinical concept extraction tasks~\cite{gu2020domain,yang2020clinical,domains}. A significant obstacle for transformers is the 512 token size limit they impose on input sequences~\cite{9364676}. Gao et al. (2021)~\cite{9364676} presents evidence showing BERT-based models under-perform in clinical text classification tasks with long input data, such as MIMIC-III~\cite{johnson2016mimic}, when compared to a CNN trained on word embeddings that can process the complete input sequences. Si and Roberts (2021)~\cite{si2021hierarchical} presents an alternative system to overcome the issue of long documents, where transformer-based encoders are used to learn from words to sentences, sentences to notes and notes to patients progressively. This transformer-based hierarchical attention networks system presents SOTA methods for in-hospital mortality prediction and phenotype predictions using MIMIC-III. However, it requires considerable computational resources~\cite{si2021hierarchical}. Chalkidis et al. (2020)~\cite{chalkidis2020empirical} proposes a similar hierarchical version using SCI-BERT to deal with long documents for predicting medical codes from MIMIC III. Here SCI-BERT reads words of each sentence, resulting in sentence embeddings. This is followed by a self-attention mechanism that reads the sentence embeddings to produce single document embeddings fed through an output layer. 
Unfortunately, HIER-SCI-BERT performed poorly compared to other neural networks~\cite{chalkidis2020empirical}. One possible reason for poor results is the use of a continuously pre-trained BERT model ~\cite{chalkidis2020empirical}. The continuous training approach would initialise with the standard BERT model, pre-trained using Wikipedia and BookCorpus. It then continues the pre-training process with a masked language model and next-sentence prediction using domain-specific data. In this case, the vocabulary is the same as the original BERT model, which is considered a disadvantage for domain-specific tasks~\cite{gu2020domain}. For our research, PubMedBERT~\cite{gu2020domain}, a domain-specific BERT based model trained solely on biomedical text, is used. Our research focuses on automatically predicting medical codes from medical text as the multi-label classification task. Examples of predicting medical codes using transformers include ICD-10 predictions from German documents~\cite{amin2019mlt,sanger2019classifying}, and predicting frequent medical codes from MIMIC-III~\cite{biswas2021transicd,yogarajan2021trans}. These examples restrict themselves to (1) truncated text sequences of $<512$ tokens and (2) predicting frequent labels~\cite{biswas2021transicd,amin2020exploring}. MIMIC-III consists of many infrequent labels, as shown in Figure~\ref{fig:freq}, where most codes only occur in a small number of clinical documents. This research focuses on improving the predictive accuracy for infrequent labels and using long medical texts. Moons et al. (2020)~\cite{moons2020comparison} presents a survey of deep learning methods for ICD coding of medical documents and indicates Convolutional Attention for Multi-Label classification (CAML)~\cite{mullenbach2018explainable} as the SOTA method for automatically predicting medical codes from EHRs. Yogarajan et al. (2021)~\cite{yogarajan2021trans} presents evidence to show that domain-specific transformers outperform CAML for truncated sequences. Liu et al (2021)~\cite{liueffective} presents the most recent evidence where EffectiveCAN --an effective convolution attention network-- outperforms SOTA for predicting medical codes. We extend the findings in Yogarajan et al. (2021)~\cite{yogarajan2021trans} by providing evidence to show TransformerXL outperforms CAML and sets new SOTA results for predicting medical codes. We also present a direct comparison with EffectiveCAN for the same multi-label problem with the same labels and data to show transformers such as TransformerXL outperform SOTA. \vspace{-1.5em} \section{Data}\label{sec:data} Medical Information Mart for Intensive Care (MIMIC-III) is one of the most extensive publicly available medical databases \cite{johnson2016mimic,goldberger2000physiobank} with more than 50,000 patient EHRs. It contains data including billing, laboratory, medications, notes, physiological information, and reports. Among the available free-form medical text, more than 90\% of the unique hospital admissions contain at least one discharge summary (\textsf{dis}). In addition to the free-form medical text from \textsf{dis}, this research also makes use of text summary of categories ECG (\textsf{ecg}) and Radiology(\textsf{rad}). As with most free form EHRs, MIMIC-III text data includes acronyms, abbreviations, and spelling errors. 
For example (data as presented in MIMIC III with errors): \begin{quote} \textit{82 yo M with h/o CHF, COPD on 5 L oxygen at baseline, tracheobronchomalacia s/p stent, presents with acute dyspnea over several days, and lethargy...} \end{quote} MIMIC-III data includes long documents, where \textsf{dis} ranges from 60 to 9,500 tokens with an average of 1,513 tokens and \textsf{rad} with an average of 2,500 tokens. The document lengths of \textsf{ecg} are short with an average of 84 tokens. In this research, MIMIC-III text is pre-processed by removing tokens that contain non-alphabetic characters, including all special characters and tokens that appear in less than three training documents. The discharge summary is split into equal segments for a given hospital admission, and each section is labelled text $1,...,4$. For example, for two splits, if a given discharge summary is $700$ tokens long, text 1 is the first 350 tokens, and text 2 is the last 350 tokens. In the case of a lengthy document, if the discharge summary is $2500$ tokens long, text 1 is the first $1,250$ tokens, and text 2 is the last $1,250$ tokens. For multi-BioMed-Transformers where the maximum sequence length is $512$, each of text $1,...,4$ is truncated to $512$ tokens. There are many other ways to split the text, including sequential splits. For instance, with the first example above, text 1 being the first $512$ tokens, and text 2 being the remainder $238$ tokens. Each of these decisions has some advantages and disadvantages. After preliminary experiments, the decision was made to split the discharge summary into equal sections. This research presents results for the following configurations: \begin{enumerate}\addtocounter{enumi}{-1} \item \textsf{dis$_{1 \text{ of } 2}$} + \textsf{dis$_{2 \text{ of } 2}$} \item \textsf{dis$_{1 \text{ of } 3}$} + \textsf{dis$_{2 \text{ of } 3}$} + \textsf{dis$_{3 \text{ of } 3}$}. \item \textsf{dis$_{1 \text{ of } 2}$} + \textsf{dis$_{2 \text{ of } 2}$} + \textsf{ecg}. \item \textsf{dis$_{1 \text{ of } 2}$} + \textsf{dis$_{2 \text{ of } 2}$} + \textsf{rad}. \item \textsf{dis$_{1 \text{ of } 2}$} + \textsf{dis$_{2 \text{ of } 2}$} + \textsf{ecg} + \textsf{rad}. \item \textsf{dis$_{1 \text{ of } 4}$} + \textsf{dis$_{2 \text{ of } 4}$} + \textsf{dis$_{3 \text{ of } 4}$} + \textsf{dis$_{4 \text{ of } 4}$}. \item \textsf{dis} + \textsf{ecg}. \item \textsf{dis} + \textsf{rad}. \end{enumerate} \section{Multi-label Datasets and Labels} We consider predicting ICD-9 codes (standards for international Statistical Classification of Diseases and Related Health Problems) from EHRs as flat multi-label problems. ICD codes are used to classify diseases, symptoms, signs, and causes of diseases. Almost all health conditions can be assigned a unique code. Manual assigning of medical codes requires expert knowledge and is very time-consuming. Thus, the ability to predict and automate medical coding is vital. ICD-9 codes are grouped in a hierarchical tree-like structure by the World Health Organisation. In this research, we focus on levels 2 and 3 for MIMIC-III data containing 158 labels at level 2 and 923 labels at level 3 with associated medical text for the patient. In addition, we consider case studies, cardiovascular disease, COVID-19 patient shielding, and systemic fungal or bacterial infections, where commonly used medical codes are used as labels. 
As mentioned earlier, for the purposes of direct comparison with the recently published SOTA, the most frequent 50 ICD-9 codes in MIMIC-III are also considered. \begin{table}[t!] \centering \caption{Statistics of multi-label classification problems. Counts for frequent and infrequent, or tail-end labels, are also provided. * MIMIC III Top50 is the most frequent 50 labels, hence no tail labels, and is only used in this research for direct SOTA comparison. } \label{tab:lab_den} \begin{tabular}{l@{\hspace{.2cm}}r@{\hspace{.2cm}}r@{\hspace{.2cm}}r@{\hspace{.2cm}}r@{\hspace{.2cm}}r@{\hspace{.2cm}}r} \hline \textbf{Multi-label Problems} & \textbf{q} & \textbf{\# Inst} & \textbf{LCard} & \textbf{LDens} & {\textbf{LFreq}} $\geq 1\%$ & {\textbf{LFreq}} $< 1\%$ \\ \hline MIMIC-III Level 3 & 923 & 52,722 & 14.43 &0.02 &244 & 679 \\ MIMIC-III Level 2 & 158 & 52,722 & 11.61 &0.07 & 100 & 58 \\ MIMIC-III Top50* & 50 & 50,957 & 5.60 & 0.11 & 50 & 0 \\ Fungal or bacterial & 73 & 30,814 &2.06 & 0.03 &34 & 39 \\ COVID-19~\cite{yogarajan2021predicting} & 42&35,458 &1.84 & 0.04 & 27 & 15 \\ Cardiovascular & 30 & 28,154& 2.51& 0.08 & 16 & 14 \\ \hline \end{tabular} \vspace{-10pt} \end{table} Table~\ref{tab:lab_den} provides a summary of the multi-label problems used in this research. For multi-label problems, the notations as per Tsoumakas et al., (2009)~\cite{tsoumakas2009mining} are used, where $L = \{\lambda_j:j=1...q\}$ refers to the finite set of labels and $D = \{(x_i,Y_i),i=1...m\}$ refers to set of multi-label training examples. Here $x_i$ is the feature vector, and $Y_i\subseteq L$ is the set of labels of the $i$-th example. Label cardinality ($LCard$) is the average number of labels of the examples in a dataset, and label density ($LDens$) is cardinality divided by $q$. Table~\ref{tab:lab_den} provides the number of labels selected for experiments presented in this paper, with the frequency of occurrences $<1\%$, tail-end labels, and the number of labels $\geq1\%$. \section{Language Models} This research mainly focuses on transformer models. Transformers are feed-forward models based on the self-attention mechanism with no recurrence. Self-attention takes into account the context of a word while processing it. Similar to the sequence-to-sequence attention mechanism, self-attention is considered a soft measure where multiple words are considered. Transformer models take all the tokens in the sequence at once in parallel, enabling the capture of long-distance dependencies. Vaswani et al. (2017)~\cite{vaswani2017attention} provides an introduction to the transformer architecture. BERT (Bidirectional Encoder Representations from Transformers) \cite{DBLP:journals/corr/abs-1810-04805} is one of the early transformer models that applies bidirectional training of encoders \cite{vaswani2017attention} to language modelling. The 12-layer BERT-base model with a hidden size of 768, 12 self-attention heads, 110M parameter neural network architecture, was pre-trained from scratch on BookCorpus and English Wikipedia. PubMedBERT~\cite{gu2020domain} uses the same architecture, and is domain-specifically pre-trained from scratch using abstracts from PubMed and full-text articles from PubMedCentral to better capture the biomedical language \cite{gu2020domain}. BioMed-RoBERTa-base~\cite{domains} is based on the RoBERTa-base \cite{DBLP:journals/corr/abs-1907-11692} architecture. 
RoBERTa-base, originally trained using 160GB of general domain training data, was further continuously pre-trained using 2.68 million scientific papers from the Semantic Scholar corpus. Gururangan et al. (2020)~\cite{domains} show that BioMed-RoBERTa-base, which was specifically pre-trained on medical text data, outperforms the generically trained RoBERTa-base model on biomedical domain-specific tasks. TransformerXL~\cite{dai2019transformer} is an architecture that enables the representation of language beyond a fixed length. It can learn longer dependencies than recurrent neural networks and vanilla transformers. The Longformer~\cite{beltagy1904longformer} model is designed to handle longer sequences without the limitation of the maximum token size of 512. Longformer reduces the model complexity from quadratic to linear by reformulating the self-attention computation. Compared to Transformer-XL~\cite{dai2019transformer}, Longformer is not restricted to the left-to-right approach of processing documents. In addition to transformer models, CNNText~\cite{kim2014convolutional} with domain-specific fastText pre-trained 100-dimensional embeddings is used. CNNText combines one-dimensional convolutions with a max-over-time pooling layer and a fully connected layer. The final prediction is made by computing a weighted combination of the pooled values and applying a sigmoid function. A simple architecture of CNNText is presented in Figure \ref{fig:dual}. CAML~\cite{mullenbach2018explainable} is also used to compare with TransformerXL and other language models. CAML combines convolutional networks with an attention mechanism. Simultaneously, a second module is used to learn embeddings of the descriptions of ICD-9 codes to improve the predictions of less frequent labels and target regularisation. For each word in a given document, word embeddings are concatenated into a matrix, and a one-dimensional convolution layer is used to combine these adjacent embeddings.
\begin{algorithm}[t!] \caption{Multiple BioMed-Transformer} \label{alg:multiple} \begin{algorithmic}[1]
\STATE\textbf{Input:} Fixed-length multi-sourced or long-document text input with set of labels $Y\subseteq L$, domain-specific pre-trained transformer models $x_i$ with parameters $\theta_{1,2,...,n}$, Linear layer (FC) with $|L|$ output units having $\theta_l$ parameters and loss function Binary-cross-entropy (BCE).
\FOR {each mini-batch}
\STATE pooled$\_$features = []
\FOR {each document $i$}
\STATE $x_i$ = $\text{BioMed-Transformer(document}_i)$
\STATE pooled$\_$features.append(AVG$\_$POOL($x_i$))
\ENDFOR
\STATE combined$\_$features = CONCATENATE(pooled$\_$features)
\STATE drop$\_$output = DROPOUT(combined$\_$features)
\STATE output = FC$_{\theta_l}$(drop$\_$output)
\STATE $\mathcal{L} = \mathcal{L}_{BCE}$(output, targets)
\STATE $\theta = [\theta_1,\theta_2,\theta_3,\ldots,\theta_n,\theta_l]$
\STATE $\theta = \theta - \nabla_\theta \mathcal{L}$
\ENDFOR
\end{algorithmic} \end{algorithm}
\vspace{-1em}
\section{Concatenated Language Models}
\subsection{Multi-BioMed-Transformers} Multi-BioMed-Transformers use an architecture where two or more domain-specific transformer models are concatenated together to enable the usage of multiple text inputs. Algorithm~\ref{alg:multiple} outlines how multiple BioMed-Transformer models are concatenated. We explore options with two to four concatenated PubMedBERT models. See Figure~\ref{fig:dual} for an example of the TriplePubMedBERT architecture.
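As a minimal PyTorch-style sketch of Algorithm~\ref{alg:multiple} (the checkpoint name, pooling and hyperparameters are illustrative only), each text segment or source is encoded by its own pre-trained model, the pooled outputs are concatenated, and a single linear layer produces one logit per label, trained with binary cross-entropy:
\begin{verbatim}
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiBioMedTransformer(nn.Module):
    def __init__(self, model_name, n_inputs, n_labels, dropout=0.2):
        super().__init__()
        # one domain-specific encoder per text segment/source (e.g. dis_1, dis_2, ecg)
        self.encoders = nn.ModuleList(
            [AutoModel.from_pretrained(model_name) for _ in range(n_inputs)])
        hidden = self.encoders[0].config.hidden_size
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden * n_inputs, n_labels)

    def forward(self, input_ids_list, attention_mask_list):
        pooled = []
        for enc, ids, mask in zip(self.encoders, input_ids_list, attention_mask_list):
            hidden_states = enc(input_ids=ids, attention_mask=mask).last_hidden_state
            m = mask.unsqueeze(-1).float()
            # masked average pooling over tokens (AVG_POOL in Algorithm 1)
            pooled.append((hidden_states * m).sum(1) / m.sum(1).clamp(min=1e-9))
        combined = torch.cat(pooled, dim=-1)            # CONCATENATE
        return self.classifier(self.dropout(combined))  # one logit per label

# binary cross-entropy over the label set, as in Algorithm 1
loss_fn = nn.BCEWithLogitsLoss()
\end{verbatim}
For example, TriplePubMedBERT corresponds to \texttt{n\_inputs} $=3$, e.g., the two halves of the discharge summary plus the ECG text.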
Concatenated transformer models enable the processing of longer sequences, where the longer input sequence is split into multiple smaller segments with a maximum length of 512 tokens. The average length of discharge summaries in MIMIC-III is approximately $1,500$ tokens, hence the choice to concatenate two to four PubMedBERT models. Moreover, as indicated in Section~\ref{sec:data}, MIMIC-III contains text from other categories, such as \textsf{ecg} and \textsf{rad}. Multi-BioMed-Transformers provide the option to explore using these other available texts as additional input text.
\subsection{Multi-CNNText} Multi-CNNText adopts the same idea as multi-BioMed-Transformers, where two or more CNNText models are concatenated together. Figure~\ref{fig:dual} presents an example of DualCNNText, where two CNNText models are concatenated together. Although CNNText can handle longer input sequences, concatenating multiple CNNText models provides the option of using input text from different categories such as ECG and radiology, as mentioned before, as the features of different categories can be captured separately.
\subsection{CNNText with Transformers} The third variation is combining CNNText with transformers (see Figure~\ref{fig:dual}). Although many variations are possible, this research only considers a couple of them. BERT-base and PubMedBERT are the two transformers that are used with CNNText. However, variations such as different embedding dimensions and multiple transformer models can be used with CNNText. It is also important to point out that CNNText is just one possible choice, and there are many other deep learning models that could be used instead of CNNText.
\begin{figure}[t!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.55\textwidth]{triple_pub_bert-1.png} \caption{TriplePubMedBERT architecture.} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.5\textwidth]{dualCNN.png} \caption{DualCNNText architecture.} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.5\textwidth]{cnn_bert.png} \caption{CNNText with Transformer architecture.} \end{subfigure} \caption{Concatenated Language Model architectures.} \vspace{-1.5em} \label{fig:dual} \end{figure}
\section{Experiments} We present overall micro and macro F1 scores and individual label F1 scores for the multi-label problems outlined in Table~\ref{tab:lab_den}. Critical difference plots are presented as supportive statistical analysis. The Nemenyi post-hoc test (95\% confidence level) identifies statistical differences between learning methods. CD graphs show the average ranking of individual F1 scores obtained using various language models; the lower the rank, the better. The difference in average ranking is statistically significant if there is no bold line connecting the two settings. All experimental results are obtained from a random-seed training-testing scheme and averaged over three runs. The variation of these three independent runs is within a range of $\pm 0.015$. We explore several different transformer models and compare their performance to concatenated BioMed-Transformers. Transformer implementations are based on the open-source PyTorch transformer repository.\footnote{https://github.com/huggingface/transformers} Transformer models are fine-tuned on all layers without freezing. For the optimiser, we use Adam~\cite{kingma2014adam} with learning rates between 9e-6 and 1e-5.
Training batch sizes were varied between 1 and 16. A non-linear sigmoid function $f(z) = \frac{1}{1+e^{-z}}$, with a range of 0 to 1, is used as the activation function. Binary cross-entropy~\cite{cox1958regression} loss, $Loss_{BCE}(X,y) = -\sum_{l=1}^{L}\left(y_l \log(\hat{y}_l)+(1-y_l)\log(1-\hat{y}_l)\right)$, over each label is used for multi-label classification. Domain-specific fastText embeddings~\cite{yogarajan2020,yogarajan2020seeing} of a 100-dimensional skipgram model are used for neural networks.\footnote{Our source code can be obtained from: \\ \url{https://github.com/vithyayogarajan/Medical-Domain-Specific-Language-Models/tree/main/Concatenated-Language-Models-Multi-label}} \section{Results} Results are presented in three parts. First we present the overall performance of the language models, followed by the SOTA comparison, and finally we present tail-end performance. \subsection{Overall performance}\label{sec:overall} We present an extensive comparison across models for cardiovascular disease, followed by selected results for other multi-label problems. Table~\ref{tab:cardio28154_dual} presents the results for various language model variations for cardiovascular disease, using MIMIC-III data with 28,154 hospital admissions of patients and 30 labels. Multi-PubMedBERT and multi-BioMed-RoBERTa show a consistent improvement of 3\% to 7\% in micro-F1 scores over single PubMedBERT and BioMed-RoBERTa, respectively. The macro-F1 score of the best TriplePubMedBERT option is at least 3\% better than those of the other language models presented, except for TransformerXL with 3,072 tokens. Macro-F1 scores of multi-CNNText and CNNText with transformers are poor compared to those of all other language models presented. For cardiovascular disease, incorporating \textsf{ecg} and \textsf{rad} does show some improved overall results, especially with the TriplePubMedBERT options. Critical difference plots for individual label F1 scores obtained using various language models in Table~\ref{tab:cardio28154_dual} are presented in Figure~\ref{fig:cardio_dual_cd}. Both Table~\ref{tab:cardio28154_dual} and Figure~\ref{fig:cardio_dual_cd} show that TransformerXL with \textsf{dis} 3,072 tokens is the best option. However, multi-BioMed-Transformers show improvements, especially when compared to single BioMed-Transformers. \begin{table}[t!] \caption{Comparison of micro-F1 and macro-F1 of cardiovascular disease among various language models and input text for MIMIC-III data. Input text options include the maximum sequence length and reference to the options. Bold is used to indicate the best results for each grouping in the table, and underline is used for overall best results. Results are averaged over three runs. 
} \label{tab:cardio28154_dual} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{llcc} \hline
Neural Network Details & Input Text Options & Micro-F1 & Macro-F1 \\ \hline
BioMed-RoBERTa& dis 512 &0.69 &0.30 \\
PubMedBERT & dis 512& 0.70 & {0.30} \\
TransformerXL & dis 1,536 & 0.75 & 0.28 \\
TransformerXL & dis 3,072 & \underline{\textbf{0.78}} & \underline{\textbf{0.32}} \\
Longformer & dis 3,000 & 0.74&0.30 \\
CAML (T100SG) & dis 3,000 & {{0.77}} & 0.24 \\ \hdashline\noalign{\smallskip}
Dual-Bio-RoBERTa & Option 0: 512 &0.72 & 0.28 \\
DualPubMedBERT & Option 0: 512 & 0.72 & {0.30} \\
Triple-BioMed-RoBERTa &Option 1: 512 & 0.72 & 0.29 \\
TriplePubMedBERT &Option 1: 512 & 0.73 & 0.29 \\
TriplePubMedBERT &Option 2: 512 & {0.73} & \textbf{0.31} \\
TriplePubMedBERT &Option 3: 512 & {0.73} & {0.30} \\
QuadruplePubMedBERT & Option 4: 512 & \textbf{0.74} & 0.28 \\ \hdashline\noalign{\smallskip}
CNNText (T100SG) &dis 512&0.72 &0.23 \\
CNNText (T100SG) & dis 3,000 & 0.74 & \textbf{0.30} \\
DualCNNText (T100SG) & Option 0: 1,000&0.73&0.22 \\
TripleCNNText (T100SG) &Option 2: 1,000 &0.74&0.24 \\
TripleCNNText (T100SG) &Option 3: 1,000& \textbf{0.75}& {0.25} \\
QuadrupleCNNText (T100SG) & Option 4: 1,000 & 0.74&0.22 \\ \hdashline\noalign{\smallskip}
CNNText (T100SG) + BERT-base & Option 6: dis 3,000 + ecg 512 &0.75 &0.20 \\
CNNText (T100SG) + PubMedBERT &Option 6: dis 3,000 + ecg 512 &\textbf{0.76} &\textbf{0.22} \\
CNNText (T100SG) + PubMedBERT &Option 7: dis 3,000 + rad 512 & 0.75 &0.21 \\ \hline
\end{tabular}} \end{table} \begin{figure}[b!] \includegraphics[width=1\textwidth,height=4cm]{cardio_dual_mimic_cd_f.png} \caption[Critical difference plots.]{Critical difference plots. Nemenyi post-hoc test (95\% confidence level), identifying statistical differences between language models for cardiovascular disease presented in Table~\ref{tab:cardio28154_dual}.} \label{fig:cardio_dual_cd} \end{figure} \begin{table}[t!] \centering \caption{Comparison of micro-F1 and macro-F1 of COVID-19 patient shielding, systemic fungal or bacterial infection, and levels 2 and 3 of ICD-9 codes among various language models. Time required per epoch\protect\footnotemark[3] for systemic fungal or bacterial infection and MIMIC-III Level 3 is also presented. Input text options are included for reference. Bold is used to indicate the best results among the groups (for time, the lowest value), and underline is used for overall best results. Published results are also presented for direct comparison. Results are averaged over three runs. 
} \label{tab:dual} \resizebox{\linewidth}{!}{ \begin{tabular}{llrrr:rrr} \hline \noalign{\smallskip} & & \multicolumn{2}{c}{COVID-19} & & \multicolumn{3}{:c}{Fungal or Bacterial} \\ Transformers & Input Text &Micro-F1 & Macro-F1 & \quad \quad & Micro-F1 & Macro-F1 & Time (epoch) \\ \noalign{\smallskip}\hline\noalign{\smallskip} BioMed-RoBERTa& dis 512 & 0.53~\cite{yogarajan2021predicting}& 0.45~\cite{yogarajan2021predicting}&& 0.45&0.39 & {2,554 sec} \\ PubMedBERT & dis 512 & 0.54~\cite{yogarajan2021predicting} &0.48~\cite{yogarajan2021predicting} & &0.48 &0.39 & 2,940 sec \\ TransformerXL & dis 512 & 0.51 &0.45 & &0.47 &0.39 &2,921 sec \\ TransformerXL & dis 3,072 & \underline{\textbf{0.65}}~\cite{yogarajan2021predicting} & \underline{\textbf{0.51}}~\cite{yogarajan2021predicting} & &\underline{\textbf{0.64}} &\underline{\textbf{0.46}} &{43,200 sec} \\ Longformer & dis 3,000 &0.58~\cite{yogarajan2021predicting} &0.50~\cite{yogarajan2021predicting} & &0.58 &0.43 & 13,500 sec \\ CAML (T100SG)& dis 3,000 &0.61~\cite{yogarajan2021predicting} &0.40~\cite{yogarajan2021predicting} & & {0.62}&0.38 & \textbf{47 sec} \\ & & & & & & & \\ DualPubMedBERT & Option 0: 512 & \textbf{0.58} &\textbf{0.49} & & \textbf{0.57} & \textbf{0.43} & 4,020 sec\\ TriplePubMedBERT & Option 1: 512 & 0.54 &0.46 & &0.56 &0.40 & 5,580 sec \\ TriplePubMedBERT & Option 2: 512 & -& - & & 0.54& 0.39 & 5,580 sec\\ TriplePubMedBERT & Option 3: 512 & - &- & &0.54 &0.39 & 5,580 sec\\ QuadruplePubMedBERT & Option 4: 512 & - & - & &0.54 &0.40 & 7,080 sec \\ QuadruplePubMedBERT & Option 5: 512 & 0.52& 0.46 & &\textbf{0.57} &0.40 & 7,080 sec \\ \noalign{\smallskip}\hline\noalign{\smallskip} & & \multicolumn{2}{c}{MIMIC-III Level 2 codes} & & \multicolumn{3}{c}{MIMIC-III Level 3 codes} \\ Transformers & Input Text &Micro-F1 & Macro-F1 & \quad \quad & Micro-F1 & Macro-F1 & Time (epoch) \\ \noalign{\smallskip}\hline\noalign{\smallskip} PubMedBERT & dis 512 & 0.65~\cite{yogarajan2021multilabel} & 0.41~\cite{yogarajan2021multilabel} & & 0.55 & 0.18 & \textbf{3,393 sec}\\ BioMed-RoBERTa & dis 512 & 0.64~\cite{yogarajan2021multilabel} & 0.40~\cite{yogarajan2021multilabel} & & 0.53 & 0.18 & 4,877 sec \\ TransformerXL & dis 3,072 & \underline{\textbf{0.73}} & \underline{\textbf{0.46}} & & -&- &- \\ Longformer & dis 3,000 & {{0.72}} & {{0.45}} & & {{0.62}} & 0.19 & {16,889 sec} \\ CAML (T100SG)& dis 3,000 & {{0.72}} & 0.43 & & \underline{\textbf{0.64}} & \underline{\textbf{0.26}} & \textbf{64 sec}\\ & & & & & & & \\ DualPubMedBERT & Option 0: 512 &\textbf{0.68} &{\textbf{0.45}} & &\textbf{0.57} &\underline{\textbf{0.20}} & 4,750 sec\\ DualBioMed-RoBERTa & Option 0: 512 &0.66 &0.43 & &0.56 &0.19 & 6,842 sec\\ TriplePubMedBERT & Option 1: 512 & 0.66 &0.43 & & - & - & - \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular}} \vspace{-8pt} \end{table} \begin{figure}[t!] \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth,height=2.8cm]{fungal_dual_cd_f.png} \caption{Task: Systemic fungal or bacterial infections} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth,height=2.8cm]{level2_cd.png} \caption{Task: MIMIC-III Level 2 codes} \end{subfigure} \caption{Critical difference plots. Nemenyi post-hoc test (95\% confidence level), identifying statistical differences between language models in Table~\ref{tab:dual}, where critical difference is calculated for individual label F1 scores. 
} \label{fig:cd} \vspace{-4pt} \end{figure} Table~\ref{tab:dual} presents micro and macro F1 scores for various language model variations for COVID-19 patient shielding and systemic fungal or bacterial infection using MIMIC-III data. For systemic fungal or bacterial infections, multi-PubMedBERT models show improvements of 12\% to 19\% in micro-F1, and 2\% to 10\% in macro-F1 scores over single PubMedBERT, except for TriplePubMedBERT with \textsf{rad} and \textsf{ecg}, where the macro-F1 score is on par with single PubMedBERT. Contrary to the case of cardiovascular disease, here the additional inputs of \textsf{ecg} and \textsf{rad} do not result in better performance. It is likely that \textsf{ecg} and \textsf{rad} are not that relevant for coding fungal or bacterial infections. Table~\ref{tab:dual} for COVID-19 patient shielding shows TransformerXL with \textsf{dis} 3,072 tokens to be the best option, as observed with the other case studies. DualPubMedBERT shows improvements over single PubMedBERT and the other variations of multi-PubMedBERT. All three case studies show that TransformerXL with \textsf{dis} 3,072 tokens is the top performer in terms of predictive performance. However, concatenated BioMed-Transformers show improvements, especially when compared to single BioMed-Transformers. Table~\ref{tab:dual} also presents the time per epoch in seconds for systemic fungal or bacterial infection to provide a direct comparison among the language models. TransformerXL (3,072) requirements are much greater than those of other language models, including multi-PubMedBERT: for example, it needs 240 hours (for \textsf{dis} 3,072) when DualPubMedBERT only requires 22 hours. Table~\ref{tab:dual} also presents micro and macro F1 scores for levels 2 and 3 of ICD-9 codes using MIMIC-III data. As mentioned above, due to the processing time required by TransformerXL (3,072), we only use Longformer for encoding long documents for ICD-9 level 3. For MIMIC-III Level 2 codes, TransformerXL with \textsf{dis} 3,072 tokens is the top performer. DualPubMedBERT shows improvements in both micro and macro F1 scores of 3\% to 5\% over other PubMedBERT variations, and the macro-F1 scores of DualPubMedBERT and Longformer are equal and only marginally behind TransformerXL. For MIMIC-III Level 3 codes, the macro-F1 score of DualPubMedBERT is better than those of other transformer models, including Longformer. However, CAML (T100SG) outperforms all variations of transformer models. Figure~\ref{fig:cd} presents the critical difference plots for results presented in Table~\ref{tab:dual}. The Nemenyi posthoc test (95\% confidence level) shows statistical differences between learning methods. TransformerXL (3,072) and Longformer (3,000) are the overall top performers. However, the difference between them and DualPubMedBERT is not statistically significant. \footnotetext[3]{Average times (in seconds) based on experiments run on 12 core Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz, GPU device GV100GL [Quadro GV100].} This section compares the overall performance of multiple language models for MIMIC-III data for multi-label problems with 30, 42, 73, 158 and 923 labels. TransformerXL (3,072) consistently outperformed other language models. Multi-CNNText and CNNText with Transformers performed poorly when compared to other language model variations. Hence, only results for cardiovascular disease are presented in this research for the CNNText variations. 
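For completeness, the computation behind the critical difference plots used throughout this section is sketched below: models are ranked per label, average ranks are compared, and two models are considered significantly different when their average ranks differ by more than the Nemenyi critical difference. The tabulated $q_{0.05}$ value and the synthetic scores in the example are assumptions for illustration only and should be checked against standard tables for the actual number of models compared.
\begin{verbatim}
# Sketch of the computation behind the critical difference (CD) plots:
# average ranks of k models over N labels and the Nemenyi CD at alpha = 0.05.
# The q_0.05 value below is an assumption to be verified in standard tables.
import numpy as np
from scipy.stats import rankdata

def nemenyi_cd(f1_matrix, q_alpha):
    """f1_matrix: (N labels) x (k models) array of per-label F1 scores."""
    N, k = f1_matrix.shape
    # Rank models within each label; rank 1 = best (highest F1).
    ranks = np.vstack([rankdata(-row) for row in f1_matrix])
    avg_ranks = ranks.mean(axis=0)
    cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
    return avg_ranks, cd

# Example with synthetic scores for 5 models on 30 labels (q_0.05 for k = 5
# is approximately 2.728 according to standard tables; verify before use).
rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, size=(30, 5))
avg_ranks, cd = nemenyi_cd(scores, q_alpha=2.728)
print(avg_ranks, cd)
# Two models differ significantly if |avg_rank_i - avg_rank_j| > cd.
\end{verbatim}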
Multi-BioMed-Transformers outperform single BioMed-Transformers, with a more noticeable improvement in micro-F1 scores for cardiovascular disease and systemic fungal or bacterial infections. Due to computational restrictions, only Longformer was used to handle long text sequences for level 3 of ICD-9 codes. The DualPubMedBERT macro-F1 score was the same as those of Longformer and TransformerXL for level 2 ICD-9 codes, and better than that of Longformer for level 3 ICD-9 codes with 923 labels. \subsection{SOTA Results} Both Tables~\ref{tab:cardio28154_dual} and \ref{tab:dual} and Figures~\ref{fig:cardio_dual_cd} and \ref{fig:cd} show that TransformerXL outperforms CAML across all multi-label problems for predicting medical codes. In addition, there are other language models, including concatenated models, that perform on par with or above CAML, especially when macro-F1 scores are compared. \begin{table}[t!] \caption{Overall results for MIMIC-III Top 50 ICD-9 codes. Bold is used to indicate the best results. Published results are presented for direct comparison. Description Regularized-CAML is referred to as DR-CAML. Both variations of EffectiveCAN are presented.\protect\footnotemark[4] Our results are averaged over three runs.} \label{tab:top50} \centering \begin{tabular}{lrr} \hline Models & Micro-F1 & Macro-F1 \\\hline\noalign{\smallskip} CAML~\cite{mullenbach2018explainable} & 0.614&0.532 \\ DR-CAML~\cite{mullenbach2018explainable} & 0.633&0.576 \\ EffectiveCAN (Sum-pooling attention)~\cite{liueffective} & 0.702 & 0.644 \\ EffectiveCAN (Multi-layer attention)~\cite{liueffective} & 0.717 & 0.668 \\ \noalign{\smallskip}\hdashline\noalign{\smallskip} DualPubMedBERT (Option 0: 512) &0.640 &0.576 \\ TriplePubMedBERT (Option 1: 512) &0.641 &0.583 \\ Longformer (3,000) & 0.703&0.654 \\ TransformerXL (3,072) & \textbf{0.723}&\textbf{0.677} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \footnotetext[4]{See Liu et al.\ (2021)~\cite{liueffective} for details of EffectiveCAN variations and architectures.} Table~\ref{tab:top50} provides the overall micro and macro F1 scores of the most frequent 50 ICD-9 codes in MIMIC-III with discharge summaries. In this particular case, for direct comparison, the labels and input data are all matched to the exact specifications of the compared published methods. This is the only section in this research where the Top 50 ICD-9 codes are used for experimental evaluations. Evidently, TransformerXL (3,072) with a learning rate of 1e-5 presents new SOTA results. \begin{figure}[tph!] \centering \includegraphics[width=0.95\textwidth]{923_f1_2.png}\\ \includegraphics[width=0.95\textwidth]{158_f1_2.png} \\ \includegraphics[width=0.95\textwidth]{fungal_f1_2.png}\\ \includegraphics[width=0.95\textwidth]{covid_f1_2.png}\\ \includegraphics[width=0.95\textwidth]{cardio_f1_2.png}\\ \includegraphics[width=0.85\textwidth]{legend.png}\\ \caption{F1-score for tail-end labels (frequency $< 1\%$), where single PubMedBERT (in red) is compared with multi-BioMed-transformer models.} \label{fig:tail_f1} \vspace{-4pt} \end{figure} \subsection{Tail-end Labels} This section presents a comparison of individual label F1 scores for the multi-label problems presented in Section~\ref{sec:overall}. The focus here is on showing the differences and the improvements in F1 scores of tail-end labels with multi-BioMed-Transformers compared to single transformer models, including Longformer and TransformerXL. 
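The comparisons in this subsection rely on simple per-label quantities, sketched below for concreteness: per-label F1 scores, a mask identifying tail-end labels (training frequency below 1\%), and win/draw/loss counts between two models. The synthetic data, function names, and frequency generation are illustrative assumptions; the sketch is not the exact evaluation code used here.
\begin{verbatim}
# Sketch of the per-label analysis: per-label F1, tail-end selection
# (frequency < 1%), and wins/draws/losses between two models. Synthetic data
# and names are illustrative assumptions.
import numpy as np
from sklearn.metrics import f1_score

def per_label_f1(y_true, y_pred):
    # y_true, y_pred: (n_documents, n_labels) binary indicator arrays.
    return f1_score(y_true, y_pred, average=None, zero_division=0)

def tail_mask(y_train, threshold=0.01):
    # Tail-end labels: relative frequency below 1% in the training data.
    return y_train.mean(axis=0) < threshold

def wins_draws_losses(f1_a, f1_b, mask):
    diff = f1_a[mask] - f1_b[mask]
    return (diff > 0).sum(), (diff == 0).sum(), (diff < 0).sum()

# Example with synthetic sparse labels and predictions for two models.
rng = np.random.default_rng(1)
y_true = (rng.random((200, 50)) < 0.008).astype(int)
pred_a = (rng.random((200, 50)) < 0.008).astype(int)
pred_b = (rng.random((200, 50)) < 0.008).astype(int)
mask = tail_mask(y_true)
w, d, l = wins_draws_losses(per_label_f1(y_true, pred_a),
                            per_label_f1(y_true, pred_b), mask)
print(w, d, l)
\end{verbatim}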
Table~\ref{tab:lab_den} presents the number of labels with frequency $\geq$ 1\%, and the number of tail-end labels (with label frequency $<$ 1\%). Figure~\ref{fig:tail_f1} presents tail-end F1 scores across all five multi-label problems. With the exception of a few specific labels, including \textsf{ICD-9 code 508} for the COVID-19 patient shielding problem, the F1-scores of concatenated BioMed-Transformers for tail-end labels are in general consistently better. This improvement is more evident for long tail-end cases, such as levels 2 and 3 of ICD-9 codes, where there is also an improvement in the number of labels with F1-score $\neq$ 0. \begin{figure}[t!] \centering \includegraphics[width=0.95\textwidth]{923_f1_diff1.png}\\ \includegraphics[width=0.95\textwidth]{158_dual_2.png} \caption{MIMIC-III Level 2 and 3 codes, where the difference between F1 scores of dual/triple language model variations and Longformer (3,000 tokens) is presented. Tail-end labels (frequency $< 1\%$) are ordered based on frequency, with the least frequent label at the right end. The legend is presented for reference. Negative values indicate better F1 scores for Longformer (3,000).} \label{fig:level2_tail_long} \vspace{-4pt} \end{figure} Tables~\ref{tab:cardio28154_dual} and \ref{tab:dual} show that the overall performance of Longformer and TransformerXL is in general better, especially when compared to single BioMed-Transformers. To analyse the difference in tail-end label F1 scores, we also present the actual differences in F1 scores, calculated between a particular language model variation and Longformer or TransformerXL; hence, negative values indicate that Longformer or TransformerXL has a better F1 score. Figure~\ref{fig:level2_tail_long} presents the difference in F1 scores for level 2 and 3 ICD-9 codes for the following combinations: DualPubMedBERT - Longformer (3,000), TriplePubMedBERT - Longformer (3,000) and DualBioMed-RoBERTa - Longformer (3,000). Due to space restrictions, only tail-end labels are presented. However, it is important to note that for frequent labels, the F1 scores of Longformer are on par with or better than those of the other three models. Smaller differences in F1 scores are noticed among the most frequent labels, and occasionally the dual and triple models perform slightly better than Longformer for specific labels. In general, Longformer has the most wins over other models for label frequency $\geq 1\%$. This pattern is reversed for tail-end labels, with Longformer losing more often to the dual and triple models where a difference in F1 scores is noted. For some tail-end labels, these differences are noticeably higher than for other labels. Level 3 contains 923 labels, with more than 650 labels being infrequent. Figure~\ref{fig:level2_tail_long} also shows Longformer losing more to dual transformers at tail-end labels. \begin{table}[t!] 
\centering \caption{The number of wins, draws and losses of concatenated language models compared to Longformer (LF) and TransformerXL (TXL) for systemic fungal or bacterial infections and levels 2 and 3 of ICD-9 codes.} \label{tab:fun_win} \begin{tabular}{lrrrrrr} \hline
\multicolumn{7}{c}{\textbf{Systemic fungal or bacterial infections, 73 labels}} \\ \noalign{\smallskip} \hline
Models & \multicolumn{3}{c}{Freq $\geq$ 1\%} & \multicolumn{3}{c}{ Freq $<$ 1\%} \\
& wins & draws & losses & \qquad wins & draws & losses \\ \noalign{\smallskip} \hline \noalign{\smallskip}
PubMedBERT -TXL&1&1&32&11&12&16\\
DualPubMedBERT - TXL&3&3&28&15&9&15\\
TriplePubMedBERT - TXL &4&0&30&8&13&18\\
QuadruplePubMedBERT - TXL &3&0&31&10&12&17\\ \noalign{\smallskip} \noalign{\smallskip}
PubMedBERT - LF&4&3&27&13&10&16\\
DualPubMedBERT - LF&12&0&21&14&13&12\\
TriplePubMedBERT - LF&9&2&23&12&12&15\\
QuadruplePubMedBERT - LF&15&1&18&10&10&19\\ \noalign{\smallskip} \hline \noalign{\smallskip}
\multicolumn{7}{c}{\textbf{MIMIC-III Level 2 codes, 158 labels}} \\ \noalign{\smallskip} \hline \noalign{\smallskip}
DualPubMedBERT - TXL & 26 &0 &74 & 26 &16& 16 \\
DualBioMed-RoBERTa - TXL & 25 &0 &75 & 28 &16&14 \\
TriplePubMedBERT - TXL & 20 & 1& 79& 27 &14 & 17 \\ \noalign{\smallskip} \noalign{\smallskip}
DualPubMedBERT - LF & {19} & 11 & 70 & {30} &22 & 6 \\
DualBioMed-RoBERTa - LF &{{12}} &5 &83 & {{28}} &21 & 9 \\
TriplePubMedBERT - LF & {{13}} &2 & 85 & {{26}} &20 & 12 \\ \noalign{\smallskip} \hline \noalign{\smallskip}
\multicolumn{7}{c}{\textbf{MIMIC-III Level 3 codes, 923 labels}} \\ \hline \noalign{\smallskip}
DualPubMedBERT -LF & {{92}} &17 & 135 & {{160}} & 476 & 43 \\
DualBioMed-RoBERTa -LF & {{83}} & 21& 144 &{{181}} &454 & 40 \\ \noalign{\smallskip} \hline
\end{tabular} \vspace{-6pt} \end{table} Table~\ref{tab:fun_win} presents the number of per-label wins, draws, and losses for levels 2 and 3 ICD-9 codes, and fungal or bacterial infections. For multi-label problems, the F1-scores of many infrequent labels are zero. This observation is also evident in Figures~\ref{fig:tail_f1} and \ref{fig:level2_tail_long}. To quantify the observations, differences of F1 scores are presented as wins, draws and losses. In most cases, draws occur where the F1 scores are zero. We acknowledge that there is a need for further analysis to understand the behaviour observed in Table~\ref{tab:fun_win}. As seen in Figure~\ref{fig:level2_tail_long}, more wins are observed for the concatenated models for tail-end labels. DualPubMedBERT is the best performing option, with the fewest losses among the more frequent label groups and the most wins among the tail-end labels. For MIMIC-III Level 3 codes, the results in Table~\ref{tab:fun_win} show Longformer losing more to dual transformers at tail-end labels. For frequent labels of systemic fungal or bacterial infection, the F1 scores of TransformerXL are consistently better than those of the PubMedBERT variations, making it a clear winner. For infrequent labels, multi-PubMedBERT variations perform better than TransformerXL for many labels. \section{Discussion} We presented concatenated domain-specific language model variations to improve the overall performance on the many infrequent labels in multi-label problems with long input sequences. Although TransformerXL and Longformer can encode long sequences, and, in general, TransformerXL outperforms other models, setting new SOTA results, the required computational resources are prohibitive. Concatenated PubMedBERT models outperformed single BioMed-Transformers. 
There was a noticeable improvement in micro-F1 for multi-BioMed-Transformers with cardiovascular disease and systemic fungal or bacterial infection. For larger multi-label problems, DualPubMedBERT, TransformerXL and Longformer achieve the same macro-F1 for MIMIC-III Level 2, but DualPubMedBERT wins for MIMIC-III Level 3. We also study the impact on predictive performance for less frequent labels. Label frequency is highly biased by the hospital/department from which the data were collected. If the data were from a fertility ward, the label frequency of pregnancy-related medical codes would be high, while for a cardiovascular ward this may not be the case. However, only being able to predict highly frequent labels well poses risks to a patient's health and well-being. Hence, this research also compared individual label F1 scores for multi-label problems, focusing on tail-end labels. For larger multi-label problems with long tail-end labels, such as level 2 and 3 ICD-9 codes, multi-BioMed-Transformers had more wins than Longformer and TransformerXL. This provides experimental evidence that, with fewer resources, concatenated BioMed-Transformers can improve overall micro and macro F1 scores for multi-label problems with long medical text. In addition, for multi-label problems with many tail-end labels, multi-BioMed-Transformers outperform other language models when the F1 scores of tail-end labels are compared directly. There are many avenues of research that arise directly from this work. If processing time or resources are not an issue, then continued pre-training of TransformerXL and Longformer on health-related data might improve prediction accuracy, possibly even for tail-end labels. Concatenating TransformerXL or Longformer is also a possibility. ICD-9 codes have a tree-like hierarchical nature. Hence, predicting ICD-9 codes as a hierarchical multi-label classification problem, using transformers to encode medical text, is another relevant avenue to explore.
\section{Introduction} The classical one-sample goodness-of-fit problem is concerned with testing the null hypothesis in which the cumulative distribution function (cdf), $F$, of an independent and identically distributed (iid) random sample $X_1, \ldots, X_n$ equals a certain prescribed cdf $F_0$. The most popular class of goodness-of-fit statistics for testing $\mathcal{H}_0:F=F_0$ is arguably that based on $F_n$, the empirical cumulative distribution function (ecdf) of $X_1, \ldots, X_n$. Ecdf-based test statistics confront $F_n$ against $F_0$, their best known representatives being the Kolmogorov--Smirnov ($D_n$), Cramér--von Mises ($W_n^2$), and Anderson--Darling ($A_n^2$) statistics, all of them generating omnibus tests of $\mathcal{H}_0$ against $\mathcal{H}_1:F\neq F_0$. When $F_0$ is continuous, testing $\mathcal{H}_0$ reduces to testing whether the iid sample $U_1,\ldots,U_n$, $U_i := F_0(X_i)$, $i=1,\ldots,n$, is distributed as $\operatorname{Unif}(0, 1)$, the continuous uniform distribution on $(0,1)$. Hence, tests of uniformity, despite their a priori limited applicability, provide powerful approaches to most of the goodness-of-fit problems concerned with fully-specified null hypotheses. In particular, the above ecdf-based statistics have the attractive property of being distribution-free, i.e., their exact null distributions do not depend on $F_0$. Both ecdf-based tests and uniformity tests have been exported to deal with data naturally arising in supports different from $\mathbb{R}$ or subsets thereof. This is the case of directional data, that is, data supported on the unit hypersphere $\mathbb{S}^{p-1}:=\{\mathbf{x}\in \mathbb{R}^{p}:\|\mathbf{x}\|= 1\}$, $p\geq2$, which is commonly instantiated in the form of circular ($p=2$) or spherical ($p=3$) data. The analysis of directional data faces specific challenges due to the non-Euclideanity of the support; see \cite{Mardia1999a} for a book-length treatment of tailored statistical methods and \cite{Pewsey2021} for a review of recent advances. In particular, tests of uniformity on $\mathbb{S}^{p-1}$ must be invariant to arbitrary rotations of the data coordinates, as these do not alter the uniform/non-uniform nature of the data. While a sizable number of tests of uniformity on $\mathbb{S}^{p-1}$ exist (see a review in \cite{Garcia-Portugues2020a}), perhaps the two best-known omnibus tests are those of \cite{Kuiper1960} and \cite{Watson1961} on $\mathbb{S}^1$: their statistics, $V_n$ and $U_n^2$, can be regarded as the rotation-invariant versions of the Kolmogorov--Smirnov and Cramér--von Mises tests of uniformity, respectively. Moving beyond $\mathbb{S}^1$ has proven a challenging task for ecdf-based tests until relatively recent years, with \cite{Cuesta-Albertos2009} using a Kolmogorov--Smirnov test on random projections data and \cite{Garcia-Portugues2020b} proposing a class of projected-ecdf statistics that extends the \cite{Watson1961} test to $\mathbb{S}^{p-1}$ (see Section \ref{section:proj-based-unif-tests}). As in the classical setting, tests of uniformity on $\mathbb{S}^{p-1}$ allow for testing the goodness-of-fit of more general distributions: in $\mathbb{S}^1$, this is a straightforward application of the probability integral transform in the angles space $[-\pi,\pi)$; the case of $\mathbb{S}^{p-1}$, $p\geq3$, is remarkably more complex and has been recently put forward in \cite{Jupp2020}. 
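As a minimal numerical illustration of this reduction to uniformity, the following sketch applies the probability integral transform and then tests the transformed sample against $\operatorname{Unif}(0,1)$ with the Kolmogorov--Smirnov statistic; the normal null distribution and sample size are only illustrative choices.
\begin{verbatim}
# Minimal illustration of reducing H0: F = F0 to a test of uniformity via the
# probability integral transform U_i = F0(X_i). The N(0, 1) null and n = 50
# are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=50)              # sample to be tested
u = stats.norm.cdf(x)                # PIT under F0 = N(0, 1)
D_n, p_value = stats.kstest(u, "uniform")
print(D_n, p_value)                  # equivalent to stats.kstest(x, "norm")
\end{verbatim}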
\begin{table}[h] \iffigstabs \small \centering \begin{tabular}{ >{\arraybackslash}m{1.5cm} >{\arraybackslash}m{13.5cm}} \toprule
Statistic & Exact distribution approximations\\ \midrule
$D_n$ & \cite{Massey1950, Massey1951}$\ssymbol{1}{}^{,}{}\ssymbol{2}$, \cite{Birnbaum1952}$\ssymbol{3}$, \cite{Maag1971}$\ssymbol{4}$, \cite{Marsaglia2003}$\ssymbol{3}$, \cite{Brown2007}$\ssymbol{2}{}^{,}{}\ssymbol{8}$, \cite{Facchinetti2009}$\ssymbol{8}$\\ \midrule
$W^2_n$ & \cite{Marshall1958}$\ssymbol{8}$, \cite{Pearson1962}$\ssymbol{5}{}^{,}{}\ssymbol{9}$, \cite{Tiku1965}$\ssymbol{3}$, \cite{Stephens1968}$\ssymbol{3}{}^{,}{}\ssymbol{5}{}^{,}{}\ssymbol{9}$, \cite{Knott1974}$\ssymbol{7}$, \cite{Csorgo1996}$\ssymbol{3}$\\ \midrule
$V_n$ & \cite{Stephens1965}$\ssymbol{1}$, \cite{Maag1971}$\ssymbol{4}$, \cite{Durbin1973, Arsham1988}$\ssymbol{8}$\\ \midrule
$U^2_n$ & \cite{Pearson1962}$\ssymbol{5}{}^{,}{}\ssymbol{9}$, \cite{Tiku1965}$\ssymbol{3}$, \cite{Quesenberry1977}$\ssymbol{9}$\\ \midrule
$A^2_n$ & \cite{Lewis1961}$\ssymbol{9}$, \cite{Marsaglia2004}$\ssymbol{6}$\\ \bottomrule
\end{tabular} \fi \caption{\small Summary of existing specific approaches for approximating exact distributions of several goodness-of-fit test statistics. The approximations rely on the following main techniques: difference equations$\ssymbol{1}$, recursive formulae$\ssymbol{2}$, truncated approximations$\ssymbol{3}$, asymptotic expansions$\ssymbol{4}$, approximation of distribution moments$\ssymbol{5}$, correction factors$\ssymbol{6}$, characteristic function approximation$\ssymbol{7}$, direct formulae$\ssymbol{8}$, and Monte Carlo simulations$\ssymbol{9}$.} \label{tab:exact-approx-literature} \end{table} Historically, applications of goodness-of-fit tests were somewhat hampered due to the absence of exact distribution theory for finite sample sizes. Statisticians focused on giving extensive tables of critical values for each statistic's exact distribution and, alternatively, on approximating the exact distributions of remarkable statistics. Table \ref{tab:exact-approx-literature} lists the approximations available for the exact distributions of $D_n$, $W^2_n$, $V_n$, $U^2_n$, and $A^2_n$, as well as the main techniques behind them. Although these specific approximations are highly accurate, the complexity of their expressions and the lack of straightforward applicability to other statistics beyond the ones they were designed for have not displaced the customary use of Monte Carlo simulations, asymptotic distributions, or even lookup tables when emitting general test decisions. In order to reduce the size of lookup tables, \cite{Stephens1970} transformed several statistics $T_n$ (among others, $D_n$, $V_n$, $W_n^2$, and $U_n^2$) into $T_n^{\ast}$ in such a way that the upper tails of $T_n^{\ast}$ remain roughly constant on $n$. Comparing $T_n^{\ast}$ (and not $T_n$) with certain fixed asymptotic critical values for $T_n$ gives a more accurate test calibration for small-to-moderate $n$'s. This approach also allowed finding finite-sample approximations in a wider set of goodness-of-fit problems: \cite{Stephens1974,Stephens1977b,Stephens1979b} and \cite{D'Agostino1986} derived analogous transformations for $D_n$, $V_n$, $W_n^2$, $U_n^2$, and $A_n^2$ when testing the goodness-of-fit of normal, exponential, logistic, and extreme value distributions. 
Other authors, such as \cite{Dufour1978}, found modifications for $D_n$ to use with truncated or censored samples, and \cite{Crown2000} applied this method to an $A_n^2$-related statistic for testing normal and exponential distributions. \cite{Hegazy1975} found transformations for new test statistics by fitting a functional relationship between the critical values and the sample size, introducing the first explicit use of a regression view to stabilize test statistics and offering insight into Stephens' original work. \cite{Pettitt1977} also applied this regression approach to $A_n^2$ for normality tests. \cite{Johannes1980} proposed an improved modification for \cite{Durbin1969}’s $C$ statistic, finding a specific transformation for each significance level; these approximations give more accurate results for a wider set of significance levels, yet at the expense of tabulating a higher number of transformations. More recently, using several regressions for different significance levels too, \cite{Marks1998} and \cite{Marks2007} found transformations for $D_n$ to test for Erlang distributions, while \cite{Heo2013} did the same for $A_n^2$ with several extreme value distributions. As Table \ref{tab:code-mod} shows, Stephens' transformations are present in nowadays' R software for goodness-of-fit testing, which also implements some of the statistic-specific approaches from Table~\ref{tab:exact-approx-literature}. \begin{table}[ht!] \iffigstabs \small \centering \begin{tabular}{ >{\arraybackslash}p{2.5cm} | >{\centering\arraybackslash}m{1.75cm} | >{\arraybackslash}m{10cm}} \toprule Methodology & R package & Statistics and references \\ \midrule \multirow{2}{2.2cm}{Exact distributions} & \texttt{goftest} & $W_n^2$ \citep{Csorgo1996}, $A_n^2$ \citep{Marsaglia2004}\\ \cline{2-3} & \texttt{stats} & $D_n$ \citep{Marsaglia2003} \\ \midrule \multirow{3}{2.2cm}{Transformation-based} & \texttt{circular} & $V_n$, $U_n^2$ \citep{Stephens1970}\\ \cline{2-3} & \texttt{sphunif} & $D_n$, $W_n^2$, $V_n$, $U_n^2$ \citep{Stephens1970} \\ \cline{2-3} & \texttt{EnvStats} & $D_n$, $W_n^2$, $A_n^2$ \citep{D'Agostino1986} \\ \bottomrule \end{tabular} \fi \caption{\small R packages implementing different approximation methods to compute exact $p$-values of goodness-of-fit tests: \texttt{circular} \citep{Agostinelli2017}, \texttt{sphunif} \citep{ Garcia-Portugues2020c}, \texttt{EnvStats} \citep{Millard2013}, \texttt{goftest} \citep{Faraway2019}, and \texttt{stats} \citep{RCoreTeam2021}.} \label{tab:code-mod} \end{table} In this paper we build on Stephens' transformations to expand and automatize them. First, we present a data-driven procedure to achieve a better stabilization, with respect to the sample size $n$, of the exact null distribution of a generic test statistic $T_n$ of interest, for a wider range of significance levels $\alpha$ (i.e., upper $\alpha$-quantiles of $T_n$). Specifically, new modifications for the (one-sample) Kolmogorov--Smirnov, Cramér--von Mises, Kuiper, and Watson test statistics are derived and shown to extend the scope of applicability of previous approaches. To the best of our knowledge, we also provide the first instance of such a stabilization for the Anderson--Darling test statistic. Second, we provide a method to approximate semi-continuous exact $p$-values for the tests constructed from stabilized statistics. 
Through an extensive simulation study, we evidence a significant improvement in the precision of the stabilization of the exact critical values of $T_n$ for several sample sizes, as well as a competitive computational cost when compared with statistic-specific methods for evaluating exact null distributions. We also show large improvements, both in precision and computational efficiency, over the use of Monte Carlo simulation, arguably the most popular test calibration approach nowadays. Third, we develop an extension of our stabilization procedure to deal with several recent test statistics for assessing uniformity on $\mathbb{S}^{p-1}$, $p \geq 2$, which hence have dimension-dependent distributions. In particular, we stabilize the exact null distribution of a novel Anderson--Darling test statistic for circular data. Finally, the introduced stabilization methodology allows us to perform tests in batches of small-to-moderate samples in an accurate and fast manner that does not require Monte Carlo simulation. This is illustrated in an astronomical dataset composed of sunspot appearance longitudes that exhibits a suspected temporal mix of uniform and non-uniform patterns. The rest of the paper is organized as follows. Section \ref{section:modification} introduces Stephens' approach (Section \ref{section:Stephens}) and our proposed extension (Section \ref{section:n-alpha-mod}), together with simulation studies and a comparison between several modifications (Section \ref{section:simulations}). Section \ref{section:parameter-modification} briefly introduces the projected-ecdf statistics for testing uniformity on the hypersphere (Section \ref{section:proj-based-unif-tests}), develops the parameter-dependent transformations to achieve their stabilization (Section \ref{section:modification-projected-ecdf}), and analyzes the empirical performance of these transformations (Section \ref{section:simulations2}). Section \ref{section:applications} gives an application of the modified statistics to astronomy. A final discussion of the obtained results concludes the paper in Section \ref{section:discussion}. \section{Stabilization of ecdf statistics} \label{section:modification} \subsection{On Stephens' stabilization} \label{section:Stephens} The stabilization of \cite{Stephens1970} aims to transform a statistic $T_n$ into $T^{\ast}_n$ through a function of $n$, so that the upper $\alpha$-quantiles of $T^{\ast}_n$ are well approximated by the upper $\alpha$-quantiles of $T_\infty$, the random variable distributed as the asymptotic null distribution of $T_n$, for small-to-moderate sample sizes. The transformation can be interpreted as a two-step stabilization. First, in the \textit{quantile ratios stabilization}, $T_n$ is modified to $T^{\alpha_0\text{-s}}_n$ so that the ratios of $T^{\alpha_0\text{-s}}_n$'s upper $\alpha$-quantiles with respect to a certain reference upper $\alpha_0$-quantile are roughly constant as a function of $n$. Second, in the \textit{asymptotic stabilization}, $T^{\alpha_0\text{-s}}_n$ is transformed into $T^{\ast}_n$ so that the upper $\alpha$-quantiles of $T^{\ast}_n$ are approximately equal to the asymptotic upper $\alpha$-quantiles for small-to-asymptotic sample sizes. For the sake of brevity, and since we are concerned only with upper tail tests, henceforth we will use ``$\alpha$-quantile'' as a replacement for ``upper $\alpha$-quantile''. 
The ratios involved in the first step are $T_{n; \alpha}/T_{n; \alpha_0}$, where $T_{n; \alpha}$ is the $\alpha$-quantile of the distribution for sample size $n$, i.e., $\mathbb{P}\left[T_{n} \ge T_{n; \alpha}\right] = \alpha$. Obviously, these ratios do not have to be constant for all $n$, as Figure \ref{fig:W2} shows for $W^2_n$. The \textit{quantile ratios stabilization} step searches for a transformed statistic, $T_{n}^{\alpha_0\text{-s}}$, whose quantile ratios $T_{n; \alpha}^{\alpha_0\text{-s}}/T_{n; \alpha_0}^{\alpha_0\text{-s}}$ do not depend on $n$. In other words, the desideratum is that the previous quantile ratios, for any sample size $n$, equal the asymptotic quantile ratios of the statistic, $T_{\infty; \alpha}/T_{\infty; \alpha_0}$, where $T_{\infty; \alpha}$ is the asymptotic $\alpha$-quantile. One way to find such a transformation is by setting $T_{n}^{\alpha_0\text{-s}}:=T_{n} - p(n)$ for a certain function $p:\mathbb{N}\rightarrow \mathbb{R}$ such that it verifies $\lim_{n \to \infty}p(n) = 0$ and the second equality below, for all $n$ and $\alpha$: \begin{align} \frac{T_{n; \alpha}^{\alpha_0\text{-s}}}{T_{n; \alpha_0}^{\alpha_0\text{-s}}} = \frac{T_{n; \alpha} - p(n)}{T_{n; \alpha_0} - p(n)} = \lim_{n \to \infty}\frac{T_{n; \alpha} - p(n)}{T_{n; \alpha_0} - p(n)} = \frac{T_{\infty; \alpha}}{T_{\infty; \alpha_0}} =: k_{\infty; \alpha}. \label{eq:stabilize_ratios} \end{align} Hence, $p$ is such that \begin{align*} p(n) = \frac{T_{n; \alpha} - k_{\infty; \alpha} \cdot T_{n; \alpha_0}}{1-k_{\infty; \alpha}}, \end{align*} which clearly depends on $\alpha$. Stephens fitted $p$ (see the end of this section) for a specific value of $\alpha$, at the expense of accuracy for other values of $\alpha$. After this step, the quantile ratios of $T_{n}^{\alpha_0\text{-s}}$ are roughly constant for all $n$, as Figure \ref{fig:W2} shows for $W^2_n$. This first step is omissible for statistics with quantile ratios that are already roughly stable, as is notably the case for $D_n$ and $V_n$ \citep[Section 5]{Stephens1970}. In this case, $p\approx 0$. The \textit{asymptotic stabilization} step aims to transform the already modified statistic, $T_n^{\alpha_0\text{-s}}$, into $T_n^{\ast}$ so that the $\alpha$-quantiles of this latter statistic are well approximated by the asymptotic $\alpha$-quantiles of the original statistic $T_n$. For that goal, $g:\mathbb{N}\rightarrow \mathbb{R}$ is defined as $g(n):=T_{\infty;\alpha}/T_{n;\alpha}^{\alpha_0\text{-s}}$. Owing to \eqref{eq:stabilize_ratios}, in principle this function does not depend on the significance level $\alpha$, only on $\alpha_0$: \begin{align} \label{eq:asint_ratios_relationship} \frac{T_{\infty; \alpha}}{T_{n; \alpha}^{\alpha_0\text{-s}}} = \frac{T_{\infty; \alpha_0}}{T_{n; \alpha_0}^{\alpha_0\text{-s}}}, \end{align} which holds for any value of $\alpha$. However, when $p$ and $g$ are fitted in practice, \eqref{eq:asint_ratios_relationship} will approximately hold for a certain set of $\alpha$ values, such as those shown in Figure \ref{fig:W2}. The function $g$ is estimated from the ratio $T_{\infty;\alpha}/T_{n;\alpha}^{\alpha_0\text{-s}}$ for a particular value $\alpha_1$ (possibly different from $\alpha_0$), which may eventually result in a loss of accuracy for other quantiles. 
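As a small numerical illustration of this first step, the sketch below estimates $p(n)$ for $W_n^2$ at a given pair $(\alpha,\alpha_0)$ from Monte Carlo quantiles under uniformity, using for $W^2_{\infty;\alpha}$ the asymptotic quantiles later reported in Table \ref{table:modified-stats}. The Monte Carlo size and the implementation details are illustrative assumptions; Stephens' procedure further fits a functional form for $p$ across $n$.
\begin{verbatim}
# Numerical sketch of the quantile-ratio stabilization step: estimating p(n)
# for W_n^2 from Monte Carlo quantiles under H0. Monte Carlo size and seed are
# illustrative assumptions; asymptotic quantiles are taken from Table 4.
import numpy as np

def cvm_statistic(u):
    # Cramer-von Mises statistic W_n^2 for a sample u, Unif(0, 1) under H0.
    n = len(u)
    i = np.arange(1, n + 1)
    return np.sum((np.sort(u) - (2 * i - 1) / (2 * n)) ** 2) + 1 / (12 * n)

def p_of_n(n, alpha, alpha0, W_inf_alpha, W_inf_alpha0, M=20000, seed=0):
    rng = np.random.default_rng(seed)
    stats_n = np.array([cvm_statistic(rng.uniform(size=n)) for _ in range(M)])
    T_a = np.quantile(stats_n, 1 - alpha)       # upper alpha-quantile
    T_a0 = np.quantile(stats_n, 1 - alpha0)     # upper alpha_0-quantile
    k_inf = W_inf_alpha / W_inf_alpha0          # asymptotic quantile ratio
    return (T_a - k_inf * T_a0) / (1 - k_inf)

# Asymptotic quantiles of W^2 at alpha = 0.05 and alpha_0 = 0.10 (Table 4).
print(p_of_n(n=20, alpha=0.05, alpha0=0.10,
             W_inf_alpha=0.4613, W_inf_alpha0=0.3473))
\end{verbatim}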
The final modified form of $T_n$ is \begin{align} T_n^{\ast} = T^{\alpha_0\text{-s}}_{n} \cdot g(n) = \left(T_n - p(n)\right) \cdot g(n), \label{eqn:stephens-modification} \end{align} where we highlight that in practice the functions $p$ and $g$ have to be estimated beforehand. Once these fits are readily available, the main benefit of \eqref{eqn:stephens-modification} is the simplicity of its use, which only requires evaluating a simple $n$-dependent transformation of $T_n$. The fits of $p$ and $g$ were originally handcrafted on a case-by-case basis \citep[even ``found by trial'',][Section 5]{Stephens1970}, or were heavily influenced by Stephens' functional forms, which pose significant limitations in terms of automation and flexibility. Moreover, the approximation error to the exact quantiles of $T_n$ that is obtained is, first, dependent on $\alpha_1$ and, second, significant for $\alpha$-quantiles different from $\alpha_1$. An additional downside of \eqref{eqn:stephens-modification} is the initial stabilization step, which increases the complexity and tuning required (selection of $\alpha_0$), and is a source of uncertainty for the final approximation. In order to overcome these problems, we present in the next section an enhanced stabilization approach that improves the accuracy of exact $\alpha$-quantiles while retaining the simplicity of the transformation. \begin{figure}[h!] \iffigstabs \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{img/Fig1a.pdf} \caption{\small $W_{n; \alpha}^2/W_{n; 0.10}^2$} \label{fig:W2-ratio} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{img/Fig1b.pdf} \caption{\small $W_{\infty;\alpha}^2/W_{n;\alpha}^2$} \label{fig:W2-asint} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{img/Fig1c.pdf} \caption{\small $W_{n; \alpha}^{2,0.10\text{-s}}/W_{n; 0.10}^{2,0.10\text{-s}}$} \label{fig:W2-stable-ratio} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{img/Fig1d.pdf} \caption{\small $W_{\infty;\alpha}^2/W_{n;\alpha}^{2,0.10\text{-s}}$} \label{fig:W2-stable-asint} \end{subfigure} \fi \caption{\small Quantile ratios of the Cramér--von Mises statistic $W_{n}^{2}$ (leftmost two figures) and its ratio-stabilized statistic $W_{n}^{2,0.10\text{-s}}$ (rightmost two figures).} \label{fig:W2} \end{figure} \subsection{\texorpdfstring{$(n,\alpha)$-stabilization}{(n, alpha)-stabilization}} \label{section:n-alpha-mod} Our stabilization consists of a single-step transformation of the original statistic $T_n$ into $T_n^{\ast}(\alpha)$ by a function that depends on the sample size $n$ and the significance level $\alpha$ at which the test is to be performed, so that the exact $\alpha$-quantile of $T_n^{\ast}(\alpha)$ is closely approximated by the $\alpha$-quantile of $T_{\infty}$. In addition to its improved accuracy and simplicity, an advantage of our modification is that it compresses extensive lookup tables: critical values do not need to be available for different significance levels because $\alpha$ is already included within the transformation. The ratios $T_{\infty; \alpha}/T_{n; \alpha}$, shown in Figure \ref{fig:W2} for $W_n^2$, can be directly modeled as a function $g:\mathbb{N}\times(0,1)\to\mathbb{R}$ of $(n,\alpha)$, hence condensing the two steps from Section \ref{section:Stephens} into one. 
To that aim, we define $g$ as the function satisfying \begin{align} \label{eq:f_n_alpha} \alpha = \mathbb{P} \left[T_n \ge T_{n; \alpha}\right] = \mathbb{P} \left[T_n\ge T_{\infty; \alpha} / g(n, \alpha)\right], \end{align} for all $(n,\alpha)$, and our transformed statistic (for the $\alpha$ significance level) as \begin{align*} T_{n}^{\ast}(\alpha):= T_{n} \cdot g(n, \alpha). \end{align*} It is very convenient to reexpress $g$, as defined in \eqref{eq:f_n_alpha}, as \begin{align} \frac{T_{\infty; \alpha}}{T_{n; \alpha}} = g(n, \alpha) + \varepsilon,\label{eq:reg} \end{align} where $\varepsilon=0$ if \eqref{eq:f_n_alpha} is perfectly satisfied for all $(n,\alpha)$. Indeed, Equation \eqref{eq:reg} casts the stabilization problem as an error-free fixed-design regression problem with predictors $(n,\alpha)$, response $Y:=T_{\infty; \alpha}/T_{n; \alpha}$, and unknown regression function $g$. Casting Stephens' stabilizations as a regression problem was introduced early on in \cite{Hegazy1975}, \cite{Pettitt1977}, and \cite{Johannes1980}. Yet these works focus on using the sample size as the sole predictor, for isolated $\alpha$-quantiles, an approach that has later been applied in \cite{Marks1998}, \cite{Marks2007}, and \cite{Heo2013}. We now introduce a sufficiently flexible parametric specification for $g$ in \eqref{eq:reg} that allows its effective estimation in practice. We resort to a linear model featuring negative powers of the sample size $n$ and significance level $\alpha$ as predictors, as well as the corresponding interaction effects between them. Precisely, we consider the following saturated model: \begin{align} g(n, \alpha)= 1+\frac{\beta_1}{\sqrt{n}}+\frac{\beta_2}{n}+\frac{\beta_3}{\sqrt{\alpha}}+\frac{\beta_4}{\alpha} +\frac{\beta_5}{\sqrt{n\alpha}}+\frac{\beta_6}{\sqrt{n}\alpha}+\frac{\beta_7}{n\sqrt{\alpha}}+\frac{\beta_8}{n\alpha}. \label{eq:saturated_g_n_alpha} \end{align} The fixed intercept and negative powers of $n$ were included to guarantee that $\lim_{n\to\infty}g(n,\alpha)=1$, thus in accordance with $\lim_{n\to\infty} T_{\infty; \alpha}/T_{n; \alpha}=1$. Powers of $n^{-1/2}$ resemble the sample size factors in the terms of an Edgeworth series. The powers of $\alpha^{-1/2}$ were experimentally found to be an appropriate specification for capturing the interactions with $n$. Upon available samples of the form $\{(n_j,\alpha_j,Y_j)\}_{j=1}^J$, $Y_j:=T_{\infty; \alpha_j}/T_{n_j; \alpha_j}$, model \eqref{eq:saturated_g_n_alpha} is estimated through weighted least squares, using the weight $\smash{w_j:=n_j^{-1/2}1_{\{0<\alpha_j\leq 0.25\}}}$ for the $j$-th observation to give heavier weight to the approximation error on lower sample sizes. This specific choice is driven by the order of the standard errors of sample quantiles \citep[see, e.g.,][Section 2.3.3]{Serfling1980}. The indicator in $w_j$ reflects our interest in only stabilizing the upper tail of the test statistic $T_n$, hence disregarding those quantiles associated with non-rejections of the test based on $T_n$. The data required for fitting \eqref{eq:saturated_g_n_alpha} are to be produced under the (nowadays fairly realistic) assumption that it is feasible to simulate a large number of statistics $T_n$ under the null hypothesis and for varying sample sizes. Specifically, we have carried out the following simulation for the test statistics $D_n$, $W^2_n$, $V_n$, $U_n^2$, and $A_n^2$. 
We produced $M=10^7$ Monte Carlo random samples of $T_n$, for each of the sample sizes $n$ in the set $\mathcal{N}:=\{5,\ldots,100, 102,\ldots, 200, 204, \ldots,300,308,\ldots,404,\allowbreak420,\allowbreak\ldots,500\}$. We then condensed these statistics as the quantiles $\{T_{n_j;\alpha_j}: n_j\in\mathcal{N},\,\alpha_j\in\mathcal{A}\}$, for $\mathcal{A}:=\{a/A:a=1,\ldots,A\}$, $A=10^3$. The asymptotic $\alpha$-quantiles $\{T_{\infty; \alpha_j}:\alpha_j\in\mathcal{A}\}$ were computed from the statistics' asymptotic null distributions, as those were readily available in the literature. The generated sample is therefore $\{(n_j,\alpha_j,Y_j)\}_{j=1}^J$, $J=\#\mathcal{N}\times A$. Clearly, this is a computationally intensive process, although it only needs to be done once per kind of test statistic. The procedure is analogous for other one-sample test statistics that are feasible to simulate under the simple null hypothesis at hand (if the limiting distribution is not available or tractable, a sufficiently large sample size $n$ could be used to approximate $T_{\infty; \alpha}$ by $T_{n; \alpha}$ by Monte Carlo). Using the sample $\{(n_j,\alpha_j,Y_j)\}_{j=1}^J$, we advocate the use of stepwise regression for performing model selection within \eqref{eq:saturated_g_n_alpha}. Specifically, we performed a forward-backward search for minimizing the Bayesian Information Criterion (BIC) on the space of models contained in \eqref{eq:saturated_g_n_alpha}. The search was initiated with the model featuring only the predictors used in Stephens' modifications, i.e., $n^{-1/2}$ and $n^{-1}$. To attain simpler models than the BIC-optimal one, a final step was implemented to iteratively drop, one by one, the predictors that contributed the least to the adjusted $R^2$ of the resulting model. The threshold was established so as to keep only three final terms, with each removed predictor decreasing the adjusted $R^2$ by less than $0.15\%$; the adjusted $R^2$, averaged across the five statistics, was larger than $0.96$. \begin{table}[h!] \iffigstabs \centering \small \begin{tabular}{>{\centering\arraybackslash}m{0.5cm} >{\centering\arraybackslash}m{5cm} >{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{1cm}} \toprule
$T_n$ & $T_n^{\ast}(\alpha)$ & $T_{\infty;0.15}$ & $T_{\infty;0.1}$ & $T_{\infty;0.05}$ & $T_{\infty;0.025}$ & $T_{\infty;0.01}$\\ \midrule
$D_n$ & $D_n\left(1 + \frac{0.1575}{\sqrt{n}} + \frac{0.0192}{n\sqrt{\alpha}} - \frac{0.0051}{\sqrt{n\alpha}}\right)$ & 1.1380 & 1.2239 & 1.3581 & 1.4803 & 1.6277 \\ [3ex]
$W_n^2$ & $W_n^2\left(1 - \frac{0.1651}{n} + \frac{0.0749}{n\sqrt{\alpha}} - \frac{0.0014}{n \alpha}\right)$ & 0.2841 & 0.3473 & 0.4613 & 0.5806 & 0.7435 \\ [3ex]
$V_n$ & $V_n\left(1 + \frac{0.2330}{\sqrt{n}} + \frac{0.0276}{n\sqrt{\alpha}} - \frac{0.0068}{\sqrt{n\alpha}}\right)$ & 1.5370 & 1.6196 & 1.7473 & 1.8625 & 2.0010 \\ [3ex]
$U_n^2$ & $U_n^2\left(1 - \frac{0.1505}{n} + \frac{0.0917}{n\sqrt{\alpha}} - \frac{0.0018}{n \alpha}\right)$ & 0.1313 & 0.1518 & 0.1869 & 0.2220 & 0.2685 \\ [3ex]
$A_n^2$ & $A_n^2\left(1 + \frac{0.0360}{n} - \frac{0.0234}{n \sqrt{\alpha}} + \frac{0.0006}{n \alpha}\right)$ & 1.6212 & 1.9331 & 2.4922 & 3.0775 & 3.8784 \\ [3ex] \bottomrule
\end{tabular} \fi \caption{\small Modified statistics for sample size $n$ and significance level $\alpha$. Modified forms are valid for $n \ge 5$ and $0<\alpha \leq 0.25$. 
$\mathcal{H}_0$ is rejected at significance level $\alpha$ if $T_{n}^{\ast}(\alpha)>T_{\infty;\alpha}$.}\label{table:modified-stats} \end{table} The resulting modified forms for $D_n$, $W_n^2$, $V_n$, $U_n^2$, and $A_n^2$ are collected in Table \ref{table:modified-stats}. All of the transformations have three correcting terms, one dependent on $n$ and the other two related to both $n$ and $\alpha$, with $(n\sqrt{\alpha})^{-1}$ being a factor common to the five statistics. Interestingly, the same correction terms are present within the groups of supremum- and quadratic-norm statistics, as well as in the pairs of linear and circular variants. These forms are valid for $n \ge 5$, which anecdotally gives a minor improvement over Stephens' forms, valid for $n\ge 8$. The steps to use them with the upper tail test for $\mathcal{H}_0$ that is based on $T_n$ and that is carried out at the significance level $\alpha$ are as follows: \begin{enumerate}[label=(\textit{\roman*}),ref=(\textit{\roman*})] \item Compute the test statistic $T_n$ using its original form. \item Calculate the corresponding modified test statistic, $T_n^{\ast}(\alpha)$, in Table \ref{table:modified-stats}. \item Retrieve an asymptotic critical value $T_{\infty;\alpha}$ in Table \ref{table:modified-stats}. If $T_n^{\ast}(\alpha)>T_{\infty;\alpha}$, reject $\mathcal{H}_0$ at significance level $\alpha$. \end{enumerate} The transformed statistics can also be used to obtain approximations to exact $p$-values, provided the asymptotic quantiles $\mathcal{T}_\infty:=\{T_{\infty;\alpha_j}:\alpha_j\in\mathcal{A}\}$ have been precomputed. This is done in two steps. First, $p$-value bounds $[\alpha_1,\alpha_2]$ are obtained from the grid $\mathcal{A}$ such that $T_n^*(\alpha_1)\leq T_{\infty;\alpha_1}$ and $T_n^*(\alpha_2)> T_{\infty;\alpha_2}$. Once these discrete bounds for the $p$-value are available, a linear interpolation is applied to define $t_\infty(\alpha):=T_{\infty;\alpha_1}+(T_{\infty;\alpha_2}-T_{\infty;\alpha_1})(\alpha-\alpha_1)/(\alpha_2-\alpha_1)$ for $\alpha\in[\alpha_1,\alpha_2]$ and then the root $\alpha^*\in[\alpha_1,\alpha_2]$ of \begin{align} T_n^*(\alpha^*)=t_\infty(\alpha^*)\label{eq:root} \end{align} is obtained by Newton--Raphson (NR). The approximate $p$-value is then set to $\alpha^*$. If $\alpha_1 \ge \alpha_{\max}$, with $\alpha_{\max}=0.25$ being the maximum element in $\mathcal{A}$ for which the transformation has been estimated, then $p\text{-value}=\alpha_{\max}$ is returned. The following algorithm summarizes this process. 
\begin{algorithm} \caption{\small $p$-value approximation using the $(n, \alpha)$-modification}\label{euclid} \label{alg:p-val} \small \begin{algorithmic}[1] \Function{pvalue\_approx}{$T_n$, $n$, $\mathcal{T}_\infty$, $\mathcal{A}$} \For{$j \textbf{ from } 1\textbf{ to }\#\mathcal{A}$} \State $T_{\mathrm{mod}, \alpha} \gets T_n^*(T_n, n, \mathcal{A}\left[j\right])$ \If {$T_{\mathrm{mod}, \alpha} > \mathcal{T}_\infty\left[j\right]$} \If {$j=1$} \State $\left(\alpha_1, \alpha_2\right) \gets \left(\mathcal{A}\left[j\right], \mathcal{A}\left[j+1\right]\right)$ \State $\left(T_{\infty;\alpha_1},T_{\infty;\alpha_2}\right) \gets \left(\mathcal{T}_\infty\left[j\right], \mathcal{T}_\infty\left[j+1\right]\right)$ \Else \State $\left(\alpha_1, \alpha_2\right) \gets \left(\mathcal{A}\left[j-1\right], \mathcal{A}\left[j\right]\right)$ \State $\left(T_{\infty;\alpha_1},T_{\infty;\alpha_2}\right) \gets \left(\mathcal{T}_\infty\left[j-1\right], \mathcal{T}_\infty\left[j\right]\right)$ \EndIf \State $\alpha^\ast \gets \operatorname{NR}(T_n^*(T_n, n, \alpha)-t_\infty(\alpha, \mathcal{T}_{\infty}, \alpha_1, \alpha_2))$ \State $\textbf{return }\alpha^\ast$ \EndIf \EndFor \State $\textbf{return } 0.25$ \EndFunction \end{algorithmic} \end{algorithm} When there is no $\alpha_1$ in $\mathcal{A}$ such that $T_n^*(\alpha_1)\leq T_{\infty;\alpha_1}$, the $p$-value is set as the nonnegative extrapolation of the root in \eqref{eq:root}, with $\alpha_1$ and $\alpha_2$ being the two lowest elements in $\mathcal{A}$. \subsection{Simulation study} \label{section:simulations} \begin{figure}[htpb!] \iffigstabs \centering \vspace{-3.25em} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig2a.pdf} \caption{\small $D_n$\label{fig:Dn}} \end{subfigure}\vspace{-2.5em} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig2b.pdf} \caption{\small $V_n$} \end{subfigure} \vspace{-2.5em} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig2c.pdf} \caption{\small $W^2_n$\label{fig:Wn2}} \end{subfigure} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig2d.pdf} \caption{\small $U^2_n$} \end{subfigure} \vspace{-0.25em} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig2e.pdf} \caption{\small $A^2_n$\label{fig:An2}} \end{subfigure} \fi \caption{\small Relative error (in \%) $\lvert\alpha-\tilde{\alpha}\rvert/\alpha$ between the significance level $\alpha$ and $\tilde{\alpha}$, the empirical rejection rate using an approximated exact-$n$ critical value, averaged across different sample sizes $n$. The legend in Figure \ref{fig:Dn} details the approximation methods considered and applies to the rest of the panels, with different specific methods on Figures \ref{fig:Wn2} and \ref{fig:An2}. The gray shaded area corresponds to the $95\%$ confidence interval of the relative error when $\tilde{\alpha}$ is produced by the exact-$n$ critical value estimated by $M = 10^7$ Monte Carlo samples.} \label{fig:rejection-proportion-MC-error} \end{figure} We evaluate next the divergence of the exact-$n$ critical values of the test statistics $D_n$, $V_n$, $W^2_n$, $U^2_n$, and $A_n^2$ under $\mathcal{H}_0$ from their corresponding approximations given by: (a) Stephens' modified forms; (b) the particular approximation methods from Table \ref{tab:code-mod}; (c) Monte Carlo approximation with $10^4$ trials; and (d) our proposed transformations. 
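Before turning to the results, a minimal Python sketch of the $p$-value approximation is given below for $W_n^2$, using the modified form of Table \ref{table:modified-stats}. It is only a sketch of the procedure, not the implementation benchmarked in this section: the asymptotic quantile grid over $\mathcal{A}$ is assumed to be precomputed, a bracketing root-finder replaces Newton--Raphson purely for brevity, and the extrapolation case is simplified.
\begin{verbatim}
# Sketch of Algorithm 2 for the Cramer-von Mises statistic, using the modified
# form W_n^2*(alpha) of Table 4. T_inf (asymptotic upper quantiles over the
# increasing grid alpha_grid) is assumed precomputed; brentq replaces
# Newton-Raphson for brevity, and the extrapolation case is simplified.
import numpy as np
from scipy.optimize import brentq

def w2_mod(w2, n, alpha):
    # (n, alpha)-modification of W_n^2 (coefficients from Table 4).
    return w2 * (1 - 0.1651 / n + 0.0749 / (n * np.sqrt(alpha))
                 - 0.0014 / (n * alpha))

def p_value_w2(w2, n, alpha_grid, T_inf):
    for j, (a, t) in enumerate(zip(alpha_grid, T_inf)):
        if w2_mod(w2, n, a) > t:
            if j == 0:
                # Rejection even at the smallest alpha; the nonnegative
                # extrapolation discussed after Algorithm 2 is omitted here.
                return alpha_grid[0]
            a1, a2 = alpha_grid[j - 1], alpha_grid[j]
            t1, t2 = T_inf[j - 1], T_inf[j]
            # Linear interpolation of the asymptotic quantile curve.
            t_inf = lambda a: t1 + (t2 - t1) * (a - a1) / (a2 - a1)
            return brentq(lambda a: w2_mod(w2, n, a) - t_inf(a), a1, a2)
    return alpha_grid[-1]   # p-value capped at alpha_max = 0.25
\end{verbatim}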
Figure \ref{fig:rejection-proportion-MC-error} displays the relative errors for the rejection proportions generated by approximated critical values based on methods (a)--(d). These relative errors are defined as $\lvert\alpha-\tilde{\alpha}\rvert/\alpha$, where $\alpha$ is the significance level and $\tilde{\alpha}$ is the empirical rejection rate obtained with $M=10^7$ Monte Carlo samples when using an $\alpha$-critical value computed by each approximation method. The $M=10^7$ Monte Carlo samples under $\mathcal{H}_0$ were drawn for each of the sample sizes $n$ in $\mathcal{N}_{\text{test}}:=\{5, \ldots, 10, 20, \ldots, 50, 100, 200, 300\}$. The sample quantiles for the significance levels in $\mathcal{A}_{\text{test}}:= \{a/100:a=1, \ldots, 25\}$ were computed for each sample size and statistic. For the critical value approximations (a) and (d), critical values were computed by applying the corresponding inverse transformation from Table \ref{table:summary-table} to the asymptotic $\alpha$-critical value $T_{\infty;\alpha}$. Obtaining the critical values in (b) is straightforward using the functions \texttt{stats:::C\_pKolmogorov2x} \citep{RCoreTeam2021} for $D_n$, and \texttt{goftest::pCvM} and \texttt{goftest::pAD} \citep{Faraway2019} for $W^2_n$ and $A^2_n$, respectively. For the critical value approximation based on (c), the (random) relative error for each critical value was averaged over $10^3$ simulations to give an estimate of the average Monte Carlo relative error. Each panel in Figure \ref{fig:rejection-proportion-MC-error} shows the relative error along $\mathcal{A}_{\text{test}}$ averaged for three sets of sample sizes: $5 \leq n < 10$, $10 \leq n < 100$, and $n \geq 100$. Along $\mathcal{A}_{\text{test}}$, the average relative errors of our stabilizations are $0.5\%$, $0.3\%$, $0.5\%$, $0.3\%$, and $0.7\%$ for $D_n$, $W_n^2$, $U_n^2$, $A_n^2$, and $V_n$, respectively. The relative errors remain fairly stable for every significance value in $\mathcal{A}_{\text{test}}$ without significant differences between the slots of sample sizes analyzed. Compared to Stephens' stabilizations, our relative error is lower by a factor of $\times2$, $\times12$, $\times2$, $\times3$, and $\times4$ on average, respectively. The largest improvements are achieved for $\alpha \neq 0.05$, since Stephens' stabilizations were tuned for $\alpha=0.05$, and for sample sizes $n \leq 100$. This behavior is more obvious in $W_n^2$ and $U_n^2$, which are the statistics that, in Stephens' approach, use an additional prior step for stabilizing the quantile ratios. When comparing to the Monte Carlo approximation with $10^4$ samples, our relative error is lower for every significance level and sample size tested, and improves by $\times5$, $\times10$, $\times5$, $\times9$, and $\times4$ on average, respectively. As expected, the approximation methods that are specifically designed for each test statistic achieve the lowest relative errors. \begin{table}[h!] 
\iffigstabs \small \centering \begin{tabular}{>{\centering\arraybackslash}m{1cm} wr{0.6cm} wr{0.6cm} wr{0.6cm} wr{0.6cm} wr{0.6cm} wr{0.6cm} wr{0.6cm} wr{0.6cm} wr{0.6cm} wr{0.6cm}} \toprule \multirow{2}{*}{$\alpha$} & \multicolumn{10}{c}{$n$}\\ [1ex] & \multicolumn{1}{c}{$5$} & \multicolumn{1}{c}{$6$} & \multicolumn{1}{c}{$7$} & \multicolumn{1}{c}{$8$} & \multicolumn{1}{c}{$9$} & \multicolumn{1}{c}{$10$} & \multicolumn{1}{c}{$20$} & \multicolumn{1}{c}{$30$} & \multicolumn{1}{c}{$40$} & \multicolumn{1}{c}{$50$}\\ \midrule \multicolumn{11}{c}{$D_n$: \cite{Marsaglia2003} vs. Algorithm \ref{alg:p-val}} \\ [1ex] 0.01 & 2.48 & 2.56 & 3.04 & 2.96 & 3.00 & 3.23 & 7.09 & 10.74 & 17.08 & 23.38 \\ 0.02 & 2.28 & 2.28 & 2.40 & 2.75 & 2.80 & 2.85 & 4.80 & 9.62 & 12.04 & 17.39 \\ 0.05 & 1.61 & 1.97 & 1.90 & 1.87 & 1.90 & 2.29 & 3.06 & 5.94 & 7.29 & 10.50 \\ 0.10 & 1.22 & 1.21 & 1.44 & 1.43 & 1.48 & 1.49 & 2.24 & 3.33 & 4.17 & 6.05 \\ 0.15 & 1.02 & 0.96 & 0.98 & 1.13 & 1.15 & 1.19 & 1.42 & 2.74 & 3.45 & 3.67 \\ 0.25 & 0.68 & 0.71 & 0.70 & 0.71 & 0.70 & 0.81 & 1.04 & 1.48 & 1.82 & 2.64 \\ \midrule \multicolumn{11}{c}{$W^2_n$: \cite{Csorgo1996} vs. Algorithm \ref{alg:p-val}} \\ [1ex] 0.01 & 10.43 & 10.40 & 10.17 & 10.12 & 10.03 & 10.00 & 10.60 & 10.68 & 10.66 & 11.82 \\ 0.02 & 8.69 & 8.47 & 8.42 & 8.47 & 8.73 & 8.75 & 8.92 & 9.06 & 8.85 & 8.99 \\ 0.05 & 5.54 & 5.53 & 5.61 & 5.57 & 5.56 & 5.58 & 5.67 & 5.68 & 5.64 & 5.68 \\ 0.10 & 3.46 & 3.50 & 3.48 & 3.46 & 3.45 & 3.48 & 3.50 & 3.48 & 3.48 & 3.49 \\ 0.15 & 2.50 & 2.48 & 2.49 & 2.54 & 2.48 & 2.55 & 2.57 & 2.50 & 2.51 & 2.52 \\ 0.25 & 1.62 & 1.62 & 1.63 & 1.59 & 1.59 & 1.64 & 1.61 & 1.61 & 1.65 & 1.64 \\ \midrule \multicolumn{11}{c}{$A^2_n$: \cite{Marsaglia2004} vs. Algorithm \ref{alg:p-val}} \\ [1ex] 0.01 & 6.66 & 6.28 & 6.23 & 6.14 & 6.20 & 6.42 & 6.29 & 6.18 & 6.29 & 6.43 \\ 0.02 & 6.00 & 6.52 & 5.91 & 6.18 & 6.13 & 6.22 & 5.91 & 6.26 & 6.14 & 6.60 \\ 0.05 & 5.12 & 5.24 & 5.72 & 5.74 & 5.36 & 6.04 & 5.24 & 5.23 & 5.44 & 5.44 \\ 0.10 & 4.26 & 4.39 & 4.35 & 4.26 & 4.26 & 4.81 & 4.35 & 4.52 & 4.36 & 4.32 \\ 0.15 & 3.70 & 3.62 & 3.64 & 3.78 & 3.64 & 3.65 & 3.78 & 3.75 & 3.72 & 3.74 \\ 0.25 & 2.87 & 3.19 & 2.79 & 2.85 & 3.10 & 3.02 & 2.83 & 2.85 & 2.98 & 2.87 \\ \bottomrule \end{tabular} \fi \caption{\small Running time ratios between specific $p$-value approximation methods and our $p$-value approximation method (Algorithm \ref{alg:p-val}). Ratios are among the median of $10^3$ evaluations for each pair $(n, \alpha)$. The average of the median running times of Algorithm \ref{alg:p-val} are $3.65 \mu s$, $225 \mu s$ (for R version, $4.5 \mu s$ for C++ version), and $3 \mu s$ for $D_n$, $W^2_n$, and $A^2_n$, respectively.} \label{table:exec-time} \end{table} Table \ref{table:exec-time} presents a comparison of the running times between our $p$-value approximation (Algorithm \ref{alg:p-val}) and the already implemented $p$-value approximation methods for $D_n$, $W^2_n$, and $A^2_n$ described in Table \ref{tab:code-mod}. Our method is shown to be $\times 3.8$, $\times 5.4$, and $\times 4.8$ faster than \cite{Marsaglia2003}, \cite{Csorgo1996}, and \cite{Marsaglia2004}, respectively. These methods are already implemented in C++, except for \cite{Csorgo1996} which is in R. Hence, C++ and R versions implementing Algorithm \ref{alg:p-val} were developed for each statistic to allow a fair comparison. 
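As a schematic illustration (not the actual benchmarking code) of how these running-time comparisons can be set up, the following R sketch times \texttt{goftest::pCvM} against the stabilized $p$-value for $W^2_n$ computed with the \texttt{pvalue\_approx} sketch given after Algorithm \ref{alg:p-val}; \texttt{W2\_mod} uses the coefficients later collected in Table \ref{table:summary-table}, and the statistic value is illustrative.
\begin{verbatim}
library(goftest)
library(microbenchmark)

# (n, alpha)-modification of W_n^2 (coefficients from the summary table)
W2_mod <- function(W2, n, alpha)
  W2 * (1 - 0.1651 / n + 0.0749 / (n * sqrt(alpha)) - 0.0014 / (n * alpha))

alphas   <- seq(0.001, 0.25, by = 0.001)        # grid A up to alpha_max = 0.25
crit_inf <- sapply(alphas, function(a)          # asymptotic upper quantiles of
  uniroot(function(x) pCvM(x) - (1 - a),        # W^2, obtained by inverting pCvM
          c(0.02, 3))$root)

stat <- 0.35; n <- 20
bench <- microbenchmark(
  specific   = 1 - pCvM(stat, n = n),           # Csorgo and Faraway (1996)
  stabilized = pvalue_approx(stat, n, crit_inf, alphas, Tn_mod = W2_mod),
  times = 1e3
)
summary(bench, unit = "us")[, c("expr", "median")]
\end{verbatim}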
In addition, Table \ref{table:exec-timeMC} compares the running times between the $p$-value approximation based on Algorithm \ref{alg:p-val} and a Monte Carlo $p$-value approximation based on $10^4$ trials, which shows that our method is $\times 75 \cdot 10^4$, $\times 58 \cdot 10^4$, and $\times 93 \cdot 10^4$ faster. Monte Carlo approximation was implemented in R code with calls to C++-coded statistics (the most time-consuming part), and the C++ version of Algorithm \ref{alg:p-val} was used. All comparisons were carried out using \texttt{microbenchmark} package \citep{Mersmann2019}. In order to compute the median running time of each function for a given sample size $n$ and significance level $\alpha$, $10^3$ evaluations of the compiled functions were run after $10$ warm-up runs using the same machine, a regular desktop computer with a $3.6$GHz processor. In all cases, the computation of the original statistic $T_n$ was excluded from the timings. R and C++ integration was done with the \texttt{Rcpp} package \citep{Eddelbuettel2011}. \begin{table}[!h] \iffigstabs \small \centering \begin{tabular}{>{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm} >{\centering\arraybackslash}m{0.6cm}} \toprule \multirow{2}{*}{$\alpha$} & \multicolumn{10}{c}{$n$}\\ [1ex] & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $20$ & $30$ & $40$ & $50$\\ \midrule \multicolumn{11}{c}{$D_n$: Monte Carlo vs. Algorithm \ref{alg:p-val}} \\ [1ex] $0.05$ & 14 & 16 & 19 & 23 & 16 & 28 & 69 & 118 & 182 & 261 \\ \midrule \multicolumn{11}{c}{$W^2_n$: Monte Carlo vs. Algorithm \ref{alg:p-val}} \\ [1ex] $0.05$ & 10 & 12 & 14 & 13 & 17 & 21 & 51 & 94 & 146 & 203 \\ \midrule \multicolumn{11}{c}{$A^2_n$: Monte Carlo vs. Algorithm \ref{alg:p-val}} \\ [1ex] $0.05$ & 15 & 19 & 22 & 26 & 32 & 33 & 80 & 150 & 227 & 325 \\ \bottomrule \end{tabular} \fi \caption{\small Running time ratios, in scale $\times 10^4$, between a $p$-value Monte Carlo approximation based on $10^4$ trials and our $p$-value approximation method (Algorithm \ref{alg:p-val}). Ratios are among the median of $10^3$ evaluations for each pair $(n, \alpha)$. The average of the median running times for the Monte Carlo approximation are $2.34 s$, $2.35 s$, and $2.35 s$ for $D_n$, $W^2_n$, and $A^2_n$, respectively.} \label{table:exec-timeMC} \end{table} The empirical results show that our stabilized statistics forms give more accurate results than those by Stephens', while still retaining the simplicity of the latter. When it comes to the Monte Carlo approximation (with $10^4$ trials), relative errors on the empirical rejection rates are lowered by a factor that varies from $\times4$ to $\times10$, depending on the statistic. In addition, Table \ref{table:exec-timeMC} shows how our stabilization algorithm outperforms Monte Carlo execution times. Part of these improvements could be attributed to the R-C++ mix, as opposed to pure C++. Yet, given the massive difference in timing orders, we regard this effect as marginal. Arguably, for $D_n$, $W_n^2$, and $A_n^2$, the tailored approximation methods are to be preferred due to their better accuracy. 
Even in this highly competitive setting, our stabilizations still offer comparative advantages, as Figure \ref{fig:rejection-proportion-MC-error} shows that their average relative error is $<0.7\%$, which suffices for most practical applications, while Table \ref{table:exec-time} shows an improvement of $\times5$ in running times with respect to the specific methods. \section{Stabilization of parameter-dependent statistics} \label{section:parameter-modification} This section gives an extension of the $(n, \alpha)$-transformations introduced in Section \ref{section:n-alpha-mod} that is designed to stabilize the exact distributions of statistics that depend on a (known) parameter. The transformation is instantiated with several statistics for testing uniformity on $\mathbb{S}^{p-1}$, $p\geq2$ being the statistic parameter. \subsection{Projected-ecdf test statistics} \label{section:proj-based-unif-tests} \cite{Garcia-Portugues2020b} proposed a class of test statistics to evaluate the null hypothesis of uniformity of an iid sample $\mathbf{X}_1,\ldots,\mathbf{X}_n$ on $\mathbb{S}^{p-1}$. Projected-ecdf statistics compute the weighted quadratic discrepancy between $F_{n,\boldsymbol{\gamma}}$, the ecdf of $\boldsymbol{\gamma}'\mathbf{X}_1,\ldots,\boldsymbol{\gamma}'\mathbf{X}_n$ for $\boldsymbol{\gamma}\in\mathbb{S}^{p-1}$, and $F_p$, the cdf of the random variable $\boldsymbol{\gamma}'\mathbf{X}$ when $\mathbf{X}\sim\operatorname{Unif}(\mathbb{S}^{p-1})$. The weighted quadratic discrepancies are integrated over all possible directions $\boldsymbol{\gamma}\in\mathbb{S}^{p-1}$, a convenient specification of the projected-ecdf statistics being \begin{align*} P_{n, p}^w := n\int_{\mathbb{S}^{p-1}}\bigg[\int_{-1}^{1}\left(F_{n, \boldsymbol{\gamma}}(x) - F_{p}(x)\right)^2 w(F_p(x))\,\mathrm{d}F_{p}(x)\bigg]\,\mathrm{d}\boldsymbol{\gamma}, \end{align*} where $w:[0, 1]\rightarrow\mathbb{R}$ is a certain weight function and the cdf $F_p$ is that of the random variable $T$, with $T^2\sim\mathrm{Beta}(1/2,(p-1)/2)$. The weights $w\equiv1$ and $w(u)=1/(u(1-u))$ result in the Projected Cramér--von Mises statistic, $P_{n, p}^{\text{CvM}}$, and the Projected Anderson--Darling statistic, $P_{n, p}^{\text{AD}}$, respectively. The test based on $P_{n, p}^{\text{CvM}}$ happens to be an extension of the Watson test to $\mathbb{S}^{p-1}$, $p\geq2$, since $P_{n, 2}^{\text{CvM}} = U^2_n/2$. Moreover, the test based on $P_{n, 3}^{\text{CvM}}$ is equivalent to the chordal-based test on $\mathbb{S}^{2}$ by \cite{Bakshaev2010}, whose statistic for $p\geq 2$ is \begin{align*} N_{n,p}:=n\mathbb{E}_{\mathcal{H}_0}\left[\|\mathbf{X}_1-\mathbf{X}_2\|\right]-\frac{1}{n}\sum_{i,j=1}^n \|\mathbf{X}_i-\mathbf{X}_j\|. \end{align*} The statistic $P_{n, p}^{\text{AD}}$ represents the first instance of the Anderson--Darling statistic in the context of directional data. In particular, $P_{n, 2}^{\text{AD}}$ can be regarded as the circular variant of $A_n^2$, just as $U_n^2$ is the circular variant of $W_n^2$. Asymptotic distributions and computational formulae for $P_{n, p}^{\text{CvM}}$ and $P_{n, p}^{\text{AD}}$ are provided in \cite{Garcia-Portugues2020b}, while the \texttt{sphunif} R package \citep{Garcia-Portugues2020c} provides implementations of $P_{n, p}^{\text{CvM}}$, $P_{n, p}^{\text{AD}}$, and $N_{n,p}$ for all ${p\geq2}$. \subsection{Stabilization of projected-ecdf statistics} \label{section:modification-projected-ecdf} Let $T_{n, p}$ be a statistic depending on a known parameter $p$, which we assume to be in $\mathbb{N}$.
From expression \eqref{eq:f_n_alpha}, the ratios $T_{\infty, p; \alpha} / T_{n, p; \alpha}$ can be modeled as a function $g:\mathbb{N}\times\mathbb{N}\times(0,1)\to\mathbb{R}$ of $(n,p,\alpha)$. Hence, the modified version of the statistic $T_{n, p}$ is defined as \begin{align*} T_{n,p}^{\ast}(\alpha) := T_{n, p} \cdot g(n, p,\alpha). \end{align*} As in expression \eqref{eq:reg}, the stabilization of $T_{n,p}$ can be approached as a regression problem, now with predictors $(n, p, \alpha)$, response $Y:=T_{\infty, p; \alpha} / T_{n, p; \alpha}$, and unknown regression function $g$. The connection between $P_{n, 2}^{\text{CvM}}$ and $U_{n}^2$ implies the stabilized form of $P_{n, 2}^{\text{CvM}}$ to have the same set of predictors based on $(n, \alpha)$ as the Watson statistic already presented in Table \ref{table:modified-stats}: $\mathcal{R} := \{1/n, 1/(n\sqrt{\alpha}), 1/(n\alpha)\}$. An additional reflection suggests the adequacy of choosing $\mathcal{R}$ for stabilizing $P_{n, p}^{\text{CvM}}$, also when $p\geq 3$, due to its appearance in all the transformations for quadratic-ecdf statistics in Table \ref{table:modified-stats} and the quadratic nature of $P_{n, p}^{\text{CvM}}$. For different particular values of $p\geq 2$, it was noted that, if regression models were fitted to the ratios $P_{\infty, p; \alpha}^{\text{CvM}}/P_{n, p; \alpha}^{\text{CvM}}$, the coefficients fitted for each predictor $r \in \mathcal{R}$ could be modeled as a smooth function of $p$ denoted as $q_r:\mathbb{N}\to\mathbb{R}$. Unsurprisingly, given its similarity to $P_{n, p}^{\text{CvM}}$, the same considerations also hold for $P_{n, p}^{\text{AD}}$. Moreover, the statistic $N_{n,p}$ can also be stabilized through $\mathcal{R}$ and $q_r$, a fact explained by the closeness between $P_{n, p}^{\text{CvM}}$ and $N_{n,p}$ when $p\neq 3$ and its equivalence when $p=3$. Empirical investigations suggested the following saturated model for $q_r$, for each $r \in \mathcal{R}$: \begin{align*} q_r(p) = \frac{\beta_{r,1}}{\sqrt{p}} + \frac{\beta_{r,2}}{p}. \end{align*} Thus, the resulting saturated model for $g$ is set as \begin{align} g(n, p, \alpha) = 1 + q_{1/n}(p)\cdot\frac{1}{n} + q_{1/(n\alpha)}(p)\cdot\frac{1}{n\alpha} + q_{1/(n\sqrt{\alpha})}(p)\cdot\frac{1}{n\sqrt{\alpha}}. \label{eq:saturated_g_n_p_alpha} \end{align} Once training samples of the form $\{(n_j,\alpha_j,p_j,Y_j)\}_{j=1}^J$, $Y_j:=T_{\infty, p_j; \alpha_j} /\allowbreak T_{n_j, p_j; \alpha_j}$, are available, model \eqref{eq:saturated_g_n_p_alpha} is estimated following the same methodology described in Section \ref{section:n-alpha-mod}. For each of the three test statistics $P_{n,p}^{\text{CvM}}$, $P_{n,p}^{\text{AD}}$, and $N_{n,p}$, we obtained $M = 10^7$ Monte Carlo random samples for each sample size $n$ in $\mathcal{N}:=\{5,\ldots,100, 102,\ldots, 200, 204, \ldots,\allowbreak300,308,\ldots,\allowbreak404,420,\allowbreak\ldots,500\}$ and for each dimension $p$ in $\mathcal{P}:=\{2,\ldots,\allowbreak11,21,31,41,\allowbreak51,61,71,\allowbreak81,91,\allowbreak101,151,201,251,\allowbreak301\}$. We then summarized these statistics as the quantiles $\{T_{n_j,p_j;\alpha_j}: n_j\in\mathcal{N},\,p_j\in\mathcal{P},\,\alpha_j\in\mathcal{A}\}$ for $\mathcal{A}:=\{a/A:a=1,\ldots,A\}$, $A=10^3$. The asymptotic $\alpha$-quantiles $T_{\infty, p; \alpha}$ were approximated through $T_{500,p;\alpha}$ due to the accuracy limitations on inverting the asymptotic cdfs of the three statistics for large dimensions. 
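Evaluating the fitted modification is immediate in practice; the following minimal R sketch does so for $P_{n,p}^{\text{CvM}}$, using the coefficients collected below in Table \ref{table:modified-forms-sph-stats} and an asymptotic critical value from Table \ref{table:modified-sph-stats}, with an illustrative value of the statistic.
\begin{verbatim}
# Saturated model for g(n, p, alpha), with the P_CvM coefficients fitted below
q_1n          <- function(p)  0.1130 / sqrt(p) - 0.5415 / p
q_1nalpha     <- function(p) -0.0031 / sqrt(p)
q_1nsqrtalpha <- function(p)  0.1438 / sqrt(p)

g <- function(n, p, alpha)
  1 + q_1n(p) / n + q_1nalpha(p) / (n * alpha) +
    q_1nsqrtalpha(p) / (n * sqrt(alpha))

# Modified statistic and upper-tail test at level alpha
Pcvm_mod <- function(Pcvm, n, p, alpha) Pcvm * g(n, p, alpha)
Pcvm_mod(0.35, n = 30, p = 3, alpha = 0.05) > 0.3288  # 0.3288 = P_infty,3;0.05
\end{verbatim}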
Table \ref{table:modified-sph-stats} lists the approximated $T_{\infty,p;\alpha}$ for the first ten dimensions. \begin{table}[h!] \iffigstabs \centering \small \begin{tabular}{>{\centering\arraybackslash}m{1.5cm}| >{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm} >{\centering\arraybackslash}m{0.8cm}} \toprule & & \multicolumn{10}{c}{$p$}\\ [1ex] Critical value & $\alpha$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ \\ \midrule \multirow{5}{*}{$P_{\infty,p;\alpha}^{\text{CvM}}$} & 0.10 & 0.3035 & 0.2768 & 0.2606 & 0.2500 & 0.2421 & 0.2361 & 0.2312 & 0.2272 & 0.2239 & 0.2210 \\ [1ex] & 0.05 & 0.3735 & 0.3288 & 0.3027 & 0.2858 & 0.2735 & 0.2641 & 0.2568 & 0.2508 & 0.2458 & 0.2416 \\ [1ex] & 0.01 & 0.5358 & 0.4461 & 0.3960 & 0.3638 & 0.3413 & 0.3244 & 0.3115 & 0.3008 & 0.2922 & 0.2849 \\ [1ex] \midrule \multirow{5}{*}{$P_{\infty,p;\alpha}^{\text{AD}}$} & 0.10 & 1.6871 & 1.5604 & 1.4816 & 1.4279 & 1.3883 & 1.3576 & 1.3327 & 1.3124 & 1.2957 & 1.2809 \\ [1ex] & 0.05 & 2.0293 & 1.8214 & 1.6951 & 1.6106 & 1.5494 & 1.5023 & 1.4651 & 1.4347 & 1.4092 & 1.3875 \\ [1ex] & 0.01 & 2.8197 & 2.4096 & 2.1679 & 2.0090 & 1.8969 & 1.8126 & 1.7471 & 1.6931 & 1.6493 & 1.6121 \\ [1ex] \midrule \multirow{5}{*}{$N_{\infty,p;\alpha}$} & 0.10 & 2.4034 & 2.2141 & 2.1003 & 2.0231 & 1.9673 & 1.9238 & 1.8887 & 1.8601 & 1.8367 & 1.8158 \\ [1ex] & 0.05 & 2.9906 & 2.6305 & 2.4320 & 2.3034 & 2.2119 & 2.1423 & 2.0879 & 2.0437 & 2.0067 & 1.9752 \\ [1ex] & 0.01 & 4.3495 & 3.5687 & 3.1669 & 2.9136 & 2.7402 & 2.6112 & 2.5124 & 2.4314 & 2.3661 & 2.3108 \\ [1ex] \bottomrule \end{tabular} \fi \caption{\small Asymptotic critical values for modified uniformity statistics with dimension $p$, sample size $n$, and significance level $\alpha$.} \label{table:modified-sph-stats} \end{table} \begin{table}[h!] \iffigstabs \centering \small \begin{tabular}{>{\centering\arraybackslash}m{1.5cm} >{\centering\arraybackslash}m{2.3cm} >{\centering\arraybackslash}m{2.3cm} >{\centering\arraybackslash}m{2.3cm}} \toprule \multirow{2}{*}{$T_{n,p}$} & \multicolumn{3}{c}{$T_{n, p} \left(1 + q_{1/n}\cdot\frac{1}{n} + q_{1/(n\alpha)}\cdot\frac{1}{n\alpha} + q_{1/(n\sqrt{\alpha})}\cdot\frac{1}{n\sqrt{\alpha}}\right)$}\\ [1ex] & $q_{1/n}$ & $q_{1/(n\alpha)}$ & $q_{1/(n\sqrt{\alpha})}$\\ \midrule $P_{n,p}^{\text{CvM}}$ & $\frac{0.1130}{\sqrt{p}} - \frac{0.5415}{p}$ & $- \frac{0.0031}{\sqrt{p}}$ & $\frac{0.1438}{\sqrt{p}}$ \\ [3ex] $P_{n,p}^{\text{AD}}$ & $\frac{0.0978}{\sqrt{p}} -\frac{0.3596}{p}$ & $- \frac{0.0025}{\sqrt{p}}$ & $\frac{0.1126}{\sqrt{p}}$ \\ [3ex] $N_{n,p}$ & $\frac{0.1189}{\sqrt{p}} - \frac{0.5838}{p}$ & $- \frac{0.0030}{\sqrt{p}}$ & $\frac{0.1210}{\sqrt{p}} + \frac{0.0385}{p}$ \\ [1.5ex] \bottomrule \end{tabular} \fi \caption{\small Modified uniformity statistics for dimension $p$, sample size $n$, and significance level $\alpha$. Modified forms are valid for $2\leq p \leq 300$, $n \ge 5$, and $\alpha \leq 0.25$. 
$\mathcal{H}_0$ is rejected at significance level $\alpha$ if $T_{n,p}^{\ast}(\alpha)>T_{\infty,p;\alpha}$, where $T_{\infty,p;\alpha}$ is given in Table \ref{table:modified-sph-stats} for $p=2,\ldots,11$.} \label{table:modified-forms-sph-stats} \end{table} The resulting modified forms for $P_{n,p}^{\text{CvM}}$, $P_{n,p}^{\text{AD}}$, and $N_{n,p}$ are presented in Table \ref{table:modified-forms-sph-stats}, where the fitted $q_r$ is shown for each predictor $r\in \mathcal{R}$. An algorithm similar to Algorithm \ref{alg:p-val} for computing an approximated $p$-value has been implemented for these statistics, with the only difference being that the modified statistic function in lines $3$ and $11$ is the corresponding dimension-dependent version, which also includes the parameter $p$ as an input. \subsection{Simulation study} \label{section:simulations2} In the same manner as in Section \ref{section:simulations}, the empirical stabilization of the modified forms of the projected-ecdf test statistics is investigated (Figure \ref{fig:error-projbased-statistics}) in terms of the relative error between the significance level and the empirical rejection rate of the $T_{n,p}^\ast(\alpha)$-test for sample sizes $n \in \mathcal{N}_{\text{test}}$ and dimensions $p \in \mathcal{P}_{\text{test}}$, where $\mathcal{N}_{\text{test}}$ was defined in Section \ref{section:simulations} and $\mathcal{P}_{\text{test}} := \{2, \ldots, 11, 21, 51, 101, 151,\allowbreak 201,\allowbreak 301\}$. As for most test statistics that have not been studied in depth, Monte Carlo is the only method readily available to approximate the exact-$n$ $p$-values of $P_{n,p}^{\text{CvM}}$, $P_{n,p}^{\text{AD}}$, and $N_{n,p}$. Figure \ref{fig:error-projbased-statistics} shows an average improvement of our stabilizations' accuracy over Monte Carlo approximations (using $10^4$ trials) of $\times3$, $\times4$, and $\times4$ for each of the three statistics, respectively. We point out the steadiness of our relative errors regardless of the significance level and the dimension $p$ (except for $\alpha = 0.01$, whose error increases for large $p$'s); on average, they are $1.3\%$, $0.9\%$, and $1\%$, respectively. In almost all circumstances, our relative errors are largely below those obtained by Monte Carlo (except for $\alpha=0.25$ when $p>10$ in $P_{n,p}^{\text{CvM}}$ and $N_{n,p}$). \begin{figure}[h!] \iffigstabs \centering \begin{subfigure}[t]{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig3a.pdf} \caption{\small $P_{n,p}^{\text{CvM}}$\label{fig:CvM}} \end{subfigure} \vspace{-1.5em} \begin{subfigure}[t]{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig3b.pdf} \caption{\small $P_{n,p}^{\text{AD}}$} \end{subfigure} \begin{subfigure}[t]{0.44\textwidth} \includegraphics[width=\linewidth]{img/Fig3c.pdf} \caption{\small $N_{n,p}$} \end{subfigure} \fi \caption{\small Relative error (in \%) $\lvert\alpha-\tilde{\alpha}\rvert/\alpha$ between the significance level $\alpha$ and $\tilde{\alpha}$, the empirical rejection rate using an approximated exact-$n$ critical value, averaged over $5\leq n \leq 300$. For the Monte Carlo approximation method, a regression fit is shown for each significance level $\alpha$ to show that there is no trend in the error with respect to $p$.
The legend in Figure \ref{fig:CvM} details the approximation methods considered and significance levels, and applies to the rest of panels.} \label{fig:error-projbased-statistics} \end{figure} We conclude this section by summarizing in Table \ref{table:summary-table} a comparison of the modified forms found by \cite{Stephens1970} and our results, for each of the classical ecdf-based statistics, and their corresponding versions for circular data, along with the circular particularizations of the projected-ecdf statistics. We emphasize the simplicity of the formulae in both approaches, with the mean relative error (MRE) being reduced for the second by $\times2$ for $D_n$ and $U_n^2$, by $\times9$ for $W_n^2$, and by $\times4$ for $A_n^2$ and $V_n$. The stabilizations for the projected-ecdf statistics are such $\text{MRE}<0.9\%$ for the circular case, which aligns with the results specifically attained for $U_n^2$ and $P_{n,2}^{\text{AD}}$, and supports the convenience of the extension proposed in this section for the $(n, \alpha)$-stabilization. \begin{table}[h!] \iffigstabs \centering \small \begin{tabular}{>{\centering\arraybackslash}p{1.8cm}| >{\centering\arraybackslash}p{4cm} >{\centering\arraybackslash}p{1cm} | >{\centering\arraybackslash}p{5.2cm} >{\centering\arraybackslash}p{1cm}} \toprule $T_n$ & Stephens' $T_n^{\ast}$ & MRE & $T_n^{\ast}(\alpha)$ & MRE \\ \midrule $D_n$ & $D_n\left(1 + \frac{0.12}{\sqrt{n}} + \frac{0.11}{n}\right)$ & $1.44\%$ & $D_n\left(1 + \frac{0.1575}{\sqrt{n}} + \frac{0.0192}{n\sqrt{\alpha}} - \frac{0.0051}{\sqrt{n\alpha}}\right)$ & $0.63\%$\\ [3ex] $W_n^2$ & $\left(W_n^2-\frac{0.4}{n}+\frac{0.6}{n^2}\right)\left(1 + \frac{1}{n}\right)$ & $3.28\%$ & $W_n^2\left(1 - \frac{0.1651}{n} + \frac{0.0749}{n\sqrt{\alpha}} - \frac{0.0014}{n \alpha}\right)$ & $0.36\%$ \\ [3ex] $A_n^2$ & $A_n^2$ $(\ssymbol{1})$ & $1.42\%$ & $A_n^2\left(1 + \frac{0.0360}{n} - \frac{0.0234}{n \sqrt{\alpha}} + \frac{0.0006}{n \alpha}\right)$ & 0.38\%\\ [3ex] $V_n$ & $V_n\left(1 + \frac{0.155}{\sqrt{n}} + \frac{0.24}{n}\right)$ & $3.40\%$ & $V_n\left(1 + \frac{0.2330}{\sqrt{n}} + \frac{0.0276}{n\sqrt{\alpha}} - \frac{0.0068}{\sqrt{n\alpha}}\right)$ & $0.85\%$ \\ [3ex] $U_n^2 \equiv P^{\text{CvM}}_{n, 2}$ & $\left(U_n^2-\frac{0.1}{n}+\frac{0.1}{n^2}\right)\left(1 + \frac{0.8}{n}\right)$ & $1.62\%$ & $U_n^2\left(1 - \frac{0.1505}{n} + \frac{0.0917}{n\sqrt{\alpha}} - \frac{0.0018}{n \alpha}\right)$ & $0.63\%$ \\ [3ex] & $-$ & $-$ & $P^{\text{CvM}}_{n, 2}\left(1 - \frac{0.1908}{n} + \frac{0.1017}{n\sqrt{\alpha}} - \frac{0.0022}{n\alpha}\right)$ & $0.88\%$ \\ [3ex] $P^{\text{AD}}_{n, 2}$ $(\ssymbol{2})$ & $-$ & $-$ & $P^{\text{AD}}_{n, 2} \left(1 - \frac{0.0751}{n} + \frac{0.0692}{n \sqrt{\alpha}} - \frac{0.0014}{n \alpha}\right)$ & $0.74\%$\\ [3ex] & $-$ & $-$ & $P^{\text{AD}}_{n, 2} \left(1 - \frac{0.1106}{n} + \frac{0.0796}{n \sqrt{\alpha}} - \frac{0.0018}{n \alpha}\right)$ & $0.83\%$\\ [3ex] \bottomrule \end{tabular} \fi \caption{\small Modified forms of ecdf-based statistics for sample size $n$ and significance level $\alpha$. MRE refers to the Mean Relative Error between the expected rejection proportion and the approximated proportion for each pair of $(n, \alpha)$ with $n \in \mathcal{N}_{\text{test}}$ and $\alpha \in \{0.25, 0.2, 0.15, 0.1, 0.05, 0.02, 0.01\}$. The $T_n^\ast(\alpha)$ forms are valid for $n \ge 5$ and $\alpha \leq 0.25$. $(\ssymbol{1})$ \cite{Stephens1974} states the best modification for Anderson--Darling statistic for $n \geq 5$ is its asymptotic distribution. 
$(\ssymbol{2})$ Both the modified form estimated for $p = 2$ (top row) and the $(n, \alpha, p)$-modification particularized for $p = 2$ (bottom row) are given for $P^{\text{CvM}}_{n, 2}$ and $P^{\text{AD}}_{n, 2}$.} \label{table:summary-table} \end{table} \section{Detecting temporary longitudinal non-uniformity in sunspots} \label{section:applications} The Sun's magnetic field presents periodic behavioral patterns of about $11$ years. During this period, the magnetic field wraps due to the Sun's differential rotation until its polarity is eventually reversed and the wrapping restarts, indicating the beginning of a new solar cycle \citep{Babcock1961}. Sunspots are created by high-intensity magnetic loops emerging from the Sun's interior convection zone to the surface, producing darker, cooler regions on the Sun's photosphere. Through their lifespans, which can last from hours to days, they experience continuous changes in shape, area, and location. The total number of active sunspots varies throughout the cycle, showing the maximum activity during the middle years (see Figure \ref{fig:sunspots}). Sunspots appear in a markedly rotationally symmetric fashion: they are mainly distributed in latitudinal belts that are initially situated at $\pm30^{\circ}$ and that decay to the equator as the solar cycle advances (a phenomenon known as \textit{Sp\"orer's law}). Sunspots also appear to cluster in \textit{active longitudes}. Non-uniform patterns may appear through \textit{preferred zones of occurrence} where sunspots had originated previously, as described early on by \citet[pages 574 and 581]{Babcock1961}. The existence of active longitudes was also suggested in \cite{Bogart1982} upon inspection of the significant autocorrelation of daily sunspot numbers. Since daily sunspot numbers have no positional information, such an analysis can only indicate that there is either one active longitude band or two active longitude bands separated by $180^\circ$. More recently, \cite{Berdyugina2003} concluded the presence of two active longitudinal regions in both hemispheres that are separated by $180^\circ$, with the major solar activity alternating between both longitudes in about $1.5$ to $3$ years. This effect is known as the \textit{flip-flop} phenomenon, and it is also present in other active stars \citep{Elstner2005}. Analyzing the presence of solar active longitudes requires knowledge of the Carrington period (or solar rotation period). It corresponds to the mean synodic rotation period of sunspots, which is about $27.275$ days. Because of differential rotation, lower latitudes rotate faster than regions closer to the poles. This effect causes the migration of active longitudes in the Carrington reference frame, due to changes in the mean latitude of sunspot emergence, producing a lag of $2.5$ solar rotations per solar cycle that blurs the active longitudes pattern if data from more than one solar cycle are analyzed at once \citep{Berdyugina2003}. Hence, in order to ensure the adequate detection of active longitudes, a sequential analysis of data limited to a certain number of Carrington rotations is preferable. According to \cite{Bogart1982} and \cite{deToma2000}, active regions are observable for 3--7 solar rotations, though \cite{Pelt2010} claim they can be observed for about 10--15 Carrington rotations. The data we analyze is based on the Debrecen Photoheliographic Data (DPD) sunspot catalog \citep{Baranyi2016, Gyoeri2016}.
It contains observations of sunspot locations since 1974 and is a continuation of the Greenwich Photoheliographic Results (GPR) catalog, which spanned 1872--1976. The dataset $\texttt{sunspots\_births}$, available in the R package \texttt{rotasym} \citep{Garcia-Portugues2020e}, accounts only for the first-ever observation (referred to as ``birth'' henceforth) of a group of sunspots. In our analysis, summarized in Figure \ref{fig:sunspots}, we first applied the $P_{n, 2}^{\text{AD}}$-based uniformity test sequentially to the longitudes of sunspot births (a total of $6195$, $4551$, and $5373$ observations for the 21st, 22nd, and 23rd cycles, respectively) within a rolling window formed by $10$ Carrington rotations (approximately nine months). The corresponding $p$-values were computed using Algorithm \ref{alg:p-val} for the northern (blue), southern (red), and both (black) hemispheres. In addition, the $p$-value was also computed by Monte Carlo with $5\times10^3$ samples, in order to compare the running times of both methods. Our method runs in an average of $1.6\text{ s}$ per solar cycle, while the Monte Carlo approach completes it in $1600\text{ s}$ per solar cycle. In order to account for the dependency between sequential tests, \cite{Benjamini2001}'s FDR correction was applied to the $p$-values obtained with the test based on $P_{n, 2}^{\text{AD}}$. These corrected $p$-values are shown in the top row of Figure \ref{fig:sunspots}. Second, circular-linear kernel density estimation \citep{Garcia-Portugues2013b} of sunspot births for the northern (middle-top figure) and southern (middle-bottom) hemispheres allowed us to compute several level sets, represented as contour lines labeled as ``$100p \%$''. Each of these sets is the smallest set containing $1-p$ of the probability of the estimated density function. Hence, darker sets represent higher-density zones of sunspot births, both through time and longitude. Third, a scatter plot of sunspot births is shown in the bottom figures, along with the circular Nadaraya--Watson \citep{DiMarzio2012} regression for the northern (blue), southern (red), and both (black) hemispheres. The Nadaraya--Watson regression gives a moving circular mean of the longitudes of sunspot births through time. Both the density and regression kernel estimates use ``rule-of-thumb'' bandwidths for normal \citep{Silverman1986} and von Mises--Fisher \citep{Garcia-Portugues2013a} distributions, given the similarity of the marginal distributions to these respective distributions and the marked undersmoothing that resulted from cross-validation bandwidths. \begin{sidewaysfigure} \iffigstabs \centering \includegraphics[width=1\linewidth]{img/Fig4.pdf} \fi \caption{\small Longitudinal non-uniformity patterns of sunspot births. Each column shows the analysis for one of the 21st, 22nd, and 23rd solar cycles. Northern (blue), southern (red), and both (black) hemispheres were analyzed separately. Top figures: $P_{n, 2}^{\text{AD}}$-based uniformity test of sunspot birth longitudes. The $p$-values shown are corrected by \cite{Benjamini2001}'s FDR. Middle figures: Circular-linear kernel density level sets of sunspot births through time and longitude.
Bottom figures: Sunspot births (points) along with the corresponding Nadaraya--Watson regression (lines).} \label{fig:sunspots} \end{sidewaysfigure} We draw the following conclusions from the analysis: \begin{enumerate}[label=(\textit{\roman*}),ref=(\textit{\roman*})] \item In general, both hemispheres seem to have different behavioral patterns along solar cycles, both in terms of longitudinal non-uniformities and sunspot activity levels. During the 21st cycle, the northern hemisphere has $33\%$ of the tests rejected at significance level $\alpha=0.05$. In cycles 22 and 23, the southern hemisphere presents more non-uniform periods ($9\%$ and $10\%$ of the tests are rejected for $\alpha=0.05$, respectively) than the northern hemisphere ($5\%$ and $3\%$ are rejected, respectively). \item Non-uniformity periods are intermittent during the lifetime of the solar cycle, without a clear association with the intensity of sunspot appearance. The length and number of non-uniformity periods differ between solar cycles. \item Sunspots seem to appear in preferred zones of occurrence. The highest-density sets, together with the Nadaraya--Watson regressions, show consistent patterns of activity within certain longitudinal zones. In particular, during the periods in which uniformity is rejected at significance level $\alpha=0.05$, the northern sunspot births seem to cluster around $0^\circ$ (1982, 1990, 2000), $135^\circ$ (1983--1984), and $180^\circ$ (1977--1978, 1979--1980), while the southern hemisphere sunspot births cluster around $-135^\circ$ (1991, 2004, 2008). However, non-uniformity periods are too scarce to claim the existence of \textit{active longitudes}. \item The \textit{flip-flop} effect between $180^\circ$-separated active longitudes is not obvious throughout all the cycles. Although longitudes $0^\circ$ and $180^\circ$ seem to accumulate more sunspots in the northern hemisphere, the alternation between supplementary longitudes is not a clear, fixed-duration pattern. \end{enumerate} \section{Discussion} \label{section:discussion} We have presented a general, automated approach to construct simple yet effective approximations for the upper tail of the exact-$n$ null distribution of numerous goodness-of-fit statistics. The simulation results demonstrate that these approximations are accurate enough for practical applications of several upper-tail tests, even when these depend on a varying (yet known) parameter. Although state-of-the-art statistic-specific algorithms like \cite{Marsaglia2003}, \cite{Csorgo1996}, and \cite{Marsaglia2004} provide arbitrarily accurate upper-tail $p$-values for the $D_n$, $W_n^2$, and $A_n^2$ statistics, respectively, our $p$-value approximation method offers significant computational improvements, has a reasonable precision (mean relative errors below $1\%$), and, most importantly, can be applied to a wide range of statistics. Compared to the general and omnipresent $p$-value approximation by $M$ Monte Carlo trials, our method presents two key advantages: (\textit{i}) more accurate results (at least when $M=10^4$); and (\textit{ii}) running times that are faster by several orders of magnitude. This computational expediency makes the stabilized statistic especially convenient for sequential tests, as illustrated in the data application. The $(n, \alpha)$-stabilization significantly extends the scope of applicability of \cite{Stephens1970}-like stabilizations. The stabilization focuses only on the upper tail of $T_n$, as this is usually the most useful in practice.
However, stabilizations for the lower tail can be analogously derived. Obtaining modifications that stabilize the whole distribution, while still retaining simplicity, would offer the advantage of having approximated $p$-values that are roughly uniformly distributed under the null hypothesis. This task is left for future research. \section*{Acknowledgments} The second author acknowledges support by grants PGC2018-097284-B-100 and IJCI-2017-32005 by Spain's Ministry of Science, Innovation and Universities. The two grants were co-funded with ERDF funds. The computational resources of the Supercomputing Center of Galicia (CESGA) are greatly appreciated.
\section{Introduction} Discoveries of topological materials, such as topological insulators, Dirac and Weyl semimetals, have revolutionised contemporary physics~\cite{hasan2010colloquium, armitage2018weyl}. Moreover, these materials hold promise for upcoming technologies based on quantum science and electronics~\cite{keimer2017physics, sirica2021shaking, tokura2017emergent}. One of the remarkable properties of these materials is the robustness of the electronic states against perturbations, which has catalysed a plethora of interesting phenomena~\cite{qi2011topological, moore2010birth, yan2017topological}. Methods based on light-matter interaction play a pivotal role in probing and understanding various exotic properties of these topological materials~\cite{basov2017towards, bao2021light, mciver2012control}. If the intensity of the light is increased significantly, various interesting perturbative and non-perturbative nonlinear processes occur in the matter. High-harmonic generation (HHG) is one such nonlinear process, in which radiation at integer multiples of the incident light's frequency is emitted~\cite{ghimire2011observation, ghimire2019}. Numerous static and dynamic properties of solids have been probed by analysing the radiation emitted during HHG~\cite{luu2015extreme, mrudul2021high, schubert2014sub, mrudul2021light, mrudul2021controlling, hohenleutner2015real, zaks2012experimental, pattanayak2020influence, langer2018lightwave, mrudul2020high, neufeld2021light, pattanayak2019direct}. In recent years, topological materials have turned out to be the centre of attention for HHG~\cite{reimann2018subcycle, bai2021high, bauer2018high, dantas2021nonperturbative}. It has been experimentally found that the bulk and the topological surface play different roles during HHG from a topological insulator~\cite{schmid2021tunable}. The interplay of the time-reversal symmetry protection and the spin-orbit coupling in a topological insulator leads to an anomalous dependence of the harmonic yield on the polarisation of the driving laser~\cite{baykusheva2021all}. Berry curvature plays an important role in determining the behaviour of high-harmonic spectra in both cases. In a three-dimensional Dirac semimetal, the coherent dynamics of the Dirac electrons plays the central role in HHG~\cite{kovalev2020non, cheng2020efficient}. Moreover, it has been reported that the nonlinear responses of three- and two-dimensional Dirac semimetals are significantly different~\cite{lim2020efficient}. In all of these cases, namely topological insulators and Dirac semimetals, time-reversal symmetry (TRS) is inherently preserved. Therefore, it is natural to explore how the breaking of TRS affects HHG from topological materials, which is the main emphasis of the present work. A Weyl semimetal (WSM) can be formed by breaking either the time-reversal or the inversion symmetry of the corresponding Dirac semimetal phase, and either case results in non-zero Berry curvature~\cite{yan2017topological}. A Weyl semimetal consists of topologically protected degenerate points, known as Weyl points, which can be seen as monopoles of the Berry curvature in momentum space~\cite{armitage2018weyl}. This makes the WSM one of the most exotic gapless systems. In 2015, the first WSM was realised experimentally in transition-metal monopnictides, which form the class of nonmagnetic WSMs with broken inversion symmetry~\cite{xu2015discovery, xu2015discovery1, lv2015experimental}.
Later, three groups have shown the evidence of magnetic WSM in ferromagnetic materials with broken TRS experimentally~\cite{morali2019fermi, liu2019magnetic, belopolski2019discovery}. Present work focuses on addressing some crucial questions such as how TRS breaking and resultant modifications in Berry curvature affect HHG, the role of the form of the Berry curvature's components, and how the separations of the Weyl points influence HHG in WSM. In the following, we will demonstrate that non-zero Berry curvature in TRS-broken WSM leads to anomalous current in a direction perpendicular to the electric field and anomalous odd harmonics -- analogous to the anomalous Hall effect. Moreover, we will show that the directions of the emitted anomalous odd harmonics are related to the nature of the Berry curvature's components. The appearance of the anomalous odd harmonics allows us to probe non-trivial topology of the TRS-broken WSM by measuring the polarisation of the emitted anomalous odd harmonics. Recently, HHG from an inversion-symmetry broken WSM was explored experimentally in which a linearly polarised pulse leads to the generation of even harmonics, related to non-zero Berry curvature~\cite{lv2021high}. Our findings are in contrast to previously reported works where Berry curvature mediated anomalous electron's velocity leads to the generation of even harmonics~\cite{schubert2014sub, liu2017high, hohenleutner2015real, luu2018measurement}. The Hamiltonian corresponding to WSM with broken TRS can be written as~\cite{hasan2017discovery} \begin{equation} \mathcal{H}(\mathbf{k}) = \bm{d}(\mathbf{k})\cdot \bm{\sigma} = d_1(\mathbf{k})\sigma_x + d_2(\mathbf{k}) \sigma_y + d_3(\mathbf{k})\sigma_z. \label{eq1} \end{equation} Here, $\bm{\sigma}$ is the Pauli vector and [$d_1 =t_x \{ \cos(k_x a) - \cos(k_0 a)\} + t_y \{\cos(k_y b) -1\} + t_z \{\cos(k_z c) -1\}$, $ d_2 = t_y \sin(k_y b)$ and $d_3 = t_z \sin(k_z c)$] with $(\pm k_0,0,0)$ as the positions of the Weyl points, $t_{x,y,z}$ as hopping parameters and $a, b, c$ are lattice parameters. Here, we assume ferromagnetic WSM with tetragonal crystal structure, i.e., $a = b \neq c$~\cite{meng2019large}. $a = b = 3.437~ \text{\AA}, c = 11.646 ~\text{\AA}$ and $t_{x} = 1.88~\textrm{eV}, t_{y} = 0.49~\textrm{eV}, t_{z} = 0.16~\textrm{eV}$ are considered. The parameters used here are in accordance with the ones used in Ref.~\cite{nematollahi2020topological}. It is easy to see that the above Hamiltonian exhibits TRS breaking, i.e., $\hat{\mathcal{T}}^\dagger {\mathcal{H}}(-\mathbf{k})\hat{\mathcal{T}}\neq {\mathcal{H}}(\mathbf{k})$. However, $\hat{\mathcal{P}}^\dagger {\mathcal{H}}(-\mathbf{k})\hat{\mathcal{P}} = {\mathcal{H}}(\mathbf{k})$ ensures that inversion symmetry is preserved. Here, $\hat{\mathcal{P}} = \sigma_{x}$ and $\hat{\mathcal{T}} = \hat{\mathcal{K}}$ such that $ \hat{\mathcal{K}}^{\dagger} i \hat{\mathcal{K}} = -i$ are the inversion symmetry and time-reversal symmetry operators, respectively. After diagonalizing the above Hamiltonian, energy dispersion can be obtained as $\mathcal{E}_\pm = \pm|\bm{d}(\mathbf{k})|=\pm \sqrt{d_1^2 + d_2^2 + d_3^2}$. We have considered $k_0$ = 0.2 rad/au. The corresponding band-structure on $k_z=0$ plane is presented in Fig. S1~\cite{NoteX}. Interaction of the laser with the WSM is modelled using semiconductor-Bloch equations in the Houston basis as discussed in Refs.~\cite{mrudul2021light,floss2018ab}. 
Within this formalism, current at any time can be written as \begin{equation} \mathbf{J}(\textbf{k}, t)= \sum_{m,n}\rho_{mn}^{\mathbf{k}}(t) \mathbf{p}_{mn}^{\mathbf{k_t}}, \label{eq:sbecurrent} \end{equation} where $\mathbf{k_t} = \mathbf{k} + \mathbf{A(t)}$ with $ \mathbf{A(t)}$ as the vector potential of the laser, and $\rho_{mn}^{\mathbf{k}}(t)$ is the density matrix at time $t$. Here, $\textbf{A}(t)$ is related to its electric field $\textbf{E}(t)$ as $\textbf{E}(t) = -\partial \textbf{A}(t)/\partial t$. $\mathbf{p}^{\mathbf{k_t}}_{nm}$ is group velocity matrix element and calculated as $\mathbf{p}^{\mathbf{k_t}}_{nm}= \matrixel{n,\mathbf{k_t}}{\grad_{\mathbf{k_t}}\mathcal{H}_{\mathbf{k_t}}}{m,\mathbf{k_t}}$. By performing the integral over entire Brillouin zone and taking Fourier transform ($\mathcal{FT}$), high-harmonic spectrum is simulated as \begin{equation} \mathcal{I}(\omega) = \left|\mathcal{FT}\left(\frac{d}{dt} \left[\int_{BZ} \textbf{J}(\mathbf{k}, t)~d{\textbf{k}} \right] \right) \right|^2. \end{equation} Figure~\ref{fig2} presents high-harmonic spectra corresponding to linearly polarised pulse. When the pulse is polarised along the $x$ direction, odd harmonics are generated along the laser polarisation as evident from Fig.~\ref{fig2}(a). However, results become intriguing when the pulse is polarised along the $y$ or $z$ direction. In both cases, odd harmonics are generated along the laser polarisation. Moreover, anomalous odd harmonics along perpendicular directions are also generated. As reflected from Figs.~\ref{fig2}(b) and (c), when laser is polarised along the $y$ or $z$ direction, anomalous odd harmonics along the $z$ or $y$ direction, respectively, are generated. However, the yield of the anomalous harmonics is relatively weaker in comparison to the parallel harmonics. On the other hand, odd and even harmonics are generated in an inversion-symmetry broken WSM~\cite{lv2021high}. Furthermore, it has been concluded that the appearance of the even harmonics is related to the spike-like Berry curvatures in inversion-symmetry broken WSM~\cite{lv2021high}. \begin{figure}[h!] \includegraphics[width= \linewidth]{Fig1.pdf} \caption{High-harmonic spectra corresponding to a linearly polarised pulse. The pulse is polarised along (a) $x$, (b) $y$, and (c) $z$ directions. The driving pulse is $\simeq$ 100 fs long with intensity $1 \times 10^{11}$ W/cm$^2$, and wavelength 3.2 $\mu$m. Decoherence time of 1.5 fs is added phenomenologically in semiconductor Bloch equations.} \label{fig2} \end{figure} To understand why a linearly polarised driving pulse leads to parallel and anomalous odd harmonics, and how these findings are related to TRS breaking in WSM, we employ the semiclassical equation of Bloch electrons in an external electric field $\textbf{E}(t)$. Within this approach, expression of the anomalous current is written as $\bm{J}_\Omega(t) = -\textbf{E}(t) \times \int \boldsymbol{\Omega}_\mu(\mathbf{k})~\rho_{ \mu}(\mathbf{k},t)~ d \mathbf{k}$ with $\boldsymbol{\Omega}_\mu$ and $\rho_\mu(\mathbf{k},t)$ as the Berry curvature and band-population of the $\mu^{th}$ energy band, respectively~\cite{liu2017high}. We can assume that the initial band population is symmetric under inversion as $\rho(\mathbf{k}, 0) = \rho(-\mathbf{k}, 0)$. In the presence of a laser, momentum of an electron changes from $\mathbf{k}$ to $\mathbf{k_t}$, which leads to the change in the band population as $ \rho(\mathbf{k}, t) = \rho(\mathbf{k_t},0)$. 
Under time-translation of the laser $t\rightarrow t+T/2$, the anomalous current can be expressed as \begin{eqnarray} \bm{J}_\Omega(t+T/2) & = & -\mathbf{E}(t+T/2) \times \int \boldsymbol{\Omega}_\mu(\mathbf{k})~\rho_\mu\big(\mathbf{k}_{\mathbf{t}+T/2},0\big) ~d \mathbf{k} \nonumber \\ & = & \mathbf{E}(t) \times \int\boldsymbol{\Omega}_\mu(\mathbf{k}) ~\rho_\mu\big(\mathbf{k}-\mathbf{A}(t), 0\big) ~d \mathbf{k} \nonumber \\ & = & \mathbf{E}(t) \times \int \boldsymbol{\Omega}_\mu(\mathbf{k}) ~\rho_\mu\big(\mathbf{k_t}, 0\big) ~d \mathbf{k} \nonumber \\ & = & - \bm{J}_\Omega(t). \end{eqnarray} In the above equations, we have used $\mathbf{E}(t+T/2) = - \mathbf{E}(t)$, $\mathbf{A}(t+T/2) = -\mathbf{A}(t)$, and $\rho(\mathbf{k}, 0) = \rho(-\mathbf{k}, 0)$; and changed the dummy variable $\mathbf{k}\rightarrow-\mathbf{k}$ in the integral. Also, the Berry curvature for an inversion-symmetric system with broken TRS obeys $\boldsymbol{\Omega}(\mathbf{k})= \boldsymbol{\Omega}(-\mathbf{k})$. The contribution of $\bm{J}_\Omega (t)$ to the $n^{\textrm{th}}$ harmonic is given by $\bm{J}^n_\Omega(\omega) \propto \int_{-\infty}^\infty \bm{J}_\Omega(t) e^{in\omega t}~dt$. By changing $t \rightarrow t + T/2$ in the integral, we obtain $\bm{J}^n_\Omega(\omega) \propto \int_{-\infty}^\infty \bm{J}_\Omega(t+T/2) e^{in\omega (t+T/2)}~ dt = -e^{in\pi}\int_{-\infty}^\infty \bm{J}_\Omega(t) e^{in\omega t}~dt$. Since $e^{in\pi} = +1$ for even $n$, the even-harmonic components of $\bm{J}_\Omega$ must vanish, whereas $e^{in\pi} = -1$ for odd $n$ leaves the odd-harmonic components unconstrained; hence, only odd harmonics are allowed. Thus, TRS-broken systems lead to anomalous odd harmonics, which is in contrast to the case of TRS-preserving systems with broken inversion symmetry, in which the anomalous current leads to the generation of even harmonics~\cite{schubert2014sub, liu2017high, hohenleutner2015real, luu2018measurement}. In order to discern the directions of the anomalous current, we need to understand the distinct roles of the Berry curvature's components, with the Berry curvature written as $\boldsymbol{\Omega}(\mathbf{k}) = \Omega_{k_{x}}(\mathbf{k}) \hat{e}_{k_{x}} + \Omega_{k_{y}}(\mathbf{k}) \hat{e}_{k_{y}} + \Omega_{k_{z}}(\mathbf{k}) \hat{e}_{k_{z}}$. The expressions of the Berry curvature's components corresponding to the Hamiltonian in Eq.~(\ref{eq1}) are given in the supplementary material (see Eqs. S2-S4~\cite{NoteX}). The direction of the anomalous current is given by $\mathbf{E} \times \mathbf{\Omega}$, and the integral is performed over the entire Brillouin zone. Moreover, $\mathbf{E}$ is a function of time and $\boldsymbol{\Omega}$ is a function of $\mathbf{k}$, so their product does not change the parity. \begin{figure}[h!] \includegraphics[width= 1.05 \linewidth]{Fig2.pdf} \caption{Third harmonic (H3) in the time domain. The driving laser pulse is linearly polarised along the (a) $y$ and (b) $z$ directions. The driving pulse has the same parameters as in Fig.~\ref{fig2}.} \label{fig3} \end{figure} If the laser is polarised along the $x$ direction, then it is straightforward to see that the anomalous current along the $y$ and $z$ directions turns out to be zero, as $\Omega_{k_{y}}$ and $\Omega_{k_{z}}$ are odd functions in the two directions. On the other hand, $\Omega_{k_{x}}$ is an even function in all directions, contributing to the anomalous current when the laser is polarised along the $y$ or $z$ direction.
Thus, the present theoretical analysis is consistent with the numerical results shown in Fig.~\ref{fig2}, which unequivocally establishes that the non-trivial topology of the Berry curvature leads to nonlinear anomalous odd harmonics -- the light-driven nonlinear anomalous Hall effect. At this point it is natural to investigate what determines the phase between the parallel and the anomalous harmonics. To address this issue, we focus on the third harmonic (H3) in the time domain. When the laser is polarised along the $y$ direction, H3 along the $y$ and $z$ directions is in phase, as evident from Fig.~\ref{fig3}(a). However, it becomes out of phase in the case of the $z$-polarised pulse. The in-phase or out-of-phase behaviour of H3 can be attributed to the sign of $\bm{J}_\Omega (t) \propto \int \mathbf{E}(t) \times \mathbf{\Omega}(\mathbf{k}) d\mathbf{k}$, which yields a positive (negative) sign when the laser is along the $y$ ($z$) direction. \begin{figure}[h!] \includegraphics[width= \linewidth]{Fig3.pdf} \caption{High-harmonic spectra generated by the right-handed circularly polarised pulse in (a) $x-y$, (b) $x-z$, and (c) $y-z$ planes. The parameters of the laser and decoherence time are the same as given in Fig.~\ref{fig2}.} \label{fig4} \end{figure} To corroborate our findings about the generation of the anomalous odd harmonics and their relation to the non-trivial topology of the Berry curvature's components, high-harmonic spectra generated by a circularly polarised pulse are presented in Fig.~\ref{fig4}. In agreement with the two-fold rotation symmetry of the Hamiltonian, only odd harmonics are generated. When the pulse is on the $x-y$ plane, odd harmonics along the $x$ and $y$ directions are generated, as the $x$ and $y$ components of the driving electric field are non-zero. Moreover, due to the non-zero $y$ component of the driving field, anomalous odd harmonics are generated along the $z$ direction [see Fig.~\ref{fig3}(a)]. In this case, the mechanism is the same as in the case of the linearly polarised pulse along the $y$ direction. The same is applicable in the case of the circularly polarised pulse on the $x-z$ plane. In this case, parallel odd harmonics are generated along the $x$ and $z$ directions, whereas anomalous odd harmonics are generated along the $y$ direction [see Fig.~\ref{fig3}(b)]. However, when the pulse is polarised on the $y-z$ plane, only parallel harmonics along the $y$ and $z$ directions are generated, and no anomalous harmonics along the $x$ direction are generated. This is expected due to the even and odd natures of $\Omega_{k_{x}}$ and $\Omega_{k_{y}/k_{z}}$, respectively (see Eqs. S2-S4~\cite{NoteX}). \begin{figure}[h!] \includegraphics[width= \linewidth]{Fig4.pdf} \caption{Comparison of the anomalous high-harmonic yield for different values of $k_0$ = 0.2 and 0.3 rad/au. Odd anomalous harmonics along (a) the $z$ direction when the polarisation of the pulse is on the $x-y$ plane, and (b) the $y$ direction when the polarisation of the pulse is on the $x-z$ plane. $k_0$ is proportional to the distance between the two Weyl points.} \label{fig5} \end{figure} After establishing the non-trivial role of the Berry curvature's components and their parity, let us explore how their strengths affect the yield of the anomalous odd harmonics. We know that the magnitude of the anomalous current depends on $\mathbf{E}\times \mathbf{\Omega}$. Moreover, the magnitude of the Berry curvature's components depends on $k_0$ (see Fig. S2~\cite{NoteX}).
Therefore, as we change the value of $k_0$ from $0.2$ to $0.3$, the strength of the Berry curvature's components reduces, which leads to a reduction in the strength of the anomalous current. Fig.~\ref{fig5} presents a comparison of the yield of the anomalous harmonics for two different values of $k_0$. In the case of the circularly polarised pulse on the $x-y$ plane, the anomalous harmonics along the $z$ direction reduce drastically as we change $k_0$ from $0.2$ to $0.3$ rad/au [see Fig.~\ref{fig5}(a)]. The same is true for the pulse on the $x-z$ plane and the anomalous harmonics along the $y$ direction [see Fig.~\ref{fig5}(b)]. Therefore, the yield of the anomalous harmonics reduces drastically as the value of $k_0$ is increased from 0.2 to 0.3 rad/au. Similar conclusions can be drawn in the case of HHG from the linearly polarised pulse (see Fig. S3~\cite{NoteX}). However, the yield of the parallel harmonics is insensitive to the change in the value of $k_0$ (see Figs. S4 and S5~\cite{NoteX}). Our findings are similar to the anisotropic anomalous Hall effect, in which the magnitude of the current depends on the integral of the Berry curvature~\cite{yang2021noncollinear}. Not only do the anomalous current and harmonics encode the non-trivial symmetry and the magnitude of the Berry curvature's components, but they also tailor the polarisation of the emitted harmonics, which offers an elegant way to probe non-trivial topological properties of the Berry curvature in an all-optical way. As evident from Fig.~\ref{fig3}, the $y$ and $z$ components of H3 are in phase and out of phase when the driving laser is polarised along the $y$ and $z$ directions, respectively, which gives two different polarisations of H3 (see Fig. S6(a)~\cite{NoteX}). Thus, by measuring the polarisation of H3, the non-trivial topology of the Berry curvature can be probed, as it controls the strength of and the phase between the $y$ and $z$ components of H3. The same observations are true for other higher-order harmonics corresponding to a linearly polarised driver (see Figs.~\ref{fig2} and S6~\cite{NoteX}). Similar conclusions can be made when a circularly polarised laser is used for HHG (see Figs.~\ref{fig4} and S7~\cite{NoteX}). In summary, we have investigated the role of TRS breaking in strong-field-driven nonlinear processes in topological materials. For this purpose, the inversion-symmetric Weyl semimetal with broken TRS is considered. It is found that the non-trivial topology of the TRS-broken Weyl semimetal leads to the generation of anomalous odd harmonics, which are anisotropic and appear only when the driving laser has non-zero components along the $y$ or $z$ direction. The non-trivial symmetry of the Berry curvature's components of the TRS-broken Weyl semimetal is responsible for the anisotropic nature of the anomalous harmonics (current). Moreover, the strength of the Berry curvature dictates the strength of the anomalous odd harmonics. Furthermore, the non-trivial topological properties of the Berry curvature and its strength can be probed by measuring the polarisation of the emitted anomalous odd harmonics. The present work opens a new avenue for studying strong-field-driven electron dynamics and high-harmonic generation in systems with broken TRS, such as exotic magnetic and topological materials, and for tailoring the polarisation of the emitted harmonics. G. D. acknowledges useful discussions with Prof. Sumiran Pujari (IIT Bombay). G. D.
acknowledges support from Science and Engineering Research Board (SERB) India (Project No. ECR/2017/001460) and the Ramanujan fellowship (SB/S2/RJN-152/2015).
\section{Introduction}\label{introduction} After being introduced in \citep{sutskever2014sequence, cho2014properties, bahdanau2016neural}, Neural Machine Translation (NMT) rapidly replaced and outperformed the traditional statistical models for translation tasks. It has since achieved state-of-the-art performance for a multitude of languages. This can be attributed to the fact that NMT uses continuous representations for languages, is capable of handling long-distance dependencies, and requires significantly less feature engineering \citep{TAN20205}. Nearly all such models consist of an encoder-decoder architecture. Earlier applications of this framework incorporate Recurrent Neural Networks \citep{cho-etal-2014-learning} and Convolutional Neural Networks \citep{kalchbrenner2017neural, gehring2017convolutional} as their encoder and decoder components. While there have been many variants of recurrent networks which have performed well in language modeling, they bear a few shortcomings. Most importantly, they inhibit parallelization and have no explicit model hierarchy. Moreover, training deep neural networks with recurrence is challenging and can result in vanishing or exploding gradients \citep{pascanu2013difficulty}. The concept of attention was introduced in \citep{bahdanau2016neural} to avoid having a fixed-length source sentence representation, which solved the fixed-length bottleneck problem. The attention mechanism has also eased optimization difficulty, and is considered to be a milestone in Machine Translation research. \citealt{vaswani2017attention} introduced the Transformer architecture, which forgoes RNNs and CNNs and is entirely based on the attention mechanism. Transformers proved to be efficacious on sequence-to-sequence tasks, and, as a result, transformer-based language models emerged. Pretraining deep transformers on language modeling tasks has significantly improved performance on NLP tasks as compared to training from scratch \citep{devlin2019bert,radford2019language,lewis2019bart,liu2019roberta}. The idea behind pretraining is that the models are initialized with general linguistic knowledge, which can then be applied to downstream tasks by further finetuning the model. Multitask Learning (MTL) \citep{Caruana2004MultitaskL} has been successful in boosting results across many domains. The increase in performance can be credited to learning shared representations, which improves generalization across two or more related tasks that are trained jointly. MTL has also proven advantageous in neural machine translation. \citealt{luong2016multitask} showed that combining Machine Translation with Parsing and Image Captioning led to better translation results. \citealt{niehues2017exploiting} integrated POS tagging, NER and Machine Translation. In this study, we propose a multitask finetuning methodology which utilises monolingual data to increase the performance of NMT on Indian language pairs. Inspired by the works of \citep{wang2020multitask} and \citep{domhan-hieber-2017-using}, we propose finetuning a pretrained model on bilingual parallel data together with one auxiliary task, Causal Language Modeling (CLM), on the monolingual corpora of both the source-side and the target-side languages. We use a pretrained mBART \citep{liu-etal-2020-multilingual-denoising} for multi-task finetuning, and compare the performance with the standard finetuning method on the same model and corpus. 
The rest of the paper is organised as follows: Section \ref{data} provides an overview of the dataset used for this study, and Section \ref{method} describes the proposed methodology in depth. In Section \ref{experiment}, we describe our experimental setup, and the results are discussed in Section \ref{results}. Section \ref{conclusion} provides a conclusion to the study and discusses possible future work. \section{Dataset} \label{data} For translation, we selected three language pairs: Marathi-English, Hindi-English and Marathi-Hindi. We chose these languages as Hindi, English and Marathi are three of the four most spoken languages across India. The parallel corpus that we used was \emph{Samanantar} \citep{ramesh2021samanantar}, which is the largest publicly available parallel corpora collection for Indic languages. The Samanantar corpus has 1.99 million sentence pairs for Marathi-Hindi, 3.32 million sentence pairs for Marathi-English, and 8.56 million sentence pairs for Hindi-English. We randomly selected a subset of these examples for translation. The distribution of this subset is given in Table \ref{table1}\footnote{xx in Table \ref{table1} signifies two target languages in every case. For example, in Mr\textrightarrow xx, xx is English and Hindi.}. \par The data used for the language modeling task was sampled from \emph{IndicCorp} \citep{kakwani-etal-2020-indicnlpsuite}, which is a collection of monolingual corpora spanning 11 Indic languages. Out of these, we select only our source and target side languages, i.e., Hindi, Marathi and English. We sample a subset of IndicCorp for each of these languages. The distribution of the monolingual data selected for Causal Language Modeling is given in Table \ref{table2}. \begin{table}[htp] \centering \resizebox{180pt}{37pt}{ \resizebox{\textwidth}{!}{% \begin{tabular}{cccccc} \hline\hline & \textbf{Mr\textrightarrow xx} &\textbf{Hi\textrightarrow xx} &\textbf{En\textrightarrow xx}\\ \hline\hline \textbf{Training} & 100k & 100k & 100k\\ \textbf{Validation} & 20k & 20k & 20k\\ \textbf{Testing} & 5k & 5k & 5k\\ \hline\hline \end{tabular}} } \caption{\label{Bilingual-corpus}Data distribution of bilingual parallel corpora}\label{table1} \end{table} \begin{table}[h] \centering \resizebox{165pt}{40pt}{ \begin{tabular}{ccc} \hline\hline & \textbf{Total Available} & \textbf{Selected}\\ & \textbf{Corpora} &\\ \hline\hline \textbf{Mr} & 34.0M & 70k\\ \textbf{Hi} & 63.1M & 70k\\ \textbf{En} & 54.3M & 70k\\ \hline\hline \end{tabular} } \caption{\label{Monolingual-corpus}Data distribution of monolingual corpora}\label{table2} \end{table} \section{Methodology} \label{method} We use the pretrained mBART50 model in a multitask setting, with translation as the main task and self-supervised language modeling as an auxiliary task. We then compare the performance of this model to that of the conventionally finetuned mBART50 which has been trained solely on the translation task. The principal components of the multitask model are briefly explained in this section. \subsection*{Multitask Learning} Translation is the primary downstream task in our multitask model, for which we train on a bitext corpus $D_B$ consisting of sentence pairs $(s,t)$ and optimize the cross-entropy loss function: \begin{equation} \mathcal{L}_{T} = \mathbb{E}_{(s,t)\sim D_B} [-\log{P(t|s)}] \end{equation} where $s$ and $t$ denote the source and target sentences, respectively. 
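For concreteness, a minimal PyTorch-style sketch of this objective is shown below; the tensor names and shapes are illustrative assumptions rather than our exact training code, and the CLM loss introduced in the next subsection is computed with the same token-level cross-entropy before the two terms are summed.

\begin{verbatim}
import torch
import torch.nn.functional as F

def translation_loss(logits, target_ids, pad_id):
    # Token-level cross-entropy for -log P(t|s) under teacher forcing.
    # logits: (batch, tgt_len, vocab);  target_ids: (batch, tgt_len)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_ids.reshape(-1),
        ignore_index=pad_id,   # padding positions do not contribute to the loss
    )
\end{verbatim}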
As a large amount of monolingual data is available for these languages, we leverage it to improve NMT performance by training language modeling auxiliary tasks alongside our primary translation task. We train on the respective monolingual corpora of both the source and target side languages with the Causal Language Modeling objective. \subsection*{Causal Language Modeling} In Causal Language Modeling (CLM), the model has to predict the next token given a sequence of previous tokens. Given a monolingual corpus $D_M$, CLM minimizes the cross-entropy loss: \begin{equation} \mathcal{L}_{CLM} = \mathbb{E}_{x\sim D_M} [-\log{P(x_t|x_{t-1},x_{t-2},\ldots, x_1)}] \end{equation} \normalsize where $x_t$ is the token predicted given the tokens $(x_{t-1},\ldots, x_1)$. CLM has proven to be highly effective in enhancing sequence generation and natural language understanding \citep{radford2019language}. We thus explore its efficacy and leverage it as an auxiliary task in our multitask framework. \subsection*{Training} Both the translation and Causal Language Modeling objectives are trained jointly, and the cross-entropy losses of the two tasks are added together: \begin{equation} \mathcal{L}_{MTL} = \mathcal{L}_{T} + \mathcal{L}_{CLM} \end{equation} \section{Experimental Setup}\label{experiment} We use a standalone mBART50 model as the baseline to compare results on Machine Translation with the multitask methodology proposed in Section \ref{method}. We chose an mBART50 model which uses a standard sequence-to-sequence Transformer architecture with 12 encoder and decoder layers each. This model has been shown to perform relatively well on machine translation tasks in multiple Indian languages. \subsection*{Baseline Models (mBART50)} We make use of the pretrained Huggingface {\fontfamily{qcr}\selectfont Transformers }\footnote{\url{https://huggingface.co/transformers/}} library implementation for mBART50-large. This model was finetuned on the parallel corpus described in Section \ref{data}. The batch size chosen for our baseline models is $16$. We finetune a separate pretrained mBART50 for each language pair in our parallel dataset (for example, En\textrightarrow Mr and Mr\textrightarrow En constitute different models). \subsection*{Multitask Models (MTL-mBART50)} We use the same mBART50 implementation as that of the baseline for our multitask models. The {\fontfamily{qcr}\selectfont Transformers } library does not support multitask learning; hence, we employ an approach inspired by a publicly available implementation for the same\footnote{\url{https://github.com/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb}}. For the sake of a direct comparison of Neural Machine Translation performance between the baseline models and the multitask models, we use the same parallel corpus as that of the baselines. Additionally, the multitask models are trained on the auxiliary task of Causal Language Modeling with the monolingual data described in Section \ref{data}. Due to computational constraints, the batch size selected for the multitask models was $2$. Similar to the baseline models, a separate multitask model was trained for each language pair considered. \begin{figure}[h!] \includegraphics[width=0.5\textwidth,height=10cm]{image} \caption{A multitask setup using the mBART model from the Huggingface {\fontfamily{qcr}\selectfont Transformers } Library. 
The two models above share an encoder, and have different decoders and therefore, make one multitask model. Here, $X$ = input tokens and $Y$ = generated output tokens.} \end{figure} In both the baseline and multitask models, we freeze the first $6$ of mBART50’s $12$ encoder layers. All models were trained for one epoch each on one P100 GPU provided by Google Colab \footnote{\url{https://colab.research.google.com}}. The models were trained using the Adam optimizer \citep{kingma2017adam} with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The learning rate was kept constant at $1e-5$ across the training run. During inference, we decoded the generated sentences with a beam of size $2$ and used a length penalty of $1.2$. We then measure and report the BLEU \citep{papineni-etal-2002-bleu} scores calculated after applying the Smoothing Function method 4 \footnote{\url{https://www.nltk.org/_modules/nltk/translate/bleu_score.html}} in the {\fontfamily{qcr}\selectfont nltk } \citep{bird-loper-2004-nltk} library. \begin{table*}[!htb] \captionsetup{justification=centering,margin=1cm} \centering \resizebox{0.85\textwidth}{30pt}{ \begin{tabular}{lllllllll} \hline \textbf{Model} & &\textbf{Mr\textrightarrow Hi} & \textbf{Hi\textrightarrow Mr} & \textbf{Mr\textrightarrow En} & \textbf{En\textrightarrow Mr} & \textbf{En\textrightarrow Hi} & \textbf{Hi\textrightarrow En}\\ \hline\hline \textbf{Baseline-mBART50}& & 9.48 & 5.61 & 10.17 & 5.33 & 6.49 & 8.12\\ \hline\textbf{MTL-mBART50}& & \textbf{10.33} & \textbf{6.85} & \textbf{11.84} & \textbf{6.47} & \textbf{8.71} & \textbf{9.17}\\ \hline\hline \end{tabular} } \caption{\label{performance-comparison}Resulting BLEU scores on different language pairs for the baseline models and our models.}\label{table3} \end{table*} \section{Results}\label{results} Table \ref{table3} shows the cumulative 4-gram BLEU scores of the baseline as well as the multitask finetuned models on different language pairs. The multitask methodology, when trained on the same parallel corpus as the baseline models, experiences a $10-20\%$ improvement in BLEU scores, and this increase in the metric is consistently seen across evaluations for all the language pairs considered. We ascribe this improvement in BLEU scores to our multitask models' ability to better generate sentences and their increased Natural Language Understanding which is a result of training the Machine Translation task in conjunction with CLM for both, source and target side languages. It would be reasonable to postulate that joint training for the two aforementioned tasks facilitated the generation of more coherent translations, which, in comparison to the baseline model translations, were more similar to the ground truths in the parallel corpus, resulting in better BLEU scores. Considering the amount of bitext data on which both the model variants were trained, the resulting BLEU scores are expectedly low. But since the purpose of this study is to estimate the viability of our proposed method, the increment seen in scores from the multitask models adequately confirm our hypothesis. \section{Conclusion}\label{conclusion} In this work we propose a Multi-Task finetuning methodology for Bilingual Neural Machine translation, which, along with training a model on a bilingual parallel corpus, also trains it on a Causal Language Modeling objective for both the source and target side monolingual data in a self supervised manner. 
We show that this approach outperforms the standard Fine-tuning methodology for Neural Machine Translation for the considered language pairs. This study is of preliminary nature and for future work we aim to train other transformer based language models using the same methodology. Due to the modest computational resources available to us, we were compelled to train on relatively low amounts of data. Therefore, we also hope to use this method to train on larger bitext and monolingual corpora, for more Indian Language pairs. Although the proposed methodology has proven to be effective in our experiments, training models on larger datasets using this multitask framework would add to its credibility.
\section{Introduction} In the modern context of machine learning, deep neural networks (DNNs) have enjoyed enormous success by leveraging the rich availability of labeled data for supervised training. Despite this, deep supervised learning is primarily limited in terms of scaling towards unseen samples due to the high cost of acquiring large amounts of labeled data. This is in clear contrast to how humans learn, where in many cases, only a handful of training examples are sufficient for generalizing towards unseen samples. Few-Shot Learning (FSL) addresses this critical problem through the development of algorithms that can learn using limited data~\cite{finn2017model, nichol2018first, snell2017prototypical,sung2018learning,vinyals2016matching,ye2020few,Hong_2021_CVPR_RAP,wang2020instance}. \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=0.45\linewidth]{Figures/protonet.pdf}\label{fig:protonet intro}}% \hfil \subfigure[]{\includegraphics[width=0.45\linewidth]{Figures/p2s.pdf}\label{fig:p2s intro}}% \hfil \caption{(a): For a query sample in the ``red'' class, outliers (\emph{i.e.}, yellow and white circles) drag the prototype (\emph{i.e.}, the black circle) far away from the real cluster center in the embedding space, such that the nearest neighbor classifier misclassifies the query point into the ``green'' class. (b): Our method computes an adaptive point to set distance on the manifold, which is more robust to outliers than prototypes. Best viewed in color. }\label{fig:compare} \end{figure} Performing FSL well is essential towards creating robust frameworks that can learn with the efficiency of humans. In many cases, FSL methods aim to learn an embedding space to distinguish samples from different classes. Therein, the embedding space is a multidimensional Euclidean space and is realized via a deep neural network. Employing hyperbolic geometry to encode data has been shown to be rewarding, as the volume of space expands exponentially~\cite{ganea2018hyperbolic, khrulkov2020hyperbolic}. Recent works have shown that a hierarchical structure exists within visual datasets and that the use of hyperbolic embeddings can yield significant improvements over Euclidean embeddings~\cite{khrulkov2020hyperbolic,Fang_2021_ICCV}. Most existing FSL solutions learn a metric by comparing the distance between a query sample and the class prototypes, often modeled as the mean embeddings of each class. However, this does not take the adverse effects of outliers and noise into consideration~\cite{sun2019hierarchical}. This severely limits the representation power of embedding-based methods since the outliers may drag the prototype away from the true center of the cluster (see Fig.~\ref{fig:protonet intro}). For a more robust approach, we require an adaptive metric, which can faithfully capture the distribution per class, while being robust to outliers and other nuances in data (Fig.~\ref{fig:p2s intro}). With this in mind, we propose learning a context-aware hyperbolic metric that characterizes the point to set (dis)similarities. This is achieved through employing a Poincar\'e ball to model hyperbolic spaces and casting the (dis)similarity as a weighted-sum between a query and a class that is learned adaptively. In doing so, each sample (from the support and query sets) is modeled by a set itself (\emph{i.e.}, a feature map). 
Therefore, we propose to make use of pairwise distances between elements of two sets, along with a refinement mechanism to disregard uninformative parts of the feature maps. This leads to a flexible and robust framework for the FSL tasks. We summarize our contributions as follows: \begin{itemize} \item We propose a novel adaptive Poincar\'e point to set (APP2S) distance metric for the FSL task. \item We further design a mechanism to produce a weight, dependent on the constellation of the point, for our APP2S metric. \item We conduct extensive experiments across five FSL benchmarks to evaluate the effectiveness of the proposed method. \item We further study the robustness of our method, which shows our method is robust against the outliers compared to competing baselines. \end{itemize} \section{Preliminaries} \label{preliminary} In what follows, we use $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ to denote the $n$-dimensional Euclidean space and space of $m \times n$ real matrices, respectively. The $n$-dimensional hyperbolic space is denoted by $\mathbb{H}_c^n$. The $\mathrm{arctanh}: (-1,1) \to \mathbb{R}, \mathrm{arctanh}(x) = \frac{1}{2}\ln(\frac{1 + x}{1 - x}), |x|<1$ refers to the inverse hyperbolic tangent function. The vectors and matrices (or 3-D tensors) are denoted by bold lower-case letters and bold upper-case letters throughout the paper. \subsection{Riemannian Geometry} In this section, we will give a brief recap of Riemannian geometry. A manifold, denoted by $\mathcal{M}$, is a curved surface, which locally resembles the Euclidean space. The tangent space at $\Vec{x} \in \mathcal{M}$ is denoted by $T_{\Vec{x}} \mathcal{M}$. It contains all possible vectors passing through point $\Vec{x}$ tangentially. On the manifold, the shortest path connecting two points is a geodesic, and its length is used to measure the distances on the manifold. \subsection{Hyperbolic Space} Hyperbolic spaces are Riemannian manifolds with constant negative curvature and can be studied using the Poincar\'e ball model~\cite{ganea2018hyperbolic,khrulkov2020hyperbolic}. The Poincar\'e ball ($\mathbb{D}_c^n, g^{c}$) is a smooth $n$-dimensional manifold identified by satisfying $\mathbb{D}^n_c = \{\Vec{x} \in \mathbb{R}^n: c\lVert \Vec{x} \rVert < 1 , c\geqslant0\}$\footnote{In the supplementary material, we provide further details regarding the Poincar{\'e} ball model and its properties.}, where $c$ is the absolute value of the curvature for a Poincar{\'e} ball, while the real curvature value is $-c$. The Riemannian metric $g^{c}$ at $\Vec{x}$ is defined as $g^{c} = \lambda{_{\Vec{x}}^c}^2g^{E}$, where $g^E$ is the Euclidean metric tensor and $\lambda_{\Vec{x}}^c$ is the conformal factor, defined as: \begin{equation} \lambda_{\Vec{x}}^c\coloneqq\frac{2}{1- c\lVert \Vec{x} \rVert^2}. \end{equation} Since the hyperbolic space is a non-Euclidean space, the rudimentary operations, such as vector addition, cannot be applied (as they are not faithful to the geometry). The M{\"o}bius gyrovector space provides many standard operations for hyperbolic spaces. Essential to our developments in this work is the M{\"o}bius addition of two points $\Vec{x}, \Vec{y} \in \mathbb{D}^n_c$, which is calculated as: \begin{equation}\label{eq:mobius addition} \Vec{x} \oplus_{c} \Vec{y} = \frac{(1+2c \langle \Vec{x},\Vec{y} \rangle + c\|\Vec{y} \|^2)\Vec{x}+(1-c\|\Vec{x}\|^2 )\Vec{y}}{1+2c\langle \Vec{x},\Vec{y} \rangle+c^2\|\Vec{x}\|^2 \|\Vec{y}\|^2}. 
\end{equation} The geodesic distance between two points $\Vec{x}, \Vec{y} \in \mathbb{D}^n_c$ can be obtained as: \begin{equation}\label{eq:distance} d_c(\Vec{x},\Vec{y})=\frac{2}{\sqrt{c}}\text{arctanh}(\sqrt{c}\|-\Vec{x}\oplus_c \Vec{y}\|). \end{equation} Another essential operation used in our model is the hyperbolic averaging. The counterpart of Euclidean averaging in hyperbolic space is the $E\emph{instein mid-point}$ which has the most simple form in $K\emph{lein}$ coordinates (another model of the hyperbolic space which is isometric to the Poincar{\'e} ball). Thus, we transform the points from Poincar{\'e} (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $\Vec{x}_{\mathbb{D}}$) ball model to Klein model (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $\Vec{x}_{\mathbb{K}}$) using the transformation: \begin{equation}\label{eq: p2k} \Vec{x}_{\mathbb{K}} = \frac{2\Vec{x}_{\mathbb{D}}}{1+c\|\Vec{x}_{\mathbb{D}}\|^2}. \end{equation} Then, the hyperbolic averaging in Klein model is obtained as: \begin{equation}\label{eq: hype ave} \text{HypAve}(\Vec{x}_1,\ldots,\Vec{x}_{N})=\sum_{i=1}^N\gamma_i\Vec{x}_i/\sum_{i=1}^N\gamma_i, \end{equation} where $\gamma_i=\frac{1}{\sqrt{1-c\|\Vec{x}_i\|^2}}$ are the Lorentz factors. Finally, we transform the coordinates back to Poincar{\'e} model using: \begin{equation}\label{eq: k2p} \Vec{x}_{\mathbb{D}} = \frac{\Vec{x}_{\mathbb{K}}}{1+\sqrt{1-c\|\Vec{x}_{\mathbb{K}}\|^2}}. \end{equation} In our work, we make use of the tangent bundle of the $\mathbb{D}^n_c$. The logarithm map defines a function from $\mathbb{D}^n_c \to T_{\Vec{x}}\mathbb{D}_c^n$, which projects a point in the Poincar\'e ball onto the tangent space at $\Vec{x}$, as: \begin{equation} \label{eq:logmap} \Vec{\pi}^c_\Vec{x}(\Vec{y})=\frac{2}{\sqrt{c}\lambda^c_\Vec{x}}\text{arctanh}(\sqrt{c}\|-\Vec{x}\oplus_c\Vec{y}\|)\frac{-\Vec{x}\oplus_c\Vec{y}}{\|-\Vec{x}\oplus_c\Vec{y}\|}. \end{equation} \subsection{Point to Set Distance} Let $\mathcal{S} = \{\Vec{s}_1, \ldots, \Vec{s}_k\}$ be a set. The distance from a point $\Vec{p}$ to the set $\mathcal{S}$ can be defined in various forms. The min and max distance from a point $\Vec{p}$ to the set $\mathcal{S}$ are two simple metrics, which can be defined as: \begin{equation}\label{eq:leastdistance} d_{\mathrm{p2s}}^{\mathrm{l}} (\Vec{p}; \mathcal{S})=\inf \{d(\Vec{p}, \Vec{s}_i) | \Vec{s}_i \in \mathcal{S}\}, \end{equation} \begin{equation}\label{eq:highestdistance} d_{\mathrm{p2s}}^{\mathrm{h}} (\Vec{p}; \mathcal{S})=\sup \{d(\Vec{p}, \Vec{s}_i) | \Vec{s}_i \in \mathcal{S}\}, \end{equation} where $\inf$ and $\sup$ are the infimum and supremum functions, respectively. Given their geometrical interpretation, $d_{\mathrm{p2s}}^{\mathrm{l}}$ and $d_{\mathrm{p2s}}^{\mathrm{h}}$ define the lower and upper pairwise bounds, and fail to encode structured information about the set. Therefore, we opt for a weighted-sum formalism to measure the distance between a point and a set in~\S~\ref{sec:adaptive dist}. \section{Method} This section will give an overview of the proposed method, followed by a detailed description of each component in our model. \begin{figure*}[t] \centering \subfigure[]{\includegraphics[width=0.79\linewidth, height = 6cm]{Figures/pipleline3.pdf}\label{fig:framework}}% \hfil \subfigure[]{\includegraphics[width=0.18\linewidth, height = 6cm]{Figures/s2s.pdf}\label{fig:s2s}}% \hfil \caption{(a): The overall pipeline of our method. Given an episode, we use a backbone network to extract and map the inputs to a hyperbolic space. 
We then project the support samples onto the tangent plane of the query point and employ a refinement function $f_{\omega}$ to obtain the class and episode-aware signature of every class in the support set. This is followed by a mapping $f_{\phi}$ that weighs the importance of each sample of the support set w.r.t\onedot} \def\dof{d.o.f\onedot their corresponding class. This enables us to calculate an adaptive point to set distance towards inference. (b): The S2S distance module. We calculate a pair-wise distance between each feature vector of the two feature maps as the input of the distance network $f_{\zeta}$. Then the $f_{\zeta}$ outputs a S2S distance, which is further used to compute the adaptive P2S. }\label{fig:pipeline} \end{figure*} \subsection{Problem Formulation} We follow the standard protocol to formulate few-shot learning (FSL) with episodic training. An episode represents an $N$-way $K$-shot classification problem (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, the training set, named support set, includes $N$ classes where each class has $K$ examples). As the name implies, $K$ (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, the number of examples per class) is small (\emph{e.g}\onedot} \def\Resp{\emph{resp}\onedot, $K = 1$ or $5$). The goal of learning is to realize a function $\mathcal{F}: \mathcal{X} \to \mathbb{R}^n$ to embed the support set to a latent and possibly lower-dimensional space, such that query samples can be recognized easily using a nearest neighbour classifier. To be specific, an episode or task $\mathcal{E}_i$ consists of a query set $\mathcal{X}^q = \{(\Vec{X}^q_{i}, y^q_i) | i = 1, \ldots, N\}$, where $\Vec{X}^q_{i}$ denotes a query example\footnote{Without losing generality, we use one sample per class as a query for presenting our method. In practice, each episode contains multiple samples for the query image per class.} sampled from class $y^q_i$, and a support set $\mathcal{X}^s = \{ (\Vec{X}^s_{ij}, y^s_{i})| i = 1, \ldots, N, j = 1, \ldots, K\}$, where $\Vec{X}^s_{ij}$ denotes the $j$-th sample in the class $y^s_i$. The embedding methods for FSL, our solution being one, often formulate training as: \begin{equation}\label{eq:obj} \mathcal{F}^* : = \argmin_{\mathcal{F}} \sum_{\Mat{X}^q_{u}\in\mathcal{X}^q}\delta\big( \mathcal{F}(\Vec{X}^q_u), \mathcal{F}({\mathcal{X}}^s_v) \big)~~\text{s.t.}~y^q_u = y^s_v, \end{equation} where $\delta$ measures a form of distance between the query and the support samples. \subsection{Model Overview} We begin by providing a sketch of our method (see the conceptual diagram in Fig.~\ref{fig:framework} and Fig.~\ref{fig:s2s}). The feature extractor network, denoted by $\mathcal{F}$, maps the input to a hyperbolic space in our work. We model every class in the support set by its signature. The signature is both class and episodic-aware, meaning that the signature will vary if the samples of a class or samples in the episode vary. This will enable us to calculate an adaptive distance from the query point to every support-class while being vigilant to the arrangement and constellation of the support samples. We stress that our design is different from many prior works where class-specific prototypes are learned for FSL. For example, in~\cite{ khrulkov2020hyperbolic,snell2017prototypical,sung2018learning}, the prototypes are class-specific but not necessarily episodic-aware. 
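Before detailing each component, the following minimal NumPy sketch illustrates the two Poincar\'e-ball operations from the preliminaries that are used throughout, namely the M{\"o}bius addition and the induced geodesic distance; it is an illustrative implementation rather than the code used in our experiments.

\begin{verbatim}
import numpy as np

def mobius_add(x, y, c=1.0):
    # Mobius addition of two points on the Poincare ball
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den

def poincare_dist(x, y, c=1.0):
    # Geodesic distance d_c(x, y) induced by the Mobius addition
    diff = mobius_add(-x, y, c)
    return (2.0 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * np.linalg.norm(diff))

x = np.array([0.10, 0.20, -0.05])
y = np.array([-0.30, 0.05, 0.20])
print(poincare_dist(x, y))   # positive distance between two distinct points
print(poincare_dist(x, x))   # ~0: every point is at zero distance from itself
\end{verbatim}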
To obtain the signatures for each class in the support set, we project the support samples onto the tangent space of the query point and feed the resulting vectors to a signature generator $f_{\omega}$. The signature generator realizes a permutation-invariant function and refines and summarizes its inputs to one signature per class. We then leverage a relational network $f_{\phi}$ to contrast samples of a class against their associated signature and produce a relational score. To obtain the adaptive P2S distance, we first compute a set to set (S2S) distance between the query feature map and each support feature map using the distance module $f_{\zeta}$. Moreover, a weighted-sum is calculated using the relational score acting as the weight on the corresponding S2S distance, which serves as the P2S distance. Given P2S distances, our network is optimized by minimizing the adaptive P2S distance between the query and its corresponding set while ensuring that the P2S distance to other classes (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, wrong classes) is maximized. \subsection{Adaptive Poincar\'e Point to Set Distance}\label{sec:adaptive dist} In FSL, we are given a small support set of $K$ images, $\mathcal{X}^s_i = \{\Vec{X}^s_{i1}, \ldots, \Vec{X}^s_{iK}\}$ per class $y^s_{i}$ to learn a classification model. We use a deep neural network to first encode the input to a multi-channel feature map, as $\mathcal{S}_i = \mathcal{F}(\mathcal{ X}_i^s)$, with $\mathcal{S}_i = \{\Mat{S}_{i1}, \ldots, \Mat{S}_{iK} |\Mat{S}_{ij} \in \mathbb{R}^{H\times W\times C} \}$, where $H$, $W$, and $C$ indicate the height, width, and channel size of the instance feature map. Each feature map consists of a set of patch descriptors (local features), which can be further represented as $\Mat{S}_{ij}=\{\Vec{s}_{ij}^{1},\ldots,\Vec{s}_{ij}^{HW} | \Vec{s}_{ij}^r \in \mathbb{R}^C \}$. In our work, we train the network to embed the representation in the Poincar\'e ball; thus, we need to impose a constraint on patch descriptors at each spatial location $\Vec{s}_{ij}^{r}$ as follows: \begin{equation} \label{eq: clip norm implement} \Vec{s}_{ij}^{r}=\left\{\begin{matrix} \Vec{s}_{ij}^{r} &~\text{if}~\|\Vec{s}_{ij}^{r}\| \leq \mu \\ \mu \Vec{s}_{ij}^{r} / \|\Vec{s}_{ij}^{r}\| &~~\text{if}~\|\Vec{s}_{ij}^{r}\| > \mu, \end{matrix}\right. \end{equation} where $\mu$ is the norm upper bound of the vectors in the Poincar{\'e} ball. In our model, we choose $\mu=(1-\epsilon)/c$, where $c$ is the curvature of the Poincar\'e ball and $\epsilon$ is a small value that makes the system numerically stable. The same operation applies to the query sample, thereby obtaining an instance feature map for the query sample $\Mat{Q} = \{\Vec{q}^{1}, \ldots, \Vec{q}^{HW}\}, \Vec{q}^{r}\in\mathbb{D}^C_c$. Then the P2S distance between the query sample $\Mat{Q}$ and the support set per class $\mathcal{S}_i$ can be calculated using Eq.~\eqref{eq:leastdistance} or Eq.~\eqref{eq:highestdistance}. However, those two metrics only determine the lower or upper bound of P2S distance, thereby ignoring the structure and distribution of the set to a great degree. To make better use of the distribution of samples in a set, we propose the adaptive P2S distance metric as: \begin{equation}\label{eq:adaptive} d_{\mathrm{p2s}}^{\mathrm{adp}}(\Mat{Q}; {\mathcal{S}}_i) \coloneqq \frac{\sum\limits_{j = 1}^{K} w_{ij}d(\Mat{Q}, {\Mat{S}}_{ij})} {\sum\limits_{j = 1}^K w_{ij}}, \end{equation} where $w_{ij}$ is the adaptive factor for $d(\Mat{Q}, {\Mat{S}}_{ij})$. 
We refer to the distance in Eq.~\eqref{eq:adaptive} as Adaptive Poincar\'e Point to Set (APP2S) distance, hereafter. In Eq.~\eqref{eq:adaptive}, we need to calculate the distance between two feature maps (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $d(\Mat{Q}, \Mat{S}_{ij})$). In doing so, we formulate a feature map as a set (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $\{\Vec{q}^{1},\ldots,\Vec{q}^{HW}\}$ and $\{\Vec{s}_{ij}^{1},\ldots,\Vec{s}_{ij}^{HW}\}$), such that a set to set (S2S) distance can be obtained. One-sided Hausdorff and two-sided Hausdorff distances~\cite{huttenlocher1993comparing} are two widely used metrics to measure the distance between sets. However, these two metrics are sensitive to outliers~\cite{huttenlocher1993comparing,ibanez2008use}. To alleviate this issue, we propose to learn the S2S distance by a network $f_{\zeta}$. We first calculate the pair-wise distance between two sets as $\Mat{D}(\Mat{Q}, \Mat{S}_{ij}) \in \mathbb{R}^{HW \times HW}$, where each element in $\Mat{D}$ is obtained by $d_{h, w} = d_c(\Vec{q}^h, \Vec{s}_{ij}^w )$ using Eq.~\eqref{eq:distance}, where $h = 1, \ldots, HW$ and $w = 1, \ldots, HW$. Then we use a neural network to further learn the distance between two feature maps (see Fig.~\ref{fig:s2s}), which is given by: \begin{equation}\label{eq: s2s} d_{\mathrm{s2s}}(\Mat{Q}, \Mat{S}_{ij}) : = f_{\zeta}(\Mat{D}). \end{equation} Comparing to the Hausdorff distance~\cite{Conci2018DistanceBS} (see supplementary material), our set to set distance is more flexible and learned through the optimization process. To further obtain the weights of APP2S (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $w_{ij}$), we make use of the tangent space of the query sample. We first compute a mean query vector $\Vec{\bar{q}}$ over the spatial dimensions of the feature map $\Mat{Q}$ using Eq.~\eqref{eq: p2k}- Eq.~\eqref{eq: k2p}. Then, we project the samples in the support set to the tangent space of the mean query vector (see Fig.~\ref{fig:framework}), using the logarithm map as: \begin{equation} \tilde{\mathcal{S}} = \Vec{\pi}^c_{\bar{\Vec{q}}}({\mathcal{S}}), \end{equation} where $\tilde{\mathcal{S}}$ indicates the projected support set on the tangent space at $\bar{\Vec{q}}$. For the $i$-th class, we can obtain a set of feature maps: $\tilde{\mathcal{S}}_i = \{\tilde{\Mat{S}}_{i1}, \ldots, \tilde{\Mat{S}}_{iK} \}$\footnote{The projected feature map is also composed by the vectors at each spatial location $\tilde{\Mat{S}}_{ij}=\{\tilde{\Vec{s}}_{ij}^{{1}},\ldots, \tilde{\Vec{s}}_{ij}^{{HW}}\}$}. To obtain a meaningful weight $w_{ij}$, we first propose a signature generator, which jointly refines sample representations in the support set and summarizes the set representation per class as the class signature. As shown in Fig.~\ref{fig:framework}, the signature generator receives the projected support set $\tilde{\mathcal{S}}= \{\tilde{\mathcal{S}}_{1}, \ldots, \tilde{\mathcal{S}}_{N} \}$ as input and refines them for the follow-up task (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, obtaining the weights for the APP2S). We denote the output of the refinement module by $ \hat{\mathcal{S}}= f_{\omega}(\tilde{\mathcal{S}})$ ($\hat{\mathcal{S}} = \{\hat{\mathcal{S}}_{1}, \ldots, \hat{\mathcal{S}}_{N} \}$, $\hat{\mathcal{S}}_{i} = \{\hat{\Mat{S}}_{i1}, \ldots, \hat{\Mat{S}}_{iK} \}$). 
One can understand the refinement function as learning the context of the support set by seeing all the samples, thereby highlighting the discriminative samples and restraining the non-informative samples such as outliers for all the samples. Then the signature for each class is obtained by summarizing as: $\bar{\Vec{S}}_{i} ={ \sum_{j=1}^{K}\hat{\Mat{S}}_{ij}}/{K}$. \begin{remark} Our proposed set-signature generator $f_{\omega}$ is similar to the set-to-set function in FEAT~\cite{ye2020few}, in the sense that both functions perform self-attention over the input features. However, the fundamental difference is that our module exploits the relation between the spatial feature descriptors of all samples in a support set~(\emph{e.g}\onedot} \def\Resp{\emph{resp}\onedot, $\tilde{\Vec{s}}^{r}_{ij}$), instead of prototypes as proposed in FEAT~\cite{ye2020few}, which possibly gives the model more flexibility to encode meaningful features. \end{remark} Given sample features in a class $\tilde{\mathcal{S}}_i = \{\tilde{\Mat{S}}_{i1}, \ldots, \tilde{\Mat{S}}_{iK} \}$ and the corresponding class signature $\bar{\Mat{S}}_{i}$, we use a relation generator (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $f_{\phi}$ in Fig.~\ref{fig:framework}) to compare the relationship between an individual feature map and the class signature. In doing so, we first concatenate the individual feature maps and their class signature along the channel dimension to obtain a hybrid representation, as: \begin{equation} \Mat{G}_{ij} = \mathrm{CONCAT}(\tilde{\Mat{S}}_{ij}, \bar{\Mat{S}}_{i}). \end{equation} Given the hybrid representation $\Mat{G}_{ij}$, the relation generator produces a relation score as: ${w}_{ij} = f_{\phi}(\Mat{G}_{ij})$. This score will serve as the adaptive factor for the APP2S distance metric in Eq.~\eqref{eq:adaptive}. Note that the hybrid representation for the whole support and a support-class set are denoted by $\mathcal{G}=\{\mathcal{G}_1,\ldots,\mathcal{G}_N\}$ and $\mathcal{G}_{i}=\{\Mat{G}_{i1},\ldots,\Mat{G}_{iK}\}$, respectively. Algorithm \ref{alg: Online Adaptation} summarizes the process of training our APP2S metric for FSL. \begin{remark}\label{remark} The point to set distance defined by Eq.~\eqref{eq:adaptive} is different from that in MatchingNet~\cite{vinyals2016matching}. MatchingNet formulates all the samples in the support set as a set. In contrast, we treat the samples in a class as the set, which makes our adaptive point to set distance fully contextual aware of the whole support set (by the set-signature) and encodes the distribution of each class. \end{remark} \begin{figure}[!h] \begin{center} \scalebox{1}{ \includegraphics[width=0.9\linewidth]{Figures/signature_relation.pdf} } \end{center} \caption{The information flow of the signature generator and the relation generator. The signature generator $f_{\omega}$ receives the projected support set $\{\tilde{\mathcal{S}}_1, \tilde{\mathcal{S}}_2\}$ as a bag and outputs the refined representation per sample such that each element (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $\tilde{\Mat{S}}_{ij}$) sees all the other elements of the support set. Then the refined representations per class/set $\hat{\mathcal{S}}_1$ and $\hat{\mathcal{S}}_2$ are averaged to obtain the signature per class/set $\bar{\Mat{S}}_1$ and $\bar{\Mat{S}}_2$. 
Finally, we concatenate one projected sample $\tilde{\Mat{S}}_{1j}$ and $\tilde{\Mat{S}}_{2j}$ with the corresponding class signature $\bar{\Mat{S}}_1$ and $\bar{\Mat{S}}_2$, repectively, and feed it into relation generator $f_{\phi}$ to produce the adaptive factors $\omega_{1j}$ and $\omega_{2j}$.} \label{fig:sig relation} \end{figure} \begin{algorithm}[h] \caption{Train network using adaptive Poincar{\'e } point to set distance} \textbf{Input:} An episodes $\mathcal{E}$, with their associated support set $\mathcal{X}^s = \{ (\Vec{X}^s_{ij}, y^s_{i})| i = 1, \ldots, N, j = 1, \ldots, K\}$ and a query sample $\Mat{X}^q$ \textbf{Output:} The optimal parameters for $\mathcal{F}\text{,}~ f_{\omega}, f_{\zeta},~\text{and}~f_{\phi}$ \begin{algorithmic}[1] \State Map $\mathcal{X}^s$ and $\Mat{X}^q$ into Poincar{\'e} ball \State Obtain the tangent support set $\tilde{\mathcal{S}}$ using Eq.~\eqref{eq:logmap} \State $\hat{\mathcal{S}}=f_{\omega}(\tilde{\mathcal{S}})$ \Comment{the refined support set} \For {$i$ in $\{1,...,N\}$} \State $\bar{\Mat{S}}_{i} ={ \sum_{j=1}^{K}\hat{\Mat{S}}_{ij}}/{K}$ \Comment{ the set signature} \State $\Mat{G}_{ij}= \mathrm{CONCAT}(\tilde{\Mat{S}}_{ij}, \bar{\Mat{S}}_{i})$ \Comment{ the hybrid representation} \State $\omega_{ij}=f_{\phi}(\Mat{G}_{ij})$\Comment{ the weight } \State Compute point to set distance and set to set distance using Eq.~\eqref{eq:adaptive} and Eq.~\eqref{eq: s2s} \EndFor \State Optimize the model using Eq.~\eqref{eq:obj} \end{algorithmic} \label{alg: Online Adaptation} \end{algorithm} \section{Related Work} In this section, we discuss the literature on few-shot learning and highlight those that motivate this work. Generally, there are two main branches on the few-shot learning literature, optimization-based and metric-based methods. The optimization-based methods~\cite{antoniou2019train,chen2019closer,finn2017model, flennerhag2019meta,franceschi2018bilevel, nichol2018first}, such as MAML and Reptile~\cite{finn2017model,nichol2018first}, aim to learn a set of initial model parameters that can adapt to new tasks quickly using backpropagation in the episodic regime, without severe overfitting. However, this group of methods usually adopt a bi-level optimization setting to optimize the initial parameters, which is computationally expensive during inference. On the other hand, our proposed method is closer to metric-based methods~\cite{simon2020adaptive,snell2017prototypical,sung2018learning,vinyals2016matching, ye2020few,zhang2020deepemd,tang2020blockmix}, which target to realize an embedding: $\mathbb{R}^M \rightarrow \mathbb{R}^D$ to represent images in semantic space equipped with an appropriate distance metric such that different categories are distinctive. Matching Network~\cite{vinyals2016matching} determines the query labels by learning a sample-wise distance along with a self-attention mechanism that produces a fully contextualized embedding over samples. Prototypical Network~\cite{snell2017prototypical} takes a step further from a sample-wise to a class-wise metric, where all the samples of a class are averaged into a prototype to represent the class in the embedding space. Relation Network~\cite{sung2018learning} and CTM~\cite{li2019finding} replace the hand-crafted metric with a network to encode the non-linear relation between the class representations and the query embedding. Ye \emph{et al}\onedot~\cite{ye2020few} propose adopting a transformer to learn the task-specific features for few-shot learning. 
Zhang \emph{et al}\onedot~\cite{zhang2020deepemd} adopt the Earth Mover's Distance as a metric to compute a structural distance between representation to obtain the labels for the query images. Simon \emph{et al}\onedot~\cite{simon2020adaptive} propose to generate a dynamic classifier via using subspace. Along this line of research, most of the previous methods utilize the global feature vectors as representations. However, several recent works have demonstrated that utilizing the local feature maps can further boost performance. Therefore, we follow these works~\cite{doersch2020crosstransformers, zhang2020deepemd, wertheimer2021few, lifchitz2019dense, li2019revisiting} to develop our model. However, the majority of the aforementioned metric-based works employ various metrics within Euclidean space. Ganea \emph{et al}\onedot~\cite{ganea2018hyperbolic} have proved that embedding via hyperbolic geometry enjoys low distortion for hierarchical and structured data (\emph{e.g}\onedot} \def\Resp{\emph{resp}\onedot, trees) and developed the hyperbolic version of the feed-forward neural networks and recurrent neural networks~(RNN). Moreover, a recent work~\cite{khrulkov2020hyperbolic} has shown that the vision tasks can largely benefit from hyperbolic embeddings, which inspires us to further develop algorithms with hyperbolic geometry. \section{Experiments} \subsection{Datasets} In this section, we will empirically evaluate our approach across five standard benchmarks, \emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, \emph{mini}-ImageNet~\cite{ravi2016optimization}, \emph{tiered}-ImageNet~\cite{ren2018meta}, Caltech-UCSD Birds-200-2011 (CUB)~\cite{wah2011caltech}, CIFAR-FS~\cite{bertinetto2018meta} and Fewshot-CIFAR100 (FC100)~\cite{oreshkin2018tadam}. Full details of the datasets and implementation are described in the supplementary material. In the following, we will briefly describe our results on each dataset. \subsection{Main Result} We evaluate our methods for 100 epochs, and in each epoch, we sample 100 tasks (episodes) randomly from the test set, for both 5-way 1-shot and 5-way 5-shot settings. Following the standard protocol~\cite{simon2020adaptive}, we report the mean accuracy with 95$\%$ confidence interval. 
\begin{table*}[h] \begin{center} \scalebox{0.82}{ \begin{tabular}{c c| c c|c c} \Xhline{2\arrayrulewidth} \multirow{2}{*}{\bf{Model}} &\multirow{2}{*}{\bf{Backbone}} &\multicolumn{2}{c|}{\bf{\emph{mini}-ImageNet}} &\multicolumn{2}{c}{\bf{\emph{tiered}-ImageNet}} \\ & & \bf{5-way 1-shot} &\bf{5-way 5-shot} & \bf{5-way 1-shot} &\bf{5-way 5-shot} \\ \toprule\bottomrule ProtoNet~\cite{snell2017prototypical} &ResNet-12 &$60.37\pm{0.83}$ &$78.02\pm{0.57}$ &$61.74\pm{0.77}$ &$80.00\pm{0.55}$ \\ MatchingNet~\cite{vinyals2016matching} &ResNet-12 &$63.08\pm{0.80}$ &$75.99\pm{0.60}$ &$68.50\pm{0.92}$ &$80.60\pm{0.71}$ \\ MetaOptNet~\cite{lee2019meta} &ResNet-12 &$62.64\pm{0.61}$ &$78.63\pm{0.46}$ &$65.99\pm{0.72}$ &$81.56\pm{0.53}$ \\ Ravichandran \emph{et al}\onedot~\cite{ravichandran2019few} &ResNet-12 &$60.71$ &$77.64$ &$66.87$ &$82.64$ \\ DeepEMD~\cite{zhang2020deepemd} &ResNet-12 &$65.91\pm{0.82}$ &$82.41\pm{0.56}$ &$71.16\pm{0.87}$ &$86.03\pm{0.58}$ \\ P-transfer~\cite{shen2021partial} &ResNet-12 &$64.21\pm{0.77}$ &$80.38\pm{0.59}$ &- &- \\ GLoFA~\cite{lu2021tailoring} &ResNet-12 &$66.12\pm{0.42}$ &$81.37\pm{0.33}$ &$69.75\pm{0.33}$ &$83.58\pm{0.42}$ \\ DMF~\cite{xu2021learning} &ResNet-12 &$\bm{67.76\pm{0.46}}$ &$82.71\pm{0.31}$ &$71.89\pm{0.52}$ &$85.96\pm{0.35}$ \\ \rowcolor{LightCyan} Hyperbolic ProtoNet~\cite{khrulkov2020hyperbolic} &ResNet-12 &$*60.65\pm{0.18}$ &$*76.13\pm{0.21}$ &$*67.38\pm{0.14}$ &$*79.11\pm{0.22}$ \\ \hline \rowcolor{Gray} \bf{Ours~(APP2S)} &ResNet-12 &$66.25\pm{0.20}$ &$\bm{83.42\pm{0.15}}$ &$\bm{72.00}\pm{0.22}$ &$\bm{86.23\pm{0.15}}$ \\ \Xhline{2\arrayrulewidth} LwoF~\cite{gidaris2018dynamic} &WRN-28-10 &$60.06\pm{0.14}$ &$76.39\pm{0.11}$ &- &- \\ wDAE-GNN~\cite{gidaris2019generating} &WRN-28-10 &$61.07\pm{0.15}$ &$76.75\pm{0.11}$ &$68.18\pm{0.16}$ &$83.09\pm{0.12}$ \\ LEO~\cite{rusu2018meta} &WRN-28-10 &$61.76\pm{0.08}$ &$77.59\pm{0.12}$ &$66.33\pm{0.05}$ &$82.06\pm{0.08}$ \\ Su \emph{et al}\onedot~\cite{su2020does} &ResNet-18 &- &$76.60\pm{0.70}$ &- &$78.90\pm{0.70}$ \\ AFHN~\cite{li2020adversarial} &ResNet-18 &$62.38\pm{0.72}$ &$78.16\pm{0.56}$ &- &- \\ Neg-Cosine~\cite{liu2020negative} &ResNet-18 &$62.33\pm{0.82}$ &$80.94\pm{0.59}$ &- &- \\ \rowcolor{LightCyan} Hyperbolic ProtoNet~\cite{khrulkov2020hyperbolic} &ResNet-18 &$*57.05\pm{0.16}$ &$*74.20\pm{0.14}$ &$*66.20\pm{0.12}$ &$*76.50\pm{0.13}$ \\ \hline \rowcolor{Gray} \bf{Ours~(APP2S)} &ResNet-18 &$64.82\pm{0.12}$ &$81.31\pm{0.22}$ &$70.83\pm{0.15}$ &$84.15\pm{0.29}$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{Few-shot classification accuracy and 95 \% confidence interval on \emph{mini}-ImageNet and \emph{tiered}-ImageNet with ResNet backbones. 
``*" notes the result obtained by the self-implemented network.} \label{tab: mini and tier} \end{table*} \begin{table*}[h] \begin{center} \scalebox{0.9}{ \begin{tabular}{c c| c c | c c} \Xhline{2\arrayrulewidth} \multirow{2}{*}{\bf{Model}} &\multirow{2}{*}{\bf{Backbone}} &\multicolumn{2}{c|}{\bf{CIFAR-FS}} &\multicolumn{2}{c}{\bf{FC100}} \\ & &\bf{5-way 1-shot} &\bf{5-way 5-shot} &\bf{5-way 1-shot} &\bf{5-way 5-shot} \\ \toprule\bottomrule TEAM~\cite{qiao2019transductive} &ResNet-12 &$70.40$ &$81.30$ &- &- \\ ProtoNet~\cite{snell2017prototypical} &ResNet-12 &$72.20\pm{0.70}$ &$83.50\pm{0.50}$ &$37.50\pm{0.60}$ &$52.50\pm{0.60}$ \\ TADAM~\cite{oreshkin2018tadam} &ResNet-12 &- &- &$40.10\pm{0.40}$ &$56.10\pm{0.40}$ \\ DeepEMD~\cite{zhang2020deepemd} &ResNet-12 &- &- &$46.47\pm{0.78}$ &$63.22\pm{71}$ \\ \rowcolor{LightCyan} Hyperbolic ProtoNet~\cite{khrulkov2020hyperbolic} &ResNet-12 &*$70.27\pm{0.22}$ &*$80.98\pm{0.16}$ &*$36.04\pm{0.18}$ &*$51.60\pm{0.18}$ \\ \hline \rowcolor{Gray} \bf{Ours~(APP2S)} &ResNet-12 & $\bm{73.12\pm{0.22}}$ & $\bm{85.69\pm{0.16}}$ & $\bm{47.64\pm{0.21}}$ &$\bm{63.56\pm{0.22}}$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{Few-shot classification accuracy and 95 \% confidence interval on CIFAR-FS and FC100 with ResNet-12 backbones. ``*" notes the result obtained by the self-implemented network.} \label{tab: cifar} \end{table*} \begin{table}[h] \begin{center} \scalebox{0.7}{ \begin{tabular}{c | c c} \Xhline{2\arrayrulewidth} \bf{Model} &\bf{5-way 1-shot} &\bf{5-way 5-shot} \\ \toprule\bottomrule MAML~\cite{finn2017model} &$69.96\pm{1.01}$ &$82.70\pm{0.65}$ \\ RelationNet~\cite{sung2018learning} &$67.59\pm{1.02}$ &$82.75\pm{0.58}$ \\ Chen \emph{et al}\onedot~\cite{chen2019closer} &$67.02$ &$83.58$ \\ MatchingNet~\cite{vinyals2016matching} &$72.36\pm{0.90}$ &$83.64\pm{0.60}$ \\ SimpleShot~\cite{wang2019simpleshot} &$70.28$ &$86.37$ \\ ProtoNet~\cite{snell2017prototypical} &$71.88\pm{0.91}$ &$87.42\pm{0.48}$ \\ DeepEMD~$^\clubsuit$~\cite{zhang2020deepemd} &$76.65\pm{0.83}$ &$88.69\pm{0.50}$ \\ P-transfer~$^\clubsuit$~\cite{shen2021partial} &$73.88\pm{0.87}$ &$87.81\pm{0.48}$ \\ \rowcolor{LightCyan} Hyperbolic ProtoNet~\cite{khrulkov2020hyperbolic} &$*73.70\pm{0.22}$ &$*85.55\pm{0.13}$ \\ \hline \rowcolor{Gray} \bf{Ours~(APP2S)} & $\bm{77.64\pm{0.19}}$ & $\bm{90.43\pm{0.18}}$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{Few-shot classification accuracy and 95 \% confidence interval on CUB. ``*" notes the result obtained by the self-implemented network. ``$\clubsuit$" denotes the method using ResNet-12 as the backbone, otherwise ResNet-18.} \label{tab: cub} \end{table} \noindent{\bf{\emph{mini}-ImageNet}}. As shown in Table~\ref{tab: mini and tier}, we evaluate our model using ResNet-12 and ResNet-18 as the backbones on \emph{mini}-ImageNet. Between them, ResNet-12 produces the best results. In addition, our model also outperforms recent state-of-the-art models in most of the cases. Interestingly, our model further outperforms hyperbolic ProtoNet by 7.77\% and 7.11\% for 5-way 1-shot and 5-way 5-shot with ResNet-18, respectively. With ResNet-12, we outperform the hyperbolic ProtoNet by 5.60\% and 7.29\% for 5-way 1-shot and 5-way 5-shot, respectively. \noindent{\bf{\emph{tiered}-ImageNet}}. We further evaluate our model on \emph{tiered}-ImageNet with ResNet backbones. 
The results in Table \ref{tab: mini and tier} indicate that with ResNet-12, our model outperforms the hyperbolic ProtoNet by 4.62\% and 7.12\% for 5-way 1-shot and 5-way 5-shot, respectively, and achieves state-of-the-art results for inductive few-shot learning. \noindent{\bf{CIFAR-FS} and \bf{FC100}}. As the results in Table \ref{tab: cifar} suggested, our model also achieves comparable performance with the relevant state-of-the-state methods on this dataset, with ResNet-12 backbone, which vividly shows the superiority of our method. \noindent{\bf{CUB.}} We use ResNet-18 as our backbone to evaluate our method on the CUB dataset. Table~\ref{tab: cub} shows that our model improves the performance over baseline by 3.94\% and 4.88\% for 5-way 1-shot and 5-way 5-shot settings, respectively. Besides, our model achieves 77.64\% and 90.43\% for 5-way 1-shot and 5-way 5-shot settings on this dataset, which outperforms state-of-the-art models (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, DeepEMD~\cite{zhang2020deepemd} and P-transfer~\cite{shen2021partial}) and achieve competitive performance on this dataset. \subsection{Robustness to Outliers} \label{Robutness analysis} To further validate the robustness of our method, we conduct experiments in the presence of outliers in the form of mislabelled images. In the first study, we add a various number of outliers (\emph{e.g}\onedot} \def\Resp{\emph{resp}\onedot, 1, 2, 3, 4), whose classes are disjoint to the support-class, to each class of the support set. We performed this study with ResNet-12 backbone on the 5-way 5-shot setting on \emph{tiered}-ImageNet. Fig.~\ref{fig:outa} shows that the performances of hyperbolic ProtoNet degrade remarkably. On the contrary, both our APP2S and Euclidean AP2S are robust to outliers, which shows the superiority of our adaptive metric. Comparing to Euclidean AP2S, APP2S is even more robust (see the slope of Fig.~\ref{fig:outa}) and performs consistently even in the presence of 20 outliers. This suggests that integrating our proposed adaptive metric and hyperbolic geometry can further bring robustness to our framework. In the second study (shown in Fig.~\ref{fig:outb}), we conduct the same experiments on \emph{mini}-ImageNet. The results show a similar trend as the previous one, which further proves the effectiveness of our proposed method. \begin{figure}[!h] \centering \subfigure[]{\includegraphics[width=0.48\linewidth]{Figures/re1.pdf}\label{fig:outa}}% \hfil \subfigure[]{\includegraphics[width=0.48\linewidth]{Figures/re2.pdf}\label{fig:outb}}% \hfil \caption{Robustness analysis. Horizontal axis: The number of outliers. Vertical axis: Accuracy. (a): The performance vs. the number of outliers on \emph{tiered}-ImageNet. (b): The performance vs. the number of outliers on \emph{mini}-ImageNet. }\label{fig:compare_1} \end{figure} \subsection{Ablation Study} \label{Ablation Study} We further conduct the ablation study to verify the effectiveness of each component in our method on the \emph{tiered}-ImageNet dataset using the ResNet-12 backbone. \noindent{\bf{Experiments Set-Up}}. For setting (ii) in Table~\ref{tab: ablation study}, we disable the relation module $f_{\phi}$ and signature generator $f_{\omega}$. The P2S distance can be obtained by Eq.~\eqref{eq:adaptive} and Eq.~\eqref{eq: s2s} with equal weights (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, $1$). Moreover, we enable the relation generator $f_{\phi}$ but not the signature generator in setting (iii). 
We use the class prototype instead of the signature for this experiment. We enable both $f_{\phi}$ and $f_{\omega}$ and use the Euclidean distances for setting (iv). In the end, we enable the Poincar{\'e} ball but disable the $f_{\zeta}$ for setting (v). In terms of implementation of (v), the backbone is designed to output a feature vector instead of a feature map, such that the P2S distance can be directly computed by Eq.~\eqref{eq:distance} and Eq.~\eqref{eq:adaptive}. \noindent{\bf{Effectiveness of Point to Set Distance}}. In this experiment, we first evaluate the effectiveness of the P2S distance by comparing to its point to point (P2P) distance counterpart (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, hyperbolic ProtoNet). From Table~\ref{tab: ablation study}, we could observe that the P2S distance can learn a more discriminative embedding space than P2P distance (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, (i) vs. (ii)), and the adaptive P2S can further bring performance gain to our application (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, (ii) vs. (iii)). This observation shows the potential of our P2S distance setting in the FSL task. \noindent{\bf{Effectiveness of Signature Generator}}. We further evaluate another essential component in our work, \emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, the signature generator, which refines the entire support set and produces a signature per class. As shown in Table~\ref{tab: ablation study} (\emph{i.e}\onedot} \def\Ie{\emph{i.e}\onedot, (iii) and (vi)), we could observe that our method benefits from the signature generator, which shows that the signature of each class could help to generate an informative weight for individual feature map within the same class. \noindent{\bf{Effectiveness of Hyperbolic Geometry}}. We also implement our model in the Euclidean space to verify the effectiveness of our method. The row (iv) and (vi) in Table \ref{tab: ablation study} vividly show that the representation in the Poincar\'e ball has a richer embedding than that in Euclidean spaces. \noindent{\bf{Effectiveness of Set to Set Distance}}. The comparison between (v) and (vi) shows that our set to set distance generator associated with the feature map outputs richer information than using a feature vector to directly compute the APP2S. 
\begin{table}[H] \begin{center} \scalebox{0.72}{ \begin{tabular}{c| c | c|c|c|c| c| c } \Xhline{2\arrayrulewidth} \multirow{2}{*}{\bf{ID}} &\multirow{2}{*}{\bf{Model}} &\multirow{2}{*}{$\mathbb{{D}}^n_c$} &\multirow{2}{*}{P2S} &\multirow{2}{*}{${f}_{\phi}$} &\multirow{2}{*}{${f}_{\omega}$} &\multirow{2}{*}{${f}_{\zeta}$} &{\bf{\emph{tiered}-ImageNet}} \\ &&&&&& &\bf{5-way 5-shot} \\ \toprule\bottomrule (i) &Hyperbolic ProtoNet & \checkmark & & & & &$79.11\pm{0.22}$ \\ (ii) &Hyperbolic P2S w/o~$f_{\phi}$ &\checkmark &\checkmark & & &\checkmark &$83.14\pm{0.17}$ \\ (iii) &Hyperbolic P2S w/~$f_{\phi}$ &\checkmark &\checkmark &\checkmark & &\checkmark &$84.88\pm{0.17}$ \\ (iv) &Euclidean AP2S & &\checkmark &\checkmark &\checkmark &\checkmark &$81.96\pm{0.18}$ \\ (v) & APP2S w/o $f_{\zeta}$ &\checkmark &\checkmark &\checkmark &\checkmark & &$84.12\pm{0.13}$ \\ \hline \rowcolor{Gray} (vi) &\bf{APP2S} &\checkmark &\checkmark &\checkmark &\checkmark &\checkmark &$\bm{86.23\pm{0.15}}$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{The ablation study of our model, we start from the hyperbolic ProtoNet~\cite{khrulkov2020hyperbolic} towards APP2S.} \label{tab: ablation study} \end{table} \section{Conclusion} In this paper, we propose a novel adaptive Poincar{\'e} point to set (APP2S) distance metric for the few-shot learning, which can adapt depending on the samples at hands. Empirically, we showed that this approach is expressive with both hyperbolic geometry and Euclidean counterpart. Our model improves the performances over baseline models and achieves competing results on five standard FSL benchmarks. \section{Supplementary Material} In this supplementary material, we provide an additional description of operations in Poincar{\'e} Ball and details of the each public few-shot learning benchmark we used. Furthermore, we conduct additional experiments, including ablation studies on the effect of the curvature $c$, global feature vector implementation and parameter and time complexity analysis to analyze the our model. Finally, we provide the details of the implementation of our model and extra visualizations and discussion of APP2S. \subsection{Hyperbolic Operations} {\bf{\noindent{Exponential Map}}}. The exponential map defines a function from $T_{\Vec{x}}\mathbb{D}_c^n \to \mathbb{D}^n_c $, which maps Euclidean vectors to the hyperbolic space. Formally, it is defined as: \begin{equation} \label{eq:expmap} \Vec{\Omega}^c_\Vec{x}(\Vec{v})=\Vec{x}\oplus_c(\text{tanh}(\sqrt{c}\frac{\lambda^c_{\Vec{x}}\|\Vec{v}\|}{2})\frac{\Vec{v}}{\sqrt{c}\|\Vec{v}\|}). \end{equation} The exponential map and logarithmic map (introduced in the main paper) have simpler forms when $\Vec{x}=0$: \begin{equation} \label{eq:simpler} \Vec{\Omega}^c_{\bm{0}}(\Vec{v})=\text{tanh}(\sqrt{c}\|\Vec{v}\|)\frac{\Vec{v}}{\sqrt{c}\|\Vec{v}\|} , \end{equation} \begin{equation} \label{eq:simpler1} \Vec{\pi}^c_{\bm{0}}(\Vec{y})=\text{arctanh}(\sqrt{c}\|\Vec{y}\|)\frac{\Vec{y}}{\sqrt{c}\|\Vec{y}\|}. \end{equation} \noindent{\bf{Parallel Transport}}. Parallel transport provides a way to move tangent vectors along geodesics $P_{\Vec{x}\to\Vec{y}}: T_\Vec{x}\mathcal{M} \xrightarrow{}T_\Vec{y}\mathcal{M}$ and defines a canonical way to connect tangent spaces. For further details of hyperbolic space and geometry, please refer to the thesis~\cite{ganea2019non}. 
\subsection{Set to Set Distance} Set to set distance has been widely adopted in computer vision tasks~\cite{Fang_2021_WACV, Conci2018DistanceBS,huttenlocher1993comparing}. In this section, we discuss the well-known Hausdorff distance. There are two variants of the Hausdorff distance: the one-sided Hausdorff distance and the bidirectional Hausdorff distance. The one-sided Hausdorff distance between set $A=\{\Vec{a}_1,\Vec{a}_2,\ldots,\Vec{a}_n\}$ and set $B=\{\Vec{b}_1,\Vec{b}_2,\ldots,\Vec{b}_n\}$ can be defined as: \begin{equation}\label{eq:one-sided Hausdorff} {\mathrm{d_{Hau}^{o}}}(A,B)=\max_{\Vec{a}\in A}\min_{\Vec{b}\in B}\mathrm{d}(\Vec{a},\Vec{b}), \end{equation} and the bidirectional Hausdorff distance can be defined as: \begin{equation}\label{eq:bi Hausdorff} \mathrm{d_{Hau}^{bi}}(A,B)=\max({\mathrm{d_{Hau}^{o}}}(A,B), {\mathrm{d_{Hau}^{o}}}(B,A)). \end{equation}

\subsection{Datasets} \noindent{\bf{\emph{mini}-ImageNet}}. The \emph{mini}-ImageNet is a subset of ImageNet~\cite{deng2009imagenet}. The size of images in \emph{mini}-ImageNet is fixed to 84 $\times$ 84. It has 100 classes, each with 600 samples. We adopt the standard setting from~\cite{ravi2016optimization} to split the dataset into 64, 16, and 20 classes for training, validation, and testing.

\noindent{\bf{\emph{tiered}-ImageNet}}. Like \emph{mini}-ImageNet, \emph{tiered}-ImageNet~\cite{ren2018meta} is also sampled from ImageNet, but it has more classes than \emph{mini}-ImageNet. This dataset is split into 351 classes from 20 categories, 97 classes from 6 categories, and 160 classes from 8 different categories for training, validation, and testing.

\noindent{\bf{CUB}}. The CUB dataset~\cite{wah2011caltech} consists of 11,788 images from 200 different species of birds. Following the standard split~\cite{liu2020negative}, the CUB dataset is divided into 100 species for training, 50 species for validation, and another 50 species for testing.

\noindent\textbf{CIFAR-FS} and \textbf{FC100}. Both CIFAR-FS~\cite{bertinetto2018meta} and FC100~\cite{oreshkin2018tadam} are modified from the CIFAR-100 dataset containing 100 classes, with 600 samples per class. The CIFAR-FS is split into 64, 16, and 20 classes for training, validation, and testing, respectively. The FC100 dataset is split into 60, 20, and 20 classes for training, validation, and testing, respectively.

\subsection{Additional Experiments} \noindent{\bf{Conv-4 Backbone.}} We also employ the simple 4-convolutional network (Conv-4) to evaluate our method on \emph{mini}-ImageNet and compare with some early works. Table~\ref{tab: conv} summarizes our results.
\begin{table}[h] \begin{center} \scalebox{0.7}{ \begin{tabular}{c | c c} \Xhline{2\arrayrulewidth} \bf{Model} & \bf{5-way 1-shot} & \bf{5-way 5-shot} \\ \toprule\bottomrule MatchingNet~\cite{vinyals2016matching} &$43.56\pm{0.84}$ &$55.31\pm{0.73}$\\ MAML~\cite{finn2017model} &$48.70\pm{1.84}$ &$63.11\pm{0.92}$\\ RelationNet~\cite{sung2018learning} &$50.44\pm{0.82}$ &$65.32\pm{0.70}$ \\ R2-D2~\cite{bertinetto2018meta} &$48.70\pm{0.60}$ &$65.50\pm{0.60}$ \\ Reptile~\cite{nichol2018first} &$49.97\pm{0.32}$ &$65.99\pm{0.58}$\\ ProtoNet~\cite{snell2017prototypical} &$49.42\pm{0.78}$ &$68.20\pm{0.66}$ \\ Neg-Cosine~\cite{liu2020negative} &$52.84\pm{0.76}$ &$70.41\pm{0.66}$ \\ \rowcolor{LightCyan} Hyperbolic ProtoNet~\cite{khrulkov2020hyperbolic} &$54.43\pm{0.20}$ &$72.67\pm{0.15}$ \\ \hline \rowcolor{Gray} \bf{Ours~(APP2S)} &$\bm{55.73\pm{0.20}}$ &$\bm{72.86\pm{0.22}}$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{Few-shot classification accuracy and 95\% confidence intervals on \emph{mini}-ImageNet with the Conv-4 backbone.} \label{tab: conv} \end{table}

\noindent{\textbf{Comparison with DN4 and FEAT.}} The comparison of our APP2S, DN4, and FEAT on \emph{mini}-ImageNet and \emph{tiered}-ImageNet with Conv-4 and ResNet-12 backbones is summarized in Table~\ref{tab:com}.

\begin{table}[h] \begin{center} \scalebox{0.75}{ \begin{tabular}{c|c|c c|c c} \Xhline{2\arrayrulewidth} \multirow{2}{*}{\textbf{Model}} &\multirow{2}{*}{\textbf{Backbone}} &\multicolumn{2}{c|}{\textbf{\emph{mini}-ImageNet}} &\multicolumn{2}{c}{\textbf{\emph{tiered}-ImageNet}}\\ & &1-shot &5-shot &1-shot &5-shot \\ \toprule\bottomrule {DN4~\cite{li2019revisiting}} &Conv-4 & $51.24$ & ${71.02}$ & - &- \\ {FEAT~\cite{ye2020few}} &Conv-4 & $55.15$ & ${71.13}$ &- &- \\ \rowcolor{LightCyan} {Ours} &Conv-4 & $\bm{55.73}$ & $\bm{72.86}$ &- &- \\ \hline {FEAT} &ResNet-12 & $\bm{66.78}$ & $82.05$ &$70.80$ &$84.79$ \\ \rowcolor{LightCyan} {Ours} &ResNet-12 & $66.25$ & $\bm{83.42}$ &$\bm{72.00}$ &$\bm{86.23}$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{The comparison of our model, DN4 and FEAT on the \emph{mini}-ImageNet and \emph{tiered}-ImageNet with Conv-4 and ResNet-12 backbones.} \label{tab:com} \end{table}

\noindent{\bf{The Curvature of Poincar{\'e} ball.}} The curvature of the Poincar{\'e} ball is an important parameter, as it determines the radius of the ball. We conduct experiments with different values of $c$ on \emph{tiered}-ImageNet. The results are summarized in Table~\ref{tab: c}. As the results suggest, our model is not very sensitive to $c$. However, with a larger $c$ value, the performance is slightly better.

\begin{table}[H] \begin{center} \scalebox{0.8}{ \begin{tabular}{c |c c c c c c} \Xhline{2\arrayrulewidth} \bf{Model} & $\bm{0.7}$ & $\bm{0.5}$ & $\bm{0.1}$ & $\bm{0.05}$ & $\bm{0.01}$ & $\bm{0.01}$ \\ \toprule\bottomrule APP2S &$\bm{86.23}$ &$85.92$ &$85.21$ &$85.18$ &$84.43$ &$84.19$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{The influence of the curvature of the Poincar{\'e} ball on the performance, given \emph{tiered}-ImageNet and the ResNet-12 backbone in the 5-way 5-shot setting.} \vspace{0.5em} \label{tab: c} \end{table}

\noindent{\bf{1-shot case}.} To fully leverage the capability of APP2S for the 1-shot setting, we require more than one sample in the set. Therefore, we followed the practice in~\cite{simon2020adaptive} to augment the support images by flipping.
To have a fair comparison, we also applied augmentation to our baseline model (\emph{i.e.}, hyperbolic ProtoNet~\cite{khrulkov2020hyperbolic}) on both \emph{mini}-ImageNet and \emph{tiered}-ImageNet, given the ResNet-12 backbone. Table~\ref{tab: aug} shows that the image augmentation does not boost the performance of the baseline model significantly.

\begin{table}[H] \begin{center} \scalebox{0.78}{ \begin{tabular}{c |c c| c c } \Xhline{2\arrayrulewidth} \multirow{2}{*}{\bf{Model}} &\multicolumn{2}{c|}{\bf{\emph{mini}-ImageNet}} &\multicolumn{2}{c}{\bf{\emph{tiered}-ImageNet}} \\ &\bf{w/o Aug.} &\bf{w/ Aug.} &\bf{w/o Aug.} &\bf{w/ Aug.} \\ \toprule\bottomrule Hyperbolic ProtoNet &$*60.65$ &$*59.97$ &$*67.38$ &$*67.84$ \\ APP2S &- &$66.25$ & - &$72.00$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{The accuracy without and with augmentation for the 1-shot setting. ``*" denotes results obtained by our own implementation of the network.} \vspace{0.5em} \label{tab: aug} \end{table}

\noindent{\bf{Any-shot setting}.} We follow the any-way \& any-shot setting introduced in~\cite{lee2019learning} to further validate the efficacy of our algorithm. We use a variant of our final model (APP2S without $f_{\zeta}$) to perform these experiments due to the lower computational requirements of this setting. The results are shown in Table~\ref{tab:any}.

\begin{table}[!h] \begin{center} \scalebox{0.6}{ \begin{tabular}{c | c c | c c } \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{{\emph{tiered}-ImageNet}} & \multicolumn{2}{c}{{\emph{mini}-ImageNet}} \\ &Conv-4 &ResNet-12 &Conv-4 &ResNet-12\\ \hline ProtoNet &-&-&$51.02\pm{0.30}$ &- \\ L2G ProtoNet\cite{lee2019learning} &-&-&$53.00\pm{0.28}$&- \\ \hline \textbf{Ours} &$\bm{66.23\pm{0.15}}$ &$\bm{75.52\pm{0.19}}$ &$\bm{58.62\pm{0.16}}$ &$\bm{70.01\pm{0.17}}$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \vspace{-1em} \caption{The results of our model under the any-way \& any-shot setting compared to ProtoNet and L2G ProtoNet.} \label{tab:any} \end{table}

\noindent{\bf{Using Global Feature Vectors.}} We performed extra experiments using global feature vectors in our method. Table~\ref{tab: global and local} shows that our method, even with global feature vectors, outperforms the Hyperbolic ProtoNet significantly, and the local features can further boost our method. Note that we use the ResNet-18 backbone for the CUB dataset and ResNet-12 for the rest.

\begin{table}[h] \begin{center} \scalebox{0.62}{ \begin{tabular}{c|c|c|c} \hline {Dataset}&Hyperbolic ProtoNet &Ours w/ global feature &Ours w/ local feature\\ \hline {\emph{mini}-ImageNet} & $76.13\pm{0.21}$ & ${80.13\pm{0.11}}$& ${83.42\pm{0.15}}$\\ \hline {\emph{tiered}-ImageNet} &$79.11\pm{0.22}$ & ${84.12\pm{0.13}}$ & ${86.23\pm{0.15}}$ \\ \hline {CUB} &$85.55\pm{0.13}$ & ${90.24\pm{0.15}}$& ${90.43\pm{0.18}}$ \\ \hline {CIFAR-FS} & ${80.98\pm{0.16}}$ & ${84.08\pm{0.16}}$& ${85.69\pm{0.16}}$ \\ \hline \end{tabular} } \end{center} \caption{The results of our model using the global feature vectors compared to Hyperbolic ProtoNet.} \label{tab: global and local} \end{table}

\noindent{\bf{Parameter and time complexity analysis.}} Compared to Hyperbolic ProtoNet, we have three extra modules, namely $f_{\omega}$, $f_{\phi}$ and $f_{\zeta}$, to realize the adaptive distances. We summarize the parameter numbers (PNs) and FLOPs for each module and the backbone network in Table~\ref{tab: time complexity}. The PNs and FLOPs of our modules are acceptable compared with those of the backbone network.
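The parameter numbers in Table~\ref{tab: time complexity} can be obtained with a simple count over each module; the snippet below is an illustrative sketch (the module variables are placeholders for our implementations of $f_{\omega}$, $f_{\phi}$, $f_{\zeta}$ and the backbone $f_{\theta}$), and the FLOPs would additionally require a profiler such as the one bundled with PyTorch.

\begin{verbatim}
import torch.nn as nn

def count_parameters(module: nn.Module) -> float:
    """Number of trainable parameters, in millions."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad) / 1e6

# `modules` is a placeholder dict mapping names to the instantiated networks
# (f_omega, f_phi, f_zeta and the ResNet-12 backbone f_theta).
def report(modules: dict) -> None:
    for name, m in modules.items():
        print(f"{name}: {count_parameters(m):.2f} M parameters")
\end{verbatim}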
We also compare the time complexity to the SOTA method, \emph{i.e.}, DeepEMD, given that both methods use local feature maps. The FPS value of our method is $83$, compared to $0.6$ for DeepEMD under the 5-shot setting, clearly showing that our method runs faster than DeepEMD. Note that both models are tested on a single Nvidia Quadro-GV100 graphics card.

\begin{table}[h] \begin{center} \scalebox{1}{ \begin{tabular}{c|c|c|c|c} \hline Complexity metric &$f_{\omega}$ &$f_{\phi}$ &$f_{\zeta}$ &$f_{\theta}$\\ \hline PNs ($\times10^6$) &$1.64$ &$0.74$ &$0.02$ &$\bm{12.42}$\\ \hline FLOPs ($\times10^9$) &$2.01$ &$0.17$ &$0.0008$ &$\bm{6.98}$\\ \hline \end{tabular} } \end{center} \caption{Parameter and time complexity analysis.} \label{tab: time complexity} \end{table}

\subsection{Implementation Details} \noindent{\bf{Network and Optimizer}}. We mainly use ResNet~\cite{he2016deep}, including ResNet-12 and ResNet-18, as our backbones across all datasets. We also employ the simple 4-convolutional network (Conv-4) to evaluate our method and compare with some early works. The size of the input image is fixed to 84 $\times$ 84. We use Adam~\cite{kingma2014adam} and SGD~\cite{ye2020few} for the Conv-4 and ResNet backbones, respectively. In the SGD optimizer, we adopt the L2 regularizer with a weight decay coefficient of 0.0005. In the ResNet-12 backbones, we disable the average pooling and remove the last fully connected (FC) layer, such that the networks generate a feature map of size $640 \times 5 \times 5$. For ResNet-18, we adjust the average pooling layer so that it generates a feature map of size $512\times 5 \times 5$. Note that we set $c$ (the curvature of the Poincar{\'e} ball) to 0.7 and 0.5 for the 5-way 5-shot and the 5-way 1-shot settings, respectively, with ResNet backbones, across all datasets. For Conv-4, we set $c$ to 0.4 for both the 5-way 5-shot and 5-way 1-shot settings across all datasets.

\noindent{\bf{Training}}. Following the practice in state-of-the-art methods~\cite{ye2020few,zhang2020deepemd,simon2020adaptive}, network training has two stages, \emph{i.e.}, a pre-training stage and a meta-learning stage. In the pre-training stage, the backbone network followed by an FC layer is trained on all training classes with the standard classification task. The network with the highest validation accuracy is selected as the pre-trained backbone for the next training stage. In the meta-learning stage, we also follow the standard training protocol, where the network is trained for 200 epochs, and each epoch samples 100 tasks randomly. In order to create the set for the 5-way 1-shot setting, we follow the practice in~\cite{simon2020adaptive} and augment the images per class by horizontal flipping.

\noindent{\bf{Signature Generator}}. For the signature generator, we choose a Transformer encoder as the set refinement function $f_{\omega}$, as it performs contextualization over the whole support set with the permutation invariance property. Note that the Transformer is implemented with single-head self-attention, as our experiments show that more heads do not boost the performance but require more computational power. Moreover, we follow the implementation of~\cite{carion2020end} to provide spatial encoding along with the flattened feature map to the Transformer.

\noindent{\bf{Relation Generator}}.
We implemented the relation generator using a simple two-layer CNN followed by a flatten operation. In the first layer, the convolution is followed by batch normalization, a ReLU activation, and dropout. The second layer uses the sigmoid function to bound the output. Finally, a softmax layer is implemented to convert the output into a probability distribution. The structure of the relation generator is summarized in Table~\ref{tab:r_structure}.

\begin{table}[H] \begin{center} \scalebox{1}{ \begin{tabular}{c|c| c } \Xhline{2\arrayrulewidth} \bf{Layer name} & \bf{Output size} & \bf{Operation parameters} \\ \toprule\bottomrule conv1 & $3 \times 3$ & $3 \times 3$, 64, stride 1 \\ \hline batch norm & $3 \times 3$ &64 \\ \hline relu &$3 \times 3$ &- \\ \hline dropout &$3 \times 3$ & $p=0.5$ \\ \hline conv2 &$1 \times 1$ &$3 \times 3$, 1, stride 1 \\ \hline batch norm &$1 \times 1$ &1 \\ \hline sigmoid &$1 \times 1$ &- \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{The structure of the Relation Generator.} \vspace{0.5em} \label{tab:r_structure} \end{table}

\noindent{\bf{Set to Set Distance Generator}}. We simply implement a two-layer MLP (\emph{i.e.}, $625\rightarrow25\rightarrow1$) as the set to set distance generator. The structure is summarized in Table~\ref{tab:s_strucutre}.

\begin{table}[H] \begin{center} \scalebox{1}{ \begin{tabular}{c|c| c } \Xhline{2\arrayrulewidth} \bf{Layer name} & \bf{Output size} & \bf{Operation parameters} \\ \toprule\bottomrule linear1 & $25$ &$625\rightarrow25$ \\ \hline 1D batch norm & $25$ &$25$ \\ \hline relu &$25$ &- \\ \hline dropout &$25$ & $p=0.5$ \\ \hline linear2 &$1$ &$25\rightarrow1$ \\ \hline 1D batch norm &$1$ &1 \\ \Xhline{2\arrayrulewidth} \end{tabular} } \end{center} \caption{The structure of the Set to Set Distance Generator.} \vspace{0.5em} \label{tab:s_strucutre} \end{table}

\subsection{Extra Visualizations and Discussion} We also provide extra visualizations to show that the APP2S adapts depending on the constellation of the points in a set. Fig.~\ref{fig:p2s} shows that in both cases~\ref{fig:APP2S1} and~\ref{fig:APP2S2}, APP2S assigns larger weights (dark blue area) to the points that are closer to the center of the cluster, and smaller weights (light blue) to the outliers.

\begin{figure}[H] \begin{center} \subfigure[]{\includegraphics[width=0.3\linewidth]{Figures/fig_sm/non-linear.pdf}\label{fig:min-max}}% \subfigure[]{\includegraphics[width=0.3\linewidth]{Figures/fig_sm/s1.pdf}\label{fig:APP2S1}}% \subfigure[]{\includegraphics[width=0.3\linewidth]{Figures/fig_sm/s2.pdf}\label{fig:APP2S2}}% \end{center} \caption{(a): The P2S distances based on the minimum and maximum distances are sensitive to outliers and ignore the distribution of the points in the set to a great degree. (b) and (c): Our proposed point to set distance is bounded between the infimum and the supremum and it is also non-linear due to the weighted sum. It covers the distribution of the individual samples in the set and adapts based on the relationship between the sample distribution and the overall set distribution.} \label{fig:p2s} \end{figure}

\noindent{\bf{Our P2S.}} The existing P2S distance metrics (\emph{i.e.}, the min and max distances discussed in the Preliminaries) only consider the lower bound and upper bound of the P2S distance, thereby ignoring the distribution of the samples of the set to a great degree.
Furthermore, such metrics are very sensitive to the outliers in the set (see Fig.~\ref{fig:min-max}). Our proposed adaptive P2S distance is a more flexible metric that is able to adapt based on the distribution of the samples in the set. See Fig.~\ref{fig:APP2S1} and~\ref{fig:APP2S2} for an example; the measurement from our proposed metric is more flexible than the existing ones. Note that the weight (\emph{i.e.}, $w_{ij}$) generated by our method is distance-dependent. This is due to the way we model the problem using the tangent space of the hyperbolic space. To see this, recall that the norm of the projected sample vector in the support class, which is the input of the relation generator, is indeed the geodesic distance between the associated support vector and the query vector on the manifold (\emph{i.e.}, $\|{\tilde{\Vec{s}}}_{ij}^{{r}} \| = d_c(\bar{\Vec{q}}, {{\Vec{s}}}_{ij}^{{r}})$).

\noindent{\textbf{RelationNet.}} Our relation generator resembles RelationNet. However, instead of computing the relation score between the prototype and the query, our relation generator computes the relation score between each support sample and its corresponding class signature, which is further used as the adaptive factor for our point-to-set distance.

\noindent{\textbf{DN4.}} The distance in DN4 resembles the point to set distance in our work. However, in contrast to DN4, our point to set distance is adaptive, whereas that in DN4 is a fixed weighted summation.
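To make the layer specifications of Table~\ref{tab:r_structure} and Table~\ref{tab:s_strucutre} concrete, the following PyTorch sketch instantiates both generators. It is an illustrative reconstruction from the tables, not our released code; layer arguments that the tables do not fix (e.g., the input channel count of 640 for the ResNet-12 feature map) are assumptions.

\begin{verbatim}
import torch.nn as nn

class RelationGenerator(nn.Module):
    """Two-layer CNN of Table tab:r_structure (input assumed 640 x 5 x 5)."""
    def __init__(self, in_channels: int = 640):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=1),  # 5x5 -> 3x3
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Conv2d(64, 1, kernel_size=3, stride=1),            # 3x3 -> 1x1
            nn.BatchNorm2d(1),
            nn.Sigmoid(),
            nn.Flatten(),                                          # -> (B, 1)
        )

    def forward(self, x):
        return self.net(x)

class SetToSetDistanceGenerator(nn.Module):
    """Two-layer MLP (625 -> 25 -> 1) of Table tab:s_strucutre."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(625, 25),
            nn.BatchNorm1d(25),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(25, 1),
            nn.BatchNorm1d(1),
        )

    def forward(self, x):
        return self.net(x)
\end{verbatim}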
\section{Introduction} Monochromators are often used at X-ray Free-Electron Lasers (XFELs) to reduce the spectral bandwidth of pulses generated by self-amplified spontaneous emission (SASE, bandwidth $\Delta E/E \sim 2\cdot10^{-3}$). This improves the temporal coherence, which is beneficial for several techniques. For instance, X-ray Photon Correlation Spectroscopy (XPCS) in Wide-Angle X-ray Scattering (WAXS) provides a better contrast when a monochromator is used \cite{Madsen2016,Lehmkuhlerro5014}. Also, high-resolution and inelastic X-ray scattering require a narrower bandwidth than SASE provides \cite{ShvydkoIXS,Chubarixs}. Obtaining nano-sized foci with refractive optics (chromatic focusing) also benefits from a reduction of the X-ray bandwidth \cite{nanofocusmono}.

At European XFEL (EuXFEL), X-ray pulses with energies of several mJ and durations of a few femtoseconds are generated. The pulses are delivered in so-called "trains" which can contain several hundreds of pulses that are separated by sub-microsecond intervals. That is, within a train of typically $\sim$\SI{600}{\micro\second} duration, pulses arrive at MHz repetition rate, and 10~trains are delivered per second. For the moment, 2.25~MHz repetition rate is available on a standard basis at EuXFEL, but the design value of 4.5~MHz has also been achieved \cite{decking2020mhz}. The intense radiation causes deformation of the crystal lattice, which affects the diffraction of X-rays and degrades the monochromator performance.

We study the performance of a cryogenically cooled Si(111) monochromator operating at 9~keV photon energy, which consists of two parallel crystals forming an artificial channel-cut \cite{dongmono,petrovfel2019} (Darwin width $\Delta E/E\sim1.4\cdot10^{-4}$), using experimental data obtained at the Materials Imaging and Dynamics (MID) instrument of EuXFEL \cite{XFELbeamtrans,Madsenay5570}. The geometry of the Si crystals (60~mm length, 35~deg.\ angular range of Bragg-angle rotation) allows the monochromator to be used in the 5--25~keV photon energy range. To achieve a parallel alignment of the two Si crystals, the second crystal can be adjusted in pitch and roll angles. Fine pitch adjustment with sub-microradian precision is enabled using a piezo-actuator. Both crystals are supplied with cryogenic cooling using a helium cryocompressor and equipped with heaters to achieve stable thermal conditions \cite{Sinn139077}. The temperature is typically set to $\sim$100~K, which is close to the zero point of the thermal expansion coefficient of silicon \cite{Lyonsilinexp}. In the following, simulations which consider heat propagation during a train \cite{SinnHeatload} and dynamical diffraction effects \cite{Bushuev2013,Bushuev2016} (Sections \ref{HeatingSim}, \ref{DynDifrSim}) are compared with experimental data (Section \ref{Experimental}).

\section{Simulation of crystal heating}\label{HeatingSim} For the simulation of heat absorption and transfer, the crystal is divided into $n$ cylindrical shells with inner radii $r_i=(i-1)\cdot dr$, where $i$ varies from $1$ to $n$; the outer radius of the $i$th shell is $r_i+dr$, where $dr$ is the thickness of each cylindrical shell \cite{SinnHeatload}. Along the surface normal direction, the cylinders are divided into $m$ layers of thickness $dz$, and the depth coordinate of the $j$th layer is $z_j=j\cdot dz$, where $j$ varies from $1$ to $m$.
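As an illustration of this discretization (our own sketch, not the released simulation code), the following Python snippet builds the $(r_i, z_j)$ grid and evaluates the absorbed energy per shell using the Gaussian radial profile and exponential depth attenuation formalized in Eq.~(\ref{gaus_beam}) below. The grid dimensions are assumptions; the beam parameters follow the experimental values quoted in Sec.~\ref{Experimental}.

\begin{verbatim}
import numpy as np

# grid parameters (values are assumptions for illustration)
n, m = 200, 400               # number of radial shells and depth layers
dr, dz = 10e-6, 0.5e-6        # shell thickness and layer thickness [m]
r = np.arange(n) * dr         # inner radii r_i = (i - 1) * dr
z = np.arange(1, m + 1) * dz  # depth coordinates z_j = j * dz

# beam parameters from the experiment (9 keV, Si(111), theta_0 = 12.7 deg,
# roughly 30% of a ~1.2 mJ pulse impinging on the crystal)
I0 = 0.3 * 1.2e-3                                   # pulse energy [J]
w_equiv = 549e-6 / np.sqrt(np.sin(np.radians(12.7)))
sigma = w_equiv / (2 * np.sqrt(2 * np.log(2)))
a = 97e-6 * np.sin(np.radians(12.7))                # attenuation depth [m]

# shell areas S(r_i) and absorbed energy I(r_i, z_j) per layer, Eq. (1)
S = 2 * np.pi * r * dr
S[0] = np.pi * dr**2
radial = np.exp(-r**2 / (2 * sigma**2))
depth = np.exp(-z / a)
I = I0 * S[:, None] * radial[:, None] * depth[None, :] * dz \
    / (2 * np.pi * sigma**2 * a)
\end{verbatim}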
Let us consider a Gaussian pulse whose radial intensity profile reads \begin{equation} \label{gaus_beam} I(r_i,z_j)=I_0S(r_i)\frac{\exp\left(-r_i^2/(2\sigma^2)\right)\exp(-z_j/a)}{2\pi \sigma^2a}dz, \end{equation} where $I_0$ is the total pulse energy, $S(r_{i>1})=2\pi r_i dr$, $S(r_1)=\pi\cdot dr^2$, $\sigma=w_{\text{equiv}}/(2\sqrt{2\ln{2}})$ where $w_\text{equiv}$ is the full-width at half maximum (FWHM) of the beam size at the crystal surface, and $a$ is the depth at which the intensity of the beam decreases by a factor of $e$. Since the X-rays impinge on the crystal at an angle $\theta_0$, the pulse size $w_\text{equiv}=w/\sqrt{\sin\theta_0}$ is used for the simulations, where $w$ is the FWHM size of the pulse incident at an angle $\theta_0$. $a=l_\text{abs}\sin{\theta_0}$, where $l_\text{abs}$ is the absorption length of X-rays at a given photon energy. The heat load per unit of surface area for the pulse with size $w_\text{equiv}$ is the same as that of the pulse with size $w$ incident at an angle $\theta_0$.

The temperature of each cylinder layer with inner radius $r_i$ at depth $z_j$ is determined by the heat absorbed per unit of mass. The absorption of an incident pulse and the resulting heating are considered to be instantaneous in comparison with the characteristic time for the redistribution of temperature (see below for estimates of the timescales using Eq.~(\ref{estim_time})). The temperature $T_0(r,z)$ at each radius $r$ and depth $z$ (indices of $r_i$ and $z_j$ are omitted) after the absorption of a pulse is defined by the absorbed heat per unit of mass in the corresponding cylindrical shell given by \begin{equation} \label{Tinit} \int_{T_\text{init}}^{T_0(r,z)}c_p(T)dT=\frac{I(r,z)}{dz\cdot\rho S(r)}, \end{equation} where $c_p(T)$ is the temperature-dependent specific heat of silicon that has been calculated following Debye's model\cite{debyespecheat}, $\rho$ is the density of silicon (whose temperature dependence is neglected), and $T_\text{init}$ is the initial temperature of the crystal. The temperature evolution with time $T(t,r,z)$ is defined by the heat transfer equation, which in the depth direction is written as \begin{equation} \label{heat_transf} \frac{\partial T(t,r,z)}{\partial t}=D(T)\cdot\frac{\partial^2 T(t,r,z)}{\partial z^2}, \end{equation} where $D(T)=K(T)/\rho c_p(T)$ is the temperature-dependent thermal diffusivity and $K(T)$ is the temperature-dependent thermal conductivity \cite{lambdaSi}. The boundary conditions for Eq.~(\ref{heat_transf}) are \begin{subequations} \label{bound_heat} \begin{align} T(0,r,z) = T_0(r,z),\\ \left.\frac{\partial T}{\partial z}\right|_{z=0}=0,\\ T(t,r,z=z_m)=T_\text{init}, \end{align} \end{subequations} which correspond to the absence of heat exchange at the crystal surface and a constant temperature $T_\text{init}$ at depth $z_m$. If a second pulse arrives at an instant $t_1$, the temperature profile $T'(t_1,r,z)$ is defined analogously to Eq.~(\ref{Tinit}): \begin{equation} \label{Tinit_1} \int_{T(t_1,r,z)}^{T'(t_1,r,z)}c_p(T)dT=\frac{I(r,z)}{\rho\cdot dz\cdot S(r)}. \end{equation}

Let us analyze Eq.~(\ref{heat_transf}) in order to estimate the characteristic timescale of heat transfer in the radial and depth directions. The parameters used in the simulations match the parameters of the experiment described in Sec.
\ref{Experimental}: $T_{\text{init}}=100$~K, FWHM beam size of \SI{549}{\micro\meter}, $a=$~\SI{21}{\micro\meter} corresponding to $l_\text{abs}=$~\SI{97}{\micro\meter} and $\theta_0=12.7${\textdegree}, which is the Bragg angle for the Si(111) reflection at 9~keV photon energy. For silicon, $D(100~\text{K})\approx17~\text{cm}^{2}\,\text{s}^{-1}$. Considering the heat flow equation (\ref{heat_transf}), the characteristic time for the redistribution of temperature over a distance $L$ can be estimated as \begin{equation} \label{estim_time} t_{\text{char}}(L)\sim\frac{L^2}{D}. \end{equation} Let us consider two relevant distances in Eq.~(\ref{estim_time}): $L_r=$~\SI{549}{\micro\meter}, which is equal to the beam size, and, in the depth direction, $L_z=a=$~\SI{21}{\micro\meter}. For the heat redistribution in the radial direction, $t_\text{char,r}=t_\text{char}(L_r)=$~\SI{181}{\micro\second}, whereas in the depth direction $t_\text{char,z}=t_\text{char}(L_z)=265$~ns. Hence $t_\text{char,r} \gg t_\text{char,z}$.

The duration of pulses at European XFEL is estimated to be $\sim$10-100~fs, which is many orders of magnitude shorter than the characteristic heat redistribution time, see Eq.~(\ref{estim_time}). Therefore the assumption of instantaneous heating of the crystal by a pulse is justified. Moreover, the delay time between individual pulses at European XFEL is typically between 220 and 880~ns, which is about two orders of magnitude smaller than $t_{\text{char,r}}$. Thus, neglecting heat flow in the radial direction is justified from one pulse to the next one inside the pulse train. However, $t_\text{char,z}$ is of the same order of magnitude as the time delay between pulses, and therefore heat flow in the depth direction during a train must be accounted for in the simulations. On the other hand, the 0.1\,s interval between pulse trains is much larger than both $t_\text{char,r}$ and $t_\text{char,z}$ and therefore, by the time the next pulse train arrives, the crystal has fully recovered to the initial temperature and hence a non-deformed state. In the experiment, the crystal is 2 cm thick and is kept at a constant cryogenic temperature. Therefore, the boundary condition Eq.~(\ref{bound_heat}c) defining a constant temperature at depth $z_m$ is applicable.

\section{Dynamical diffraction simulations}\label{DynDifrSim} In the framework of kinematical diffraction, the resulting amplitude at a point with radius vector $\vec{\varrho}$ is defined by the integral \begin{equation} \label{kinem_diffr} E_s\propto\int\displaylimits_{\varrho_0}\frac{\exp(ik|\vec{\varrho}-\vec{\varrho_0}|)}{|\vec{\varrho}-\vec{\varrho_0}|}d^3\varrho_0, \end{equation} which represents the sum of waves scattered by each point of a scattering object with radius vectors $\vec{\varrho_0}$, and where $k=2\pi/\lambda$ with $\lambda$ being the wavelength. In the case of a crystal lattice, where the atoms are positioned in a regular manner, secondary scattering of X-rays has a significant effect on the resulting diffraction amplitude. This re-scattering in ideal crystals is described by the theory of dynamical diffraction of X-rays \cite{authier}. The Bragg law \begin{equation} \label{Bragg} 2d\sin\theta_B=\lambda \end{equation} represents the condition for coherent addition of waves scattered by a lattice and defines the Bragg angle $\theta_B$ at which the strongest scattering is observed for the given lattice spacing $d$.
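As a quick numerical check of Eq.~(\ref{Bragg}) (our own illustration), the Si(111) lattice spacing $d\approx3.136$~\AA{} and the 9~keV photon wavelength $\lambda\approx1.378$~\AA{} reproduce the Bragg angle $\theta_0$ quoted above:

\begin{verbatim}
import numpy as np

d_111 = 5.431 / np.sqrt(3)       # Si(111) lattice spacing [Angstrom]
wavelength = 12.3984 / 9.0       # lambda [Angstrom] for E = 9 keV
theta_B = np.degrees(np.arcsin(wavelength / (2 * d_111)))
print(f"{theta_B:.1f} deg")      # -> 12.7 deg
\end{verbatim}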
The beam-induced heating of the crystal described in the previous section causes a deformation of the lattice, which is different at each point of the crystal. Considering dynamical diffraction, only the component of the deformation normal to the crystal surface is relevant, since this affects the lattice spacing used in Eq.~(\ref{Bragg}). In order to estimate the effect of crystal deformation on the diffraction, we consider the heating of the crystal at the surface, i.e. at $z=0$. The lattice deformation $\epsilon(t,r)$ in the direction normal to the crystal surface caused by heating from $T_\text{init}$ to $T(t,r,z)$ is given by the accumulated expansion: \begin{equation} \label{deformation} \epsilon(t,r)\equiv\frac{\Delta d(t,r)}{d_\text{init}}=\int_{T_\text{init}}^{T(t,r,z=0)}\alpha_T\left(T\right) dT, \end{equation} where $\alpha_T(T)$ is the temperature-dependent linear expansion coefficient of silicon, which is close to zero near 100~K \cite{silinexp}, $d_\text{init}$ is the lattice spacing at temperature $T_\text{init}$ and $\Delta d(t,r)$ is the lattice spacing change after heating from $T_\text{init}$.

We assume that the Bragg condition is fulfilled for a photon energy $E_0=hc/\lambda$ in the case of a non-deformed crystal lattice. The diffraction of X-rays with a photon energy $E$ at an instant $t$ and at radius $r$ is defined by the deviation of the wave vector from the exact Bragg condition \cite{Bushuev2013,Bushuev2016} \begin{equation} \label{alphaDynTheory_vec} \eta(E,t,r)=\frac{k^2-(\vec{k}+\vec{h}(t,r))^2}{k^2}, \end{equation} where $\vec{k}$ is the wave vector for the incident beam with photon energy $E$ and $\vec{h}(t,r)$ is the reciprocal lattice vector with magnitude $h(t,r)=2\pi/\left\{d_\text{init}[1+\epsilon(t,r)]\right\}$. In the simulations we assume that all photon energies are incident at the same angle $\theta_B$ (the Bragg angle for photon energy $E_0$ of the non-deformed crystal) and hence Eq.~(\ref{alphaDynTheory_vec}) can be written as \begin{equation} \label{alphaDynTheory} \eta\left(E,t,r\right)=2\sin{2\theta_\text{B}}\left(\frac{\Delta E}{E_0}+\epsilon(t,r)\right)\tan{\theta_\text{B}}, \end{equation} where $\Delta E=E-E_0$. The reflection amplitude from an infinitely thick crystal is calculated as \cite{BushuevJSR2008} \begin{equation} \label{Refl} R(E,t,r)=\frac{\eta(E,t,r)\pm\sqrt{\eta(E,t,r)^2-4\chi_h^2}}{2\chi_h},|R|\leq1, \end{equation} where $\chi_h$ is the first Fourier component of the crystal susceptibility. Eq.~(\ref{Refl}) defines the reflection amplitude at each point of the crystal surface and at a given photon energy $E$. The total reflection intensity from the two crystals is defined as an integral of Eq.~(\ref{Refl}) over the crystal surface \begin{equation} \label{Integr_2cryst_RC} I_E\left(E,t\right)\sim \int\displaylimits_0^{r_n}|R_0\left(E,t,r\right)|^2\cdot |R\left(E,t,r\right)|^2 \cdot \exp\left(-\frac{r^2}{2\sigma^2}\right)\frac{r}{\sigma^2} dr, \end{equation} where $R_0\left(E,t,r\right)$ is the reflection amplitude (\ref{Refl}) for the non-deformed crystal, i.e. $\epsilon\left(t,r\right)\equiv 0$. The spectral width of the Bragg reflection of Si(111) at 9~keV is $\sim$1~eV, whereas the spectral width of the XFEL pulses is $\sim$20~eV. Thus only a narrow fraction of X-rays is reflected by the first crystal of the monochromator. Therefore we assume that the second crystal remains unheated and thus non-deformed and oriented parallel to the first crystal.
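Under this assumption of a cold second crystal, the single-crystal reflection amplitude of Eqs.~(\ref{alphaDynTheory}) and (\ref{Refl}) can be evaluated on a photon-energy grid as in the following sketch. This is an illustration only (the full implementation is released separately \cite{Petrovgithub}); the value of $\chi_h$ is a placeholder, since in practice it is obtained from tabulated scattering factors.

\begin{verbatim}
import numpy as np

def deviation(dE_over_E, eps, theta_B):
    """Deviation parameter eta of Eq. (10) for a fixed incidence angle."""
    return 2 * np.sin(2 * theta_B) * np.tan(theta_B) * (dE_over_E + eps)

def reflection_amplitude(eta, chi_h):
    """Eq. (11): pick the root with |R| <= 1 (thick crystal, Bragg case)."""
    root = np.sqrt(eta**2 - 4 * chi_h**2 + 0j)
    R_plus = (eta + root) / (2 * chi_h)
    R_minus = (eta - root) / (2 * chi_h)
    return np.where(np.abs(R_plus) <= 1, R_plus, R_minus)

theta_B = np.radians(12.7)
chi_h = 1.0e-5 + 1.5e-7j                 # placeholder susceptibility component
dE = np.linspace(-3, 3, 601) / 9000.0    # +-3 eV around 9 keV, as Delta E / E0
R = reflection_amplitude(deviation(dE, 0.0, theta_B), chi_h)
reflectivity = np.abs(R)**2              # cold-crystal rocking curve
\end{verbatim}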
The reflectivity from two crystals $I_R\left(t\right)$ in that case can be calculated as an integral of the reflection intensity (\ref{Integr_2cryst_RC}) over the photon energies as follows: \begin{equation} \label{Integr2bounceInt} I_R\left(t\right)\sim\int \displaylimits_{\Delta E_0}I_E(E,t)dE, \end{equation} where $\Delta E_0$ is the range of photon energies.

\begin{figure}[ht!] \centering\includegraphics[width=0.9\textwidth]{simul.png} \caption{Simulations of the effect of heating on the cryo-cooled Si monochromator performance. a) Simulated temperature profile at the surface for various pulses in a train; the legend gives the total energy that has impinged on the crystal since the beginning of the pulse train for a given pulse number. The energy of each pulse matches the values measured in the experiment, see the inset in Fig.~\ref{transm_exp_sim}. b) Reflection intensity (\ref{Integr_2cryst_RC}) from the crystal within a range of photon energies for the temperature distributions in a).} \label{simul_T_RC} \end{figure}

Let us analyze the effect of heating on the monochromator performance using the parameters of the experiment described in Sec. \ref{Experimental}: pulse size $w=549$~\textmu m, repetition rate $2.25$~MHz, $T_\text{init}=100$~K, pulse energy ranging between 1 and $1.5$~mJ, as shown in the inset in Fig.~\ref{transm_exp_sim}, and the simulations are done for 30\% of the pulse energy impinging on the monochromator. The temperature in the center of the beam footprint reaches values in excess of 300~K, see Fig.~\ref{simul_T_RC}a), and Bragg's condition is no longer fulfilled because, due to the temperature bump, it is several Darwin widths away from the condition that was met at 100~K \cite{petrovfel2019}. Nevertheless, even when the temperature in the center of the illuminated area is high, there are areas of the crystal that are cold enough to reflect within the acceptance of the second crystal, which is assumed to stay cold at 100 K. The temperature gradient over the illuminated crystal area leads to a broadening of the reflectivity curve and a decrease of the reflected intensity in Fig.~\ref{simul_T_RC}b). At the end of the train of 150 pulses arriving at 2.25~MHz repetition rate, i.e. after a total of 47.8~mJ pulse energy has impinged on the first crystal, the monochromator transmission has decreased to less than half of the initial value.

\section{Experimental}\label{Experimental} In order to measure the intensity of the pulses after the monochromator, a porous silica (Vycor) sample was used to scatter X-rays in the forward direction (small-angle X-ray scattering, SAXS).

\begin{figure}[ht!] \centering\includegraphics[width=0.7\textwidth]{MID_layout.png} \caption{Selected components of the MID station at European XFEL and their positions relative to the source.} \label{layout} \end{figure} \begin{figure}[ht!] \centering\includegraphics[width=0.8\textwidth]{1st_frame_zoom.png} \caption{a) Intensity distribution of a pulse after the monochromator on the YAG imager; the red ellipse is a contour at the FWHM of the two-dimensional Gaussian fit. b) Average over 60 images of the AGIPD area with the strongest SAXS signal. The red solid-line rectangles in b) denote the four areas of the detector used for data analysis.} \label{1stframe_zoom} \end{figure}

An overview of the beamline layout used in the experiment is shown in Fig.~\ref{layout}.
The two-dimensional intensity distribution of the beam was measured using the yttrium aluminium garnet (YAG) screen imager device at the end of the MID photon tunnel. The size of the beam was found by applying a two-dimensional Gaussian fit to the intensity distribution, see Fig.~\ref{1stframe_zoom}a). The horizontal FWHM of the Gaussian fit is $w_x=607$~\textmu m and the vertical one is $w_y=496$~\textmu m; in the simulations, $w=\sqrt{w_xw_y}=549$~\textmu m is used, such that the average energy density of the circular pulse used in Eq.~(\ref{gaus_beam}) is equivalent to that of the elliptical beam shown in Fig.~\ref{1stframe_zoom}a). The scattered SAXS intensity was measured by the Adaptive Gain Integrating Pixel Detector (AGIPD) megapixel detector, which is designed to acquire full-frame data at frequencies up to 4.5 MHz \cite{AGIPDALLAHGHOLI2019162324}. The pulse intensity incident on the monochromator is measured using the X-ray gas monitor (XGM) device\cite{Maltezopoulosxt5015} installed after the undulator. Attenuators installed after the XGM were used to reduce the photon flux on the monochromator; the attenuator transmission was 30\% during the experiment. Collimating compound refractive lenses (CRLs) were used to compensate for the divergence of the beam\cite{rothcrl}.

In order to measure the scattering from the Vycor sample on AGIPD, only the pixels located closest to the center of the detector and having the strongest scattering signal were used for the analysis of the monochromator transmission (Fig.~\ref{1stframe_zoom}b). The ratio of the sum of the intensity captured by the selected pixels to the XGM value provides a figure of merit for the transmission of a given pulse in a given train. Averaging this ratio over a large number of trains for each pulse number provides an estimate of the dependence of the monochromator transmission on the energy that has impinged on the first crystal.

\begin{figure}[ht!] \centering\includegraphics[width=0.8\textwidth]{transm_drop.png} \caption{Experimental (dots) and theoretical (line) monochromator transmission during a train of XFEL pulses. The separation between pulses is 0.44~\textmu s, which corresponds to 2.25~MHz repetition rate. A photon energy of 9~keV and a Si(111) reflection were used. The horizontal axis at the top represents the pulse number and the one at the bottom the total energy that has impinged on the monochromator before the respective pulse. The inset shows the energy of each pulse in a train measured by the XGM and averaged over the trains; 30\% of this energy impinges on the monochromator. The experimental monochromator transmission is calculated as the ratio of the sum of AGIPD pixels to the XGM signal for each pulse in a train, averaged over 498 trains and normalized to the maximum value. The experimental and theoretical transmission values are normalized to the maximum values during the train.} \label{transm_exp_sim} \end{figure}

The measurements show that the monochromator transmission reduces by a factor of two after $\sim$50~mJ of X-ray energy, or around 150 pulses under the aforementioned conditions, has impinged on the first crystal at 2.25~MHz repetition rate (Fig.~\ref{transm_exp_sim}). The experimental curves are not shown with error bars, since the transmission values are averaged over many trains. That is, for a fixed pulse number in a train, the scattering is produced by statistically independent and intrinsically random SASE pulses \cite{SALDIN1998383}.
Even for an ideal monochromator the transmission is determined by the spectral intensity of the pulse in a bandwidth given by the Darwin width of the monochromator. Due to the random nature of the spectral fine structure of SASE\cite{SALDIN1998383}, averaging over a large number of pulses provides an accurate estimate of the effect of heating on the monochromator transmission. We attribute the initial rise of the measured monochromator transmission seen in Fig.~\ref{transm_exp_sim} to possible systematic drifts of photon energies and/or beam pointing during a pulse train. The good agreement between theoretical and experimental values in Fig.~\ref{transm_exp_sim} indicates that the simulation model presented in Secs. \ref{HeatingSim} and \ref{DynDifrSim} provides a qualitatively correct behaviour of monochromator transmission during heating by intense X-ray pulses. Therefore the model can be employed as a simulation framework to aid the design of crystal optical devices when a high heat load from intense XFEL pulses is anticipated. The implementation of the code in Python is available to the public \cite{Petrovgithub}. \section{Conclusion} The intra-train transmission of a double-bounce Si(111) cryo-cooled monochromator has been measured at European XFEL using SASE pulses arriving at 2.25~MHz repetition rate. It has been shown that after around 150~pulses, which corresponded in this case to a total incident energy of around 50~mJ, the monochromator transmission decreases by about a factor of two. A simple one-dimensional model of crystal heating and dynamical diffraction qualitatively reproduces the measured monochromator transmission. A simulation code is made available to the public \cite{Petrovgithub} and can be used to simulate the heat load effect on perfect crystal optical elements at XFELs. \section*{Acknowledgments} We are thankful to Roman Shayduk for fruitful discussions. We acknowledge EuXFEL for provision of beam time and would in particular like to thank the optics and vacuum groups for their help in designing and operating the cryo-cooled monochromator. The photon diagnostics group is acknowledged for enabling pulse-resolved X-ray pulse energy measurements using the XGM. The data department at EuXFEL is acknowledged for IT assistance, data acquisition, data analysis, and detector operation. AGIPD was developed and built by a consortium led by the photon science detector group at DESY. \section*{Disclosures} The authors declare no conflicts of interest. \section*{Data availability} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. \bibliographystyle{ieeetr}
\section{Introduction} Web search engines are used by billions of people every day. Powered by results of decades of information retrieval research, they help find the documents people are looking for or directly answer their questions. While basic query-document matching according to whether the documents contain all the words from the query might be sufficient for small document collections, the ever increasing quantity of documents available on the web makes it usually impossible for the user to go through all results that match given query words. Moreover, because of query-document vocabulary mismatch \cite{zhao_query_mismatch} and multiple possible word meanings, simple matching might exclude relevant documents. Therefore, there is a need for sophisticated natural language understanding (NLU) and document ranking methods. As the tasks might be intuitive for humans but difficult to describe algorithmically, such methods are usually based on machine learning utilizing examples provided by human annotators. A popular document ranking model option is a Gradient Boosted Regression Trees (GBRT) ranker \cite{zheng2007general}. It allows to easily and robustly combine hundreds of ranking features ranging from classical ones like BM25 \cite{robertson_BM25} or PageRank \cite{pagerank} to outputs of other statistical models. A number of features deal with the relevance of a document text to the query, which is basically a natural language processing (NLP) task. Recently, the NLP community embraced BERT \cite{bert} inspired by the influential transformer architecture \cite{vaswani2017attention}. While BERT variants reach SoTA performance on many NLP tasks, they are computationally demanding and thus difficult to deploy in a search engine that strives to deliver results to users under a~second. In this work, we create a new text relevance model based on Electra-small~\cite{electra} (a~variant of BERT) that improves relevance ranking while being sufficiently fast. We use the siamese architecture~\cite{sentence_bert} that allows us to precompute document embeddings and compare them with a query embedding at search time. We discuss several methods to compute the relevance score from the query and the document embeddings and propose a new neural-based interaction module. Most relevance research published so far deals with English queries and documents. We are interested in model performance on Czech data. To this end, we pretrain an Electra-small model on a Czech corpus and fine-tune it for relevance ranking on a Czech query-document dataset, which we also release to facilitate further research in this area. Our main contributions are: \begin{itemize} \item We develop and train an Electra-based siamese model for relevance ranking that has also been deployed in a search engine, where it improves performance by 3.8\%. \item We release DaReCzech\footnote{{\url{https://github.com/Seznam/DaReCzech}}}, a large Czech relevance dataset with real user queries and relevance annotations provided by human experts. \item We release Small-E-Czech\footnote{{\url{https://huggingface.co/Seznam/small-e-czech}}}, an Electra-small model pretrained on a Czech corpus. \end{itemize} \section{Related Work}\label{sec:related_work} This section provides an overview of related work. It describes transformer models, model compression and siamese transformers. The section is concluded with reviews of existing datasets for document ranking. 
\subsection{Transformer models}\label{subsec:transformers} The transformer model architecture, introduced by \citet{vaswani2017attention}, brought a revolution to NLP. They proposed an encoder-decoder model, intended for sequence transduction, based on a multi-head self-attention mechanism that enabled learning long-term dependencies. \citet{bert} introduced BERT, a novel encoder-only language model pre-trained on a large text corpus through masked token and next sentence prediction. Subsequently, the model was fine-tuned on a plethora of NLU tasks and reached SoTA results. Here, we rely on Electra~\cite{electra}, which shares its architecture with BERT but promises more efficient pre-training, and it has been demonstrated that it can be trained in a smaller configuration than the one known as BERT-base (14M vs 110M parameters) without a dramatic drop in performance.

\subsection{Knowledge Distillation and Model Compression}\label{subsec:KD} Knowledge distillation is a technique for transferring knowledge from large or ensemble models (teachers) to their smaller or single counterparts (students) \cite{bucila-model_compression}. Current transformers, though SoTA, are prohibitively slow to use in some settings, such as real-time web search. Therefore, many works have been dedicated to distilling knowledge into more compact models, e.g. \citet{distilbert} introduced DistilBERT, a compressed model with 6 layers, which resulted in $2.5\times$ speedup while retaining 97\% of the performance of BERT-base. During our work, we also distilled smaller variants of our Electra model with promising results. Although they provided us with a single-digit speed-up, evaluating all query-document pairs during online serving was still infeasible and we thus focus on siamese models.

\subsection{Siamese Transformers}\label{subsec:siamese} The siamese architecture \cite{sentence_bert} is an orthogonal approach to speeding up online inference by offline pre-computation of document embeddings. In this setting, the model is separately fed two texts to obtain their embeddings. Subsequently, these two vectors are compared using e.g. cosine similarity to estimate a similarity score. This approach has proved proficient in first-stage document retrieval. \citet{repbert} computed the document relevance to a query as the scalar product of their embeddings and showed their BERT-based solution beat four traditional IR baselines. A similar approach with some additional adjustments was used in ColBERT, likewise with promising results \cite{colbert}. There, the similarity between a query and a document is evaluated over a bag of embeddings (i.e. there are multiple vectors for a query or a document). This, however, leads to high memory requirements as all embedding vectors need to be stored. \citet{twinbert} presented TwinBERT, which is likely the closest work to ours. In that work, they first obtained query and document embeddings through the [CLS] token retrieved from the last BERT layer. Afterwards, they compared the embeddings using an \textit{interaction module} which took an element-wise maximum of the two embedding vectors and ran it through a residual fully-connected layer followed by a logistic regression layer to obtain the relevance score. Our work differs from \citet{twinbert} in several aspects. (1) We use Electra instead of BERT due to more efficient pre-training. (2) We explore a deeper structure for the embedding interaction module.
(3) We evaluate our model in the scenario of a web search instead of a sponsored search. (4) We fully focus on Czech, which is a much less resource-rich language than English. We release the manually annotated dataset to further support this research. \begin{table*}[h!] \centering\footnotesize \begin{tabular}{lr|rrr|rrr|rrr|rrr|wr{0.8cm}wr{0.45cm}|wr{0.8cm}wr{0.45cm}}\toprule & & \multicolumn{3}{c|}{Words per query} & \multicolumn{3}{c|}{Words per doc} & \multicolumn{3}{c|}{Words per title} & \multicolumn{3}{c|}{Docs per query} & \multicolumn{2}{c|}{Random} & \multicolumn{2}{c}{Oracle}\\ Dataset & \#records & \sfrac{1}{4} & avg & \sfrac{3}{4} & \sfrac{1}{4} & avg & \sfrac{3}{4} & \sfrac{1}{4} & avg & \sfrac{3}{4} & \sfrac{1}{4} & avg & \sfrac{3}{4} & P@10 & DCG & P@10 & DCG\\ \midrule Train-big & 1\,431\,730 & 2 & 2.9 & 4 & 7 & 533.8 & 392 & 3 & 5.4 & 8 & 3 & 8.1 & 7 & 18.1 & 1.2 & 22.1 & 1.5\\ Train-small & 97\,386 & 2 & 3.0 & 4 & 1 & 300.3 & 198 & 2 & 4.5 & 6 & 37 & 52.6 & 65 & 36.2 & 6.9 & 82.2 & 8.2\\ Dev & 41\,220 & 2 & 2.9 & 4 & 2 & 310.7 & 218 & 2 & 4.5 & 6 & 36 & 52.0 & 66 & 34.9 & 6.7 & 80.4 & 8.0\\ Test & 64\,466 & 2 & 2.9 & 4 & 4 & 371.9 & 322 & 2 & 5.1 & 7 & 7 & 27.8 & 43 & 37.9 & 3.2 & 59.3 & 4.0\\ \bottomrule \end{tabular} \caption{DaReCzech statistics. We report the number of words (whitespace separated) per extracted document body and title, number of annotated documents per query, and P@10 and Discounted Cumulative Gain (DCG) for random ranking (100 runs average) and ideal (oracle) ranking. For number of words and documents we report the mean and 0.25 and 0.75 quantiles.} \label{table:dataset_stats} \end{table*} \subsection{Review of Existing Datasets}\label{subsec:existing_dataset} To the best of our knowledge, there is no annotated dataset in Czech for relevance ranking. Also, many datasets for document retrieval tasks were collected several years ago and are therefore outdated. The non-exhaustive review of some of the most prominent datasets is provided below. The dataset most related to ours is MS MARCO \cite{ms_marco}. This dataset contains a collection of 1\,M user queries, together with 8.8\,M passages retrieved from 3.6\,M web documents obtained by the Bing search engine. In contrast to ours, all data are in English. Another dataset based on the Bing search logs is ORCAS~\cite{orcas} containing 20\,M query-document pairs, although it lacks annotations for any relevance task. TREC2009 Web Track \cite{trec2009overview} overviewed retrieval techniques, and was based on a large corpus of 10 billion web pages in 10 languages crawled in 2009 called ClueWeb2009.\footnote{ \url{http://lemurproject.org/clueweb09/}} TREC2009 consists of several tasks including ad-hoc search where the aim was to provide a list of most relevant documents for unseen topics. Another two datasets (US and Asian versions) were published by Yahoo for a learning-to-rank challenge \cite{chapelle2011yahoo}. They consist of annotated query-document pairs accompanied with relevance labels. All queries originate from real Yahoo search logs. \section{Problem and Data}\label{sec:problem_data} For performance reasons, the document index currently has about 200 shards on 100 machines and the relevance ranking in the search engine consists of several stages (similar to those described by \citet{yahoopaper}, see Figure \ref{fig:production_schema} for an illustration). 
First, the retrieval stage selects documents containing all words from the original query or its enhanced variants (generated by typo correction, declension, etc.). Then the so-called Stage-1 selects about 20\,000 candidate documents using a GBRT ranker with fast features (PageRank, BM25 variants, etc.). In our research, we focus on Stage-2, which selects top 10 documents, again using a GBRT ranker. In addition to the features from Stage-1, Stage-2 uses also more complex ones (text relevance utilizing distances of query words matches in the document, etc.), totalling to more than 500 features. Finally, the top 10 documents are reordered by Stage-3. \begin{figure}[!htb] \centering \includegraphics[scale=0.53]{images/schema-simple-gray.pdf} \caption{Ranking schema of the search engine. Indexed documents that match given query are evaluated by Stage-1 ranking model and top documents are sent to Stage-2, which we focus on. Stage-2 ranking model selects top 10 documents and sends them to Stage-3, which determines their final ordering on the search engine result page.} \label{fig:production_schema} \end{figure} We improve Stage-2 by adding a new feature to the GBRT ranker. This is not easy as the ranking features have been tuned for years, and such efforts often result in negligible improvements. The quality of the ranker is periodically evaluated on a set of about 2\,500 queries sampled from the past 3-month period of the query log. For each query, top 10 results are retrieved and their relevance is evaluated by human annotators. As the order of the top 10 results might be changed by Stage-3, we primarily measure Precision@10 (P@10), i.e.\ the ratio of relevant documents among the top 10. After a new evaluation query set is sampled, the annotated query-document pairs from the last set are added to an old data pool and can be used for training and preliminary testing of models. Note there are much fewer annotated documents per query in the data pool than ca. 20\,000 candidates available in production Stage-2. Generally, these annotated documents must have been deemed relevant by a previously evaluated model. A substantially different model that would be able to bring new relevant documents to the top in production is thus at a disadvantage. We hence consider our test set only as an approximation of the final evaluation. Another problem with old data is that documents might have changed (or their relation to the world, e.g.\ in case of current events, shifted word meanings, user expectations, etc.) and thus the relevance annotations might be outdated. This is the reason why the GBRT rankers are usually trained only on a recent subset of the old data pool. The rest can then be used for text features training, the rationale being that text content relevance might be less ephemeral. \subsection{DaReCzech} \begin{table*}[t!] \centering\footnotesize \begin{tabular}{lp{0.88\linewidth}}\toprule Field & Value\\ \midrule Query & volno otec po porodu\\[0.3cm] URL & \url{https://www.seznamzpravy.cz/clanek/novinka-pro-cerstve-otce-tyden-placene-dovolene-po-narozeni-potomka-41487?autoplay=1}\\[0.3cm] Doc repr. & title: novinka pro čerstvé otce týden placené dovolené po narození potomka url: seznamzpravy.cz/clanek/novinka pro cerstve otce tyden placene dovolene po narozeni potomka 41487?autoplay=1 bte: Novinka pro čerstvé otce: týden placené dovolené po narození potomka Zapojení otců má pomoci matce v kritické fázi šestinedělí. A zároveň posílit vztah mezi dítětem a rodiči. 
Patří otcovská do ranku předvolebních dárků minulé vládní koalice? (\ldots)\\[0.3cm] Title & novinka pro čerstvé otce týden placené dovolené po narození potomka seznam zprávy\\[0.3cm] Label & 1.0\\ \midrule \multicolumn{2}{c}{\textit{English translation}} \\ \midrule Query & \textit{father's leave after childbirth} \\[0.3cm] Doc repr. & \textit{title: news for fresh fathers a week of paid leave after the birth of offspring url: seznamzpravy.cz/clanek/news for fresh fathers a week of paid leave after the birth of offspring 41487?autoplay=1 bte: News for fresh fathers: a week of paid leave after the birth of the offspring The involvement of fathers is intended to help the mother at the critical stage of the six-week period. And at the same time strengthen the relationship between the child and the parents. Is paternity leave one of the last government coalition's pre-election gifts? (\ldots)} \\[0.3cm] Title & \textit{news for fresh fathers a week of paid leave after the birth of offspring} \\ \bottomrule \end{tabular} \caption{Example dataset record with an English translation. The document representation was slightly shortened.} \label{table:dataset_example} \end{table*} DaReCzech is a new Czech dataset for text relevance ranking that we created from the old data pool. It is divided into four parts: \textit{Train-big} comprising more than 1.4\,M query-document pairs (intended for training of a (neural) text relevance model used as a feature in the GBRT model), \textit{Train-small} (97\,k records, intended for GBRT training), \textit{Dev} (41\,k records) and \textit{Test} (64\,k records), which contains the newest annotations. There is no intersection between query-document pairs in the training, development and test data. The basic statistics of the dataset are presented in Table~\ref{table:dataset_stats}. Each dataset record contains a query, a URL, a document title, a document representation and a relevance label. The queries are real user queries with some typos corrected. A~document representation consists of three parts: \begin{itemize} \item Title -- document title words that were classified by an internal model of the search engine as words corresponding to that particular document, as opposed to words corresponding to the whole web site (usually domain name or description). It is lowercased. \item URL -- a preprocessed document URL, with \% sequences decoded, plus signs converted to spaces and some parts (matching the regex \verb#(https?:\/\/(www\.)?|[-_\t])#) removed. \item Body Text Extract (BTE) -- document body filtered with an internal model of the search engine, i.e. supposedly without headers, menus, etc. \end{itemize} The processed parts are then prepended with identifiers and concatenated: \texttt{title: <title> url: <url> bte: <bte>}. The relevance labels were mapped from the original annotations as follows: \textit{(1) Useful}: 1, \textit{(2) A little useful}: 0.5 (0.75 for \textit{Test}), \textit{(3) Almost not useful}: 0.5 (0.25 for \textit{Test} and \textit{Train-big}), \textit{(4) Not useful}: 0. Note that because we track P@10, i.e. the ratio of useful (label $> 0.5$) documents among top 10, the exact values of other mapped annotations are less important on \textit{Dev} and \textit{Test} set. For an example dataset record, see Table~\ref{table:dataset_example}. Some documents have empty bodies or titles, either because they did not contain any text in these fields or they were not indexed at the time of dumping the data from the search engine database. 
We dropped empty documents from the training set, as initial experiments showed this helps the fine-tuning. \subsection{Czech Corpus for Language Model Pretraining} \label{ssec:czech_corpus_pretraining} For self-supervised LM pretraining, we use an in-house Czech corpus (253~GB) that is generated once a year from documents downloaded by the search engine crawler. During the corpus generation, document language is detected; non-Czech, duplicate, spam and too-short texts are dropped, and the remainder is cleaned and lowercased. \subsection{Baseline GBRT Ranker} Relevance ranking in Stage-2 is done by a GBRT ranker using hundreds of features. The exact list changes over time as new features are implemented and old systems are turned off. In our work, we tried to improve a baseline model with 575 features. Examples of the most influential ones include: \begin{itemize} \item dynamic text relevance -- scores depending on distances between matches of query words in the document, averaged across different generated query variants, \item PageRank, \item logistic regression using query and title words as features, \item Okapi BM25 and several of its variants. \end{itemize} \section{Model Architecture}\label{sec:model_architecture} The core of our system is a Czech Electra model pretrained on the web corpus gathered by the search engine crawler. On top of this model, we build two alternative architectures: First, the \emph{query-doc model}, which uses a simple linear layer to transform the output of Electra's [CLS] token into a single number describing the relevance between the concatenated query and document. Second, the \textit{siamese model}, which uses the underlying Electra model to compute query and document embeddings. These embeddings are further compared using cosine similarity or a~small feed-forward network that outputs the~final relevance score. The \textit{query-doc model} has a clear advantage over the \emph{siamese model}, as it can directly compare subwords of both the document and the query. The \emph{siamese model}, on the other hand, has to encode all information about a query or a document in a vector of limited size and compare these vectors only later. Nonetheless, at inference time, when the best documents are to be selected for a query, the \textit{query-doc} approach requires evaluating every query-document pair with the whole model. This turned out to be computationally infeasible even in Stage-2, as the full model would have to be run on ca.\ 20\,000 query-document pairs for each query. In this section, we first describe the \textit{query-doc model} and the \textit{siamese model} architectures. We then elaborate on a set of improvements applied to the latter model to decrease the gap between the performance of these two systems while keeping the latency low. Finally, we describe the training details. \subsection{Query-Doc Model} The \textit{query-doc model} follows the original approach for sequence classification \cite{bert} by adding an additional linear layer on top of the Electra embedding of the artificial [CLS] token. We add a sigmoid activation to project the scores between 0 and 1. The input to this model is a single sequence: a tokenized query and a document representation separated by the special [SEP] token. The model outputs a number predicting the document relevance for the query. \subsection{Siamese Model} The \textit{siamese model} utilizes an underlying Electra model to compute embeddings separately for a query and a document.
Similarly to \citet{sentence_bert}, we experimented with three strategies of whole token sequence embedding computation: mean or maximum of all output vectors or the output for the [CLS] token. We found the [CLS] token to work best. The embeddings are then compared using cosine distance serving as a relevance proxy. \subsection{Improving the Siamese Model} \begin{figure}[!htb] \centering \includegraphics[width=0.89\linewidth]{images/CustomMetric_CMYK.jpg} \caption{The final \textit{siamese model}. The tokenized query and document are inputted to Electra separately (tokens $W^Q_i$ and $W^D_i$), embeddings from their [CLS] tokens are compared using a custom interaction module. The module comprises a 2-layer feed-forward network and Euclidean distance and cosine similarity, followed by a linear transformation and hyperbolic-tangent non-linearity.} \label{fig:my_label} \end{figure} \subsubsection{Custom Interaction Module} \label{ssec:custom_metric} Cosine similarity has proven to be an effective and fast way to compare embeddings~\cite{sentence_bert}, but its simplicity might limit performance. Therefore, similarly to \citet{karpukhin2020dense}, we define a feed-forward network that compares the embeddings and returns a relevance score. The small size of the network ensures that it still remains fast enough. Following \citet{twinbert}, the input to our interaction module is an embedding $e(q)$ of a query $q$ and an embedding $e(d)$ of a document $d$, each being of dimension $n = 256$. First, we compute the element-wise maximum of the input embeddings $$m = \max(e(q), e(d)).$$ This is processed by two fully-connected layers inspired by the fully-connected block in the transformer model. The first layer maps the input vector to a space with twice as many dimensions and is followed by dropout (drop probability 0.25) and GELU activation \cite{gelu2016}. The second layer maps the vector back to the original space and again applies GELU. We also use a residual connection circumventing the nonlinearity: \begin{align*} h_1 &= \mathrm{Dropout_{0.25}}(\mathrm{GELU}(W_1 m)),\\ h_2 &= \mathrm{GELU}(W_2 h_1) + m, \end{align*} where $W_1 \in \mathbb{R}^{2n \times n}$ and $W_2 \in \mathbb{R}^{n \times 2n}$ are learnable weight matrices. The output $h_2$ of this residual block is concatenated with cosine similarity and Euclidean distance between the query and document embeddings. We found that this improves the stability of training. $$h_3 = [h_2, \hspace{0.1cm} \cos(e(q), e(d)), \hspace{0.1cm} \Vert e(q) - e(d)\Vert]$$ Finally, a linear layer with a \emph{tanh} activation is used to produce the final relevance score: $$r = \tanh(w_\mathrm{out} \cdot h_3),$$ where $\ w_\mathrm{out} \in \mathbb{R}^{n + 2}$ is a learnable weight vector. \subsubsection{Considering Multiple Electra Layers} \citet{tenney-etal-2019-bert} have shown that different tasks benefit more from different layers of BERT. Following the approach of \citet{kondratyuk201975}, we do not use only the last-layer embedding of the [CLS] token, but learn a weighted combination of all layer outputs for this token and take that as the embedding of the input sequence. \subsubsection{Learning with a Teacher} The query-doc model performs better than the siamese one, but is impractical to deploy due to its computational demands. Therefore, we use a variant of knowledge distillation to bridge this gap in quality. 
Specifically, for each training sample, we compute the prediction of the query-doc (teacher) model, average it with the original label and use the result as a training label for the siamese (student) model. \subsubsection{Initialization from the Teacher.} We initialize the student model weights with the fine-tuned teacher weights. \subsubsection{Ensemble} Ensembling multiple models (i.e. combining their outputs) proves to improve results at the cost of increased inference time \cite{ensemble_ml}. Having a fast enough siamese model, we found out that having two models in an ensemble is a viable option. To diversify the models, only the random seed was changed when training the second one. This affected the initialization of the interaction module weights, the order of training samples and dropout. We tried combining outputs of the models by taking either the mean or the maximum prediction and found the former to work better. \subsection{Pretraining Small-E-Czech} An internal 253~GB Czech web corpus was used for unsupervised pretraining. The texts are tokenized into subwords with a standard BERT WordPiece tokenizer~\cite{schuster2012japanese}. The tokenizer is trained on a subset of the corpus and its vocabulary size is limited to 30\,522 items. The Electra model is pre-trained using the official code\footnote{\url{https://github.com/google-research/electra}} in the Electra-small configuration. We train the model for 4\,M training steps, which took ca. 20 days on a single GPU. \subsection{Training Details} We train our model on the \textit{Train-big} set and select the best checkpoint using early-stopping on the \textit{Dev} set. Subsequently, we train a GBRT ranker on the \textit{Train-small} set with our model output as an additional feature and evaluate both on the \textit{Test} set. All input texts are lowercased to match the pretraining corpus. We use Adam optimizer with learning rate $5\cdot10^{-5}$ without any warmup or learning rate decay to optimize weights of the Electra model and a custom interaction module if present. We use MSE loss for the query-doc and the siamese models. We also experimented with other loss functions such as \textit{triplet} loss, but found them to perform worse. We cap each sentence at 128 tokens and train with batches of size 256. For siamese models, we map the labels into $[-1, 1]$ to match the model output range. For knowledge distillation, our loss function is a mean of MSE between student and teacher prediction (soft labels) and conventional MSE with respect to (hard) gold labels. Otherwise, all training parameters remain the same. We code our experiments using PyTorch \cite{pytorch2019} and the Transformers library \cite{transformers2020}. The GBRT ranker is trained using the Catboost library with 1\,500 trees of depth 6, RMSE loss function and early stopping on 100 iterations. \section{Results}\label{sec:results} In this section, we present the results of training our \emph{query-doc} and \emph{siamese} models. We train each model 4 times with different random seeds (affecting the initialization of the custom interaction module if present and the dropouts), select the best checkpoint for each run on the development set and report the mean and the standard deviation of the 4 runs on the test set. 
We report two types of results – the first one labeled as \textit{Standalone} for the respective model being used alone for ranking; and the second one labeled as \textit{with GBRT} for the respective model being used as an additional feature for a GBRT ranker that already utilizes hundreds of existing features. Note that we use the \textit{Train-big} data to train the neural models and, subsequently, the \textit{Train-small} data to train the GBRT ranker with the exception of the production search engine baseline that is trained on the entire \textit{Train-big} data. We evaluate the models in two scenarios: (1) on the new DaReCzech dataset, (2) in a production setting. \subsection{DaReCzech} Table~\ref{table:results_main} presents an evaluation on DaReCzech dataset. In~the top part, we show results of baselines – the random ranking, BM25 and the production GBRT ranker (\textit{Search engine baseline}), and P@10 achievable by ideal ranking (\textit{Oracle}). The \textit{query-doc model} outperforms the baseline results by a large margin, achieving P@10 46.3 and GBRT P@10 46.93. These results set the upper bound for the siamese model as the query-doc approach may compare tokens of both query and document directly. \begin{table}[!htb] \centering\footnotesize \begin{tabular}{lll}\toprule & \multicolumn{2}{c}{Precision@10}\\ Model & Standalone & with GBRT \\\midrule Random Baseline & 37.90 & -- \\ BM25 & 40.47 & -- \\ Search engine baseline & -- & 45.14 \\ Oracle & 59.30 & -- \\\midrule Query-Doc & 46.30 $\pm$ 0.17 & 46.93 $\pm$ 0.12 \\\midrule Siamese-Cosine & 42.46 $\pm$ 0.15 & 45.41 $\pm$ 0.14 \\ + custom inter. mod. & 43.82 $\pm$ 0.45 & 45.90 $\pm$ 0.17 \\ + weighted CLS & 44.72 $\pm$ 0.39 & 46.02 $\pm$ 0.18 \\ + knowledge distillation & 45.00 $\pm$ 0.36 & 46.26 $\pm$ 0.19\\ + teacher initialization & 45.26 $\pm$ 0.22 & 46.42 $\pm$ 0.14\\ + ensemble (2 best) & 45.49 & 46.61 \\\bottomrule \end{tabular} \caption{Results on DaReCzech. For each model / additive improvement, we report Precision@10 of the model and the GBRT ranker with the model output as an additional feature.} \label{table:results_main} \end{table} Despite its relative simplicity, the feature from the \textit{Siamese-Cosine} model helps the GBRT ranker by ca. 0.3 percent, but is not very competitive when used alone, and even with the GBRT ranker it lags behind the query-doc approach. When the cosine distance is replaced with a more sophisticated neural based interaction module, the performance improves, and this modification appears as the strongest one. Using a weighted combination of multiple Electra layers instead of the last layer output seems to improve the performance. However, we found that this may be due to our choice of the interaction module. When the weighting is used with the simplest model with the cosine similarity, it increases its performance only by ca. 0.2. Both knowledge distillation from a query-doc teacher and weight initialization from the teacher help the model. All described improvements to the baseline model proved to be effective. Their combination and the final ensembling reduced the gap between the siamese and the query-doc model greatly. Moreover, we can see that already our best non-ensemble siamese model has better performance (45.26) than the baseline production GBRT ranker (45.14). When we add the ensemble output to the features and retrain the GBRT ranker, its P@10 increases by 1.48 to 46.61. 
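Since P@10 is the primary metric in all of the following comparisons, we also sketch how it can be computed from per-query relevance labels (a simplified illustration with our own helper names; labels $> 0.5$ count as useful, as defined above):

\begin{verbatim}
from collections import defaultdict

def precision_at_10(records, scores):
    """records: iterable of (query, doc_id, label);
    scores: dict mapping (query, doc_id) to a model score."""
    by_query = defaultdict(list)
    for query, doc_id, label in records:
        by_query[query].append((scores[(query, doc_id)], label))
    per_query = []
    for docs in by_query.values():
        # Rank by model score and keep the top 10 annotated documents
        # (fewer, if fewer than 10 are annotated for a query).
        top = sorted(docs, key=lambda pair: pair[0], reverse=True)[:10]
        # Fraction of useful (label > 0.5) documents among them.
        per_query.append(sum(label > 0.5 for _, label in top) / len(top))
    return 100.0 * sum(per_query) / len(per_query)
\end{verbatim}

In production, the candidate pool per query is much larger than the annotated one, which is exactly the gap between this offline approximation and the real-traffic evaluation described next.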
\subsection{Real Traffic} Model evaluation on a fixed test set is cheap and stable, but does not take into account the multitude of documents retrieved for a query in production, from which the model can select the top 10. To account for this, 3\,000 queries were sampled from the search log. The top 10 documents were retrieved for each query using both the original GBRT ranker and the new GBRT ranker utilizing the Electra ensemble outputs as additional features. The query-document pairs were then assigned relevance levels by human experts. The new features increased P@10 of the model by 3.8\% (relative). \section{Ablation Studies}\label{sec:ablation_studies} In this section, we present several ablation studies. First, we inspect the importance of individual document parts. Second, we explore the effect of training data volume on model performance. Third, we study different interaction modules. Fourth, we evaluate a different initialization of the underlying Electra model and also experiment with bigger underlying models. Finally, we present model quantization results. \subsection{Document Representation} The document is represented using its title, URL and BTE. To explore the individual effects of these parts on model performance, we trained a separate \textit{siamese} model on each part. No teacher was used during the training, because this would require training the teacher on the respective data part as well, i.e.\ we used the \textit{+weighted CLS} model configuration from Table~\ref{table:results_main}. The testing was then performed on the test set comprising only the respective data part. The results of this experiment are displayed in Table~\ref{table:dataset_single_parts}. We can see that BTE contains the most useful information, but all data parts are useful, as the respective models are significantly better than the random baseline of 37.9 P@10 (see Table~\ref{table:dataset_stats}). Moreover, we conducted an experiment where the individual data parts are added incrementally, i.e.\ title, URL and BTE. The results are shown in Table~\ref{table:dataset_additive_parts}. \begin{table}[!htb] \centering\footnotesize \begin{tabular}{lcc}\toprule & \multicolumn{2}{c}{Precision@10}\\ Model & Standalone & with GBRT \\\midrule Title & 42.73 $\pm$ 0.09 & 45.46 $\pm$ 0.08\\ URL & 41.40 $\pm$ 0.63 & 45.37 $\pm$ 0.15 \\ BTE & 43.75 $\pm$ 0.46 & 45.76 $\pm$ 0.10 \\\bottomrule \end{tabular} \caption{Effect of using only a single data part (no teacher).} \label{table:dataset_single_parts} \vspace{0.5cm} \begin{tabular}{lcc}\toprule & \multicolumn{2}{c}{Precision@10}\\ Model & Standalone & with GBRT \\\midrule Title & 42.73 $\pm$ 0.09 & 45.46 $\pm$ 0.08\\ + URL & 43.74 $\pm$ 0.37 & 45.84 $\pm$ 0.17 \\ + BTE & 44.72 $\pm$ 0.39 & 46.02 $\pm$ 0.18 \\\bottomrule \end{tabular} \caption{Effect of using different subsets of document parts (cumulative, no teacher).} \label{table:dataset_additive_parts} \end{table} \subsection{Training Data Volume} We inspect the effect of the number of training samples on model performance in Figure~\ref{figure_train_size}. Specifically, for each predefined training set size, we randomly sample this amount of data from the training set and train a \textit{siamese model} on it. We do not use a teacher and run each experiment four times to account for the randomness in sampling. The results show that the performance increases with the number of training samples, both of the model alone and of the GBRT ranker using the model output as an additional feature, while the gap between them decreases.
The effect on performance slowly levels off, but the model might still benefit from more data. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{images/train_size-True.pdf} \caption{Precision@10 of the model when trained only on a subset of the training data of particular size. We report the performance of the sole model and also of the GBRT ranker using the model output as an additional feature.} \label{figure_train_size} \end{figure} \subsection{Interaction Module Variants} As we already discussed in Section \textit{Custom Interaction Module}, the interaction module comparing two embeddings and returning a single relevance score may be cosine similarity or a feed-forward neural network. The final interaction module we use is a result of several preliminary experiments. We compare here five different architectures: \begin{itemize} \item Cosine -- compares the query and document embedding using cosine similarity \item Single Hidden -- a neural network mapping the query and document embeddings into a vector of size 3, concatenating it with their Euclidean distance and cosine similarity and finally using a simple feed forward layer with sigmoid activation to obtain the relevance score \item TwinBERT interaction module as proposed by \citet{twinbert} and described in Section \textit{Siamese Transformers}. Additionally, we use a weighted combination of token embeddings from different layers as it turned out to consistently improve performance. \item Final w/o cos/Euc -- our final interaction module as described in Section \textit{Custom Interaction Module} but without cosine similarity and Euclidean distance concatenated to the last hidden layer. \item Final -- our final interaction module as described in Section \textit{Custom Interaction Module} \end{itemize} \begin{table}[!htb] \centering\footnotesize \begin{tabular}{lccwc{1.0cm}}\toprule & \multicolumn{2}{c}{Precision@10} & \\ Model & Standalone & with GBRT & Speed-up \\\midrule Cosine & 42.46 $\pm$ 0.15 & 45.41 $\pm$ 0.14 & $2.7\times$ \\ Single Hidden & 44.37 $\pm$ 0.17 & 46.06 $\pm$ 0.08 & $1.8\times$\\ TwinBERT & 45.09 $\pm$ 0.17 & 46.22 $\pm$ 0.11 & $1.5\times$ \\ Final w/o cos/Euc & 45.09 $\pm$ 0.16 & 46.30 $\pm$ 0.09 & $1.4\times$ \\ Final & 45.26 $\pm$ 0.22 & 46.42 $\pm$ 0.14 & $1.0\times$ \\\bottomrule \end{tabular} \caption{Performance of the systems utilizing different interaction modules. Speed-up measurements regard the sole siamese model, not the GBRT.} \label{table:ablation_custom_metrics} \end{table} Table~\ref{table:ablation_custom_metrics} presents results and also relative speed-ups of the considered interaction modules. We can see that the better the model quality, the worse the model speed. The simplest cosine similarity is the fastest way to compare embeddings, but it performs the worst. On the other hand, our final interaction module surpasses the performance of all other approaches, but is the slowest one. Still, depending on the document length, using the custom metric on top of the precomputed embeddings is roughly 1000$\times$ faster than running the entire query-doc model. Two other noteworthy points are that using the Euclidean and cosine distances as additional features provides a slight gain in the final score, and that our final model surpasses the original TwinBERT interaction module. \subsection{Base Models} We decided to use Electra-small model due to its small size and high performance. 
Apart from the Electra-small model pretrained on Czech web documents, we experimented with three other base models: \begin{itemize} \item an Electra-small model with the same vocabulary but initialized randomly, \item mBERT~\cite{bert} -- a well-known multilingual BERT language representation model, \item RobeCzech~\cite{straka2021robeczech} -- a RoBERTa-base model trained on Czech texts. \end{itemize} \begin{table}[!htb] \setlength{\tabcolsep}{0.27em} \centering\footnotesize \begin{tabular}{lc|wc{1.05cm}wc{1.15cm}|wc{1.05cm}wc{1.15cm}}\toprule & & \multicolumn{4}{c}{Precision@10} \\ & & \multicolumn{2}{c|}{Query-Doc} & \multicolumn{2}{c}{Siamese} \\ Model & Params. & Standal. & w. GBRT & Standal. & w. GBRT \\\midrule Electra (rand.) & 13\,M & 44.21 & 45.67 & 41.55 & 45.39 \\ Electra & 13\,M & 46.30 & 46.93 & 42.46 & 45.41 \\ mBERT & 167\,M & 46.07 & 46.70 & -- & -- \\ RobeCzech & 125\,M & 46.73 & 47.25 & 40.01 & 45.20 \\\bottomrule \end{tabular} \caption{Precision@10 of different underlying BERT-based models. We report results for both the query-doc and the siamese mode. For simplicity, siamese models are trained with cosine similarity and without a teacher.} \label{table:other_base_models} \end{table} We trained all models in the \textit{query-doc} setting. As can be seen in Table~\ref{table:other_base_models}, the RobeCzech model performs best, but is ca. $10\times$ bigger than our Electra-small model. We can also see that despite the relatively large fine-tuning dataset, the pretraining on monolingual data is still beneficial, as the pretrained model outperforms the randomly initialized one. In the \textit{siamese} mode, we trained all models except for mBERT, which we omitted as RobeCzech provided better results in the query-doc setting. We use only cosine similarity as the embedding interaction module. Although the results show a big performance gap between the Electra-small models and the RobeCzech model, we think that the RobeCzech model would require more tuning of the learning rate schedule and other hyperparameters to fully exploit its capabilities. \subsection{ONNX and Quantization} Apart from using the siamese architecture and an Electra-small variant, we tried to speed up our model using the ONNX runtime\footnote{\url{https://github.com/microsoft/onnxruntime}} and model quantization \cite{polino2018quantization}, i.e.\ reducing the precision of the computation. While our approach allows us to precompute document embeddings offline, there are billions of documents in the database and generating the embeddings can thus take a lot of time. We measured different combinations of ONNX conversion and quantization of the embedding module or the interaction module in Python, using one thread on a CPU with one AVX-512 FMA unit. The results are in Table \ref{tab:quantization}. The interaction module running on ONNX with UINT8 model quantization is about $1.9\times$ faster than the Pytorch version, while the difference in quality is small. As for the embedding model, the difference in both speed and quality is bigger.
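A minimal sketch of this kind of export and dynamic quantization, using \texttt{torch.onnx} and \texttt{onnxruntime} (our illustration, assuming the embedding model exposes a \texttt{forward(input\_ids, attention\_mask)} signature; not necessarily the exact production pipeline), looks as follows:

\begin{verbatim}
import torch
from onnxruntime.quantization import quantize_dynamic, QuantType

def export_and_quantize(model, fp32_path, uint8_path, max_len=128):
    model.eval()
    dummy_ids = torch.zeros(1, max_len, dtype=torch.long)
    dummy_mask = torch.ones(1, max_len, dtype=torch.long)
    # Export the module (embedding or interaction part) to ONNX.
    torch.onnx.export(
        model, (dummy_ids, dummy_mask), fp32_path,
        input_names=["input_ids", "attention_mask"],
        output_names=["output"],
        dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                      "attention_mask": {0: "batch", 1: "seq"}},
        opset_version=14)
    # Dynamic (weight-only) UINT8 quantization of the exported graph.
    quantize_dynamic(fp32_path, uint8_path, weight_type=QuantType.QUInt8)
\end{verbatim}

At inference time, such exported graphs can then be executed with an \texttt{onnxruntime} \texttt{InferenceSession}.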
\begin{table}[!h] \centering\footnotesize \begin{tabular}{llcc} \toprule Embedding model & Interaction module & P@10 & Speed-up \\\midrule Pytorch FP32 & Pytorch FP32 & 45.27 & $1.0\times$\\ Pytorch FP32 & ONNX FP32 & 45.27 & $1.2\times$\\ Pytorch FP32 & ONNX UINT8 & 45.26 & $1.9\times$\\\midrule Pytorch FP32 & Pytorch FP32 & 45.27 & $1.0\times$\\ ONNX FP32 & Pytorch FP32 & 45.27 & $1.5\times$ \\ ONNX UINT8 & Pytorch FP32 & 45.13 & $3.0 \times$\\\bottomrule \end{tabular} \caption{Effect of model quantization on quality and speed. Relative speed-up values in the top part refer to the interaction module execution time, and values in the bottom part refer to the embedding model execution time.} \label{tab:quantization} \end{table} \section{Model Size Effect on Response Times} The query evaluation time depends on many factors, which makes it complicated to evaluate meaningfully. To give rough estimates, the query preprocessing phase gets prolonged by 10 ms on average when using the new Electra-small model. If we replaced it with a BERT-base model, the query embedding generation would take ca. 64 ms instead of 10 ms. The retrieval and ranking phase used to take about 133 ms. With our new feature included, the computation takes about 136 ms (+3 ms) on average. Replacing the Electra-small embeddings of size 256 with BERT-base embeddings of size 768 is expected to slow down the ranking stage to 143 ms (+10 ms). \section{Conclusion}\label{sec:conclusion} In this work, we presented a strong and fast variant of a siamese model for relevance ranking based on an Electra language model. We described and evaluated a set of improvements to the baseline siamese model and showed their effect on overall model performance. The model was successfully deployed as an additional feature for a GBRT ranker in a commercial search engine and led to a substantial relative improvement of 3.8\% in quality. Moreover, we released Small-E-Czech, a pretrained Electra-small model, and DaReCzech, a new dataset for text relevance ranking in Czech. The dataset consists of more than 1.6\,M annotated query-document pairs, which makes it one of the largest available datasets for this task. \section{Acknowledgements} We thank the developers and product managers who helped to put our prototype into production, namely Jaroslav Gratz, Aleš Kučík, Daniel Mészáros, Martina Pomikálková, Jakub Šmíd and Petr Vondrášek. We also thank the annotators who annotated the DaReCzech dataset, as well as Ondřej Dušek and the anonymous reviewers for their valuable comments.
\section{Introduction} \small In this contribution, we show how deep learning (DL) algorithms like CNNs are incorporated into the analysis workflow of the MAGIC telescopes to perform full-event reconstruction. We also explore the robustness of CNN-based methods when applied to real observational data and compare their sensitivity to that of the standard \texttt{MARS} analysis~\citep{Zanin:2013,Aleksic:2016}. The DL workflow consists of three main building blocks (see Fig.~\ref{fig:MAGICDL_workflow}). The Monte Carlo (MC) simulations and observational data are processed by the \texttt{MARS} software. A complementary macro translates crucial information into \texttt{uproot}\footnote{\href{https://github.com/scikit-hep/uproot4}{https://github.com/scikit-hep/uproot4}}-readable branches~\citep{pivarski:2021}. Then, the \texttt{DL1-Data-Handler}\footnote{\href{https://github.com/cta-observatory/dl1-data-handler}{https://github.com/cta-observatory/dl1-data-handler}}~\citep{kim:2020} (DL1DH) assembles several data levels from the standard approach and unifies them in a common \texttt{HDF5} data format designed for DL purposes. The training of the CNN-based models and their prediction, i.e.\ the actual full-event reconstruction, are performed with \texttt{CTLearn}\footnote{\href{https://github.com/ctlearn-project/ctlearn}{https://github.com/ctlearn-project/ctlearn}}~\citep{Nieto:2019ak,brill:2019}, a backend for IACT analysis using \texttt{TensorFlow}. \articlefigure[width=1.0\textwidth]{X0-007_f1.eps}{fig:MAGICDL_workflow}{Diagram depicting the main analysis steps of the MAGIC DL analysis with CTLearn.} \section{DL analysis with the MAGIC telescopes} \small \paragraph{Model selection} For this work, \texttt{CTLearn}'s Thin-ResNet (TRN) was selected based on previous studies~\citep{Grespan:2021,Miener:2021}. Stereoscopic information is exploited by concatenating the images (integrated pixel charges and signal arrival times) of MAGIC1 and MAGIC2 channel-wise before feeding the network. We explored two different analysis schemes, where we trained the same TRN model with raw images, containing, besides the Cherenkov light of the shower, also the fluctuations of the Night Sky Background (NSB), and with cleaned images, where pixels dominated by noise rather than Cherenkov light emission are set to zero. The cleaning masks are obtained with the standard \texttt{MARS} analysis cleaning. Since the pixel layout of the MAGIC cameras is a hexagonal lattice, we mapped the images to a Cartesian lattice using bilinear interpolation in order to directly apply CNNs~\citep{Nieto:2019uj}. \paragraph{Validation on simulations} To evaluate the performance, common metrics like ROC curves and energy and angular resolution curves are used, applying the same quality cuts (see Fig.~\ref{fig:Simulation_validation}). The reconstruction performance is obtained using MC gamma simulations coming uniformly from a $ 0.4^{\circ} $ offset of the telescope pointing (ringwobble). To demonstrate the robustness of CNNs trained with cleaned images, we tested all methods for the background rejection against MC proton simulations and observational off data, where we do not expect any gamma-ray signal. \articlefigurethree{X0-007_f2.eps}{X0-007_f3.eps}{X0-007_f4.eps}{fig:Simulation_validation}{The performance parameters are obtained using the MC gamma simulations (ringwobble). \emph{Left)} ROC curves tested against MC proton simulations and observational off data. \emph{Center)} Angular resolution vs. reconstructed energy. \emph{Right)} Energy resolution vs.
simulated energy.} \paragraph{Results on observational data} In this work, 2.93\,h of observations of the standard gamma-ray candle, the Crab Nebula, taken on four different nights in 2018 under good weather conditions at low zenith angles (zd < $35^{\circ}$), are considered. The data have been analyzed with the latest \texttt{MARS} software using the standard settings for analyses focusing on the medium-energy (ME) and low-energy (LE) range. For \texttt{CTLearn}, we strictly adopted the quality cuts from the \texttt{MARS} analysis. The ME analysis (> $250$ GeV) applies the cuts: valid stereo reconstruction, $\theta^{2}$ < 0.009 $\text{deg}^{2}$, hadronness < 0.16 and both Hillas intensity sizes > 300 phe, while the LE analysis (> $100$ GeV) applies the cuts: valid stereo reconstruction, $\theta^{2}$ < 0.02 $\text{deg}^2$, hadronness < 0.28 and both Hillas intensity sizes > 60 phe. To fairly compare the results obtained with CNN-based models with the standard approach (random forests (RF) for the background rejection, look-up tables (LUTs) for the energy estimation and RF for the bidimensional direction reconstruction), the hadronness cut is adjusted in the \texttt{CTLearn} analysis to equalize the background (bkg) rates with those of the corresponding standard MARS analyses (ME or LE). A source detection is determined using a $\theta^{2}$ plot (see Fig.~\ref{fig:theta2CTLearnME} for the \texttt{CTLearn} ME analysis with cleaned images) and the significance (Sig. in Tab.~\ref{tab:results_summary}) calculation (Eq. 17 in~\citep{LiMa:1983}). The main properties of all analyses are summarized in Tab.~\ref{tab:results_summary}. The sensitivity (Sen. in Tab.~\ref{tab:results_summary}) is computed as the strength of the source that gives $\mathrm{excess}/\sqrt{\mathrm{background}} = 5$ after 50\,h. \begin{table} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Analysis & $ N_{on} $ & $ N_{off} $ & $ N_{ex} $ & $ \gamma $ rate [/min] & bkg rate [/min] & Sen. [\% Crab] & Sig. (Li\&Ma) \\ \hline \hline MARS – ME & $ 819 $ & $21.0\pm2.6$ & $798.0\pm28.7$ & $4.54\pm0.16$ & $0.119\pm0.015$ & $0.70\pm0.05$ & $43.0\sigma$\\ \hline CTLearn – ME (raw) & $ 629 $ & $23.3\pm3.1$ & $605.7\pm25.3$ & $3.45\pm0.14$ & $0.133\pm0.018$ & $0.97\pm0.08$ & $36.5\sigma$\\ \hline CTLearn – ME (cleaned) & $ 844 $ & $22.0\pm2.7$ & $822.0\pm29.2$ & $4.68\pm0.17$ & $0.125\pm0.015$ & $0.69\pm0.05$ & $43.6\sigma$\\ \hline \hline MARS – LE & $ 3579 $ & $679.0\pm15.0$ & $2900.0\pm61.7$ & $16.49\pm0.35$ & $3.861\pm0.086$ & $1.09\pm0.03$ & $61.1\sigma$\\ \hline CTLearn – LE (raw) & $ 2730 $ & $673.7\pm20.0$ & $2056.3\pm56.0$ & $11.70\pm0.32$ & $3.832\pm0.114$ & $1.53\pm0.05$ & $47.5\sigma$\\ \hline CTLearn – LE (cleaned) & $ 3536 $ & $680.7\pm15.1$ & $2855.3\pm61.3$ & $16.24\pm0.35$ & $3.872\pm0.086$ & $1.11\pm0.03$ & $60.4\sigma$\\ \hline \end{tabular}} \caption{Summary of all performed analyses of the same Crab Nebula sample.} \label{tab:results_summary} \end{table} \articlefigure[width=0.95\textwidth]{X0-007_f5.eps}{fig:theta2CTLearnME}{$\theta^{2}$ plot for the CTLearn ME analysis with cleaned images.}
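For reference, the significance values quoted in Tab.~\ref{tab:results_summary} follow Eq. 17 of~\citep{LiMa:1983}, and the sensitivity follows the $\mathrm{excess}/\sqrt{\mathrm{background}} = 5$ in 50\,h criterion defined above. A minimal Python sketch of these two calculations is given below; the function names are ours, and details such as the off-region normalization $\alpha$ depend on the actual \texttt{MARS} analysis configuration.

\begin{verbatim}
import numpy as np

def lima_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983); n_off are the raw off counts and
    alpha is the on/off exposure ratio."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

def sensitivity_percent_crab(gamma_rate, bkg_rate, t_obs_min=50 * 60):
    """Source strength (in % of the observed Crab rate) that gives
    excess / sqrt(background) = 5 after t_obs_min minutes."""
    n_ex = gamma_rate * t_obs_min
    n_bkg = bkg_rate * t_obs_min
    return 100.0 * 5.0 * np.sqrt(n_bkg) / n_ex
\end{verbatim}

\section{Conclusions and outlook} \small This contribution shows for the first time that CNN-based full-event reconstruction works for the MAGIC telescopes and that \texttt{CTLearn} analyses are capable of detecting the Crab Nebula with a clear signal.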
We demonstrate that CNNs trained with cleaned images rather than raw images show a stronger robustness, when applying them to observational data, and the performance already matches the sensitivity of detection of the conventional analysis on real data. The performance of CNNs trained with raw images can be optimized by pixel-wise tuning of the NSB noise of the MCs~\citep{Vuillaume:2021} to match the NSB level of each observation. The selected TRN model is relatively shallow and further performance enhancements are foreseen by increasing the model depth/complexity. Future work is planned, where the full performance of CNNs under various observation conditions are evaluated. \acknowledgements \scriptsize We would like to thank the Instituto de Astrofísica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The financial support of the German BMBF, MPG and HGF; the Italian INFN and INAF; the Swiss National Fund SNF; the ERDF under the Spanish Ministerio de Ciencia e Innovación (MICINN) (FPA2017-87859-P, FPA2017- 85668-P, FPA2017-82729-C6-5-R, FPA2017-90566-REDC, PID2019-104114RB-C31, PID2019-104114RB-C32, PID2019- 105510GB-C31C42, PID2019-~107847RB-C44, PID2019-107988GB-C22); the Indian Department of Atomic Energy; the Japanese ICRR, the University of Tokyo, JSPS, and MEXT; the Bulgarian Ministry of Education and Science, National RI Roadmap Project DO1-268/16.12.2019 and the Academy of Finland grant nr. 317637 and 320045 are gratefully acknowledged. This work was also supported by the Spanish Centro de Excelencia “Severo Ochoa” SEV-2016- 0588, SEV-2017-0709 and CEX2019-000920-S, and "María de Maeztu” CEX2019-000918-M, the Unidad de Excelencia “María de Maeztu” MDM-2015-0509-18-2 and the "la Caixa" Foundation (fellowship LCF/BQ/PI18/11630012), by the Croatian Science Foundation (HrZZ) Project IP-2016-06-9782 and the University of Rijeka Project 13.12.1.3.02, by the DFG Collaborative Research Centers SFB823/C4 and SFB876/C3, the Polish National Research Centre grant UMO-2016/22/M/ST9/00382 and by the Brazilian MCTIC, CNPq and FAPERJ. TM acknowledges support from PID2019-104114RB-C32. JLC and DN acknowledges partial support from The European Science Cluster of Astronomy \& Particle Physics ESFRI Research Infrastructures funded by the European Union’s Horizon 2020 research and innovation program under Grant Agreement no. 824064. SY acknowledges financial support from Google LLC through the Google Summer of Code 2020 program. We acknowledge the support of NVIDIA Corporation with the donation of a Titan X Pascal GPU used for part of this research. \\ \\ This paper has gone through internal review by the MAGIC Collaboration. \tiny
\section{Introduction} The classical precision limit of interferometric measurements is determined by quantum projection noise. Entangled many-body spin states with { correlated quantum fluctuations} can overcome this limit and may offer significant precision enhancements~\cite{CavesPRD1981,WinelandPRA1992,BollingerPRA1996,GiovannettiNATPHOT2011,PezzeRMP2018}. { A widely known strategy offering quantum-enhanced precision} in atomic Ramsey spectroscopy measurements is { spin squeezing \cite{KitagawaPRA1993,WinelandPRA1992}}: By { redistributing} the quantum noise into unmeasured observables, the variance of the spin { component} that contains the information about the phase parameter $\phi$ of interest can be reduced below the { standard quantum limit (SQL), $(\Delta\phi)_{\mathrm{SQL}}^2=1/N$, which is the minimum uncertainty for $N$ non-entangled atoms}. To generate the required quantum entanglement, well-controlled interactions are used. In Bose-Einstein condensates, atomic collisions naturally generate entanglement~\cite{SorensenNATURE2001,EPJD,GrossNATURE2010,Treutlein2010}. Alternatively, { effective interactions mediated by an electromagnetic field can be implemented} in optical cavities~\cite{LerouxPRL2010}. In both cases, the one-axis-twisting (OAT) Hamiltonian $\hat{H}=\hbar\chi \hat{S}_z^2$, nonlinear in the spin component $\hat{S}_z$, where $\chi$ is determined by the interaction strength, allows for a unifying description of these interactions. Starting from a coherent spin state, an eigenstate of $\hat{S}_x$, the one-axis-twisting { evolution allows for} the generation of states where a linear (L) spin component, a combination of $\hat{S}_y$ and $\hat{S}_z$, is squeezed, i.e., its uncertainty is decreased below { the standard quantum limit}. Although it was shown to be an experimentally { robust} way to improve measurement precision in atomic interferometers~\cite{GrossNATURE2010,Treutlein2010,LerouxPRL2010,MitchellPRL2012,HostenNATURE2016,CoxPRL2016}, this approach offers a { quantum gain that is limited to} $(\Delta\phi)_{\mathrm{SQL}}^2/(\Delta\phi)^2_{\rm L}\propto N^{2/3}$~\cite{KitagawaPRA1993}. One-axis-twisting generates states that are more sensitive than spin-squeezed states when the evolution is continued beyond the best linear squeezing time, { eventually reaching the Heisenberg limit $(\Delta\phi)_{\mathrm{SQL}}^2/(\Delta\phi)^2_{\rm HL}= N$, which is the maximum gain allowed by quantum mechanics. Recently, an experiment reaching the Heisenberg limit was realized using the spin of a highly magnetic atom ($2J=16$)~\cite{ChalopinNCom2018}, and an experimental demonstration of a quantum gain reaching Heisenberg scaling, i.e. $(\Delta\phi)_{\mathrm{SQL}}^2/(\Delta\phi)^2_{\rm HS}= aN$ with $a<1$, was realized with up to $N=350$ Ytterbium atoms~\cite{VuleticArxiv2021}. To exploit the sensitive features of these highly entangled states, measurement-after-interaction (MAI) strategies such as squeezing echoes have been developed \cite{YurkePRA1986,SchleierSmithPRL2016,FrowisPRL2016,MacriPRA2016,HostenSCIENCE2016,NolanPRL2017,HainePRA2018,HammererQuantum2020} that reduce the sensitivity to imperfections and detection noise.
However, their { fragility towards} decoherence~\cite{HuelgaPRL1997,MonzPRL2011,DemkowiczNATCOMMUN2012,Kittens}, and the need for stable and coherent interactions on sufficiently long time scales, render reaching Heisenberg scaling in systems with large atom numbers extremely challenging.} A promising alternative is provided by over-squeezed spin states~\cite{StrobelSCIENCE2014,BohnetSCIENCE2016,EvrardPRL2019,XuArXiv2021} that are generated by OAT after the linear squeezing time but on time scales that are shorter than those needed to reach Heisenberg scaling. The sensitivity of these states cannot be captured in terms of the squeezing of linear spin observables, but instead requires the measurement of nonlinear spin observables~\cite{GessnerPRL2019} whose squeezing can lead to significant { quantum enhancements} beyond the reach of linear spin squeezing. { Theoretically, the metrological potential of this relevant class of states in the limit of large $N$} is only accessible by analytical approaches, since numerical simulations are limited to moderate particle numbers that are too low to extrapolate the scaling behavior. In this paper, after recalling the most important results on the squeezing of a linear (L) spin observable, we focus on the squeezing of nonlinear spin observables generated by the OAT evolution, its sensitivity enhancement beyond linear spin squeezing and its scaling with the atom number for $N\gg 1$. First, we show that { when a single nonlinear spin observable (NL) of the form $\{\hat{S}_x,\hat{S}_z\}$} is added to the linear components { in the ensemble of accessible observables, the best quantum gain} scales as $(\Delta\phi)_{\mathrm{SQL}}^2/(\Delta\phi)^2_{\rm NL}\propto N^{4/5}$ and is reached on the time scale $\chi t\propto N^{-3/5}$; { while for an optimal linear combination of arbitrary linear and quadratic (Q) spin observables, the best quantum gain scales as $(\Delta\phi)_{\mathrm{SQL}}^2/(\Delta\phi)^2_{\rm Q}\propto N^{6/7}$ and is reached on a time scale $\chi t\propto N^{-4/7}$}. Second, we show that the { measurement-after-interaction} technique gives access to a continuous family of nonlinear spin observables { that} reproduces all the scaling laws mentioned above. More generally, we show that on time scales $\chi t\propto N^{-\alpha}$ of the { one-axis twisting} evolution with $1\geq \alpha\geq 1/2$, the MAI technique allows { one} to achieve a maximal quantum gain that scales as $(\Delta\phi)_{\mathrm{SQL}}^2/(\Delta\phi)^2_{\rm MAI}\propto N^{2-2\alpha}$. By comparing to the quantum Fisher information, which quantifies the maximal sensitivity enhancement over all possible measurements, we demonstrate that the scaling law of the MAI technique is optimal at any time in the considered time window $1 \geq \alpha \geq 1/2$. In order to study the effect of decoherence on this scaling law, we include in our analytical study two collective dephasing { processes} corresponding to realistic noise in atomic experiments: { For} a ballistic dephasing process, described by fluctuating energy levels in the Hamiltonian, we predict a critical value of the preparation time at which we observe a discontinuous change in the scaling law { of the quantum gain}. { For} a dephasing of diffusive nature, described by a Lindblad master equation, we find that the scaling exponent is reduced by a factor of 2 independently of the dephasing strength.
{ In addition to the scaling laws in the large-$N$ limit, first reported in Ref.~\cite{BaamaraPRL2021}, we present general expressions of the quantum gain for arbitrary atom numbers and identify finite-size corrections}. Finally we study the effect of particle losses on the squeezing of a nonlinear or a quadratic spin observable. \section{Optimization over rotation axis and measurement observables}\label{sec:methode} We consider an ensemble of $N$ two-level atoms that is described in terms of the collective spin observables $\hat{\vec{S}}=(\hat{S}_x,\hat{S}_y,\hat{S}_z)^T$, where $\hat{S}_{k}=\sum_{i=1}^N\hat{\sigma}^{(i)}_k/2$ and $\sigma^{(i)}_k$ with $k=x,y,z$ is the Pauli matrix for the $i$-th atom. Starting from the spin-coherent state $|\psi_0\rangle$ such that \begin{align}\label{eq:CCS} \hat{S}_x|\psi_0\rangle=\frac{N}{2}|\psi_0\rangle, \end{align} an entangled spin state $|\psi_t\rangle=\hat{U}_t|\psi_0\rangle$ is generated via the OAT evolution $\hat{U}_t=e^{-i\chi t \hat{S}_z^2}$ at time $t$. A phase $\phi$ is imprinted at this time by the rotation $e^{-i\hat{S}_{\vec{n}}\,\phi}$, with $\hat{S}_{\vec{n}}=\vec{n}\cdot\hat{\vec{S}}$, where $\vec{n}$ is a unit vector in the plane perpendicular to the initial spin polarization, $\vec{e}_x$ in this case. The goal of the protocol is to infer the best estimate of the phase $\phi$ from the measurement of an observable $\hat{X}$, subsequently to the phase imprinting. The inferred phase uncertainty is given by \cite{WinelandPRA1992} \begin{align}\label{eq:uncertainty} (\Delta\phi)^2=\left.\frac{(\Delta \hat{X})^2}{|\partial_{\phi}\langle\hat{X}\rangle|^2}\right|_{\phi=0}, \end{align} where $\partial_{\phi}\equiv \partial/\partial\phi$, while $\langle\hat{X}\rangle$ and $(\Delta \hat{X})^2$ are the mean value and the variance of the measured observable $\hat{X}$ respectively. Since any additional shift can be absorbed by the initial state, we focus on the estimation of the phase in the vicinity of zero without restriction of generality. The denominator in (\ref{eq:uncertainty}) is given by \begin{equation} \left.\frac{\partial \langle\hat{X}\rangle}{\partial\phi}\right|_{\phi=0}=\frac{\partial}{\partial\phi}\langle\psi_0|\hat{U}^{\dagger}_t e^{i\hat{S}_{\vec{n}}\,\phi}\hat{X}e^{-i\hat{S}_{\vec{n}}\,\phi}\hat{U}_t|\psi_0\rangle|_{\phi=0} =i\langle\psi_t|[\hat{S}_{\vec{n}},\hat{X}]|\psi_t\rangle. \end{equation} By replacing it in (\ref{eq:uncertainty}), we obtain \begin{equation} (\Delta\phi)^2=\frac{(\Delta \hat{X})^2}{|\langle[\hat{S}_{\vec{n}},\hat{X}]\rangle|^2}. \end{equation} For the initial non-correlated state (\ref{eq:CCS}) and $\hat{X}=\hat{S}_{\vec{m}}$ a spin component in the $yz$-plane with $\vec{m}\perp\vec{n}$, the phase uncertainty reaches the SQL. With respect to this limit, we quantify the quantum metrological gain given by the state prepared at the time $t$ of the OAT evolution, with a rotation around $\vec{n}$ and a measurement of an observable $\hat{X}$, by the parameter~\cite{GessnerPRL2019} \begin{align}\label{eq:Xidef} \xi^{-2}(\chi t,\hat{S}_{\vec{n}},\hat{X})\equiv\frac{(\Delta\phi)_{\mathrm{SQL}}^2}{(\Delta\phi)^2}=\frac{|\langle[\hat{S}_{\vec{n}},\hat{X}]\rangle|^2}{N(\Delta\hat{X})^2}, \end{align} where all the averages are taken in the state $\hat{U}_t|\psi_0\rangle$. 
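Before turning to the optimization, we note that expression~(\ref{eq:Xidef}) is straightforward to evaluate numerically for moderate $N$ in the symmetric (Dicke) subspace of dimension $N+1$. The following minimal Python sketch (ours, for illustration only, and not used for the results of this paper) evaluates the gain for given rotation and measurement directions:

\begin{verbatim}
import numpy as np

def collective_spin_ops(N):
    """Collective spin matrices in the symmetric subspace, j = N/2."""
    j = N / 2
    m = j - np.arange(N + 1)              # m = j, j-1, ..., -j
    Sz = np.diag(m)
    Sp = np.zeros((N + 1, N + 1))
    Sp[np.arange(N), np.arange(1, N + 1)] = np.sqrt(
        j * (j + 1) - m[1:] * (m[1:] + 1))
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / (2 * 1j)
    return Sx, Sy, Sz, m

def oat_state(N, chi_t):
    """U_t |psi_0> with |psi_0> the coherent state polarized along x."""
    Sx, Sy, Sz, m = collective_spin_ops(N)
    evals, evecs = np.linalg.eigh(Sx)
    psi0 = evecs[:, np.argmax(evals)]          # S_x eigenstate, eigenvalue N/2
    return np.exp(-1j * chi_t * m**2) * psi0   # exp(-i chi t S_z^2) is diagonal

def gain(psi, S_n, X, N):
    """Metrological gain |<[S_n, X]>|^2 / (N Var X) for given directions."""
    mean = lambda A: np.vdot(psi, A @ psi)
    comm = mean(S_n @ X - X @ S_n)
    var = mean(X @ X).real - mean(X).real**2
    return abs(comm)**2 / (N * var)

# Example: N = 100, chi t = N^(-2/3), rotation about z, measurement of S_y
# (the choice of rotation and observable is not yet optimized here).
N = 100
Sx, Sy, Sz, m = collective_spin_ops(N)
psi = oat_state(N, N**(-2/3))
print(gain(psi, Sz, Sy, N))
\end{verbatim}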
In order to analytically optimize the metrological gain (\ref{eq:Xidef}) with respect to the rotation axis $\vec{n}$ and the measurement observable $\hat{X}$, we assume that we have a family of $q$ accessible operators $\hat{\vec{X}}=(\hat{X}_1,...,\hat{X}_q)^T$ and we can measure any arbitrary linear combination $\hat{X}_{\vec{m}}=\vec{m}\cdot\hat{\vec{X}}= \sum_{k=1}^q m_k\hat{X}_k$. For a given measurement direction $\vec{m}$, we can re-express (\ref{eq:Xidef}) as \begin{align}\label{eq:xireex} \xi^{-2}(\chi t,\hat{S}_{\vec{n}},\hat{X}_{\vec{m}})=\frac{|\vec{n}^T C[\chi t,\hat{\vec{S}},\hat{\vec{X}}] \vec{m}|^2}{N (\vec{m}^T \Gamma[\chi t,\hat{\vec{X}}] \vec{m})}. \end{align} where we introduced the $2\times q$ commutator matrix \begin{align}\label{eq:Cdef} C[\chi t,\hat{\vec{S}},\hat{\vec{X}}]_{kl}\equiv-i\langle[\hat{S}_k,\hat{X}_l]\rangle. \end{align} and the $q\times q$ covariance matrix \begin{equation}\label{eq:gammadef} \Gamma[\chi t,\hat{\vec{X}}]_{kl}\equiv\mathrm{Cov}(\hat{X}_k,\hat{X}_l) =\frac{1}{2}\langle \hat{X}_k \hat{X}_l+\hat{X}_l\hat{X}_k\rangle-\langle \hat{X}_k\rangle\langle \hat{X}_l\rangle. \end{equation} For a state prepared at time $t$ of the OAT evolution, the maximum of~(\ref{eq:xireex}) over the rotation direction $\vec{n}$ and the measurement direction $\vec{m}$ corresponds to the maximum eigenvalue $\lambda_{\rm max}$ of the matrix $C \Gamma^{-1}C^T$~\cite{GessnerPRL2019}: \begin{align}\label{eq:Xilambda} \xi^{-2}_{\hat{\vec{X}}}(\chi t)=\max_{\vec{m},\vec{n}}\xi^{-2}(\chi t,\hat{S}_{\vec{n}},\hat{X}_{\vec{m}})=\frac{\lambda_{\max}(C \Gamma^{-1}C^T)}{N}, \end{align} and is reached with the choice $\vec{n}=\vec{n}_{\max}$ where $\vec{n}_{\max}$ is the eigenvector of $C \Gamma^{-1}C^T$ corresponding to $\lambda_{\max}$. The optimal measurement direction is $\vec{m}_{\mathrm{opt}}=\alpha\Gamma^{-1}C^T\vec{n}_{\max}$, where $\alpha \in \mathbb{R}$ is a normalization constant. \begin{figure*}[tb] \centering \includegraphics[width=\textwidth]{Fig1_GainVsTime.pdf} \caption{{ Metrological gain $\xi^{-2}$ in the limit of large $N$ including finite size corrections (\ref{eq:xifiniteL}), (\ref{eq:xifiniteNL}), (\ref{eq:xifiniteq}) and (\ref{eq:MAIxinorm}) (solid lines) for (a) linear (L), (b) nonlinear (NL), (c) quadratic (Q), and (d) MAI measurement strategies, compared to the exact metrological gain (dashed lines). The spin number is $N=10^3$ (top row) and $N=10^4$ (bottom row).}} \label{unified} \end{figure*} \section{Squeezing of linear and quadratic spin observables} Starting with a coherent spin state, at short times, the OAT evolution $\hat{U}_t$ leads to the squeezing of a linear spin component $\hat{X}_{\rm L}=\hat{S}_{\vec{m}}$. An evolution beyond the best linear squeezing time allows for the generation of non-Gaussian spin states where nonlinear spin observables are squeezed. For each given choice of a family $\hat{\vec{X}}$ of accessible operators that can contain nonlinear spin observables, in addition to $\hat{S}_{\vec{m}}$, the optimization explained in Sec. \ref{sec:methode} allows us to identify, at any time $t$ of the one-axis-twisting evolution, the best squeezed observable and the corresponding metrological gain. \subsection{Linear spin squeezing}\label{seq. FamiliesL} Let us first consider the squeezing of a linear spin observable $\hat{X}_{\rm L}=\hat{S}_{\vec{m}}=\sum_{i=x,y,z}m_i\hat{S}_i$. 
By considering that the initial state of the system is (\ref{eq:CCS}) where the collective spin is in the $x$ direction, we can show that $\langle\psi_0|\hat{U}_t^{\dagger}[\hat{S}_x,\hat{S}_i]\hat{U}_t|\psi_0\rangle=0$ for any $i=x,y,z$. This allows us to restrict $\vec{m}$ to the $yz$-plane. In order to identify the best squeezed linear observable and the corresponding metrological gain, we use the technique explained in Sec. \ref{sec:methode} and we set $\hat{\vec{X}}_{\mathrm{L}}=\left(\hat{S}_y,\hat{S}_z\right)^T$, meaning that we study the squeezing of a linear observable of the form \begin{align}\label{eq:Xl} \hat{X}_{\mathrm{L}}=m_y\hat{S}_y+m_z\hat{S}_z. \end{align} The fact that the one-axis-twisting evolution is analytically solvable allows us to determine, see \ref{A}, the commutator (\ref{eq:Cdef}) and the covariance (\ref{eq:gammadef}) matrices for a given $N$ at each time $t$. The optimization over the rotation $\vec{n}$ and the measurement $\vec{m}$ directions gives us the metrological gain (\ref{eq:Xilambda}) in the limit $N\gg1$ at $\chi t< 1/\sqrt{N}$ \begin{equation}\label{eq:XiThLimitL} \left(\xi^{-2}_{\rm L}(\chi t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2}{1+N^4(\chi t)^{6}/6}. \end{equation} The best metrological gain and the corresponding time can be obtained from a maximization of (\ref{eq:XiThLimitL}) over $\chi t$ as \cite{SinatraFro2012} \begin{equation} \chi t_{\mathrm{L,best}}=3^{1/6} N^{-2/3} \qquad ; \qquad \left(\xi_{\mathrm{L,best}}^{-2}\right)_{N\rightarrow\infty}=\frac{2}{3^{2/3}} N^{2/3}. \label{eq:bestL} \end{equation} By introducing the rescaled time $\tilde{\chi t}=\chi t/(\chi t_{\rm L,best})$ and by expanding the exact metrological gain $\xi^{-2}_{\rm L}(\chi t)$ up to $\mathcal{O}(N^{0})$, we obtain \begin{equation}\label{eq:xifiniteL} \quad\frac{\xi^{-2}_{\mathrm{L}}}{\left(\xi^{-2}_{\mathrm{L,best}}\right)_{N\to\infty}}= \frac{3}{2}\frac{(\tilde{\chi t})^2}{1+(\tilde{\chi t})^{6}/2} \left[1-3^{1/3}(\tilde{\chi t})^2 N^{-1/3}+\mathcal{O}(N^{-2/3})\right]. \end{equation} This { expression is shown as a solid line in Fig.~\ref{unified}(a) as a function of $\tilde{\chi t}$ for $N=10^3,10^4$, and compared to the exact metrological gain. For $\tilde{\chi t}=1$ we obtain the best metrological gain including finite size corrections} \begin{eqnarray} \xi^{-2}_{\mathrm{L,best}}&=\left(\xi_{\mathrm{L,best}}^{-2}\right)_{N\rightarrow\infty}\left[1-3^{1/3} N^{-1/3}+\mathcal{O}(N^{-2/3})\right].\label{eq:XibestL} \end{eqnarray} that is shown as the red horizontal dashed line in Fig.~\ref{best_scaling}(a). The optimal rotation is $\hat{S}_{\vec{n}_{\max}}=\vec{n}_{\rm max}\cdot\hat{\vec{S}}_\perp$ where $\vec{n}_{\max}$ is a unit vector in the $yz$-plane with $\vec{n}_{\max}=(\cos\theta_{n}^{\rm L},\sin\theta_{n}^{\rm L})^T$, and the best squeezed linear spin observable is $\hat{S}_{\vec{m}_{\mathrm{opt}}}=\vec{m}_{\mathrm{opt}}\cdot\hat{\vec{X}}_{\mathrm{L}}$ with $\vec{m}_{\rm opt}=(\cos\theta_{m}^{\rm L},\sin\theta_{m}^{\rm L})^T$. In the limit of large $N$, we obtain \begin{equation} \theta_{n}^{\rm L}=3^{-1/6}N^{-1/3}+\mathcal{O}(N^{-2/3}) \qquad ; \qquad \theta_{m}^{\rm L}=-\frac{\pi}{2}+3^{-1/6}N^{-1/3}+\mathcal{O}(N^{-2/3}). 
\end{equation} The interferometric estimation of the unknown phase $\phi$ using the state prepared at $\chi t_{\rm L, best}$ of the OAT dynamics, with the rotation generator $\hat{S}_{\vec{n}_{\max}}$ and the measurement of the best squeezed linear observable $\hat{S}_{\vec{m}_{\rm opt}}$, leads to the sub-SQL phase uncertainty \begin{align}\label{eq:uncl} \Delta\phi\simeq\frac{3^{1/3}}{\sqrt{2}}\frac{1}{N^{5/6}}. \end{align} \subsection{Nonlinear spin squeezing}\label{seq. Families} In addition to $\hat{S}_{\vec{m}}$, we first consider a single second-order observable $\frac{1}{2}\{\hat{S}_x,\hat{S}_z\}$, where $\{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}$ denotes the anticommutator of $\hat{A}$ and $\hat{B}$. This corresponds to the choice of the nonlinear family $\hat{\vec{X}}_{\mathrm{NL}}=\left(\hat{S}_y,\hat{S}_z,\frac{1}{2}\{\hat{S}_x,\hat{S}_z\}\right)^T$. We thus explore the squeezing of a nonlinear observable of the form \begin{align}\label{eq:Xnl} \hat{X}_{\mathrm{NL}}=m_y\hat{S}_y+m_z\hat{S}_z+\frac{m_{xz}}{2}\{\hat{S}_x,\hat{S}_z\}. \end{align} The analytical calculation of the commutator (\ref{eq:Cdef}) and covariance (\ref{eq:gammadef}) matrices (given in \ref{A}) allows us to deduce the nonlinear metrological gain for $N\gg1$ at $\chi t< 1/\sqrt{N}$ as{\footnote{We calculate analytically the inverse of the $3\times 3$ covariance matrix $\Gamma$ and diagonalize the $2\times 2$ matrix $C\Gamma^{-1}C^T$.}} \begin{equation}\label{eq:XiThLimit} \left(\xi^{-2}_{\rm NL}(\chi t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2}{1+N^6(\chi t)^{10}/270}. \end{equation} By maximizing (\ref{eq:XiThLimit}) over $\chi t$, { we find the scaling with $N$ of the} best metrological gain and the corresponding time: \begin{equation}\label{eq:bestinf} \chi t_{\mathrm{NL,best}}=\left(\frac{5}{2}\right)^{1/10} 3^{3/10} N^{-3/5} \qquad ; \qquad \left(\xi_{\mathrm{NL,best}}^{-2}\right)_{N\rightarrow\infty}=2\left(\frac{2}{5}\right)^{4/5} 3^{3/5} N^{4/5}. \end{equation} In order to obtain the first finite-size corrections to (\ref{eq:XiThLimit}), we introduce the rescaled time $\tilde{\chi t}=\frac{\chi t}{\chi t_{\rm NL,best}}$ and obtain \begin{equation} \frac{\xi^{-2}_{\mathrm{NL}}}{\left(\xi^{-2}_{\mathrm{NL,best}}\right)_{N\to\infty}} = \frac{5(\tilde{\chi t})^2}{4+(\tilde{\chi t})^{10}}\left[1-\left(\frac{135}{2}\right)^{1/5}(\tilde{\chi t})^2N^{-1/5}+ \frac{5(\tilde{\chi t})^4}{4+(\tilde{\chi t})^{10}}\: \frac{221 (\tilde{\chi t}) ^{10}+672} {40500^{1/5} \,28}\: N^{-2/5}+\mathcal{O}(N^{-3/5})\right]. \label{eq:xifiniteNL} \end{equation} A representation of (\ref{eq:xifiniteNL}) as a function of $\tilde{\chi t}$ for $N=10^3,10^4$, compared to the exact metrological gain, is shown in Fig.~\ref{unified}(b). For $\tilde{\chi t}=1$, we obtain { the best nonlinear metrological gain including finite-size corrections} \begin{equation} \xi^{-2}_{\mathrm{NL,best}}=\left(\xi_{\mathrm{NL,best}}^{-2}\right)_{N\rightarrow\infty}\left[1-\left(\frac{5}{2}\right)^{1/5} 3^{3/5} N^{-1/5}\right. \left.+\frac{893}{2^{2/5}3^{4/5}5^{3/5}28}N^{-2/5}+\mathcal{O}(N^{-3/5})\right], \label{eq:XibestNL} \end{equation} shown as the orange horizontal dashed line in Fig.~\ref{best_scaling}(a).
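For moderate $N$, the scaling~(\ref{eq:bestinf}) can also be checked numerically by applying the optimization~(\ref{eq:Xilambda}) to the family~(\ref{eq:Xnl}). A short sketch (ours, for illustration only), reusing the helper functions of the numerical sketch in Sec.~\ref{sec:methode}, reads:

\begin{verbatim}
def optimal_gain(psi, family, generators, N):
    """lambda_max(C Gamma^{-1} C^T) / N for a list of accessible observables."""
    mean = lambda A: np.vdot(psi, A @ psi)
    C = np.array([[(-1j * mean(S @ X - X @ S)).real for X in family]
                  for S in generators])
    Gamma = np.array([[(0.5 * mean(Xk @ Xl + Xl @ Xk)
                        - mean(Xk) * mean(Xl)).real for Xl in family]
                      for Xk in family])
    M = C @ np.linalg.solve(Gamma, C.T)
    return np.linalg.eigvalsh((M + M.T) / 2).max() / N

N = 200
Sx, Sy, Sz, m = collective_spin_ops(N)
psi = oat_state(N, N**(-3/5))                     # chi t of the order N^(-3/5)
family_NL = [Sy, Sz, (Sx @ Sz + Sz @ Sx) / 2]     # nonlinear family as above
print(optimal_gain(psi, family_NL, [Sy, Sz], N))  # rotations in the yz-plane
\end{verbatim}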
The optimal rotation direction $\vec{n}_{\max}=(\cos\theta_{n}^{\rm NL},\sin\theta_{n}^{\rm NL})^T$ is given in the limit of large $N$ by \begin{align} \theta_{n}^{\rm NL}=\left(\frac{2}{5}\right)^{1/10}3^{-3/10}N^{-2/5}+\mathcal{O}(N^{-3/5})\,, \end{align} { and the best spin observable among the nonlinear family $\hat{\vec{X}}_{\mathrm{NL}}$ is $\hat{X}_{\vec{m}_{\mathrm{opt}}}=\vec{m}_{\mathrm{opt}}\cdot\hat{\vec{X}}_{\mathrm{NL}}$ where we write \begin{equation} \vec{m}_{\rm opt}=(\sin\varphi_{m}^{\rm NL}\cos\theta_{m}^{\rm NL},\sin\varphi_{m}^{\rm NL}\sin\theta_{m}^{\rm NL},\cos\varphi_{m}^{\rm NL})^T \qquad \mbox{with} \quad \theta_{m}^{\rm NL} \in [0,2\pi] \quad \mbox{and} \quad \varphi_{m}^{\rm NL}\in [0,\pi] \end{equation} and, in the limit of large $N$, we find \begin{equation} \theta_{m}^{\rm NL}=-\frac{\pi}{2}+\frac{3^{7/10}}{2^{9/10}5^{1/10}}N^{-2/5}+\mathcal{O}(N^{-3/5}) \qquad ; \qquad \varphi_{m}^{\rm NL}=\frac{\pi}{2}+\frac{1}{N}+\mathcal{O}(N^{-6/5}). \label{eq:nlcomp} \end{equation} { Note that, since $\hat{S}_x$ is of order $N$, the contribution $m_{xz}\{\hat{S}_x,\hat{S}_z\}$ of the nonlinear observable to $\hat{X}_{\rm NL}$ (\ref{eq:Xnl}) is comparable to that of the linear observable although $m_{xz}=\cos\varphi_{m}^{\rm NL}$ is of order $1/N$}. { If a phase $\phi$ is imprinted in the system at $\chi t_{\rm NL,best}$ after the OAT evolution, the measurement of $\hat{X}_{\rm NL}$ allows us to estimate the value of the phase $\phi$ with an uncertainty } \begin{align}\label{eq:uncnl} \Delta\phi\simeq\frac{1}{\sqrt{2}}\left(\frac{5}{2}\right)^{4/10}3^{-3/10}\frac{1}{N^{9/10}}\,, \end{align} clearly surpassing the squeezing of a linear observable (\ref{eq:uncl}) and approaching the Heisenberg limit $\Delta\phi= 1/N$.} \subsection{Quadratic spin squeezing} We now explore the squeezing of an arbitrary linear combination of spin observables up to second order. First, we find numerically that in the time window $0<\chi t\leq1/\sqrt{N}$ of the one-axis-twisting evolution, the best squeezed quadratic observable is a combination of only four observables, $\hat{\vec{X}}_{\rm Q}=(\hat{S}_y,\hat{S}_z,\frac{1}{2}\{\hat{S}_x,\hat{S}_z\},\frac{1}{2}\{\hat{S}_x,\hat{S}_y\})^T$. For this reason, we limit ourselves, in the following, to the observables $\hat{X}_{\mathrm{Q}}$ of the form \begin{align}\label{eq:Xnlq} \hat{X}_{\mathrm{Q}}=m_y \hat{S}_y+m_z \hat{S}_z +\frac{m_{xz}}{2} \{\hat{S}_x,\hat{S}_z\}+\frac{m_{xy}}{2} \{\hat{S}_x,\hat{S}_y\}. \end{align} { By proceeding similarly to the nonlinear case\footnote{ This time we calculate the inverse of the $4\times 4$ covariance matrix, which is still analytically possible, and we take the limit $N\rightarrow\infty$ in the metrological gain $\xi^{-2}_{\rm Q}(\chi t)$ calculated by (\ref{eq:Xilambda}). The elements of the covariance and commutator matrices for the quadratic case are given in \ref{A}.}} we obtain for $\chi t<1/\sqrt{N}$: \begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{Fig2_Xi_chit.pdf} \caption{(a) Quantum metrological gain { for the linear $\xi^{-2}_{\mathrm{L}}$, nonlinear $\xi^{-2}_{\mathrm{NL}}$, quadratic $\xi^{-2}_{\mathrm{Q}}$ and MAI $\xi^{-2}_{\mathrm{MAI}}$ measurement strategies as a function of time, compared to the quantum Fisher information with $N=10^4$.
The solid vertical and horizontal lines represent the corresponding (exact) best meteorological gain and best time, while the dashed horizontal and vertical lines represent the analytical scaling laws in the limit of large $N$ with finite size corrections}. (b) Optimisation over the second interaction time $\tau$ { in the MAI technique. The plot shows the best time $\tau_{\mathrm{opt}}$} as a function of the squeezing time $t$ for $N=10^{3}$ (blue) and $N=10^8$ (orange). In the relevant time frame $\chi t\leq 1/\sqrt{N}$, we have $\tau_{\mathrm{opt}}\simeq -a t +b$ where $a$ and $b+1$ are represented in the inset as functions of $N$.} \label{best_scaling} \end{figure} \begin{equation}\label{eq:XiThLimitq} \left(\xi^{-2}_{\rm Q}(\chi t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2}{1+N^8(\chi t)^{14}/875}. \end{equation} { The best metrological gain and the corresponding time are obtained by maximizing (\ref{eq:XiThLimitq}) over $\chi t$:} \begin{equation} \chi t_{\mathrm{Q,best}}\simeq\left(\frac{7}{6}\right)^{1/14} 5^{3/14} N^{-4/7} \qquad ; \qquad \left(\xi_{\mathrm{Q,best}}^{-2}\right)_{N\rightarrow\infty}=\left(\frac{6}{7}\right)^{6/7} 5^{3/7} N^{6/7}. \label{eq:bestxiq} \end{equation} Introducing the rescaled time $\tilde{\chi t}=\frac{\chi t}{\chi t_{\rm Q,best}}$ we obtain the first finite-size corrections to (\ref{eq:XiThLimitq}) as \begin{align}\label{eq:xifiniteq} \quad\frac{\xi^{-2}_{\mathrm{Q}}}{\left(\xi^{-2}_{\rm Q,best}\right)_{N\to\infty}}=\frac{7(\tilde{\chi t})^2}{6+(\tilde{\chi t})^{14}} \left[1-\left(\frac{875}{6}\right)^{1/7}(\tilde{\chi t})^2 N^{-1/7} + \frac{7(\tilde{\chi t})^4}{6+(\tilde{\chi t})^{14}} \: \frac{86 (\tilde{\chi t})^{14}+297}{5^{1/7}6^{2/7}7^{5/7}15}\: N^{-2/7} \notag \right. \\ \left. \quad -\frac{7(\tilde{\chi t})^6}{6+(\tilde{\chi t})^{14}} \: \frac{61(\tilde{\chi t})^{14}+147}{3^{10/7}5^{5/7}7^{4/7}2^{3/7}}\: N^{-3/7} +\mathcal{O}(N^{-4/7})\right] \end{align} represented in Fig.~\ref{unified}(c), and finite size corrections to the best quadratic metrological gain $\xi^{-2}_{\rm Q,best}$ \begin{equation}\label{eq:XibestQ} \xi^{-2}_{\mathrm{Q,best}}=\left(\xi_{\mathrm{Q,best}}^{-2}\right)_{N\rightarrow\infty}\left[1-\left(\frac{7}{6}\right)^{1/7} 5^{3/7} N^{-1/7}+\frac{383}{5^{1/7}6^{2/7}7^{5/7}15}N^{-2/7} -\frac{104\times 2^{4/7}}{3^{10/7}5^{5/7}7^{4/7}}N^{-3/7}+\mathcal{O}(N^{-4/7})\right] \end{equation} that is represented as the green dashed horizontal line in Fig.~\ref{best_scaling}(a). The optimal rotation direction is $\vec{n}_{\rm max}=(\cos\theta_{n}^{\rm Q},\sin\theta_{n}^{\rm Q})^T$, and the best observable is $\hat{X}_{\vec{m}_{\rm opt}}=\vec{m}_{\rm opt}\cdot\hat{\vec{X}}_{\rm Q}$ where $\vec{m}_{\rm opt}$ is in this case a four-dimensional unit vector corresponding to the set of observables $(\hat{S}_y,\hat{S}_z,\frac{1}{2}\{\hat{S}_x,\hat{S}_z\},\frac{1}{2}\{\hat{S}_x,\hat{S}_y\})^T$ that can be written as \begin{equation} \vec{m}_{\rm opt}=(\sin\omega_m^{\rm Q}\sin\varphi_m^{\rm Q}\cos\theta_m^{\rm Q}, \sin\omega_m^{\rm Q}\sin\varphi_m^{\rm Q}\sin\theta_m^{\rm Q}, \sin\omega_m^{\rm Q}\cos\varphi_m^{\rm Q}, \cos\omega_m^{\rm Q})^T \,. 
\end{equation} In the limit $N\gg 1$, we obtain \begin{eqnarray} \theta_{n}^{\rm Q}&=&\left(\frac{6}{7}\right)^{1/14}5^{-3/14}N^{-3/7}+\mathcal{O}(N^{-4/7}) \qquad;\qquad \theta_m^{\rm Q}=-\frac{\pi}{2}+\frac{2^{15/14}}{3\times 5^{3/14}}\left(\frac{7}{3}\right)^{13/14}N^{-3/7}+\mathcal{O}(N^{-4/7}) \\ \varphi_m^{\rm Q}&=&\frac{\pi}{2}-\frac{4}{3N}+\mathcal{O}(N^{-8/7}) \qquad ; \qquad \qquad \qquad \omega_m^{\rm Q}=\frac{\pi}{2}+\frac{2}{3}\left(\frac{2}{7}\right)^{1/14}\frac{1}{3^{13/14}5^{3/14}}N^{-10/7}+\mathcal{O}(N^{-11/7}). \end{eqnarray} { By taking into the account that $\hat{S}_x$ is of the order of $N$, we note that the contribution of the two nonlinear observables $\{\hat{S}_x,\hat{S}_z\}$ and $\{\hat{S}_x,\hat{S}_y\}$ to $\hat{X}_{\vec{m}_{\rm opt}}$ are respectively of the same order and $N^{-3/7}$ smaller than the contribution of the linear observable}. The squeezing of the quadratic observable~(\ref{eq:Xnlq}) allows to achieve an { uncertainty \begin{align}\label{eq:uncq} \Delta\phi\simeq\left(\frac{7}{6}\right)^{6/14} 5^{-3/14} \frac{1}{N^{13/14}}, \end{align} on the inferred phase} which is even closer to the Heisenberg limit than the uncertainty~(\ref{eq:uncnl}) attained by the squeezing of the nonlinear observable~(\ref{eq:Xnl}). As expected, the uncertainty on the phase $\Delta\phi$ decreases as { both the preparation time of the state by OAT evolution and the nonlinearity of the measured spin observable increase}. \section{Scaling laws of measurement-after-interaction technique} \label{sec:MAI} As shown above, { the evolution with the one-axis twisting Hamiltonian, used as a system preparation before phase imprinting,} allows to achieve a high metrological gain through the squeezing of nonlinear spin observables. Such observables, that are higher moments of the spin components, can be extracted from the statistics of linear spin observables~\cite{LuckeSCIENCE2011,StrobelSCIENCE2014,BohnetSCIENCE2016,EvrardPRL2019,XuArXiv2021}. However, due to the increased measurement time and the need for low detection noise, this is challenging to achieve in systems with large atom numbers. As we will show in this section, the MAI technique~\cite{SchleierSmithPRL2016,FrowisPRL2016,NolanPRL2017} represents an alternative method for measuring a nonlinear spin observable directly. For that, after the phase impinting and prior to the measurement of a linear spin observable $\hat{S}_{\vec{m}}$ with $\vec{m}=(m_x,m_y,m_z)^T$, we allow a second evolution $\hat{U}_{\tau}$ of the system with the OAT Hamiltonian $\hat{H}=\hbar\chi\hat{S}_z^2$. Mathematically, this is equivalent to the measurement of the nonlinear spin observable \begin{align}\label{eq:maix} \hat{X}_{\rm MAI}=\hat{U}_{\tau}^{\dagger}\hat{S}_{\vec{m}}\hat{U}_{\tau}=\sum_{k=x,y,z}m_k e^{i\chi\tau\hat{S}_z^2}\hat{S}_k e^{-i\chi\tau\hat{S}_z^2}. \end{align} By expanding (\ref{eq:maix}) up to linear order in $\chi \tau$, we obtain \begin{align}\label{eq:maixapp} \hat{X}_{\rm MAI}=\hat{S}_{\vec{m}}{ -\chi\tau m_x\{\hat{S}_y,\hat{S}_z\}}+\chi\tau m_y\{\hat{S}_x,\hat{S}_z\}+ \mathcal{O}(\chi\tau)^2. \end{align} Hence, a OAT evolution up to $\chi\tau= m_{xz}/(2m_y)$ followed by a measurement of the linear spin observable $\hat{S}_{\vec{m}}$ { with $m_x=0$} is equivalent to first order in $\chi\tau$ to the measurement of the nonlinear spin observable~(\ref{eq:Xnl}). 
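The first-order equivalence expressed by (\ref{eq:maixapp}) is easy to verify numerically with exact collective-spin matrices for a modest atom number. The Python sketch below (the values $N=20$ and $\chi\tau=10^{-3}$ are arbitrary choices for illustration) compares the exactly conjugated observable $e^{i\chi\tau\hat{S}_z^2}\hat{S}_y e^{-i\chi\tau\hat{S}_z^2}$ with its first-order approximation $\hat{S}_y+\chi\tau\{\hat{S}_x,\hat{S}_z\}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 20            # illustrative atom number, collective spin j = N/2
chi_tau = 1.0e-3  # small second interaction time (arbitrary choice)

j = N / 2.0
m = np.arange(j, -j - 1, -1)                          # m = j, j-1, ..., -j
Sz = np.diag(m)
Sp = np.diag(np.sqrt(j*(j+1) - m[1:]*(m[1:]+1)), 1)   # raising operator S+
Sm = Sp.conj().T
Sx = 0.5 * (Sp + Sm)
Sy = -0.5j * (Sp - Sm)

U = expm(-1j * chi_tau * Sz @ Sz)                     # exp(-i chi tau Sz^2)
exact  = U.conj().T @ Sy @ U                          # conjugated observable
approx = Sy + chi_tau * (Sx @ Sz + Sz @ Sx)           # Sy + chi tau {Sx, Sz}

dev = np.max(np.abs(exact - approx)) / np.max(np.abs(exact))
print("relative deviation from the first-order expansion:", dev)
\end{verbatim}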
Motivated by this correspondence, we systematically study the metrological potential that is offered by the continuous set of observables~(\ref{eq:maix}), which is parametrized by $\chi \tau$ and accessible by the MAI technique. The analytical optimization (\ref{eq:Xilambda}) allows us to obtain the maximal metrological gain $\xi^{-2}_{\mathrm{MAI}}(\chi t)$ over all rotation directions $\vec{n}$ and measurement directions $\vec{m}$ for a fixed interaction time $\tau$ at time $t$ of the OAT evolution{ \footnote{{ Starting with the coherent spin state (\ref{eq:CCS}), the metrological gain associated to the state prepared at time $\chi t$ of the OAT evolution with the measurement of the observable~(\ref{eq:maix}) for fixed $\chi\tau$ is written according to~(\ref{eq:xireex}), (\ref{eq:Cdef}) and (\ref{eq:gammadef}), with $\vec{X}=\hat{U}_{\tau}^{\dagger}\hat{\vec{S}}\hat{U}_{\tau}$. We thus have to evaluate \begin{align}\label{c} C[\chi t,\hat{\vec{S}},\hat{U}_{\tau}^{\dagger}\hat{\vec{S}}\hat{U}_{\tau}]_{kl}=-i\langle[\hat{S}_k,\hat{U}_{\tau}^{\dagger}\hat{S}_l \hat{U}_{\tau}]\rangle, \qquad \mbox{and} \qquad \Gamma[\chi t,\hat{U}_{\tau}^{\dagger}\hat{\vec{S}}\hat{U}_{\tau}]_{kl}=\mathrm{Cov}(\hat{U}_{\tau}^{\dagger}\hat{S}_k \hat{U}_{\tau},\hat{U}_{\tau}^{\dagger}\hat{S}_l \hat{U}_{\tau})=\Gamma[\chi (t+\tau),\hat{\vec{S}}]_{kl}, \end{align} where we used the property $\hat{U}_t\hat{U}_{\tau}=\hat{U}_{t+\tau}$. First, we note that $\langle\psi_0|\hat{U}^{\dagger}_t[\hat{S}_x,\hat{U}^{\dagger}_{\tau}\hat{S}_{\vec{m}}\hat{U}_{\tau}]\hat{U}_t|\psi_0\rangle=\langle\psi_0|\hat{U}^{\dagger}_t[\hat{S}_{\vec{n}},\hat{U}^{\dagger}_{\tau}\hat{S}_{x}\hat{U}_{\tau}]\hat{U}_t|\psi_0\rangle=0$ for any linear spin observable $\hat{S}_{\vec{m}}$ and $\hat{S}_{\vec{n}}$. This allows us to restrict the optimization of both the rotation direction $\vec{n}$ and the measurement direction $\vec{m}$ to the plane perpendicular to the initial spin direction $\vec{e}_x$. The $2\times 2$ commutator and covariance matrices are given in \ref{B}.}}}. As shown in Fig.~\ref{best_scaling}(b), numerical optimization over $\tau$ reveals that, in the limit of large $N$, for a given $\chi t\leq 1/\sqrt{N}$, the optimal interaction time $\chi\tau_{\mathrm{opt}}$ which maximizes the metrological gain $\xi^{-2}_{\rm MAI}(\chi t)$ is given by \begin{align}\label{eq:tauopt} \chi\tau_{\mathrm{opt}}\underset{N\gg 1}{\to}-\chi t. \end{align} This corresponds to the echo protocol that was first suggested in Ref.~\cite{SchleierSmithPRL2016} where, after the first one-axis-twisting evolution up to $t$ and phase imprinting, we implement a second one-axis-twisting evolution of a duration $t$ where we invert the sign of the constant $\chi\to -\chi$ in the nonlinear Hamiltonian. Motivated by the result (\ref{eq:tauopt}), we replace $\chi\tau$ by $-\chi t$ in the expression of the observable $\hat{X}_{\rm MAI}$ (\ref{eq:maix}). Using $\cos^N(\chi t)\simeq e^{-N(\chi t)^2/2}$ for $\chi t\to 0$, the metrological gain $\xi_{\rm MAI}^{-2}(\chi t)$ for the MAI technique is given for $\chi t\leq 1/\sqrt{N}$ in the limit of large $N$ by \begin{align}\label{eq:MAIxi} \left(\xi_{\rm MAI}^{-2}(\chi t)\right)_{N\to\infty}=N^2(\chi t)^2e^{-N(\chi t)^2}. 
\end{align} The scaling laws of the metrological gain $\xi_{\rm MAI}^{-2}$ for $N\gg 1$ on the time scales \begin{align}\label{eq:tscal} \chi t=\sigma N^{-\alpha}, &\quad 1\geq\alpha \geq 1/2, \end{align} can easily obtained from Eq.~(\ref{eq:MAIxi}) and read\footnote{For $1\geq\alpha > 1/2$, we do not include the first correction whose form depends on the value of $\alpha$} \begin{align}\label{eq:MAIopt} \xi^{-2}_{\mathrm{MAI}}= \begin{cases} \sigma^2 N^{2-2\alpha}, &\: 1\geq\alpha > 1/2\\ &\\ \sigma^2e^{-\sigma^2} N\left[1+(\frac{1+e^{\sigma^2}}{\sigma^2}+\frac{5\sigma^2}{3} -\frac{\sigma^4}{6}-2)\frac{1}{N}+\mathcal{O}(N^{-2})\right], &\: \alpha = 1/2 \end{cases}. \end{align} We first note that to the leading order in the limit of large $N$, the result (\ref{eq:MAIopt}) reproduces the scaling laws of the metrological gain of the linear, the nonlinear and the quadratic spin squeezing discussed above: for $\alpha=2/3$, we recover the scaling of $\xi^{-2}_{\rm L}\propto N^{2/3}$ for the linear spin squeezing. For $\alpha=3/5$, the scaling law $\xi^{-2}_{\rm NL}\propto N^{4/5}$ of the squeezing of the nonlinear observable (\ref{eq:Xnl}) and for $\alpha=4/7$, the scaling law $\xi^{-2}_{\rm Q}\propto N^{6/7}$ of the squeezing of a quadratic observable. The best metrological gain of the echo protocol~\cite{SchleierSmithPRL2016}, yielding the Heisenberg scaling $\xi^{-2}_{\mathrm{MAI,best}}=N/e$ at the time $\chi t_{\mathrm{MAI,best}}=1/\sqrt{N}$, is obtained from (\ref{eq:MAIopt}) by maximization over both $\sigma$ and $\alpha$. Simlarly to the previous section, the time rescaling $\tilde{\chi t}=\chi t/\chi t_{\rm MAI,best}$ allows us to write \begin{align}\label{eq:MAIxinorm} \frac{\xi^{-2}_{\rm MAI}}{\xi^{-2}_{\rm MAI,best}}=(\tilde{\chi t})^2e^{1-(\tilde{\chi t})^2}+\mathcal{O}(N^{-1}). \end{align} This is represented in Fig.~\ref{unified}(d). Note that the first finite size correction to the metrological gain (\ref{eq:MAIxinorm}) of the MAI method, { of order} $1/N$, are very small compared to the case of the nonlinear ($1/N^{1/5}$) and the quadratic ($1/N^{1/7}$ ) spin squeezing. The optimal rotation direction for a given $\alpha$ and $\sigma$ is written as $\vec{n}_{\rm max}=\cos\theta_n\vec{e}_y+\sin\theta_{n}\vec{e}_z$ where we obtain in the limit of large $N$ \begin{align}\label{eq:MAIrot} \theta_{n}= \begin{cases} \arctan (\frac{2}{\sigma}N^{\alpha-1}), &\: 1\geq\alpha > 1/2\\ \frac{1}{\sigma}e^{\sigma^2/2}\frac{1}{\sqrt{N}}+\mathcal{O}(N^{-3/2}),&\: \alpha = 1/2 \end{cases}. \end{align} $\hat{X}_{\vec{m}_{\rm opt}}=\hat{U}_{-t}^{\dagger}(\vec{m}_{\rm opt}\cdot\hat{\vec{S}})\hat{U}_{-t}$, where $\vec{m}_{\rm opt}=\cos\theta_{m}\vec{e}_y+\sin\theta_{m}\vec{e}_z$ is a unit vector with \begin{align}\label{eq:MAImes} \theta_{m}= \begin{cases} \arctan (-\frac{1}{\sigma}N^{\alpha-1}), &\: 1\geq\alpha > 1/2\\ -\frac{1}{\sigma}\frac{1}{\sqrt{N}}+\mathcal{O}(N^{-1}),&\: \alpha = 1/2 \end{cases}, \end{align} represents, among the continuous set of observables (\ref{eq:maix}), the best squeezed nonlinear observable at the time (\ref{eq:tscal}) of the one-axis-twisting evolution\footnote{Here again in (\ref{eq:MAIrot}) and (\ref{eq:MAImes}), the first corrections for $1\geq\alpha > 1/2$ depend on the value of $\alpha$.}. For $\alpha=1/2$, Eqs.~(\ref{eq:MAIrot}) and~(\ref{eq:MAImes}) confirm the optimality of the rotation direction $\vec{n}=\vec{e}_y$ and the measurement direction $\vec{m}=\vec{e}_y$ made in Ref.~\cite{SchleierSmithPRL2016} for $N\to \infty$. 
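As a quick consistency check of (\ref{eq:MAIxi}), the following Python lines maximize $N^2(\chi t)^2 e^{-N(\chi t)^2}$ numerically and compare the result with the echo-protocol values $\chi t_{\mathrm{MAI,best}}=1/\sqrt{N}$ and $\xi^{-2}_{\mathrm{MAI,best}}=N/e$ quoted above; the value $N=10^4$ is an arbitrary choice used only for illustration.
\begin{verbatim}
import numpy as np

N = 1.0e4   # illustrative atom number (arbitrary choice)

chi_t = np.linspace(1e-6, 3.0 / np.sqrt(N), 300001)
gain = N**2 * chi_t**2 * np.exp(-N * chi_t**2)   # large-N MAI gain
i = np.argmax(gain)

print("numerical  best chi*t, gain:", chi_t[i], gain[i])
print("analytical best chi*t, gain:", 1.0 / np.sqrt(N), N / np.e)
\end{verbatim}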
\section{Quantum Fisher information}\label{sec:QFI} The full metrological potential of a state is given { by the quantum Fisher information $F_Q$~\cite{BraunsteinPRL1994} obtained by optimization over all possible measurements} $\max_{\hat{X}}\xi^{-2}=F_Q/N$. In order to assess the quality of the MAI technique, we compare $\xi^{-2}_{\rm MAI}$, given in Eq.~(\ref{eq:MAIopt}), to $F_Q/N$ of the { states generated by one-axis twisting}. Starting with the state~(\ref{eq:CCS}), for a phase imprinting rotation around $\hat{S}_{\vec{n}}$ with $\vec{n}$ in the $yz$-plane, the quantum Fisher information at a time $t$ of the one-axis-twisting evolution is given by $F_Q=4\lambda_{\rm max,F}$, where $\lambda_{\rm max,F}$ is the largest eigenvalue of the covariance matrix $\Gamma[\chi t,\hat{\vec{X}}_{yz}=(\hat{S}_y,\hat{S}_z)^T]$ \cite{PezzeRMP2018}. This can be obtained by restricting the covariance matrix of the quadratic measurement given in \ref{A} to the first two rows and columns. In the limit of large $N$ { and for} $\chi t\leq 1/\sqrt{N}$, we obtain \begin{align}\label{eq:Fisher} \left(F_Q/N\right)_{N\to\infty}=\frac{1}{2}\left(1-e^{-2 N (\chi t)^2}\right)N. \end{align} Using (\ref{eq:Fisher}), we obtain the scaling law of the quantum Fisher information $F_Q/N$ at the time scales $\chi t=\sigma N^{-\alpha}$ with $1\geq\alpha\geq 1/2$ \begin{align}\label{eq:Fisheropt} F_Q/N= \begin{cases} \sigma^2 N^{2-2\alpha}, & 1\geq\alpha > 1/2\\ \frac{1}{2}(1-e^{-2\sigma^2}) N+\mathcal{O}(N^{0}), &\: \alpha = 1/2 \end{cases}. \end{align} { Comparison of this last equation to (\ref{eq:MAIopt}), shows} that the MAI technique reaches the optimal scaling law of sensitivity enhancement over the entire range of time $1\geq\alpha > 1/2$. We note that the metrological gain $\xi^{-2}_{\rm L}$ (\ref{eq:XiThLimitL}), $\xi^{-2}_{\rm NL}$ (\ref{eq:XiThLimit}) and $\xi^{-2}_{\rm Q}$ (\ref{eq:XiThLimitq}) discussed above have the same structure and can be summarized in a unifying formula that gives the metrological gain in the limit of large $N$ for different mesurement strategies. For $\chi t<1/\sqrt{N}$, we have \begin{equation}\label{eq:XiThLimitGen} \left(\xi^{-2}(\chi t)\right)_{N\to\infty}=\frac{F_Q/N}{1+M}, \end{equation} where \begin{equation} M_{\rm L}=\frac{N^4(\chi t)^{6}}{6} \qquad ; \qquad M_{\rm NL}=\frac{N^6(\chi t)^{10}}{270} \qquad ; \qquad M_{\rm Q}=\frac{N^8(\chi t)^{14}}{875} \end{equation} for a linear, nonlinear and quadratic measurement respectively. In the case of the MAI technique, the metrological gain in the limit of large $N$ is given by (\ref{eq:XiThLimitGen}) for $\chi t\leq1/\sqrt{N}$ with \begin{align} M_{\rm MAI}=\frac{\sinh[N(\chi t)^2]}{N(\chi t)^2}-1. \end{align} These expressions quantify the limitation of the metrological gain due to suboptimal measurements ($M$). In this sense, $M$ can be interpreted as the information that cannot be extracted from the state in a given measurement strategy. \section{Dephasing noise} In experiments, { for physical systems that are not perfectly isolated from the environment or that have other degrees of freedom coupled to the spin degrees of freedom we are interested in,} decoherence affects the OAT evolution and limits the metrological gain $\xi^{-2}$. Realizations of the OAT evolution based on Bose-Einstein condensates are { fundamentally} limited by particle losses and finite temperature~\cite{LiYunPRL2008,SinatraPRL2011}. 
It has been shown that for spin squeezing these effects can be described with a dephasing model that leads to a ballistic behavior of spin fluctuations $(\Delta\hat{S}_y)^2$~\cite{SinatraFro2012}. In OAT realizations using trapped ions~\cite{MolmerPRL1999,MonzPRL2011,BohnetSCIENCE2016}, magnetic field fluctuations cause a similar ballistic collective dephasing~\cite{MonzPRL2011,LanyonPRL2013,CarnioPRL2015}. On the contrary, in cavity-induced squeezing of atomic ensembles, the collective dephasing of the spin due to cavity losses is of a diffusive nature~\cite{LerouxPRA2012, PawlowskiEPL2016}. In the following, we focus on these classes of processes, i.e. on ballistic or diffusive fluctuations of a collective spin observable and we quantify the resulting limitations on the metrological gain $\xi^{-2}$. The ballistic dephasing model is based on a Hamiltonian evolution with a parameter that fluctuates from a realization to the other, which on average, leads to incoherent evolution. The diffusive dephasing model is obtained from a Lindblad master equation~\cite{Tannoudji,Breuer}. \subsection{Ballistic dephasing}\label{Bal} To describe the OAT evolution in the presence of a ballistic collective dephasing, we consider the Hamiltonian \begin{equation}\label{Hbal} \hat{H}_{\mathrm{bal}}=\hbar \chi (\hat{S}_z^2 + D \hat{S}_z), \end{equation} where, $\chi D$ represents an energy shift in the two-level systems. The constant $D$, here, is a classical random variable whose value fluctuates between different repetitions of the experiment. We consider $D$ to follow a Gaussian distribution $p(D)$ { with zero average and a possibly extensive variance \begin{equation} p(D)=\frac{1}{\sqrt{2\pi \langle D^2 \rangle}}e^{-\frac{D^2}{2\langle D^2 \rangle}} \qquad \mbox{where} \qquad \langle D^2 \rangle = \epsilon N^\gamma \quad \textrm{with} \quad 0 \leq \gamma \leq 1, \end{equation} and $\epsilon$ a small parameter.} Starting again with the coherent spin state (\ref{eq:CCS}), the state of the system becomes $|\psi_t\rangle=e^{-i\hat{H}_{\rm bal}t/\hbar}|\psi_0\rangle$, and the expectation value $\langle \hat{A}\rangle$ of any observable $\hat{A}$ is given by \begin{align} \langle \hat{A}\rangle=\int_{-\infty}^{+\infty}p(D)\langle\psi_t|\hat{A}|\psi_t\rangle dD. \end{align} \subsubsection{Linear, nonlinear, and quadratic spin observables} The metrological gain of the state $|\psi_t\rangle$, with a rotation around $\vec{n}$ and a measurement of $\hat{X}$ can always be written as in Eq.~(\ref{eq:xireex}) where the corresponding analytical expressions for $C$ and $\Gamma$ are given in~\ref{C}. Following an analogous strategy as in the noiseless case, to the leading order in the limit of large $N$, the metrological gain of the linear, the nonlinear and the quadratic spin squeezing is obtained for $\chi t< 1/\sqrt{N}$ as \begin{equation}\label{eq:bal} \left(\xi^{-2}_{\rm bal}(\chi t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2}{1+M+\epsilon N^{1+\gamma}(\chi t)^2}, \end{equation} with the appropriate expression $M$ of each measurement strategy given in Sec.~\ref{sec:QFI}. The precise scaling in the large-$N$ limit now depends on the interplay between the terms in the denominator. Generally, we note that as soon as the noise-dependent term becomes non-negligible over $1+M$, it will determine the scaling of the maximal quantum gain. 
Thus, the effect of ballistic dephasing, in the limit of large $N$, is to set the upper bound $\xi^{-2}_{\lim}=N^{1-\gamma}/\epsilon$ to the scaling of the metrological gain, independently of the measurement strategy. { Due to the form of~(\ref{eq:bal}), the maximisation over $\chi t$ is not affected by the ballistic dephasing. The best time $\chi t_{\rm best}$ is then unchanged and the} metrological gain is \begin{equation}\label{eq:bal_all} \xi_{\mathrm{best,L (bal)}}^{-2}\simeq\frac{2\times 3^{-2/3}N^{2/3}}{1+2\epsilon\times 3^{-2/3} N^{\gamma-1/3}}\quad ; \quad \xi_{\mathrm{best,NL (bal)}}^{-2}\simeq\frac{2\left(\frac{2}{5}\right)^{4/5}3^{3/5}N^{4/5}}{1+2\epsilon\left(\frac{2}{5}\right)^{4/5}3^{3/5} N^{\gamma-1/5}}\quad ; \quad \xi_{\mathrm{best,Q (bal)}}^{-2}\simeq\frac{\left(\frac{6}{7}\right)^{6/7}5^{3/7}N^{6/7}}{1+\epsilon\left(\frac{6}{7}\right)^{6/7}5^{3/7}N^{\gamma-1/7}}, \end{equation} for the linear, the nonlinear and the quadratic spin squeezing respectively. Equations~(\ref{eq:bal_all}) show that for a linear measurement, a collective ballistic dephasing with $\gamma\leq 1/3$ does not change the best noiseless metrological gain. This is also true for a nonlinear measurement if $\gamma\leq 1/5$ and for a quadratic measurement if $\gamma \leq 1/7$. \subsubsection{MAI measurements} We have shown { in section \ref{sec:MAI}} that the MAI method allows, with an appropriate value of $\alpha$, to reproduce all the scaling laws for the linear, nonlinear and the quadratic spin squeezing in the noiseless case. To show that this observation can be extended to realistic scenarios, we identify the limitations of the MAI metrological gain~(\ref{eq:MAIopt}) in the presence of ballistic dephasing\footnote{{The metrological gain is given by Eq.~(\ref{eq:xireex}) with $\vec{X}=\hat{U}_{\tau}^{\dagger}\hat{\vec{S}}\hat{U}_{\tau}$. The elements of the commutator and the covariance matrices including the average over the random variable $D$ \begin{align} C_{kl}&=-i\int dD p(D)\langle\psi_t|[\hat{S}_k,e^{i\hat{H}_{\rm bal}\tau/\hbar}\hat{S}_le^{-i\hat{H}_{\rm bal}\tau/\hbar}]|\psi_t\rangle \label{eq:C}\\ \Gamma_{kl}&=\frac{1}{2}\int dD p(D)\langle\psi_{t}|e^{i\hat{H}_{\rm bal}\tau/\hbar}\{\hat{S}_k,\hat{S}_l\}e^{-i\hat{H}_{\rm bal}\tau/\hbar}|\psi_{t}\rangle -\Pi_{j=l,k}\int dD p(D)\langle\psi_t|e^{i\hat{H}_{\rm bal}\tau/\hbar}\hat{S}_j e^{-i\hat{H}_{\rm bal}\tau/\hbar}|\psi_t\rangle \label{eq:gam} \end{align} where we take $\chi \tau=-\chi t$ as before and $D \tau=D t$, are given in~\ref{C}. }}. \begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{Fig3_Bal.pdf} \caption{{ Metrological gain as a function of time using the measurement-after-interaction strategy in presence of decoherence. The atom number is $N=10^3$ (top row) and $10^4$ (bottom row). (a) Ballistic dephasing with $\gamma=0,0.5,1$ and $\epsilon=0.05$. Solid lines are the analytic formulas (\ref{eq:MAIxibal}) for $\xi^{-2}_{\rm MAI,bal}$ in the limit of large $N$, and dashed lines are exact results. (b) Diffusive dephasing with $\varepsilon=0,0.01,0.05$. Solid lines are the analytical predictions (\ref{eq:MAIxidif}) for $\xi^{-2}_{\rm MAI,dif}$ in the large $N$ limit, and dashed lines the exact result. } } \label{ballistic_diffusive} \end{figure} The dominant effect of this random dephasing process is to increase, in a ballistic way, i.e., quadratically in $\chi t$, the variance of the optimal measurement observable $\approx\hat{S}_y$. 
{ Indeed, for a small $\epsilon$, large $N$, and $\chi t\leq 1/\sqrt{N}$, after the second one-axis twisting evolution in presence of ballistic noise we obtain} \begin{align}\label{eq:balnoise} (\Delta\hat{S}_y)^2_{\mathrm{bal}}=\frac{N}{4}\left[1+4\epsilon N^{1+\gamma}(\chi t)^2+\mathcal{O}(\chi t)^4\right]. \end{align} This decreases the metrological gain (\ref{eq:MAIxi}) by a factor $(1+4\epsilon N^{1+\gamma}(\chi t)^2)^{-1}$ \begin{align}\label{eq:MAIxibal} \left(\xi_{\rm MAI,bal}^{-2}(\chi t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2e^{-N(\chi t)^2}}{1+4\epsilon N^{1+\gamma}(\chi t)^2}. \end{align} This expression is compared to the exact result in Fig.~\ref{ballistic_diffusive}(a) for different values of $\gamma$ and $N$. { Using Eq.~(\ref{eq:MAIxibal}), we can deduce the scaling laws for large $N$ of the gain on time scales} $\chi t=\sigma N^{-\alpha}$ with $1\geq\alpha \geq 1/2$: \begin{align}\label{eq:MAIopt0} \xi^{-2}_{\mathrm{MAI,bal}}= \begin{cases} \frac{\sigma^2 N^{2-2\alpha}}{1+4\epsilon \sigma^2 N^{1+\gamma-2\alpha}}, &\: 1\geq\alpha > 1/2\\ &\\ \frac{\sigma^2 e^{-\sigma^2}N}{1+4\epsilon\sigma^2 N^{\gamma}}, &\: \alpha = 1/2 \end{cases}. \end{align} We thus observe the existence of a critical value of $\alpha$ \begin{align}\label{eq:alpha_c} \alpha_c=\frac{1+\gamma}{2}, \end{align} such that for $\alpha\geq\alpha_c$ { the gain (\ref{eq:MAIopt0}) corresponds to the noiseless scaling law (\ref{eq:MAIopt}), while for $\alpha<\alpha_c$}, the gain is affected by the dephasing and becomes independent of $\alpha$. \begin{align}\label{eq:MAIblong} \xi^{-2}_{\mathrm{MAI,bal}}\simeq \frac{1}{4\epsilon } N^{1-\gamma} \,. \end{align} A maximization of $\xi^{-2}_{\rm MAI,bal}$ over $\sigma$ and $\alpha$ allows us to find, for a given $\gamma$, the scaling law of the best metrological gain and the corresponding time. For $\gamma=0$, we obtain \begin{equation} \chi t_{\rm MAI,bal,best}\simeq(1-2\epsilon)N^{-1/2} \qquad ; \qquad \xi^{-2}_{\rm MAI,bal,best}\simeq\frac{\epsilon}{e}N, \end{equation} while for $\gamma\neq 0$, the scaling law (\ref{eq:MAIblong}) represents the maximum metrological gain. This is achieved exactly at the critical point $\alpha_{c}$, as well as by all longer times. By including first finite size corrections to (\ref{eq:MAIopt0}), we obtain \begin{align}\label{eq:xibalcor} \xi^{-2}_{\rm MAI,bal}\simeq\frac{\sigma^2 N^{2-2\alpha}}{1+4\epsilon \sigma^2 N^{1+\gamma-2\alpha}}[1-\sigma^2 N^{1-2\alpha}+\frac{\sigma^4}{4} N^{2-4\alpha}]. \end{align} A maximization over $\alpha$ and $\sigma$ of (\ref{eq:xibalcor}), shows that $\xi^{-2}_{\rm MAI,bal}$ attains its maximal value (\ref{eq:MAIblong}) at $\chi t=(4\epsilon)^{-1/4}N^{-1/2-\gamma/4}$. In general, for a desired value of $\alpha$, Eq.~(\ref{eq:alpha_c}) sets a maximal tolerable level of ballistic dephasing noise $\gamma=2\alpha-1$ up to which the noiseless metrological gain is not affected by the ballistic dephasing. As we { already} observed in Eqs.~(\ref{eq:bal_all}), for the linear spin squeezing where the best time corresponds to $\alpha=2/3$, the tolerable noise level is $\gamma=1/3$; for the nonlinear squeezing where $\alpha=3/5$, this is given by $\gamma=1/5$ and it is given by $\gamma=1/7$ for the quadratic spin squeezing where $\alpha=4/7$. We thus demonstrate, as in the noiseless case, that the MAI technique allows to reproduces all the scaling laws of the metrological gain of different squeezing strategies also in the presence of ballistic dephasing. 
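The crossover described by (\ref{eq:MAIopt0}) and (\ref{eq:alpha_c}) can be made concrete with a few lines of Python: the sketch below evaluates (\ref{eq:MAIxibal}) at times $\chi t=\sigma N^{-\alpha}$ and prints the gain together with the dephasing-limited bound $N^{1-\gamma}/(4\epsilon)$ of (\ref{eq:MAIblong}). The parameter values $N=10^6$, $\epsilon=0.05$, $\gamma=0.5$ and $\sigma=1$ are arbitrary choices used only for illustration.
\begin{verbatim}
import numpy as np

N, eps, gamma, sigma = 1.0e6, 0.05, 0.5, 1.0   # illustrative parameters
alpha_c = (1.0 + gamma) / 2.0                  # critical exponent

def gain_mai_bal(chi_t):
    # large-N MAI gain in the presence of ballistic dephasing
    return (N**2 * chi_t**2 * np.exp(-N * chi_t**2)
            / (1.0 + 4.0 * eps * N**(1.0 + gamma) * chi_t**2))

for alpha in [0.9, 0.8, alpha_c, 0.6, 0.5]:
    chi_t = sigma * N**(-alpha)
    print("alpha = %.3f   gain = %.3e" % (alpha, gain_mai_bal(chi_t)))

print("dephasing bound N^(1-gamma)/(4 eps) =", N**(1.0 - gamma) / (4.0 * eps))
\end{verbatim}
For $\alpha$ above the critical value the printed gain follows the noiseless scaling, while for $\alpha$ below it the gain approaches the dephasing bound from below at this finite $N$.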
\subsection{Diffusive dephasing} The OAT evolution in some experimental realizations is accompanied by collective spin fluctuations of diffusive nature. To describe these fluctuations, we consider a collective dephasing process at a rate $\gamma_C$ where the dynamics is governed by the master equation~\cite{Tannoudji,Breuer} with the Lindblad operator $\hat{L}=\hat{S}_z$ \begin{equation} \frac{\partial \hat{\rho}}{\partial t}=-\frac{i}{\hbar}[\hat{H},\hat{\rho}]+\gamma_C\mathcal{L}[\hat{\rho}], \end{equation} where, $\mathcal{L}[\hat{\rho}]=\hat{S}_z\hat{\rho}\hat{S}_z-\frac{1}{2}\{\hat{S}_z^2,\hat{\rho}\}$ and $\hat{H}$ is the noiseless OAT Hamiltonian $\hat{H}=\hbar\chi\hat{S}_z^2$. Starting from the coherent spin state $\hat{\rho}_0=|\psi_0\rangle\langle\psi_0|$ where $|\psi_0\rangle$ is given by (\ref{eq:CCS}), the evolution of the system is given by \begin{equation} \hat{\rho}(t)=e^{\varepsilon\chi t\mathcal{L}}[\hat{U}_t \hat{\rho}_0\hat{U}_t^{\dagger}] \quad \textrm{with} \quad \varepsilon=\frac{\gamma_C}{\chi}, \end{equation} where we used the fact that $[\hat{H},\hat{L}]=0$. Using $\mathcal{L}^{\dagger}=\mathcal{L}$, the expectation value of any operator $\hat{A}$ can be obtained from the adjoint master equation~\cite{Breuer} as \begin{equation}\label{eq:Exp} \langle\hat{A}\rangle=\mathrm{tr}\{\hat{A}\hat{\rho}(t)\}=\mathrm{tr}\{e^{\varepsilon\chi t\mathcal{L}}[\hat{A}]\hat{U}_t \hat{\rho}_0\hat{U}_t^{\dagger}\}. \end{equation} These expressions can then be inferred from the noiseless expectation values by explicitly determining the transformed operator $e^{\varepsilon\chi \tau\mathcal{L}}[\hat{A}]$. \subsubsection{Linear, nonlinear, and quadratic spin observables} In the limit $N\gg 1$, the metrological gain of the linear, the nonlinear and the quadratic spin squeezing in the presence of a diffusive dephasing for $\chi t<1/\sqrt{N}$ are obtained using the same steps as before\footnote{{ The elements of the commutator and covariance matrices to be used in the metrological gain~(\ref{eq:xireex}) now read \begin{align} C_{kl}&=-i\langle\psi_t|e^{\varepsilon\chi t\mathcal{L}_C}\left[[\hat{S}_k,\hat{X}_l]\right]|\psi_t\rangle \qquad ; \qquad \Gamma_{kl}=\frac{1}{2}\langle\psi_t|e^{\varepsilon\chi t\mathcal{L}_C}[\{\hat{X}_k,\hat{X}_l\}]|\psi_t\rangle -\langle\psi_t|e^{\varepsilon\chi t\mathcal{L}_C}[\hat{X}_k]|\psi_t\rangle \langle\psi_t|e^{\varepsilon\chi t\mathcal{L}_C}[\hat{X}_l]|\psi_t\rangle, \end{align} where $|\psi_t\rangle=e^{-i\chi t\hat{S}_z^2}|\psi_0\rangle$. Their analytical expression is given in~\ref{D}.}} and read \begin{equation}\label{eq:dif} \xi^{-2}_{\rm dif}(\chi t)\simeq\frac{N^2(\chi t)^2}{1+M+\varepsilon N\chi t}, \end{equation} with the appropriate expression of $M$, which is given in Sec.~\ref{sec:QFI}. To maximize over $\chi t$ in the limit of large $N$ at fixed $\varepsilon$, where $\varepsilon\gg1/(N \chi t_{best})$, we can approximate (\ref{eq:dif}) as \begin{equation}\label{eq:difapp} \xi^{-2}_{\rm dif}(\chi t)\simeq\frac{N^2(\chi t)^2}{M+\varepsilon N\chi t}. 
\end{equation} We then find { the best time} for the linear, nonlinear and the quadratic squeezing in presence of the diffusive dephasing \begin{equation}\label{dift_all} \chi t_{\mathrm{best,L (dif)}}\simeq\left(\frac{3\varepsilon}{2}\right)^{1/5}N^{-3/5}\quad ; \quad \chi t_{\mathrm{best,NL (dif)}}\simeq\left(\frac{5\varepsilon}{4}\right)^{1/9}3^{1/3}N^{-5/9}\quad ; \quad \chi t_{\mathrm{best,Q (dif)}}\simeq\left(\frac{7}{3}\right)^{1/13}\frac{5^{3/13}\varepsilon^{1/13}}{2^{2/13}}N^{-7/13}\,, \end{equation} and corresponding best metrological gain \begin{equation}\label{difxi_all} \xi_{\mathrm{best,L (dif)}}^{-2}\simeq\frac{2\times 3^{1/5}}{5}\left(\frac{2}{\varepsilon}\right)^{4/5} N^{2/5}\quad ; \quad \xi_{\mathrm{best,NL (dif)}}^{-2}\simeq\frac{4}{3}\frac{2^{7/9}5^{1/9}}{3^{5/3}\varepsilon^{8/9}}N^{4/9} \quad ; \quad \xi_{\mathrm{best,Q (dif)}}^{-2}\simeq \frac{2}{13}\frac{2^{11/13}3^{12/13}5^{3/13}7^{1/13}}{\varepsilon^{12/13}} N^{6/13}. \end{equation} For a linear measurement, the Eqs.~(\ref{dift_all}) and~(\ref{difxi_all}) confirm the optimal scaling laws $\chi t_{\rm best}\propto N^{-3/5}$ and $\xi^{-2}_{\rm best}\propto N^{2/5}$ found in the presence of diffusive dephasing due to cavity losses in cavity induced spin squeezing~\cite{PawlowskiEPL2016,MonikaPRA2010,LerouxPRA2012}. \subsubsection{MAI measurements} For the MAI measurement, the quantum gain is again given by (\ref{eq:xireex}) with $\vec{X}=\hat{U}_{\tau}^{\dagger}\hat{\vec{S}}\hat{U}_{\tau}$ with the following elements of $C$ and $\Gamma$ \begin{align} C_{kl}&=-i\langle\psi_t|e^{\varepsilon\chi t\mathcal{L}_C}\left[[\hat{S}_k,\hat{U}_{\tau}^{\dagger}e^{\varepsilon\chi\tau\mathcal{L}_C}[\hat{S}_l]\hat{U}_{\tau}]\right]|\psi_t\rangle,\label{eq:MAIdifc} \\ \Gamma_{kl}&=\frac{1}{2}\langle\psi_t|e^{\varepsilon\chi t\mathcal{L}_C}\left[\hat{U}_{\tau}^{\dagger}e^{\varepsilon\chi\tau\mathcal{L}_C}[\{\hat{S}_k,\hat{S}_l\}]\hat{U}_{\tau}]\right]|\psi_t\rangle - \Pi_{j=k,l} \langle\psi_t|e^{\varepsilon\chi t\mathcal{L}_C}\left[\hat{U}_{\tau}^{\dagger}e^{\varepsilon\chi\tau\mathcal{L}_C}[\hat{S}_j]\hat{U}_{\tau}]\right]|\psi_t\rangle \label{eq:MAIdifgam} \end{align} The analytical expressions of (\ref{eq:MAIdifc}) and (\ref{eq:MAIdifgam}) are given in Appendix~\ref{D}. Taking the optimization~(\ref{eq:tauopt}) into account, we replace $\tau=-t$. The variance of the optimal measurement observable $\hat{S}_y$ here increases as \begin{align}\label{eq:balnoise} (\Delta\hat{S}_y)^2_{\mathrm{dif}}=\frac{N}{4}\left[1+2 \varepsilon N \chi t+\mathcal{O}(\chi t)^2\right], \end{align} showing a diffusive behavior, i.e., linear in $\chi t$. This limits the quantum metrological gain of the MAI technique (\ref{eq:MAIxi}) and indeed we find for $\chi t\leq 1/\sqrt{N}$ \begin{equation}\label{eq:MAIxidif} \xi^{-2}_{\rm MAI,dif}(\chi t)=\frac{N^2(\chi t)^2 e^{-N(\chi t)^2}}{1+2 \varepsilon N \chi t}. \end{equation} This expression is represented and compared to exact results in Fig.~\ref{ballistic_diffusive}(b) for varying $\varepsilon$ and $N$. Again, we obtain the scaling laws of the metrological gain on the time scales $\chi t=\sigma N^{-\alpha}$ in the limit of large $N$ \begin{align}\label{xidif} \left(\xi^{-2}_{\mathrm{MAI,dif}}\right)_{N\to\infty}= \begin{cases} \frac{\sigma}{2\varepsilon}N^{1-\alpha}, &\quad 1\geq\alpha > 1/2\\ &\\ \frac{\sigma e^{-\sigma^2}}{2\varepsilon}N^{1/2}, &\quad \alpha = 1/2 \end{cases}. 
\end{align} Due to the diffusive dephasing, the scaling law of the metrological gain for the MAI method passes from $\xi^{-2}_{\rm MAI}\propto N^{2-2\alpha}$ to $\xi^{-2}_{\rm MAI}\propto N^{1-\alpha}$ for a given $\alpha$. As expected, the scaling for the MAI method reproduces the scaling laws (\ref{difxi_all}) for the states prepared at the times (\ref{dift_all}). For $1/2\leq\alpha \leq 1$, an optimization of (\ref{xidif}) over $\alpha$ and $\sigma$ gives us the best metrological gain and the corresponding time for $N\gg 1$ \begin{equation} \chi t_{\rm MAI,dif,best}=\frac{1}{\sqrt{2}}N^{-1/2} \qquad ; \qquad \xi^{-2}_{\rm MAI,dif,best}=\frac{N^{1/2}}{\sqrt{8\varepsilon^2 e}}. \end{equation} This analytically confirms a result that was obtained numerically in Ref.~\cite{HammererQuantum2020}. \subsection{Unified expression} Taking $e^{-N(\chi t)^2}\approx 1$ for $\chi t<1/\sqrt{N}$ and $N\gg 1$, Eqs.~(\ref{eq:bal}), (\ref{eq:MAIxibal}), (\ref{eq:dif}) and~(\ref{eq:MAIxidif}) show that in the presence of decoherence, the metrological gain can again be written { with a unified expression}: \begin{align}\label{eq:unif} \xi^{-2}(\chi t)\simeq\frac{F_Q/N}{1+M+B}, \end{align} where $B_{\rm bal}=\epsilon N^{1+\gamma}(\chi t)^2$ and $B_{\rm dif}=\varepsilon N\chi t$ describe the loss of sensitivity due to ballistic and diffusive dephasing, for the linear, nonlinear and quadratic measurements. In the case of an MAI measurement, the nonlinear OAT evolution is effectively twice as long, which increases the effect of the decoherence. This effect can be easily accounted for by replacing $\chi t$ by $2\chi t$ in the decoherence terms in the case of MAI, leading to $B_{\rm bal}=4\epsilon N^{1+\gamma}(\chi t)^2$ and $B_{\rm dif}=2\varepsilon N\chi t$. The result~(\ref{eq:unif}) allows us to obtain, in a simple way, the scaling laws and optimal times in all cases discussed above. \section{Particle losses} Up to now we have considered dephasing processes perturbing the coherent evolution with the OAT Hamiltonian. In this { last} section we will explore the limitations imposed by particle losses on the linear, nonlinear and quadratic spin squeezing. \subsection{Loss model} For convenience we write here the collective spin components using { the creation $\hat{c}_a^{\dagger}$ ($\hat{c}_b^{\dagger}$) and the annihilation $\hat{c}_a$ ($\hat{c}_b$)} operators corresponding to the mode $a$ ($b$) respectively~: \begin{align} \hat{S}_x =\frac{\hat{c}_a^{\dagger}\hat{c}_b+\hat{c}_b^{\dagger}\hat{c}_a}{2}\; , \; \hat{S}_y =\frac{\hat{c}_a^{\dagger}\hat{c}_b-\hat{c}_b^{\dagger}\hat{c}_a}{2i}\; , \; \hat{S}_z =\frac{\hat{c}_a^{\dagger}\hat{c}_a-\hat{c}_b^{\dagger}\hat{c}_b}{2}\,, \end{align} { and we} introduce the phase state \begin{align}\label{eq:phstate} |\varphi\rangle_N\equiv\frac{1}{\sqrt{N !}}\left(\frac{e^{i \varphi }\hat{c}_a^{\dagger}+e^{-i \varphi }\hat{c}_b^{\dagger}}{\sqrt{2}}\right)^N|0\rangle\,. \end{align} Note that $|\varphi=0\rangle_N$ corresponds to the coherent spin state (\ref{eq:CCS}) with $\langle\varphi=0| \hat{N}_l|\varphi=0\rangle|_{l=a,b}=N/2$ where $\hat{N}_l=\hat{c}_l^{\dagger}\hat{c}_l$ is the number operator for particles in the mode $l$.
The presence of $m$-body losses, in addition to the one-axis-twisting dynamics $\hat{H}=\hbar\chi\hat{S}_z^2$, can be described by the master equation \cite{SinatraEuro1998}, \begin{align} \frac{\partial\hat{\rho}}{\partial t}=-\frac{i}{\hbar}[\hat{H},\hat{\rho}]+\sum_{l=a,b}\gamma_l^{(m)}\left([\hat{c}_l]^m \hat{\rho} [\hat{c}_l^{\dagger}]^m-\frac{1}{2}\left\{[\hat{c}_l]^m [\hat{c}_l^{\dagger}]^m,\hat{\rho}\right\}\right) \end{align} where $\gamma^{(m)}_l$ is the $m$-body loss rate in the mode $l$. This evolution can be equivalently represented in terms of the Monte-Carlo wave function formalism~\cite{MolmerOpt1993}. In this point of view, the system is described by a wave function whose evolution is generated by an effective Hamiltonian $\hat{H}_{\mathrm{eff}}$ in time intervals of duration $\tau_j$ separated by random quantum jumps, described by the jump operators $\hat{J}_l^{(m)}$, at times $t_j$: \begin{align}\label{eq:Heff} \hat{H}_{\rm eff}=\hat{H}-\frac{i\hbar}{2}\sum_{l=a,b}\hat{J}_l^{(m) \dagger}\hat{J}_l^{(m)}\quad \textrm{with}\quad \hat{J}_l^{(m)}=\sqrt{\gamma_l^{(m)}} [\hat{c}_l]^m. \end{align} As long as the fraction of lost particles is weak we can approximate the effective Hamiltonian (\ref{eq:Heff}) by \cite{SinatraEuro1998} \begin{align}\label{eq:approx} \hat{H}_{\rm eff}=\hat{H}-\frac{i\hbar}{2}\lambda, \end{align} where $\lambda=\sum_{l=a,b}\lambda_l$ with $\lambda_l=\gamma_l^{(m)}\langle \hat{c}_l^{\dagger m}\hat{c}_l^{m}\rangle_{\psi_0}$. For simplicity, we restrict, in the following, to the symmetric case where $\gamma_a^{(m)}=\gamma_b^{(m)}=\gamma^{(m)}$. We assume that the system is initially in the phase state (\ref{eq:phstate}) with $\varphi=0$. In a particular Monte-Carlo realization with $k$ quantum jumps, each resulting in $m$-body losses in the mode $l_i=a,b$ at times $t_i$ with $i=1,...,k$, the state of the system at time $t$ is given by \begin{align}\label{eq:StateLoss} |\psi(t)\rangle =\mathcal{N} e^{-\frac{i}{\hbar}\hat{H}_{\rm eff}(t-t_k)}\hat{J}_{l_k}e^{-\frac{i}{\hbar}\hat{H}_{\rm eff}(t_k-t_{k-1})}...\hat{J}_{l_1}e^{-\frac{i}{\hbar}\hat{H}_{\rm eff}t_1}|\varphi=0\rangle_N \end{align} with $\mathcal{N}$ a normalization constant. By using the identity \begin{align} \hat{c}_l^m f(\hat{N}_a,\hat{N}_b)=f(\hat{N}_a+m\delta_{l,a},\hat{N}_b+m\delta_{l,b})\hat{c}_l^m \end{align} for $l=a,b$ and the properties of phase states (\ref{eq:phstate}) \begin{equation} \hat{c}_l|\varphi\rangle_N=\sqrt{\frac{N}{2}}e^{i\varphi(\delta_{l,a}-\delta_{l,b})}|\varphi\rangle_{N-1} \qquad ; \qquad e^{-i\alpha(\hat{N}_a-\hat{N}_b)}|\varphi\rangle_N=|\varphi+\alpha\rangle_N, \end{equation} we can show that, in the approximation (\ref{eq:approx}), { the state (\ref{eq:StateLoss}) for a particular Monte-Carlo realization can be written as a shifted phase state with less particles, evolved with the one-axis-twisting hamiltonian. In terms of a normalization factor $F(t)$ and a random relative phase shift $D$~:} \begin{equation} |\psi(t)\rangle= F(t) e^{-i\chi t \hat{S}_z^2} |D\rangle_{N-mk} \qquad ; \qquad D=m\sum_{i=1}^k \chi t_i\left( \delta_{b,c_i}-\frac{1}{2}\right). 
\end{equation} The expectation value of any operator $\hat{O}$ can be calculated by averaging the single realization mean value \begin{equation}\label{eq:SinglReal} \langle\psi(t)|\hat{O}|\psi(t)\rangle = \vphantom{}_{N-mk} \langle D|e^{i\chi t \hat{S}_z^2}\hat{O}e^{-i\chi t \hat{S}_z^2} |D\rangle_{N-mk} \end{equation} over all Monte-Carlo realizations, that is to average (\ref{eq:SinglReal}) over the random variables $k$, $t_i$ and $\delta_{b,c_i}$~\cite{SinatraEuro1998}. This allows us to analytically calculate the commutator and the covariance matrices (given in \ref{E}) and thus to obtain the quantum metrological gain (\ref{eq:Xilambda}) corresponding to the squeezing of a linear, nonlinear and quadratic spin observable in presence of $m$-body losses. In Fig. \ref{Xidif} (b), we compare the analytical metrological gain for the linear, nonlinear and quadratic spin squeezing in presence of one-body losses { in the approximation (\ref{eq:approx}), valid for the loss of a small fraction of the particles, to the exact numerical Monte-Carlo simulation with the effective Hamiltonian (\ref{eq:Heff}).} \subsection{Scaling laws of the linear, nonlinear and quadratic spin squeezing} Let us focus on the case of 1-body losses ($m=1$) with a loss rate $\gamma^{(1)}$. To obtain the best metrological gain of the linear spin squeezing in the limit of large $N$ , { we use the best linear squeezing time to introduce an auxiliary dimensionless variable ${ r=N^{-1/3}}$ and rescale} the time as $\chi t=\theta r^{2}$. By expanding the linear metrological gain $\xi^{-2}_{\rm L}$ for $r\ll 1$ and $\gamma^{(1)}/\chi$ constant, we obtain \begin{align}\label{eq:LosL} \left(\xi^{-2}_{\rm L}(t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2}{1+N^4(\chi t)^6/6+(\gamma^{(1)} t/3) N^{2}(\chi t)^2}. \end{align} Similarly, using the best nonlinear squeezing time (\ref{eq:bestinf}), we set ${ r=N^{-1/5}}$ and we rescale the time as $\chi t=\theta r^{3}$ to obtain \begin{align}\label{eq:LosNL} \left(\xi^{-2}_{\rm NL}(t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2}{1+N^6(\chi t)^{10}/270+(\gamma^{(1)} t/3) N^{2}(\chi t)^2}. \end{align} For the quadratic squeezing, after setting ${ r=N^{-1/7}}$, rescaling the time as $\chi t=\theta r^{4}$ we obtain \begin{align}\label{eq:LosQ} \left(\xi^{-2}_{\rm Q}(t)\right)_{N\to\infty}=\frac{N^2(\chi t)^2}{1+N^8(\chi t)^{14}/875+(\gamma^{(1)} t/3) N^{2}(\chi t)^2}. \end{align} By comparing the equations (\ref{eq:LosL})-(\ref{eq:LosQ}) to the equation (\ref{eq:bal}), we deduce that the effect of one-body losses is equivalent to the ballistic dephasing effect discussed in paragraph \ref{Bal} with $\gamma =1$ and $\epsilon = \gamma^{(1)}t/3$ where $\gamma^{(1)}t$ corresponds to the lost fraction of atoms at time $t$. { For the three measurement strategies, the metrological gain in the large $N$ limit, taken at constant lost fraction at $t_{\rm best}$, is then limited by the fraction of lost atoms} \begin{align}\label{eq:lim} \xi^{-2} = \frac{3}{\gamma^{(1)} t}. \end{align} We then conclude, as shown in Fig. \ref{Xidif}. that for a fixed atom number $N$, a nonlinear measurement can enhance the linear metrological gain as long as $3/(\gamma^{(1)}t_{\rm L,best}) > \xi^{-2}_{\rm L, best}$. Such a regime can be reached as long as the $1$-body loss rate $\gamma^{(1)}$ is not too large (Fig. \ref{Xidif} b). 
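To make the comparison with the bound (\ref{eq:lim}) concrete, the Python sketch below maximizes the large-$N$ expressions (\ref{eq:LosL})--(\ref{eq:LosQ}) over $\chi t$ and prints, for each measurement strategy, the best gain together with $3/(\gamma^{(1)}t)$ evaluated at the corresponding best time. The values $N=10^6$ and $\gamma^{(1)}/\chi=0.05$ correspond to the regime of Fig.~\ref{Xidif}(b); the time grid is an arbitrary choice for this illustration.
\begin{verbatim}
import numpy as np

N = 1.0e6
g1 = 0.05   # one-body loss rate gamma^(1) in units of chi

def gain(chi_t, p, q, c):
    # xi^-2 = N^2 t^2 / (1 + N^p t^q / c + (gamma^(1) t / 3) N^2 t^2)
    loss = (g1 * chi_t / 3.0) * N**2 * chi_t**2
    return N**2 * chi_t**2 / (1.0 + N**p * chi_t**q / c + loss)

chi_t = np.linspace(1e-7, 1.0 / np.sqrt(N), 400001)
for label, p, q, c in [("linear", 4, 6, 6.0),
                       ("nonlinear", 6, 10, 270.0),
                       ("quadratic", 8, 14, 875.0)]:
    g = gain(chi_t, p, q, c)
    i = np.argmax(g)
    bound = 3.0 / (g1 * chi_t[i])
    print("%-10s best chi*t = %.2e   gain = %.3e   3/(gamma t) = %.3e"
          % (label, chi_t[i], g[i], bound))
\end{verbatim}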
\begin{figure}[htb] \includegraphics[width=0.9\textwidth]{Fig4_ParticlLosses.pdf} \caption{Linear, nonlinear and quadratic metrological gain for $N=10^6$ as a function of time in presence of one-body losses with (a) {$\gamma^{(1)}/\chi=20$ and (b) $\gamma^{(1)}/\chi=0.05$} compared to the limit (\ref{eq:lim}). The metrological gain at the limit $N\gg 1$ given by (\ref{eq:LosL}), (\ref{eq:LosNL}) and (\ref{eq:LosQ}) are represented in dashed lines. Points in (b) are results of numerical Monte-Carlo simulation with 600 realizations.} \label{Xidif} \end{figure} \section{Conclusion} We have analytically found the scaling laws of the metrological gain in the limit of large atom numbers $N$ for the squeezing of nonlinear spin observables. For the effective measurement of a nonlinear spin observable, we have identified the measurement-after-interaction technique that consists in adding a second nonlinear evolution before the direct measurement of a linear spin observable as a feasible possibility. This method indeed gives rise to a general scaling law for the metrological gain that continuously connects the different cases of measurement strategies based on linear and second-order spin observables. We have identified the limits imposed by two different models of decoherence, describing dominant decoherence processes in different physical realizations of the one-axis-twisting evolution. In the presence of ballistic collective dephasing, our results predict, in the thermodynamic limit, an abrupt change of the metrological gain at a critical preparation time that depends on the noise. This transition determines the longest state preparation time by one-axis-twisting for which the quantum scaling enhancement can be sustained in the presence of dephasing. Below this critical evolution time, the quantum gain is not affected by decoherence. In contrast, for diffusive dephasing, the scaling law corresponds to the square root of the gain in the noiseless case, independently of the preparation time. Finally, in the presence of particle losses, the best linear, nonlinear and quadratic spin squeezing are limited by the fraction of lost particles at the best squeezing time. Our work analytically identifies the maximally achievable quantum sensitivity gain offered by the squeezing of nonlinear spin observables during a realistic one-axis-twisting evolution with an arbitrary number of atoms. As a function of the chosen measurement strategy, we identify optimal rotation directions, measurement observables and preparation times. These results may serve as a guide for designing feasible strategies for achieving high quantum enhancements in quantum phase estimation protocols with a relatively large number of atoms. \section*{Acknowledgment} M.G. acknowledges funding from the LabEx ENS-ICFP: ANR-10-LABX-0010 / ANR-10-IDEX-0001-02 PSL, from MCIN / AEI for the project PID2020-115761RJ-I00, and support of a fellowship from `la Caixa” Foundation (ID 100010434) and from the European Union’s Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 847648, fellowship code LCF/BQ/PI21/11830025.
\section{Introduction} We can explore the topological nature of quantum field theories via topological terms. Recently, gauge theories with a theta term have been studied by \textquoteright t Hooft anomaly matching. In particular, there is a constraint on the phase structure of the 4D SU(N) pure Yang-Mills theory by a \textquoteright t Hooft anomaly involving the CP and center symmetries at $\theta=\pi$ \cite{Gaiotto:2017yup}. The constraint is consistent with the well-known scenario at large $N$ \cite{Witten:1980sp}, where the theory at $\theta =\pi$ is confined with spontaneously broken CP at low temperature and then has a transition to deconfined phase with restored CP at a finite temperature. However, it is highly nontrivial whether or not this structure persists for small $N$ since there are various ways to satisfy the anomaly matching condition. For instance, the theory for small $N$ at low temperature may be deconfined or gapless as well as spontaneously broken CP. Therefore it is an interesting challenge to investigate the phase structure by first-principle calculation at the smallest $N$ i.e.~$N=2$. The effect of the theta term is genuinely non-perturbative. The theory with a theta term should be analyzed by non-perturbative calculations based on the lattice gauge theory. However, the Monte Carlo simulation of the theory including the theta term is difficult due to the sign problem. The complex Langevin method (CLM) is one of the approaches which allow us to avoid the sign problem \cite{Klauder:1983sp,Parisi:1984cs,Aarts:2009uq,Aarts:2011ax,Nagata:2015uga,Nagata:2016vkn}. We use the CLM to study 4D SU(2) gauge theory with the theta term since its computational cost is cheaper than the other methods. The topological charge on the 4D lattice is contaminated by short range fluctuations. Thus, we apply the stout smearing \cite{Morningstar:2003gk} to recover the topological property. In this method, the effect of the smearing can be included dynamically. We discuss the behavior of the topological charge for $\theta\neq0$ in the CLM. \section{4D SU(2) gauge theory with a theta term} We consider 4D SU(2) gauge theory on the Euclidean space. The action for the gauge field $A_{\mu}^{a}$ ($a=1,2,3$) ($\mu=1,\cdots,4$) is given by \begin{equation} S_{g}=\frac{1}{4g^{2}}\int d^{4}xF_{\mu\nu}^{a}F_{\mu\nu}^{a}, \end{equation} where $g$ is the gauge coupling constant and $F_{\mu\nu}^{a}$ is the field strength \begin{equation} F_{\mu\nu}^{a}=\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a}-\epsilon^{abc}A_{\mu}^{b}A_{\nu}^{c}. \end{equation} The topological charge $Q$ is defined by \begin{equation} Q=\frac{1}{64\pi^{2}}\int d^{4}x\epsilon_{\mu\nu\rho\sigma}F_{\mu\nu}^{a}F_{\rho\sigma}^{a},\label{eq:Q} \end{equation} which takes integer values unless the space has a boundary. We introduce the theta term $S_{\theta}=-i\theta Q$, and thus the action is $S=S_{g}+S_{\theta}$. This theory has the $2\pi$ periodicity of the parameter $\theta\in\mathbb{R}$, since the partition function \begin{equation} Z=\int\mathcal{D}Ae^{-S_{g}+i\theta Q} \end{equation} is invariant under the shift $\theta\rightarrow\theta+2\pi$. Next, we consider the lattice action for the numerical study. We introduce link variables $U_{n,\mu}\in\mathrm{SU}(2)$ and define plaquettes. \begin{equation} P_{n}^{\mu\nu}=U_{n,\mu}U_{n+\hat{\mu},\nu}U_{n+\hat{\nu},\mu}^{-1}U_{n,\nu}^{-1} \end{equation} The index $n$ labels the lattice site and $\hat{\mu}$ represents the unit vector along the $\mu$-th direction. 
Note that we use $U_{n,\mu}^{-1}$ instead of $U_{n,\mu}^{\dagger}$ to respect holomorphicity, which is necessary to justify the CLM. We define the plaquette action by \begin{equation} S_{\beta}=-\frac{\beta}{4}\sum_{n}\sum_{\mu\neq\nu}\mathrm{Tr}P_{n}^{\mu\nu} \end{equation} with the coupling constant $\beta$. For the topological charge on the lattice, we consider the simplest discretization \cite{DiVecchia:1981aev} given by the so-called \textquotedbl clover leaf\textquotedbl{} formula: \begin{equation} Q_{\textrm{cl}}=-\frac{1}{32\pi^{2}}\sum_{n}\frac{1}{2^{4}}\sum_{\mu,\nu,\rho,\sigma=\pm1}^{\pm4}\tilde{\epsilon}_{\mu\nu\rho\sigma}\mathrm{Tr}\left[P_{n}^{\mu\nu}P_{n}^{\rho\sigma}\right]\label{eq:Qcl} \end{equation} Here the orientation of the plaquette is generalized to negative directions. Correspondingly, the anti-symmetric tensor $\tilde{\epsilon}_{\mu\nu\rho\sigma}$ also has negative indices, for example \begin{equation} 1=\tilde{\epsilon}_{1234}=-\tilde{\epsilon}_{2134}=-\tilde{\epsilon}_{(-1)234}=\cdots. \end{equation} Usually the topological charge $Q_{\textrm{cl}}$ does not take integer values on the lattice due to the discretization effect. We can recover the topological property of the gauge field by eliminating short-range fluctuations. Some smoothing techniques, such as the gradient flow, stout smearing and so on, make the topological charge close to integers. In this study, we apply the stout smearing to the complex Langevin method, which is discussed in section \ref{sec:Stout-smearing}. \section{Complex Langevin method} Since the theta term is purely imaginary, Monte Carlo studies of the theory with $\theta\neq0$ are extremely difficult due to the sign problem. We avoid this problem by using the complex Langevin method (CLM) \cite{Klauder:1983sp,Parisi:1984cs,Aarts:2009uq,Aarts:2011ax,Nagata:2015uga,Nagata:2016vkn}, which is a generalization of the Langevin method to systems with a complex action. Its computational cost grows linearly with the system size, so that we can easily apply the CLM to large systems in a straightforward manner. In this section, we briefly review how to apply the method to 4D SU(2) gauge theory. In the CLM, we consider a fictitious time evolution of the dynamical variables, which is described by the complex Langevin equation. The discretized complex Langevin equation for the link variables is given by \begin{equation} U_{n,\mu}(t+\epsilon)=\exp\left[-i\epsilon D_{n,\mu}^{a}S\tau^{a}+i\sqrt{\epsilon}\eta_{n,\mu}(t)\right]U_{n,\mu}(t), \end{equation} where $\tau^{a}=\sigma^{a}/2$ are the generators of SU(2). The parameter $\epsilon\ll1$ is the step size of the discretized fictitious time. The differential operation $D_{n,\mu}^{a}f$ of the function $f(U)$ with respect to the link variables (Lie group elements) is defined by \begin{equation} D_{n,\mu}^{a}f\left(U_{n,\mu}\right)=\lim_{\epsilon\rightarrow0}\frac{1}{\epsilon}\left[f\left(e^{i\epsilon\tau^{a}}U_{n,\mu}\right)-f\left(U_{n,\mu}\right)\right]. \end{equation} The term including $D_{n,\mu}^{a}S$ is called the drift term. The other term is a real Gaussian noise $\eta_{n,\mu}(t)=\eta_{n,\mu}^{a}(t)\tau^{a}$ normalized by \begin{equation} \left\langle \eta_{n,\mu}^{a}(t)\eta_{m,\nu}^{b}(t^{\prime})\right\rangle =2\delta_{nm}\delta_{\mu\nu}\delta^{ab}\delta_{tt^{\prime}}. \end{equation} The drift term $D_{n,\mu}^{a}S$ is no longer Hermitian for the complex action. Thus, the link variables deviate from SU(2) in the complex Langevin simulation.
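Before turning to the complexified link variables, a minimal single-link illustration of this update rule may be useful. The Python sketch below implements one discretized Langevin step of the form above for a single $2\times2$ link; the drift used here is that of the toy action $S(U)=-\mathrm{Re}\,\mathrm{Tr}\,U$ (not the plaquette action, whose drift involves the neighboring staples), and the step size is an arbitrary choice, so the snippet only shows the structure of the update.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

eps = 1.0e-4   # Langevin step size (illustrative value)

# SU(2) generators tau^a = sigma^a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [0.5 * s for s in sigma]

def drift(U):
    # toy drift: for S(U) = -Re Tr U one finds D_a S = Im Tr(tau^a U)
    return np.array([np.trace(t @ U).imag for t in tau])

def langevin_step(U, rng):
    d = drift(U)                                # drift term D_a S
    eta = rng.normal(0.0, np.sqrt(2.0), 3)      # real noise with <eta^2> = 2
    A = sum((-1j * eps * d[a] + 1j * np.sqrt(eps) * eta[a]) * tau[a]
            for a in range(3))
    return expm(A) @ U                          # U -> exp(...) U

rng = np.random.default_rng(0)
U = np.eye(2, dtype=complex)                    # start from the unit link
for _ in range(1000):
    U = langevin_step(U, rng)
print("det U =", np.linalg.det(U))              # det U = 1 up to rounding
\end{verbatim}
For the real toy action used here the link remains (approximately) unitary; it is the complex drift generated by the theta term that pushes the link away from SU(2) while preserving $\det U=1$.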
We treat the link variables as SL(2,$\mathbb{C}$) elements instead of SU(2), which corresponds to complexifying the gauge field. For the complexified configuration, the drift term and observables should also be complexified, respecting holomorphicity. The expectation value of $\mathcal{O}$ is calculated from an ensemble of configurations, which is given by solving the complex Langevin equation numerically. We can obtain the expectation value $\left\langle \mathcal{O}\right\rangle _{\textrm{CLM}}$ as an average of $\mathcal{O}(U)$ in the ensemble. However, it will not always agree with the correct expectation value defined by the path integral. This problem is known as the wrong convergence of the CLM, which occurs depending on the system, the parameters, and the choice of the dynamical variables. Although we cannot tell a priori whether or not the problem occurs, there is a practical criterion for the correct convergence \cite{Nagata:2016vkn}. We obtain the correct expectation value $\left\langle \mathcal{O}\right\rangle _{\textrm{CLM}}=\left\langle \mathcal{O}\right\rangle $ only if the probability distribution of the drift term falls off exponentially or faster. We can easily check the criterion by plotting the histogram of the magnitude $u$ of the largest drift defined by \begin{equation} u=\frac{1}{\sqrt{2}}\max_{n,\mu}\left\Vert D_{n,\mu}^{a}S\tau^{a}\right\Vert, \label{eq:max_drift} \end{equation} where the norm of the matrix is defined by $\left\Vert A\right\Vert ^{2}:=\mathrm{Tr}\left[A^{\dagger}A\right]$. We can stabilize the complex Langevin simulation by using a technique called \textquotedbl gauge cooling\textquotedbl{} \cite{Seiler:2012wz}. The condition for correct convergence tends to be violated if the link variables deviate far from SU(2). The gauge cooling reduces the non-unitarity of the link variables as much as possible. Thus, it helps the condition to be satisfied. It was also shown that this procedure does not affect any gauge invariant observable \cite{Nagata:2015uga,Nagata:2016vkn}. We apply the gauge cooling at each Langevin step in order to suppress a rapid growth of non-unitarity. \section{Stout smearing for the CLM \label{sec:Stout-smearing}} The theory with a theta term has the $2\pi$ periodicity of $\theta$, which plays an important role in the appearance of the nontrivial phase structure at $\theta=\pi$. However, it is difficult to retain this property on the lattice because the topological charge (\ref{eq:Qcl}) defined by the naive discretization does not take integer values. It approaches integers only for configurations sufficiently close to the continuum limit. In fact, it is difficult to suppress the short-range fluctuations enough simply by increasing $\beta$. Thus, we need a smearing method which makes the configuration sufficiently smooth even for small $\beta$. In this work, we use the stout smearing \cite{Morningstar:2003gk}, which is applicable to the CLM. In fact, its application to the CLM was discussed in the analysis of QCD at nonzero baryon density \cite{Sexty:2019vqx}. In this section, we review how to apply the stout smearing to the complex Langevin simulation of the gauge theory with the theta term. The procedure of the stout smearing is given by the iteration of the smearing step, starting from the original configuration $U_{n,\mu}$.
\begin{equation} U_{n,\mu}=U_{n,\mu}^{(0)}\rightarrow U_{n,\mu}^{(1)}\rightarrow\cdots\rightarrow U_{n,\mu}^{(N_{\rho})}=\tilde{U}_{n,\mu}\label{eq:smearing_steps} \end{equation} After $N_{\rho}$ iterations we obtain the smeared configuration $\tilde{U}_{n,\mu}$. In one (isotropic) smearing step from $k$ to $k+1$, the link variable $U_{n,\mu}^{(k)}\in\textrm{SL}(2,\mathbb{C})$ is mapped to $U_{n,\mu}^{(k+1)}\in\textrm{SL}(2,\mathbb{C})$ defined by following formulae. \begin{equation} U_{n,\mu}^{(k+1)}=e^{iY_{n,\mu}}U_{n,\mu}^{(k)} \end{equation} \begin{equation} iY_{n,\mu}=-\frac{\rho}{2}\mathrm{Tr}\left[J_{n,\mu}\tau^{a}\right]\tau^{a}\label{eq:Y} \end{equation} \begin{equation} J_{n,\mu}=U_{n,\mu}\Omega_{n,\mu}-\bar{\Omega}_{n,\mu}U_{n,\mu}^{-1} \end{equation} \begin{equation} \Omega_{n,\mu}=\sum_{\sigma(\neq\mu)}\left(U_{n+\hat{\mu},\sigma}U_{n+\hat{\sigma},\mu}^{-1}U_{n,\sigma}^{-1}+U_{n+\hat{\mu}-\hat{\sigma},\sigma}^{-1}U_{n-\hat{\sigma},\mu}^{-1}U_{n-\hat{\sigma},\sigma}\right)\label{eq:OMG} \end{equation} \begin{equation} \bar{\Omega}_{n,\mu}=\sum_{\sigma(\neq\mu)}\left(U_{n,\sigma}U_{n+\hat{\sigma},\mu}U_{n+\hat{\mu},\sigma}^{-1}+U_{n-\hat{\sigma},\sigma}^{-1}U_{n-\hat{\sigma},\mu}U_{n+\hat{\mu}-\hat{\sigma},\sigma}\right)\label{eq:OMGbar} \end{equation} The parameter $\rho>0$ should be chosen appropriately, depending on the system. We use the topological charge (\ref{eq:Qcl}) calculated from the smeared configuration $\tilde{U}_{n,\mu}$ \begin{equation} Q:=Q_{\textrm{cl}}(\tilde{U})\label{eq:def_Q} \end{equation} to define the theta term $S_{\theta}=-i\theta Q$ on the lattice. For the complex Langevin simulation, we need to calculate the drift term $D_{n,\mu}^{a}S_{\theta}$ from the theta term. Although $S_{\theta}$ is a complicated function of the original link variable $U_{n,\mu}$, it is possible to calculate the drift force \begin{equation} F_{n,\mu}=i\tau^{a}D_{n,\mu}^{a}S_{\theta}\label{eq:drift_theta} \end{equation} by reversing the smearing steps (\ref{eq:smearing_steps}). We define the drift force for the link variables $U_{n,\mu}^{(k)}$ as \begin{equation} F_{n,\mu}^{(k)}=i\tau^{a}D_{n,\mu}^{(k)a}S_{\theta}, \end{equation} where $D_{n,\mu}^{(k)a}$ represents a differential operation with respect to $U_{n,\mu}^{(k)}$. As a first step to calculate (\ref{eq:drift_theta}), the calculation of the drift force $\tilde{F}_{n,\mu}=F_{n,\mu}^{(N_{\rho})}$ for the smeared link $\tilde{U}_{n,\mu}=U_{n,\mu}^{(N_{\rho})}$ is straightforward. Once we obtain the initial drift force $\tilde{F}_{n,\mu}$, the subsequent ones are given by the map from $F_{n,\mu}^{(k)}$ to $F_{n,\mu}^{(k-1)}$ iteratively. \begin{equation} \tilde{F}_{n,\mu}=F_{n,\mu}^{(N_{\rho})}\rightarrow F_{n,\mu}^{(N_{\rho}-1)}\rightarrow\cdots\rightarrow F_{n,\mu}^{(0)}=F_{n,\mu} \end{equation} The map of the drift force is given by the following formulae, where the final step from $F_{n,\mu}^{\prime}=F_{n,\mu}^{(1)}$ to $F_{n,\mu}=F_{n,\mu}^{(0)}$ is shown as an example. 
\begin{equation} F_{n,\mu}=e^{-iY_{n,\mu}}F_{n,\mu}^{\prime}e^{iY_{n,\mu}}+\rho\mathrm{Tr}\left[(U_{n,\mu}M_{n,\mu}+\bar{M}_{n,\mu}U_{n,\mu}^{-1})\tau^{a}\right]\tau^{a} \end{equation} \begin{align} M_{n,\mu} & =-\Omega_{n,\mu}\Lambda_{n,\mu}\nonumber \\ & +\sum_{\nu(\neq\mu)}\left[U_{n+\hat{\mu},\nu}U_{n+\hat{\nu},\mu}^{-1}(U_{n,\nu}^{-1}\Lambda_{n,\nu}+\Lambda_{n+\hat{\nu},\mu}U_{n,\nu}^{-1})\right.\nonumber \\ & \hphantom{=\sum_{\nu(\neq\mu)}}+U_{n+\hat{\mu}-\hat{\nu},\nu}^{-1}U_{n-\hat{\nu},\mu}^{-1}(\Lambda_{n-\hat{\nu},\mu}-\Lambda_{n-\hat{\nu},\nu})U_{n-\hat{\nu},\nu}\nonumber \\ & \hphantom{=\sum_{\nu(\neq\mu)}}\left.-\Lambda_{n+\hat{\mu},\nu}U_{n+\hat{\mu},\nu}U_{n+\hat{\nu},\mu}^{-1}U_{n,\nu}^{-1}+U_{n+\hat{\mu}-\hat{\nu},\nu}^{-1}\Lambda_{n+\hat{\mu}-\hat{\nu},\nu}U_{n-\hat{\nu},\mu}^{-1}U_{n-\hat{\nu},\nu}\right] \end{align} \begin{align} \bar{M}_{n,\mu} & =-\Lambda_{n,\mu}\bar{\Omega}_{n,\mu}\nonumber \\ & +\sum_{\nu(\neq\mu)}\left[(\Lambda_{n,\nu}U_{n,\nu}+U_{n,\nu}\Lambda_{n+\hat{\nu},\mu})U_{n+\hat{\nu},\mu}U_{n+\hat{\mu},\nu}^{-1}\right.\nonumber \\ & \hphantom{=\sum_{\nu(\neq\mu)}}+U_{n-\hat{\nu},\nu}^{-1}(\Lambda_{n-\hat{\nu},\mu}-\Lambda_{n-\hat{\nu},\nu})U_{n-\hat{\nu},\mu}U_{n+\hat{\mu}-\hat{\nu},\nu}\nonumber \\ & \hphantom{=\sum_{\nu(\neq\mu)}}\left.-U_{n,\nu}U_{n+\hat{\nu},\mu}U_{n+\hat{\mu},\nu}^{-1}\Lambda_{n+\hat{\mu},\nu}+U_{n-\hat{\nu},\nu}^{-1}U_{n-\hat{\nu},\mu}\Lambda_{n+\hat{\mu}-\hat{\nu},\nu}U_{n+\hat{\mu}-\hat{\nu},\nu}\right] \end{align} \begin{equation} \Lambda_{m,\nu}=\mathrm{Tr}\left[\hat{\Lambda}_{m,\nu}\tau^{b}\right]\tau^{b} \end{equation} \begin{equation} \hat{\Lambda}_{m,\nu}=-\frac{1}{2\kappa_{m,\nu}^{2}}\left(1-\frac{\sin2\kappa_{m,\nu}}{2\kappa_{m,\nu}}\right)\mathrm{Tr}\left[F_{m,\nu}^{\prime}iY_{m,\nu}\right]iY_{m,\nu}+\frac{\sin\kappa_{m,\nu}}{\kappa_{m,\nu}}e^{-iY_{m,\nu}}F_{m,\nu}^{\prime} \end{equation} \begin{equation} \kappa_{n,\mu}=\sqrt{-\det Y_{n,\mu}} \end{equation} Note that $Y_{n,\mu}$, $\Omega_{n,\mu}$ and $\bar{\Omega}_{n,\mu}$ are defined by (\ref{eq:Y}), (\ref{eq:OMG}) and (\ref{eq:OMGbar}) respectively. They are calculated from $U_{n,\mu}$ in this case. The drift term calculated in this way respects the holomorphicity. The calculation time and the memory size required for the simulation are proportional to the number of steps $N_{\rho}$. \section{Result of the CLM \label{sec:Result}} In this section, we show the results of the complex Langevin simulation. So far, we have found that the CLM using the naive definition (\ref{eq:Qcl}) of the topological charge without the smearing works in the high-temperature region (deconfined phase). As a first step, we focus on the high-temperature region and try to see the effect of the stout smearing on the topological charge. Before introducing the theta term, we check the effect of the smearing by changing the smearing parameters for $\theta=0$. The number of steps $N_{\rho}$ and the step size $\rho$ should be large enough to eliminate the short range fluctuations. However, it is difficult to increase $N_{\rho}$ a lot since the calculation time and the memory size increase with $N_{\rho}$. If $\rho$ is too large, the nontrivial topological excitation will be destroyed. For $\beta>2.4$, which corresponds to the high-temperature region in our setup, we find that $N_{\rho}=20$ is enough to recover the topological property. In figure \ref{fig:history_Q}, we show the history of the topological charge defined by (\ref{eq:def_Q}) in the real Langevin simulation for $\theta=0$. 
There are three series of data with $\rho=0$, 0.06 and 0.1. We plot the topological charge without the smearing, namely $\rho=0$, for comparison. The topological charge with $\rho=0$ is noisy, and it is difficult to see the topological property. Once we introduce the smearing, we can see the transitions between the topological sectors clearly. \begin{figure} \begin{centering} \includegraphics[scale=0.6]{pp-ReQ-L24b250t000n20rX} \par\end{centering} \caption{\label{fig:history_Q} The history of the topological charge defined by (\ref{eq:def_Q}) in the Langevin simulation for $\theta=0$. The lattice size is $24^{3}\times4$, and the coupling constant is $\beta=2.5$. The horizontal axis is the fictitious time $t$ of the Langevin simulation.} \end{figure} Next, we show the results of the complex Langevin simulation for $\theta=\pi/4$. In this simulation, the lattice size is $24^{3}\times4$, and the smearing parameters are $N_{\rho}=20$ and $\rho=0.06$. In figure \ref{fig:drift_histogram}, we show the histogram of the magnitude $u$ of the largest drift term defined in (\ref{eq:max_drift}). The distribution falls off rapidly for $\beta=2.55$, but it does not for $\beta=2.5$. Thus, the criterion for correct convergence is satisfied only for $\beta=2.55$. Typically, the coupling constant $\beta$ should be large enough to satisfy the criterion. We found that the CLM works if $\beta\gtrsim2.55$ for $\theta=\pi/4$ on the $24^{3}\times4$ lattice. \begin{figure} \begin{centering} \includegraphics[scale=0.6]{pp-dhist-L24bXt025n20r006} \par\end{centering} \caption{\label{fig:drift_histogram} The histogram of the maximum drift term (\ref{eq:max_drift}) for $\theta=\pi/4$ in log scale. The horizontal axis is $\log_{10}u$. The lattice size is $24^{3}\times4$, and the smearing parameters are $N_{\rho}=20$ and $\rho=0.06$.} \end{figure} In figure \ref{fig:history_complexQ}, we show the history of the topological charge for $\beta=2.55$. Since the gauge group is extended to SL(2,$\mathbb{C}$) in the CLM, the topological charge has an imaginary part in general. We plot both the real part and the imaginary part. There are some topological excitations in the history of $\mathrm{Re}Q$. The imaginary part vanishes after the smearing in most cases, but it increases rapidly when the real part changes. \begin{figure} \centering{}\includegraphics[scale=0.6]{pp-ReImQ-L24b255t025n20r006} \caption{\label{fig:history_complexQ} The history of the topological charge for $\theta=\pi/4$. The upper plot shows the real part and the lower plot shows the imaginary part. The lattice size is $24^{3}\times4$, and the coupling constant is $\beta=2.55$. The horizontal axis is the fictitious time $t$ of the Langevin simulation.} \end{figure} The expectation value of the topological charge has a nonzero imaginary part if CP is broken. Since the theta term breaks CP explicitly for $\theta/\pi\notin\mathbb{Z}$, it is consistent that $\mathrm{Im}Q$ becomes nonzero in our simulation. We find that the fluctuation of $\mathrm{Re}Q$ is necessary to obtain a nonzero $\mathrm{Im}Q$. Indeed, the imaginary part is close to zero while the configuration stays in a single topological sector. We also find that the rapid growth of $\mathrm{Im}Q$ makes the simulation unstable. The imaginary part originates from the non-unitarity of the configuration, which can be a source of the large drift. We need to set $\beta$ large enough to avoid this problem. 
However, the fluctuation of $Q$ is highly suppressed for larger $\beta$, and the autocorrelation time of $Q$ becomes longer than the simulation time. This is known as freezing of the topological charge, and it causes a problem with ergodicity. Therefore, it is difficult to avoid the large drift simply by increasing $\beta$ further. \section{Summary} The sign problem prevents us from studying gauge theories with a theta term by Monte Carlo simulation. In this work, we applied the complex Langevin method (CLM) to 4D SU(2) gauge theory to avoid the problem. We found that the criterion for correct convergence of the CLM is satisfied in the high-temperature region. However, the naively defined topological charge does not take integer values due to the contamination by short range fluctuations. For this reason, we introduced the stout smearing in the CLM in order to recover the topological property. The effect of the smearing can be included in the Langevin dynamics itself as well as in observables. We confirmed that the real part of the topological charge becomes close to an integer after the smearing. On the other hand, the imaginary part mostly vanishes, but it grows rapidly as the real part changes. This behavior is consistent with the topological nature of the theory, although it is difficult to deal with in the numerical simulation. We need to increase $\beta$ to suppress the large drift. However, we cannot increase it further due to the topology freezing. It seems necessary to resolve either the topology freezing or the large drift problem in the CLM. However, it is possible that the appearance of the large drift is related to the topology change, as we found in our previous study of 2D U(1) gauge theory \cite{Hirasawa:2020bnl}. In that case, we need to modify the boundary condition or try some possible ways to suppress the large drifts, such as improving the gauge cooling or the smearing method. \acknowledgments We would like to thank R.~Kitano, N.~Yamada, T.~Ishikawa and Y.~Tanizaki for valuable discussions. The computations were carried out on the PC clusters in KEK Computing Research Center and KEK Theory Center. This work is supported by the Particle, Nuclear and Astro Physics Simulation Program No.2020-009 (FY2020) and No.2021-005 (FY2021) of Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK). The work of M.~Honda is supported by MEXT Q-LEAP, JST PRESTO Grant Number JPMJPR2117 and JSPS Grant-in-Aid for Transformative Research Areas (A) JP21H05190. \bibliographystyle{JHEP}
\section{Preface} \label{s_preface} This paper primarily serves as a reference for my Ph.D. dissertation, which I am currently writing. As a consequence, the framework is not under active development. The presented concepts, problems, and solutions may be interesting regardless, even for problems other than Neural Architecture Search (NAS). The framework's name, UniNAS, is a wordplay on University and Unified NAS, since the framework was intended to incorporate almost any architecture search approach. \section{Introduction and Related Work} \label{s_introduction} An increasing supply and demand for automated machine learning causes the amount of published code to grow by the day. Although this is advantageous, the benefit of such code is often impaired by many technical nitpicks. This section lists common code bases and some of their disadvantages. \subsection{Available NAS frameworks} \label{u_introduction_available} The landscape of NAS codebases is severely fragmented, owing to the vast differences between various NAS methods and the deep-learning libraries used to implement them. Some of the best supported or most widely known ones are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item {NASLib~\citep{naslib2020}} \item { Microsoft NNI \citep{ms_nni} and Archai \citep{ms_archai} } \item { Huawei Noah Vega \citep{vega} } \item { Google TuNAS \citep{google_tunas} and PyGlove \citep{pyglove} (closed source) } \end{itemize} Counterintuitively, the overwhelming majority of publicly available NAS code is not based on any such framework or service but on simple and typical network training code. Such code is generally quick to implement but lacks exact comparability, scalability, and configuration power, which may be a secondary concern for many researchers. In addition, since the official code is often released late or never, and generally only in either TensorFlow~\citep{tensorflow2015-whitepaper} or PyTorch~\citep{pytorch}, popular methods are sometimes re-implemented in third-party repositories. Further projects include the newly available and closed-source cloud services by, e.g., Google\footnote{\url{https://cloud.google.com/automl/}} and Microsoft\footnote{\url{https://www.microsoft.com/en-us/research/project/automl/}}. Since they require very little user knowledge in addition to the training data, they are excellent for deep learning in industrial environments. \subsection{Common disadvantages of code bases} \label{u_introduction_disadvantages} With so many frameworks available, why start another one? The development of UniNAS started in early 2020, before most of these frameworks arrived at their current feature availability or were even made public. In addition, the frameworks rarely provide current state-of-the-art methods even now and sometimes lack the flexibility to include them easily. Further problems that UniNAS aims to solve are detailed below: \paragraph{Research code is rigid} The majority of published NAS code is very simplistic. While that is an advantage when extracting important method-related details, the ability to reuse the available code in another context is severely impaired. 
Almost all details are hard-coded, such as: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { the used gradient optimizer and learning rate schedule } \item { the architecture search space, including candidate operations and network topology } \item { the data set and its augmentations } \item { weight initialization and regularization techniques } \item { the used hardware device(s) for training } \item { most hyper-parameters } \end{itemize} This inflexibility is sometimes accompanied by the redundancy of several code pieces that differ slightly for different experiments or phases in NAS methods. Redundancy is a fine way to introduce subtle bugs or inconsistencies and also makes the code confusing to follow. Hard-coded details are also easy to forget, which is especially problematic in research, where reproducibility depends strongly on seemingly unimportant details. Finally, if any of the hard-coded components is ever changed, such as the optimizer, configurations of previous experiments can become very misleading. Their details are generally not part of the documented configuration (since they are hard-coded), so earlier results no longer make sense. \paragraph{A configuration clutter} In contrast to such simplistic single-purpose code, frameworks usually offer a variety of optimizers, schedules, search spaces, and more to choose from. By configuring the related hyper-parameters, an optimizer can be trivially and safely exchanged for another. Since doing so is a conscious and intended choice, it is also documented in the configuration. In contrast, the replacement of hard-coded classes was not intended when the code was initially written. The disadvantage of this approach comes with the wealth of configurable hyper-parameters, in different ways: Firstly, the parametrization is often cluttered. While implementing more classes (such as optimizers or schedules) adds flexibility, the list of available hyper-parameters becomes increasingly bloated and opaque. The wealth of parametrization is intimidating and impractical since it is often nontrivial to understand exactly which hyper-parameters are used and which are ineffective. As an example, the widely used PyTorch Image Models framework~\citep{rw2019timm} (the example was chosen due to the popularity of the framework; it is no worse than others in this respect) implements an intimidating mix of regularization and data augmentation settings that are partially exclusive.\footnote{\url{https://github.com/rwightman/pytorch-image-models/blob/ba65dfe2c6681404f35a9409f802aba2a226b761/train.py}, checked Dec. 1st 2021; see lines 177 and below.} Secondly, to reduce the clutter, parameters can be shared by multiple mutually exclusive choices. In the case of the aforementioned PyTorch Image Models framework, one example would be the selection of gradient-descent optimizers. Sharing common parameters such as the learning rate and the momentum generally works well, but can be confusing since, once again, finding out which parameters affect which modules necessitates reading the code or documentation. Thirdly, even with an intimidating wealth of configuration choices, not every option is covered. To simplify and reduce the clutter, many settings of lesser importance always use a sensible default value. 
If changing such a parameter becomes necessary, either the framework configurations become more cluttered, or changing the hard-coded default value again results in misleading configurations of previous experiments. To summarize, the hyper-parametrization design of a framework is a delicate decision, aiming to be complete but not cluttered. While both extremes appear to be mutually exclusive, they can be successfully united with the underlying configuration approach of UniNAS: argument trees. \paragraph{} Nonetheless, it is great if code is available at all. Many methods are published without any code that enables verifying their training or search results, impairing their reproducibility. Additionally, even if code is overly simplistic or accompanied by cluttered configurations, reading it is often the best way to clarify a method's exact workings and obtain detailed information about omitted hyper-parameter choices. \section{Argument trees} \label{u_argtrees} The core design philosophy of UniNAS is built on so-called \textit{argument trees}. This concept solves the problems of Section~\ref{u_introduction_disadvantages} while also providing immense configuration flexibility. As its basis, we observe that any algorithm or code piece can be represented hierarchically. For example, the task of training a network requires the network itself and a training loop, which may use callbacks and logging functions. Sections~\ref{u_argtrees_modularity} and~\ref{u_argtrees_register} briefly explain two requirements: strict modularity and a global register. As described in Section~\ref{u_argtrees_tree}, this allows each module to define which other types of modules are needed. In the previous example, a training loop may use callbacks and logging functions. Sections~\ref{u_argtrees_config} and~\ref{u_argtrees_build} explain how a configuration file can fully detail these relationships and how the desired code class structure can be generated. Finally, Section~\ref{u_argtrees_gui} shows how a configuration file can be easily manipulated with a graphical user interface, allowing the user to create and change complex experiments without writing a single line of code. \subsection{Modularity} \label{u_argtrees_modularity} As practiced in most non-simplistic codebases, the core of the argument tree structure is strong modularity. The framework code is fragmented into different components with clearly defined purposes, such as training loops and datasets. Exchanging modules of the same type for one another, for example gradient-descent optimizers, is then a simple matter. If all implemented code classes of the same type inherit from one base class (e.g., AbstractOptimizer) that guarantees specific class methods for a stable interaction, they can be treated equally. In object-oriented programming, this design is termed polymorphism. UniNAS extends typical PyTorch~\citep{pytorch} classes with additional functionality. An example is image classification data sets, which ordinarily do not contain information about image sizes. Adding this specification makes it possible to use fake data easily and to precompute the tensor shapes in every layer throughout the neural network. 
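As a brief illustration of this idea, the following minimal sketch shows how a data set class could expose its tensor shapes and generate fake batches; the class and method names are hypothetical and simplified, and do not reflect the actual UniNAS interface.
\begin{python}
import torch


class AbstractDataSet:
    # hypothetical base class: every data set also reports its tensor shapes
    data_shape = None   # e.g. (3, 32, 32) for CIFAR-like images
    label_shape = None  # e.g. () for a single class index per image

    def sample_batch(self, batch_size: int):
        raise NotImplementedError

    def fake_batch(self, batch_size: int):
        # random tensors of the correct shapes, e.g. to precompute the
        # tensor shapes of every network layer without loading real data
        data = torch.randn(batch_size, *self.data_shape)
        labels = torch.zeros(batch_size, *self.label_shape, dtype=torch.long)
        return data, labels


class FakeCifar10Data(AbstractDataSet):
    data_shape = (3, 32, 32)
    label_shape = ()

    def sample_batch(self, batch_size: int):
        # a real data set would load and augment images here instead
        return self.fake_batch(batch_size)
\end{python}
Any code that relies only on the base class interface, such as a profiler that builds a network from fake batches, then works with every registered data set in the same way.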
\begin{figure*}[ht] \hfill \begin{minipage}[c]{0.97\textwidth} \begin{python}
@Register.task(search=True)
class SingleSearchTask(SingleTask):

    @classmethod
    def args_to_add(cls, index=None) -> [Argument]:
        return [
            Argument('is_test_run', default='False', type=str, is_bool=True),
            Argument('seed', default=0, type=int),
            Argument('save_dir', default='{path_tmp}', type=str),
        ]

    @classmethod
    def meta_args_to_add(cls) -> [MetaArgument]:
        methods = Register.methods.filter_match_all(search=True)
        return [
            MetaArgument('cls_device', Register.devices_managers, num=1),
            MetaArgument('cls_trainer', Register.trainers, num=1),
            MetaArgument('cls_method', methods, num=1),
        ]
\end{python} \end{minipage} \vskip-0.3cm \caption{ UniNAS code excerpt for a SingleSearchTask. The decorator function in Line~1 registers the class with type ''task'' and additional information. The method in Line~5 returns all arguments for the task to be set in a config file. The method in Line~13 defines the local tree structure by stating how many modules of which types are needed. It is also possible to specify additional requirements, as done in Line~14. } \label{u_fig_register} \end{figure*} \subsection{A global register} \label{u_argtrees_register} A second requirement for argument trees is a global register for all modules. Its functions are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { Allow any module to register itself with additional information about its purpose. The example code in Figure~\ref{u_fig_register} shows this in Line~1. } \item { List all registered classes, including their type (task, model, optimizer, data set, and more) and their additional information (search, regression, and more). } \item { Filter registered classes by types and matching information. } \item { Given only the name of a registered module, return the class code located anywhere in the framework's files. } \end{itemize} As seen in the following Sections, this functionality is indispensable to UniNAS' design. The only difficulties in building such a register are that the code should remain readable and that every module has to register itself when the framework is used. Both can be achieved by scanning through all code files whenever a new job starts, which takes less than five seconds. By doing so, Python executes the decorators (see Figure~\ref{u_fig_register}, Line~1), which handle registration in an easily readable fashion. \subsection{Tree-based dependency structures} \label{u_argtrees_tree} \begin{figure*} \vskip-0.7cm \begin{minipage}[l]{0.42\linewidth} \centering \includegraphics[trim=0 320 2480 0, clip, width=\textwidth]{./images/uninas/args_tree_s1_col.pdf} \vskip-0.2cm \caption{ Part of a visualized SingleSearchTask configuration, which describes the training of a one-shot super-network with a specified search method (omitted for clarity; the complete tree is visualized in Figure~\ref{app_u_argstree_img}). The white colored tree nodes state the type and number of requested classes, the turquoise boxes the specific classes used. For example, the \textcolor{red}{SingleSearchTask} requires exactly one type of \textcolor{orange}{hardware device} to be specified, but the \textcolor{cyan}{SimpleTrainer} accepts any number of \textcolor{green}{callbacks} or loggers. 
\\ \hfill } \label{u_argstree_trimmed_img} \end{minipage} \hfill \begin{minipage}[r]{0.5\textwidth} \begin{small} \begin{lstlisting}[backgroundcolor = \color{white}] "cls_task": <@\textcolor{red}{"SingleSearchTask"}@>, "{cls_task}.save_dir": "{path_tmp}/", "{cls_task}.seed": 0, "{cls_task}.is_test_run": true, "cls_device": <@\textcolor{orange}{"CudaDevicesManager"}@>, "{cls_device}.num_devices": 1, "cls_trainer": <@\textcolor{cyan}{"SimpleTrainer"}@>, "{cls_trainer}.max_epochs": 3, "{cls_trainer}.ema_decay": 0.5, "{cls_trainer}.ema_device": "cpu", "cls_exp_loggers": <@\textcolor{black}{"TensorBoardExpLogger"}@>, "{cls_exp_loggers#0}.log_graph": false, "cls_callbacks": <@\textcolor{green}{"CheckpointCallback"}@>, "{cls_callbacks#0}.top_n": 1, "{cls_callbacks#0}.key": "train/loss", "{cls_callbacks#0}.minimize_key": true, \end{lstlisting} \end{small} \vskip-0.2cm \caption{ Example content of the configuration text-file (JSON format) for the tree in Figure~\ref{u_argstree_trimmed_img}. The first line in each text block specifies the used class(es), the other lines their detailed settings. For example, the \textcolor{cyan}{SimpleTrainer} is set to train for three epochs and track an exponential moving average of the network weights on the CPU. } \label{u_argstree_trimmed_text} \end{minipage} \end{figure*} A SingleSearchTask requires exactly one hardware device and exactly one training loop (named trainer, to train an over-complete super-network), which in turn may use any number of callbacks and logging mechanisms. Their relationship is visualized in Figure~\ref{u_argstree_trimmed_img}. Argument trees are extremely flexible since they allow every hierarchical one-to-any relationship imaginable. Multiple optional callbacks can be rearranged in their order and configured in detail. Moreover, module definitions can be reused in other constellations, including their requirements. The ProfilingTask does not need a training loop to measure the runtime of different network topologies on a hardware device, reducing the argument tree in size. While not implemented, a MultiSearchTask could use several trainers in parallel on several devices. The hierarchical requirements are made available using so-called MetaArguments, as seen in Line~16 of Figure~\ref{u_fig_register}. They specify the local structure of argument trees by stating which other modules are required. To do so, writing the required module type and their amount is sufficient. As seen in Line~14, filtering the modules is also possible to allow only a specific subset. This particular example defines the upper part of the tree visualized in Figure~\ref{u_argstree_trimmed_img}. The names of all MetaArguments start with "cls\_" which improves readability and is reflected in the visualized arguments tree (Figure~\ref{u_argstree_trimmed_img}, white-colored boxes). \subsection{Tree-based argument configurations} \label{u_argtrees_config} While it is possible to define such a dynamic structure, how can it be represented in a configuration file? Figure~\ref{u_argstree_trimmed_text} presents an excerpt of the configuration that matches the tree in Figure~\ref{u_argstree_trimmed_img}. As stated in Lines~6 and~9 of the configuration, CudaDevicesManager and SimpleTrainer fill the roles for the requested modules of types "device" and "trainer". Lines~14 and~17 list one class of the types ''logger'' and ''callback'' each, but could provide any number of comma-separated names. 
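As a hypothetical example (not part of the file shown above), an entry such as
\begin{lstlisting}[backgroundcolor = \color{white}]
"cls_callbacks": "CheckpointCallback, EarlyStopCallback",
\end{lstlisting}
would request two callback modules, each of which could then be configured individually through its indexed arguments "{cls\_callbacks\#0}.\dots" and "{cls\_callbacks\#1}.\dots"; the second callback name is made up purely for illustration.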
Also including the stated "task" type in Line~1, the mentioned lines state strictly which code classes are used and, given the knowledge about their hierarchy, define the tree structure. Additionally, every class has some arguments (hyper-parameters) that can be modified. SingleSearchTask defined three such arguments (Lines~7 to~9 in Figure~\ref{u_fig_register}) in the visualized example, which are represented in the configuration (Lines~2 to~4 in Figure~\ref{u_argstree_trimmed_text}). If the configuration is missing an argument, maybe to keep it short, its default value is used. Another noteworthy mechanism in Line~2 is that "\{cls\_task\}.save\_dir" references whichever class is currently set as "cls\_task" (Line~1), without naming it explicitly. Such wildcard references simplify automated changes to configuration files since, independently of the used task class, overwriting "\{cls\_task\}.save\_dir" is always an acceptable way to change the save directory. A less general but perhaps more readable notation is "SingleSearchTask.save\_dir", which is also accepted here. A very interesting property of such dynamic configuration files is that they contain only the hyper-parameters (arguments) of the used code classes. Adding any additional arguments will result in an error since the configuration-parsing mechanism, described in Section~\ref{u_argtrees_build}, is then unable to piece the information together. Even though UniNAS implements several different optimizer classes, any such configuration only contains the hyper-parameters of those used. Generated configuration files are always complete (contain all available arguments), sparse (contain only the available arguments), and never ambiguous. A debatable design decision of the current configuration files, as seen in Figure~\ref{u_argstree_trimmed_text}, is that they do not explicitly encode any hierarchy levels. Since that information is already known from their class implementations, the flat representation was chosen primarily for readability. It is also beneficial when arguments are manipulated, either automatically or from the terminal when starting a task. The disadvantage is that the argument names for class types can only be used once ("cls\_device", "cls\_trainer", and more); an unambiguous assignment is otherwise not possible. For example, since the SingleSearchTask already owns "cls\_device", no other class that could be used in the same argument tree can use that particular name. While this limitation is not too significant, it can be mildly confusing at times. Finally, how is it possible to create configuration files? Since the dynamic tree-based approach offers a wide variety of possibilities, only a tiny subset is valid. For example, providing two hardware devices violates the defined tree structure of a SingleSearchTask and results in a parsing failure. If that happens, the user is provided with details of which particular arguments are missing or unexpected. While the best way to create correct configurations is surely experience and familiarity with the code base, the same could be said about any framework. Since UniNAS knows about all registered classes, which other (possibly specified) classes they use, and all of their arguments (including defaults, types, help string, and more), an exhaustive list can be generated automatically. However, resulting in almost 1600 lines of text, this solution is not optimal either. 
The most convenient approach is presented in Section~\ref{u_argtrees_gui}: Creating and manipulating argument trees with a GUI. \begin{algorithm} \caption{ Pseudo-code for building the argument tree, best understood with Figures~\ref{u_argstree_trimmed_img} and~\ref{u_argstree_trimmed_text}. For a consistent terminology of code classes and tree nodes: if the $Task$ class uses a $Trainer$, then, in that context, $Trainer$ is the child. Lines starting with \# are comments. } \label{alg_u_argtree} \small \begin{algorithmic} \Require $Configuration$ \Comment{Content of the configuration file} \Require $Register$ \Comment{All modules in the code are registered} \State{} \State{$\#$ recursive parsing function to build a tree} \Function{parse}{$class,~index$} \Comment{E.g. $(SingleSearchTask,~0)$} \State $node = ArgumentTreeNode(class,~index)$ \State{} \State{$\#$ first parse all arguments (hyper-parameters) of this tree node} \ForEach{($idx, argument\_name$) \textbf{in} $class.get\_arguments()$} \Comment{E.g. (0, $''save\_dir''$)} \State $value = get\_used\_value(Configuration,~class,~index,~argument\_name)$ \State $node.add\_argument(argument\_name,~value)$ \EndFor \State{} \State{$\#$ then recursively parse all child classes, for each module type...} \ForEach{$child\_class\_type$ \textbf{in} $class.get\_child\_types()$} \Comment{E.g. $cls\_trainer$} \State $class\_names = get\_used\_classes(Configuration,~child\_class\_type)$ \Assert{ The number of $class\_names$ is in the specified limits} \State{} \State{$\#$ for each module type, check all configured classes} \ForEach{($idx,~class\_name$) \textbf{in} $class\_names$} \Comment{E.g. (0, $''SimpleTrainer''$)} \State $child\_class = Register.get(class\_name)$ \State $child\_node = $\Call{parse}{$child\_class,~idx$} \State $node.add\_child(child\_class\_type,~idx,~child\_node)$ \EndFor \EndFor \Returnx{ $node$} \EndFunction \State{} \State $tree = $\Call{parse}{$Main, 0$} \Comment{Recursively parse the tree, $Main$ is the entry point} \Ensure every argument in the configuration has been parsed \end{algorithmic} \end{algorithm} \subsection{Building the argument tree and code structure} \label{u_argtrees_build} Arguably the most important function of a research code base is to run experiments. In order to do so, valid configuration files must be translated into their respective code structure. This comes with three major requirements: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ Classes in the code that implement the desired functionality. As seen in Section~\ref{u_argtrees_tree} and Figure~\ref{u_argstree_trimmed_img}, each class also states the types, argument names and numbers of additionally requested classes for the local tree structure. } \item{ A configuration that describes which code classes are used and which values their parameters take. This is described in Section~\ref{u_argtrees_config} and visualized in Figure~\ref{u_argstree_trimmed_text}. } \item{ To connect the configuration content to classes in the code, it is required to reference code modules by their names. As described in Section~\ref{u_argtrees_register}, this can be achieved with a global register. } \end{itemize} Algorithm~\ref{alg_u_argtree} realizes the first step of this process: parsing the hierarchical code structure and the corresponding arguments from the flat configuration file. 
The result is a tree of \textit{ArgumentTreeNodes}, each of which refers to exactly one class in the code, is connected to all related tree nodes, and knows all relevant hyper-parameter values. While they do not yet have actual class instances, this final step is no longer difficult. \begin{figure*}[h] \vskip -0.0in \begin{center} \includegraphics[trim=30 180 180 165, clip, width=\linewidth]{images/uninas/gui/gui1desc.png} \hspace{-0.5cm} \caption{ The graphical user interface (left) that can manipulate the configurations of argument trees (visualized right). Since many nodes are missing classes of some type ("cls\_device", ...), their parts in the GUI are highlighted in red. The eight child nodes of DartsSearchMethod are omitted for visual clarity. } \label{fig_u_gui} \end{center} \end{figure*} \subsection{Creating and manipulating argument trees with a GUI} \label{u_argtrees_gui} Manually writing a configuration file can be perplexing since one must keep track of tree specifications, argument names, available classes, and more. The graphical user interface (GUI) visualized in Figures~\ref{fig_u_gui} and~\ref{app_u_gui} solves these problems to a large extent, by providing the following functionality: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ Interactively add and remove nodes in the argument tree, thus also in the configuration and class structure. Highlight violations of the tree specification. } \item{ Setting the hyper-parameters of each node, using checkboxes (boolean), dropdown menus (choice from a selection), and text fields (other cases like strings or numbers) where appropriate. } \item{ Functions to save and load argument trees. Since it makes sense to separate the configurations for the training procedure and the network design to swap between different constellations easily, loading partial trees is also supported. Additional functions enable visualizing, resetting, and running the current argument tree. } \item{ A search function that highlights all matches since the size of some argument trees can make finding specific arguments tedious. } \end{itemize} In order to do so, the GUI manipulates \textit{ArgumentTreeNodes} (Section~\ref{u_argtrees_build}), which can be easily converted into configuration files and code. As long as the required classes (for example, the data set) are already implemented, the GUI enables creating and changing experiments without ever touching any code or configuration files. While not among the original intentions, this property may be especially interesting for non-programmers who want to solve their problems quickly. Still, the current version of the GUI is a proof of concept. It favors functionality over design, written with the plain Python Tkinter GUI framework and based on little previous GUI programming experience. Nonetheless, since the GUI (frontend) and the functions manipulating the argument tree (backend) are separated, a continued development with different frontend frameworks is entirely possible. Perhaps the most interesting option would be a web service that runs experiments on a server, remotely configurable from any web browser. \subsection{Using external code} \label{u_external} There is a variety of reasons why it makes sense to include external code in a framework. Most importantly, the code either solves a standing problem or provides the users with additional options. 
Unlike newly written code, many popular libraries are also thoroughly optimized, reviewed, and empirically validated. External code is also a perfect match for a framework based on argument trees. As shown in Figure~\ref{u_fig_external_import}, external classes of interest can be thinly wrapped to ensure compatibility, register the module, and specify all hyper-parameters for the argument tree. The integration is seamless so that finding out whether a module is locally written or external requires an inspection of its code. On the other hand, if importing the AdaBelief~\citep{zhuang2020adabelief} code fails, the module will not be registered and therefore not be available in the graphical user interface. UniNAS fails to parse configurations that require unregistered modules but informs the user which external sources can be installed to extend its functionality. Due to this logistic simplicity, several external frameworks extend the core of UniNAS. Some of the most important ones are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ pymoo~\citep{pymoo}, a library for multi-objective optimization methods. } \item{ Scikit-learn~\citep{sklearn}, which implements many classical machine learning algorithms such as Support Vector Machines and Random Forests. } \item{ PyTorch Image Models~\citep{rw2019timm}, which provides the code for several optimizers, network models, and data augmentation methods. } \item{ albumentations~\citep{2018arXiv180906839B}, a library for image augmentations. } \end{itemize} \begin{figure*} \hfill \begin{minipage}[c]{0.95\textwidth} \begin{python} from uninas.register import Register from uninas.training.optimizers.abstract import WrappedOptimizer try: from adabelief_pytorch import AdaBelief # if the import was successful, # register the wrapped optimizer @Register.optimizer() class AdaBeliefOptimizer(WrappedOptimizer): # wrap the original ... except ImportError as e: # if the import failed, # inform the user that optional libraries are not installed Register.missing_import(e) \end{python} \end{minipage} \vskip-0.3cm \caption{ Excerpt of UniNAS wrapping the official AdaBelief optimizer code. The complete text has just 45 lines, half of which specify the optimizer parameters for the argument trees. } \label{u_fig_external_import} \end{figure*} \section{Dynamic network designs} \label{u_networks} As seen in the previous Sections, the unique design of UniNAS enables powerful customization of all components. In most cases, a significant portion of the architecture search configuration belongs to the network design. The FairNAS search example in Figure~\ref{app_u_argstree_img} contains 25 configured classes, of which 11 belong to the search network. While it would be easy to create a single configurable class for each network architecture of interest, that would ignore the advantages of argument trees. On the other hand, there are many technical difficulties with highly dynamic network topologies. Some of them are detailed below. \subsection{Decoupling components} In many published research codebases, network and architecture weights jointly exist in the network class. This design decision is disadvantageous for multiple reasons. Most importantly, changing the network or NAS method requires a lot of manual work. The reason is that different NAS methods need different amounts of architecture parameters, use them differently, and optimize them in different ways. 
For example: \begin{itemize}[noitemsep,parsep=0pt,partopsep=0pt] \item{ DARTS~\citep{liu2018darts} requires one weight vector per architecture choice. These weights weigh all the different paths (candidate operations) in a sum. Updating the weights is done with an additional optimizer (ADAM), using gradient descent. } \item{ MDENAS~\citep{mdenas} uses a similar vector for a weighted sample of a single candidate operation that is used in this particular forward pass. Global network performance feedback is used to increase or decrease the local weightings. } \item{ Single-Path One-Shot~\citep{guo2020single} does not use weights at all. Paths are always sampled uniformly randomly. The trained network is then used as an accuracy prediction model by a hyper-parameter optimization method. } \item{ FairNAS~\citep{FairNAS} extends Single-Path One-Shot to make sure that all candidate operations are used frequently and equally often. It thus needs to track which paths are currently available. } \end{itemize} \begin{figure}[t] \vskip -0.0in \begin{center} \includegraphics[trim=0 0 0 0, clip, width=\linewidth]{images/draw/search_net.pdf} \hspace{-0.5cm} \caption{ The network and architecture weights are decoupled. \textbf{Top}: The structure of a fully sequential super-network. Every layer (cell) uses the same set of candidate operations and weight strategy. \textbf{Bottom left}: One set of candidate operations that is used multiple times in the network. This particular experiment uses the NAS-Bench-201 candidate operations. \textbf{Bottom right}: A weight strategy that manages everything related to the used NAS method, such as creating the architecture weights or which candidates are used in each forward pass. } \label{fig_u_decouple} \end{center} \end{figure} The same is also true for the set of candidate operations, which affect the sizes of the architecture weights. Once the definitions of the search space, the candidate operations, and the NAS method (including the architecture weights) are mixed, changing any part is tedious. Therefore, strictly separating them is the best long-term approach. Similar to other frameworks presented in Section~\ref{u_introduction_available}, architectures defined in UniNAS do not use an explicit set of candidate architectures but allow a dynamic configuration. This is supported by a \textit{WeightStrategy} interface, which handles all NAS-related operations such as creating and updating the architecture weights. The interaction between the architecture definition, the candidate operations, and the weight strategy is visualized in Figure~\ref{fig_u_decouple}. The easy exchange of any component is not the only advantage of this design. Some NAS methods, such as DARTS, update network and architecture weights using different gradient descent optimizers. Correctly disentangling the weights is trivial if they are already organized in decoupled structures but hard otherwise. Another advantage is that standardizing functions to create and manage architecture weights makes it easy to present relevant information to the user, such as how many architecture weights exist, their sizes, and which are shared across different network cells. An example is presented in Figure~\ref{app_text}. \begin{figure}[hb!] 
\begin{minipage}[c]{0.24\textwidth} \centering \includegraphics[height=11.5cm]{./images/draw/mobilenetv2.pdf} \end{minipage} \hfill \begin{minipage}[c]{0.5\textwidth} \small \begin{python} "cell_3": { "name": "SingleLayerCell", "kwargs": { "name": "cell_3", "features_mult": 1, "features_fixed": -1 }, "submodules": { "op": { "name": "MobileInvConvLayer", "kwargs": { "kernel_size": 3, "kernel_size_in": 1, "kernel_size_out": 1, "stride": 1, "expansion": 6.0, "padding": "same", "dilation": 1, "bn_affine": true, "act_fun": "relu6", "act_inplace": true, "att_dict": null, "fused": false } } } }, \end{python} \end{minipage} \caption{ A high-level view on the MobileNet~V2 architecture~\citep{sandler2018mobilenetv2} in the top left, and a schematic of the inverted bottleneck block in the bottom left. This design uses two 1$\times$1 convolutions to change the channel count \textit{n} by an expansion factor of~6, and a spatial 3$\times$3 convolution in their middle. The text on the right-hand side represents the cell structure by referencing the modules by their names ("name") and their keyworded arguments ("kwargs"). } \label{u_fig_conf} \end{figure} \subsection{Saving, loading, and finalizing networks} \label{u_networks_save} As mentioned before, argument trees enable a detailed configuration of every aspect of an experiment, including the network topology itself. As visualized in Figure~\ref{app_u_argstree_img}, such network definitions can become almost arbitrarily complex. This becomes disadvantageous once models have to be saved or loaded or when super-networks are finalized into discrete architectures. Unlike TensorFlow~\citep{tensorflow2015-whitepaper}, the used PyTorch~\citep{pytorch} library saves only the network weights without execution graphs. External projects like ONNX~\citep{onnx} can be used to export limited graph information but not to rebuild networks using the same code classes and context. The implemented solution is inspired by the official code\footnote{\url{https://github.com/mit-han-lab/proxylessnas/tree/master/proxyless_nas}} of ProxylessNAS~\citep{proxylessnas}, where every code module defines two functions that enable exporting and importing the entire module state and context. As typical for hierarchical structures, the state of an outer module contains the states of all modules within. An example is visualized in Figure~\ref{u_fig_conf}, where one cell in the famous MobileNet V2 architecture is represented as readable text. The global register can provide any class definition by name (see Section~\ref{u_argtrees_register}) so that an identical class structure can be created and parameterized accordingly. The same approach that enables saving and loading arbitrary class compositions can also be used to change their structure. More specifically, an over-complete super-network containing all possible candidate operations can export only a specific configuration subset. The network recreated from this reduced configuration is the result of the architecture search. This is made possible since the weight strategy controls the use of all candidate operations, as visualized in Figure~\ref{fig_u_decouple}. Similarly, when their configuration is exported, the weight strategy controls which candidates should be part of the finalized network architecture. In another use case, some modules behave differently in super-networks and finalized architectures. 
For example, Linear Transformers~\citep{ScarletNAS} supplement skip connections with linear 1$\times$1 convolutions in super-networks to stabilize the training with variable network depths. When the network topology is finalized, it suffices to simply export the configuration of a skip connection instead of its own. Another practical way of rebuilding code structures is available through the argument tree configuration, which defines every detail of an experiment (see Section~\ref{u_argtrees_config}). Parsing the network design and loading the trained weights of a previous experiment requires no user interaction beyond specifying its save directory. This specific way of recreating experiment environments is used extensively in \textit{Single-Path One-Shot} tasks. In the first step, a super-network is trained to completion. Afterward, when the super-network is used to make predictions for a hyper-parameter optimization method (such as Bayesian optimization or evolutionary algorithms), the entire environment of its training can be recreated. This includes the network design and the dataset, data augmentations, which parts were reserved for validation, regularization techniques, and more. \section{Discussion and Conclusions} \label{u_conclusions} We presented the underlying concepts of UniNAS, a PyTorch-based framework with the ambitious goal of unifying a variety of NAS algorithms in one codebase. Even though the use cases for this framework changed over time, mostly from DARTS-based to SPOS-based experiments, its underlying design approach made reusing old code possible at every step. However, several technical details could be changed or improved in hindsight. Most importantly, configuration files should reflect the hierarchy levels (see Section~\ref{u_argtrees_config}) for code simplicity and to avoid concerns about using module types multiple times. The current design favors readability, which is now a minor concern thanks to the graphical user interface. Other considered changes would improve the code readability but were not implemented due to a lack of necessity and time. In summary, the design of UniNAS fulfills all original requirements. Modules can be arranged and combined in almost arbitrary constellations, giving the user an extremely flexible tool to design experiments. Furthermore, using the graphical user interface does not require writing even a single line of code. The resulting configuration files contain only the relevant information and do not suffer from the wealth of options that the framework provides. These features also enable an almost arbitrary network design, combined with any NAS optimization method and any set of candidate operations. Despite that, networks can still be saved, loaded, and changed in various ways. Although not covered here, several unit tests ensure that the essential framework components keep working as intended. Finally, what is the advantage of using argument trees over writing code with the same results? Compared to configuration files, code is more powerful and versatile but will likely suffer from problems described in Section~\ref{u_introduction_available}. Argument trees make any considerations about which parameters to expose unnecessary and can enforce the use of specific module types and subsets thereof. However, their strongest advantage is the visualization and manipulation of the entire experiment design with a graphical user interface. 
This aligns well with Automated Machine Learning (AutoML), which is also intended to make machine learning available to a broader audience. {\small \bibliographystyle{iclr2022_conference}
\section{Introduction}\label{intro} Under suitable assumptions, a branching process can be decomposed into a spine and side branches. Heuristically speaking, the spine has the distribution of the driving process conditioned on non-extinction. We will prove this claim for the ``Fleming-Viot branching process'' introduced in \cite{BHM00}. In this paper, individuals follow independent Brownian motions and are killed on the boundary of a bounded Lipschitz set. A Fleming-Viot process is an extreme case of the Moran model introduced in \cite{Moran} (see \cite[Def.~5.12]{AE} for the modern discussion). In the Moran model, individuals branch at a (bounded) intensity while in our model, they do not branch at all when they are in the ``main part'' of the domain but they branch instantaneously when they hit a ``small set.'' Our main result on the asymptotic spine distribution is limited to Fleming-Viot processes driven by Brownian motion. We conjecture that an analogous result holds for every Fleming-Viot process (perhaps under mild technical assumptions). An analogous theorem was proved for Fleming-Viot processes driven by continuous time Markov processes on finite spaces in \cite{BiBu}. That article contained extra results on the branching structure, missing from the present paper, namely, it was proved that the rate of branching along the spine converges to twice the rate for a generic particle and the distribution of a side branch converges to the distribution of a branching process with the limiting branching rate. The literature on branching processes is huge so we will mention only a few key publications. Most of them contain extensive reference lists. A precursor of our model can be found in a paper by Moran \cite{Moran}. The book by Jagers \cite{Jagers75} is a classical treatise on branching processes and their applications to biology. A modern review of continuous time and space branching can be found in a book by Etheridge \cite{AE}. The ``Evans' immortal particle picture'' was introduced in \cite{Evans}. The ``look-down'' process was defined by Donnelly and Kurtz in \cite{DK96}. Modern approaches to the spine can be found in \cite{EK2004} and \cite{Henard}. Some of the most profound analysis of the genealogical structure of the Moran and related models appeared in \cite{GLW05,DGP12,GPW13,SeidelThesis}. Our paper is organized as follows. Our proof is complicated so we start with a non-technical review of the proof strategy in Section \ref{j30.1}. Section~\ref{review} contains basic definitions and a review of known results. Section \ref{sec:main} contains the statement of Theorem \ref{a1.7}, our main result, and its proof. The proof is based on many estimates that are relegated to Section \ref{sec:est}. \section{Heuristic outline of the argument}\label{j30.1} Consider a fixed population of $n\ge 2$ individuals moving according to independent Brownian motions. They all start inside a domain $\Lambda$ and are killed when they exit $\Lambda$. When an individual is killed, another randomly (uniformly) chosen individual branches into two individuals. Thus the population size is always equal to $n$. Assume that $\Lambda$ is a Lipschitz domain with the Lipschitz constant 1. It has been proved that there exists a single trajectory inside the branching structure (a ``spine'') which extends over the whole time interval $[0,\infty)$ and never hits the boundary of $\Lambda$ (see Sections \ref{ssec:DHPs} and \ref{de2.1}). Our main result is stated in Theorem \ref{a1.7}. 
It says that, as the population size $n$ goes to infinity, the spine processes converge in distribution to Brownian motion conditioned to stay in $\Lambda$ forever (an example of a space-time $h$-process of Doob). It is known (see the main theorem in \cite{Pin}) that the distribution of Brownian motion conditioned to stay in $\Lambda$ for a very long time is close to the distribution of Brownian motion conditioned to stay in $\Lambda$ forever; the latter process is ``$h$-transformed'' space-time Brownian motion. We state and prove our own version of the claim in Lemma \ref{a14.3} because we need a specific order of quantifiers. We will also prove that if we fix an arbitrarily large $s>0$ then we can find $t>s$ such that if we condition Brownian motion to stay in $\Lambda$ on $[0,t]$ and we condition it to have an arbitrary distribution at time $t$, then its distribution on $[0,s]$ will be arbitrarily close to the distribution of Brownian motion conditioned to stay in $\Lambda$ forever. It follows that it will suffice to prove that for any sufficiently large fixed $t>0$, when the population size $n$ goes to infinity, the spine distributions on the time interval $[0,t]$ converge to the distribution of Brownian motion conditioned to stay in $\Lambda$ on $[0,t]$ and conditioned to have a given, arbitrarily chosen distribution at time $t$. We will apply a theorem of Villemonais \cite{Villemonais13}. The theorem says that when the population size $n$ is large then the distribution of the location of a randomly (uniformly) chosen individual at time $t$ is close to the distribution of the position at time $t$ of the driving process conditioned to stay in $\Lambda$ until time $t$. Villemonais' theorem applies to branching populations driven by very general Markov processes. This has been used in \cite{BiBu} to show that Villemonais' theorem can be applied not only to the positions of individuals at time $t$ but also to the whole trajectories (genealogies) of individuals alive at time $t$. In other words, if time $t>0$ is fixed and the population size $n$ is large, and one chooses an individual alive at time $t$ randomly (uniformly), then the distribution of the genealogy of this individual on $[0,t]$ is close to the distribution of the driving process on $[0,t]$ conditioned to stay in $\Lambda$ until time $t$. Given the location of all individuals at time $t$, the individual residing on the spine is determined by the Fleming-Viot process on the interval $[t,\infty)$. There is no reason why the spine position should be chosen uniformly among all individuals at time $t$. Thus we cannot apply the generalized Villemonais' theorem directly. In order to surmount this difficulty, and make Villemonais' result applicable to our problem, we divide the domain $\Lambda$ into small cubes. For a fixed small cube, we show that the probability that the spine passes through any individual inside the cube at time $t$ is very close to the probability that it passes through any other individual that happens to be in the same cube at time $t$. The reason is that, given the positions of individuals in a cube at time $t$ and their positions at a slightly bigger time $t+\Delta t$, Brownian motions conditioned to connect these points are almost equally likely to choose any permutation to connect the two sets of points; this follows from the form of the multidimensional Gaussian distribution. 
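To illustrate this last claim in the simplest case (the computation below is only an illustration, ignoring the conditioning to stay in $\Lambda$ over this short time interval; it is not the estimate used in the actual proof), suppose that two individuals are located at $x_1,x_2$ in a cube with diameter $\varepsilon$ at time $t$ and that the two positions at time $t+\Delta t$ are $y_1,y_2$. Writing $p_{\Delta t}(x,y)=(2\pi\Delta t)^{-d/2}e^{-|x-y|^2/(2\Delta t)}$ for the Gaussian transition density, the ratio of the likelihoods of the two possible matchings is
\[
\frac{p_{\Delta t}(x_1,y_1)\,p_{\Delta t}(x_2,y_2)}{p_{\Delta t}(x_1,y_2)\,p_{\Delta t}(x_2,y_1)}
=\exp\left(\frac{(x_1-x_2)\cdot(y_1-y_2)}{\Delta t}\right),
\]
which is close to 1 as long as $\varepsilon\,|y_1-y_2|$ is much smaller than $\Delta t$; in that regime, both matchings are almost equally likely.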
Hence, on the scale of a single small cube, the spine position is chosen almost uniformly from all individuals present in the cube at time $t$. Villemonais proved a quantitative version of his theorem in \cite{Villemonais13}. We can use this version to show that the genealogical trajectory of an individual chosen randomly (uniformly) in a small cube at time $t$ has a distribution close to the distribution of Brownian motion conditioned to stay in $\Lambda$ on the time interval $[0,t]$ and reach the cube at time $t$. Since we do not know through which small cube the spine passes at time $t$, all we can say is that the distribution of the spine on $[0,t]$ is close to the distribution of Brownian motion conditioned to stay in $\Lambda$ on $[0,t]$ and to have some (unknown to us) distribution at time $t$. In view of earlier remarks, this completes the main argument, but there are several loose ends that need to be taken care of. First, trajectories may branch on the small time interval $[t, t+\Delta t]$ mentioned above. We show that this effect is negligible. Next, the argument based on small cubes applies only to those cubes that are far from the boundary (relative to the cube size) so that we can assume that the trajectories of individuals starting from the cube at time $t$ have negligible chance of hitting the boundary of $\Lambda$ during the time interval $[t,t+\Delta t]$. The cubes that are close to the boundary are taken care of by proving that the spine is unlikely to be close to the boundary of $\Lambda$. This is accomplished by showing that there are no trajectories in the branching structure that stay close to the boundary of $\Lambda$ for a long time. Hence, the spine, being one of these trajectories, must stay away from the boundary much of the time. \section{Notation, definitions and known results}\label{review} This section is based on \cite{BiBu}. Our main theorem will be concerned with Fleming-Viot processes driven by Brownian motion in $\mathbb R^d$. Nevertheless, we need to consider Fleming-Viot processes with an abstract underlying state space because our arguments will be based on ``dynamical historical processes,'' which are Fleming-Viot processes driven by Markov processes with values in function spaces. Let $\Gamma$ be a topological space and let $\Lambda$ be a Borel proper subset of $\Gamma$. We will write $\Lambda^c= \Gamma\setminus \Lambda$. Let $\{B_t,t\geq 0\}$ be a continuous time strong Markov process with state space $\Gamma$, almost all of whose sample paths are right continuous. For $s\geq 0$, let \begin{equation}\label{m29.1} \tau_{\Lambda,s}=\inf\set{t>s:B_t\in \Lambda^c}, \qquad \tau_\Lambda = \tau_{\Lambda, 0}. \end{equation} We assume that $\Lambda^c$ is absorbing, i.e., $B_t = B_{\tau_{\Lambda,s}}$ for all $t\geq \tau_{\Lambda,s}$, a.s. We make the following assumptions. (A1) $ \P\left( s <\tau_{\Lambda,s}<\infty \mid B_s = x\right)=1 $ for all $x\in \Lambda$ and $s\geq 0$. (A2) For every $x\in \Lambda$ and $s\geq 0$, the conditional distribution of $\tau_{\Lambda,s}$ given $\{B_s = x\}$ has no atoms. Consider an integer $n\geq 2$ and a family $\set{U_k^i,\, 1\leq i\leq n,\, k\geq 1}$ of jointly independent random variables such that $U_k^i$ has the uniform distribution on the set $\set{1,\dotsc,n}\setminus\set{i}$. We will use induction to construct a Fleming-Viot type process $\mathbf X^n_t=(X^1_t,\dotsc,X^n_t)$, $t\geq 0$, with values in $\Lambda^n$.
Let $\tau_0=0$ and consider the (possibly random) initial configuration $(X_0^{1,1},\dotsc,X_0^{1,n})\in \Lambda^n$. Let \begin{equation}\label{o26.1} X_t^{1,1},\dotsc,X_t^{1,n},\quad t\geq 0, \end{equation} be independent and have the transition probabilities of the process $B$. We assume that processes in \eqref{o26.1} are independent of the family $\set{U^i_k,\, 1\leq i\leq n,\, k\geq 1}$. Let \begin{equation*} \tau_1=\inf\set{t>0:\exists_{1\leq i\leq n}\, X_t^{1,i}\in \Lambda^c}. \end{equation*} By assumption (A2), no pair of processes can exit $\Lambda$ at the same time, so the index $i$ in the above definition is unique, a.s. For the induction step, assume that the families \begin{equation*} X_t^{j,1},\dotsc,X_t^{j,n}, \quad t\geq 0, \end{equation*} and the stopping times $\tau_j$ have been defined for $j\leq k$. For each $j\leq k$, denote by $i_j$ the unique index such that $X_{\tau_j}^{j,i_j}\in \Lambda^c$. Let \begin{equation*} X_{\tau_k}^{k+1,m}=X_{\tau_k}^{k,m}\quad \text{for $m\neq i_k$,} \end{equation*} and \begin{equation*} X_{\tau_k}^{k+1,i_k}=X_{\tau_k}^{k,U^{i_k}_k}. \end{equation*} In words: it is the particle indexed by $(k,i_k)$, i.e., $X_{\cdot}^{k,i_k}$, which hits the boundary and then jumps onto another, randomly chosen particle inside. Let the conditional joint distribution of \begin{equation*} X_t^{k+1,1},\dotsc,X_t^{k+1,n},\quad t\geq \tau_k, \end{equation*} given $\set{X_t^{j,m},\, 0\leq t \leq \tau_j, 1\leq m\leq n}$, $j\leq k$, and $\set{U_k^i,\, 1\leq i\leq n,\, k\geq 1}$, be that of $n$ independent processes with transition probabilities of $B$, starting from $X_{\tau_k}^{k+1,m}$, $1\leq m\leq n$. Let \begin{equation*} \tau_{k+1}=\inf\set{t>\tau_k:\exists_{1\leq i\leq n}\, X_t^{k+1,i}\in \Lambda^c}. \end{equation*} We define $\mathbf X^n_t:=(X^1_t,\dotsc,X^n_t)$ by \begin{equation*} X_t^m=X_t^{k,m},\qquad \text{ for } \tau_{k-1}\leq t<\tau_k,\ k\geq 1,\ m=1,2,\dotsc,n. \end{equation*} Note that the process $\mathbf X^n$ is well defined only up to the time \begin{equation*} \tau_\infty :=\lim_{k\to\infty} \tau_k. \end{equation*} We will say that $X^k$ experiences branching on the interval $[s_1,s_2]$ if some other process $X^m$ jumps from $\partial \Lambda$ to $X^k_s$ at a time $s\in[s_1,s_2]$. Note that the particle that is jumping is not considered to experience branching at that time. The processes $X^k$, $k=1,\dots,n$, are ``driven'' by independent copies $B^k$ of $B$ that can be distilled from $X^k$'s in the following way, \begin{align}\label{a22.1} B^k_t= X^k_t - X^k_0 - \sum_{\tau_i \leq t} \left(X^k_{\tau_i} - X^k_{\tau_i-}\right), \qquad t\geq 0. \end{align} \subsection{Dynamical historical processes and spine}\label{ssec:DHPs} Heuristically speaking, for each $k\in\set{1,\dotsc,n}$, the ``dynamical historical process'' $\{H^k_t(s), 0\leq s\leq t\}$ (to be defined rigorously below) represents the unique path in the branching structure of the Fleming-Viot process which goes from $X^k_t$ to one of the points $X^1_0, \dots, X^n_0$ along the trajectories of $X^1, \dots, X^n$ and does not jump at any of the times $\tau_1,\tau_2,\dotsc$. Let $\mathcal A$ be the family of all sequences of the form $((a_1,b_1), (a_2,b_2), \dots, (a_k,b_k))$, where $a_i \in \{1,\dots, n\}$ and $b_i \in \mathbb N$ for all $i$. If $\mathbf a=((a_1,b_1), (a_2,b_2), \dots, (a_k,b_k))$ then we will write $\mathbf a + (m,r)$ to denote $((a_1,b_1), (a_2,b_2), \dots, (a_k,b_k), (m,r))$. We will define a function $\mathcal L : \{1,\dots, n\} \times [0,\tau_\infty) \to \mathcal A$.
We interpret $\mathcal L(i,s)$ as a label of $X^i_s$ so, by abuse of notation, we will write $\mathcal L(X^i_s)$ instead of $\mathcal L(i,s)$. We let $\mathcal L(X^i_s)=((i,0))$ for all $0\leq s < \tau_1$ and $1\leq i\leq n$. If $\mathcal L(X^i_s)=\mathbf a$ for $\tau_{k-1} \leq s < \tau_k$, $i \ne i_k$ and $i \ne U^{i_k}_k $ then we let $\mathcal L(X^i_s)=\mathbf a$ for $\tau_{k} \leq s < \tau_{k+1}$. Suppose that $i = U^{i_k}_k $ and $\mathcal L(X^i_s)=\mathbf a$ for $\tau_{k-1} \leq s < \tau_k$. Then we let $\mathcal L(X^i_s)=\mathbf a+(i, k)$ and $\mathcal L(X^{i_k}_s)=\mathbf a+(i_k,k)$ for $\tau_{k} \leq s < \tau_{k+1}$. Suppose that $\mathcal L(X^\ell_t)=((a_1,b_1), (a_2,b_2), \dots, (a_k,b_k))$ for some $k\geq 1$. The assumption (A1) on the driving process $B$ implies that $X^\ell$ will ``hit'' $\Lambda^c$ at some time greater than $t$ (more precisely, $\ell =i_j $ for some $j> b_k$) with probability 1. Before that time, it may also happen that some other $X^i$ will jump onto $X^\ell$; more precisely, it may happen that $\ell =U^{i_j}_j $ for some $j> b_k$. Let $\tau'$ be the minimum of all such times. From the definition of $\mathcal L$ we easily infer that $0=b_1<b_2<\dotsc<b_k$ and $\tau_{b_k}\leq t$, so that $0=\tau_{b_1}<\dotsc<\tau_{b_k}\leq t<\tau'$. For $\tau_{b_m}\leq s<\tau_{b_{m+1}}$ with $1\leq m<k$ we let $H^\ell_t(s)=X^{a_m}_s$, and for $\tau_{b_k}\leq s\leq t$ we let $H^\ell_t(s)=X^{a_k}_s$. We will call $\{H^\ell_t(s), 0\leq s\leq t\}$ a \emph{dynamical historical process} (DHP) corresponding to $X^\ell_t$. Note that $H^\ell_t$ is defined for $1\leq \ell \leq n$ and $0\leq t < \tau_\infty$. The spine process will be defined below the statement of Theorem \ref{thm:uniquespine}. Roughly speaking, the spine is the unique DHP that extends from time 0 to time $\tau_\infty$. The existence and uniqueness of the spine was proved in \cite[Thm.~4]{GK} under very restrictive assumptions on the driving process $B$ and under the assumption that the lifetime $\tau_\infty$ is infinite. It was proved in \cite{BiBu} that the claim holds under minimal reasonable assumptions, that is, the strong Markov property of the driving process and the non-atomic character of the exit time distributions. \begin{theorem}\label{thm:uniquespine} Fix some $n\geq 2$, and suppose that $B$ satisfies assumptions (A1)-(A2) and $\mathbf X^n_0 \in \Lambda^n$, a.s. Then, a.s., there exists a unique infinite sequence $((a_1,b_1), (a_2,b_2), \dots)$ such that every finite initial subsequence of it is equal to $\mathcal L(X^i_s)$ for some $1\leq i \leq n$ and $s\geq 0$. \end{theorem} In the notation of the theorem, we define the spine of $\mathbf X^n$ by $J^n(s) = X^{a_m}_s$ for $\tau_{b_m}\leq s<\tau_{b_{m+1}}$, $m\geq 1$. We will write $\chi(s) = a_m$ for $\tau_{b_m}\leq s<\tau_{b_{m+1}}$. \subsection{Brownian motion-driven Fleming-Viot process}\label{de2.1} From now on we will assume that the driving process $B$ is Brownian motion in $\mathbb R^d$. We will assume that $\Lambda\subset \mathbb R^d$ is an open bounded connected Lipschitz domain with the Lipschitz constant less than 1. This means that every point in $\partial \Lambda$ has a neighborhood where $\partial \Lambda$ can be represented as the graph of a Lipschitz function with the Lipschitz constant less than 1 in some orthonormal coordinate system. Under these assumptions $\tau_\infty= \infty$, a.s. (see \cite{BBF, GK}). We mention parenthetically that if the driving process is Brownian motion, $\Lambda$ is a polytope and $n=2$ then we also have $\tau_\infty= \infty$, a.s. (see \cite{BBF}).
However, it was proved in \cite{BBP} that $\tau_\infty < \infty$, a.s., for every $n$, for some Fleming-Viot processes driven by one-dimensional diffusions. \subsection{Dynamical historical process as a Fleming-Viot process}\label{DHP} Let $C([0,t],\Gamma)$ denote the space of continuous functions with values in $\Gamma$, with the supremum norm. For a function $f: C([0,t],\Gamma)\to \mathbb R$, let \begin{align}\label{m29.4} \mathcal H^n_t(f)&=\frac{1}{n}\sum_{k=1}^n f(H^k_t). \end{align} Let $\mu_n=\frac 1 n\sum_{k=1}^n \delta_{X^k_0}$, i.e., $\mu_n$ denotes the empirical distribution of $\mathbf X_0$. Recall definition \eqref{m29.1} and let \begin{align}\label{m29.3} \widetilde \P_t^\mu(A) = \P\left(\{B_s, 0\leq s \leq t\} \in A\mid \tau_\Lambda > t\right), \end{align} assuming that the distribution of $B_0$ is $\mu$. In the case when $\mu=\mu_n$, we will write $\widetilde \P_t$ instead of $\widetilde \P_t^{\mu_n}$. The corresponding expectations will be denoted $\widetilde \E_t$ and $\widetilde \E_t^\mu$. We will write $\widetilde\E_t(f )$ instead of $\widetilde\E_t(f(\{B_s, 0\leq s \leq t\}) )$. The following theorem is a corollary of \cite[Thm.~2.2]{Villemonais13}. \begin{theorem}\label{a3.2} For $t\geq 0$ and any measurable function $f: C([0,t],\Gamma)\to \mathbb R$ with $\|f\|_\infty\leq 1$, \begin{align*} \E \left|\mathcal H_t^n(f) -\widetilde\E_t(f )\right| \leq 2\left(1+\sqrt{2}\right) \left(\E\left( \P_{\mu_n} (\tau_\Lambda > t)^{-2}\right)\right)^{1/2} n^{-1/2}, \end{align*} where $\P_{\mu_n}$ represents the distribution of the driving process $B$ with the initial distribution $\mu_n$. \end{theorem} \begin{proof} It has been shown in the proof of \cite[Thm. 4.2]{BiBu} that the DHP can be identified with a space-time time-homogeneous Fleming-Viot process. Hence, our theorem follows directly from \cite[Thm. 2.2]{Villemonais13}. \end{proof} \subsection{Conditioned Brownian motion} A major monograph discussing conditioned Brownian motion is \cite{Doob}. The topic and the book are rather technical so the reader may find the presentation of the basic facts about conditioned Brownian motion and conditioned space-time Brownian motion in the introduction of \cite{BS} more accessible. Recall that $\Lambda\subset \mathbb R^d$ is a bounded Lipschitz domain. Let $\varphi>0$ denote the first eigenfunction of $(-\frac 1 2) \Delta$, where $\Delta$ denotes the Dirichlet Laplacian in $\Lambda$. Let $\lambda>0$ be the corresponding eigenvalue. Then the space-time Brownian motion $(B_t,t)$ conditioned by the parabolic function $h(x,t)= e^{\lambda t} \varphi(x)$ stays in $\Lambda \times \mathbb R$ forever. The spatial component of this process can be considered to be ``Brownian motion conditioned to stay in $\Lambda$ forever'' because it is the weak limit, as $t\to \infty$, of Brownian motions conditioned not to exit $\Lambda$ in $[0,t]$ (see \cite{Pin}). We will use $\widehat \P^\mu$ to denote the distribution of Brownian motion conditioned to stay in $\Lambda$ forever, with the initial distribution $c \varphi(x)\mu(dx)$, where $c>0$ is the normalizing constant. \section{Weak convergence of spines}\label{sec:main} Recall that we assume that the driving process $B$ is Brownian motion and $\Lambda\subset \mathbb R^d$ is an open bounded connected Lipschitz domain with the Lipschitz constant less than 1. Our main result is as follows. \begin{theorem}\label{a1.7} Suppose that $\mu$ is a probability measure supported in a set $\Lambda_1\subset \Lambda$ such that $\dist(\Lambda_1, \Lambda^c) >0$.
Consider a sequence of Fleming-Viot processes $\mathbf X^n$ in $\Lambda$ driven by Brownian motion. Assume that the measures $\mu_n=\frac 1 n\sum_{k=1}^n \delta_{X^k_0}$ are supported in $\Lambda_1$ and converge weakly to $\mu$ as $n\to \infty$. Then the distributions of the spines $\{J^n_t, t\geq 0\}$ converge to $\widehat \P^\mu$. \end{theorem} \begin{remark} (i) We believe that the theorem holds for all bounded Lipschitz domains, not only those with the Lipschitz constant less than 1. At present it is not known whether the lifetime $\tau_\infty$ is finite for the Fleming-Viot process driven by Brownian motion in any Lipschitz domain. However, it is implicit in arguments in \cite{BHM00} that for every Euclidean domain $\Lambda$ (not necessarily Lipschitz), $\tau_\infty\to\infty$ in distribution as the number $n$ of particles goes to infinity. This is enough to extend Theorem \ref{a1.7} to all Lipschitz domains. We omit the proof of this stronger result to avoid another layer of technicalities. We cannot get rid of the assumption of the Lipschitz character of the domain $\Lambda$ because it is an essential ingredient in Lemma \ref{a11.4}. (ii) We assumed that the measures $\frac 1 n\sum_{k=1}^n \delta_{X^k_0}$ are supported in $\Lambda_1\subset \Lambda$ such that $\dist(\Lambda_1, \Lambda^c) >0$ because, to apply Theorem \ref{a3.2} in our argument, we need the following bound: for each fixed $t>0$, \begin{align}\label{a8.1} \limsup_{n\to\infty} \left(\E\left( \P_{\mu_n} (\tau_\Lambda > t)^{-2}\right)\right)^{1/2} <\infty. \end{align} It is easy to see that for every fixed $t>0$, the function $x\to \P (\tau_\Lambda > t\mid B_0 = x)$ is continuous and strictly positive inside $\Lambda$. Hence, $\inf_{x\in \Lambda_1} \P (\tau_\Lambda > t\mid B_0 = x) >0$. This implies \eqref{a8.1}. The following example shows that \eqref{a8.1} fails for some natural initial distributions. Suppose that $\Lambda$ has a smooth boundary, $\mu$ is the uniform probability distribution in $\Lambda$ and $X^k_0$, $k=1,\dots,n$, are i.i.d. with the distribution $\mu$. By an argument similar to that in Lemma \ref{a11.4}, \begin{align*} \P(\tau_\Lambda > t \mid B_0 = x) \leq c_1 \dist(x,\partial \Lambda), \qquad x\in \Lambda, \end{align*} where $c_1$ depends on $\Lambda$ and $t$. It follows that \begin{align*} \E\left( \P_{\mu_n} (\tau_\Lambda > t)^{-2}\right) &= \frac 1 {|\Lambda|} \int_\Lambda \P(\tau_\Lambda > t \mid B_0 = x)^{-2} dx \geq \frac 1 {|\Lambda|} \int_\Lambda c_1^{-2} \dist(x,\partial \Lambda)^{-2} dx\\ &\geq c_2 \int_{0+} s^{-2} ds=\infty. \end{align*} Therefore, \eqref{a8.1} fails in this case. (iii) It is conceivable that for a fixed finite number of individuals, the spine of the Fleming-Viot process has the distribution of the process conditioned not to hit the boundary of the domain. Although this seems to be highly unlikely, proving that this is not the case does not seem to be easy. Apparently there are only two examples showing that this is not the case, one in \cite[Sect. 6]{BiBu} and another one in \cite{BuTa}. They both deal with Fleming-Viot processes with only $n=2$ individuals. In the first case, the process has a finite state space; in the second case, the driving process is Brownian motion in $[0,\infty)$. \end{remark} \begin{proof}[Proof of Theorem \ref{a1.7}] The proof is based on a large number of estimates that are relegated to Section \ref{sec:est}.
It will suffice to show that for every fixed $t_1>0$, the distributions of $\{J^n_t, 0\leq t\leq t_1\}$ converge weakly to $\widehat \P^\mu$ truncated in the obvious way to the interval $[0,t_1]$. To see this, note that convergence of finite dimensional distributions on $[0,\infty)$ is implied by the same type of convergence on all compact subintervals of $[0,\infty)$. It follows from Proposition 1.5 in Chapter XIII in \cite{RevuzYor99} that tightness on $[0,\infty)$ is implied by tightness on compact time intervals. So we fix an arbitrary $t_1>0$. Let $0 < \gamma<\infty$ be as in Lemma \ref{a11.4}. It is elementary to check that there exist $\alpha, \delta,\xi >0$ satisfying the following conditions, \begin{align} &\alpha \leq (1/2 -2\delta+3\gamma \delta/4 )/(\gamma+2d), \label{a3.3}\\ &0<\xi< 2\alpha -3\delta/2 , \label{a7.1}\\ &\alpha>\delta,\label{a7.8}\\ &-2\alpha +5\delta/4>-1/4+\alpha d -\gamma(-\alpha+3\delta/4). \label{a10.2} \end{align} Fix an arbitrarily small $\varepsilon>0$. Fix some $u=u(t_1,\varepsilon)>t_1$ which is greater than $s_1$ in Lemma \ref{a11.1}, greater than $s_1$ in Lemma \ref{a11.4}, and greater than $s_1$ in Lemma \ref{a14.3}. Then let $t_2 = u + j n^{-2\alpha + \delta}$, where $j\geq 0$ is the smallest integer such that $\P(C_j)\leq \varepsilon$ in the notation of Lemma \ref{a3.6}. Note that $j$ and, therefore, $t_2$ depend on $n$. Since $j\leq k_1$ in the notation of Lemma \ref{a3.6}, we have $t_2 \in[u,u+1]$. Let $t_3 = t_2 + n^{-2\alpha + \delta}$. For $z= (z_1, \dots, z_d)\in \mathbb R^d$ let \begin{align*} Q(z,r) = \left\{(y_1,\dots,y_d)\in \mathbb R^d: \max_{1\leq k \leq d} |y_k - z_k|\leq r\right\}. \end{align*} Let $\mathbf Q$ be the family of all cubes $Q=Q((z_1, \dots, z_d), n^{-\alpha}/2)$ such that every $z_k$ is an integer multiple of $n^{-\alpha}$, and \begin{align}\label{m29.5} \dist(Q, \Lambda^c) \geq 3 n^{-\alpha +3\delta /4}. \end{align} If $0< s_1 < s_2$ and $\omega\in C[0,s_2]$ then $\omega|_{s_1}\in C[0,s_1]$ will denote the truncation of $\omega$ to the interval $[0,s_1]$. Fix a continuous non-negative function $f:C[0,\infty) \to \mathbb R$ with $\|f\|_\infty \leq 1$, which depends only on the values of the process on $[0,t_1]$, i.e., if $\omega', \omega'' \in C[0,\infty)$ and $\omega'|_{t_1} = \omega''|_{t_1}$ then $f(\omega') = f(\omega'')$. In the same spirit, we can apply $f$ to $\omega\in C[0,s]$ for any $s\geq t_1$. Assume that $\widetilde \E_{t_2} (f) >0$ and, therefore, $\widetilde \E_{t_1} (f) >0$. For $Q\in \mathbf Q$, let $G_Q=\{B_{t_2} \in Q\}$ and $f_Q = f\mathbbm{1}_{G_Q}$. Recall definition \eqref{m29.4} and let \begin{align}\label{m29.10} A_1&=\bigcap_{Q\in \mathbf Q}\left\{\left|\mathcal H_{t_2}^n(\mathbbm{1}_{G_Q}) - \widetilde\P_{t_2}(G_Q) \right| \leq n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta}\right\},\\ A_2&=\bigcap_{Q\in \mathbf Q}\left\{\left|\mathcal H_{t_2}^n(f_Q) - \widetilde\E_{t_2}(f_Q )\right| \leq n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta}\right\}.\label{m31.1} \end{align} For a cube $Q\in\mathbf Q$, let \begin{align}\label{a14.4} \mathcal{M}_Q&=\left\{j: X^{j}_{t_2} \in Q \text{ and } X^{j}\text{ does not experience branching in } [t_2,t_3]\right\},\\ \mathcal{M}_Q^c&= \left\{j: X^{j}_{t_2} \in Q \text{ and } X^{j}\text{ experiences branching in } [t_2,t_3]\right\},\notag\\ \mathcal{M}_Q^+&=\mathcal{M}_Q \cup \mathcal{M}_Q^c.\notag \end{align} Let $|\mathcal{M}_Q|, |\mathcal{M}_Q^c|$ and $|\mathcal{M}_Q^+|$ denote the cardinalities of $\mathcal{M}_Q,\mathcal{M}_Q^c$ and $\mathcal{M}_Q^+$.
We will use the notation $\mathcal{M}_Q= \{i_1, \dots, i_N\}$. Note that $N=|\mathcal{M}_Q|$ is a random integer. Recall \eqref{a22.1} and let \begin{align}\label{a20.1} A_3&=\bigcap_{Q\in \mathbf Q} \left\{|\mathcal{M}_Q^c| /|\mathcal{M}_Q^+|\leq 2 n^{-2\alpha + 5\delta/4}\right\},\\ \label{a2.3} A_4 &= \bigcap_{1\leq i \leq n} \left\{\sup_{s,t\in[t_2,t_3]}\left |B^{i}_s-B^{i}_t\right| < 2 n^{-\alpha +3\delta /4}\right\},\\ A_5 &=A_1\cap A_2\cap A_3\cap A_4. \end{align} Let \begin{align*} \mathcal{F}_t = \sigma\{\mathbf X_s,0\leq s\leq t\},\qquad \mathcal{F}'_t = \sigma(\mathbf X_ t),\qquad \mathcal{F}_t^+ = \sigma\{\mathbf X_s, s\geq t\}. \end{align*} Let the random variable $k_*$ be the largest integer with $\tau_{k_*} \leq t_3$. Let $\mathcal{G}_1$ be the $\sigma$-field generated by $\{\mathbf X_t, 0\leq t \leq t_2\}$, $\set{U_k^i,\, 1\leq i\leq n,\, 1\leq k\leq k_*}$, and by $\{X^k_t, t_2\leq t \leq t_3\}$, $k\notin \bigcup_{Q\in\mathbf Q} \mathcal{M}_Q$. The $\sigma$-fields $\mathcal{G}_1$ and $\mathcal{F}_{t_3}^+$ are independent given $\mathcal{F}_{t_3}'$. Note that $A_1,A_2,A_3 \in \mathcal{G}_1$. Let $\mathcal{G}_2$ be the $\sigma$-field generated by $\mathcal{G}_1$ and $A_4$. Let $\mathcal{G}_3$ be the smallest $\sigma$-field containing $\mathcal{G}_2$ and $\mathcal{F}_{t_3}^+$. In view of \eqref{m29.5}, if $j\in\mathcal{M}_Q$ and $A_4$ holds then $X^{j}$ does not hit $\partial \Lambda$ in the time interval $[t_2, t_3]$ and, therefore, it does not jump during this interval. Thus, on the event $A_5$, the joint conditional distribution of $\left\{ X^{i}_t - X^i_{t_2}, t_2\leq t \leq t_3\right\}$, $i\in\mathcal{M}_Q$, given $\mathcal{G}_2$ is that of independent Brownian motions $\left\{ B^{i}_t-B^i_{t_2}, t_2\leq t \leq t_3\right\}$, $i\in\mathcal{M}_Q$, conditioned by \begin{align*} \bigcap_{i\in\mathcal{M}_Q} \left\{\sup_{s,t\in[t_2,t_3]}\left |B^{i}_s-B^{i}_t\right| <2 n^{-\alpha +3\delta /4}\right\}. \end{align*} Let \begin{align*} \Lambda'=\left\{(x^1,x^2,z^1,z^2): x^1,x^2\in Q, z^1,z^2\in\Lambda, |x^m-z^\ell| \leq n^{-\alpha +3\delta /4} \text{ for }m,\ell=1,2\right\}. \end{align*} Since $A_5 \subset A_4$, by combining (i) and (ii) in Lemma \ref{a2.5}, we obtain the following estimate for the Radon-Nikodym derivative on $\Lambda'$, for any $ j,k \in \mathcal{M}_Q$, and sufficiently large $n$, \begin{align}\label{m29.7} &\frac{\P\left( \{B^{j}_{t_3} \in dz^1,B^{k}_{t_3} \in dz^2 , B^{j}_{t_2} \in d x^1, B^{k}_{t_2} \in d x^2\} \cap {A_5} \mid \mathcal{G}_2 \right)} {\P\left( \{B^{k}_{t_3} \in dz^1,B^{j}_{t_3} \in dz^2 , B^{j}_{t_2} \in d x^1, B^{k}_{t_2} \in d x^2 \} \cap {A_5}\mid \mathcal{G}_2\right) }\\ &\qquad\leq \exp\left( 2\sqrt{d} n^{ -\delta/4}\right) \frac{1+ \exp\left( - c_1 n^{\delta/2}\right)} {1- \exp\left( - c_1 n^{\delta/2}\right)} \leq \exp\left( 4\sqrt{d} n^{ -\delta/4}\right). \notag \end{align} The left hand side of the above formula represents a Radon-Nikodym derivative. This notation will be used throughout the paper. We hope that this informal notation will be easy to grasp and will not cause confusion. Consider $x^1, \dots, x^N \in Q$ and $z^1, \dots,z^N \in \Lambda$ such that $|x^i-z^j| \leq n^{-\alpha +3\delta /4} $ for all $i$ and $j$.
By \eqref{m29.7}, for any $j_1,j_2\in \mathcal{M}_Q$, and sufficiently large $n$, \begin{align}\notag &\frac{\P\left( \{B^{j}_{t_2} \in d x^j, j \in\mathcal{M}_Q ; B^{j_1}_{t_3} \in dz^{j_2}; B^{j_2}_{t_3} \in dz^{j_1}; B^{j}_{t_3} \in dz^j, j \in\mathcal{M}_Q\setminus\{ j_1, j_2\} \} \cap {A_5} \mid \mathcal{G}_3 \right)} {\P\left( \{B^{j}_{t_2} \in d x^j, j \in\mathcal{M}_Q ; B^{j}_{t_3} \in dz^j, j \in\mathcal{M}_Q\} \cap {A_5} \mid\mathcal{G}_3\right) }\\ &\qquad\leq \exp\left( 4\sqrt{d} n^{ -\delta/4}\right). \label{m29.8} \end{align} Consider $z^1, \dots,z^N \in \Lambda$ such that $|x-z^j| \leq n^{-\alpha +3\delta /4} $ for all $j$ and $x\in Q$. Let \begin{align}\label{m31.5} A_6=\left\{ \bigcup_{j\in\mathcal{M}_Q}\left\{X^{j}_{t_3}\right\} =\left\{z^1, \dots,z^N\right\} \right\} \end{align} and note that the event concerns the equality of two unordered sets. If $A_6$ holds then for $1\leq m\leq N$, let $k(m)$ be such that $X^{k(m)}_{t_3} = z^m$. Let $A_7 = A_5\cap A_6$. It follows from \eqref{m29.8} that on $A_7$ the distribution of $H^{k(m)}_{t_3}(t_2)$ conditioned on $\mathcal{G}_2$ is almost uniform on $\bigcup_{j\in\mathcal{M}_Q}\left\{X^{j}_{t_2}\right\}$ up to a factor of $ \exp\left( 4\sqrt{d} n^{ -\delta/4}\right)$. Hence, for any $1\leq m_1,m_2\leq N$ and $ j_1,j_2 \in \mathcal{M}_Q$, and sufficiently large $n$, \begin{align}\label{a19.4} \frac {\P\left(\left\{H^{k(m_1)}_{t_3}(t_2) = X^{j_1}_{t_2} \right\}\cap A_7 \mid \mathcal{G}_3\right)} {\P\left(\left\{H^{k(m_2)}_{t_3}(t_2) = X^{j_2}_{t_2} \right\}\cap A_7 \mid \mathcal{G}_3\right)} \leq \exp\left( 8\sqrt{d} n^{ -\delta/4}\right), \end{align} so integrating over all sequences $(z^1, \dots, z^N)$ in the definition \eqref{m31.5} of $A_6$, we obtain for all $1\leq m\leq N$, $ j \in \mathcal{M}_Q$ and sufficiently large $n$, that \begin{align}\notag &\frac 1 {|\mathcal{M}_Q|} \exp\left(- 8\sqrt{d} n^{ -\delta/4}\right) \mathbbm{1}_{A_5} \\ &\qquad\leq \P\left(\left\{H^{k(m)}_{t_3}(t_2) = X^{j}_{t_2} \right\}\cap A_5 \mid \mathcal{G}_3\right) \leq \frac 1 {|\mathcal{M}_Q|} \exp\left( 8\sqrt{d} n^{ -\delta/4}\right) \mathbbm{1}_{A_5}.\label{a24.2} \end{align} Suppose that $R_Q$ is a $ \mathcal{G}_3$-measurable random variable taking values in $\mathcal{M}_Q$. Then, by \eqref{a19.4}, for any $1\leq m_1,m_2\leq N$ and $ j_1,j_2 \in \mathcal{M}_Q$, and sufficiently large $n$, \begin{align}\label{jy31.1} \frac {\P\left(\left\{H^{k(m_1)}_{t_3}(t_2) = X^{j_1}_{t_2} \right\}\cap A_7 \mid R_Q=k(m_1),\mathcal{G}_3\right)} {\P\left(\left\{H^{k(m_2)}_{t_3}(t_2) = X^{j_2}_{t_2} \right\}\cap A_7 \mid R_Q=k(m_1), \mathcal{G}_3\right)} \leq \exp\left( 8\sqrt{d} n^{ -\delta/4}\right). \end{align} In the following calculation, the first inequality follows from \eqref{jy31.1}. The second inequality follows from \eqref{a24.2}.
For any $1\leq m_1 \leq N$, \begin{align*} &\P\left(\left\{H^{R_Q}_{t_3}(t_2) = X^{j_1}_{t_2}\right\}\cap A_5 \mid \mathcal{G}_3\right)\\ & = \sum_{1\leq m \leq N} \P\left(\left\{H^{R_Q}_{t_3}(t_2) = X^{j_1}_{t_2}\right\}\cap A_5 \mid R_Q=k(m), \mathcal{G}_3\right) \P(R_Q =k(m)\mid \mathcal{G}_3)\\ & = \sum_{1\leq m \leq N} \P\left(\left\{H^{k(m)}_{t_3}(t_2) = X^{j_1}_{t_2}\right\}\cap A_5 \mid R_Q=k(m), \mathcal{G}_3\right) \P(R_Q =k(m)\mid \mathcal{G}_3)\\ & = \sum_{1\leq m \leq N} \P\left(\left\{H^{k(m)}_{t_3}(t_2) = X^{j_1}_{t_2}\right\}\cap A_5 \mid \mathcal{G}_3\right) \P(R_Q =k(m)\mid \mathcal{G}_3)\\ & \geq \sum_{1\leq m \leq N} \P\left(\left\{H^{k(m_1)}_{t_3}(t_2) = X^{j_1}_{t_2}\right\}\cap A_5 \mid \mathcal{G}_3 \right)\exp\left(- 8\sqrt{d} n^{ -\delta/4}\right) \times\\ &\qquad \times \P(R_Q =k(m)\mid \mathcal{G}_3)\\ & = \P\left(\left\{H^{k(m_1)}_{t_3}(t_2) = X^{j_1}_{t_2}\right\}\cap A_5 \mid \mathcal{G}_3 \right)\exp\left(- 8\sqrt{d} n^{ -\delta/4}\right) \sum_{1\leq m \leq N} \P(R_Q =k(m)\mid \mathcal{G}_3)\\ & = \P\left(\left\{H^{k(m_1)}_{t_3}(t_2) = X^{j_1}_{t_2}\right\}\cap A_5 \mid \mathcal{G}_3 \right)\exp\left( -8\sqrt{d} n^{ -\delta/4}\right)\\ & \geq \frac 1 {|\mathcal{M}_Q|}\exp\left(- 16\sqrt{d} n^{ -\delta/4}\right) \mathbbm{1}_{A_5}. \end{align*} It follows from this and the definition \eqref{m29.4} that \begin{align}\label{a24.1} \E&\left(\mathcal H_{t_2}^n(\mathbbm{1}_{G_Q}) f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\mid \mathcal{G}_3\right)\\ &= \sum_{j\in \mathcal{M}_Q} \E\left(\frac {|\mathcal{M}_Q^+|} n f\left(H^{j}_{t_3}\right) \mathbbm{1}_{A_5}\mid \mathcal{G}_3\right) \P\left(\left\{H^{R_Q}_{t_3}(t_2) = X^{j}_{t_2}\right\}\cap A_5 \mid \mathcal{G}_3\right) \notag \\ &\geq \frac {|\mathcal{M}_Q|}n \sum_{j\in \mathcal{M}_Q} \E\left( f\left(H^{j}_{t_3}\right) \mathbbm{1}_{A_5}\mid \mathcal{G}_3\right) \frac 1 {|\mathcal{M}_Q|}\exp\left(- 16\sqrt{d} n^{ -\delta/4}\right) \mathbbm{1}_{A_5} \notag \\ &= \frac 1n \sum_{j\in \mathcal{M}_Q} \E\left( f\left(H^{j}_{t_3}\right) \mathbbm{1}_{A_5}\mid \mathcal{G}_3\right) \exp\left(- 16\sqrt{d} n^{ -\delta/4}\right) . \notag \end{align} There is no indicator $\mathbbm{1}_{A_5}$ at the end of the last line because it is implicitly included in the conditional expectation, since $A_5$ is $\mathcal{G}_3$-measurable. One of the major steps in the proof will be a bound for $\E\left( f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\right)$. For that we will need the following estimate. To see why this estimate holds, recall the definition \eqref{m29.4}, the fact that $A_5\subset A_2$, and \eqref{m31.1} to see that \begin{align}\label{a21.1} &\left(\widetilde\E_{t_2}(f_Q )- n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \P(A_5)\\ &= \E\left(\left(\widetilde\E_{t_2}(f_Q )- n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \mathbbm{1}_{A_5}\right) \leq \E\left(\mathcal H_{t_2}^n(f_Q)\mathbbm{1}_{A_5}\right) \notag \\ &= \frac 1n \E\left(\sum_{i\in\mathcal{M}_Q\cup \mathcal{M}_Q^c} f\left(H^{i}_{t_2}\right)\mathbbm{1}_{A_5}\right). 
\notag \end{align} By the symmetry of the uniform distribution of each $U^i_k$, every $i\in\mathcal{M}_Q^+$ has the same chance to be in $\mathcal{M}_Q$, so using the fact that $A_5 \subset A_3$ and \eqref{a20.1}, \begin{align}\label{a21.2} & \frac 1n \E\left(\sum_{i\in\mathcal{M}_Q\cup \mathcal{M}_Q^c} f\left(H^{i}_{t_2}\right)\mathbbm{1}_{A_5}\right) =\E\left( \frac 1n \E\left(\sum_{i\in\mathcal{M}_Q\cup \mathcal{M}_Q^c} f\left(H^{i}_{t_2}\right)\mathbbm{1}_{A_5}\mid \mathcal{F}_{t_2}\right)\right)\\ &\qquad= \E\left(\frac 1n \E\left( \frac{|\mathcal{M}_Q^+|}{|\mathcal{M}_Q|} \sum_{i\in\mathcal{M}_Q} f\left(H^{i}_{t_2}\right)\mathbbm{1}_{A_5}\mid \mathcal{F}_{t_2} \right)\right) \notag \\ &\qquad\leq \E\left(\frac 1n \E\left( \sum_{i\in\mathcal{M}_Q} f\left(H^{i}_{t_2}\right)\mathbbm{1}_{A_5}\mid \mathcal{F}_{t_2}\right)\right) \left(1+3 n^{-2\alpha + 5\delta/4}\right)\notag\\ &\qquad= \frac 1n \E\left( \sum_{i\in\mathcal{M}_Q} f\left(H^{i}_{t_2}\right)\mathbbm{1}_{A_5}\right) \left(1+3 n^{-2\alpha + 5\delta/4}\right).\notag \end{align} It follows from the definition of $\mathcal{M}_Q$ and the fact that $f$ depends only on the values of the process in $[0,t_1] $ that \begin{align}\label{a21.3} &\frac 1n \E\left( \sum_{i\in\mathcal{M}_Q} f\left(H^{i}_{t_2}\right)\mathbbm{1}_{A_5}\right) \left(1+3 n^{-2\alpha + 5\delta/4}\right) =\frac 1n \E\left( \sum_{i\in\mathcal{M}_Q} f\left(H^{i}_{t_3}\right)\mathbbm{1}_{A_5}\right) \left(1+3 n^{-2\alpha + 5\delta/4}\right). \end{align} In the following calculation, the first inequality follows from \eqref{a24.1}. The second inequality follows from the fact that $A_5\subset A_1$ and \eqref{m29.10}. \begin{align}\label{a21.4} &\frac 1n \E\left( \sum_{i\in\mathcal{M}_Q} f\left(H^{i}_{t_3}\right)\mathbbm{1}_{A_5}\right) \left(1+3 n^{-2\alpha + 5\delta/4}\right)\\ &=\E\left(\frac 1n \E\left( \sum_{i\in\mathcal{M}_Q} f\left(H^{i}_{t_3}\right)\mathbbm{1}_{A_5}\mid \mathcal{G}_3\right)\right) \left(1+3 n^{-2\alpha + 5\delta/4}\right) \notag\\ &\leq \E\left( \E\left( \mathcal H_{t_2}^n(\mathbbm{1}_{G_Q}) f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\mid \mathcal{G}_3\right)\right) \left(1+3 n^{-2\alpha + 5\delta/4}\right)\exp\left( 16\sqrt{d} n^{ -\delta/4}\right) \notag \\ &\leq \E\left( \E\left( \left(\widetilde\P_{t_2}(G_Q) + n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\mid \mathcal{G}_3\right)\right) \notag \\ &\qquad \times \left(1+3 n^{-2\alpha + 5\delta/4}\right)\exp\left( 16\sqrt{d} n^{ -\delta/4}\right) \notag \\ &= \left(\widetilde\P_{t_2}(G_Q) + n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \E\left( \E\left( f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\mid \mathcal{G}_3\right)\right) \notag \\ &\qquad \times \left(1+3 n^{-2\alpha + 5\delta/4}\right)\exp\left( 16\sqrt{d} n^{ -\delta/4}\right) \notag \\ &= \left(\widetilde\P_{t_2}(G_Q) + n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \E\left( f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\right) \notag \\ &\qquad \times \left(1+3 n^{-2\alpha + 5\delta/4}\right)\exp\left( 16\sqrt{d} n^{ -\delta/4}\right).
\notag \end{align} We combine \eqref{a21.1}-\eqref{a21.4} to obtain \begin{align*} &\left(\widetilde\E_{t_2}(f_Q )- n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \P(A_5)\\ &\qquad\leq \left(\widetilde\P_{t_2}(G_Q) + n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \E\left( f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\right) \notag \\ &\qquad\qquad \times \left(1+3 n^{-2\alpha + 5\delta/4}\right)\exp\left( 16\sqrt{d} n^{ -\delta/4}\right), \end{align*} and, therefore, \begin{align}\label{a21.5} &\E\left( f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\right)\\ &\qquad\geq \frac{\left(\widetilde\E_{t_2}(f_Q )- n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \P(A_5)} {\left(\widetilde\P_{t_2}(G_Q) + n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta} \right) \left(1+3 n^{-2\alpha + 5\delta/4}\right)\exp\left( 16\sqrt{d} n^{ -\delta/4}\right)}.\notag \end{align} By Corollary \ref{a11.6}, \begin{align*} \widetilde\P_{t_2}\left( G_Q \right) \geq c_1 n^{-\alpha d +\gamma(-\alpha+3\delta/4) } , \qquad \widetilde\E_{t_2}\left( f_Q \right) \geq c_2 n^{-\alpha d +\gamma(-\alpha+3\delta/4) } , \end{align*} where $c_2 = c_3 \widetilde\E_{t_2}\left( f \right)$. Thus \eqref{a7.1} and \eqref{a21.5} yield for large $n$, \begin{align}\label{a24.3} \E\left( f\left(H^{R_Q}_{t_3}\right) \mathbbm{1}_{A_5}\right) &\geq \frac{\widetilde\E_{t_2}\left( f_Q\right)(1-n^{ -\delta/4} )\P(A_5)} {\widetilde\P_{t_2}\left( G_Q \right)(1+n^{ -\delta/4}) \left(1+3 n^{-2\alpha + 5\delta/4}\right)\exp\left( 16\sqrt{d} n^{ -\delta/4}\right)}\\ &\geq \frac{\widetilde\E_{t_2}\left( f_Q\right)} {\widetilde\P_{t_2}\left( G_Q \right)} (1- n^{-\delta/8} )\P(A_5)\notag\\ &= \widetilde\E_{t_2}\left( f_Q \mid B_{t_2}\in Q\right) (1- n^{-\delta/8} )\P(A_5)\notag \\ &=\widetilde\E_{t_2}\left( f \mid B_{t_2}\in Q\right) (1- n^{-\delta/8} )\P(A_5). \notag \end{align} Suppose that $R$ is a $\mathcal{F}_{t_3}^+$-measurable random variable taking values in $\{1,\dots,n\} $. Let $A_8=\bigcup_{Q\in\mathbf Q} \{R \in \mathcal{M}_ Q\}$. It follows from \eqref{a24.3} that \begin{align*} \E\left( f\left(H^{R}_{t_3}\right) \mathbbm{1}_{A_5}\right) &\geq\sum_{Q\in\mathbf Q} \E\left( f\left(H^{R}_{t_3}\right) \mathbbm{1}_{A_5} \mid R\in \mathcal{M}_Q\right)\P(R\in \mathcal{M}_Q)\\ &\geq \widetilde\E_{t_2}\left( f \mid B_{t_2}\in Q\right) (1- n^{-\delta/8} )\P(A_5) \sum_{Q\in\mathbf Q} \P(R\in \mathcal{M}_Q)\\ &=\widetilde\E_{t_2}\left( f \mid B_{t_2}\in Q\right) (1- n^{-\delta/8} )\P(A_5) \P(A_8). \end{align*} By Lemma \ref{a11.1}, \begin{align*} \E\left( f\left(H^{R}_{t_3}\right) \mathbbm{1}_{A_5}\right) &\geq\widetilde\E_{t_2}\left( f \right) (1- n^{-\delta/8} )\P(A_5) \P(A_8)-\varepsilon, \end{align*} so \begin{align*} \E\left( f\left(H^{R}_{t_3}\right) \right) &\geq \E\left( f\left(H^{R}_{t_3}\right) \mathbbm{1}_{A_5}\right) \geq\widetilde\E_{t_2}\left( f \right) (1- n^{-\delta/8} )\P(A_5) \P(A_8)-\varepsilon\\ &=\widetilde\E_{t_2}\left( f \right) -\widetilde\E_{t_2}\left( f \right)(1- (1- n^{-\delta/8} )\P(A_5) \P(A_8))-\varepsilon\\ &\geq\widetilde\E_{t_2}\left( f \right) -(1- (1- n^{-\delta/8} )\P(A_5) \P(A_8))-\varepsilon\\ &\geq\widetilde\E_{t_2}\left( f \right) -(1- \P(A_5) \P(A_8))- n^{-\delta/8}-\varepsilon\\ &\geq\widetilde\E_{t_2}\left( f \right) -\P(A_8^c)- \P(A_5^c)- n^{-\delta/8}-\varepsilon. 
\end{align*} We can prove in a similar way that \begin{align*} \E\left( f\left(H^{R}_{t_3}\right) \right) &\leq \widetilde\E_{t_2}\left( f \right)+ \P(A_8^c)+ \P(A_5^c)+ n^{-\delta/8}+\varepsilon, \end{align*} so \begin{align}\label{a21.7} \left|\E\left( f\left(H^{R}_{t_3}\right)\right) -\widetilde\E_{t_2}\left( f \right) \right| \leq \P(A_8^c)+ \P(A_5^c)+ n^{-\delta/8}+\varepsilon . \end{align} Recall the definition of $\chi$ following the statement of Theorem \ref{thm:uniquespine} and note that $H^{\chi(t_3)}_{t_3}(t) = J^n(t)$ for $t\in[0,t_3]$. Hence we can apply \eqref{a21.7} to $R=\chi(t_3)$ to obtain, \begin{align}\label{a1.1} \left|\E\left(f\left(J^n\right) \right) - \widetilde\E_{t_2}\left( f \right)\right| &\leq n^{-\delta/8} +\varepsilon + 2\left(1-\P\left(A_5\cap\left\{\chi(t_3) \in \bigcup_{Q\in\mathbf Q} \mathcal{M}_Q\right\}\right)\right). \end{align} Note that if $\chi(t_3) \notin \bigcup_{Q\in\mathbf Q} \mathcal{M}_Q$ then either $\dist (J^n_{t_2} ,\Lambda^c) \leq 4n^{-\alpha +3\delta /4}$ or the spine $J^n$ branches in $[t_2,t_3]$. Lemma \ref{a3.6} contains an upper bound on the probability of this event. By Lemmas \ref{a2.6}, \ref{a3.1}, \ref{a3.6} and \ref{a15.1}, \begin{align*} &\liminf_{n\to\infty} \P\left(A_5\cap\left\{\chi(t_3) \in \bigcup_{Q\in\mathbf Q} \mathcal{M}_Q\right\}\right)\\ &\qquad = \liminf_{n\to\infty} \P\left(A_1\cap A_2\cap A_3 \cap A_4\cap\left\{\chi(t_3) \in \bigcup_{Q\in\mathbf Q} \mathcal{M}_Q\right\}\right) \geq 1-2\varepsilon. \end{align*} This and \eqref{a1.1} imply that \begin{align*} \limsup_{n\to\infty} \left|\E\left(f\left(J^n\right) \right) - \widetilde\E_{t_2}\left( f \right)\right| &\leq 5\varepsilon. \end{align*} By Lemma \ref{a14.3}, \begin{align*} \limsup_{n\to\infty} \left|\E\left(f\left(J^n\right) \right) - \widehat\E^\mu\left( f \right) \right| &\leq 6\varepsilon. \end{align*} Since $\varepsilon>0$ is arbitrarily small, \begin{align*} \lim_{n\to\infty} \E\left(f(J^n) \right) = \widehat\E^\mu\left( f \right). \end{align*} The function $f$ is an arbitrary continuous non-negative function $f:C[0,\infty) \to \mathbb R$ with $\|f\|_\infty \leq 1$, which depends only on the values of the process on $[0,t_1]$, and $t_1>0$ is also arbitrary, so the theorem follows. \end{proof} \section{Estimates}\label{sec:est} We will use notation presented in Section \ref{review} and in the proof of Theorem \ref{a1.7}. \subsection{Large deviations and likelihood ratio} We will use a somewhat informal notation for Radon-Nikodym derivatives. It will make the statements easy to interpret. \begin{lemma}\label{a2.5} Suppose that $\alpha, \delta, t_2$ and $t_3$ are as in \eqref{a3.3}-\eqref{a10.2} and the paragraph following these conditions. (i) Suppose that $B$ is Brownian motion in $\mathbb R^d$. Let \begin{align*} F&=\left\{\sup_{s,t\in[t_2,t_3]} \left|B_{t} - B_{s}\right| < 2 n^{-\alpha +3\delta /4}\right\}. \end{align*} There exists $c_1$ such that for any $x$ and $z$ such that $|x-z| \leq n^{-\alpha +3\delta /4}$ and sufficiently large $n$, the following bounds hold for the Radon-Nikodym derivative, \begin{align*} 1- \exp\left( - c_1 n^{\delta/2}\right)\leq \frac {\P(B_{t_3} \in dz \mid F, B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x) } \leq 1+ \exp\left( - c_1 n^{\delta/2}\right). \end{align*} (ii) Suppose that $B'$ and $B''$ are independent Brownian motions in $\mathbb R^d$.
If $x^1,x^2 \in Q\in \mathbf Q$ and $|x^j-z^k| \leq n^{-\alpha +3\delta /4} $ for $j,k=1,2$ then \begin{align*} &\frac{\P(B'_{t_3} \in dz^2,B''_{t_3} \in dz^1 \mid B'_{t_2} = x^1, B''_{t_2} = x^2) } {\P(B'_{t_3} \in dz^1, B''_{t_3} \in dz^2 \mid B'_{t_2} = x^1, B''_{t_2} = x^2) } \leq \exp\left( 2\sqrt{d} n^{ -\delta/4}\right). \end{align*} \end{lemma} \begin{proof} (i) Standard estimates show that there exists $c_2$ such that for all $n\geq 2$, \begin{align}\notag \P(F^c) &\leq\P\left(\sup_{s,t\in[t_2,t_3]} \left|B_{t} - B_{s}\right| \geq 2 n^{-\alpha +3\delta /4}\right) \leq \exp\left( - c_2 n^{2(-\alpha +3\delta /4)}/ (t_3-t_2)\right)\\ &\qquad= \exp\left( - c_2 n^{2(-\alpha +3\delta /4)}/ n^{-2\alpha + \delta}\right) = \exp\left( - c_2 n^{\delta/2}\right).\label{a2.1} \end{align} Recall that $|x-z| \leq n^{-\alpha +3\delta /4} $. Let $\mathcal{B}(v,r)$ denote a ball with center $v$ and radius $r$. If $B_{t_2}=x$ then $\dist\left(z, \partial\mathcal{B}\left(B_{t_2}, 2 n^{-\alpha +3\delta /4}\right)\right) \geq n^{-\alpha +3\delta /4} $. Let $\tau$ be the hitting time of the sphere $\mathcal{S}:=\partial\mathcal{B}\left(B_{t_2}, 2 n^{-\alpha +3\delta /4}\right)$. The last estimate, \eqref{a2.1} and an application of the strong Markov property at time $\tau$ imply that \begin{align}\label{a2.2} &\P(B_{t_3} \in dz, F^c \mid B_{t_2} = x) \leq \P(\tau <t_3 \mid B_{t_2} = x) \sup_{y\in\mathcal{S}, s\in[0,t_3-t_2]} \P(B_{s} \in dz \mid B_{0} = y) \\ &\ \leq \P(F^c) \sup_{y\in\mathcal{S}, s\in[0,t_3-t_2]} \P(B_{s} \in dz \mid B_{0} = y) \leq \exp\left( - c_2 n^{\delta/2}\right) \P(B_{t_3} \in dz \mid B_{t_2} = x).\notag \end{align} We have \begin{align}\label{oc6.1} &\frac {\P(B_{t_3} \in dz \mid F, B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x)} = \frac{\P(B_{t_3} \in dz , F\mid B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x)\P( F\mid B_{t_2} = x)}\\ &\qquad= \frac{\P(B_{t_3} \in dz \mid B_{t_2} = x) -\P(B_{t_3} \in dz , F^c\mid B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x)\P( F\mid B_{t_2} = x)}\notag\\ &\qquad= \frac{1} {\P( F\mid B_{t_2} = x)}\left(1 -\frac{\P(B_{t_3} \in dz , F^c\mid B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x)}\right).\notag \end{align} This and \eqref{a2.1} yield \begin{align}\label{oc6.2} \frac {\P(B_{t_3} \in dz \mid F, B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x) } \geq 1 -\frac{\P(B_{t_3} \in dz , F^c\mid B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x)} \geq 1- \exp\left( - c_2 n^{\delta/2}\right). \end{align} It follows from \eqref{oc6.1} and \eqref{a2.2} that for some $c_3$ and sufficiently large $n$, \begin{align*} \frac {\P(B_{t_3} \in dz \mid F, B_{t_2} = x)} {\P(B_{t_3} \in dz \mid B_{t_2} = x) } &\leq \frac{1} {\P( F\mid B_{t_2} = x)} =\frac{1} {\P( F)} = \frac{1} {1-\P( F^c)}\\ & \leq \frac{1} {1-\exp\left( - c_2 n^{\delta/2}\right)} \leq 1+ \exp\left( - c_3 n^{\delta/2}\right). \end{align*} This estimate and \eqref{oc6.2} imply part (i) of the lemma. (ii) Suppose that $x^1,x^2 \in Q\in \mathbf Q$. Then $|x^1-x^2| \leq \sqrt{d} n^{-\alpha}$.
If in addition $|x^j-z^k| \leq n^{-\alpha +3\delta /4} $ for $j,k=1,2$ then \begin{align*} \notag &\frac{\P(B'_{t_3} \in dz^2,B''_{t_3} \in dz^1 \mid B'_{t_2} = x^1, B''_{t_2} = x^2) } {\P(B'_{t_3} \in dz^1, B''_{t_3} \in dz^2 \mid B'_{t_2} = x^1, B''_{t_2} = x^2) }(z^1,z^2) \notag \\ &\qquad=\frac {\exp\left( -|x^2-z^1|^2/(2(t_3-t_2)) \right)} {\exp\left( -|x^1-z^1|^2/(2(t_3-t_2)) \right)} \cdot \frac {\exp\left( -|x^1-z^2|^2/(2(t_3-t_2)) \right)} {\exp\left( -|x^2-z^2|^2/(2(t_3-t_2)) \right)} \notag \\ &\qquad\leq \frac {\exp\left( -(|x^2-x^1|-|x^1-z^1|)^2/(2(t_3-t_2)) \right)} {\exp\left( -|x^1-z^1|^2/(2(t_3-t_2)) \right)} \notag \\ &\qquad\qquad \times \frac {\exp\left( -(|x^1-x^2|-|x^2-z^2|)^2/(2(t_3-t_2)) \right)} {\exp\left( -|x^2-z^2|^2/(2(t_3-t_2)) \right)} \notag \\ &\qquad= \exp\left( -(|x^2-x^1|^2-2|x^2-x^1|\cdot|x^1-z^1|)/(2(t_3-t_2)) \right) \notag \\ &\qquad\qquad \times \exp\left( -(|x^1-x^2|^2-2|x^2-x^1|\cdot|x^2-z^2|)/(2(t_3-t_2)) \right) \notag \\ &\qquad\leq \exp\left( |x^2-x^1|\cdot|x^1-z^1|)/(t_3-t_2) \right) \exp\left( |x^2-x^1|\cdot|x^2-z^2|)/(t_3-t_2) \right) \notag \\ &\qquad \leq \exp\left(2\sqrt{d} n^{-\alpha} n^{-\alpha+3\delta/4} / n^{-2\alpha +\delta}\right) = \exp\left( 2\sqrt{d} n^{ -\delta/4}\right) \end{align*} \end{proof} \begin{lemma}\label{a2.6} $\lim_{n\to\infty} \P(A_4) = 1$. \end{lemma} \begin{proof} We use \eqref{a2.1} to see that \begin{align}\label{a2.4} \P(A_4^c) &\leq \sum_{1\leq j \leq n} \P \left(\sup_{s,t\in[t_2,t_3]}\left |B^{j}_s-B^{j}_t\right| \geq n^{-\alpha +3\delta /4}\right)\leq n \exp\left( - c_1 n^{\delta/2}\right). \end{align} Since $\delta>0$, the lemma follows. \end{proof} \subsection{Conditioned space-time Brownian motion} Let $\P_{x,y,s}$ denote the distribution of $\{B_t, 0 \leq t \leq s\}$ where $B$ is Brownian motion starting from $B_0=x$, conditioned by $\{B_s=y\}$, and conditioned to stay inside $\Lambda$ on the interval $[0,s]$. The distribution $\P_{x,y,s}$ can be thought of as the distribution of the space component of the space-time Brownian motion $(B_t,t)$ conditioned by the parabolic function $h$ in $\Lambda\times [0,s]$, equal to 0 everywhere on the boundary of $\Lambda\times [0,s]$ except for $(y,s)$. Such processes are known as $h$-processes. Let $p_t(x,y)$ denote the transition density for Brownian motion killed upon exiting $\Lambda$. Let $\lambda>0$ be the first eigenvalue for $(-\frac 1 2) \Delta$ with Dirichlet boundary conditions in $\Lambda$ and let $\varphi>0$ be the corresponding eigenfunction. A bounded Lipschitz domain is intrinsically ultracontractive (IU). We will need only one result on IU domains, cited below, so we will not define IU domains here; instead we ask the reader to consult, e.g., \cite{DS,banu}. It follows from \cite[(1.8)]{banu} that for any $\eta>0$ there exists $u$ such that for $s\geq u$ and $x,y\in \Lambda$, \begin{align}\label{a14.2} 1-\eta \leq \frac{ p_{s}(x,y)}{e^{-\lambda s} \varphi(x) \varphi(y)}\leq 1+\eta. \end{align} \begin{lemma}\label{a11.1} Fix any $t_1,\varepsilon>0$. There exists $s_1>t_1$ so large that for all $x,y_1,y_2\in \Lambda$ and $t_2\geq s_1$, \begin{align}\label{m28.1} 1-\varepsilon\leq \frac{d \P_{x,y_1,t_2}}{d \P_{x,y_2,t_2}} \leq 1+\varepsilon \end{align} on the $\sigma$-field $\mathcal{F}^B_{t_1}:=\sigma(B_t, 0\leq t \leq t_1)$. \end{lemma} \begin{proof} It follows from the theory of $h$-processes and the Markov property applied at $t_1$ that \begin{align}\label{a11.3} \left. 
\frac{d \P_{x,y_1,t_2}}{d \P_{x,y_2,t_2}} \right| _{\mathcal{F}^B_{t_1}} = \frac{ p_{t_2-t_1}(B_{t_1},y_1)}{ p_{t_2-t_1}(B_{t_1},y_2)}. \end{align} If we assume that $B$ is defined on the canonical probability space then we can identify $\{B_t, 0\leq t \leq t_2\}$ with $\{\omega_t, 0\leq t \leq t_2\}$, where $\omega \in C([0,t_2], \Lambda)$, and rewrite \eqref{a11.3} as \begin{align*} \left. \frac{d \P_{x,y_1,t_2}}{d \P_{x,y_2,t_2}} \right| _{\mathcal{F}^B_{t_1}}(\omega) = \frac{ p_{t_2-t_1}(\omega_{t_1},y_1)}{ p_{t_2-t_1}(\omega_{t_1},y_2)}. \end{align*} Combining \eqref{a14.2} with \eqref{a11.3} completes the proof. \end{proof} \begin{lemma}\label{a11.4} Let $\widetilde p_t(x,y)$ denote the transition density for Brownian motion conditioned to stay in $\Lambda$ on $[0,t]$. There exist $s_1$ and $c_1,c_2,\gamma\in(0,\infty)$ such that for all $t\geq s_1$ and $x,y\in \Lambda$, \begin{align}\label{a13.7} c_1 \dist(y, \Lambda^c)^\gamma \leq \widetilde p_t(x,y) \leq c_2 . \end{align} \end{lemma} Results of this type are well known but we could not find an exact reference for \eqref{a13.7}. \begin{proof}[Proof of Lemma \ref{a11.4}] Since $\Lambda$ is a bounded Lipschitz domain with the Lipschitz constant less than 1, we can find $\rho>0$, a finite number $k_1$ of points $x^k\in \partial \Lambda$, orthonormal coordinate systems $CS_k$ and Lipschitz functions $\psi_k:\mathbb R^{d-1}\to\mathbb R$ with the Lipschitz constant less than 1, satisfying the following conditions. For $y\in \partial \Lambda$ and $r>0$, let \begin{align*} \Gamma_k(y,r) = \{y+(x_1,\dots,x_d): x_1^2+\dots+x_{d-1}^2 \leq r^2, x_d^2\leq r^2\}, \qquad \text{ in } CS_k. \end{align*} Moreover, in $CS_k$, \begin{align*} &\Lambda\cap \Gamma_k(x^k,\rho) = \{(x_1,\dots,x_d)\in\Gamma_k(x^k,\rho): x_d > \psi_k(x_1,\dots,x_{d-1})\}, \qquad k=1,\dots,k_1,\\ &\{x\in\Lambda: \dist(x, \Lambda^c) \leq \rho/4\} \subset \bigcup_{1\leq k\leq k_1} \Gamma_k(x^k,\rho/2). \end{align*} Fix $CS_k$ and $y\in \partial \Lambda \cap \Gamma_k(x^k, \rho/2)$. Let ${\bf n}_k = (0,\dots,0,1)$ in $CS_k$. Let $L_y=\{y+a {\bf n}_k: a> 0\}$ and let $\mathcal{C}_{y,\alpha}$ be the open cone consisting of all half-lines with the endpoint at $y$ forming the angle less than $\alpha$ with $L_y$. Note that \begin{align*}\notag &\Gamma_k(y,\rho/2) \cap \Lambda \subset \Gamma_k(x^k,\rho)\cap \Lambda,\\ & \mathcal{C}_{y,\pi/8} \cap \Gamma_k(y,\rho/2) \subset \Lambda\cap \Gamma_k(y,\rho/2). \end{align*} It is well known (see, e.g., ``Application'' on page 192 in \cite{Burk}) that there exists a positive harmonic function $h_1(x)$, $x\in \mathcal{C}_{y,\pi/8}$, such that $h_1(x)=0$ for $x\in \partial \mathcal{C}_{y,\pi/8}$ and $h_1(x) = |x-y|^\gamma f(\theta)$ for some $\gamma>0$ and a function $f$, where $\theta$ is the angle between the line segment $\overline {0,x}$ and $L_y$. We can and will assume that $f(0)=1$. Let $h_2(x)$ be a positive harmonic function in $\Lambda\cap \Gamma_k(y,\rho/2)$, with the boundary values $h_2(x) = h_1(x)$ for $x\in \mathcal{C}_{y,\pi/8}\cap \partial \Gamma_k(y,\rho/2)\cap \Lambda$ and $h_2(x) = 0$ for $x\in \partial(\Lambda\cap \Gamma_k(y,\rho/2)) \setminus \mathcal{C}_{y,\pi/8}$. Then $h_2(x) \geq h_1(x)$ for $ x \in \partial(\mathcal{C}_{y,\pi/8}\cap \Gamma_k(y,\rho/2))$. It follows by the elliptic maximum principle that $h_2(x) \geq h_1(x)$ for $ x \in L_y \cap \Lambda\cap \Gamma_k(y,\rho/2)$.
Hence, \begin{align}\label{a13.1} &h_2(x) \geq |x-y|^\gamma, \qquad x\in L_y,\\ &h_2(x) \leq c_3 :=\sup \left\{h_1(x): x\in \mathcal{C}_{y,\pi/8}, |x-y| \leq 2\rho\right\}, \qquad x\in \Lambda\cap \Gamma_k(y,\rho/2).\label{a13.2} \end{align} Recall that $p_t(x,z)$ denotes the transition density for Brownian motion killed upon exiting $\Lambda$. Let $g(t,x) = h_2(x)$ for $x\in \Lambda\cap \Gamma_k(y,\rho/2)$ and $t\geq 0$. The functions $(z,t)\to p_t(x,z)$ and $(z,t)\to g(t,z) $ are parabolic, i.e., they are solutions to the heat equation and they have zero boundary values on $(\partial \Lambda\cap \Gamma_k(y,\rho/2))\times \mathbb R$. Hence, by \cite[Thm. 1.6]{FGS}, for $s>0$, there exist $c_4$ and $a>b_1>0$ depending only on $\Lambda$ and $s$, such that for $t\geq s$, $0<b<b_1$ and $x\in \Lambda$, \begin{align*} \frac{g(t,y+b{\bf n}_k)}{p_t(x,y+b{\bf n}_k)} \leq c_4\frac{g(t+2a^2,y+a{\bf n}_k)}{p_{t-2a^2}(x,y+a{\bf n}_k)}, \end{align*} so, using \eqref{a13.1}, \begin{align}\notag &p_t(x,y+b{\bf n}_k) \geq c_4^{-1} \frac{g(t,y+b{\bf n}_k)}{g(t+2a^2,y+a{\bf n}_k)} p_{t-2a^2}(x,y+a{\bf n}_k)\\ &\qquad= c_4^{-1} \frac{h_2(y+b{\bf n}_k)}{h_2(y+a{\bf n}_k)} p_{t-2a^2}(x,y+a{\bf n}_k) \geq c_4^{-1} \frac{b^\gamma}{h_2(y+a{\bf n}_k)} p_{t-2a^2}(x,y+a{\bf n}_k).\label{a13.3} \end{align} Let $\Lambda_a^c$ be the set of all points of the form $y+ a_1{\bf n}_k$ with $0<a_1<a$, where $y$ can be any point in $ \partial \Lambda \cap \Gamma_k(x^k, \rho/2)$ and $k$ can be any $1,\dots, k_1$. Let $\Lambda_a=\Lambda \setminus \Lambda_a^c$. By \eqref{a14.2}, for any $\eta>0$, sufficiently large $s_1$ and $t\geq s_1$, \begin{align}\label{a13.4} &p_{t-2a^2}(x,y+a{\bf n}_k) \geq p_{t}(x,y+a{\bf n}_k)/2,\\ &\inf_{z\in \Lambda_a} p_{t}(x,z) \geq (1-\eta)e^{-\lambda t} \varphi(x) \inf_{z\in \Lambda_a}\varphi(z).\label{a13.5} \end{align} Since $\dist(\Lambda_a, \Lambda^c) >0$, $c_5:=\inf_{z\in \Lambda_a}\varphi(z)>0$. This observation, \eqref{a13.2}, \eqref{a13.3}, \eqref{a13.4} and \eqref{a13.5} yield \begin{align}\label{a13.6} p_t(x,y+b{\bf n}_k) \geq c_4^{-1} \frac{b^\gamma}{c_3}(1-\eta)e^{-\lambda t} \varphi(x) c_5 = c_6 b^\gamma e^{-\lambda t} \varphi(x). \end{align} By \eqref{a14.2}, for sufficiently large $t$, \begin{align*} \int_\Lambda p_t(x,z)dz \leq 2 e^{-\lambda t} \varphi(x)\int_\Lambda \varphi(z)dz = c_7 e^{-\lambda t} \varphi(x) . \end{align*} It follows from this and \eqref{a13.6} that \begin{align*} \widetilde p_t(x,y+b{\bf n}_k)= \frac{p_t(x,y+b{\bf n}_k)} {\int_\Lambda p_t(x,z)dz} \geq \frac{ c_6 b^\gamma e^{-\lambda t} \varphi(x)} {c_7 e^{-\lambda t} \varphi(x)} = c_8 b^\gamma. \end{align*} This proves the lower bound in \eqref{a13.7} for $y\in \Lambda_a^c$ because $b \geq \dist(y+b{\bf n}_k,\Lambda^c) $. We can make the bound valid for all $y\in \Lambda$ by making $c_1>0$ smaller, if necessary. Since \begin{align*} \widetilde p_{t}(x,y) = \frac { p_{t}(x,y)}{\int_{\Lambda} p_{t}(x,z)dz}, \end{align*} it follows from \eqref{a14.2} that for any $\eta_1>0$ there exist $u$ and $c_8$ such that for $s\geq u$ and $x,y\in \Lambda$, \begin{align*} c_8(1-\eta_1) \leq \frac{\widetilde p_{s}(x,y)}{ \varphi(y) }\leq c_8(1+\eta_1). \end{align*} The upper bound in \eqref{a13.7} follows because $\sup_{y\in\Lambda} \varphi(y) <\infty$. \end{proof} \begin{corollary}\label{a11.6} Recall from \eqref{m29.3} that $\widetilde\P_{t_2}$ refers to the Wiener measure conditioned on staying inside $\Lambda$ up to $t_2$. We have assumed before \eqref{m29.10} that $\widetilde \E_{t_2} (f) >0$.
Assume that $t_2 - t_1 \geq s_1$ where $s_1$ is given in Lemma \ref{a11.4}. Then there exist $c_1$ and $c_2$ such that for every $Q\in\mathbf Q$, \begin{align}\label{a11.7} c_1 n^{-\alpha d+ \gamma (-\alpha+3\delta/4)} &\leq \widetilde\P_{t_2}(G_Q ) \leq c_2 n^{-\alpha d},\\ c_1 \widetilde \E_{t_2} (f) n^{-\alpha d+ \gamma (-\alpha+3\delta/4)} &\leq \widetilde\E_{t_2}(f_Q ) \leq c_2 \widetilde \E_{t_2} (f) n^{-\alpha d}. \label{a11.8} \end{align} \end{corollary} \begin{proof} The volume of $Q$ is $n^{-\alpha d}$. Thus \eqref{a11.7} follows from the assumption \eqref{m29.5} and Lemma \ref{a11.4}. By the Markov property applied at $t_1$, \begin{align*} \widetilde\E_{t_2}(f_Q ) = \widetilde \E_{t_2} (f) \widetilde \P_{t_2-t_1}^{B_{t_1}} (G_Q), \end{align*} so \eqref{a11.8} follows from the assumption \eqref{m29.5} and Lemma \ref{a11.4} applied with $s_1=t_2-t_1$. \end{proof} The next lemma follows essentially from the main theorem in \cite{Pin} but we need a specific order of quantifiers which does not seem to follow directly from that theorem. Recall that $\widehat \P^\mu$ denotes the distribution of Brownian motion conditioned to stay in $\Lambda$ forever, with the initial distribution $c \varphi(x)\mu(dx)$, where $c>0$ is the normalizing constant. \begin{lemma} \label{a14.3} For every $\varepsilon>0$ and $t_1>0$ there exists $s_1$ such that for every positive continuous function $f$ on $C[0,t_1]$ with $\|f\|_\infty \leq 1$ and $t\geq s_1$, \begin{align*} 1-\varepsilon \leq \liminf_{n\to\infty} \frac{\widetilde \E_t^{\mu_n}(f)}{\widehat \E^\mu(f)} \leq \limsup_{n\to\infty} \frac{\widetilde \E_t^{\mu_n}(f)}{\widehat \E^\mu(f)} \leq 1+\varepsilon. \end{align*} \end{lemma} \begin{proof} Let $\varphi>0$ denote the first eigenfunction of $(-\frac 1 2) \Delta$ in $\Lambda$ with Dirichlet boundary conditions and let $\lambda>0$ be the corresponding eigenvalue. The function $h(x,t)= e^{\lambda t} \varphi(x)$ is parabolic in $\Lambda\times (0,\infty)$. The transition density $\widehat p_t(x,y)$ for Brownian motion conditioned not to exit $\Lambda$ is given by, for $t>0$ and $x,y\in \Lambda$, \begin{align}\label{a13.9} \widehat p_t(x,y) = \frac 1 {h(x,0)} p_t(x,y) h(y,t) =e^{\lambda t} \frac{\varphi(y)}{\varphi(x)}p_t(x,y). \end{align} Under both $\widetilde \P_t$ and $\widehat \P $, the process $\{B_s,0\leq s \leq t_1\}$ is the space component of a space-time Brownian motion conditioned to exit $\Lambda\times (0,t_1)$ via $\Lambda \times \{t_1\}$, but with different exit distributions. We will show that for every $\varepsilon>0$ there exists $s_1$ such that for $t\geq s_1$ and $x,y,v,z\in \Lambda$, \begin{align}\label{a14.1} 1-\varepsilon \leq \frac{\widetilde \P_t(B_{t_1}\in dy \mid B_0=x, B_t = v)}{\widehat \P (B_{t_1} \in dy \mid B_0=x, B_t=z)} \leq 1+\varepsilon. \end{align} By \eqref{a13.9}, \begin{align*} \widetilde \P_t(B_{t_1}\in dy \mid B_0=x, B_t = v) &= \frac { p_{t_1}(x,y) p_{t-t_1}(y,v)}{ p_{t}(x,v)}dy,\\ \widehat \P (B_{t_1}\in dy \mid B_0=x, B_t = z) &= \frac {\widehat p_{t_1}(x,y)\widehat p_{t-t_1}(y,z)}{\widehat p_{t}(x,z)}dy\\ &= \frac {e^{\lambda t_1} \frac{\varphi(y)}{\varphi(x)} p_{t_1}(x,y) e^{\lambda (t-t_1)} \frac{\varphi(z)}{\varphi(y)}p_{t-t_1}(y,z)} {e^{\lambda t} \frac{\varphi(z)}{\varphi(x)} p_{t}(x,z)}dy\\ &= \frac { p_{t_1}(x,y) p_{t-t_1}(y,z)}{ p_{t}(x,z)}dy. \end{align*} Hence, \begin{align*} \frac{\widetilde \P_t(B_{t_1}\in dy \mid B_0=x, B_t = v)}{\widehat \P (B_{t_1} \in dy \mid B_0=x, B_t=z)} =\frac { p_{t-t_1}(y,v) p_{t}(x,z)}{ p_{t}(x,v)p_{t-t_1}(y,z)}. 
\end{align*} By \eqref{a14.2}, for any $\eta>0$ there exists $s_2$ such that for $t-t_1\geq s_2$ and $x,y,v,z\in \Lambda$, \begin{align*} \frac{\widetilde \P_t(B_{t_1}\in dy \mid B_0=x, B_t = v)}{\widehat \P (B_{t_1} \in dy \mid B_0=x, B_t=z)} \leq \frac{(1+\eta)^2}{(1-\eta)^2} \frac { e^{-\lambda(t-t_1)} \varphi(y) \varphi(v) e^{-\lambda t} \varphi(x) \varphi(z) }{ e^{-\lambda t} \varphi(x) \varphi(v) e^{-\lambda(t-t_1)} \varphi(y) \varphi(z)} =\frac{(1+\eta)^2}{(1-\eta)^2} . \end{align*} This proves the upper bound in \eqref{a14.1}. The lower bound can be proved in an analogous way. Suppose that $f$ is a positive continuous function on $C[0,t_1]$ with $\|f\|_\infty \leq 1$. It follows from the theory of $h$-processes (i.e., conditioned Brownian motion; see \cite{Doob}) that under both $\widetilde \P_t$ and $\widehat \P $, the process $\{B_s,0\leq s \leq t_1\}$ conditioned by $\{B_0=x, B_{t_1} = y\}$ has the same distribution. Hence, \eqref{a14.1} implies that for every $\varepsilon>0$ there exists $s_1$ such that for $t\geq s_1$ and $x,v,z\in \Lambda$, \begin{align* 1-\varepsilon \leq \frac{\widetilde \E_t(f \mid B_0=x, B_t = v)}{\widehat \E (f \mid B_0=x, B_t=z)} \leq 1+\varepsilon. \end{align*} Since $v,z\in \Lambda$ are arbitrary, for every $\varepsilon>0$ there exists $s_1$ such that for $t\geq s_1$ and $x\in \Lambda$, \begin{align* 1-\varepsilon \leq \frac{\widetilde \E_t(f \mid B_0=x)}{\widehat \E (f \mid B_0=x)} \leq 1+\varepsilon, \end{align*} and, therefore, \begin{align*} \widetilde \E_t^{\mu}(f) = \int \widetilde \E_t(f \mid B_0=x) \mu(dx) \leq (1+\varepsilon) \int \widehat \E_t(f \mid B_0=x) \mu(dx) =(1+\varepsilon)\widehat \E^\mu(f). \end{align*} This and the analogous lower bound for $\widetilde \E_t^{\mu}(f)$ yield for some $s_2$ and $t\geq s_2$, \begin{align}\label{a24.4} 1-\varepsilon \leq \frac{\widetilde \E_t^{\mu}(f)}{\widehat \E^\mu(f)} \leq 1+\varepsilon. \end{align} We have assumed in Theorem \ref{a1.7} that $\mu$ is supported in a set $\Lambda_1\subset \Lambda$ such that $\dist(\Lambda_1, \Lambda^c) >0$. It follows from \cite[Cor.~1]{CB} that if $x^k \in \Lambda_1$ and $x^k\to x^\infty\in\Lambda_1$ then $\widetilde \E_t(f\mid B_0=x^k) \to\widetilde \E_t(f\mid B_0=x^\infty) $. A standard coupling argument can be applied to construct processes $B^n$ with the same transition probabilities as those of $B$, on the same probability space, with the initial distributions $\mu_n$ for $B^n$ and $\mu$ for $B$, and such that $|B^n_0 - B_0|\to0$, a.s., as $n\to \infty$. These observations imply that $\lim_{n\to\infty} \widetilde \E_t^{\mu_n}(f)=\widetilde \E_t^{\mu}(f)$. Combined with \eqref{a24.4}, this proves the lemma. \end{proof} \subsection{Villemonais' estimate} Recall that we have assumed that $u$ is fixed and $t_2\leq u+1$. This easily implies that for some $c_1>0$ and all probability measures $\mu_n$ supported in $\Lambda_1$, \begin{align}\label{a11.2} \P_{\mu_n} (\tau_\Lambda > t_2) \geq c_1, \end{align} where $\P_{\mu_n}$ represents the distribution of the driving process $B$ with the initial distribution $\mu_n$. \begin{lemma}\label{a3.1} $\lim_{n\to\infty} \P(A_1\cap A_2) = 1$. 
\end{lemma} \begin{proof} By Theorem \ref{a3.2} and \eqref{a11.2}, \begin{align}\label{a3.4} &\P\left(\left|\mathcal H_{t_2}^n\left(\mathbbm{1}_{G_Q}\right) - \widetilde\P_{t_2}(G_Q )\right| \geq n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta}\right)\\ &\qquad\leq \E \left(\left|\mathcal H_{t_2}^n\left(\mathbbm{1}_{G_Q}\right) - \widetilde\P_{t_2}(G_Q )\right|\right) n^{\alpha d -\gamma(-\alpha+3\delta/4) +\delta} \notag \\ &\qquad\leq 2\left(1 + \sqrt{2}\right) c_1^{-1} n^{-1/2} n^{\alpha d -\gamma(-\alpha+3\delta/4) +\delta}.\notag \end{align} Recall from \eqref{a3.3} that we have assumed that $\alpha \leq (1/2 -2\delta+3\gamma \delta/4 )/(\gamma+2d)$. Hence, $-1/2+\alpha d-\gamma(-\alpha+3\delta/4) +\delta\leq -\alpha d -\delta$. This and \eqref{a3.4} show that for some $c_2$, \begin{align}\label{a3.5} &\P\left(\left|\mathcal H_{t_2}^n\left(\mathbbm{1}_{G_Q}\right) - \widetilde\P_{t_2}(G_Q )\right| \geq n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta}\right) \leq c_2 n^{-\alpha d -\delta} . \end{align} Since $\Lambda$ is bounded set, the number of $Q$ in $\mathbf Q$ is bounded by $c_3 n^{\alpha d}$. It follows from the definition \eqref{m29.10} of $A_1$ and \eqref{a3.5} that \begin{align} \P(A_1^c) &\leq \sum_{Q\in \mathbf Q} \P\left(\left|\mathcal H_{t_2}^n\left(\mathbbm{1}_{G_Q}\right) - \widetilde\P_{t_2}(G_Q )\right| \geq n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta}\right) \leq c_3 n^{\alpha d}c_2 n^{-\alpha d -\delta}\notag\\ &= c_3 c_2 n^{-\delta}.\label{a9.1} \end{align} Hence, $\lim_{n\to\infty} \P(A_1) = 1$. The proof for $A_2$ is completely analogous. \end{proof} \subsection{Paths close to the boundary and branching} \label{m31.6} Our next lemma is comprised of three different claims, corresponding to events $C^1_j, C^2_j$ and $C^3_j$ defined below. The first step of the proof is the only common aspect of the three claims. We believe that some ``unusual'' events are very unlikely to occur at an arbitrarily chosen fixed time $u$. This we cannot prove. But we will prove that for a fixed $u$, there is a time with this property in $[u,u+1]$. Consider any $u>0$, let $\Delta t = n^{-2\alpha + \delta}$, $k_1 = \lfloor 1/\Delta t\rfloor$ and $s_j= u+ j \Delta t$ for $ j=0,\dots, k_1$. Let $\widehat M_j$ be the number of jumps of $\mathbf X_n$ in the time interval $[s_j, s_{j+1}]$, i.e., the number of $i$ with $\tau_i\in [s_j, s_{j+1}]$. Recall that $J^n$ is the spine of $\mathbf X_n$. Let \begin{align*} C^1_j &= \{\dist (J^n_{s_j} ,\Lambda^c) \leq 4n^{-\alpha +3\delta /4}\},\\ C^2_j &= \{J^n \text{ branches in } [s_j, s_{j+1}]\},\\ C^3_j &= \left\{\widehat M_j \geq n^{1-2\alpha +5\delta/4 }\right\},\\ C_j &= C^1_j \cup C^2_j \cup C^3_j. \end{align*} \begin{lemma}\label{a3.6} For every $\varepsilon>0$ and $u>0$ there exists $n_1$ so large that for $n\geq n_1$ there exists $j \in\{0,\dots, k_1\}$ such that $\P(C_j)\leq \varepsilon$. \end{lemma} \begin{proof} \emph{Step 1}. Fix $\varepsilon\in(0,1)$ and $u>0$. Suppose that for a given $n$, there is no $j \in\{0,\dots, k_1\}$ such that $\P(C_j)\leq \varepsilon$. Then \begin{align*} \E\left( \sum_{1\leq j \leq k_1 } \mathbbm{1}_{C_j}\right) \geq \varepsilon k_1. \end{align*} Let \begin{align*} p=\P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j} \leq \varepsilon k_1/2\right) . \end{align*} Then \begin{align*} \E\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j}\right) \leq p\varepsilon k_1/2 + (1-p)k_1. 
\end{align*} Hence $\varepsilon k_1 \leq p\varepsilon k_1/2 + (1-p)k_1$ and, therefore, \begin{align*} &p \leq \frac {1-\varepsilon}{1-\varepsilon/2} <1. \end{align*} It follows that \begin{align* \P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j} > \varepsilon k_1/2\right) = 1-p \geq \frac {\varepsilon/2}{1-\varepsilon/2}>0. \end{align*} This implies that at least one of the following inequalities holds, \begin{align}\label{a3.7} &\P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^1} > \varepsilon k_1/6\right) \geq \frac {\varepsilon/6}{1-\varepsilon/2}>0,\\ \label{a3.8} &\P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^2} > \varepsilon k_1/6\right) \geq \frac {\varepsilon/6}{1-\varepsilon/2}>0,\\ &\P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^3} > \varepsilon k_1/6\right) \geq \frac {\varepsilon/6}{1-\varepsilon/2}>0. \label{a8.3} \end{align} It will suffice to show that each of these inequalities fails for large $n$. \medskip \emph{Step 2}. Recall that $\mathcal{B}(v,r)$ denotes a ball with center $v$ and radius $r$. Let $B$ be Brownian motion and \begin{align*} \widetilde C^1_j &= \{\dist (B_{s_j} ,\Lambda^c) \leq 4n^{-\alpha +3\delta /4}\}. \end{align*} Since $\Lambda$ is a bounded Lipschitz domain, it is easy to see that for some $c_1, c_2>0$ and every $x\in\Lambda$ there exists $z\in \Lambda^c$ such that $|z-x| \leq c_1\dist(x,\Lambda^c)$ and $\mathcal{B}( z, c_2 \dist(x,\Lambda^c))\subset \Lambda^c$. From here up to and including \eqref{a7.2}, $\P$ will denote the distribution of Brownian motion starting from $x$. By Brownian scaling, there exists $p_1>0$ such that for all $x$, \begin{align}\label{a3.9} &\P\left(\inf\{t>0: B_t \in \Lambda^c\} \leq \dist(x,\Lambda^c)^2\right)\\ &\qquad\geq \P\left(\inf\{t>0: B_t \in \mathcal{B}( z, c_2 \dist(x,\Lambda^c))\} \leq \dist(x,\Lambda^c)^2\right) = p_1.\notag \end{align} Let \begin{align*} j_0&= \inf\left\{m\geq 0: \dist( B_{s_m},\Lambda^c) \leq 4n^{-\alpha +3\delta /4}\right\},\\ j_{i+1}&= \inf\left\{m>j_i: \dist( B_{s_m},\Lambda^c) \leq 4n^{-\alpha +3\delta /4}, s_m\geq s_{j_i} + 16 n^{-2\alpha +3\delta/2}\right\}, \quad i\geq 0. \end{align*} By \eqref{a3.9} and the strong Markov property applied at $s_{j_i}$, for every $i$, \begin{align* &\P\left(\inf\{t>s_{j_i}: B_t \in \Lambda^c\} \leq s_{j_{i+1}}\right) \geq p_1. \end{align*} We apply the strong Markov property again to see that for $k\geq 0$, \begin{align}\label{a3.10} &\P\left( \inf\{t>s_0: B_t \in \Lambda^c\} > s_{j_{k+1}} \right)\\ &\qquad\leq\P\left( \bigcap_{i=0}^k \left\{ \inf\{t>s_{j_i}: B_t \in \Lambda^c\} > s_{j_{i+1}} \right\} \right) \leq (1- p_1)^{k+1}.\notag \end{align} If the event $\left\{ \sum_{1\leq j \leq k_1} \mathbbm{1}_{\widetilde C_j^1} > \varepsilon k_1/6\right\}$ occurred then $s_{j_\ell} \leq u+1$, where \begin{align*} \ell = \frac{(\varepsilon k_1/6) \Delta t}{ 16 n^{-2\alpha +3\delta/2}} = \frac{(\varepsilon \lceil n^{2\alpha - \delta} \rceil/6) n^{-2\alpha + \delta} }{ 16 n^{-2\alpha +3\delta/2}} \geq \frac 1 {96} \varepsilon n^{2\alpha -3\delta/2}. 
\end{align*} This and \eqref{a3.10} imply that \begin{align}\label{a7.2} &\P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{\widetilde C_j^1} > \varepsilon k_1/6,\ \inf\{t>s_0: B_t \in \Lambda^c\} > u+1 \right)\\ &\qquad \leq \P\left(s_{j_\ell} \leq u+1,\ \inf\{t>s_0: B_t \in \Lambda^c\} > u+1 \right)\notag\\ &\qquad \leq \P\left( \inf\{t>s_0: B_t \in \Lambda^c\} > s_{j_\ell} \right) \leq (1- p_1)^\ell \leq (1-p_1)^{\varepsilon n^{2\alpha -3\delta/2}/96}.\notag \end{align} We will apply methods and results from the proof of Theorem 1.3 in \cite[p. 688]{BHM00} but we will use different notation. Let $\Lambda_2$ be such that $\dist(\Lambda_1, \Lambda_2^c) >0$ and $\dist(\Lambda_2, \Lambda^c) >0$. Let $\H=\{k: n/4 \leq k\leq n\}$ and $\H^c=\{1,\dots,n\}\setminus \H$. Consider $c_3 >0$ and let $F=F(c_3)$ be the event that at least $c_3 n$ processes $X^k$ with $k\in \H^c$ stay within $\Lambda_2$ on the interval $[0,u+2]$. Recall that $X^k_0 \in \Lambda_1$ for all $k$, a.s. By the law of large numbers, there exists $c_3 >0$ such that \begin{align}\label{a7.4} \lim_{n\to\infty}\P(F)=1. \end{align} Informally, let $M_k$ be the number of branching points on the tree of descendants of $X^k$ on the interval $[0,u+2]$. Formally, let $M_k$ be the number of branching points on DHPs $H^\ell_{u+2}$ such that $H^\ell_{u+2}(0) = X^k_0$. Every branching point appears on two different DHPs but we count it only once. It follows from \cite[(2.11)]{BHM00} that for every $r>0$ there exists $c^*_r<\infty$ such that for every $k\in\H$ and sufficiently large $n$, \begin{align}\label{a7.6} \E (M_k^r\mid F) \leq c^*_r. \end{align} Note that \cite[(2.11)]{BHM00} is an upper bound for the $r$-th power of ``the total number of jumps on the tree of descendants of particle $m$'' defined just below \cite[(2.10)]{BHM00}. But the definition given in \cite{BHM00} makes it clear that ``jumps'' include branching points on the tree of descendants of $X^k$ defined earlier in this paragraph. Formula \cite[(2.11)]{BHM00} does not contain conditioning on $F$. The conditioning on $F$ is implicit on pages 688--691, as indicated on page 688 of \cite{BHM00}. Recall $\xi$ introduced in \eqref{a7.1}. Let $r$ be so large that $\xi r >1$. We have \begin{align*} &\P(M_k\geq n^\xi\mid F) = \P(M_k^r \geq n^{\xi r}\mid F) \leq \E(M_k^r \mid F) n^{-\xi r} \leq c^*_r n^{-\xi r}. \end{align*} If we let \begin{align* K = \bigcup_{ k \in\H} \{M_k\geq n^\xi\}. \end{align*} then \begin{align}\label{a7.5} &\P\left( K \mid F\right) \leq c^*_r n^{1-\xi r}. \end{align} We construct a Brownian motion $B^*$ killed on the boundary of $\Lambda$ as follows. Choose $k$ uniformly from $\H$. Follow $X^k$ from time 0 until the first time when $X^k$ exits $\Lambda$ or until the first branching point, whichever comes first. If the process exits $\Lambda$, stop. At a branching point, start following one of the two branches, with equal probabilities, with the random choice independent of $\mathbf X$. Then apply the inductive construction---follow the branch until it exits $\Lambda$ or the next branching point, whichever comes first. If the process exits $\Lambda$, stop. At a branching point, start following one of the branches, with equal probabilities, with the random choice independent of $\mathbf X$. Let $D^*$ be the event on the first line of \eqref{a7.2} but with random objects defined relative to $B^*$ in place of $B$ so that \begin{align}\label{a7.3} \P(D^*) \leq (1-p_1)^{\varepsilon n^{2\alpha -3\delta/2}/96}. 
\end{align} Let $D$ be the event that there is a DHP $H^k_{u+2}$ such that $H^k_{u+2}(0) = X^j_0$ for some $j\in\H$ and the event on the first line of \eqref{a7.2} holds for this DHP in place of $B$. If $K^c$ holds then there will be at most $n^\xi$ choices in the construction of $B^*$ on the interval $[0,u+2]$ so for every $k$ such that $H^k_{u+2}(0) = X^j_0$ for some $j\in\H$, \begin{align*} \P(B^*_t=H^k_{u+2}(t), 0\leq t \leq u+2)\geq (1/n) 2^{- n^\xi}. \end{align*} Hence, by \eqref{a7.3}, \begin{align*} \P(K^c \cap D ) &\leq n 2^{ n^\xi} (1-p_1)^{\varepsilon n^{2\alpha -3\delta/2}/96}\\ &= \exp\left( \log n + n^\xi\log 2 + n^{2\alpha -3\delta/2} \log (1-p_1)\varepsilon/96\right). \end{align*} This and our assumption \eqref{a7.1} that $0<\xi< 2\alpha -3\delta/2$ imply that \begin{align}\label{a7.7} \lim_{n\to \infty} \P(K^c \cap D ) &\leq \lim_{n\to \infty} \exp\left( \log n + n^\xi\log 2 + n^{2\alpha -3\delta/2} \log (1-p_1)\varepsilon/96\right)=0. \end{align} Let $\widehat\H=\{k: 1 \leq k\leq 3n/4\}$ and define $\widehat F, \widehat K$ and $\widehat D$ relative to $\widehat H$ in the same way as $F,K$ and $D$ were defined relative to $\H$. Then, by symmetry and \eqref{a7.4}, \eqref{a7.5} and \eqref{a7.7}, \begin{align}\label{n12.1} \lim_{n\to\infty}\P(\widehat F)=1,\quad \P\left( \widehat K \mid \widehat F\right) \leq c^*_r n^{1-\xi r},\quad \lim_{n\to \infty} \P(\widehat K^c \cap \widehat D )=0. \end{align} Note that \begin{align*} \left\{ \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^1} > \varepsilon k_1/6\right\} \subset D \cup \widehat D, \end{align*} so \begin{align* \P&\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^1} > \varepsilon k_1/6\right) \\ &\leq \P(K^c \cap D ) + \P(K\cap F)+\P(F^c) +\P(\widehat K^c \cap \widehat D ) + \P(\widehat K\cap \widehat F)+\P(\widehat F^c)\\ &\leq \P(K^c \cap D ) + \P(K\mid F)+\P(F^c) +\P(\widehat K^c \cap \widehat D ) + \P(\widehat K\mid \widehat F)+\P(\widehat F^c). \end{align*} Hence, by \eqref{a7.4}, \eqref{a7.5}, \eqref{a7.7} and \eqref{n12.1}, \begin{align* \lim _{n\to \infty} \P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^1} > \varepsilon k_1/6\right) =0, \end{align*} contradicting \eqref{a3.7}. This completes the proof that \eqref{a3.7} fails for all large $n$. \medskip \emph{Step 3}. By \eqref{a7.6}, for $\delta >0$ and all $k\in\H$, \begin{align*} \P(M_k \geq n^{2\alpha-2\delta}\mid F) \leq \E (M_k^r\mid F) /n^{r(2\alpha-2\delta)}\leq c_r^* n^{-r(2\alpha-2\delta)}, \end{align*} so \begin{align*} \P\left(\bigcup_{k\in\H}\{M_k \geq n^{2\alpha-2\delta}\}\mid F\right) \leq c_r^* n^{1-r(2\alpha-2\delta)}. \end{align*} By \eqref{a7.8}, $2\alpha-2\delta>0$ so we can find $r$ so large that $1-r(2\alpha-2\delta) <0$. It follows that \begin{align*} \lim_{n\to\infty} \P\left(\bigcup_{k\in\H}\{M_k \geq n^{2\alpha-2\delta}\}\mid F\right) =0. \end{align*} By \eqref{a7.4}, \begin{align* \lim_{n\to\infty} \P\left(\bigcup_{k\in\H}\{M_k \geq n^{2\alpha-2\delta}\}\right) =0. \end{align*} One can prove in the same way that \begin{align* \lim_{n\to\infty} \P\left(\bigcup_{k\in\widehat\H}\{M_k \geq n^{2\alpha-2\delta}\}\right) =0, \end{align*} so \begin{align}\label{a7.9} \lim_{n\to\infty} \P\left(\bigcup_{1\leq k \leq n}\{M_k \geq n^{2\alpha-2\delta}\}\right) =0. \end{align} The piece of the spine $\{J^n_t, 0\leq t \leq u+2\} $ is a trajectory within the tree of descendants of some $X^k$ on the interval $[0,u+2]$. Therefore, the number $M_J$ of branching points on $\{J^n_t, 0\leq t \leq u+2\} $ must be less than or equal to $M_k$ for some $k$. 
If the event in \eqref{a3.8} holds then $M_J \geq \varepsilon n^{2\alpha-\delta}/12$. Hence, \eqref{a7.9} implies that \eqref{a3.8} cannot hold for large $n$. \medskip \emph{Step 4}. Let $M_* = \sum_{k\in\H} M_k$ and $\widehat M_* = \sum_{k\in\widehat \H} M_k$. Recall that $\Delta t = n^{-2\alpha + \delta}$ and $k_1 = \lfloor 1/\Delta t\rfloor = \lfloor 1/n^{-2\alpha + \delta}\rfloor \leq n^{2\alpha - \delta}/2$. We have \begin{align*} \left\{ \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^3} > \varepsilon k_1/6\right\} &\subset\{M_*+\widehat M_* \geq n^{1-2\alpha +5\delta/4 }\varepsilon k_1/6\} \subset \{M_*+ \widehat M_* \geq \varepsilon n^{1+\delta/4}/12\}\\ &\subset \{M_* \geq \varepsilon n^{1+\delta/4}/24\} \cup \{ \widehat M_* \geq \varepsilon n^{1+\delta/4}/24\}. \end{align*} By \eqref{a7.6}, \begin{align*} \E(M_* \mid F) &\leq n c^*_1,\\ \P(M_* \geq \varepsilon n^{1+\delta/4}/24\mid F) &\leq \E(M_* \mid F) (\varepsilon n^{1+\delta/4}/24)^{-1} \leq (24c^*_1/\varepsilon) n^{-\delta/4}. \end{align*} For similar reasons, \begin{align*} \P(\widehat M_* \geq \varepsilon n^{1+\delta/4}/24\mid \widehat F) \leq (24c^*_1/\varepsilon) n^{-\delta/4}. \end{align*} It follows that \begin{align*} &\P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^3} > \varepsilon k_1/6\right) \leq\P(M_* \geq \varepsilon n^{1+\delta/4}/24) +\P(\widehat M_* \geq \varepsilon n^{1+\delta/4}/24) \\ &\qquad\leq \P(\{M_* \geq \varepsilon n^{1+\delta/4}/24\}\cap F) + \P(F^c) +\P(\{\widehat M_* \geq \varepsilon n^{1+\delta/4}/24\}\cap \widehat F) + \P(\widehat F^c)\\ &\qquad\leq \P(\{M_* \geq \varepsilon n^{1+\delta/4}/12\}\mid F) + \P(F^c) +\P(\{\widehat M_* \geq \varepsilon n^{1+\delta/4}/24\}\mid \widehat F) + \P(\widehat F^c)\\ &\qquad\leq 2(24c^*_1/\varepsilon) n^{-\delta/4} + \P(F^c)+ \P(\widehat F^c). \end{align*} We combine this with \eqref{a7.4} and \eqref{n12.1} to see that \begin{align*} \lim_{n\to \infty}\P\left( \sum_{1\leq j \leq k_1} \mathbbm{1}_{C_j^3} > \varepsilon k_1/6\right)=0. \end{align*} This contradicts \eqref{a8.3} for large $n$, as required. \end{proof} \subsection{Branching frequency} \begin{lemma}\label{a15.1} Recall that we have fixed $\varepsilon>0$ below \eqref{a10.2} and $A_3=A_3(\varepsilon)$. We have \begin{align*} \liminf_{n\to \infty} \P(A_3) \geq 1-\varepsilon. \end{align*} \end{lemma} \begin{proof} Let $\widehat M$ be the number of jumps of $\mathbf X$ in the time interval $[t_2,t_3]$, i.e., the number of $i$ with $\tau_i\in [t_2,t_3]$. Recall that $t_2 = u + j n^{-2\alpha + \delta}$ and $t_3 = t_2 + n^{-2\alpha + \delta}$, where $j\geq 0$ is the smallest integer such that $\P(C_j)\leq \varepsilon$ in the notation of Lemma \ref{a3.6}. By Lemma \ref{a3.6}, \begin{align}\label{a10.5} \limsup_{n\to \infty} \P\left(\widehat M \geq n^{1-2\alpha + 5\delta/4}\right) \leq \varepsilon. \end{align} By \eqref{a11.7}, $\widetilde\P_{t_2}(G_Q ) \leq c_1 n^{-\alpha d}$. Note that $|\mathcal{M}_Q^+|=n \mathcal H_{t_2}^n\left(\mathbbm{1}_{G_Q}\right) $. We have $-\alpha+3\delta/4<0$ because of assumption \eqref{a7.8}. 
Hence $ \gamma(-\alpha+3\delta/4)<0$ and, therefore, \eqref{a9.1} yields, \begin{align}\notag &\sum_{Q\in \mathbf Q} \P\left(|\mathcal{M}_Q^+| \geq c_1 n^{1-\alpha d }+ n^{1-\alpha d -\delta}\right)\\ &\qquad \leq\sum_{Q\in \mathbf Q} \P\left(\left|\mathcal H_{t_2}^n\left(\mathbbm{1}_{G_Q}\right) - \widetilde\P_{t_2}(G_Q )\right| \geq n^{-\alpha d -\delta}\right) \leq c_2 n^{-\delta}.\label{a10.3} \end{align} We recall \eqref{a11.7} again to see that $\widetilde\P_{t_2}(G_Q ) \geq c_3 n^{-\alpha d+ \gamma (-\alpha+3\delta/4)}$. By \eqref{a9.1}, \begin{align}\notag &\sum_{Q\in \mathbf Q} \P\left(|\mathcal{M}_Q^+| \leq c_3 n^{1-\alpha d + \gamma (-\alpha+3\delta/4)}- n^{1-\alpha d +\gamma(-\alpha+3\delta/4) -\delta}\right)\\ &\qquad \leq\sum_{Q\in \mathbf Q} \P\left(\left|\mathcal H_{t_2}^n\left(\mathbbm{1}_{G_Q}\right) - \widetilde\P_{t_2}(G_Q )\right| \geq n^{-\alpha d +\gamma(-\alpha+3\delta/4) -\delta}\right) \leq c_4 n^{-\delta}.\label{a10.4} \end{align} Let \begin{align*} \widehat A_1 &= \bigcup_{Q\in \mathbf Q} \left\{|\mathcal{M}_Q^+| \geq 2 c_1 n^{1-\alpha d }\right\},\notag\\ \widehat A_2 &= \bigcup_{Q\in \mathbf Q} \left\{|\mathcal{M}_Q^+| \leq (c_3/2) n^{1-\alpha d +\gamma(-\alpha+3\delta/4) }\right\},\\ \widehat A_*&=\widehat A_1^c \cap \widehat A_2^c .\notag \end{align*} It follows from \eqref{a10.3}-\eqref{a10.4} that \begin{align}\label{a10.6} \lim_{n\to\infty} \P(\widehat A_*) =1. \end{align} Let $\mathcal{N}_Q$ be the number of $j$ such that $ X^{j}_{t_2} \in Q$, $X^{j}$ experiences (at least one) branching in $[t_2,t_3]$ (in other words, $j \in \mathcal{M}_Q^c$), and one of the branching events for $X^j$ is one of the first $n^{1-2\alpha + 5\delta/4}$ branching events for the whole process after time $t_2$. Note that $\mathcal{N}_Q=| \mathcal{M}_Q^c|$ if $\widehat M \leq n^{1-2\alpha + 5\delta/4}$. For each $Q\in \mathbf Q$, conditionally on $\mathcal{F}_{t_2}$, $\mathcal{N}_Q$ is stochastically majorized by the binomial distribution with the number of trials bounded by $n^{1-2\alpha +5\delta/4}$ and the success probability bounded by $|\mathcal{M}_Q^+|/n $. We can enlarge the probability space, if necessary, to construct a random variable $Y$ whose conditional distribution given $\mathcal{F}_{t_2}$ is binomial, a.s., with parameters \begin{align}\label{a25.2} \E\left(Y \mid \mathcal{F}_{t_2}\right) = |\mathcal{M}_Q^+| n^{-2\alpha +5\delta/4},\qquad \Var\left(Y \mid \mathcal{F}_{t_2}\right) = |\mathcal{M}_Q^+| n^{-2\alpha +5\delta/4}. \end{align} Recall \eqref{a10.2}. If $\widehat A_2^c$ occurred then for large $n$, \begin{align*} n^{-2\alpha +5\delta/4} \geq n^{3/4}(2/c_3) n^{-1+\alpha d -\gamma(-\alpha+3\delta/4) } \geq n^{3/4}/|\mathcal{M}_Q^+|. \end{align*} This, the stochastic dominance, \eqref{a25.2} and Chebyshev's inequality give, \begin{align*} \P&\left(\mathcal{N}_Q/|\mathcal{M}_Q| \geq 2 n^{-2\alpha +5\delta/4} \mid \mathcal{F}_{t_2}\right) \mathbbm{1}_{\widehat A_*}\\ &\leq \P\left(\mathcal{N}_Q/|\mathcal{M}_Q^+| \geq n^{-2\alpha +5\delta/4}+ n^{3/4}/|\mathcal{M}_Q^+| \mid \mathcal{F}_{t_2}\right) \mathbbm{1}_{\widehat A_*}\\ &=\P\left(\mathcal{N}_Q \geq |\mathcal{M}_Q^+| n^{-2\alpha +5\delta/4}+ n^{3/4} \mid \mathcal{F}_{t_2}\right) \mathbbm{1}_{\widehat A_*}\\ &\leq\P\left(Y \geq \E(Y\mid \mathcal{F}_{t_2})+ n^{3/4} \mid \mathcal{F}_{t_2}\right) \mathbbm{1}_{\widehat A_*}\\ &\leq |\mathcal{M}_Q^+| n^{-2\alpha +5\delta/4-3/2} \mathbbm{1}_{\widehat A_*} \leq 2c_1 n^{1-\alpha d } n^{-2\alpha +5\delta/4-3/2}. 
\end{align*} The number of cubes in $\mathbf Q$ is bounded by $c_5 n^{\alpha d}$, so \begin{align* &\P\left(\bigcup_{Q\in\mathbf Q} \{\mathcal{N}_Q/|\mathcal{M}_Q^+| \geq 2 n^{-2\alpha +5\delta/4}\} \mid \mathcal{F}_{t_2}\right) \mathbbm{1}_{\widehat A_*} \leq 2c_1c_5 n\cdot n^{-2\alpha +5\delta/4-3/2}. \end{align*} We have assumed in \eqref{a7.8} that $\alpha > \delta$ so $1 -2\alpha +5\delta/4-3/2<0$ and therefore, \begin{align}\label{a10.7} &\lim_{n\to\infty}\P\left(\bigcup_{Q\in\mathbf Q} \{\mathcal{N}_Q/|\mathcal{M}_Q^+| \geq 2 n^{-2\alpha +5\delta/4}\} \mid \mathcal{F}_{t_2}\right) \mathbbm{1}_{\widehat A_*}=0. \end{align} We have \begin{align*} \P(A_3^c)&=\P\left(\bigcup_{Q\in \mathbf Q} \left\{|\mathcal{M}_Q^c|/|\mathcal{M}_Q^+|> 2 n^{-2\alpha + 5\delta/4}\right\}\right)\\ &\leq \E\left( \P\left(\bigcup_{Q\in\mathbf Q} \{\mathcal{N}_Q/|\mathcal{M}_Q^+| \geq 2 n^{-2\alpha +5\delta/4}\} \mid \mathcal{F}_{t_2}\right)\mathbbm{1}_{\widehat A_*}\right)\\ &\quad+ \P\left(\left(\widehat A_*\right)^c\right) +\P\left(\widehat M \geq n^{1-2\alpha + 5\delta/4}\right). \end{align*} This, \eqref{a10.5}, \eqref{a10.6} and \eqref{a10.7} imply that \begin{align*} \liminf_{n\to \infty} \P(A_3) \geq 1-\varepsilon, \end{align*} as claimed. \end{proof} \section{Acknowledgments} We are grateful to Rodrigo Ba\~nuelos for very useful advice. \input{spine.bbl} \end{document}
\section*{Introduction} This supplementary material contains examples of the backward differentiation for PIP bases for a diatomic and a triatomic molecule, Mathematica code,\cite{Mathematica} a brief description of the Mathematica programs used, and training and testing RMS errors for energies and gradients, as well as timing results, for the MD17 ethanol potential. Results validating the new PES for ethanol are also included. \section*{Reverse Derivative Examples} We provide here two very simple worked examples to illustrate the principles of the reverse derivative technique for PIPs. The first is the case of a homonuclear diatomic molecule, while the second is the case of a single water molecule. Before continuing, an explanation of notation might be useful. In working the examples, we will often talk about partial derivatives, whereas the examples provided in the two tables below show Fortran code in which the derivatives are apparently normal ones. Of course, the end result, how the potential varies with changes in each of the Cartesian coordinates, must be expressed as partial derivatives. However, Fortran does not distinguish partial from normal derivatives, so the tables might appear at first to be at odds with the explanations. In most cases, the derivatives are actually partial ones; usually it is clear from the context. \subsection*{A homonuclear diatomic molecule} Consider the case of a homonuclear diatomic molecule with a maximum polynomial order of 3. The permutational symmetry is 2, and the $MSA$ code\cite{Xie-nma,msachen,Xie10,persp9} for the energy is listed in the first column of Table~SM-I. To calculate the energy for a particular geometry, one moves forward (upward) in the column by calculating first the value of the transformed internuclear distance, $\tau(1)$, then the $m$ values, then the $p$ values, and finally the potential $V=\bm{c}\cdot\bm{p}$. \begin{table}[htbp!] \caption*{Table SM-I.
Energy and both Forward and Reverse Automatic Differentiation for a Homonuclear Diatomic Molecule } \centering \begin{scriptsize} \begin{tabular*}{\textwidth}{lll} \hline \hline\noalign{\smallskip} Forward (up) & \hspace{.5 cm}Forward (up) &\hspace{1cm} Reverse (down)\\ $V=\bm{c}\cdot\bm{p}$ & \hspace{.5 cm} $\partial{V} = \bm{c}\cdot\partial{\bm{p}}$ & \hspace{1cm} ${\partial{V}} = \bm{c}\cdot\partial{\bm{p}}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} $p(3)=p(1)*p(2)$& \hspace{.5 cm} $dp(3) = dp(1)*p(2) + p(1)*dp(2)$ & \hspace{1cm} $a(6)=(dV/dp(3)) = c(3)$\\ $p(2)=p(1)*p(1)$ & \hspace{.5 cm} $dp(2) = dp(1)*p(1) + p(1)*dp(1)$ & \hspace{1cm} $a(5)=(dV/dp(2)) + a(6)*(dp(3)/dp(2))$\\ & \hspace{.5 cm} & \hspace{1cm}\hspace{1cm}$= c(2) + a(6)*p(1)$\\ $p(1)=m(1)$ & \hspace{.5 cm} $dp(1) = dm(1)$ & \hspace{1cm}$a(4)=(dV/dp(1)) + a(5)*(dp(2)/dp(1))$\\ & \hspace{.5 cm} & \hspace{1.0cm}\hspace{1cm} $+ a(6)*(dp(3)/dp(1)$\\ & \hspace{.5 cm} & \hspace{1cm}\hspace{1cm}$ =c(1) + a(5)*2*p(1) +a(6)p(2)$\\ $p(0)=m(0)$ & \hspace{.5 cm} $dp(0) = dm(0) = 0$ & \hspace{1cm}$a(3)=(dV/dp(0)) + a(4)*(dp(1)/dp(0))$\\ & \hspace{.5 cm} & \hspace{1.0cm}\hspace{1cm}$ + a(5)*(dp(2)/dp(0)) + a(6)*(dp(3)/dp(0))$\\ & \hspace{.5 cm} & \hspace{1cm} \hspace{1cm}$= c(0)$\\ $m(1)=\tau(1)$ & \hspace{.5 cm} $dm(1) = d\tau(1) = -(m(1)/\lambda)*\frac{\partial{r(1)}}{\partial{\alpha_n}}d\alpha_n$ & \hspace{1cm}$a(2)=a(3)*(dp(0)/dm(1)) + a(4)*(dp(1)/dm(1))$\\ &\hspace{.5 cm} & \hspace{1cm} \hspace{1.cm} $ + $…$ = a(4)$\\ $m(0) = 1$ & \hspace{.5 cm} $dm(0) = 0$ & \hspace{1cm} $a(1)=a(2)*(dm(1)/dm(0)) +a(3)*(dp(0)/dm(0))$\\ & \hspace{.5 cm} & \hspace{1cm}\hspace{1cm}$ + $…$=$ $ 0 + 1 = a(3)$\\ \noalign{\smallskip}\hline \hline\noalign{\smallskip} \end{tabular*} \end{scriptsize} \end{table} The derivative steps for the calculation of the gradient of $V$ are shown in the second column of the Table~SM-I. Moving forward again, we calculate the differential of $\tau(1)$, then the differentials of the $m$, then the differentials of the $p$, and finally the differential of the potential. Note that the differential $d\tau(1)=dm(1)$ depends on which Cartesian coordinate $\alpha_n$ we want. In the equation given, $dm(1) = -(m(1)/\lambda)*\frac{\partial{r(1)}}{\partial{\alpha_n}}d\alpha_n$, the first factor,$ -(m(1)/\lambda)$ comes from the derivative of transformed $m(1)=\tau(1)$ with respect to the inter nuclear distance, $r(1)$, where we have assumed a Morse transform, $\tau(1)=exp(-r(1)/\lambda)$, where $\lambda$ is a range parameter generally taken to be about 2 bohr. It is instructive in this simple case to see how the derivative of the potential depends on $\tau(1)$. One can easily verify using the definitions in the first two columns that $\frac{\partial{V}}{\partial{\tau(1)}}=\sum_{i=1}^{3}\frac{\partial{V}}{\partial{p(i)}}\frac{\partial{p(i)}}{\partial{\tau(1)}} = \sum_{i=1}^{3}c(i)\frac{\partial{p(i)}}{\partial{\tau(1)}}=c(1) + c(2)*2*\tau(1) + c(3)*3*\tau(1)^2 $. In order to get all six partial derivatives of $V$, we would have to evaluate each of $3N=6$ different differentials $dm(1)=d\tau(1)$, then work our way up the middle column for each choice to get the differential of $V$. Now consider the reverse derivative method in the third column of Table SM-I (moving down). The first adjoint is $a(6)$ with conjugate variable $p(3)$, and the derivative of $V$ with respect to $p(3)$ is $c(3)$. 
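Before working through the remaining adjoints one at a time, it may help to see the whole plan of Table SM-I collected in one place. The following Fortran sketch is ours and is not part of the $MSA$-generated code; the subroutine name is arbitrary and the coefficient array is a placeholder for values obtained from a fit. It evaluates the energy in the forward direction, runs the reverse (adjoint) pass, and finishes with the chain rule that produces all six Cartesian derivatives in a single pass.
\begin{verbatim}
! Sketch only: not part of the MSA-generated code.  The coefficients c(0:3)
! are placeholders for values obtained from a fit; lambda is the Morse
! range parameter.
subroutine diatomic_v_and_grad(xyz, c, lambda, v, grad)
  implicit none
  double precision, intent(in)  :: xyz(3,2)   ! Cartesian coordinates
  double precision, intent(in)  :: c(0:3)     ! linear coefficients of V = c.p
  double precision, intent(in)  :: lambda     ! Morse range parameter (bohr)
  double precision, intent(out) :: v, grad(3,2)
  double precision :: r, tau, p1, p2, p3, a6, a5, a4, drdx(3)
! forward pass (energy): read Table SM-I upward
  r   = sqrt(sum((xyz(:,1) - xyz(:,2))**2))
  tau = exp(-r/lambda)              ! m(1)
  p1  = tau                         ! p(1)
  p2  = p1*p1                       ! p(2)
  p3  = p1*p2                       ! p(3)
  v   = c(0) + c(1)*p1 + c(2)*p2 + c(3)*p3
! reverse pass (adjoints): read Table SM-I downward
  a6 = c(3)                         ! a(6) = dV/dp(3)
  a5 = c(2) + a6*p1                 ! a(5) = dV/dp(2)
  a4 = c(1) + a5*2.d0*p1 + a6*p2    ! a(4) = dV/dp(1) = dV/dtau
! chain rule: dV/dalpha = (dV/dtau)*(dtau/dr)*(dr/dalpha)
  drdx = (xyz(:,1) - xyz(:,2))/r
  grad(:,1) =  a4*(-tau/lambda)*drdx
  grad(:,2) = -grad(:,1)
end subroutine diatomic_v_and_grad
\end{verbatim}
The three adjoint assignments in this sketch correspond to $a(6)$, $a(5)$, and $a(4)$ of Table SM-I; they are derived step by step in what follows.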
For the next adjoint, $a(5)$, the conjugate variable, $p(2)$, can contribute to a change in $V$ either directly through its contribution in the dot product or indirectly through its contribution to the change in the previously calculated $a(6)$. The contribution from the dot product is $c(2)$, whereas the contribution through $a(6)$ is $a(6)\,(dp(3)/dp(2))$. From the definition of $dp(3)$ in the second column, we find that $(dp(3)/dp(2)) = p(1)$, so the adjoint $a(5)$ is equal to $c(2) + a(6)*p(1)$. Continuing down the chain, the reasoning is similar. Note that many derivatives are zero. The important line is the one for the adjoint $a(2)$, because its conjugate variable is $\tau(1)$, so its value is that of $dV/d\tau(1)$. It is instructive to see what this derivative is in terms of $\tau(1)$. We see that $a(2) = a(4) = c(1) +2a(5)p(1) +a(6)p(2)$, which can be shown to be equal to $c(1)+2*c(2)*\tau(1)+3*c(3)*\tau(1)^2$, exactly the answer we got using the forward differentiation. To get the gradient we want, we use $\frac{\partial{V}}{\partial{\alpha_n}}=\frac{\partial{V}}{\partial{\tau(1)}}\frac{\partial{\tau(1)}}{\partial{\alpha_n}}=\frac{\partial{V}}{\partial{\tau(1)}}\frac{\partial{\tau(1)}}{\partial{r(1)}}\frac{\partial{r(1)}}{\partial{\alpha_n}}$. From the previous paragraph, we have already seen that the rhs of the last equation is $\frac{\partial{V}}{\partial{\tau(1)}}(-m(1)/\lambda)\frac{\partial{r(1)}}{\partial{\alpha_n}}$. The big difference between the reverse and forward methods is that in one reverse pass we get $\frac{\partial{V}}{\partial{\tau(1)}}$ and that all we need to do to get all $3N=6$ gradients is to multiply this result by $(-m(1)/\lambda)$ and by each of the six partial derivatives $\frac{\partial{r(1)}}{\partial{\alpha_n}}$. \subsection*{A single water molecule} \begin{table}[ht] \caption*{Table SM-II.
Energy and both Forward and Reverse Automatic Differentiation for a Single Water Molecule } \centering \begin{scriptsize} \begin{tabular*}{\textwidth}{ l l l } \hline \hline\noalign{\smallskip} Forward (up) & \hspace{.5cm}Forward (up) & \hspace{.5cm}Reverse (down) \\ $V=\bm{c}\cdot\bm{p}$ & \hspace{.5cm}$\partial{V} = \bm{c}\cdot\partial{\bm{p}}$ & \hspace{.5cm}${\partial{V}} = \bm{c}\cdot\partial{\bm{p}}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} $p(12) = p(2)*p(6)$ &\hspace{.5cm} $dp(12) = dp(2)*p(6) + p(2)*dp(6)$ &\hspace{.5cm} $a(18) = c(12)$ \\ $p(11) = p(1)*p(5) $ &\hspace{.5cm} $dp(11) = dp(1)*p(5) + p(1)*dp(5) )$ &\hspace{.5cm} $a(17) = c(11)$ \\ \hspace{.5cm} $- p(8)$ &\hspace{.5cm} \hspace{.5cm} $- dp(8)$ &\hspace{.5cm} \\ $p(10) = p(2)*p(4)$ &\hspace{.5cm} $dp(10) = dp(2)*p(4) + p(2)*dp(4)$ &\hspace{.5cm} $a(16) = c(10)$ \\ $p(9) = p(2)*p(5)$ &\hspace{.5cm} $dp(9) = dp(2)*p(5) + p(2)*dp(5)$ &\hspace{.5cm} $a(15) = c(9)$ \\ $p(8) = p(3)*p(1)$ &\hspace{.5cm} $dp(8) = dp(3)*p(1) + p(3)*dp(1)$ &\hspace{.5cm} $a(14) = c(8) + a(17)*(-1)$ \\ $p(7) = p(2)*p(3)$ &\hspace{.5cm} $dp(7) = dp(2)*p(3) + p(2)*dp(3)$ &\hspace{.5cm} $a(13) = c(7)$ \\ $p(6) = p(2)*p(2)$ &\hspace{.5cm} $dp(6) = dp(2)*p(2) + p(2)*dp(2)$ &\hspace{.5cm} $a(12) = c(6) + a(18)*p(2)$\\ $p(5) = p(1)*p(1)$ &\hspace{.5cm} $dp(5) = dp(1)*p(1) + p(1)*dp(1)$ &\hspace{.5cm} $a(11) = c(5) + a(17)*p(1) + a(15)*p(2)$ \\ \hspace{.7cm}$- p(3) - p(3)$ & \hspace{1.7cm}$- dp(3) - dp(3)$ & \\ $p(4) = p(2)*p(1)$ &\hspace{.5cm} $ dp(4) = dp(2)*p(1) +p(2)*dp(1)$ &\hspace{.5cm} $a(10) = c(4) + a(16)*p(2)$ \\ $p(3) = m(4)$ &\hspace{.5cm} $dp(3) = dm(4)$ &\hspace{.5cm} $a(9) = c(3) + a(14)*p(1) + a(13)*p(2) + $ \\ &\hspace{.5cm} &\hspace{.5cm}\hspace{1cm} $a(11)*(-2)$ \\ $p(2) = m(3)$ &\hspace{.5cm} $dp(2) = dm(3)$ &\hspace{.5cm} $a(8) = c(2) + a(18)*p(6) + a(16)*p(4) + $ \\ &\hspace{.5cm} & \hspace{.5cm}\hspace{1cm} $a(15)*p(5) + a(13)*p(3) + a(12)*2*p(2) + $ \\ &\hspace{.5cm} &\hspace{.5cm}\hspace{1cm} $a(10)*p(1)$ \\ $p(1) = m(1) + m(2)$ &\hspace{.5cm} $dp(1) = dm(1) + dm(2)$ &\hspace{.5cm} $a(7) = c(1) + a(17)*p(5) + a(14)*p(3) + $\\ & & \hspace{.5cm}\hspace{1cm} $ a(11)*2*p(1) + a(10)*p(2)$ \\ $p(0) = m(0)$ &\hspace{.5cm} $dp(0) = 0$ &\hspace{.5cm} $a(6)=pp(0) = c(0)$ \\ $m(4) = m(1)*m(2)$ &\hspace{.5cm} $dm(4) = dm(1)*m(2) + m(1)*dm(2)$ &\hspace{.5cm} $a(5) = a(9)$ \\ $m(3) = \tau(1)$ &\hspace{.5cm} $dm(3) = d\tau(1)$ &\hspace{.5cm} $a(4) = a(8)$ \\ $m(2) = \tau(2)$ &\hspace{.5cm} $dm(2) = d\tau(2)$ &\hspace{.5cm} $a(3) = a(7) + a(5)*m(1)$ \\ $m(1) = \tau(3)$ &\hspace{.5cm} $dm(1) = d\tau(3)$ &\hspace{.5cm} $a(2) = a(7) + a(5)*m(2)$ \\ $m(0) = 1$ &\hspace{.5cm} $dm(0) = 0$ &\hspace{.5cm} $a(1) = a(6)$ \\ \noalign{\smallskip}\hline \hline\noalign{\smallskip} \end{tabular*} \end{scriptsize} \end{table} We next consider the example of a single water molecule. The permutational symmetry is 21, meaning that the two hydrogens permute with one another and the oxygen does not permute. We'll use a maximum polynomial of order 3. The $MSA$ output is listed in the first column of Table SM-II, while the derivative equations are listed in the second column. Here is a slightly different way of thinking about the adjoints in the third column. The operative equation from the main text is \begin{equation} a_j(t_j)= c_i \delta_{t,p} +\sum_{i=j+1}^{i_{max}} a_i \frac{\partial{t_i}}{\partial{t_j}}. 
\label{eq: adjointrule} \end{equation} Note that the partial derivative in the second term of this equation is that of $t_i$ with respect to $t_j$, where $i>j$. Consider first adjoints whose conjugate variables are among the $p$. The first term contributes a $c_i$ for these. In order for there to be second-term contributions, one or more of the $dp_i$ definitions must contain a $dp_j$ term, so for any adjoint we look at the $dp_i$ definitions in the center column to see if any of the right-hand sides contains the derivative of the conjugate variable we are considering, $dp_j$. For example, $a(17)$ does not have a second term because the rhs of the $dp(12)$ equation does not contain $dp(11)$. Similarly $a(15)$, $a(16)$ and $a(17)$ do not have second terms because the rhs of the $dp(12)$, $dp(11)$, and $dp(10)$ definitions do not contain either $dp(11)$, $dp(10)$ or $dp(9)$. However, for $a(14)$, the rhs of the $dp(11)$ definition does contain $dp(8)$, and the derivative of $dp(11)$ with respect to $dp(8)$ is (-1). Thus, the second term will be $a(17)$, the adjoint of $p(11)$, times the derivative, (-1). The remainder of the adjoints corresponding to $p$ conjugate variables can be likewise evaluated. Then we come to the adjoints with $m$ conjugate variables. The adjoint $a(5)$ has conjugate variable $m(4)$, which appears in the definition of $dp(3)$, whose adjoint is $a(9)$. The derivative is 1, so $a(5)=a(9)$. Similarly, the adjoint $a(4)$ has conjugate variable $m(3)$, which appears in the definition of $dp(2)$, whose adjoint is $a(8)$; the derivative is again 1. Thus $a(4)=a(8)$. The adjoints $a(3)$ and $a(2)$ each have two terms. We now focus on $a(4)$, $a(3)$, and $a(2)$, since their conjugate variables are, respectively, $dm(3)=d\tau(1)$, $dm(2)=d\tau(2)$, and $dm(1)=d\tau(3)$. Recalling that the adjoint is the partial derivative of $V$ with respect to the adjoint's conjugate variable, we see that these three adjoints give us, respectively, $\frac{\partial{V}}{\partial{\tau(1)}}$, $\frac{\partial{V}}{\partial{\tau(2)}}$, and $\frac{\partial{V}}{\partial{\tau(3)}}$. As in the case of the diatomic molecule, we now use a chain rule to get the desired partial derivatives of $V$ with respect to the Cartesian coordinates. In this case, however, we have three variables, so $\frac{\partial{V}}{\partial{\alpha_n}}=\sum_{m=1}^3\frac{\partial{V}}{\partial{\tau(m)}}\frac{\partial{\tau(m)}}{\partial{\alpha_n}}=\sum_{m=1}^3\frac{\partial{V}}{\partial{\tau(m)}}\frac{\partial{\tau(m)}}{\partial{r(m)}}\frac{\partial{r(m)}}{\partial{\alpha_n}}$. Two terms contribute to each partial derivative of $V$ with respect to any $\alpha_n$.
The results are: \begin{equation} \begin{split} \frac{\partial{V}}{\partial{x_1}} = &+ a(4))*(-m(3)/a)*\frac{\partial{r(1)}}{\partial{x_1}} + a(3)*(-m(2)/a)*\frac{\partial{r(2)}}{\partial{x_1}} \\ \frac{\partial{V}}{\partial{y_1}} = & +a(4)*(-m(3)/a)*\frac{\partial{r(1)}}{\partial{y_1}} + a(3)*(-m(2)/a)*\frac{\partial{r(2)}}{\partial{y_1}} \\ \frac{\partial{V}}{\partial{z_1}} =& +a(4)*(-m(3)/a)*\frac{\partial{r(1)}}{\partial{z_1}} + a(3)*(-m(2)/a)*\frac{\partial{r(2)}}{\partial{z_1}}\\ \frac{\partial{V}}{\partial{x_2}} =& + a(4)*(-m(3)/a)*\frac{\partial{r(1)}}{\partial{x_2}} + a(2)*(-m(1)/a)*\frac{\partial{r(3)}}{\partial{x_2}}\\ \frac{\partial{V}}{\partial{y_2}} =& +a(4)*(-m(3)/a)*\frac{\partial{r(1)}}{\partial{y_2}} + a(2)*(-m(1)/a)*\frac{\partial{r(3)}}{\partial{y_2}}\\ \frac{\partial{V}}{\partial{z_2}} =& +a(4)*(-m(3)/a)*\frac{\partial{r(1)}}{\partial{z_2}} + a(2)*(-m(1)/a)*\frac{\partial{r(3)}}{\partial{z_2}}\\ \frac{\partial{V}}{\partial{x_3}} =& +a(3)*(-m(2)/a)*\frac{\partial{r(2)}}{\partial{x_3}} + a(2)*(-m(1)/a)*\frac{\partial{r(3)}}{\partial{x_3}}\\ \frac{\partial{V}}{\partial{y_3}} =& +a(3)*(-m(2)/a)*\frac{\partial{r(2)}}{\partial{y_3}} + a(2)*(-m(1)/a)*\frac{\partial{r(3)}}{\partial{y_3}}\\ \frac{\partial{V}}{\partial{z_3}} =& +a(3)*(-m(2)/a)*\frac{\partial{r(2)}}{\partial{z_3}} + a(2)*(-m(1)/a)*\frac{\partial{r(3)}}{\partial{z_3}}\\ \end{split} \end{equation} \section*{Reverse Derivative Mathematica Code for an Adjoint} \begin{table}[ht] \caption*{Table SM-III. Mathematica Code for an Adjoint} \centering \begin{scriptsize} \begin{tabular*}{\textwidth}{ l } \hline \hline\noalign{\smallskip} \texttt{GetAdjoint[jval\_,pliststr\_,mliststr\_,qliststr\_,mpqtxt\_,mpqtxttab\_, natoms\_,npoly\_,nq\_,nmono\_,nvar\_]:=Module[}\\ \texttt{\{fouttxtadd,jend,skipm,return,ltri,ltrj,dolist,eqpos,numi,numj, lookfor,lasti\}, (* local variables *)}\\ \texttt{(* \textcolor{blue}{jval} is the index of mpqtxt or mpqtttab for which you are trying to find the adjoint. }\\ \texttt{\textcolor{blue}{pliststr, mliststr}, and \textcolor{blue}{qliststr} are text strings of the list of p's, m's, and q's (if present).}\\ \texttt{They are created in the calling function by these commands: }\\ \hspace{0.5cm}\texttt{pliststr="";Do[(pliststr=pliststr\textless\textgreater''p''\textless\textgreater ToString[i-1]\textless\textgreater",";),\{i,1,npoly+1\}]; }\\ \texttt{\hspace{0.5cm}qliststr=""; If[nq\textgreater0,Do[(qliststr=qliststr\textless\textgreater"q"\textless\textgreater ToString[i]\textless\textgreater",";),\{i,1,nq\}];];}\\ \texttt{\hspace{0.5cm}mliststr=""; Do[( mliststr=mliststr\textless\textgreater''m''\textless\textgreater ToString[i-1]\textless\textgreater","; ),\{i,1,nmono+1\}];}\\ \texttt{\textcolor{blue}{mpqtxt} is the Mathematica code for all m's, p's and q's (if present); its format looks something like this: }\\ \texttt{ \{"m0=1.0D0","m1=x3;","m2=x2;","m3=x1;","m4=m1*m2;","p0=m0; ","p1=m1+m2;","p2=m3;", }\\ \texttt{"p3=m4;","p4=p2*p1;""p5=p1*p1-p3-p3;","p6=p2*p2;", "p7=p2*p3;","p8=p3*p1;", "p9=p2*p5;","p10=p2*p4;", }\\ \texttt{"p11=p1*p5-p8;","p12=p2*p6;"\} }\\ \texttt{\textcolor{blue}{mpqtxttab} is the Fortran code for m's, p's and q's in a Table form of text entries, eg. each term is like }\\ \texttt{\{"m(99)","=","m(8)","*","m(24)"\}. 
}\\ \texttt{\textcolor{blue}{natoms, npoly, nq, nmono,} and \textcolor{blue}{nvar} are, respectively the numbers of atoms (in the parent), the numbers of p }\\ \texttt{polynomials, the number of q polynomials, the number of monomials, and the number of variables (x). }\\ \texttt{\textcolor{blue}{fouttxtadd} is the Fortran code output, in text format. The adjoints have the format of }\\ \texttt{pp(i), qp(j), mp(k), xxp(n) for the p's, q's, m's, and x's, respectively. *) }\\ \\ \texttt{\textcolor{red}{(* get starting part of fouttxtadd, i.e., add c(i) if the conjugate variable of the adjoint is a p *)} }\\ \texttt{ltrj=StringTake[mpqtxt[[jval]],1]; }\\ \texttt{eqpos=StringPosition[mpqtxt[[jval]],"="][[1,1]]; }\\ \texttt{numj=ToExpression[StringDrop[StringDrop[mpqtxt[[jval]], \{eqpos,StringLength[mpqtxt[[jval]]]\}],1]]; }\\ \texttt{fouttxtadd=ltrj\textless\textgreater "p"\textless\textgreater ToString[numj]\textless\textgreater" = "; }\\ \texttt{If[ltrj=="p",fouttxtadd=fouttxtadd\textless\textgreater"+ c"\textless\textgreater ToString[numj];]; }\\ \\ \texttt{\textcolor{red}{(* accumulate in dolist the indicies of all mpqtxttab elements that have ltrj\textless\textgreater "("\textless\textgreater ToString[jval]\textless\textgreater ")"} }\\ \texttt{on the rhs. These are the ones whose partial derivatives will need to be evaluatied *) }\\ \texttt{dolist=\{\}; }\\ \texttt{lookfor=ltrj\textless\textgreater"("\textless\textgreater ToString[numj]\textless\textgreater")"; }\\ \texttt{lasti=Position[mpqtxttab[[All,1]],lookfor][[1,1]]; }\\ \texttt{Do[( }\\ \texttt{\hspace{0.5 cm}If[MemberQ[Drop[mpqtxttab[[i]],2],lookfor], dolist=Append[dolist,i]; ]; }\\ \texttt{),\{i,Length[mpqtxttab],lasti,-1\}]; }\\ \texttt{If[dolist==\{\}, If[StringTake[fouttxtadd,-2]=="= ",fouttxtadd="";]; Goto[return]; ]; }\\ \\ \\ \\ \\ (Table continued on next page)\\ \noalign{\smallskip}\hline \hline\noalign{\smallskip} \end{tabular*} \end{scriptsize} \end{table} \begin{table}[ht] \caption*{Table SM-III (continued). 
Mathenatica Code for and Adjoint} \centering \begin{scriptsize} \begin{tabular*}{\textwidth}{ l } \hline \hline\noalign{\smallskip} \texttt{\textcolor{red}{(* perform do loop over the members of dolist*)} }\\ \texttt{Do[( }\\ \texttt{idyn=i; (* global for dynamic display of progress*) }\\ \texttt{Quiet[ ToExpression["Clear["\textless\textgreater pliststr \textless\textgreater"];"]; (* these much be cleared *) }\\ \texttt{ToExpression["Clear["\textless\textgreater mliststr \textless\textgreater"];"]; (* so that derivatives will be *) }\\ \texttt{If[nq!=0,ToExpression["Clear["\textless\textgreater qliststr\textless\textgreater"];"];]; (* symbolic *) ]; }\\ \texttt{ltri=StringTake[mpqtxt[[i]],1]; }\\ \texttt{eqpos=StringPosition[mpqtxt[[i]],"="][[1,1]]; }\\ \texttt{numi=ToExpression[StringDrop[StringDrop[mpqtxt[[i]], {eqpos,StringLength[mpqtxt[[i]]]}],1]]; }\\ \texttt{ToExpression[mpqtxt[[i]]]; }\\ \texttt{(* this statement adds the new text in FortranForm after solving symbolically for the derivative: *) }\\ \texttt{\hspace{0.5cm}fouttxtadd=fouttxtadd\textless\textgreater" + "\textless\textgreater ltri\textless\textgreater"p"\textless\textgreater ToString[numi]\textless\textgreater"*"\textless\textgreater }\\ \texttt{\hspace{0.5cm}\hspace{0.5cm}ToString[FortranForm[D[ToExpression[ltri\textless\textgreater ToString[numi]], }\\ \texttt{\hspace{0.5cm}\hspace{0.5cm}ToExpression[ltrj<>ToString[numj]]]]]; }\\ \texttt{),\{i,dolist\}]; (* end of do loop over dolist *) }\\ \texttt{(* if no rhs, then no new text, thus: *) }\\ \texttt{If[StringTake[fouttxtadd,-2]=="= ",fouttxtadd="";]; }\\ \\ \texttt{\textcolor{red}{(* finish up *)} }\\ \texttt{Label[return]; }\\ \texttt{ If[fouttxtadd!="",fouttxtadd=fouttxtadd \textless\textgreater "(* CR here *) }\\\texttt{n";]; }\\ \texttt{ fouttxtadd (* output *) }\\ \texttt{]; }\\ \noalign{\smallskip}\hline \hline\noalign{\smallskip} \end{tabular*} \end{scriptsize} \end{table} We list in Table SM-III the Mathematica Code used to find the $j^{th}$ adjoint of a computational plan such as the ones produced by $MSA$ software and, simultaneously, to output Fortran code that can be used to implement it. Comments are enclosed in ``(* ... *)''. \clearpage \vspace{2.5cm} \section*{Brief Description of Mathematica Programs} The current versions of Mathematica programs are here briefly described, with emphasis on the work path for producing purification, compaction, and the appending of derivative programs. In general, the first step is to use the permutational symmetry and polynomial order to generate $MSA$ Fortran output.\cite{Xie-nma,msachen,Xie10,persp9} We then convert this output to a ``standard'' Fortran form using a program that allows for fragmentation\cite{NandiBowman2019,conte20_efficientPIP} but can also be used (specifying the parent molecule as a single fragment) to produce the standard form. The output is a Fortran program. The second step is to use this Fortran program as an input to the Mathematica purification routine, which sorts the polynomials into those that have the correct limiting behavior at large distances and those that do not. As an option, this program will also perform the ``compaction'' step, which deletes those polynomials and monomials that are no longer needed as a result of the purification. A further option allows one to include various forms of derivative programs, including the ``normal'' derivative routine, the ``fast (forward)'' derivative routine, or the reverse derivative routine. 
These derivative routines can be left out of the purified/compacted Fortran output and appended separately in a subsequent Mathematica step. This Mathematica program also allows the derivatives to be generated and appended to programs that do not need purification. A further option of the Mathematica purification program is to increase the number of polynomials\cite{conte20_efficientPIP} and coefficients by generation new PIPs via multiplication of the purified ones and selection of those that have the largest value when calculated using the maximum values of the transformed internuclear distances that occur in the data set. A similar add-on step allows ``pruning'' of polynomials, i.e., elimination of the least important polynomials, those that have the smallest values of those calculated in the same manner.\cite{conte20_efficientPIP} Collectively, these Mathematica-based tools allow one to generate an efficient basis of PIPs, whose number can be increased or decreased, whose long-range limiting behaviour can be controlled, and whose analytical derivative method can be selected. \section*{A New Ethanol PES} To examine the fidelity of this new PES, we performed geometry optimization and normal mode analysis. The agreement with the direct \textit{ab initio} one is excellent. We get the energy of the global minimum geometry as -154.9995845 Hartree, whereas the direct calculation gives -154.9995972 Hartree. Comparison of harmonic frequencies with their corresponding \textit{ab initio} ones given in Table. SM-IV. It is seen that the agreement with the direct B3LYP/6-311+G(d,p) frequencies is very good; the maximum error is 11 cm$^{-1}$, but most of the frequencies are within couple of cm$^{-1}$ of the \textit{ab initio} ones. \begin{table}[htbp!] \caption*{Table SM-IV. Comparison of harmonic frequencies (in cm$^{-1}$) between PES and the corresponding \textit{ab initio} (B3LYP/6-311+G(d,p)) ones of Ethanol.} \label{tab:PES_Freq} \begin{threeparttable} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccccc} \hline \hline\noalign{\smallskip} Mode & PES & \textit{ab initio} & Diff. & Mode & PES & \textit{ab initio} & Diff. \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & 237 & 229 & 8 & 12 & 1446 & 1444 & 2 \\ 2 & 268 & 269 & -1 & 13 & 1482 & 1481 & 1 \\ 3 & 419 & 416 & 3 & 14 & 1498 & 1498 & 0 \\ 4 & 821 & 821 & 0 & 15 & 1522 & 1526 & -4 \\ 5 & 896 & 895 & 1 & 16 & 2979 & 2975 & 4 \\ 6 & 1036 & 1028 & 8 & 17 & 3005 & 3001 & 4 \\ 7 & 1094 & 1093 & 1 & 18 & 3031 & 3030 & 1 \\ 8 & 1175 & 1175 & 0 & 19 & 3097 & 3096 & 1 \\ 9 & 1267 & 1256 & 11 & 20 & 3105 & 3103 & 2 \\ 10 & 1298 & 1299 & -1 & 21 & 3841 & 3840 & 1 \\ 11 & 1401 & 1403 & -2 & --- & --- & --- & --- \\ \noalign{\smallskip}\hline \hline \end{tabular*} \end{threeparttable} \end{table} \clearpage \section*{Introduction} There has been dramatic progress in using regression methods from machine learning (ML) to develop high-dimensional potential energy surfaces (PESs). This has also led to a plethora of perspectives in this field which are beyond the scope of this article to review. But we do note that in 2020-present there have been at least 8 perspectives in the mainline journals, J. Chem. Phys. and J. Phys. Chem. Letters.\cite{persp1,persp2,persp3,perps4,persp5,persp6,persp7,persp8} These excellent papers convey the breadth and excitement of this important application of ML to potentials. Perhaps the first Perspective in J. Phys. Chem. Lett. 
on this topic came from the Bowman group in 2010,\cite{persp9} where the theory and numerous applications of permutationally invariant polynomial (PIP) regression were introduced to the readers of this journal. This ML method, introduced for \ce{CH5+} in 2003,\cite{Bowman2003} is actively used\cite{ARPC2018,robo20,mbpoltests,paespips_21} and further developed.\cite{conte20_efficientPIP,jasperpips_21,greedypip_21} PIPs have also been incorporated in Neural Network methods,\cite{Guo16,persp6,NNZhang16,FINN_18} Gaussian Process Regression,\cite{PIP-GP} and recently in atom-centered Gaussian Process Regression.\cite{Xie10Allen2021,Oord2020} There are now numerous ML methods and codes to fit electronic energies and energies plus gradients (forces), and many of these are the subjects of the perspectives mentioned above as well as many more reviews and perspectives over the past 10 years. It is clearly important to assess the performance of these various methods on the same datasets and ideally run on the same computer. This was recently done for ML methods applied to molecules with 9 or more atoms in several studies.\cite{paescomps, PIP-GP, Tkatchjcp, dralchemsci} The paper by Pinheiro et al.\cite{dralchemsci} is particularly noteworthy as it contains a comprehensive study for ethanol, using the MD17 dataset of energies and forces,\cite{MD17} of the dependence of precision and prediction times on training size for energies and energies plus forces for several popular approaches to constructing ML PESs, such as GAP-SOAP,\cite{GP-2015-1} ANI,\cite{AN1} DPMD,\cite{dpmd2018} sGDML,\cite{Tkatch2018, Tkatch19} PhysNet,\cite{PhysNet} KREG,\cite{KREG} pKREG,\cite{pKREG} and KRR\cite{KREG}. We give brief descriptions of these methods below. As seen in that paper and also below, all the methods can reach RMS fitting errors for energies of 0.1 kcal~mol$^{-1}$ when trained just on energies. However, in the time required for prediction (energies plus forces) there are differences of factors of ten or more. There are of course caveats to timings, but in this case, all timings were done on the same compute node, an Intel Xeon Gold 6240 2-Processor 24.75M Cache 2.60 GHz. A similar assessment of some of these ML methods was very recently made, including the recent Atomic Cluster Expansion (ACE) method,\cite{ACEPIP_21} using revised MD17 datasets. These methods have been described in detail in the two recent assessment papers\cite{dralchemsci, ACEPIP_21} and so we just give a brief description here. In broad terms these methods can be categorized into kernel-based approaches (e.g., GAP-SOAP, sGDML, KREG, pKREG, KRR) and neural network (NN)-based ones. The kernel-based approaches represent the potential energy as a linear combination of kernel functions that measure the similarity between the input molecular configuration and every configuration in the training set. As a result, the cost of prediction scales as $O(N)$, where $N$ is the size of the training data. The differences between these kernel-based methods are the choice of the kernel function and the descriptor used to represent the molecular configuration. For example, in Kernel Ridge Regression (KRR), many descriptors, such as aligned Cartesian coordinates, the Coulomb matrix\cite{cmatrix} (in KRR-CM), and the RE descriptor\cite{KREG} (in KREG), have been used. Common kernel functions include the Gaussian kernel function.
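To make the $O(N)$ prediction cost concrete, the following generic Fortran sketch evaluates a kernel model with a Gaussian kernel. It is not the implementation of any of the packages discussed here; the descriptor arrays, the weights, and the kernel width are placeholders.
\begin{verbatim}
! Generic sketch of kernel prediction; not the implementation of any of the
! packages named above.  The descriptors d_query/d_train, the weights alpha
! (obtained in training), and the width sigma are placeholders.
function kernel_energy(d_query, d_train, alpha, sigma, ntrain, ndesc) result(e)
  implicit none
  integer, intent(in) :: ntrain, ndesc
  double precision, intent(in) :: d_query(ndesc)         ! descriptor of the query
  double precision, intent(in) :: d_train(ndesc,ntrain)  ! training descriptors
  double precision, intent(in) :: alpha(ntrain)          ! regression weights
  double precision, intent(in) :: sigma                  ! Gaussian kernel width
  double precision :: e, k
  integer :: i
  e = 0.d0
  do i = 1, ntrain     ! one kernel evaluation per training point: O(N) cost
     k = exp(-sum((d_query - d_train(:,i))**2)/(2.d0*sigma**2))
     e = e + alpha(i)*k
  end do
end function kernel_energy
\end{verbatim}
Training determines the weights $\alpha_i$; prediction then requires one kernel evaluation per training configuration, which is the origin of the $O(N)$ scaling noted above.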
pKREG uses a permutationally invariant kernel, and GAP-SOAP uses the Smooth Overlap of Atomic Positions (SOAP) descriptor\cite{soap16} and kernel.\cite{GP-2013} sGDML slightly differs from the other methods as it directly learns the force by training in the gradient domain.\cite{Tkatch2018,Tkatch19} The NN-based approaches studied in the paper by Pinheiro et al. can be further divided into two families. ANI and DPMD can be viewed as variants of the Behler-Parrinello NN (BPNN)\cite{Behler15}: the potential energy is written as the sum of atomic ones, and the descriptor is a local one centered on each atom that describes the local environment of that atom. The descriptors used in ANI and DPMD are different, but they are both manually designed. On the other hand, the second family, ``message-passing'' NNs, inspired by graph convolutional NNs, learns the descriptor. The descriptor of an atom gets updated using information from its neighbors. PhysNet, SchNet,\cite{SchNet} and MEGNet\cite{2019Chen} are examples of this family. One advantage of NN-based methods over kernel-based ones is that the cost of prediction is independent of the size of the training data once the NN structure is determined. The ACE approach represents the potential as a body-ordered expansion (each atom is one ``body''), which is resummed into atomic energies. Each atomic energy is a linear combination of a set of permutationally invariant basis functions centered on that atom. In our opinion, computational efficiency is of primary importance for ML PESs to become transformative to the field. By transformative, we mean the leap from classical to quantum simulations of the dynamics and statistical mechanics of molecules, clusters, and realistic samples for condensed-phase systems. While classical simulations have been done for years using ``direct dynamics'' (slow compared to using a PES, of course), this is simply not possible for quantum simulations. For these, one must have efficient ML PESs. For example, for a diffusion Monte Carlo (DMC) calculation of the zero-point energy of an isolated molecule, roughly 10$^8$ or more energy evaluations can be required for convergence.\cite{tropolone20,glycine20,conte_glycine20,NandiQuBowman2019} Nuclear quantum effects are known to be important for molecules, clusters, etc. with H atoms, and so incorporating these effects in simulations is important. Here we add PIP to the list of ML methods mentioned above for the ethanol case study. We use ethanol to showcase the performance of the new incorporation of reverse differentiation for gradients in our PIP software. The details for this are given below, followed by a short demonstration for the 4-body water potential that we recently reported\cite{4body} and then the detailed analysis for ethanol. Finally, we present a new PES for ethanol that is based on a dataset of B3LYP energies and gradients that extend to much higher energies than the MD17 dataset. The new PES is used in DMC calculations of the zero-point energy. Such calculations fail using a precise fit to the MD17 dataset, which is severely limited by the sampling of energies from a 500 K MD simulation.
\section*{Methods} \subsection*{Recap of the Current PIP Approach } \hspace{\parindent}In the PIP approach to fitting,\cite{Braams2009} the potential $V$ is represented in compact notation by \begin{equation} V(\bm{\tau})= \sum_{i=1}^{n_p} c_i p_i(\bm{\tau}), \label{eq1} \end{equation} where $c_i$ are linear coefficients, $p_i$ are PIPs, $n_p$ is the total number of polynomials for a given maximum polynomial order, and $\bm{\tau}$ are transformed internuclear distances, usually either Morse variables, exp($-r_{n,m}/\lambda$), or inverse distances, $1/r_{n,m}$, where $r_{n,m}$ is the internuclear distance between atoms $n$ and $m$. The range (hyper) parameter, $\lambda$, is usually chosen to be 2 bohr. The linear coefficients are obtained using standard least squares methods for a large data set of electronic energies at scattered geometries (and, for large molecules, using gradients as well). The PIPs, $p_i$ in Eq. (\ref{eq1}), are calculated from \emph{MSA} software\cite{Xie-nma, msachen} and depend on the monomials, $m_j$, which themselves are simple functions of the transformed internuclear distances $\bm{\tau}$. The inputs to the $MSA$ software include both the permutational symmetry and the overall order desired for the permutationally invariant polynomials. In brief the \emph{MSA} software proceeds by producing all monomials obtained from a seed monomial by performing the permutations of like atoms specified in the input. Examples of this step are given in the review by Braams and Bowman.\cite{Braams2009} Then polynomials, which are formally the sum of monomials, are obtained in a recursive method starting with the lowest-order polynomials. In this approach higher-order polynomials are obtained by a binary factorization in terms of lower order polynomials plus a remainder. Details are given elsewhere\cite{Xie-nma, msachen}; however, we mention this essential aspect of the software as it gives some insight into complexity of determining the gradient of this representation of the potential. For some applications, there are further requirements on the PIP basis set so that one can ensure that the fit rigorously separates into non-interacting fragments in asymptotic regions. Identifying polynomials that do not have the correct limiting behavior is what we call ``purification''\cite{purified14, ConteQuBowman2015, purified15c, QuConteHoustonBowman2015} of the basis. Details of a recent example to the 4-body water PIP PES have been given; we refer the interested reader to that\cite{4body} and earlier references. Polynomials that do not have this property (``disconnected terms''\cite{purified13}) are labeled as $q_i$ and separated from the basis set used to calculate the energy in Eq. (\ref{eq1}). Thus, we now have polynomials of two types, those having the correct limiting behavior that will be used in the energy and gradient calculation, $p_i$ (see Eq. (\ref{eq1})), and those, $q_i$, that, while not having the correct limiting behavior, might still be needed because they could occur, for example, with multiplication by a polynomial that does have the correct limiting behavior. \subsection*{PIP Approach with Compaction and Fast Gradient Evaluation } The first enhancement to the PIP approach is what we call ``compaction'' and is aimed at determining which polynomials $q_i$ and which monomials $m_i$ are not necessary. We identify the unneeded monomials by increasing, one at a time, the value of the monomial to see if the values of the surviving $p_i$ polynomials change. 
If they do not, that monomial may safely be eliminated. We identify the unneeded $q_i$ by enumerating all needed $q_i$ that occur in the definitions of the $p_i$, as well as the $q_j$ with $j < i$ needed to define those $q_i$, and then taking the remainder to be unneeded. The compaction then consists of deleting all references to the unneeded $m_i$ and $q_i$, followed by renumbering of all the surviving $m_i$, $q_i$, and $p_i$.

The final steps in the development of the basis set are to add methods for calculating analytical gradients. The first method, which we will call the ``normal'' gradient calculation,\cite{QuBowman2019,NandiQuBowman2019} is obtained formally by differentiating both sides of Eq. (\ref{eq1}) to determine how the change in potential depends on the change in the $p_i$. Of course, the $p_i$ depend on $q_i$, $m_i$, and the internuclear distances, themselves a function of the atomic Cartesian coordinates. Thus, we must also differentiate $q_i$, $m_i$, and $\tau_i$ with respect to each of the $3N$ Cartesian coordinates. We accomplish this conveniently by using the symbolic logic in Mathematica,\cite{Mathematica} a program whose mixture of text manipulation and expression evaluation is also used to write Fortran code for the aforementioned purification and compaction steps. Although the simple differentiation just described for the analytical gradients is straightforward, its implementation does not result in a fast gradient calculation. For example, the straightforward code would need to evaluate all the differentiated monomials, polynomials, and $\bm{\tau}$ values $3N$ times for a single geometry.

We have also written a ``Fast (forward) Derivative'' method\cite{conte20_efficientPIP} that uses Mathematica's symbolic logic to solve for the derivatives of each $p_i$ with respect to each component of $\bm{\tau}$, which we denote by $\tau_M$, where $M = 1, \ldots, N(N-1)/2$. We start with the derivatives of Eq. (\ref{eq1}); in our case, we have \begin{equation} \frac{\partial{V}}{\partial{\tau_M}} = \frac{\partial{}}{\partial{\tau_M}}(\sum_{i=1}^{n_p} c_i p_i) = \sum_{i=1}^{n_p}c_i \frac{\partial{p_i}}{\partial{\tau_M}} \label{eq: dEdxi} \end{equation} Next let $\alpha_n$ be the $x$, $y$, or $z$ Cartesian coordinate of atom $n$. The calculation is completed by determining \begin{equation} \frac{\partial{V}}{\partial{\alpha_n}} = \sum_{M}\frac{\partial{V}}{\partial{\tau_M}}\frac{\partial{\tau_M}}{\partial{\alpha_n}} = \sum_{M}\sum_{i=1}^{n_p}c_i \frac{\partial{p_i}}{\partial{\tau_M}}\frac{\partial{\tau_M}}{\partial{\alpha_n}}. \label{eq: dEdxck} \end{equation} For any geometry, the derivatives $\frac{\partial{p_i}}{\partial{\tau_M}}$ on the rhs of Eq. (\ref{eq: dEdxck}) are stored so that the calculation of each $p_i$ derivative with respect to each $\bm{\tau}$ component does not need to be repeated; only the remaining derivatives in Eq. (\ref{eq: dEdxck}) of the $\bm{\tau}$ components with respect to the $3N$ Cartesian coordinates need to be performed. In addition, since many of the derivatives are zero, we store only the non-zero ones and pair them with an index which indicates to which $p_i$ and $\tau_M$ they belong. This method speeds up the calculation substantially, but it is still not the best method; we describe the best one next.

Automatic differentiation has been the subject of much current study\cite{BaydinPearlmutter2014,Baydin2018} and is widely disseminated on the internet. It has found its way into computational chemistry,\cite{autodiffSchaefer} mainly via libraries written in Python. 
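As a concrete illustration of the data flow in Eqs. (\ref{eq: dEdxi}) and (\ref{eq: dEdxck}), a minimal Python sketch of the forward chain-rule evaluation is given below. It is only an illustration, not our Fortran implementation; in particular, the function \texttt{basis\_and\_jacobian} is a hypothetical stand-in for the \emph{MSA}-generated routine that returns the polynomials and their derivatives with respect to the Morse variables.

\begin{verbatim}
import numpy as np

def morse_and_jacobian(X, lam=2.0):
    # X: (N, 3) Cartesian coordinates (bohr).
    # Returns tau (one Morse variable per atom pair) and
    # d tau / d x with shape (n_pair, 3N) for the flattened coordinates.
    N = X.shape[0]
    pairs = [(n, m) for n in range(N) for m in range(n + 1, N)]
    tau = np.empty(len(pairs))
    dtau_dx = np.zeros((len(pairs), 3 * N))
    for M, (n, m) in enumerate(pairs):
        d = X[n] - X[m]
        r = np.linalg.norm(d)
        tau[M] = np.exp(-r / lam)
        g = (-tau[M] / lam) * (d / r)      # d tau_M / d x_n
        dtau_dx[M, 3 * n:3 * n + 3] = g
        dtau_dx[M, 3 * m:3 * m + 3] = -g   # d tau_M / d x_m
    return tau, dtau_dx

def energy_and_forward_gradient(X, c, basis_and_jacobian, lam=2.0):
    # basis_and_jacobian(tau) must return p (n_p,) and dp/dtau
    # (n_p, n_pair); it stands in for the MSA-generated basis code.
    tau, dtau_dx = morse_and_jacobian(X, lam)
    p, dp_dtau = basis_and_jacobian(tau)
    V = c @ p                    # Eq. (1): dot product of coefficients
    dV_dtau = c @ dp_dtau        # Eq. (2): one entry per tau_M
    grad = dV_dtau @ dtau_dx     # Eq. (3): chain rule, summed over M
    return V, grad
\end{verbatim}

Note that the stored derivatives $\partial p_i/\partial\tau_M$ are reused for all $3N$ Cartesian components, which is the saving offered by the fast forward method discussed above.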
We have discussed above some of the issues involved in what is called the ``forward'' method of automatic differentiation. When there are only a few inputs and many outputs, the forward method is usually quite adequate. For our problem, however, there are many inputs ($3N$ Cartesian coordinates) and only one output (the energy, whose gradient we seek). In this case, a ``reverse'' (sometimes called ``backward'') differentiation is often faster. In either case, we start with a computational graph of the steps to be taken in the forward direction that ensures that the prerequisites for any step have already been calculated and that provides an efficient computational plan, i.e., one that does not recalculate anything that has been previously calculated. The \emph{MSA} output provides such a plan for calculating the energy, which in our case is a fairly linear plan: to get the potential energy, start with the Cartesian coordinates, $\alpha_n$, then calculate the transformed internuclear distances $\tau_M$, then the $m_k$, then the $p_i$ (or, in our purified case, the $q_j$ followed by the $p_i$), and finally take the dot product of the $c_i$ coefficients with the evaluated $p_i$ polynomials. Note that, in principle, any of the quantities, $\alpha,\tau,m,q,$ or $p$, can influence any of the ones that come after it (in the forward direction). In addition, note that there is a split at the end, so that, for example, any $p$ can influence the energy either by its contribution to the dot product or through its influence on any of the $p$ that come after it. The sequence of steps in the correct order is, of course, maintained in the purification and compaction steps.

For the forward derivatives, everything is the same as for the energy except that each step is replaced by its derivatives, as shown in the left column of Table \ref{tab: computationgraph}, which follows the Fortran notation of putting the subscripts in parentheses. We also note that the Fortran code makes no distinction between full and partial derivatives. The ``normal'' differentiation discussed in the previous paragraph is accomplished by working in the forward (up) direction of the left column, but one has to make $3N$ forward passes of the computational plan to get the gradients. The reverse automatic differentiation allows one to work backwards (down in the right column) from the derivative of the potential energy to get all $3N$ gradients in one pass of the computational plan, after having run the energy plan once in the forward direction. This strategy results in an extremely efficient method. It also scales more favorably with an increase in the number of atoms because, in the reverse direction, the cost of the $3N$ gradients is typically 3--4 times the cost of the energy,\cite{Baydin2018,GriewankWalther2008} whereas in the forward direction it is about $3N$ times the cost of the energy. Evidence that this is the case for application to PIPs will be presented in a subsequent section.

\begin{table}[ht] \caption{Forward and Reverse Automatic Differentiation for PIP basis sets} \centering \begin{tabular*}{\columnwidth}{ c c c } \hline \hline\noalign{\smallskip} Forward (up) &\hspace{.5cm} &Reverse (down) \\ ${\partial{V}} = \bm{c}\cdot\partial{\bm{p}}$ & & ${\partial{V}} = \bm{c}\cdot\partial{\bm{p}}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} $dp(n_p) = $... & & $a(n_{max})= \frac{\partial{V}}{\partial{p(n_p)}} = c(n_p) $\\ $dp(n_p-1) = $... & & $a(n_{max}-1)=$ \\ & &$c(n_p-1) + a(n_{max})\frac{\partial{p(n_p)}}{\partial{p(n_p-1)}} $\\ ... 
& & ...\\ $dp(0) = dm(0)$& & \\ $dq(n_q) = $ ... & &\\ ... & &...\\ $dq(1) = $ ... & &\\ $dm(n_m) = $ ...& & \\ ... & & ... \\ $dm(0) = 0$ & & $a(j)=\sum_{i=j+1}^{n_{max}}a(i)\frac{\partial{t(i)}}{\partial{m(0)}}$\\ ... & & ... \\ $d\tau(M) =$ ... & & \\ ... & & ... \\ $dx_n= dx_1 = $... & & $a(3N) = \frac{\partial{V}}{\partial{x_1}}$\\ $dy_n= dy_1 = $... & & $a(3N-1) = \frac{\partial{V}}{\partial{y_1}}$\\ ... & & ... \\ $dz_n= dz_N = $... & & $a(1) = \frac{\partial{V}}{\partial{z_N}}$\\ \noalign{\smallskip}\hline \hline\noalign{\smallskip} \end{tabular*} \label{tab: computationgraph} \end{table}

We define the adjoint, $a_j$, of a particular instruction as the partial derivative of the potential energy with respect to the conjugate variable, $t_j$, where $dt_j$ is the differential that is defined by the instruction in the forward direction: thus, $a_j = \frac{\partial{V}}{\partial{t_j}}$. The adjoints will provide the instructions for proceeding in the reverse direction, down column two of Table \ref{tab: computationgraph}. When we reach the end, the adjoints $\frac{\partial{V}}{\partial{\alpha_{n}}}$ will give the $3N$ derivatives we seek. Of course, in determining which $t_j$ contribute to the adjoint, we must sum all the ways that a change in $V$ might depend on $t_j$. It is instructive to work a few adjoints in the reverse direction (see Table \ref{tab: computationgraph}). The first equation moving in the reverse direction will be the adjoint of the conjugate variable $dp_{n_p}$, defined in the forward direction, so the adjoint to evaluate is $\frac{\partial{V}}{\partial{p_{n_p}}}$ (which is equal to $c_{n_p}$). The next line in the reverse direction corresponds to the one that defined $dp_{(n_p-1)}$ in the forward direction. The change in $V$ with $p_{n_p-1}$ now depends potentially both on how a change in $p_{n_p-1}$ influences $V$ indirectly through $p_{n_p}$ and on how it influences $V$ directly through the contribution to the dot product. Thus, the adjoint is $\frac{\partial{V}}{\partial{p_{n_p-1}}}=c_{n_p-1} +\frac{\partial{V}}{\partial{p_{n_p}}}\frac{\partial{p_{n_p}}}{\partial{p_{n_p-1}}}=c_{n_p-1} +a(n_{max})\frac{\partial{p_{n_p}}}{\partial{p_{n_p-1}}}$. For the third line (not shown in the table), the adjoint is $\frac{\partial{V}}{\partial{p_{(n_p-2)}}}$. The change in $V$ with $p_{n_p-2}$ depends potentially both on how a change in $p_{n_p-2}$ influences $V$ indirectly through $p_{n_p}$ and $p_{n_p-1}$ and on how it influences $V$ directly through the contribution to the dot product. Thus, the adjoint is $\frac{\partial{V}}{\partial{p_{n_p-2}}}=c_{n_p-2}+ \frac{\partial{V}}{\partial{p_{n_p-1}}} \frac{\partial{p_{n_p-1}}}{\partial{p_{n_p-2}}} +\frac{\partial{V}}{\partial{p_{n_p}}}\frac{\partial{p_{n_p}}}{\partial{p_{n_p-2}}}=c_{n_p-2}+ a(n_{max}-1) \frac{\partial{p_{n_p-1}}}{\partial{p_{n_p-2}}} + \\a(n_{max})\frac{\partial{p_{n_p}}}{\partial{p_{n_p-2}}}$. Notice in all cases that the adjoint we seek is the $c$ coefficient of the conjugate variable plus the sum of all adjoints that preceded it, each times the derivative of its conjugate variable with respect to the conjugate variable of the adjoint we seek. The direct contribution through the $c_i$ occurs only if the conjugate variable is a $p$. 
This observation gives the formula for assigning all the remaining adjoints: \begin{equation} a_j(t_j)= c_i \delta_{t,p} +\sum_{i=j+1}^{i_{max}} a_i \frac{\partial{t_i}}{\partial{t_j}}, \label{eq: adjointrule} \end{equation} where $a_j$, with conjugate variable $t_j$, is the adjoint to be calculated, $i_{max}$ is the maximum number of adjoints, $c_i$ is the coefficient associated with $p_i$ when $t_j=p_i$, and $\delta_{t,p}$ is 1 if $t$ is a $p$ and 0 otherwise. Two ``toy'' examples, one of a homonuclear diatomic molecule and one of a single water molecule, are worked in detail in the Supplementary Material.

In the Mathematica implementation of our software, we pursue two routes in parallel: the first is to symbolically evaluate the adjoints in Eq. (\ref{eq: adjointrule}) using Mathematica code, and the second is to create the Fortran code from the Mathematica code for the same adjoints. The symbolic logic of Mathematica is used to evaluate the partial-derivative factors of the adjoint terms. The resulting adjoint is turned into Fortran format and saved as an instruction list. Because many of the partial derivatives on the rhs of Eq. (\ref{eq: adjointrule}) are zero, it is fastest to enumerate all the $t_i$ with $i \geq j+1$ that depend on $t_j$ and then perform the terms in the sum only for those values of $i$. The key Mathematica program for evaluating a particular adjoint is provided in the Supplementary Material, as is a brief description of the various Mathematica steps involved in fragmentation, purification, compaction, adding or pruning polynomials, and appending derivative functions. We need to make one point clear: the Mathematica code must be run to generate Fortran output for each permutational symmetry in which the user might be interested. Thus, there is an overhead on the order of an hour or so to generate the fast derivative method for most problems of interest. Once the basis set and derivative method have been established, however, they can be run without further change. Also, the basis and reverse derivative code can be used for any molecule with the same permutational symmetry.

\section*{Results}
\subsection*{4-body water interaction potential}
The first set of results is for the 12-atom, 4-body water potential, which we recently reported.\cite{4body} Here, we used permutational symmetry 22221111 with a total polynomial order of 3. The \emph{MSA} basis was purified by distancing each of the four monomers, one at a time, and distancing each of the sets of dimers, two at a time. In Table \ref{tab: timing} we show the times in seconds for the calculation of the energy and the $3N = 36$ gradients for purified/non-compacted and purified/compacted basis sets using four different gradient methods. Each time is for the evaluation of 20 000 geometrical configurations. It is clear from the table that the reverse derivative method is fastest and that it runs about 17 times faster than the 2-point finite difference method, often used in molecular dynamics calculations. In addition, compaction of the purified basis gives a further speed-up in this case of about 40\%. (Future plans call for using the 4-b PES for a full \emph{ab initio} water potential, so having a fast gradient is important for the usual MD and possibly PIMD simulations.)

\begin{table}[ht] \caption{Total time for performing 20 000 gradient sets ($3N = 36$ gradients each) for a 22221111 permutational symmetry basis of maximum order 3 and various derivative methods. 
This is a 12-atom basis, which was used recently for the 4-b water potential.\cite{4body}} \begin{tabular*}{\columnwidth}{ l c c c c } \hline \hline\noalign{\smallskip} & 2-pt. Finite & Normal & Fast & Reverse \\ & Difference & Analytical & Derivative & Derivative \\ \noalign{\smallskip}\hline\noalign{\smallskip} Fully purified/ & 13.2 s & 9.7 s & 1.0 s & 0.7 s \\ non-compact & & & & \\ & & & & \\ Fully purified/ & 3.2 s & 2.0 s & 0.8 s & 0.2 s \\ compact & & & & \\ \noalign{\smallskip}\hline \hline\noalign{\smallskip} \end{tabular*} \label{tab: timing} \end{table}

\subsection*{Ethanol}
\subsubsection*{Assessment of ML Methods}
As noted already, the performance of a number of ML methods was examined in detail for ethanol, using the MD17 database of energies and forces.\cite{MD17} This assessment provides detailed results with which we can compare our PIP method. The comparison is done using the same protocol used previously,\cite{dralchemsci} namely to obtain the RMSE for energies and gradients using training data sets of increasing size, both for training on just energies and for training on energies plus gradients. The permutational symmetry we use here is 321111, and we also consider the performance of two PIP bases, one of order 3 and the other of order 4. These have 1898 and 14 752 terms, respectively. We also consider a third basis of size 8895, obtained from pruning the $n = 4$ one. The procedure to do this is straightforward and is as follows. The desired number of terms is the input, and all the polynomials for $n = 4$ (14 752) are evaluated using the maximum values of the Morse variables (taken over the data set). Then the desired number of polynomials is obtained by starting with the one of largest value and proceeding downward. This procedure can reduce a basis to any desired size.

\begin{figure}[htbp!] \centering \includegraphics[width=\columnwidth]{Ethanol_Fitting_Comparison_2.pdf} \caption{Comparison of the different machine learning potentials trained on energies and forces for the MD17-ethanol dataset. The PIP results are for a basis with 14 752 terms. The upper panel shows root mean-squared error in energies (eRMSE) vs the number of training points and the lower panel shows root mean-squared error in forces (fRMSE) vs the number of training points.} \label{fig:Ethanol_fitting} \end{figure} \begin{figure}[htbp!] \includegraphics[width=\columnwidth]{Etthanol_Poly4_PES_E.pdf} \caption{Comparison of the different machine learning potentials trained on energies only for the MD17-ethanol dataset. The PIP results are for a basis with 14 752 terms. The upper panel shows root mean-squared error in energies (eRMSE) vs the number of training points and the lower panel shows root mean-squared error in forces (fRMSE) vs the number of training points.} \label{fig:Ethanol_fitting2} \end{figure} \begin{figure}[htbp!] \includegraphics[width=\columnwidth]{Etthanol_Poly3_PES_EG.pdf} \caption{Comparison of the different machine learning potentials trained on energies and forces for the MD17-ethanol dataset. The PIP results are for a basis with 1898 terms. The upper panel shows root mean-squared error in energies (eRMSE) vs the number of training points and the lower panel shows root mean-squared error in forces (fRMSE) vs the number of training points.} \label{fig:Ethanol_fitting3} \end{figure} \begin{figure}[htbp!] \includegraphics[width=\columnwidth]{Etthanol_Poly3_PES_E.pdf} \caption{Comparison of the different machine learning potentials trained on energies only for the MD17-ethanol dataset. 
The PIP results are for a basis with 1898 terms. The upper panel shows root mean-squared error in energies (eRMSE) vs the number of training points and the lower panel shows root mean-squared error in forces (fRMSE) vs the number of training points.} \label{fig:Ethanol_fitting4} \end{figure} \begin{table}[htbp!] \caption{Comparison of prediction time of the different machine learning potentials trained on MD-17 ethanol energies and forces. The times listed are those for calculation of the energy and forces for a total of 20 000 geometric configurations.} \label{tab:train1} \begin{threeparttable} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}rccccccc} \hline \hline\noalign{\smallskip} & \multicolumn{7}{c}{Prediction Timing (sec)} \\ \noalign{\smallskip} \cline{2-8} \noalign{\smallskip} $N_{\rm train}$ & ANI & DPMD & Phys & GAP- & sGDML & PIP\tnote{a} & PIP\tnote{b} \\ & & & -Net & SOAP & & & \\ \noalign{\smallskip}\hline\noalign{\smallskip} 100 & 27 & 180 & 213 & 212 & 7 & 0.23 & --- \\ 250 & 27 & 176 & 215 & 307 & 10 & 0.23 & --- \\ 500 & 27 & 176 & 214 & 560 & 15 & 0.23 & --- \\ 1000 & 27 & 182 & 215 & 1100 & 25 & 0.23 & 2.3 \\ 2500 & 27 & 189 & 216 & --- & 63 & 0.23 & 2.3 \\ 5000 & 27 & 186 & 216 & --- & 195 & 0.23 & 2.3 \\ 10000 & 27 & 188 & 213 & --- & --- & 0.23 & 2.3 \\ 25000 & 27 & 179 & 215 & --- & --- & 0.23 & 2.3 \\ 50000 & 27 & 176 & 214 & --- & --- & 0.23 & 2.3 \\ \noalign{\smallskip}\hline \hline \end{tabular*} \begin{tablenotes} \item[a] Maximum polynomial order 3 is used to fit the data, which leads to 1898 PIP bases. \item[b] Maximum polynomial order 4 is used to fit the data, which leads to 14572 PIP bases. \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[htbp!] \caption{Comparison of prediction time of the different machine learning potentials trained on MD-17 ethanol energies only. The times listed are those for calculation of the energy and forces for a total of 20 000 geometric configurations.} \label{tab:train2} \begin{threeparttable} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}rcccccccc} \hline \hline\noalign{\smallskip} & \multicolumn{8}{c}{Prediction Timing (sec)} \\ \noalign{\smallskip} \cline{2-9} \noalign{\smallskip} $N_{\rm train}$ & ANI & DPMD & Phys & GAP & KRR & pKREG & PIP\tnote{a} & PIP\tnote{b} \\ & & & -Net & SOAP & -CM & & & \\ \noalign{\smallskip}\hline\noalign{\smallskip} 100 & 26 & 99 & 313 & 211 & 2 & 2 & --- & --- \\ 250 & 27 & 95 & 291 & 306 & 5 & 3 & --- & --- \\ 500 & 27 & 95 & 288 & 561 & 11 & 5 & --- & --- \\ 1000 & 26 & 101 & 304 & 1101 & 21 & 10 & --- & --- \\ 2500 & 26 & 94 & 294 & 3611 & 52 & 22 & 0.23 & --- \\ 5000 & 26 & 97 & 304 & --- & 102 & 49 & 0.23 & --- \\ 10000 & 26 & 97 & 301 & --- & 203 & 97 & 0.23 & --- \\ 25000 & 26 & 99 & 298 & --- & 508 & 227 & 0.23 & 2.3 \\ 50000 & 26 & 93 & 295 & --- & 1018 & 438 & 0.23 & 2.3 \\ \noalign{\smallskip}\hline \hline \end{tabular*} \begin{tablenotes} \item[a] Maximum polynomial order 3 is used to fit the data, which leads to 1898 PIP bases. \item[b] Maximum polynomial order 4 is used to fit the data, which leads to 14572 PIP bases. \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[htbp!] \centering \caption{Assessment of the different machine learning potentials trained on energies for the MD17-ethanol dataset. 
The upper row shows root mean-squared error energy targets of around 0.5 and 0.1 kcal~mol$^{-1}$ for a test set of 20 000 configurations, while the columns show, for each of the methods, the training size required to achieve the targets and the time required for 20 000 energy and force predictions. Timings are based on the same Intel Xeon Gold 6240 processor; see text for details.} \label{tab:compare} \begin{threeparttable} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lrrrr} \hline \noalign{\smallskip} \hline \hline \noalign{\smallskip} Target eRMSE & \hspace{.7 cm}0.5\hspace{.1 cm} & \hspace{.7 cm}0.1\hspace{.1 cm} & \hspace{.7 cm}0.5\hspace{.1 cm} & \hspace{.7 cm}0.1\hspace{.1 cm} \\ (kcal-mol$^{-1}$) & & & & \\ \hline \noalign{\smallskip} \textbf{Method:} & \multicolumn{2}{c}{\textbf{\hspace{.4 cm}Training }} & \multicolumn{2}{c}{\textbf{\hspace{.4 cm}Prediction}} \\ & \multicolumn{2}{c}{\textbf{\hspace{.4 cm}Size}} & \multicolumn{2}{c}{\hspace{.4 cm}\textbf{Time\tnote{a} (sec)}} \\ \hline \noalign{\smallskip} pKREG & 2500 & 25000 & 22 & 227 \\ KRR & 5000 & 50000 & 102 & 508 \\ sGDML\tnote{b} & 100 & 1000 & 7 & 25 \\ GAP-SOAP & 500 & 2500 & 561 & 3611 \\ ANI & 2500 & 50000 & 26 & 26 \\ PhysNet & 5000 & 50000 & 300 & 300 \\ PIP\tnote{c} & 2500 & 10000 & 0.23 & 0.23 \\ PIP\tnote{d} & N/A & 25000 & N/A & 2.3 \\ \hline \noalign{\smallskip}\hline \end{tabular*} \begin{tablenotes} \item[a] Energies and forces (20 000 configurations). \item[b] Trained on forces only. \item[c] 1898-term basis. \item[d] 14 572-term basis. 25 000 is the smallest training size (see text for details). \end{tablenotes} \end{threeparttable} \end{table}

Fig. \ref{fig:Ethanol_fitting} shows a comparison of the root-mean-squared error (RMSE) values for the energies (eRMSE) and forces (fRMSE) for the indicated methods as a function of the size of the training set, based on fits to energies and gradients, with the exception of the sGDML method, which was fit to gradients only. For this PIP fit, the basis set contains 14 752 terms. All methods achieve small RMSE values at sufficiently high training sizes; the GAP-SOAP, sGDML, and PIP methods converge more rapidly. Similar plots for fitting to energies only are shown in Fig. \ref{fig:Ethanol_fitting2}. Where the results are available (at high training numbers), the PIP and pKREG precisions for both energies and forces are the best. Note that, with energies only, because of the large number of coefficients, it is inadvisable to train the large-basis PIP set on data sets that have fewer than about 25 000 configurations because of likely overfitting. Figures comparable to Figs. \ref{fig:Ethanol_fitting} and \ref{fig:Ethanol_fitting2} are shown for the smaller basis set (1898 terms) in Figs. \ref{fig:Ethanol_fitting3} and \ref{fig:Ethanol_fitting4}. As seen, training on energies plus gradients yields essentially the ultimate eRMSE and fRMSE for a training size of 1000 configurations. Although the precision is not as high as for the larger PIP basis or for the other ML methods, the timing results are much faster, especially in comparison to the non-PIP methods, as will be presented next. Training just on energies with this PIP basis does result in a smaller ultimate eRMSE. The results for using the PIP basis of size 8895, obtained from pruning the $n = 4$ one, to fit a dataset of 10 000 configurations are as follows. Training is done on energies plus gradients and produces eRMSE and fRMSE for this training dataset of 0.09 kcal~mol$^{-1}$ and 0.30 kcal~mol$^{-1}$~\AA$^{-1}$, respectively. 
The testing at 20 000 geometries produces eRMSE and fRMSE of 0.09 kcal~mol$^{-1}$ and 0.34 kcal~mol$^{-1}$~\AA$^{-1}$, respectively. Thus, the fitting accuracy of this pruned basis is very similar to that of the large PIP basis.

We now consider the time required for calculation of the energies and gradients as a function of training size. An analysis of timing was also reported in a plot in ref. \citenum{dralchemsci}, without the current PIP results. Tables \ref{tab:train1} and \ref{tab:train2} present timing results for the different machine learning potentials trained on ethanol energies plus gradients or energies alone, respectively. For all conditions, the time required by the PIP bases for the calculation of 20 000 geometric configurations is far less than that for the other methods listed, in most cases by more than one order of magnitude, particularly at the higher accuracy (lower RMSE) provided by larger training sizes. A note on the timing results is in order. All of the results in Tables \ref{tab:train1} and \ref{tab:train2} were obtained with the same Intel processor (Xeon Gold 6240). A summary comparison of these results is provided in Table \ref{tab:compare}. This shows the training size necessary to achieve the eRMSE target values of around 0.5 and 0.1 kcal~mol$^{-1}$ as well as the calculation time required for 20 000 energies and gradients. In order to reach a value of around 0.1 kcal~mol$^{-1}$ eRMSE, although a larger training size is necessary, the time required is approximately 50 times less for the small PIP basis and 5 times less for the large one, as compared to the fastest alternative. We note that, while the training size is important, once one has the method in place, what matters to most users is how fast one can perform molecular dynamics and quantum calculations using the PES.

Very recently, the ACE method has been used to fit the ethanol MD17 data set, as well as datasets for other molecules.\cite{ACEPIP_21} This method was trained and tested on splits of 1000 configurations each (energies plus gradients). The ACE method achieves an MAE ranging from around 0.1 kcal~mol$^{-1}$ down to 0.03 kcal~mol$^{-1}$, depending on the values of its hyperparameters. A detailed comparison with the small- and large-basis PIP fits is given in Table \ref{tab:MAE}. The timings for ACE were obtained on a 2.3 GHz Intel Xeon Gold 5218 CPU, which has essentially the same single-core performance as the Intel Xeon Gold 6240, but slower multi-core performance due to a smaller number of cores (16 vs 18). Taking that into account, we find that, for comparable MAEs, the PIP PESs run faster than the ACE fits by factors of roughly 20 to 100.

\begin{table}[htbp!] \caption{Mean absolute errors (MAE) of energies (kcal~mol$^{-1}$) and forces (kcal~mol$^{-1}$~\AA$^{-1}$) for ACE and PIP fits to MD17 datasets of energies and forces for ethanol, along with corresponding timings for 20 000 evaluations of energies and forces. Timings are based on two Intel processors that run at about the same speed; see text for details.} \label{tab:MAE} \begin{threeparttable} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}rccc} \hline \hline\noalign{\smallskip} Method & eMAE & fMAE & Timing (sec) \\ \noalign{\smallskip}\hline\noalign{\smallskip} ACE & 0.10 & 0.40 & 29 \\ PIP\tnote{a} & 0.15 & 0.50 & 0.23 \\ ACE & 0.06 & 0.30 & 65 \\ PIP\tnote{b} & 0.06 & 0.12 & 2.3 \\ \noalign{\smallskip}\hline \hline \end{tabular*} \begin{tablenotes} \item[a] 1898-term basis. \item[b] 14 572-term basis. 
\end{tablenotes} \end{threeparttable} \end{table}

\subsubsection*{A New ``DMC-certified'' PES}
The MD17 dataset for ethanol has been used to compare the performance of the ML methods with respect to training and testing RMS errors and prediction timings. This dataset has been used for this purpose for a number of molecules.\cite{dralchemsci,Tkatchjcp,ACEPIP_21} Beyond this important utility, one can inquire about the many uses that the PES fits can be put to. In the case of ethanol, one important application would be a rigorous calculation of the partition function. This is complicated owing to the coupled torsional modes, as pointed out in an approximate state-of-the-art study that, without a PES, relied on a variety of approximations to obtain the partition function.\cite{ethpartit} Another application, already noted above, is a diffusion Monte Carlo calculation of the zero-point energy and wavefunction. For such calculations, the dataset must have wide coverage of the configuration space and of the corresponding energies. As we show below, the MD17 ethanol dataset is distributed over the energy range 0--12 000 cm$^{-1}$ with respect to the minimum energy. This energy range is not sufficient to describe the zero-point energy, which is estimated to be roughly 17 500 cm$^{-1}$ from a normal-mode analysis. To emphasize this, we used the large-basis PIP PES in DMC calculations. As expected, we encountered a large number of ``holes'', i.e., configurations with large negative energy relative to the minimum in the database. These holes occurred mainly in regions of high energy, where the MD17 dataset does not have coverage.

To address this problem, we generated a new dataset at the B3LYP/6-311+G(d,p) level of theory that has much larger coverage of configuration space and energies. The data sets of energies and gradients were generated using our usual protocol, namely \textit{ab initio} microcanonical molecular dynamics (AIMD) simulations at a number of total energies. These AIMD trajectories were propagated for 20 000 time steps of 5.0 a.u. (about 0.12 fs) and with total energies of 1000, 5000, 10 000, 20 000, 30 000, and 40 000 cm$^{-1}$. A total of 11 such AIMD trajectories were calculated: one trajectory at the total energy of 1000 cm$^{-1}$ and two trajectories for each of the remaining total energies. The geometries and their corresponding 27 force components were recorded every 20 time steps from each trajectory to generate this new fitting dataset. These calculations were done using the Molpro quantum chemistry package.\cite{MOLPRO_brief} The final data set consists of 11 000 energies and corresponding 297 000 forces. We denote this new dataset as ``MDQM21''. The distributions of electronic energies of the MDQM21 and MD17 datasets are shown in Fig. \ref{fig:dataset_comparison}.

\begin{figure}[htbp!] \includegraphics[width=\columnwidth]{DFT_pot_Hist_Ethannol_comp.pdf} \caption{Distributions of electronic energies (cm$^{-1}$) of the MDQM21 and MD17 datasets relative to their minimum values.} \label{fig:dataset_comparison} \end{figure}

As seen there, the distribution of the MD17 dataset is the one that can be anticipated for $3N-6$ classical harmonic oscillators in a thermal distribution at 500 K. For ethanol there are 21 such oscillators, and the average potential energy is roughly 10 kcal~mol$^{-1}$ (roughly 3500 cm$^{-1}$), in accord with the distribution seen. By contrast, the distribution of the MDQM21 dataset is very broad compared to that of the MD17 dataset. 
This is a direct result of running sets of direct-dynamics trajectories. For the PES fits, we divided the MDQM21 dataset into a training set of 8500 geometries and a test set of 2500 geometries, i.e., roughly 80 percent for training and 20 percent for testing, and we used the same large PIP basis to fit this dataset. The training RMSEs for energies and forces are 40 cm$^{-1}$ (0.114 kcal~mol$^{-1}$) and 0.000334 hartree bohr$^{-1}$ (0.396 kcal~mol$^{-1}$~{\AA}$^{-1}$), respectively. The testing RMSEs for energies and forces are 51 cm$^{-1}$ (0.145 kcal~mol$^{-1}$) and 0.000484 hartree~bohr$^{-1}$ (0.574 kcal~mol$^{-1}$~{\AA}$^{-1}$), respectively. These are very similar to the energy and force RMSEs for the large-basis PIP PES trained on the MD17 dataset. The new PES was used successfully in DMC calculations. Each DMC trajectory was propagated for 25 000 time steps using 10 000 random walkers with a step size of 5.0 a.u. Fifteen DMC simulations were done, and the final ZPE, 17 168 cm$^{-1}$, is the average of the 15 trajectories with a standard deviation of 12 cm$^{-1}$. The DMC calculations completed without encountering any holes, and so the PES ``earns'' the title ``DMC-certified''. We also applied this PES to geometry optimization and normal-mode analysis, and the agreement with direct calculations is excellent. Results are given in the Supplementary Material.

\section*{Discussion}
\subsection*{ML Assessments}
We have shown that, for ethanol, the PIP timings are 10 to 1000 times faster than those of most other widely cited ML methods considered in a previous comprehensive assessment.\cite{dralchemsci} Similarly large factors were reported earlier in a comparison of timings with a straightforward GPR approach that fit energies only, using low-order PIPs in Morse variables as inputs, for four molecules, including the 10-atom formic acid dimer. At roughly the same RMS error for energies (0.1 kcal~mol$^{-1}$ or less), the GPR method was factors of 10--50 or more slower than PIP run on the same compute node.\cite{PIP-GP} A second example is 15-atom acetylacetone (AcAc). We recently reported a timing for energies of 0.08 ms per energy on a 4-fragment PIP PES,\cite{QuAcAc} using a dataset of MP2 energies and gradients reported earlier to obtain a PhysNet PES for AcAc.\cite{meuwly2020} Timings were not reported for the PhysNet PES for AcAc; however, the time per energy was reported as 4 ms for the smaller molecule malonaldehyde (Intel Core i7-2600K).\cite{meuwly2020} This is a factor of 50 larger than for the PIP PES, and so it is consistent with the factors of 100 (larger PIP basis) and 1000 (small basis) seen for ethanol relative to PhysNet. A final example is 5-atom \ce{OCHCO+}, where a PIP PES\cite{OCHCO15} is roughly 1000 times faster than a PES obtained with SchNet,\cite{SchNet} trained on the PIP-PES CCSD(T) electronic energies and run on the same Intel compute node. That ML method was previously tested on small molecules for which PIP PESs had been reported.\cite{brorsen19} Thus, we conclude from a variety of tests, especially those presented here, that PIP PESs are significantly more computationally efficient for energies, and now also for gradients, than other ML methods, and we can ask why this might be so. The short answer is the simplicity of Eq. (\ref{eq1}). This is just a dot product of the expansion coefficients with low-order polynomials. 
These are obtained using a binary factorization approach that is also efficient.\cite{Xie10,QuConteHoustonBowman2015} The training using this approach is also quite efficient, since it relies on solving a standard linear least-squares system of equations. A caveat about overall efficiency is the additional overhead, relative to other ML methods, due to the time needed to generate the PIPs using the \emph{MSA} software. In the present case, the time required for the small PIP basis is a few minutes and for the large basis roughly 1 hr. Clearly, these bases could be stored in a library of PIP bases for the given 9-atom symmetry and used for any other molecule with this symmetry. However, given the small computational effort to generate these bases, it is not clear that this is needed. Finally, we note that the codes developed for the methods tested previously\cite{dralchemsci} and here use a variety of languages, e.g., Python, FORTRAN, C, Julia. These were presumably selected by the developers of the codes based on their specific criteria. For scientific uses, especially for quantum calculations, which is the emphasis here, computational speed is a high priority. As already noted in the Introduction and seen here, this is one aspect that clearly separates the performance of the codes.

\begin{figure}[htbp!] \centering \includegraphics[width=\columnwidth]{TimingPlot.pdf} \caption{Calculation time for gradients, relative to that for the potential, for various methods as a function of the number of atoms, $N$.} \label{fig:timingplot} \end{figure}

\subsection*{PIP Fast Gradient For Larger Molecules}
One might still question whether the advances in computational efficiency made possible by the reverse automatic differentiation method will stand up for systems other than ethanol and the 4-body water potential. We have examined this question by comparing results from the water 2-body potential (6 atoms, unpublished), the ethanol potential (9 atoms, PIP orders 3 and 4, described above), the water 4-body potential (12 atoms),\cite{4body} the acetylacetone potential (15 atoms),\cite{QuAcAc} and the tropolone potential (15 atoms).\cite{tropolone20} The results for the timing cost for gradients, divided by that for the energy, are shown in Fig. \ref{fig:timingplot} for four different methods of differentiation: 2-point finite difference, normal analytical differentiation, fast (forward) differentiation, and reverse differentiation. The last two methods have been described in this Communication. First-order fit parameters are shown in the legend; the first three are constrained to have zero intercept, while the reverse data are fit to an intercept and slope. We noted earlier that the reverse method is predicted to have a nearly constant time cost, relative to the energy, of about 3--4. The figure shows this to be substantially true for the number of atoms, $N$, between 6 and 15; there is negligible slope and the intercept is 3.6. Scaling of the other methods is roughly as expected. Because there are two calls to the energy for the 2-point finite difference method, we might expect this method to go as $2 \times 3N$; we find it to scale as $5.8 N$. The normal differentiation needs to be performed $3N$ times, so one might expect the cost to vary as $3N$; it appears to vary as $3.7 N$. The cost of the ``fast'' method should be somewhere between that of the normal analytical differentiation and that of the reverse method; the result is $1.7 N$. 
The reverse method is by far the fastest, and this advantage grows linearly with $N$. It should be noted that the time for the energy calculation itself varies non-linearly with $N$, depending on the symmetry and order. It is roughly proportional to the sum of the number of polynomials and monomials.

\subsection*{New Ethanol PES}
The new DFT-based PES for ethanol was developed in just several ``human-days'' of effort using the well-established protocol in our group. It was fit with the same large PIP basis used for the assessment of PESs based on the MD17 dataset. Thus, we consider this new PES mostly an example of the ease with which PESs for a molecule with 9 atoms and two methyl rotors can be developed and used for quantum simulations. The immediate plan is to correct this PES using the CCSD(T) $\Delta$-ML approach we reported recently for N-methyl acetamide\cite{nandidelta_21} and acetylacetone.\cite{QudeltaJPCL_21} This new PES, along with extensive analysis, will be reported shortly. However, for possible use in testing ML methods, the extensive B3LYP dataset is available for download at \url{https://scholarblogs.emory.edu/bowman/potential-energy-surfaces/}.

\section*{Summary and Conclusions}
We reported new software incorporating reverse differentiation to obtain the gradient of a potential energy surface fit using permutationally invariant polynomials (PIPs). We demonstrated the substantial speed-up using this method, versus previous methods, for our recent 4-body water interaction potential. A detailed examination of training-testing precision and timing for ethanol, using the MD17 database of energies and gradients, against GAP-SOAP, ANI, sGDML, PhysNet, pKREG, KRR, ACE, etc., was given. These methods were recently assessed in detail by Dral and co-workers.\cite{dralchemsci} PIP bases with 1898, 8895, and 14 572 terms were considered. Training on energies plus gradients for a data size of 250 configurations, the smallest PIP basis has RMSEs for energies and forces that are close to those from GAP-SOAP and sGDML (which are the best of all the other ML methods). Prediction times for 20 000 energies plus gradients, however, are very different (accounting for small documented differences in the various Intel CPUs). Normalized so that PIP is 1.0, sGDML and GAP-SOAP are 45 and 1395, respectively. The timings for sGDML and GAP-SOAP increase with training size, whereas those for this PIP basis do not. The eRMSEs for sGDML and GAP-SOAP decrease to a final value of 0.1 kcal~mol$^{-1}$, which is about half the eRMSE for this small PIP basis. However, the prediction times grow substantially for sGDML and GAP-SOAP, such that the times relative to this small PIP basis are 886 and 5000, respectively. Ultimately, among all the non-PIP methods, the neural-network PhysNet and ANI methods achieve the lowest energy and force RMSEs, roughly 0.06 kcal~mol$^{-1}$ and 0.20 kcal~mol$^{-1}$~\AA$^{-1}$, respectively, when trained on 10 000 configurations. The largest PIP basis of 14 572 terms achieves very similar RMSEs but runs faster by factors of 144 and 18 compared to PhysNet and ANI, respectively. The middle-sized PIP basis of 8895 terms runs roughly 26 percent faster than the large PIP basis and, when trained on energies and gradients at 10 000 configurations, achieves testing energy and force RMSEs of 0.09 kcal~mol$^{-1}$ and 0.34 kcal~mol$^{-1}$~\AA$^{-1}$. Diffusion Monte Carlo calculations of the zero-point energy fail on the largest-basis PIP PES trained on the MD17 dataset due to many ``holes''. 
This is explained by noting that the energies of this dataset extend to only about 60\% of the ZPE. A new ethanol PIP PES is reported using a B3LYP dataset of energies and gradients that extend to roughly 92 kcal~mol$^{-1}$. DMC calculations are successful using this PES.

\section*{Supplementary Material}
The supplementary material contains examples of the reverse differentiation for PIP bases for a diatomic and a triatomic molecule, as well as Mathematica code for calculating an adjoint and a brief description of our Mathematica suite of programs. A comparison of the optimized geometry of the minimum and the normal-mode frequencies from the new ethanol PES with direct B3LYP calculations is also given.

\section*{Acknowledgements}
JMB thanks NASA (80NSSC20K0360) for financial support. We thank Pavlo Dral for useful discussions, for providing new plots of the learning curves that include the new results using the PIP bases, and for timing our PIP basis on the same Intel processor used to time the ML methods in ref. \cite{dralchemsci}. We thank David Kovacs and Gabor Csanyi for sending details of the ACE timings and eRMSE values, and we thank Kurt Brorsen for running the SchNet timing calculations for \ce{OCHCO+}.

\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request. The B3LYP ethanol dataset is available for download at \url{https://scholarblogs.emory.edu/bowman/potential-energy-surfaces/}. The Mathematica notebooks used in this work are also available upon request.

\section*{References}
\section{Introduction}
\label{sec:intro}
Process mining is a family of techniques that facilitate the discovery and analysis of business processes based on execution data. Process mining techniques use event logs extracted from enterprise information systems to, for instance, discover process models or to check the conformance of a process with respect to a reference model~\cite{van2012process}. In this setting, an event log is a dataset capturing the execution of a process step by step, including timestamps, activity labels, case identifiers, resources, and other contextual attributes related to each case or each step within a case. Over time, the scope of process mining has extended to encompass techniques that predict the outcome of ongoing cases of a process based on machine learning models constructed from event logs~\cite{maggi2014predictive}. Predictions, however, only become useful to users when they are combined with recommendations~\cite{Marquez2018}. In this setting, \textit{prescriptive process monitoring} is a family of methods to recommend interventions during the execution of a case that, if followed, optimize the process with respect to one or more performance indicators~\cite{shoush2021prescriptive}. For instance, an intervention might improve the probability of the desired outcome (e.g.\ on-time delivery) or mitigate negative outcomes (e.g.\ delivery delays)~\cite{metzger2020triggering}. Different prescriptive process monitoring methods have been proposed in the literature. These methods vary in relation to, among other aspects, their predictive modeling approach and the interventions they prescribe. In some cases, two different methods aim at achieving the same objective but in different ways. For instance, to avoid an undesired outcome, one method prescribes assigning resources for the next task \cite{SindhgattaGD16}, whereas another recommends the next task to be executed~\cite{LeoniDR20}. The benefits of prescriptive process monitoring can only be fully realized if these methods prescribe effective interventions and if these prescriptions are followed~\cite{DeesLAR19}. At present, though, the variety and fragmentation of prescriptive monitoring methods make it difficult to understand which method is likely to be most effective or more likely to be accepted by end users in a given business situation. There is no overview that captures existing prescriptive monitoring methods, what objectives they pursue, which interventions they prescribe, which data they require, or the extent to which these methods have been validated in real-life settings. Research overviews and classification frameworks have been put forward in the related fields of predictive monitoring~\cite{di2018predictive,Marquez2018} and automated resource allocation~\cite{arias2018human,DBLP:journals/corr/abs-2107-07264}, but not in the field of prescriptive process monitoring. 
To address this gap, we study three research questions: \begin{itemize} \item Given that prescriptive process monitoring methods aim at prescribing interventions that produce business value, i.e., achieve an objective, we formulate the first research question as \RQ{RQ1}{What is the objective for using prescriptive process monitoring methods in the process?} \item The second research question aims at discovering how the objectives can be achieved: \RQ{RQ2}{What are the interventions prescribed by prescriptive process monitoring methods?} \item Finally, we explore the data required by the proposed methods: \RQ{RQ3}{What data do prescriptive process monitoring methods require?} \end{itemize} To answer these questions, we conduct a Systematic Literature Review (SLR) following the guidelines proposed by Kitchenham et al.~\cite{kitchenham2007guidelines}. We identified 36 papers, which we analyze to develop a multi-dimensional framework that characterizes prescriptive monitoring methods. The contribution of the paper is twofold. We first develop a framework that classifies the prescriptive process monitoring methods according to their objective, metric, intervention types, techniques, data inputs, and policies to trigger the interventions. Second, we provide insights into potential areas for future research in this field. The rest of this paper is structured as follows. Section~\ref{sec:prescriptive} discusses related work. In Section~\ref{sec:method}, we elaborate on the protocol of the SLR, while in Section~\ref{sec:results} we present our findings. In Section~\ref{sec:framework} we present the proposed framework, followed by concluding remarks in Section~\ref{sec:conclusion}.

\section{Related Work}
\label{sec:prescriptive}
Methods for prescriptive process monitoring prescribe interventions that can change the outcomes of an ongoing process case. For instance, if a method detects that an undesired outcome is likely to occur, an alarm is raised that can lead to an intervention \cite{TeinemaaTLDM18}. This intervention could come in the form of an action performed by a process worker, such as calling a customer, that helps to mitigate or prevent the negative outcome from materializing \cite{TeinemaaTLDM18}. Interventions often entail an intervention cost (e.g., time spent executing an action) and a cost of the undesired outcome (e.g., the order is canceled) \cite{fahrenkrog2019fire}. Thus, it is essential to define a policy for when the prescription is generated. The example above \cite{fahrenkrog2019fire} considers the probability of a negative outcome and evaluates the cost model and the mitigation effectiveness before triggering interventions. A few previous studies focus on areas that are related to prescriptive process monitoring. Di Francescomarino et al.~\cite{di2018predictive} introduce a value-driven framework that allows companies to identify when to apply predictive process monitoring methods. Another classification of predictive process monitoring methods is that of Márquez-Chamorro et al.~\cite{Marquez2018}, where the focus is on methods to train predictive models. In Mertens et al.~\cite{8904693}, the authors evaluate predictive methods used to recommend follow-up activities in the healthcare domain. While prescriptive process monitoring methods incorporate predicted outputs, our work focuses only on prescriptive methods and prescribed interventions. Pufahl et al.~\cite{DBLP:journals/corr/abs-2107-07264} present an SLR on automatic resource allocation. 
Similarly, Arias et al.~\cite{arias2018human} give an overview of resource allocation methods, but with a particular focus on human resources. However, the former work reviews prescriptive methods in general, while the latter two focus on a specific intervention, namely, resource allocation. In this paper, we enrich such works by considering all types of potential interventions in process-aware methods.

\section{SLR Method}
\label{sec:method}
We aim to review the existing body of work on prescriptive process monitoring methods. More specifically, we ask what the objectives for using such methods are (\hrq{RQ1}), what interventions the methods prescribe (\hrq{RQ2}), and what data the methods require (\hrq{RQ3}). Therefore, we use the systematic literature review (SLR) method, as it helps us identify relevant literature in a specific research area \cite{kitchenham2007guidelines}. We follow the guidelines proposed by Kitchenham et al.~\cite{kitchenham2007guidelines}, who propose three main steps: (1) planning the review, (2) conducting it, and (3) reporting the findings. For the first step (planning), we identified research questions and developed the review protocol~\cite{kitchenham2007guidelines}. The research questions were defined and motivated above. We developed a search string for the review protocol, identified suitable electronic databases, and defined inclusion and exclusion criteria. Finally, we defined the data extraction strategy. In the search string, we included ``process mining'' to scope the study to methods that rely on event logs. We derived the term ``prescriptive'' from the research questions. We also included the terms ``recommender'' and ``decision support'', as we found these to be sometimes used instead of ``prescriptive''. Accordingly, we formulated the search string: \smallskip \noindent \begin{tabular}{|L{12cm}|} \hline \footnotesize \textit{(recommender OR ``decision support'' OR prescriptive) AND ``process mining''} \\ \hline \end{tabular} While conducting the first search, we noted that the term ``prescriptive process monitoring'' is not consistently used. Using this search string only might thus lead us to miss relevant studies. We addressed this by examining the papers identified with the first search string to identify other terms in use. We noted that terms such as ``next-step recommendation'', ``next best actions'', and ``proactive process adaptation'' are used as synonyms for prescriptive process monitoring. We also noticed that the phrase ``business process'' often appeared in titles and keywords. Therefore, we formulated the second search string as: \noindent \begin{tabular}{|L{12cm}|} \hline \textit{(recommender OR ``next activity'' OR ``next step'' OR ``next resource'' OR proactive) AND ``business process''} \\ \hline \end{tabular} We searched using both strings on ACM Digital Library, Scopus (includes SpringerLink), Web of Science, and IEEE Xplore. The databases were selected based on their coverage of publications in the field of process mining. We also ran the search strings on Google Scholar to capture potentially relevant works not yet formally published (e.g., arXiv preprints). Finally, we conducted backwards referencing (snowballing)~\cite{okoli2010guide} to identify additional relevant papers. Next, we defined the exclusion and inclusion criteria. We excluded papers not digitally accessible (EC1), not in English (EC2), duplicates (EC3), and papers shorter than six pages (EC4). 
Exclusion criteria EC1 and EC2 ensure that the paper can be generally accessed and understood by other researchers. Papers that are open access or accessible via institutional access are considered accessible. Criterion EC3 removes duplicates that can appear since several digital libraries are used. We applied criterion EC4, as papers with fewer than six pages are likely not to contain enough information for our analysis. We also defined three inclusion criteria: (IC1) the paper is relevant to the domain of prescriptive process monitoring, (IC2) the paper presents, reviews, discusses, or demonstrates a method or a case for prescriptive process monitoring, and (IC3) the paper describes at least one way to identify candidate interventions for an ongoing process case. IC1 aims to filter out papers outside the scope of the prescriptive process monitoring domain. With IC2, studies that present any theoretical discussion or practical application of a method are included. Inclusion criterion IC3 ensures that the paper contains enough information to address the research questions. Finally, we defined the data extraction strategy. We first captured the metadata of all papers (title, author, publication venue, year). Then we defined the data required to address the research questions. Thus, for \hrq{RQ1}, we defined the data required to identify the objective of using the prescriptive process monitoring technique and the performance metric(s) targeted in each of the papers. Next, we defined the data to elicit the interventions prescribed, the process perspective, and the users the prescribed interventions are presented to (\hrq{RQ2}). Finally, we defined the required data input for the techniques described in the different papers (\hrq{RQ3}). Additionally, we recorded the modeling techniques and the policies used to trigger interventions. We ran the search\footnote{The first search was conducted on 22 Sep 2021, the second on 12 Oct 2021.} and identified a total of 1367 papers. We filtered them using the exclusion criteria EC1 and EC2, which resulted in the removal of 97 papers. Thus, 1270 papers remained and were filtered based on EC3, leaving 1010 papers. From these, we removed short papers (EC4), which resulted in 900 papers remaining. These were filtered by title, thus removing papers that were clearly out of scope. The remaining 171 papers were filtered by abstract, resulting in 66 papers remaining. Finally, we applied the inclusion criteria by reading the full papers and removing 44 papers. As a result, 22 papers remained. A total of 14 papers were added through backwards referencing, resulting in the final list of 36 relevant papers (see Table~\ref{tab:filtering})\footnote{Full review protocol: \url{https://doi.org/10.6084/m9.figshare.17091554.v3}}. 
\begin{table}[] \centering \vspace*{-2mm} \caption{Paper Selection Process}\label{tab:filtering} \begin{tabular}{L{4cm}|P{1.2cm} P{1.1cm}|P{1.2cm} P{1.1cm} |P{1.2cm} P{1.1cm}} Search & First & & Second & & Aggregated \\ \hline Selection criteria & \# found & \# left & \# found & \# left & \# found & \# left \\ \hline Search results & 572 & & 795 & & 1367 \\ Data cleaning & 60 & 512 & 37 & 758 & 97 & 1270 \\ Filtering by duplicates & 116 & 396 & 144 & 614 & 260 & 1010 \\ Filtering by \# of pages & 31 & 365 & 79 & 535 & 110 & 900 \\ Filtering by paper title & 252 & 113 & 477 & 58 & 729 & 171 \\ Filtering by paper abstract & 64 & 49 & 41 & 17 & 105 & 66 \\ Filtering by full paper & 34 & 15 & 10 & 7 & 44 & 22 \\ Backward referencing & 12 & & 2 & \\ \textbf{Total} & & 27 & & 9 & & \textbf{36} \\ \end{tabular} \end{table}

To derive the framework, we started by clustering the methods according to what they were aiming to improve (\hrq{RQ1}), e.g., ``cycle time minimization'' or ``cost optimization''. We then noted that the methods formed two prominent groups that, in the end, served as the main categorization of the framework, i.e., the objectives. Within the groups, we followed the research questions to classify the methods further, such as according to the interventions they trigger (\hrq{RQ2}) and the input data they require (\hrq{RQ3}).

\section{SLR Results}
\label{sec:results}
In the following sections, we present the results of our review. First, we describe the objectives of prescriptive process monitoring methods that we found (\hrq{RQ1}). Then, we present the interventions prescribed to achieve the objectives (\hrq{RQ2}), and, finally, we outline the data used to do so (\hrq{RQ3}).

\subsection{Prescriptive Process Monitoring Objectives}
\label{subsec:objective}
From our review, we identified two main objectives that prescriptive process monitoring methods aim to achieve. The first objective is to reduce the defect rate, whereas the second relates to optimizing quantitative case performance. The objective of reducing the defect rate is expressed with binary metrics; for instance, the objective is achieved by reducing the risk of cost overrun \cite{ConfortiLRAH15}. The objective of optimizing quantitative case performance is expressed as, for instance, reducing cycle time \cite{WibisonoNBP15}. As to the objective of reducing the defect rate, five papers discuss undesired temporal outcomes, such as the violation of the planned cycle time or deadline \cite{groger2014prescriptive,SindhgattaGD16,WeinzierlDZM20,HuberFH15,LeoniDR20}. For instance, Gröger et al. \cite{groger2014prescriptive} describe the example of a manufacturing process, where the target is to avoid exceeding the allowed limits for cycle time. Another set of studies focuses on avoiding or mitigating an undesired categorical outcome \cite{TeinemaaTLDM18,fahrenkrog2019fire,metzger2020triggering,shoush2021prescriptive,ghattas2014improving,thomas2017recommending,Mertens20,HaisjacklW10}. For example, Ghattas et al. \cite{ghattas2014improving} try to avoid the customer rejecting the delivery in a bottle manufacturing process. In the domain of healthcare, examples of undesired outcomes are patients entering a critical stage \cite{thomas2017recommending} or medical mistakes due to patient restrictions \cite{Mertens20}. One paper aims to eliminate or mitigate process risks, i.e., faults in the process that may arise if not addressed \cite{ConfortiLRAH15}. The second main objective considers optimizing quantitative case performance. 
Most papers consider optimizing the temporal perspective (15 out of 20), such as cycle or processing time. More specifically, in \cite{WibisonoNBP15,kim2013constructing,obregon2013dtminer,thomas2017online}, reducing cycle time is defined as the main objective. For instance, Thomas et al. \cite{thomas2017online} describe a method to minimize the cycle time of an environmental permit application process. Others focus on processing time, i.e., time spent by a resource resolving a task \cite{DumasRMR18}. For instance, in Park et al. \cite{ParkS19} the aim is to reduce the processing time of manual tasks in a loan application process. Another set of methods aims at increasing quality. For example, a method seeks to increase perceived service quality for the users of a financial web service \cite{WeinzierlSZM20}. Finally, two papers \cite{GoossensDH18,TerragniH19} describe methods that aim to improve revenues, e.g., by increasing customer lifetime value \cite{GoossensDH18}. \subsection{Prescribed Interventions} \label{subsec:interventions} Prescriptive process monitoring methods prescribe actions to take, i.e., interventions. These interventions can be categorized according to the process perspective of the prescribed intervention. Our review indicates that interventions commonly concern control flow and resource perspectives. A common intervention perspective is control flow, such as prescribing the next task to perform \cite{LeoniDR20,HeberHS15,Batoulis14,nakatumba2012meta}. More specifically, in de Leoni et al. \cite{LeoniDR20}, the next best task is prescribed to the professional who helps a customer in finding a new job, whereas in Weinzierl et al. \cite{WeinzierlSZM20}, the next step is presented to the end-user. Following the prescribed intervention can improve execution time, customer satisfaction or service quality. In other studies, a sequence of next steps is prescribed as an intervention. For instance, in one method, the appropriate treatment of a blood infusion is prescribed for patients based on their personal information \cite{DetroSPLLB20}, whereas another method prescribes steps to be taken in a trauma resuscitation process \cite{YangDSZFXBM17}. Such interventions aim to improve treatment quality. Another group of methods focuses on the resource perspective, e.g., which resource should perform the next task. For instance, Wibisono et al. \cite{WibisonoNBP15} prescribe which police officer is best suited for the next task in a driving license application process based on their predicted performance. In another method, a mechanic is recommended to carry out car repairs because s/he is predicted to finish it within a defined time given their schedule and experience \cite{SindhgattaGD16}. Some papers propose prescribing multiple interventions for one case \cite{shoush2021prescriptive,NezhadB11,BarbaWV11}. For example, an intervention to make an offer to a client is prescribed together with a suggestion for a specific clerk to carry out the task \cite{shoush2021prescriptive}. Similarly, in an IT service management process, recommending the next task and the specialist to perform it can help to resolve open cases quicker \cite{NezhadB11}. When reviewing the identified papers, we noted that interventions could be divided into two aspects: \textit{intervention frequency} and \textit{intervention basis}. Intervention frequency captures \textit{when} interventions are prescribed. In this sense, prescriptive monitoring methods can be \textit{continuous} or \textit{discrete}.
If the method is continuous, it prescribes an intervention for multiple or all activities of an ongoing case. For example, the best-suited resource for each next task is prescribed \cite{WibisonoNBP15}. Discrete interventions, in comparison, prescribe actions to be taken only when a need is detected. For instance, in Metzger et al. \cite{metzger2020triggering}, interventions are triggered only when it is detected that the probability of a negative outcome exceeds a defined threshold. The intervention basis describes whether a method is \textit{prediction}-based or \textit{similarity}-based. Prediction-based methods predict the outcomes of an ongoing case and then prescribe an intervention. Similarity-based methods in comparison provide recommendations solely based on an analysis of historical traces. For instance, one method predicts the possible outcomes if a task is performed as the next step \cite{LeoniDR20}. The next step is prescribed based on which option leads to a greater metric improvement. In contrast to this method, a set of actions are prescribed based on the similarity rate of a current ongoing case and similar previous cases in another method \cite{TrikiSDH13}. \subsection{Required Data Input} \label{subsec:data} Prescriptive process monitoring methods we identified use control flow, resource, temporal, and domain-specific data. Some methods focus on a single type of data, but other methods combine data input from different types. As expected, methods that prescribe interventions impacting control flow, such as the next task to execute, commonly use control flow data. For example, in Conforti et al. \cite{ConfortiLRAH15}, the authors apply decision trees on data, such as task duration and frequency, to predict the risk of a case fault, e.g., exceeding the maximum cycle time and costs overrun. Goossens et al. \cite{GoossensDH18} prescribe the next task by using the sequence of events as a key feature. Data on resources are used to trigger interventions related to different prescription perspectives. For instance, one method predicts the execution time of past resource performance \cite{yaghoibi2017cycle}. The data on resource performance is used to reallocate pending work items to the resources with higher efficiency. In another method, the authors use resource roles and capabilities combined with domain-specific features, such as vehicle type, to recommend which mechanic should be assigned the next task \cite{SindhgattaGD16}. The data is used to predict which resource would improve the probability of the vehicle repair being finished within a defined time. Temporal data, e.g.\ day of the week, is also used to prescribe interventions. Such data is commonly used in combination with other data, such as control flow or resources. For instance, the best-suited resource to execute the next task is recommended utilizing the period of the day (morning, afternoon, or evening), inter-arrival rate, and task queue data as input \cite{WibisonoNBP15}. In another method, temporal information (month, weekday, hour) of the last event and the inactive period before the most recent event in the log are used to evaluate the effectiveness of an intervention to reduce cycle time \cite{bozorgi2021prescriptive}. Domain-specific features, such as materials used in a manufacturing process \cite{ghattas2014improving}, patient demographics and treatment attributes in a patient treatment process \cite{YangDSZFXBM17}, are also utilized as data input. 
For instance, data about previously treated patients and data on the current patient is used to assess the predicted outcome of alternative next tasks \cite{Mertens20}. This method recommends the task that has the best-predicted outcome for the patient to reduce the risk of medical mistakes, such as prescribing the wrong medication. \section{Framework and Implications} \label{sec:framework} In this section, we present a framework for characterizing existing work on prescriptive process monitoring. Furthermore, we discuss the implications of our review and conclude with the limitations of the study. \subsection{Framework Overview} The proposed framework (Figure~\ref{fig:framework}) characterizes prescriptive process monitoring methods from ten aspects, derived from the review results (cf. Section ~\ref{sec:results}). The framework\footnote{Link to the full framework: \url{https://doi.org/10.6084/m9.figshare.17091521.v1}} reads from left to right and begins with the aspect of objective. Other aspects include the interventions to reach the objective, the input required, the techniques used in the methods and the policies used to trigger interventions. The main aspect of the framework is the \textit{Objective}. Our analysis shows that the identified methods can be divided into two categories according to the objective they pursue (cf. Section ~\ref{subsec:objective}). The first category aims to reduce the percentage of cases with a negative outcome (i.e.\ the defect rate). Methods in the second category aim to optimize a performance dimension captured via a quantitative performance metric defined at the level of each case (e.g.\ cycle time). The next aspect of the framework is the \textit{Target}: the metric used to assess if the performance is improved by the prescribed interventions. For the objective of reducing the defect rate, the target may be a count of a categorical case outcome (e.g., customer complaints) or of a temporal outcome (deadline violations). Quantitative performance targets include cycle time, labor cost, or revenue. \begin{figure} \centering \includegraphics[width=1.6\textwidth, angle = 90]{framework1.png} \caption{Prescriptive process monitoring framework} \label{fig:framework} \end{figure} The next two columns (\textit{Prescription perspective, Intervention}) capture the interventions that the methods prescribe to pursue the defined objective. Thus, \textit{Prescription perspective} describes to which process aspect the intervention concerns, e.g., resource or control flow. We also included the category "multiple" for methods that describe several interventions. Then, the aspect \textit{Intervention} lists the actual interventions (cf. section~\ref{subsec:interventions}). For instance, actual intervention can be which resource to assign to the next task. The following four columns define the data (Input perspective, Feature encoding), techniques (Modeling technique), and policies (Policy) to trigger interventions. Namely, \textit{Input perspective} describes the types of features, i.e., input data, required for a specific method (cf. section~\ref{subsec:data}). Thus, the categories we elicited are (C) control flow (e.g., activities, sequence, and frequencies), (R) resources (e.g., performers of activities), (T) temporal features (time-related), and (D) domain-specific (features that depend on the domain or type of process). The aspect \textit{Feature encoding} explains how features are further refined by a prescriptive method. 
For instance, resource-perspective features can be encoded as resource experience, resource performance, or resource workload. The aspect \textit{Modeling technique} relates to the technique used in the method to predict the outcome of the process or the metric performance based on the input. Next, \textit{Policy} relates to the conditions under which an intervention is prescribed. For example, under a similarity-based policy, an intervention is prescribed based on the similarity of the current case to previous cases. Some methods take a set of rules as their policy. For example, the need for an intervention is detected when the probability of a negative outcome exceeds a defined threshold, but also the effectiveness of the intervention is assessed before prescribing it. The aspect \textit{Intervention frequency} shows whether a method is continuous (prescribes actions at every step) or discrete (only when the need is detected). Additionally, \textit{Intervention basis} describes whether a method builds on the prediction of how the current case will continue or its similarity to the past cases (cf. section~\ref{subsec:interventions}). Finally, the aspect \textit{Example} (see full version of the framework) can be used as a reference to how the introduced method with its inputs, technique, and policies was used to trigger interventions to reach the objective of a process in a specific domain. Existing methods could be explored from the framework by objective, target, and prescribed process perspective. As such, if one seeks to minimize the cycle time of a process, the aim is to optimize quantitative case performance (\textit{Objective}), more specifically, cycle time (\textit{Target}). The framework shows that this can be achieved by prescribing interventions related to the control-flow or resources of the respective process (\textit{Prescription perspective}). If we follow the resource perspective, the framework shows that a set of methods can recommend, for example, resources for the next task or a whole team for a specific request (\textit{Intervention}). For instance, two methods \cite{WibisonoNBP15,abdulhameed2018resource} propose to recommend resources for the next task as an intervention. However, they have different input perspectives and techniques. Continuing with selecting control flow and temporal as the input perspectives, it leads to the method which uses the predicted (\textit{Intervention basis}) highest resource performance (\textit{Policy}) to continuously (\textit{Intervention frequency}) prescribe a resource for the next task. \subsection{Research Gaps and Implications} The presented framework provides an overview of existing prescriptive process monitoring methods by categorizing them according to their objectives and targets. The framework also presents different methods available and different ways these methods enable reaching particular objectives. The overview unveils several gaps and associated implications for future research. First, we observe that in the vast majority of previous studies, the proposed prescriptive process monitoring methods have been tested using synthetic and/or real-life event logs. However, the validation of the method is done using a real-world or synthetic \textit{observational} event log, but not in real-life settings. An attempt to test the effectiveness of interventions in real-life settings was made by Dees et al. \cite{DeesLAR19}. 
Their study showed that the predictions were rather accurate, but the interventions did not lead to the desired outcomes. Thus, validations of methods should be done in real-life settings to ensure their usefulness in practice, as also previously highlighted by \cite{Marquez2018}. Second, the current state-of-the-art in the field is focused on identifying cases in which interventions should be applied and finding the point in time an intervention should be triggered during the execution of a case. In contrast, little attention has been given to the problem of discovering which interventions could be applied to optimize a process with respect to a performance objective. Discrete methods leave it up to the users (stakeholders) to define the possible intervention(s) a priori (e.g., \cite{LeoniDR20,TeinemaaTLDM18}). Those methods that use observational event logs from BPI challenges\footnote{\url{https://www.tf-pm.org/competitions-awards/bpi-challenge}} rely on winner reports to identify the possible interventions (e.g., \cite{shoush2021prescriptive,TeinemaaTLDM18}). Continuous methods, on the other hand, focus on recommending the next task(s) (e.g., \cite{GoossensDH18,Batoulis14,YangDSZFXBM17}) or resource allocation (e.g., \cite{AriasMS16,abdulhameed2018resource,kim2013constructing}). Thus, one direction for further research is designing methods to support the discovery of interventions from business process event logs, or from textual documentation, or other unstructured or structured metadata about the process. Related to the above problem of discovering interventions, another gap in existing research relates to the problem of designing and tuning policies for prescriptive process monitoring. Existing prediction-based methods (e.g., \cite{SindhgattaGD16,groger2014prescriptive}) prescribe an intervention when the probability of a negative outcome exceeds a defined threshold. However, because the predictive models upon which methods rely are based on correlation (as opposed to causal relations), the prescriptions produced by these methods might not address the cause of the negative outcome or poor performance (e.g.\ the cause of delay). In this respect, we note that only a few existing methods take into account causality in policy design (e.g., \cite{shoush2021prescriptive,bozorgi2021prescriptive}). Thus, developing policy design techniques that take into account causality is another direction for further research. As discussed in~\cite{DeesLAR19,LeoniDR20}, the choice of whether or not to apply an intervention or which intervention to apply often depends on contextual factors. Some interventions may prove ineffective or counter-productive, for example, due to second-order effects. For example, an intervention wherein a customer is contacted pro-actively in order to prevent a customer complaint may actually heighten the probability of a complaint~\cite{DeesLAR19}. Similarly, assigning a resource to a case that is running late might lead to other cases being neglected, thereby creating delays elsewhere and leading to a higher ratio of delayed cases. Detecting such second-order effects requires human judgment and iterative policy validation (e.g.\ via AB testing). In this respect, it is striking that existing prescriptive process monitoring methods do not take into account the need to interact with human decision-makers. A crucial step in this direction is the ability to explain why the prescriptive monitoring system recommends a given intervention.
There are two aspects to this question. First, explaining the prediction that underpins a given prescription (prediction explanation), and second, explaining the policy that is used to trigger the prescription (policy explanation). A possible direction to enhance the applicability of prescriptive monitoring methods in practice is to integrate explainability mechanisms into them. While several proposals have been made to enhance the explainability of predictive process monitoring methods~\cite{HsiehMO21}, the question of policy explanation in the area of prescriptive process monitoring is unexplored. In other words, current methods do not incorporate a mechanism to explain why an intervention is recommended by the method for a given case and in a given state. Besides the above gaps, the SLR highlights that the majority of methods in the field aim to improve processes along the temporal perspective (e.g.\ cycle time, processing time, deadline violations). In comparison, other performance dimensions are represented in only a few examples (quality in \cite{DetroSPLLB20,WeinzierlSZM20,YangDSZFXBM17}, revenue in \cite{GoossensDH18,TerragniH19}). Thus, another research direction is to investigate other performance objectives that could be enhanced via prescriptive process monitoring. Finally, our review also highlights a lack of common terminology in the field. This might be due to the novelty of the research field of prescriptive process monitoring. The literature refers to methods with a wide range of names. As such, the terms "proactive process adaptation", "on-the-fly resource allocation", "next-step recommendation" are all used to describe the development and application of prescriptive process monitoring methods. This highlights the need for common terminology. \subsection{Threats to Validity} SLRs have a number of typical pitfalls and threats to validity~\cite{kitchenham2007guidelines,AMPATZOGLOU2019201}. First, there is a potential risk of missing relevant publications during the search. We mitigated this risk by conducting a two-phase search that included a broad range of key terms, as well as backward referencing. Another potential threat is to exclude relevant publications during screening. We mitigated this threat by using explicitly defined inclusion and exclusion criteria. Furthermore, all unclear cases were examined and discussed by all authors of this paper. Third, there is a threat of data extraction bias as this step involves a degree of subjectivity. We discussed each paper in the final list and refined the data extraction when needed to minimize this risk. \section{Conclusion} \label{sec:conclusion} This paper provided a snapshot of the research field of prescriptive process monitoring via an SLR and outlined a framework for characterizing methods in this field. The framework characterizes existing methods according to their objective, target metric, intervention type, technique, data input, and policy used to trigger interventions. The framework was derived from and used to characterize the 36 relevant studies identified by the SLR. Based on the SLR, we identified research gaps and associated research avenues.
In particular, the SLR highlighted: (i) a lack of \emph{in vivo} validation of the proposed methods; (ii) a lack of methods for discovering suitable interventions and assessing their potential effectiveness; (iii) little emphasis on explainability and feedback loops between the prescriptive monitoring system and the end-users; and (iv) a narrow focus on temporal metrics and comparatively little work on applying prescriptive monitoring to other performance dimensions. \smallskip\noindent\textbf{Acknowledgments.} This research is funded by the Estonian Research Council (PRG1226) and the European Research Council (PIX Project). \vspace*{-2mm} \bibliographystyle{splncs04}
\section{Introduction}\label{sec.intro} Many applications in the realms of energy system design \cite{AppEnergy}, image processing \cite{NCZNNCompare}, and robot kinematics \cite{AppRobort} are abstractly modeled as classical quadratic minimization (QM) problems. In general, the traditional approach solves QM problems by employing numerical or iterative algorithms \cite{VPZNNCompare}. A message-passing scheme for solving QM problems is presented in \cite{AppMessagePass} by Ruozzi and Tatikonda. In addition, Zhang $et~al.$ present a QM-based dual-arm cyclic-motion-generation manipulator control scheme and analyze its properties from the perspective of cybernetics \cite{AppRobort}. It is worth noting that although considerable research has been devoted to solving conventional QM problems, studies aimed explicitly at the time-varying quadratic minimization (TVQM) problem are insufficient. Traditional solutions have serious lag errors when facing large-scale time-varying issues, resulting in inadequate solution accuracy and even the collapse of the solution system \cite{XXCOne}. \par To break through the dilemma that traditional algorithms cannot effectively deal with time-varying problems, Zhang $et~al.$ design the original zeroing neural network (OZNN) model \cite{ZNNProposed}. The OZNN model employs the derivative information of the time-varying problem to predict its evolution direction and continuously adjusts the solution strategy of the solution system through a so-called evolution function \cite{XJ}. Therefore, the OZNN model can cope with various time-varying problems with extremely high accuracy \cite{LiuMei}. The OZNN model has been successfully applied to signal processing and automatic control fields due to its high accuracy and real-time solution advantages \cite{Qi}. However, the chief drawback of the OZNN model is its sensitivity to measurement noise. Its solution accuracy will be reduced distinctly in the presence of noise interference \cite{Wei}. Besides, the scale parameter of the OZNN model requires manual setting and tuning. Hence, tedious and repeated adjustments are necessary when facing actual engineering application problems \cite{XJ}. In recent years, much research has aimed to optimize ZNN models. A versatile recurrent neural network termed the VRNN model is presented by Xiao $et~al.$ to solve the TVQM problem \cite{AppWideUse}; it overcomes the drawback of the OZNN model, whose convergence time approaches infinity, and enables the solution system to converge globally within a finite time \cite{FTZNNXiao}. On this basis, theoretical analysis of the convergence of such models, including the predefined-time convergence of the strictly predefined-time convergent ZNN (PTCZNN) model, is performed in \cite{PTCZNNCompare}. Noteworthily, both the VRNN model and the PTCZNN model realize their convergence properties by constructing special activation functions rather than by synthesizing the scale parameter of the models. Different from these fixed-valued neural-dynamic models, a varying-parameter convergent-differential neural network (VP-CDNN) model is based on time-variant incremental scale parameters \cite{VPZNNCompare}. The VP-CDNN model converges exponentially and maintains better robustness under perturbation conditions than the OZNN model.
However, in model implementation and engineering applications, as the time continues to increase, the monotonically increasing time-varying design parameters may be too large to be achieved or violate the objective limit, which results in the solving failure \cite{Huang}. When implementing zeroing-type models, it is unavoidable that models may be interfered with various measurement noises, leading to the reduction of system solution accuracy and even the collapse of the solution system. To this end, Jin $et~al.$ present the modified ZNN (MZNN) model in \cite{MZNNCompare}, which introduces the integral information into the solution evolution formula for the first time. However, parameters of the MZNN model still require tedious manual adjustments, which leads to a lot of additional computational resources and redundant adjustment processes \cite{Alawad}. He $et~al.$ present a residual learning framework to simplify the training process of deep neural networks, which explicitly reformulates layers as learning residual functions concerning the layer inputs \cite{DeepRL}. Inspired by the residual learning framework and combined with the advantages of the abovementioned zeroing-type models, this paper proposes an adaptive zeroing-type neural dynamics (AZTND) model that embeds adaptive scaling coefficients and adaptive feedback coefficients. For the first time, the AZTND model is applied to the TVQM problem with measuring noise interference. \par The remaining part of this paper consists of the following five sections. The problem definition and benchmark scheme are presented in Section \ref{PSF}. The adaptive scale coefficient and adaptive feedback coefficient design framework and the evolution function of the proposed AZTND models are formulated in Section \ref{SMC}. Section \ref{Simulations} contains the corresponding quantitative simulative experiment and results investigation. Finally, the conclusion is arranged in Section \ref{Conclusion}. Besides, the following parts summarise the main contributions: \begin{itemize} \item This paper proposes a novel design framework for constructing the adaptive scale coefficient and adaptive feedback coefficient for the first time, which expedites global convergence and enhances the robustness of the solution system. 
\begin{table*}[t]\tiny \caption{Comparison between Various Algorithms for TVQM problem (\ref{TVQM})} \resizebox{14cm}{!}{ \begin{tabular}[l]{@{}l c c c c c c c c} \hline &Derivative &Integral &Adaptation &Anti &\multicolumn{4}{c}{MSSRE$^{\ast}$ under Different Noise Conditions}\\ Model &Information &Information &Control &Perturbations &Noise &Constant &Random &Linear\\ &Involved &Involved & & &Free &Noise &Noise &Noise\\ \hline Neural network in \cite{IterCompareOne} &\cmark &\xmark &\xmark &\xmark &NA$\dagger$ &NA$\dagger$ &NA$\dagger$ &NA$\dagger$\\ Adaptive GNN model in \cite{ACGNNCompare}&\cmark &\xmark &\cmark &\xmark &Negligible &Bounded &Bounded &$+\infty$\\ Original ZNN model in \cite{OZNNCompare} &\cmark &\xmark &\xmark &\xmark &Negligible &Bounded &Bounded &$+\infty$\\ NCZNN model in \cite{NCZNNCompare} &\cmark &\xmark &\xmark &\cmark &Negligible &Bounded &Bounded &$+\infty$\\ PTCZNN model in \cite{PTCZNNCompare} &\cmark &\xmark &\xmark &\xmark &Negligible &Bounded &Bounded &$+\infty$\\ MZNN model in \cite{MZNNCompare} &\cmark &\cmark &\xmark &\cmark &Negligible &Negligible &BS$^\ddagger$ &BS$^\ddagger$\\ AZTND model (\ref{RACZNN}) &\cmark &\cmark &\cmark &\cmark &Negligible &Negligible &BS$^\ddagger$ &BS$^\ddagger$\\\hline \end{tabular}\label{RiccatiCompare}} \noindent{\footnotesize{$\ast$ Note that MSSRE denotes the maximal steady-state residual errors.}}\\ \noindent{\footnotesize{$^\dagger$NA indicates that the item does not apply to the algorithm or model in the corresponding papers.}}\\ \noindent{\footnotesize{$^\ddagger$BS indicates that the maximal steady-state residual error of the corresponding situation is bounded tightly.}} \end{table*} \item Based on the design framework, an AZTND model for solving the TVQM problem with perturbed measurement noise is proposed. Subsequently, the global convergence of the AZTND model is analyzed via the Lyapunov stability theory. \item Corresponding quantitative numerical experiments are given to substantiate the performance of the AZTND model applied to the TVQM problem with various measurement noise pollution. \item A dynamic localization scheme is proposed based on the AZTND model with adaptive coefficients, which has superior robustness and solution accuracy to existing schemes. \end{itemize} \section{Problem Definition and Related Scheme}\label{PSF} The typical form of the TVQM problem is presented as \begin{equation}\label{TVQM} \text{min}~\frac{1}{2}\vec{z}^{\text{T}}(t)M(t)\vec{z}(t)+\vec{b}^{\text{T}}(t)\vec{z}(t), \end{equation} where parameters $M(t)\in\mathbb{R}^{n\times n}$ and $\vec{b}(t)\in\mathbb{R}^{n}$ denote the smoothly time-varying Hessian matrix and coefficient vector, respectively. The parameter $\vec{z}(t)\in\mathbb{R}^{n}$ represents the unknown vector that should be solved online. The transpose symbol is the superscript $^{\text{T}}$. For further investigation and solving the TVQM problem (\ref{TVQM}), a function $F(\vec{z}(t),t)$ is defined as $F(\vec{z}(t),t)=\frac{1}{2}\vec{z}^{\text{T}}(t)M(t)\vec{z}(t)+\vec{b}^{\text{T}}(t)\vec{z}(t)$. Consequently, the gradient of the function $F(\vec{z}(t),t)$ is described as \begin{equation}\label{FGrad} \nabla F(\vec{z}(t),t)=\frac{\partial F(\vec{z}(t),t)}{\partial \vec{z}(t)}=M(t)\vec{z}(t)+\vec{b}(t). \end{equation} Noteworthily, by zeroing $\nabla F(\vec{z}(t),t)$ at each time instant $t\in [0,+\infty)$, the theoretical solution of the TVQM problem (\ref{TVQM}) can be obtained in real-time. Hence, the following equation is formulated: $M(t)\vec{z}(t)+\vec{b}(t)=0$.
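\par As a purely illustrative numerical sketch (not part of the original derivation), the gradient-zeroing idea can be checked at a frozen time instant: provided that the Hessian matrix $M(t)$ is invertible, zeroing $\nabla F(\vec{z}(t),t)$ yields the theoretical solution $\vec{z}^{*}(t)=-M^{-1}(t)\vec{b}(t)$. The matrix and vector values below are hypothetical and serve only as an example.
\begin{verbatim}
import numpy as np

# Hypothetical snapshot of M(t) and b(t) at one frozen time instant t.
M_t = np.array([[2.5, 0.4],
                [0.4, 2.1]])   # symmetric positive-definite Hessian
b_t = np.array([0.3, -0.7])

# Zeroing the gradient M(t) z + b(t) = 0 gives the theoretical minimizer.
z_star = np.linalg.solve(M_t, -b_t)

grad = M_t @ z_star + b_t      # should be (numerically) zero
print(z_star, np.linalg.norm(grad))
\end{verbatim}
Repeating this computation at every time instant $t$ would already solve the TVQM problem, but it ignores the time-varying nature of the data; the zeroing-type dynamics discussed next avoid this lag by construction.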
The following error function is designed to tune the evolution direction of the solving system: \begin{equation}\label{ErrFun} \vec{\epsilon} (t)=M(t)\vec{z}(t)+\vec{b}(t). \end{equation} According to the OZNN model construction framework, the evolution direction of the error function (\ref{ErrFun}) should satisfy $\vec{\dot\epsilon}(t)=-\eta\Omega (\vec{\epsilon}(t))$, where $\eta$ represents the scale coefficient and $\Omega(\cdot): \mathbb{R}^{n} \to \mathbb{R}^{n}$ denotes the activation function. Therefore, the OZNN model employed for the TVQM problem (\ref{TVQM}) is described as \begin{equation} M(t)\vec{\dot z}(t)=-\dot M(t)\vec{z}(t)-\vec{\dot b}(t)-\eta\Omega\big{(}M(t)\vec{z}(t)+\vec{b}(t)\big{)}, \end{equation} where parameters $\dot M(t)$, $\vec{\dot{z}}(t)$, and $\vec{\dot{b}}(t)$ represent time derivatives of $M(t)$, $\vec{z}(t)$, and $\vec{b}(t)$, respectively. \par The performance comparison among the existing algorithms and the proposed AZTND model when solving the TVQM problem (\ref{TVQM}) is arranged in Table \ref{RiccatiCompare}. \section{AZTND Model Construction}\label{SMC} Various perturbed measurement noises downgrade the accuracy of the solution system and even lead to collapse. Thus, this paper proposes an AZTND model to enhance the robustness under various noise interference. Besides, to avoid the inflexibility of manually setting the scale coefficient and to speed up the convergence of the solving system, this paper formulates a more flexible scale coefficient construction method termed residual-based adaptive scale coefficient. \par The evolution direction of the error function $\vec{\epsilon}(t)$ in the AZTND model is formulated as \begin{equation} \vec{\dot \epsilon}(t)=-\xi(\vec{\epsilon}(t))\vec{\epsilon}(t)-\kappa(\vec{\epsilon}(t))\int_0^t\vec{\epsilon}(\delta)\text{d}\delta, \end{equation} where parameters $\xi(\cdot)> 0: \mathbb{R}^{n}\to \mathbb{R}$ and $\kappa(\cdot)> 0: \mathbb{R}^{n}\to \mathbb{R}$ represent the adaptive scale and feedback coefficient, respectively. The following method can be employed to construct the adaptive scale coefficient $\xi(\cdot)$: \begin{itemize} \item Power adaptive scale coefficient: \begin{equation}\label{ASCDef} \xi(\vec{\epsilon}(t)) = \|\vec{\epsilon}(t)\|_{\text{2}}^{\eta} + a, \end{equation} where the parameter $a > 1$. \end{itemize} Further, the following method can be adopted to implement the adaptive feedback coefficient $\kappa(\cdot)$: \begin{itemize} \item Power adaptive feedback coefficient: \begin{equation}\label{AFCDef} \kappa(\vec{\epsilon}(t)) = \|\int_0^t\vec{\epsilon}(\delta)\text{d}\delta\|^b_{\text{2}}+c, \end{equation} where the parameters $b>0$ and $c>0$. \end{itemize} Consequently, the proposed AZTND model with adaptive coefficients for solving the TVQM problem (\ref{TVQM}) is written as follows: \begin{eqnarray}\label{RACZNN} \begin{split} M(t)\vec{\dot z}(t)=&-\dot M(t)\vec{z}(t)-\vec{\dot b}(t)\\ &-\xi(\vec{\epsilon}(t))\big{(}M(t)\vec{z}(t)+\vec{b}(t)\big{)} \\ &-\kappa(\vec{\epsilon}(t))\int_{0}^{t}(M(\delta)\vec{z}(\delta)+\vec{b}(\delta))\text{d}\delta. \end{split} \end{eqnarray} Besides, the AZTND model (\ref{RACZNN}) is inevitably perturbed by various measurement noises in practical applications.
Thereupon, the AZTND model (\ref{RACZNNNoise}) for solving the TVQM problem (\ref{TVQM}) perturbed by noise is described as \begin{eqnarray}\label{RACZNNNoise} \begin{split} M(t)\vec{\dot z}(t)=&-\dot M(t)\vec{z}(t)-\vec{\dot b}(t) \\ &-\xi(\vec{\epsilon}(t))\big{(}M(t)\vec{z}(t)+\vec{b}(t)\big{)}\\ &-\kappa(\vec{\epsilon}(t))\int_{0}^{t}(M(\delta)\vec{z}(\delta)+\vec{b}(\delta))\text{d}\delta + \vec{\vartheta}(t), \end{split} \end{eqnarray} where the noise perturbation term $\vec{\vartheta}(t)\in \mathbb{R}^{n}$. \par Taking into account that convergence is a key criterion for the AZTND model (\ref{RACZNN}), we propose the following theorem and corresponding proof process to analyze the global convergence of the AZTND model (\ref{RACZNN}). \par {\it Theorem 1:} For any solvable TVQM problem (\ref{TVQM}), the error function of the proposed AZTND model (\ref{RACZNN}) globally converges to zero from any random initial state. \par {\it Proof:} The $i$th subsystem of the AZTND model evolution function $\vec{\dot \epsilon}(t)=-\xi(\vec{\epsilon}(t))\vec{\epsilon}(t)-\kappa(\vec\epsilon(t))\int_0^t\vec{\epsilon}(\delta)\text{d}\delta$ is written as \begin{equation}\label{SubErr} \dot\epsilon_{i}(t)=-\xi\big{(}\epsilon(t)\big{)}\epsilon_{i}(t)-\kappa(\epsilon(t))\int_{0}^{t}\epsilon_{i}(\delta)\text{d}\delta. \end{equation} The following Lyapunov function is presented for investigating the global convergence of the system (\ref{SubErr}): \begin{equation}\label{Lya} y_{i}(t)=\Big(\epsilon_{i}^2(t)+\kappa(\epsilon(t))\big{(}\int_{0}^{t}\epsilon_{i}(\delta)\text{d}\delta\big{)}^2\Big)/2\ge 0, \end{equation} which indicates the function $y_{i}(t)>0$ when $\epsilon_{i}(t)\neq 0$ or $\int_{0}^{t}\epsilon_{i}(\delta)\text{d}\delta \neq 0$. If and only if $\epsilon_{i}(t)=\int_{0}^{t}\epsilon_{i}(\delta)\text{d}\delta=0$, $y_{i}(t)=0$. Thus, the Lyapunov function $y_{i}(t)$ is positive semi-definite. Considering that the adaptive feedback coefficient $\kappa(\epsilon(t))$ is a constant $\kappa$ in each time interval and taking the time derivative of the function (\ref{Lya}) leads to: \begin{eqnarray*} \begin{split} \frac{\text{d}y_{i}(t)}{\text{d}t}&=\epsilon_{i}(t)\dot\epsilon_{i}(t)+\kappa(\vec{\epsilon}(t))\epsilon_{i}(t)\int_{0}^{t}\epsilon_{i}(\delta)\text{d}\delta \\ &=\epsilon_{i}(t)(\dot\epsilon_{i}(t)+\kappa(\vec{\epsilon}(t))\int_{0}^{t}\epsilon_{i}(\delta)\text{d}\delta)\\ &=-\xi(\vec\epsilon(t))\epsilon_{i}^2(t)\leq 0. \end{split} \end{eqnarray*} That is to say, the time derivative $\dot y_{i}(t)$ is negative semi-definite. Thus, according to the Lyapunov stability theory, the function $\epsilon_{i}(t)$ will ultimately converge to zero. It can be generalized and concluded that $\epsilon_{i}(t)$ globally converges to zero for each $i\in \{1,2,...,n\}$. In summary, the error function $\vec\epsilon(t)$ globally converges to zero over time. In other words, the proposed AZTND model (\ref{RACZNN}) globally converges to the theoretical solution of the TVQM problem (\ref{TVQM}). \par The proof is thus completed. $\hfill\blacksquare$ \begin{figure}[htbp]\centering \subfigure[]{\includegraphics[scale=0.4]{Non_Fnorm_Linear-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.4]{Non_Fnorm_Log-eps-converted-to.pdf}} \caption{Performance of the GNN (\ref{GNNCompare}), PTCZNN (\ref{PTCZNNCompare}), NCZNN (\ref{NCZNNCompare}), and AZTND (\ref{RACZNN}) models applied to the noise-free TVQM problem (\ref{TVQM}) of the Example (\ref{EA}). (a) Denoting the residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of models. 
(b) The logarithm of residual error $||\vec{\epsilon}(t)||_{\text{2}}$. } \label{FreeNorm} \end{figure} \section{Simulations}\label{Simulations} Experiments are designed and performed in this section. First, the simulation of the AZTND model (\ref{RACZNN}) applied to the TVQM problem (\ref{TVQM}) is conducted and visualized. Secondly, we compare the performance of the AZTND model (\ref{RACZNN}) with other state-of-the-art neural network models, specifically, the gradient-based neural network (GNN) model, the predefined-time convergent ZNN (PTCZNN) model, and the nonconvex and bound constraint ZNN (NCZNN) model. \subsection{Example 1: Time-varying Situation}\label{EA} In this simulation, the time-varying matrix and vector in the TVQM problem (\ref{TVQM}) are constructed as follows: \begin{eqnarray*} M(t)= \begin{bmatrix} 0.5\text{sin}(t)+2& \text{cos}(t)\\ \text{cos}(t) & 0.5\text{cos}(t)+2 \end{bmatrix}, \vec{b}(t)= \begin{bmatrix} \text{sin}(t)\\ \text{cos}(t) \end{bmatrix}. \end{eqnarray*} \begin{figure}[htbp]\centering \subfigure[]{\includegraphics[scale=0.4]{Con_Fnorm_Linear-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.4]{Con_Fnorm_Log-eps-converted-to.pdf}} \caption{Performance comparison among different models for solving the TVQM problem (\ref{TVQM}) of the Example (\ref{EA}) with constant noise $\vec{\vartheta}(t)=\vec{\vartheta}=[5]^2$. (a) Residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of models. (b) The logarithm of the residual error $||\vec{\epsilon}(t)||_{\text{2}}$. } \label{ConNorm} \end{figure} The adaptive scale coefficient $\xi(\cdot)$ and adaptive feedback coefficient $\kappa(\cdot)$ of the proposed AZTND model (\ref{RACZNN}) are set as $\xi(\vec{\epsilon}(t)) = ||\vec{\epsilon}(t)||_{\text{2}}^{3} + 5$ and $\kappa(\vec{\epsilon}(t)) = 5^{||\int_0^t\vec{\epsilon}(\delta)\text{d}\delta||_{\text{2}}}+5$, respectively. The corresponding quantitative simulation results of the example (\ref{EA}) are arranged in Figures \ref{FreeNorm} to \ref{RandNorm}. Besides, the following models are introduced to solve the TVQM problem (\ref{TVQM}) for comparison with the AZTND model (\ref{RACZNN}). \begin{itemize} \item The GNN model is presented in \cite{GNNCompare}. \begin{eqnarray}\label{GNNCompare} \begin{split} \vec{\dot z}(t)=- \gamma M^{\text{T}}(t)(M(t)\vec{z}(t)+\vec{b}(t)). \end{split} \end{eqnarray} \begin{figure}[htbp]\centering \subfigure[]{\includegraphics[scale=0.4]{TV_Fnorm_Linear-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.4]{TV_Fnorm_Log-eps-converted-to.pdf}} \caption{ The performance of different models under linear noise $\vec{\vartheta}(t)\in \mathbb{R}^{n}$ with each subelement set as $0.4\times t$. (a) Denoting the residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of the models. (b) Representing the logarithm of residual error $||\vec{\epsilon}(t)||_{\text{2}}$. } \label{LinearNorm} \end{figure} \item The PTCZNN model is presented in \cite{PTCZNNCompare}. \begin{eqnarray}\label{PTCZNNCompare} \begin{split} M(t)\vec{\dot z}(t)=&-\dot M(t)\vec{z}(t)-\vec{\dot b}(t)\\ &-\gamma \frac{\text{exp}(t)-1}{(t_c-t)\text{exp}(t)}(M(t)\vec{z}(t)+\vec{b}(t)). \end{split} \end{eqnarray} \item The NCZNN model is presented in \cite{NCZNNCompare}. \begin{eqnarray}\label{NCZNNCompare} \begin{split} M(t)\vec{\dot z}(t)=&-\dot M(t)\vec{z}(t)-\vec{\dot b}(t)\\ &-\gamma R_{\Upsilon}(M(t)\vec{z}(t)+\vec{b}(t)), \end{split} \end{eqnarray} where the parameter $R_{\Upsilon}(\cdot)$ represents the non-convex and bounded activation function. 
\end{itemize} Note that the scale parameter $\gamma$ of the GNN model (\ref{GNNCompare}), PTCZNN model (\ref{PTCZNNCompare}), and NCZNN model (\ref{NCZNNCompare}) are arranged as 5. \begin{figure}[htbp]\centering \subfigure[]{\includegraphics[scale=0.4]{Rand_Fnorm_Linear-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.4]{Rand_Fnorm_Log-eps-converted-to.pdf}} \caption{The performance of different models with bounded random noise $\vec{\vartheta}(t)=\vec{\varrho}(t)\in [0.5, 3]^2$. (a) Denoting the residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of the models. (b) Representing the logarithm of residual error $||\vec{\epsilon}(t)||_{\text{2}}$. }\label{RandNorm} \end{figure} \begin{figure*}[htbp]\centering \subfigure[]{\includegraphics[scale=0.35]{Compare_F_norm_Con_Linear-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.35]{Compare_F_norm_Linear_Linear-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.35]{Compare_F_norm_Rand_Linear-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.35]{Compare_F_norm_Con_Log-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.35]{Compare_F_norm_Linear_Log-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.35]{Compare_F_norm_Rand_Log-eps-converted-to.pdf}} \caption{Robustness performance of the proposed AZTND model (\ref{RACZNN}) under noise interference cases. (a) and (d) representing the residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of the AZTND model (\ref{RACZNN}) perturbed by constant noise $\vartheta=5$, $\vartheta=20$, and $\vartheta=100$, respectively. (b) and (e) denoting the residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of the AZTND model (\ref{RACZNN}) in case of time-varying linear noise $\vartheta (t)=0.4t$, $\vartheta (t)=2t$, and $\vartheta(t)=20t$, respectively. (c) and (f) denoting the residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of the AZTND model (\ref{RACZNN}) perturbed by bounded random noise $\vartheta(t)=\varrho(t)\in[0.5,3]$, $\vartheta(t)=\varrho(t)\in[0.5,3]$, and $\vartheta(t)=\varrho(t)\in[0.5,3]$, respectively. } \label{Compare} \end{figure*} \subsubsection{AZTND Model without noise} The quantitative experiment visualization results synthesized by the AZTND model (\ref{RACZNN}) for solving the TVQM problem example (\ref{EA}) in the noise-free case are arranged in Figure \ref{FreeNorm}. As demonstrated in Figure \ref{FreeNorm} (a), beginning with a randomly generated initial vector-formed value, the residual error $||\vec{\epsilon}(t)||_{\text{2}}$ of the proposed AZTND model (\ref{RACZNN}) sharply approaches zero, which means the solving system globally converges to the theoretical solution. Among the compared five models, the AZTND model (\ref{RACZNN}) has the second convergence speed. Logarithms of the models' residual error $||\vec{\epsilon}(t)||_{\text{2}}$ are shown in Figure \ref{FreeNorm} (b) which depicts the models' solution accuracy. As observed in Figure \ref{FreeNorm} (b), the proposed AZTND model (\ref{RACZNN}) has significantly higher accuracy when solving the noise-free TVQM problem (\ref{EA}) compared with the GNN model (\ref{GNNCompare}), PTCZNN model (\ref{PTCZNNCompare}), and NCZNN model (\ref{NCZNNCompare}). Specifically, the GNN model (\ref{GNNCompare}) converges to order $10^{-1}$, PTCZNN model (\ref{PTCZNNCompare}) and NCZNN model (\ref{NCZNNCompare}) converge to order $10^{-3}$, and the proposed AZTND model (\ref{RACZNN}) converges to order $10^{-5}$. 
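\par For completeness, the noise-free experiment described above can be reproduced, at least qualitatively, with the following minimal sketch, which integrates the AZTND dynamics (\ref{RACZNN}) for Example 1 by a simple forward-Euler scheme; the step size and initial state are assumptions made here only for illustration and are not taken from the original experiment setup.
\begin{verbatim}
import numpy as np

# Example 1 data: time-varying Hessian M(t), vector b(t), and their derivatives.
M  = lambda t: np.array([[0.5*np.sin(t) + 2, np.cos(t)],
                         [np.cos(t), 0.5*np.cos(t) + 2]])
Md = lambda t: np.array([[0.5*np.cos(t), -np.sin(t)],
                         [-np.sin(t), -0.5*np.sin(t)]])
b  = lambda t: np.array([np.sin(t), np.cos(t)])
bd = lambda t: np.array([np.cos(t), -np.sin(t)])

h, T = 1e-3, 10.0                  # assumed step size and time horizon
z = np.array([1.0, -1.0])          # assumed initial state
I = np.zeros(2)                    # running integral of the error function

for k in range(int(T / h)):
    t = k * h
    eps = M(t) @ z + b(t)                      # error function epsilon(t)
    I += h * eps                               # accumulate the integral term
    xi = np.linalg.norm(eps) ** 3 + 5          # adaptive scale coefficient
    kappa = 5.0 ** np.linalg.norm(I) + 5       # adaptive feedback coefficient
    rhs = -Md(t) @ z - bd(t) - xi * eps - kappa * I
    z += h * np.linalg.solve(M(t), rhs)        # forward-Euler update of z(t)

print(np.linalg.norm(M(T) @ z + b(T)))         # steady-state residual ||eps(T)||_2
\end{verbatim}
A smaller step size or an off-the-shelf ODE solver can be substituted; the explicit scheme is chosen here only for brevity, and the discussion now returns to Figure \ref{FreeNorm}.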
Furthermore, the AZTND model (\ref{RACZNN}) converges to the steady-state residual error at $8.5$ s. Compared with state-of-the-art models, the proposed AZTND model (\ref{RACZNN}) has a competitive robustness and convergence speed performance. \begin{figure*}[htbp]\centering \subfigure[Theoretical and estimated trajectory]{\includegraphics[scale=0.33]{AoA_RACZNN-eps-converted-to.pdf}} \subfigure[Position error of the AZTND model (\ref{RACZNNforAoA})]{\includegraphics[scale=0.33]{AoA_RACZNN_PositionError-eps-converted-to.pdf}} \subfigure[Position error of the OZNN model (\ref{OZNNforAoA})]{\includegraphics[scale=0.33]{AoA_OZNN_PositionError-eps-converted-to.pdf}} \subfigure[Theoretical and estimated trajectory]{\includegraphics[scale=0.33]{Circle_AoA_RACZNN-eps-converted-to.pdf}} \subfigure[Position error of the AZTND model (\ref{RACZNNforAoA})]{\includegraphics[scale=0.33]{Circle_AoA_RACZNN_PositionError-eps-converted-to.pdf}} \subfigure[Position error of the OZNN model (\ref{OZNNforAoA})]{\includegraphics[scale=0.33]{Circle_AoA_OZNN_PositionError-eps-converted-to.pdf}} \caption{Visualization results of the AoA dynamic positioning scheme. (a) and (d) denoting solution result trajectories (red dashed line) generated by the AZTND model (\ref{RACZNNforAoA}) and the theoretical trajectory (solid black line) of the dynamic target. (b) and (e) representing the corresponding position error synthesized by the AZTND model (\ref{RACZNNforAoA}). (c) and (f) denoting the corresponding position error synthesized by the OZNN model (\ref{OZNNforAoA}). } \label{AoAResult} \end{figure*} \begin{figure}[htbp]\centering \includegraphics[scale=0.62]{AoA.pdf} \caption{ Schematic diagram of AoA dynamic positioning scheme. } \label{AoA} \end{figure} \subsubsection{AZTND Model with Noise} The quantitative experiment visualization results synthesized by the AZTND model (\ref{RACZNN}) for solving the TVQM problem example (\ref{EA}) in the constant noise, linear noise, and bounded random noise cases are arranged in Figure \ref{ConNorm}, Figure \ref{LinearNorm}, and Figure \ref{RandNorm}, respectively. Besides, the adaptive scale coefficient and adaptive feedback coefficient of the AZTND model (\ref{RACZNN}) are $\xi(\vec{\epsilon}(t)) = ||\vec{\epsilon}(t)||_{\text{2}}^{3} + 5$ and $\kappa(\vec{\epsilon}(t)) = 5^{||\int_0^t\vec{\epsilon}(\delta)\text{d}\delta||_{\text{F}}}+5$, respectively. Based on three different situations, the following three situations will be analyzed and discussed in detail. Noticeably, the measurement noises perturbed by the AZTND model can be expressed as equation (\ref{RACZNNNoise}). Firstly, the amplitude of the constant noise in example (\ref{EA}) is provided as $\vec{\vartheta}(t)=\vec{\vartheta}=[5,5]^{\text{T}}$. As shown in Figure \ref{ConNorm} (a), beginning with a random-generated initial value, even though the AZTND model (\ref{RACZNN}) is disturbed by the constant noise, its system residual error $||\vec{\epsilon}(t)||_{\text{2}}$ still accurately converge to the theoretical solution. Meanwhile, Figure (\ref{ConNorm}) (b) depicts that the solution accuracy of the GNN model (\ref{GNNCompare}), PTCZNN model (\ref{PTCZNNCompare}), and NCZNN model (\ref{NCZNNCompare}) remain at a relatively high level. Secondly, the quantitative experimental simulation results of the AZTND model (\ref{RACZNN}) solving the TVQM problem example (\ref{EA}) under linear noise $\vec{\vartheta}(t)=\vec{\vartheta}(t)\in \mathbb{R}^{n}$ interference are arranged in Figure \ref{LinearNorm}. 
Suffering from the interference of linear noise $\vec{\vartheta}(t)$, the proposed AZTND model (\ref{RACZNN}) not only has the highest solution accuracy compared with the other comparative models, but also its convergence speed is second only to that of the GNN model (\ref{GNNCompare}). Finally, the quantitative experiment simulation results of the AZTND model (\ref{RACZNN}) solving the TVQM problem example (\ref{EA}) under bounded random noise $\vec{\vartheta}(t)=\vec{\varrho}(t)\in [0.5, 3]^2$ interference are arranged in Figure \ref{RandNorm}. As displayed in Figure \ref{RandNorm} (a) and (b), although the proposed AZTND model (\ref{RACZNN}) is perturbed by bounded random noise $\vec{\varrho}(t)$, its average steady-state residual error still maintains a high accuracy, specifically, of order $10^{-2}$. By contrast, other commonly used neural network models, $i.e.$, the GNN model (\ref{GNNCompare}), PTCZNN model (\ref{PTCZNNCompare}), and NCZNN model (\ref{NCZNNCompare}) retain relatively large residual errors, specifically, of order $10^{-1}$. Furthermore, a similar conclusion can be drawn from Figure \ref{Compare}. Therefore, the conclusion is drawn that the proposed AZTND model (\ref{RACZNN}) has higher robustness and stability when facing different noises than other state-of-the-art neural network models. \begin{figure*}[htbp]\centering \subfigure[]{\includegraphics[scale=0.4]{Noise_Con_AoA_OZNN-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.4]{Noise_Con_AoA_OZNN_PositionError-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.4]{Noise_Con_AoA_RACZNN-eps-converted-to.pdf}} \subfigure[]{\includegraphics[scale=0.4]{Noise_Con_AoA_RACZNN_PositionError-eps-converted-to.pdf}} \caption{Visualization results of the AoA dynamic positioning scheme with constant noise $\vec{\vartheta}(t)=[20,20]^{\text{T}}$ disturbance. (a) and (c) showing the solution result trajectories generated by the OZNN model (\ref{OZNNforAoA}) and the AZTND model (\ref{RACZNNforAoA}), respectively. (b) and (d) denoting the corresponding position errors of the OZNN model (\ref{OZNNforAoA}) and the AZTND model (\ref{RACZNNforAoA}). } \label{AoANoiseResult} \end{figure*} \subsection{AZTND Applied to Target Tracking Scheme} In this part, an angle-of-arrival (AoA) target tracking scheme based on the AZTND model (\ref{RACZNN}) is presented, which is widely used in navigation, guidance, and localization systems. The following two-dimensional (2-D) AoA positioning principle is detailed to construct the target tracking scheme further. As shown in Figure \ref{AoA}, rays emitted by the observation base stations $s_1$ and $s_2$ will pass through the dynamic target $u$, and the intersection of two rays is the dynamic target's position. The position coordinate $u(x,y)$ of the dynamic target can be solved by calculating the angles of arrival $\theta_1$ and $\theta_2$ from the observation base stations to the dynamic target. Consequently, the geometric relationship of the angle of arrival between the observation base station and the target is expressed as \begin{equation}\label{AoATan} \text{tan}\theta_i(t) = \frac{y(t)-y_i}{x(t)-x_i}. \end{equation} Note that the parameters $\theta_i(t)$ and $(x(t),y(t))$ represent real-time changing angles and coordinates. Expanding and rearranging equation (\ref{AoATan}) yields $y_i-x_i\text{tan}\theta_i(t)=-x(t)\text{tan}\theta_i(t)+y(t)$. 
Therefore, the following linear equation is obtained: \begin{eqnarray}\label{AOAOriginal} \begin{bmatrix} -\text{tan}(\theta_1(t))& 1\\ -\text{tan}(\theta_2(t))& 1\\ \vdots &\vdots\\ -\text{tan}(\theta_n(t))& 1 \end{bmatrix} \begin{bmatrix} x(t)\\ y(t) \end{bmatrix}= \begin{bmatrix} y_1-x_1\text{tan}(\theta_1(t))\\ y_2-x_2\text{tan}(\theta_2(t))\\ \vdots\\ y_n-x_n\text{tan}(\theta_n(t)) \end{bmatrix}, \end{eqnarray} where the parameter $n$ denotes the number of observation base stations, and we further express equation (\ref{AOAOriginal}) as the following equation: \begin{equation}\label{AoALinear} F(t)\vec{g}(t)=\vec{h}(t), \end{equation} where the time-varying matrix $F(t)\in\mathbb{R}^{n\times 2}$, the time-varying vector $\vec{g}(t)=[x(t),y(t)]^{\text{T}}$, and $\vec{h}(t)\in\mathbb{R}^{n}$ are defined accordingly. Besides, the error function for equation (\ref{AoALinear}) is written as $\vec{\epsilon}(t)=F(t)\vec{g}(t)-\vec{h}(t)$. Subsequently, the AZTND model (\ref{RACZNN}) for the target tracking scheme is presented as \begin{eqnarray}\label{RACZNNforAoA} \begin{split} F(t)\vec{\dot g}(t)=&-\dot F(t)\vec{g}(t)+\vec{\dot h}(t)-\xi(\vec{\epsilon}(t))\big{(}F(t)\vec{g}(t)\\ &-\vec{h}(t)\big{)} -\kappa(\vec{\epsilon}(t))\int_{0}^{t}(F(\delta)\vec{g}(\delta)-\vec{h}(\delta))\text{d}\delta. \end{split} \end{eqnarray} For comparison, the OZNN model for the AoA target tracking scheme is introduced as follows: \begin{equation}\label{OZNNforAoA} F(t)\vec{\dot g}(t)=-\dot F(t)\vec{g}(t)+\vec{\dot h}(t)-\gamma\big{(}F(t)\vec{g}(t)-\vec{h}(t)\big{)}, \end{equation} where $\gamma$ represents the scale parameter. The corresponding simulation results are provided in Figure \ref{AoAResult} and Figure \ref{AoANoiseResult}. Figure \ref{AoAResult} shows the target trajectory and system position error obtained by the target tracking scheme based on the OZNN model and the proposed AZTND model. Starting from the initial coordinate point $u_0$, the position error generated by the AZTND model (\ref{RACZNNforAoA}) converges to the $10^{-5}$ order, which is more accurate than the $10^{-3}$ order obtained by the OZNN model (\ref{OZNNforAoA}). Besides, as described in Figure \ref{AoANoiseResult} (a) and (c), the trajectory obtained by the OZNN model (\ref{OZNNforAoA}) does not converge to the true dynamic target trajectory when disturbed by constant noise $\vec{\vartheta}(t)=[20,20]^{\text{T}}$, while the trajectory generated by the AZTND model (\ref{RACZNNforAoA}) agrees well with the dynamic target trajectory. Figure \ref{AoANoiseResult} (b) and (d) show that the AZTND model (\ref{RACZNNforAoA}) with constant noise $\vec{\vartheta}(t)$ interference still converges to order $10^{-3}$, while the OZNN model (\ref{OZNNforAoA}) diverges. In general, this part fully demonstrates that the AZTND model (\ref{RACZNNforAoA}) can be effectively applied to the AoA target tracking scheme regardless of whether noise is present. \section{Conclusions}\label{Conclusion} The adaptive zeroing-type neural dynamics (AZTND) model has been proposed in this paper to solve the time-varying quadratic minimization (TVQM) problem in a perturbed environment. Unlike the original zeroing neural network models, the scale coefficient and feedback coefficient of the AZTND model have been designed from the perspective of adaptive optimization to expedite the global convergence and enhance the robustness of the model. 
Furthermore, this paper has presented the corresponding theorem and proof procedures from the stability perspective to investigate the global convergence of the AZTND model. Then, the corresponding numerical experiment is designed and executed, and the numerical results and visualization results of the investigation are given in tables and figures, respectively. Finally, the potential of the AZTND model in practical applications has been shown, and the simulative experiment demonstrates the effectiveness and superiority of the target tracking scheme based on the AZTND model.
\section{Introduction} Our formal model equation is the following: \begin{equation}\tag*{GPME}\label{model_problem} \partial_t u(t,x) + \Delta\Phi u(t,x) = f(t,x) \quad \mbox{for every } (t,x) \in (a,b)\times X. \end{equation} This equation is called the \emph{generalized porous medium equation} (GPME) or \emph{filtration equation} whenever $\Delta$ is the Laplace operator and $\Phi$ is the canonical extension to a function space of a map $\phi \colon \mathbb{R} \to \mathbb{R}$ such that $\phi$ is strictly monotone increasing, $\phi(\mathbb{R})=\mathbb{R}$ and $\phi(0)=0$. If $\phi(s)= s^m \coloneqq s|s|^{m-1}$, then the above equation is known as the \emph{porous medium equation} (PME) when $m>1$ and the \emph{fast diffusion equation} (FDE) when $0<m<1$. Clearly, when $m=1$ and $f\equiv0$ we recover the classic heat equation. The \ref{model_problem} has a long history and we invite the interested reader to look at the seminal book by J. L. V\'{a}zquez \cite{vazquez2007porous} for a detailed and exhaustive account. In recent years, research interest in properties of solutions of the \ref{model_problem} has focused on the Riemannian setting as can be seen from the increasing number of related works, see, for example, \cite{bonforte2008fast,lu2009local,grillo2014radial,vazquez2015fundamental,grillo2016smoothing,grillo2017porous,grillo2018porous,grillo2018porous2,bianchi2018laplacian,grillo2019blow,grillo2020nonlinear,grillo2021fast,grillo2021global,meglioli2021blow,dipierro2021global} and references therein for an overview of the most significant developments. In contrast, in the graph setting there are still relatively few results for the \ref{model_problem}. This is despite the fact that, on the one hand, the \ref{model_problem} is being used as a model equation for several real-world phenomena (e.g., the flow of gas through a porous medium, water infiltration or population dynamics) and, on the other hand, graphs are ubiquitous in many applied fields: in physics \cite{nakanishi1971graph}, biology \cite{lesne2006complex,stam2007graph,lieberman2005evolutionary}, image and signal processing \cite{shuman2013emerging,ta2010nonlocal,elmoataz2015p}, engineering \cite{deo2016graph}, etc. To make our setting more precise, let us fix a graph $G=(X,w,\kappa,\mu)$ where $X$ is a countable node set, $w \colon X \times X \to [0,\infty)$ is a symmetric map with zero diagonal, $\kappa \colon X \to [0, \infty)$ is a possibly nontrivial killing term and $\mu \colon X \to (0, \infty)$ is a strictly positive node measure on $X$. For notational convenience, let us fix $a=0$ and $b=T\in (0,\infty]$. We will focus our attention on the following Cauchy problem posed on $G$: \begin{equation}\tag*{Cauchy-GPME}\label{Model_Equation_graph} \begin{cases} \partial_t u(t,x) + \Delta\Phi u(t,x) = f(t,x) & \mbox{for every } (t,x) \in (0,T)\times X,\\ \lim_{t\to 0^+} u(t,x) = u_0(x) & \mbox{for every } x\in X \end{cases} \end{equation} where $f \colon (0,T)\times X \to \mathbb{R}$ and $u_0 \colon X \to \mathbb{R}$ are generic functions at the moment. In this setting, $\Delta$ represents the (formal) graph Laplacian operator defined by the formula $$ \Delta u(x) \coloneqq \frac{1}{\mu(x)}\sum_{y \in X}w(x,y)\left(u(x) - u(y)\right) + \frac{\kappa(x)}{\mu(x)}u(x). 
$$ The \ref{model_problem} on graphs belongs to the broader class of nonlinear diffusion equations with nonconstant diffusion since the edge-weight function $w(\cdot,\cdot)$ can be seen as a counterpart of the nonconstant diffusion coefficients $\{a_{i,j}(x)\}_{i,j=1}^d$ which characterize the second-order differential operator $\sum_{i=1}^d \partial_i \left( \sum_{j=1}^d a_{i,j}(x)\partial_ju(x)\right)$ acting on smooth functions on $\mathbb{R}^d$. We now give a brief overview of some recent results concerning nonlinear equations in the graph setting. For the counterpart of the Kazdan-Warner equation, see \cite{grigoryan2016kazdan, keller2018kazdan, liu2020multiple}. Concerning the existence and uniqueness of solutions for reaction-diffusion type equations on the lattice $\mathbb{Z}$, see \cite{slavik2019well,stehlik2017exponential}. For the Yamabe and other equations, see \cite{grigoryan2016yamabe, grigoryan2017existence, lin2021heat}. For parabolic equations involving the $p$-Laplacian, see \cite{mugnolo2013parabolic,hua2015time}. Finally, we mention some results concerning the existence and nonexistence of global nonnegative solutions of an abstract semilinear heat equation given in \cite{lin2017existence, lin2018blow-up, wu2021blow-up} which were recently extended to a general setting in \cite{lenz2021blowup}. With reference to the PME in the discrete setting, we highlight \cite{erbar2014gradient} where the authors study the (finite) discrete analogue of the Wasserstein gradient flow structure for the PME in $\mathbb{R}^n$. Concerning the existence and uniqueness of solutions of the \ref{Model_Equation_graph}, to the best of our knowledge there is an almost complete lack of a systematic treatment, even in the case of finite graphs, with one notable exception: In \cite[Corollary 5.4]{mugnolo2013parabolic}, exploiting an interesting link between the PME and the $p$-heat equation (which is well-known in $\mathbb{R}$, see, e.g., \cite[Section 3.4.3]{vazquez2007porous}), it is shown that if $G$ is an infinite tree, uniformly locally finite with $\mu\equiv 1$, then there exists a unique solution of the \ref{Model_Equation_graph} for $\phi(s)=s|s|^{m-1}$ for any $u_0 \in \ell^2(X,\mu)$ and $f\equiv0$. For more details and the definition of solutions in that setting we invite the interested reader to look at the mentioned paper. Our approach is different. The main goal of this article is to prove existence and uniqueness results for classes of solutions of the \ref{Model_Equation_graph} problem under the weakest possible hypotheses on the graph $G$, on the initial datum $u_0$ and on the forcing term $f$. To achieve this, we will borrow techniques from the theory of semigroups on Banach spaces and, as will become clear later, $\ell^1(X,\mu)$ will turn out to be the ideal space for our considerations. We will consider three classes of solutions: \emph{mild} (Definition \ref{def:weak_solution}), \emph{strong} (Definition \ref{def:strong_solution}) and \emph{classic} (Definition \ref{def:classical_solution}) which are characterized by an increasing ``regularity.'' In particular, mild solutions $u$ are limits of $\epsilon$-approximations $u_\epsilon$ that satisfy the \ref{Model_Equation_graph} for a time discretization well-adapted to $f$.
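For concreteness, on a finite graph the operator $\Delta\Phi$ is simply an explicit nonlinear map on $\mathbb{R}^{|X|}$ that can be evaluated entrywise. The following minimal sketch illustrates this on a toy path graph with unit weights, $\kappa\equiv 0$, $\mu\equiv 1$ and $\phi(s)=s|s|^{m-1}$; the graph, the function values and the helper names are purely illustrative choices and not part of the theory developed below.

\begin{verbatim}
# Illustrative sketch: evaluating (Delta Phi u)(x) on a toy finite graph.
# Assumed setting: path graph on 4 nodes, unit weights, kappa = 0, mu = 1.

def phi(s, m=2):
    # phi(s) = s |s|^(m-1), the PME nonlinearity
    return s * abs(s) ** (m - 1)

def graph_laplacian(u, w, kappa, mu):
    # Delta u(x) = (1/mu(x)) sum_y w(x,y)(u(x) - u(y)) + (kappa(x)/mu(x)) u(x)
    n = len(u)
    return [sum(w[x][y] * (u[x] - u[y]) for y in range(n)) / mu[x]
            + kappa[x] / mu[x] * u[x] for x in range(n)]

n = 4
w = [[1.0 if abs(i - j) == 1 else 0.0 for j in range(n)] for i in range(n)]
kappa, mu = [0.0] * n, [1.0] * n

u = [3.0, 4.0, 0.0, 0.0]
Lu = graph_laplacian([phi(s) for s in u], w, kappa, mu)  # = Delta Phi u
print(Lu)  # [-7.0, 23.0, -16.0, 0.0]
\end{verbatim}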
If the operator $\mathcal{L}\coloneqq\Delta\Phi$ is $m$-accretive, then it is possible to immediately infer the existence and uniqueness of mild solutions for the \ref{Model_Equation_graph} problem by appealing to well-known results, see \cite{benilan1972equations,benilan1988evolution,crandall1971generation}. Thus, the bulk of our work consists of establishing the $m$-accretivity of (a restriction of) the operator $\mathcal{L}$ on an appropriate Banach space. Accretivity of an operator $\mathcal{L}$ with respect to a norm $\|\cdot\|$ on a real Banach space $\mathfrak{E}=(E,\|\cdot\|)$ means that $$ \left\|(u - v) + \lambda \left(\mathcal{L}u - \mathcal{L}v\right) \right\|\geq \|u - v\| $$ for every $u,v \in \textnormal{dom}\left(\mathcal{L}\right)\subseteq E$ and for every $\lambda >0$. Furthermore, $m$-accretivity means that $\mathcal{L}$ is accretive and $\operatorname{id} +\lambda\mathcal{L}$ is surjective for every $\lambda>0$. We note that accretivity implies that $\operatorname{id} +\lambda\mathcal{L}$ is injective; thus, $m$-accretivity gives that $\operatorname{id} +\lambda\mathcal{L}$ is bijective. For a more detailed introduction to the concepts of accretivity and $m$-accretivity, see Subsection \ref{ssec:m-accretivity}. As can be seen directly, the accretivity property depends on both the operator $\mathcal{L}$ and on the underlying Banach space. For example, the graph Laplacian $\Delta$ on $\ell^p(X,\mu)$ is $m$-accretive on any finite graph for $p \in [1,\infty)$, see Proposition \ref{lem:m-accretivity_for_finite_graphs}. On the other hand, the nonlinear operator $\mathcal{L}$ can fail to be accretive with respect to the $\ell^2$-norm, see Example~\ref{ex:1}. What is crucial for our analysis is that the restriction of $\mathcal{L}$ to a suitable dense subset of $\ell^1(X,\mu)$ will be shown to be accretive for any graph. For $m$-accretivity to hold, some additional hypotheses are required, as will be discussed in what follows. We note that there is a parallel development concerning the surjectivity of the formal graph Laplacian $\Delta$ which is always surjective on infinite, locally finite graphs but not necessarily surjective in the not locally finite case, see \cite{ceccherini2012surjectivity,koberstein2020note}. For a complete introduction to the notation, we refer to Section \ref{sec:preliminaries}. We denote by $\ell^{1,+}(X,\mu)$ and $\ell^{1,-}(X,\mu)$ the cones of nonnegative and nonpositive integrable functions, respectively, and by $\mathcal{L}$ the operator \begin{align*} &\mathcal{L} \colon \textnormal{dom}\left( \mathcal{L} \right)\subseteq \ell^{1}\left(X,\mu\right) \to \ell^{1}\left(X,\mu\right) , \\ &\textnormal{dom}\left( \mathcal{L} \right)\coloneqq\left\{ u \in \ell^{1}\left(X,\mu\right) \mid \Phi u\in \textnormal{dom}\left(\Delta\right), \Delta\Phi u \in \ell^{1}\left(X,\mu\right) \right\} \end{align*} whose action is given by $$ \mathcal{L}u\coloneqq \Delta\Phi u. $$ For a subset $\Omega \subseteq \textnormal{dom}\left( \mathcal{L} \right)$, we write $\mathcal{L}_{|\Omega}$ for the restriction of $\mathcal{L}$ to $\Omega$. We now state the main results, whose proofs can be found in Section \ref{ssec:proofs}. The first main result discusses the accretivity and $m$-accretivity of $\mathcal{L}$. \begin{theorem}\label{thm:main1} Let $G=(X,w,\kappa,\mu)$ be a graph. Then, there exists a dense subset $\Omega \subseteq \textnormal{dom}(\mathcal{L})$ such that $\mathcal{L}_{|\Omega}$ is accretive.
Moreover, for every $\lambda >0$ and for every $g \in \ell^{1,\pm}(X,\mu)$ there exists a unique $u \in \ell^{1,\pm}(X,\mu)\cap\Omega$ such that \begin{equation*}\label{eq:main_equation} \left( \operatorname{id} + \lambda\mathcal{L}\right)u = g. \end{equation*} If one of the following extra hypotheses holds: \begin{enumerate}[label={\upshape(\bfseries H\arabic*)},wide = 0pt, leftmargin = 3em] \item\label{m-accretivity_A}$G$ is locally finite; \item\label{m-accretivity_B} $\inf_{x \in X}\mu(x)>0$; \item\label{m-accretivity_C} $\sup_{x \in X}\frac{\sum_{y \in X}w(x,y)}{\mu(x)}<\infty$ and $\Phi\left(\ell^1(X,\mu)\right)\subseteq \ell^1(X,\mu)$; \end{enumerate} then $\operatorname{id}+\lambda\mathcal{L}$ restricted to $\Omega$ is also surjective. In particular, $\mathcal{L}_{|\Omega}$ is $m$-accretive. Moreover, in all cases, the solution $u$ satisfies the contractivity estimate $$||u||\leq ||g||.$$ \end{theorem} The second main result uses general theory along with the $m$-accretivity established in the first result to yield existence and uniqueness of mild solutions for the \ref{Model_Equation_graph}. For this, we consider two cases, namely, when the initial data is nonnegative or nonpositive and when the initial data changes sign. In the second case, we need to add one of the extra hypotheses appearing in the first result above to guarantee existence. For the definitions of the various types of solutions for the \ref{Model_Equation_graph} and the connections between them, see Section~\ref{sec:Cauchy_model_problem}. \begin{theorem}\label{thm:main2} Let $G=(X,w,\kappa,\mu)$ be a graph. Let \begin{enumerate}[i)] \item\label{hp1} $u_0 \in \ell^1(X,\mu)$; \item\label{hp2} $f \in L^1_{\textnormal{loc}}\left([0,T]; \ell^1\left(X,\mu\right)\right)$. \end{enumerate} If one of the following additional conditions holds: \begin{enumerate}[a)] \item\label{item:nonnegativity/nonpositivity} $u_0,f(t)\geq 0$ (or $\leq 0$) for all $t\geq 0$; \item\label{hpA} $u_0$ or $f(t)$ changes sign and at least one of \ref{m-accretivity_A}, \ref{m-accretivity_B} or \ref{m-accretivity_C} is satisfied; \end{enumerate} then there exists a unique mild solution $u$ of the \ref{Model_Equation_graph}. Furthermore, $u(t) \in \ell^1(X,\mu)$ for all $t \in [0,T]$ and for every $\epsilon >0$ there exists a continuous function $\delta \colon [0,\infty) \to [0,\infty)$ such that $\delta(0)=0$ and if $u_\epsilon$ is an $\epsilon$-approximate solution of the \ref{Model_Equation_graph}, then \begin{equation}\label{uniform_limit} \| u(t) - u_\epsilon(t)\|\leq \delta(\epsilon) \qquad \mbox{for } t \in [0,T-\epsilon]. \end{equation} Moreover, for any pair $(u_0,f),(\hat{u}_0,\hat{f})$, the corresponding mild solutions $u,\hat{u} \in C\left([0,T]; \ell^1\left(X,\mu\right)\right)$ satisfy \begin{equation}\label{contraction_of_solutions} \left\| u(t_2) - \hat{u}(t_2) \right\| \leq \left\| u(t_1) - \hat{u}(t_1) \right\| + \int_{t_1}^{t_2}\left\| f(s) - \hat{f}(s) \right\|\, ds, \quad \forall \, 0\leq t_1<t_2 \leq T. \end{equation} Finally, under hypothesis \ref{item:nonnegativity/nonpositivity}, $u(t)\geq 0$ (or $\leq 0$) for every $t\geq 0$. \end{theorem} The paper is organized in the following way: \begin{itemize} \item In Section \ref{sec:preliminaries} we present the main definitions and describe the tools that we will use. \item In Section \ref{sec:Cauchy_model_problem} we introduce the abstract Cauchy problem along with a classification of types of solutions. 
\item Section \ref{sec:exisntence&uniqueness} is the core of the paper: We present the proofs of Theorem~\ref{thm:main1} and Theorem~\ref{thm:main2} with an introductory part about the main issues to be addressed. As a concluding application, in Corollary \ref{cor:application} we prescribe some hypotheses on the graph that guarantee that a mild solution is indeed a classic solution. \end{itemize} Since the proofs involved in Section \ref{sec:exisntence&uniqueness} are technical and rely on several auxiliary results, we collect them in Appendix \ref{sec:auxiliary_results} and Appendix \ref{sec:appendix2}. \section{Preliminaries}\label{sec:preliminaries} In this section we collect background material for the graph setting and the main mathematical tools that we will use in our proofs. \subsection{Notation} Given a set $X$ and a real function space $\mathfrak{F}\subseteq\{ u \colon X \to \mathbb{R} \}$, we denote by $\operatorname{id}\colon \mathfrak{F} \to \mathfrak{F}$ the identity operator. If $\phi \colon \textnormal{dom}\left(\phi\right)\subseteq \mathbb{R} \to \mathbb{R}$ is a function, then we denote by the capital letter $\Phi$ the canonical extension of $\phi$ to $\mathfrak{F}$, that is, the operator $\Phi \colon \textnormal{dom}\left(\Phi\right)\subseteq \mathfrak{F} \to \mathfrak{F}$ given by \begin{align*} &\textnormal{dom}\left(\Phi\right)\coloneqq \left\{ u \in \mathfrak{F} \mid u(x) \in \textnormal{dom}\left(\phi\right)\, \forall\, x \in X \right\},\\ &\Phi u(x)\coloneqq \phi(u(x)). \end{align*} Given a pair of real-valued functions $u$ and $v$ on $X$, we write $u\geq v$ if $u(x)\geq v(x)$ for every $x \in X$. All other ordering symbols are defined accordingly. Given a real Banach space $\mathfrak{E}=(E,\|\cdot\|)$, consider an $E$-valued function $f \colon [0,T] \subset \mathbb{R} \to E$, $t \mapsto f(t) \in E$. Such a function $f$ is called simple if $f$ is of the form $$ f(t) = \sum_{k=1}^n e_k\mathds{1}_{I_k}(t),\qquad e_k \in E, $$ where $I_k$ are Lebesgue measurable subsets of $[0,T]$ and $\mathds{1}_{I_k}$ is the indicator function of $I_k$. The integral of an $E$-valued simple function is defined by $$ \int_0^T f(t) \, dt\coloneqq \sum_{k=1}^n e_k m\left(I_k\right), $$ where $m(\cdot)$ is the Lebesgue measure on $[0,T]$. A function $f$ is (strongly) measurable if there exists a sequence $\{f_n\}_{n\in \mathbb{N}}$ of simple functions such that $f_n(t) \to f(t)$ in norm for almost every (a.e.) $t$ in $[0,T]$. A strongly measurable function $f$ is Bochner integrable if there exists a sequence of simple functions such that $f_n \to f$ pointwise a.e.\ in $[0,T]$ and $$ \lim_{n\to \infty}\int_0^T \|f_n(t) - f(t)\| \, dt =0, $$ or equivalently, by a theorem of Bochner, if and only if $\int_0^T \|f(t)\|\, dt < \infty$. The integral of $f$ is then defined by $$ \int_0^T f(t) \, dt= \lim_{n\to \infty}\int_0^T f_n(t) \, dt. $$ We denote the space of Bochner integrable functions from $[0,T]$ to $E$ by $$ L^1([0,T] ; E)\coloneqq \left\{ f \colon [0,T] \to E \mbox{ measurable}\mid \int_0^T \|f(t)\|\, dt < \infty\right\}. $$ In the same fashion, if $T=\infty$, we denote by $L^1_{\textnormal{loc}}([0,T] ; E)$ the space of $E$-valued functions that are locally Bochner integrable, that is, $f \in L^1_{\textnormal{loc}}([0,T] ; E)$ if and only if $f \in L^1([0,a] ; E)$ for every $a\in (0,\infty)$. Clearly, if $T<\infty$, then $L^1_{\textnormal{loc}}([0,T] ; E)=L^1([0,T] ; E)$. 
A function $f \in L^1_{\textnormal{loc}}([0,T] ; E)$ is weakly differentiable with weak derivative $g \in L^1_{\textnormal{loc}}([0,T] ; E)$ if $$ \int_0^T f(t)\eta'(t)\, dt = - \int_0^T g(t)\eta(t)\, dt, \quad\forall\, \eta \in C_c^\infty(0,T), $$ where the integrals are understood in the Bochner sense. The first Sobolev space for locally Bochner integrable functions is defined as $$ W^{1,1}_{\textnormal{loc}}([0,T] ;E)\coloneqq \left\{ f \in L^1_{\textnormal{loc}}([0,T] ; E) \mid f \mbox{ is weakly differentiable} \right\}. $$ Let us point out that $f \in W^{1,1}_{\textnormal{loc}}([0,T] ;E)$ if and only if $$ f(t) = e_0 + \int_0^t g(s)\, ds. $$ Moreover, $f$ is absolutely continuous and a.e. differentiable in $[0,T]$ with $f'(t)=g(t)$. For a review of integration and weak derivatives of vector-valued functions, see, for example, \cite[Chapter 5, Sections 4 and 5]{yosida1965functional} and \cite[Chapter 1, Section 4.5]{cazenave1998introduction}. Given an operator $\mathcal{L} \colon \textnormal{dom}(\mathcal{L}) \subseteq E \to E$ and a subset $\Omega \subseteq \textnormal{dom}(\mathcal{L})$, then the restriction of $\mathcal{L}$ to $\Omega$ is the operator $\mathcal{L}_{|\Omega} \colon \textnormal{dom}(\mathcal{L}_{|\Omega}) \subseteq E \to E$ such that \begin{equation*} \textnormal{dom}(\mathcal{L}_{|\Omega})= \Omega, \qquad \mathcal{L}_{|\Omega}u= \mathcal{L}u \quad \forall\, u \in \Omega. \end{equation*} As a final piece of notation we mention that if $E\subset \{u \colon X \to \mathbb{R}\}$, then we write $f(t,x)$ to indicate the value of $f(t)\in E$ at $x\in X$. \subsection{The graph setting} For a detailed introduction to the graph setting as presented here, see \cite{keller2021graphs}. We begin with the definition of a graph. \begin{definition}[Graph] A \textit{graph} is a quadruple $G=(X,w,\kappa,\mu)$ given by \begin{itemize} \item a countable set of \emph{nodes} $X$; \item a nonnegative \emph{edge-weight} function $w\colon X\times X \to [0,\infty)$; \item a nonnegative \emph{killing term} $\kappa \colon X \to [0,\infty)$; \item a positive \emph{node measure} $\mu \colon X \to (0,\infty)$ \end{itemize} where the edge-weight function $w$ satisfies: \begin{enumerate}[label={\upshape(\bfseries A\arabic*)},wide = 0pt, leftmargin = 3em] \item\label{assumption:symmetry} Symmetry: $w(x,y)=w(y,x)$ for every $x,y \in X$; \item\label{assumption:loops} No loops: $w(x,x)=0$ for every $x \in X$; \item\label{assumption:degree} Finite sum: $\sum_{y\in X} w(x,y) < \infty$ for every $x \in X$. \end{enumerate} \end{definition} If the cardinality of the node set is finite, i.e., $|X|<\infty$, then $G$ is called a \emph{finite graph}, otherwise, $G$ is called an \emph{infinite graph}. The non-zero values $w(x,y)$ of the edge-weight function $w$ are called \emph{weights} associated with the \emph{edge} $\{x,y\}$. In this case we will write $x\sim y$ meaning that $x$ is \emph{connected} to $y$. On the other hand, if $w(x,y)=0$, then we will write $x\nsim y$ meaning that $x$ and $y$ are not connected by an edge. A \emph{walk} is a (possibly infinite) sequence of nodes $\{x_{i}\}_{i\geq 0}$ such that $x_{i}\sim x_{i+1}$. A \emph{path} is a walk with no repeated nodes. A graph is \emph{connected} if there is a finite walk connecting every pair of nodes, that is, for any pair of nodes $x, y$ there is a finite walk such that $x = x_{0}\sim x_{1}\sim\cdots\sim x_{n}=y$. 
Moreover, we will say that a subset $A \subseteq X$ is connected if for every pair of nodes $x,y \in A$ there exists a finite walk connecting $x$ and $y$ all of whose nodes are in $A$. A subset $A \subseteq X$ is a \emph{connected component} of $X$ if $A$ is maximal with respect to inclusion. A graph is said to be \emph{locally finite} if for every $x \in X$ there are at most a finite number of nodes $y$ such that $w(x,y)\neq 0$. We define the \emph{degree} $\deg$ and \emph{weighted degree} $\operatorname{Deg}$ of a node $x$ as $$ \deg(x) \coloneqq \sum_{{y\in X}} w(x,y) + \kappa(x) \quad \textup{ and } \quad \operatorname{Deg}(x) \coloneqq \frac{\deg(x)}{\mu(x)}.$$ Clearly, by \ref{assumption:degree}, $\deg(x)$ and $\operatorname{Deg}(x)$ are finite for every $x\in X$. Observe that, if $\kappa\equiv 0$ and $w(x,y)\in \{0,1\}$, then $\deg$ corresponds to the standard definition in the literature on finite graphs (e.g., \cite{estrada2015first}). The set of real-valued functions on $X$ is denoted by $C(X)$ and $C_c(X)$ denotes the set of functions on $X$ with finite support. As usual, for $p\in [1,\infty]$ we define the $\ell^p(X,\mu)$ subspaces as $$ \ell^p(X,\mu)\coloneqq\begin{cases} \left\{ u \in C(X) \mid \sum_{x\in X} |u(x)|^p\mu(x)<\infty \right\} & \mbox{for } p \in [1,\infty),\\ \left\{ u \in C(X) \mid \sup_{x\in X}|u(x)|<\infty \right\} & \mbox{for } p =\infty \end{cases} $$ with their norms $$ \|u\|_p\coloneqq \begin{cases} \left(\sum_{x\in X} |u(x)|^p\mu(x)\right)^{{1}/{p}} & \mbox{for } p \in [1,\infty),\\ \sup_{x\in X}|u(x)| & \mbox{for } p =\infty \end{cases} $$ and with the standard remark that the $\ell^2$-norm is induced by the inner product $$ \langle u, v \rangle_{\ell^2} \coloneqq \sum_{x\in X} u(x)v(x)\mu(x) $$ making $\ell^2(X,\mu)$ into a Hilbert space. In general, we will use the convention $\|\cdot\|\coloneqq\|\cdot\|_1$ since we will work almost always with the $\ell^1$-norm. However, in case of possible ambiguity in the text, we will specify the norm. In addition to the previous standard definitions, we introduce the following restrictions to the nonnegative/nonpositive cones: \begin{align*} \ell^{1,+}\left(X,\mu\right)\coloneqq \ell^{1}\left(X,\mu\right) \cap \left\{ u \in C(X) \mid u\geq 0 \right\}, \quad \ell^{1,-}\left(X,\mu\right)\coloneqq -\ell^{1,+}\left(X,\mu\right). \end{align*} We now define the \emph{formal graph Laplacian} $\Delta \colon \textnormal{dom}\left(\Delta\right) \subseteq C(X) \to C(X)$ associated to the graph $G=(X,w,\kappa,\mu)$ by \begin{subequations} \begin{equation} \textnormal{dom}\left(\Delta\right)\coloneqq\{ u \in C(X) \mid \sum_{y\in X}w(x,y)|u(y)| < \infty \quad \forall x \in X \},\label{formal_laplacian1} \end{equation} \begin{align} \Delta u(x)&\coloneqq \frac{1}{\mu(x)}\sum_{y\in X} w(x,y)\left(u(x) - u(y)\right) + \frac{\kappa(x)}{\mu(x)}u(x)\label{formal_laplacian2} \\ &= \operatorname{Deg}(x)u(x) - \frac{1}{\mu(x)}\sum_{y\in X} w(x,y) u(y).\nonumber \end{align} \end{subequations} \begin{remark}\label{rem:1} We observe that if $u\geq 0$, then $\Delta u(x)$ is always defined as an extended real-valued function taking values in $[-\infty,\infty)$. Furthermore, if $u \geq 0$, then $u \in \textnormal{dom}\left(\Delta\right)$ if and only if $|\Delta u(x)|< \infty$ for every $x \in X$ if and only if $\Delta u(x)>-\infty$ for every $x\in X$ if and only if $\sum_{y\in X} w(x,y) u(y)>-\infty$ for every $x\in X$. 
\end{remark} \begin{figure}[!b] \centering \begin{minipage}{.45\textwidth} \begin {center} \begin {tikzpicture}[-latex ,auto ,node distance =1.5 cm and 1cm ,on grid , semithick , whitestyle/.style={circle,draw,fill=white,minimum size=1cm}, ghost/.style={circle,fill=white,minimum size=0.7cm}] \node[whitestyle] (C){$x_8$}; \node[whitestyle] (A) [above left=of C] {$x_6$}; \node[whitestyle] (B) [above right =of C] {$x_5$}; \node[whitestyle] (D) [below left=of A] {$x_7$}; \node[whitestyle] (E) [below right=of B] {$x_9$}; \node[whitestyle] (F) [above right=of A] {$x_{4}$}; \node[ghost] (g1) [above =of D] {}; \node[ghost] (g2) [above =of E] {}; \node[ghost] (g3) [above =of g1] {}; \node[ghost] (g4) [above =of g2] {}; \node[whitestyle] (G) [above =of g4] {$x_3$}; \node[whitestyle] (H) [above =of g3] {$x_0$}; \node[whitestyle] (I) [above=of H] {$x_1$}; \node[whitestyle] (L) [above=of G] {$x_2$}; \path (C) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (C); \path (A) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (A); \path (C) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (C); \path (B) edge [double=black,-] node[] {} (F); \path (F) edge [double=black,-] node[] {} (B); \path (F) edge [double=black,-] node[] {} (H); \path (H) edge [double=black,-] node[] {} (F); \path (H) edge [double=black,-] node[] {} (D); \path (D) edge [double=black,-] node[] {} (H); \path (D) edge [double=black,-] node[] {} (C); \path (C) edge [double=black,-] node[] {} (D); \path (D) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (D); \path (C) edge [double=black,-] node[] {} (E); \path (E) edge [double=black,-] node[] {} (C); \path (E) edge[double=black,-] node[] {} (G); \path (G) edge[double=black,-] node[] {} (E); \path (G) edge[double=black,-] node[] {} (L); \path (L) edge[double=black,-] node[] {} (G); \path (I) edge[double=black,-] node[] {} (H); \path (H) edge[double=black,-] node[] {} (I); \path (I) edge[double=black,-] node[above] {} (L); \path (L) edge[double=black,-] node[above] {} (I); \path (F) edge[double=black,-] node[] {} (G); \path (G) edge[double=black,-] node[] {} (F); \draw[latex'-latex',double] (G) edge[double=black,-] node [left] {} (H); \end{tikzpicture} \end{center}\end{minipage} \begin{minipage}{.45\textwidth} \centering \begin {tikzpicture}[-latex ,auto ,node distance =1.5 cm and 1cm ,on grid , semithick , whitestyle/.style={circle,draw,fill=white,minimum size=1cm}, blackstyle/.style ={ circle ,top color =black, bottom color = black, draw, white, minimum size =1cm}, gray-orangestyle/.style ={ circle ,top color =lightgray!70 , bottom color = lightgray!70 , draw, black, text=black, minimum size =1cm}, green-redstyle/.style ={ circle ,top color = darkpastelgreen , bottom color =darkpastelgreen , draw, black , text=black, minimum size=1cm}, greenstyle/.style={circle ,top color =darkspringgreen , bottom color = darkspringgreen, draw,black , text=black ,minimum size=1cm}, lightgreenstyle-red/.style={circle ,top color =lightgreen!70 , bottom color = lightgreen!70, draw, black, text=black ,minimum size=1cm}, ghost/.style={circle,fill=white,minimum size=0.7cm}] \node[greenstyle] (C){$x_8$}; \node[greenstyle] (A) [above left=of C] {$x_6$}; \node[greenstyle] (B) [above right =of C] {$x_5$}; \node[lightgreenstyle-red] (D) [below left=of A] {$x_7$}; \node[lightgreenstyle-red] (E) [below right=of B] {$x_9$}; \node[lightgreenstyle-red] (F) [above right=of A] {$x_{4}$}; \node[ghost] (g1) 
[above =of D] {}; \node[ghost] (g2) [above =of E] {}; \node[ghost] (g3) [above =of g1] {}; \node[ghost] (g4) [above =of g2] {}; \node[gray-orangestyle] (G) [above =of g4] {$x_3$}; \node[gray-orangestyle] (H) [above =of g3] {$x_0$}; \node[whitestyle] (I) [above=of H] {$x_1$}; \node[whitestyle] (L) [above=of G] {$x_2$}; \path (C) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (C); \path (A) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (A); \path (C) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (C); \path (B) edge [double=black,-] node[] {} (F); \path (F) edge [double=black,-] node[] {} (B); \path (F) edge [double=black,-] node[] {} (H); \path (H) edge [double=black,-] node[] {} (F); \path (H) edge [double=black,-] node[] {} (D); \path (D) edge [double=black,-] node[] {} (H); \path (D) edge [double=black,-] node[] {} (C); \path (C) edge [double=black,-] node[] {} (D); \path (D) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (D); \path (C) edge [double=black,-] node[] {} (E); \path (E) edge [double=black,-] node[] {} (C); \path (E) edge[double=black,-] node[] {} (G); \path (G) edge[double=black,-] node[] {} (E); \path (G) edge[double=black,-] node[] {} (L); \path (L) edge[double=black,-] node[] {} (G); \path (I) edge[double=black,-] node[] {} (H); \path (H) edge[double=black,-] node[] {} (I); \path (I) edge[double=black,-] node[above] {} (L); \path (L) edge[double=black,-] node[above] {} (I); \path (F) edge[double=black,-] node[] {} (G); \path (G) edge[double=black,-] node[] {} (F); \draw[latex'-latex',double] (G) edge[double=black,-] node [left] {} (H); \end{tikzpicture} \end{minipage} \caption{Example of a connected graph $G=(X,w,\kappa,\mu)$ (in the left picture) with $X=\left\{x_i \mid i=0,\ldots, 9\right\}$ and a proper subset $A=\left\{x_4,x_5,x_6,x_7,x_8,x_{9}\right\}$. A black line between two nodes $x_i,x_j \in X$ means that $x_i\sim x_j$. In the right picture, the interior $\mathring{A}=\{x_5,x_6,x_8\}$ is colored in green while the interior boundary $\mathring{\partial} A=\{x_4,x_7,x_9\}$ is colored in light green. The nodes in the exterior boundary $\mathbullet{\partial} A = \{x_0, x_3\}\subseteq X\setminus A$ are colored in light gray.}\label{fig:interior-boundary} \end{figure} As we will see in the upcoming sections, it will be useful to deal with subgraphs of a graph. We start by discussing the notion of the interior and two notions of boundary for a subset of the node set. Given a graph $G=(X,w,\kappa,\mu)$ and a subset $A\subset X$, then $$ \mathring{A}\coloneqq\left\{ x \in A \mid x \nsim y \mbox{ for every } y \in X\setminus A \right\} $$ is called the \emph{interior} of $A$ and the elements of $\mathring{A}$ are called \emph{interior nodes} of $A$. On other hand, the sets of nodes \begin{align*} &\mathring{\partial} A\coloneqq\left\{ x \in A \mid x \sim y \mbox{ for some } y \in X\setminus A \right\},\\ &\mathbullet{\partial} A\coloneqq\left\{ y \in X\setminus A \mid y \sim x \mbox{ for some } x \in A \right\} \end{align*} are called the \emph{interior boundary} and the \emph{exterior boundary} of $A$, respectively. Although these notions are rather standard in the graph setting, we illustrate the definitions with an example in Figure~\ref{fig:interior-boundary}. We next introduce the concept of an induced subgraph. 
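Before doing so, let us note that the interior and boundary sets just introduced are purely combinatorial and can be computed mechanically. As an illustration, the following minimal sketch, which assumes unit edge weights and takes the edge set read off from Figure \ref{fig:interior-boundary}, recovers the sets listed in the caption of that figure.

\begin{verbatim}
# Illustrative sketch: interior and boundary sets of A for the graph of
# Figure 1.  The edge list is read off from the picture; unit weights assumed.
edges = [("x8","x6"), ("x6","x5"), ("x8","x5"), ("x5","x4"), ("x4","x0"),
         ("x0","x7"), ("x7","x8"), ("x7","x6"), ("x8","x9"), ("x9","x3"),
         ("x3","x2"), ("x1","x0"), ("x1","x2"), ("x4","x3"), ("x3","x0")]

nbrs = {}
for x, y in edges:                 # w is symmetric, so record both directions
    nbrs.setdefault(x, set()).add(y)
    nbrs.setdefault(y, set()).add(x)

A = {"x4", "x5", "x6", "x7", "x8", "x9"}

interior          = {x for x in A if nbrs[x] <= A}   # no neighbour outside A
interior_boundary = A - interior                     # some neighbour outside A
exterior_boundary = {y for y in nbrs if y not in A and nbrs[y] & A}

print(interior)           # {'x5', 'x6', 'x8'}
print(interior_boundary)  # {'x4', 'x7', 'x9'}
print(exterior_boundary)  # {'x0', 'x3'}
\end{verbatim}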
\begin{definition}[Induced subgraph]\label{def:induced_subgraph} We say that a graph $F=(A, w',\kappa', \mu')$ is an \emph{induced subgraph} of $G$, and we write $F\subset G$, if \begin{itemize} \item $A\subset X$; \item $w' \equiv w_{|A\times A}$; \item $\kappa'(x) = \kappa(x)$ for every $x \in\mathring{A}$; \item $\mu' \equiv \mu_{|A}$ \end{itemize} where $w_{|A\times A}$ and $\mu_{|A}$ denote the restrictions of $w$ and $\mu$ to the sets $A\times A$ and $A$, respectively. We call $G$ the \emph{host graph} or the \emph{supergraph}. The corresponding formal graph Laplacian for a subgraph $F$ is defined according to \eqref{formal_laplacian1} and \eqref{formal_laplacian2} where the quadruple $(X,w,\kappa,\mu)$ is replaced by $(A,w',\kappa',\mu')$. Observe that we do not require that $\kappa' \equiv \kappa$ on $\mathring{\partial}A$. Different choices of $\kappa'$ on $\mathring{\partial} A$ will produce different subgraphs. We say that $F$ is the \emph{canonical induced subgraph} if $\kappa'=\kappa_{|A}$. \end{definition} \begin{remark} Our notion of induced subgraph is intrinsically related to the killing term $\kappa'$. If we do not consider any $\kappa$, then the definition of induced subgraph is equivalent to the classical one, see for example \cite[Definition 2.2]{estrada2015first}. \end{remark} Of particular interest is the killing term $\kappa_{\textnormal{dir}}$ that arises from \textquotedblleft Dirichlet boundary conditions.\textquotedblright \begin{figure}[!t] \centering \begin{minipage}{.45\textwidth} \begin {center} \begin{tikzpicture}[-latex ,auto ,node distance =1.5 cm and 1cm ,on grid , semithick , whitestyle/.style={circle,draw,fill=white,minimum size=1cm}, blackstyle/.style ={ circle ,top color =black, bottom color = black, draw, white, minimum size =1cm}, white-redstyle/.style ={ circle ,top color =lightgray!70 , bottom color = lightgray!70, draw, black, text=black, minimum size =1cm}, green-redstyle/.style ={ circle ,top color = lightgreen!70 , bottom color =lightgreen!70 , draw, black , text=black, minimum size=1cm}, greenstyle/.style={circle ,top color =darkspringgreen , bottom color = darkspringgreen, draw,black , text=black ,minimum size=1cm}, ghost/.style={circle,fill=white,minimum size=0.7cm}] \node[greenstyle] (C){$x_8$}; \node[greenstyle] (A) [above left=of C] {$x_6$}; \node[greenstyle] (B) [above right =of C] {$x_5$}; \node[green-redstyle] (D) [below left=of A] {$x_7$}; \node[green-redstyle] (E) [below right=of B] {$x_9$}; \node[green-redstyle] (F) [above right=of A] {$x_{4}$}; \node[ghost] (g1) [above =of D] {}; \node[ghost] (g2) [above =of E] {}; \node[ghost] (g3) [above =of g1] {}; \node[ghost] (g4) [above =of g2] {}; \node[white-redstyle] (G) [above =of g4] {$x_3$}; \node[white-redstyle] (H) [above =of g3] {$x_0$}; \node[whitestyle] (I) [above=of H] {$x_1$}; \node[whitestyle] (L) [above=of G] {$x_2$}; \path (C) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (C); \path (A) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (A); \path (C) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (C); \path (B) edge [double=black,-] node[] {} (F); \path (F) edge [double=black,-] node[] {} (B); \path (D) edge [double=black,-] node[] {} (C); \path (C) edge [double=black,-] node[] {} (D); \path (D) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (D); \path (C) edge [double=black,-] node[] {} (E); \path (E) edge [double=black,-] node[] {} (C); \path (G) 
edge[double=black,-] node[] {} (L); \path (L) edge[double=black,-] node[] {} (G); \path (I) edge[double=black,-] node[] {} (H); \path (H) edge[double=black,-] node[] {} (I); \path (I) edge[double=black,-] node[above] {} (L); \path (L) edge[double=black,-] node[above] {} (I); \draw[latex'-latex',double] (G) edge[double=black,-] node [left] {} (H); \end{tikzpicture} \end{center}\end{minipage} \begin{minipage}{.45\textwidth} \centering \begin {tikzpicture}[-latex ,auto ,node distance =1.5 cm and 1cm ,on grid , semithick , whitestyle/.style={circle,draw,fill=white,minimum size=1cm}, blackstyle/.style ={ circle ,top color =black, bottom color = black, draw, text=white, minimum size =1cm}, white-redstyle/.style ={ circle ,top color =lightgray!70 , bottom color = lightgray!70 , draw, double=red,very thick, black, text=black, minimum size =1cm}, green-redstyle/.style ={ circle ,top color = lightgreen!70 , bottom color =lightgreen!70 , draw,double=red,very thick, black , text=black, minimum size=1cm}, greenstyle/.style={circle ,top color =darkspringgreen , bottom color = darkspringgreen, draw,black , text=black ,minimum size=1cm}, ghost/.style={circle,fill=white,minimum size=0.7cm}] \node[greenstyle] (C){$x_8$}; \node[greenstyle] (A) [above left=of C] {$x_6$}; \node[greenstyle] (B) [above right =of C] {$x_5$}; \node[green-redstyle] (D) [below left=of A] {$x_7$}; \node[green-redstyle] (E) [below right=of B] {$x_9$}; \node[green-redstyle] (F) [above right=of A] {$x_{4}$}; \node[ghost] (g1) [above =of D] {}; \node[ghost] (g2) [above =of E] {}; \node[ghost] (g3) [above =of g1] {}; \node[ghost] (g4) [above =of g2] {}; \node[white-redstyle] (G) [above =of g4] {$x_3$}; \node[white-redstyle] (H) [above =of g3] {$x_0$}; \node[whitestyle] (I) [above=of H] {$x_1$}; \node[whitestyle] (L) [above=of G] {$x_2$}; \path (C) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (C); \path (A) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (A); \path (C) edge [double=black,-] node[] {} (B); \path (B) edge [double=black,-] node[] {} (C); \path (B) edge [double=black,-] node[] {} (F); \path (F) edge [double=black,-] node[] {} (B); \path (D) edge [double=black,-] node[] {} (C); \path (C) edge [double=black,-] node[] {} (D); \path (D) edge [double=black,-] node[] {} (A); \path (A) edge [double=black,-] node[] {} (D); \path (C) edge [double=black,-] node[] {} (E); \path (E) edge [double=black,-] node[] {} (C); \path (G) edge[double=black,-] node[] {} (L); \path (L) edge[double=black,-] node[] {} (G); \path (I) edge[double=black,-] node[] {} (H); \path (H) edge[double=black,-] node[] {} (I); \path (I) edge[double=black,-] node[above] {} (L); \path (L) edge[double=black,-] node[above] {} (I); \draw[latex'-latex',double] (G) edge[double=black,-] node [left] {} (H); \path (H) edge [double=red, dashed,-] node[midway,below] {$\textcolor{red2}{b_{\textnormal{dir}}}$} (F); \path (H) edge [double=red, dashed,-] node[] {$\textcolor{red2}{b_{\textnormal{dir}}}$} (D); \path (E) edge[double=red, dashed,-] node[] {$\textcolor{red2}{b_{\textnormal{dir}}}$} (G); \path (F) edge[double=red, dashed,-] node[] {$\textcolor{red2}{b_{\textnormal{dir}}}$} (G); \draw[latex'-latex',double] (G) edge[black,-] node [left] {} (H); \end{tikzpicture} \end{minipage} \caption{Continuation of the example in Figure \ref{fig:interior-boundary}. 
The left picture gives a canonical induced subgraph $F\subset G$: In general, $F$ is not ``affected'' by the complementary set $X\setminus A$ since the weights of the edges lying in $X\setminus A$ do not influence the graph $F$. The right picture gives instead a Dirichlet subgraph $G_{\textnormal{dir}}$: Due to the presence of the Dirichlet weight function $b_{\textnormal{dir}}$ the subgraph $G_{\textnormal{dir}}$ is still affected by the complementary set $X\setminus A$.}\label{fig:dirichlet_subgraph} \end{figure} \begin{definition}[Dirichlet subgraph]\label{def:dir_subgraph} An induced subgraph $$ G_{\textnormal{dir}} \coloneqq\left(A, w_{|A\times A}, \kappa_{\textnormal{dir}}, \mu_{|A}\right)\subset G $$ is called a \emph{Dirichlet subgraph} if \begin{equation}\label{def:dir_potential} \begin{cases} \kappa_{\textnormal{dir}}(x) \coloneqq \kappa_{|A}(x) + b_{\textnormal{dir}}(x),\\ b_{\textnormal{dir}}(x)\coloneqq\sum_{y \in \mathbullet{\partial} A}w(x,y) = \sum_{y \not \in A}w(x,y). \end{cases} \end{equation} We note that $b_{\textnormal{dir}} \colon A \to \mathbb{R}$ is finite because of \ref{assumption:degree}. We call $b_{\textnormal{dir}}$ the \emph{boundary (Dirichlet) weight-function} and $\kappa_{\textnormal{dir}}$ the \emph{Dirichlet killing term}. Clearly, $b_{\textnormal{dir}}(x)= 0$ for every $x \in \mathring{A}$. We will denote by $\Delta_{\textnormal{dir}}$ the graph Laplacian of $G_{\textnormal{dir}}$ in order to distinguish it from the graph Laplacian of $G$. \end{definition} The Dirichlet killing term describes the edge deficiency of nodes in $G_{\textnormal{dir}}$ compared to the same nodes in $G$, see Figure \ref{fig:dirichlet_subgraph}. The name ``Dirichlet'' in the above definition comes from the following observation, see, for example, \cite[pg.~197 and Proposition 2.4]{keller2012dirichlet} and \cite[Proposition 2.23]{keller2021graphs}: Let $\boldsymbol{\mathfrak{i}} \colon C(A) \hookrightarrow C(X)$ be the canonical embedding and $\boldsymbol{\pi} \colon C(X) \to C(A)$ be the canonical projection, i.e., \begin{equation}\label{eq:embedding-projection} \boldsymbol{\mathfrak{i}}v(x)= \begin{cases} v(x) & \mbox{if } x\in A,\\ 0 & \mbox{if } x\in X\setminus A, \end{cases} \qquad \boldsymbol{\pi}u= u_{|A}. \end{equation} Under some mild assumptions, it is almost straightforward to prove, see Lemma \ref{lem:A1}, that \begin{itemize} \item $\Delta_{\textnormal{dir}} v(x)=\Delta\boldsymbol{\mathfrak{i}}v(x)$ for $v\in \textnormal{dom}(\Delta_{\textnormal{dir}})$ and $x \in A$; \item $\Delta u(x) = \Delta_{\textnormal{dir}}\boldsymbol{\pi}u(x)$ for $u\in \textnormal{dom}\left(\Delta\right)\cap \left\{u \in C(X) \mid u \equiv 0 \mbox{ on } X\setminus A \right\}$ and $x \in A$. \end{itemize} Therefore, the Dirichlet graph Laplacian $\Delta_{\textnormal{dir}}$ can be viewed as the restriction of $\Delta$ having imposed Dirichlet conditions on the exterior boundary of $A$. \subsection{m-accretive operators}\label{ssec:m-accretivity} In preparation for Section \ref{sec:existence_weak}, we introduce here the fundamental tools that will play a major role in the proofs of existence and uniqueness of solutions to the \ref{Model_Equation_graph} in the discrete setting. We begin by giving two equivalent definitions of accretivity. As a reference for this topic, see for example \cite{deimling2010nonlinear}. 
\begin{definition}[Accretive and m-accretive operators]\label{def:m-accretivity} If $\mathfrak{E}=(E,\|\cdot\|)$ is a real Banach space and $\mathcal{L} \colon \textnormal{dom}(\mathcal{L})\subseteq E \to E$ is a (not necessarily linear) operator, then $\mathcal{L}$ is said to be \emph{accretive} if $\mathcal{L}$ satisfies one of the following equivalent conditions: \begin{enumerate}[(i)] \item\label{m-accretivity1} $\left\|(u - v) + \lambda \left(\mathcal{L}u - \mathcal{L}v\right) \right\|\geq \|u - v\|$ for every $u,v \in \textnormal{dom}\left(\mathcal{L}\right)$ and for every $\lambda >0$. \item\label{m-accretivity2} $\langle \mathcal{L}u -\mathcal{L}v, u - v \rangle_+ \geq 0$ for every $u,v \in \textnormal{dom}\left(\mathcal{L}\right)$ where for $z, k \in E$ $$ \langle z, k \rangle_+\coloneqq\|k\|\lim_{\lambda\to 0^+} \lambda^{-1}\left( \left\|k + \lambda z\right\| - \|k\| \right). $$ \end{enumerate} Concerning the well-posedness of condition \ref{m-accretivity2} and its application to $\ell^p$-spaces, see Remark \ref{rem:accretivity_l^2} below. An accretive operator $\mathcal{L}$ is called \emph{$m$-accretive} if $\operatorname{id} + \lambda\mathcal{L}$ is surjective for every $\lambda >0$. \end{definition} Accretive operators are called monotone operators in the Hilbert space setting. Accretivity is a way to extend the property of monotonicity of real-valued functions of a real variable to spaces with a more complex structure. This follows from the trivial observation that a map $f \colon \textnormal{dom}(f) \subseteq \mathbb{R} \to \mathbb{R}$ is monotone (increasing) if and only if $\left(f(s_1)-f(s_2)\right)\left(s_1-s_2\right)\geq 0$ for all $s_1,s_2 \in \textnormal{dom}(f)$. Let us highlight that $m$-accretivity is related to the self-adjointness of linear operators in the Hilbert space setting. Indeed, a linear operator $\mathcal{L}$ on a Hilbert space is self-adjoint and nonnegative if and only if $\mathcal{L}$ is symmetric, closed and $m$-accretive (by the Minty theorem, $m$-accretive and maximal monotone are equivalent properties in Hilbert spaces), see \cite[Problem V.3.32]{kato2013perturbation}. In this context, we note that there has been recent interest in the graph setting concerning the essential self-adjointness of the formal Laplacian and related operators restricted to finitely supported functions, see, for example, \cite{wojciechowski2008stochastic,milatovic2011essential,keller2012dirichlet,huang2013note,guneysu2014generalized,milatovic2014self,milatovic2015maximal,guneysu2016feynman,schmidt2020existence}. Concerning the $m$-accretivity of the graph Laplacian, we also highlight a couple of recent results: The first is obtained in \cite{milatovic2015maximal}, where the authors establish the $m$-accretivity on $\ell^p(X,\mu)$ for $1\leq p <\infty$ in the more general setting of Hermitian vector bundles, under some hypotheses on the graph. The second result is obtained in \cite{anne2020m} where the authors prove a criterion for the $m$-accretivity of a graph Laplacian (not necessarily self-adjoint) on directed graphs in the Hilbert case. \begin{remark}\label{rem:accretivity_l^2} Let us observe that condition \ref{m-accretivity2} in Definition \ref{def:m-accretivity} is well-posed. First of all, for $\lambda>0$ $$ -\|z\|\leq \frac{\left\|k + \lambda z\right\| - \|k\|}{\lambda} \leq \|z\|.
$$ Then we observe that, for every $0<s<\lambda$, \begin{align*} \| k + sz\| - \|k\| = \left\|\left(1-\frac{s}{\lambda}\right)k + \frac{s}{\lambda} (k+\lambda z) \right\| - \|k\| &\leq \left(1-\frac{s}{\lambda}\right)\|k\| +\frac{s}{\lambda}\|k+\lambda z\| - \|k\|\\ &=\frac{s}{\lambda} \left(\| k + \lambda z\| - \|k\|\right), \end{align*} i.e., the map $\lambda \mapsto \lambda^{-1}\left(\|k + \lambda z \| - \|k\|\right)$ is monotone increasing in $\lambda>0$. Therefore, $$ \lim_{\lambda\to 0^+}\lambda^{-1}\left(\|k + \lambda z \| - \|k\|\right) $$ exists and belongs to $[-\|z\|, \|z\|]$. Now, let us fix $E=\ell^p(X,\mu)$ with $p\in [1,\infty)$. By the convexity of the map $t\mapsto |t|^p$, for every $\lambda\in (0,1]$ we get \begin{align*} &\frac{|k \pm \lambda z |^p - |k|^p}{\lambda}= \frac{|(1-\lambda)k + \lambda(k\pm z) |^p - |k|^p}{\lambda}\leq |k \pm z|^p - |k|^p \end{align*} and $$|k|^p - |k - \lambda z |^p\leq |k + \lambda z |^p -|k|^p $$ where the second inequality can be easily derived by $|f+g|^p\leq 2^{p-1}(|f|^p+|g|^p)$ with $f\coloneqq k-\lambda z$ and $g\coloneqq k+\lambda z$. Combining these inequalities gives $$ |k|^p - |k - z|^p\leq \frac{ |k|^p - | k - \lambda z|^p}{\lambda} \leq\frac{|k + \lambda z |^p - |k|^p}{\lambda}\leq |k + z|^p - |k|^p \quad \forall\, \lambda\in (0,1], $$ that is, $\lambda^{-1}| |k + \lambda z |^p - |k|^p |$ is dominated by an integrable function. Then, by the mean value theorem and dominated convergence, for every $p\geq1$ (and $\|k\|_p\neq 0$) we get \begin{align*} \lim_{\lambda\to 0^+}\lambda^{-1}\left(\|k + \lambda z \|_p - \|k\|_p\right) &=\lim_{\lambda\to 0^+}\lambda^{-1}\left((\|k + \lambda z \|^p_p)^{1/p} - (\|k\|^p_p)^{1/p}\right) \\ &= \frac{1}{p}(\|k\|^p_p)^{\frac{1}{p}-1}\lim_{\lambda\to 0^+}\sum_{x \in X} \frac{|k(x) + \lambda z(x) |^p - |k(x)|^p}{\lambda}\mu(x)\\ &=\begin{cases} \|k\|_p^{1-p}\sum_{x \in X}z(x)|k(x)|^{p-1}\operatorname{sgn}(k(x)) \mu(x) &\mbox{for } p>1,\\ \sum\limits_{\substack{x\in X\colon\\ k(x)= 0}}|z(x)|\mu(x) + \sum\limits_{\substack{x\in X\colon\\ k(x)\neq 0}} z(x)\operatorname{sgn}(k(x))\mu(x) &\mbox{for } p=1 \end{cases} \end{align*} where \begin{equation}\label{eq:sgn} \operatorname{sgn}(s)\coloneqq\begin{cases} 1 & \mbox{if } s>0,\\ 0 & \mbox{if } s=0,\\ -1 & \mbox{if } s<0. \end{cases} \end{equation} Summarizing, for $E=\ell^p(X,\mu)$ with $p\in [1,\infty)$, \begin{equation}\label{accretivity_for_lp} \langle z, k \rangle_+ = \begin{cases} \|k\|_p^{2-p} \sum\limits_{x \in X} z(x)k(x)|k(x)|^{p-2}\mu(x) & \mbox{if } p>1,\\ \|k\|_1\left( \sum\limits_{\substack{x\in X\colon\\ k(x)= 0}}|z(x)|\mu(x) + \sum\limits_{\substack{x\in X\colon\\ k(x)\neq 0}} z(x)\operatorname{sgn}(k(x))\mu(x) \right)& \mbox{if } p=1. \end{cases} \end{equation} \end{remark} A simple example of an $m$-accretive operator is the graph Laplacian on finite graphs with respect to the $\ell^p$-norm. This result should be well-known but for completeness we give a proof in the following proposition. \begin{proposition}\label{lem:m-accretivity_for_finite_graphs} Let $G$ be a finite graph. Then, the graph Laplacian $\Delta$ on $\ell^p(X,\mu)$ is $m$-accretive for $p\geq 1$. In particular, $$ \left\| (\operatorname{id} + \lambda \Delta)u \right\|_p\geq \|u \|_p \qquad \mbox{for every } u \in C(X), \, \lambda >0. $$ \end{proposition} \begin{proof} Fix $p\in (1,\infty)$. 
By the linearity of $\Delta$ and \eqref{accretivity_for_lp} of Remark \ref{rem:accretivity_l^2}, $\Delta$ is accretive if and only if $$ \|u\|_p^{2-p}\sum_{x\in X} \Delta u(x) u(x)|u(x)|^{p-2}\mu(x)\geq 0. $$ Using the fact that the sum is finite, we have \begin{align*} \sum_{x\in X} \Delta u(x) & u(x)|u(x)|^{p-2}\mu(x) \\ & = \sum_{x\in X} u(x)|u(x)|^{p-2} \sum_{y\in X}w(x,y)(u(x)-u(y)) + \sum_{x\in X}\kappa(x)|u(x)|^p\\ &\geq \sum_{x\in X} \sum_{y\in X}w(x,y)(|u(x)|^{p}-u(y)u(x)|u(x)|^{p-2})\\ &\geq \frac{1}{2} \sum_{x,y\in X}w(x,y)(|u(x)|^p + |u(y)|^p - |u(x)|^{p-1}|u(y)| - |u(x)||u(y)|^{p-1}). \end{align*} The conclusion now follows from the inequality $$ a^p + b^p - a^{p-1}b - ab^{p-1} \geq 0 \qquad \forall\; a,b \geq 0, \quad \forall\, p \in (1,\infty) $$ which can be established by elementary calculus as we now show. The inequality holds for $a=0$ or $b=0$, so assuming that $b>0$, dividing through by $b^p$ and setting $t\coloneqq a/b$, it is equivalent to prove that $$ \beta(t) \coloneqq t^p +1 - t^{p-1} - t \geq 0 \qquad \mbox{for } t \geq 0. $$ Note that $\beta(0)=1$ and $\beta(t) \to \infty$ as $t \to \infty$. We have $$ \beta'(t) = pt^{p-1} -(p-1)t^{p-2}-1 $$ and thus $$ \beta'(t)<0 \mbox{ for all } t \mbox{ small}, \quad \beta'(1)=0, \quad \beta'(t)\to \infty \mbox{ as } t \to \infty. $$ Moreover, $$ \beta''(t)= p(p-1)t^{p-2} -(p-1)(p-2)t^{p-3}= p(p-1)t^{p-3}\left(t-\frac{p-2}{p}\right). $$ Thus, if $1<p\leq 2$, then $\beta''\geq 0$ and $\beta'$ is increasing. If $p>2$, then $$ \beta''(t)<0 \mbox{ for } t<\frac{p-2}{p}<1, \qquad \beta''(t)\geq 0 \mbox{ for } t \geq \frac{p-2}{p}. $$ Hence, $\beta'$ is decreasing until ${(p-2)}/{p}$ and increasing afterwards. In any case, $\beta'(t)<0$ for $t<1$ and $\beta'(t)>0$ for $t>1$, so $$ \min_{t>0} \beta(t) = \beta(1) = 0. $$ By the equivalence between \ref{m-accretivity2} and \ref{m-accretivity1} of Definition \ref{def:m-accretivity}, we then have $$ \left\| (\operatorname{id} + \lambda \Delta)u \right\|_p\geq \|u \|_p \qquad \mbox{for every } u \in C(X), \, \lambda >0. $$ We conclude the proof for $p\in (1,\infty)$ by noticing that, since $\operatorname{id}+\lambda\Delta$ is linear and injective and $C(X)$ is finite dimensional, $\operatorname{id}+\lambda\Delta$ is also surjective. The case $p=1$ is addressed in the more general setting of Corollary~\ref{cor:accretivityL-finite} in Appendix~\ref{sec:appendix2}. \end{proof} As a final comment, we observe that if two operators are accretive on a given Banach space, this will not automatically imply that the composition (if defined) of the two operators is accretive. See the following simple example. \begin{example}\label{ex:1} Consider the finite birth-death chain $G=\left(X,w,\kappa,\mu\right)$ (see Figure \ref{fig:ex:1}) with \begin{itemize} \item $X= \left\{ x_1, x_2, x_3, x_4 \right\}$; \item $w(x_i,x_j)=1$ if and only if $|i-j|=1$ and zero otherwise; \item $\kappa \equiv 0$; \item $\mu \equiv 1$. \end{itemize} Define now $\phi \colon \mathbb{R} \to \mathbb{R}$ by $\phi(s)\coloneqq s|s|^3$. Since $\phi$ is monotone, thanks to Remark \ref{rem:accretivity_l^2}, $\Phi$, the canonical extension of $\phi$ to $\ell^2(X,\mu)$, is accretive. Consider now the graph Laplacian $ \Delta \colon \ell^2\left(X,\mu\right)\to \ell^2\left(X,\mu\right)$ associated to $G$, which in this case acts as $$ \Delta u(x_i) =\begin{cases} u(x_{1}) - u(x_{2}) & \mbox{if } i=1, \\ 2u(x_{i}) -u(x_{i-1}) - u(x_{i+1}) & \mbox{if } i = 2, 3,\\ u(x_{4}) - u(x_{3}) & \mbox{if } i=4.
\end{cases} $$ By Proposition \ref{lem:m-accretivity_for_finite_graphs}, $\Delta$ is accretive. On the other hand, the operator $\mathcal{L}\coloneqq\Delta\Phi$ is not accretive. Indeed, a computation shows that if $u$ and $v$ are defined by $$ u(x_i)=\begin{cases} 3, & \mbox{for } i=1,\\ 4, & \mbox{for } i=2,\\ 0 & \mbox{otherwise}, \end{cases}\qquad v(x_i)=\begin{cases} 3, & \mbox{for } i=2,\\ 0 & \mbox{otherwise}, \end{cases} $$ then $$ \langle \mathcal{L} u- \mathcal{L} v, u-v \rangle_{\ell^2}=-13 <0 $$ and therefore $\mathcal{L}$ is not accretive as claimed. \end{example} As it is shown in Corollary \ref{cor:accretivityL-finite}, the operator $\mathcal{L}=\Delta\Phi$ is accretive on $\ell^1(X,\mu)$ for every finite graph. This result and the above example show that accretivity is a property related not only to the action of the operator but to the norm on the underlying space as well. \begin{figure} \begin {center} \begin{tikzpicture}[-latex, auto,node distance =3 cm ,on grid , semithick , state2/.style ={ circle ,top color =white , bottom color = gray , draw,black , text=black , minimum width =1 cm}, state3/.style ={ circle ,top color =white , bottom color = white , draw,white , text=white , minimum width =1 cm}, state/.style={circle ,top color =white , bottom color = white, draw,black , text=black , minimum width =1 cm}] \node[state] (E) [] {$x_{1}$}; \node[state] (F) [right=of E] {$x_{2}$}; \node[state] (G) [right=of F] {$x_3$}; \node[state] (H) [right=of G] {$x_{4}$}; \path (G) edge [black,-] node[] {$w(x_3,x_4)$} (H); \path (H) edge [black,-] node[] {} (G); \path (G) edge [black,-] node[] {} (F); \path (F) edge [black,-] node[] {$w(x_2,x_3)$} (G); \path (E) edge [black,-] node[] {$w(x_1,x_2)$} (F); \path (F) edge [black,-] node[] {} (E); \end{tikzpicture} \caption{Visual representation of the graph $G$ of the Example \ref{ex:1}.}\label{fig:ex:1} \end{center} \end{figure} \section{The Cauchy model problem}\label{sec:Cauchy_model_problem} Let $f \colon (0,T)\times X \to \mathbb{R}$ and $u_0 \colon X \to \mathbb{R}$. Given a graph $G$, let us consider the following Cauchy problem for the generalized porous medium equation (GPME) (or filtration equation): \vspace{0.3cm} \noindent\textbf{Problem:} \begin{equation}\tag*{Cauchy-GPME}\label{eq:C-D} \begin{cases} \partial_t u(t,x) + \Delta\Phi u(t,x) = f(t,x) & \mbox{for every } (t,x) \in (0,T)\times X,\\ \lim_{t\to 0^+} u(t,x) = u_0(x) & \mbox{for every } x\in X \end{cases} \end{equation} where $\Phi \colon C(X) \to C(X)$ is the canonical extension of a function $\phi \colon \mathbb{R} \to \mathbb{R}$ such that $\phi$ is strictly monotone increasing, $\phi(\mathbb{R})=\mathbb{R}$ and $\phi(0)=0$. If $\phi(s)= s^m \coloneqq s|s|^{m-1}$, then we will call the above equation the porous medium equation (PME) when $m>1$ and the fast diffusion equation (FDE) when $0<m<1$. Clearly, when $m=1$ and $f \equiv 0$ we recover the classic heat equation. The function $f$ is called the \emph{forcing term}. For notational convenience, we specify the time interval $(0,T)$ but everything we will say can be generalized to any (not necessarily bounded above) interval $(a,b)\subset \mathbb{R}$. \vspace{0.3cm} \noindent The \textquotedblleft$+$\textquotedblright \, sign in our equation comes from the fact that we are considering the formal graph Laplacian which corresponds to minus the second derivative in the Euclidean case. We now introduce the various classes of solutions for the \ref{eq:C-D} problem in order of increasing regularity. 
The weakest notion of solution is obtained by means of a discretization and approximation procedure in the time variable. More precisely, we first need to discretize the time interval $(0,T)$ with respect to the forcing term $f$ such that the corresponding time-discretization $\boldsymbol{f}_n$ of $f$ is \textquotedblleft close\textquotedblright to $f$ in a way that will be made clear next. \begin{definition}[$\epsilon$-discretization]\label{def:epsilon-discretization} Given a time interval $[0,T]$ with $T<\infty$ and a forcing term $f \in L^1([0,T] ; \ell^1\left(X,\mu\right))$, we define a partition of the time interval $$ \mathcal{T}_n \coloneqq \left\{ \{t_k\}_{k=0}^n \mid 0= t_0<t_1<\ldots<t_n\leq T \right\} $$ and a time-discretization $\boldsymbol{f}_n$ of $f$ $$ \boldsymbol{f}_n\coloneqq\left\{ \{f_k\}_{k=1}^{n} \mid f_k \in \ell^1(X,\mu),\; f_k(x)\coloneqq f(t_k,x)\right\}. $$ Having fixed $\epsilon >0$, we call $\mathcal{D}_\epsilon\coloneqq (\mathcal{T}_n,\boldsymbol{f}_n)$ an \emph{$\epsilon$-discretization} of $([0,T]; f)$ if \begin{itemize} \item $t_k-t_{k-1} \leq \epsilon$ for every $k=1,\ldots, n$ and $T- t_n \leq \epsilon$; \item $\sum_{k=1}^{n} \int_{t_{k-1}}^{t_{k}} \left\| f(t) - f_k \right\| dt \leq \epsilon$. \end{itemize} \end{definition} \begin{remark}\label{rem:existence_discretization} Definition \ref{def:epsilon-discretization} is well-posed. If $f\in L^1([0,T] ; \ell^1\left(X,\mu\right))$, then for every $\epsilon >0$ there exists an $\epsilon$-discretization $\mathcal{D}_\epsilon$ of $([0,T]; f)$, see \cite[Lemma 4.1]{evans1977nonlinear}. \end{remark} Now, given an $\epsilon$-discretization $\mathcal{D}_\epsilon$, consider the following system of difference equations which arises from an implicit Euler-discretization of the \ref{eq:C-D}: \begin{equation}\label{implicit_Euler} \frac{u_k - u_{k-1}}{\lambda_k} + \Delta\Phi u_k = f_k, \qquad \lambda_k\coloneqq t_k - t_{k-1}\mbox{ and } k=1,\ldots,n \end{equation} with $u_0$ given. Writing $\mathcal{L} = \Delta\Phi $, we then require that every $u_k$ belongs to \begin{equation*} \textnormal{dom}(\mathcal{L})=\left\{ u \in \ell^{1}\left(X,\mu\right) \mid \Phi u\in \textnormal{dom}\left(\Delta\right), \Delta\Phi u \in \ell^{1}\left(X,\mu\right) \right\}. \end{equation*} \begin{definition}[$\epsilon$-approximate solution]\label{epsilon-approximation} If the system \eqref{implicit_Euler} admits a solution $\boldsymbol{u}_\epsilon =\left\{u_k\right\}_{k=1}^n$ such that $u_k\in \textnormal{dom}(\mathcal{L})$ for every $k=1,\ldots, n$, then we define $u_\epsilon$ as the piecewise constant function \begin{equation}\label{epsilon_approximation} u_\epsilon(t)\coloneqq \begin{cases} \sum_{k=1}^n u_k\mathds{1}_{(t_{k-1},t_k]}(t) & \mbox{for } t\in (0, t_n],\\ u_0 & \mbox{for } t=0 \end{cases} \end{equation} and we call $u_\epsilon$ an \emph{$\epsilon$-approximate solution} of the \ref{eq:C-D} (subordinate to $\mathcal{D}_\epsilon$). \end{definition} We then have the following definition of a \emph{mild solution} which first appeared as a formal definition in \cite{crandall1980regularizing}. It can be viewed as a uniform limit of ``numerical approximations'' to solutions of the \ref{eq:C-D} obtained by the system of difference equations \eqref{implicit_Euler}. 
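On a finite graph, each equation in \eqref{implicit_Euler} is a finite-dimensional nonlinear system, so such numerical approximations can be computed directly. The following minimal sketch performs a single implicit Euler step on a toy path graph with unit weights, $\kappa\equiv 0$, $\mu\equiv 1$ and $\phi(s)=s|s|$; the choice of graph, of data and of a generic off-the-shelf root finder are illustrative assumptions only.

\begin{verbatim}
# Illustrative sketch: one implicit Euler step of
#   (id + lam * Delta Phi) u_k = u_{k-1} + lam * f_k
# on a toy path graph with unit weights, kappa = 0, mu = 1, phi(s) = s|s|.
import numpy as np
from scipy.optimize import fsolve

n, m, lam = 4, 2.0, 0.1
W = np.array([[1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
              for i in range(n)])
D = np.diag(W.sum(axis=1)) - W              # graph Laplacian matrix

phi = lambda u: u * np.abs(u) ** (m - 1.0)  # PME nonlinearity

def euler_step(u_prev, f_k):
    # residual of (id + lam * Delta Phi) u = u_prev + lam * f_k
    residual = lambda u: u + lam * D @ phi(u) - (u_prev + lam * f_k)
    return fsolve(residual, u_prev)         # previous iterate as initial guess

u0 = np.array([3.0, 4.0, 0.0, 0.0])         # initial datum
u1 = euler_step(u0, np.zeros(n))            # one step with forcing term f = 0
print(u1)
\end{verbatim}

Iterating such steps over the partition $\mathcal{T}_n$ and interpolating as in \eqref{epsilon_approximation} produces the piecewise constant $\epsilon$-approximate solutions whose uniform limits are the mild solutions defined next.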
\begin{definition}[Mild solution]\label{def:weak_solution} If $T<+\infty$, we say that $u \colon [0,T]\to \ell^1\left(X,\mu\right)$ is a \emph{mild solution} of the \ref{eq:C-D} problem if $u \in C\left( [0,T]; \ell^1\left(X,\mu\right)\right)$ and $u$ is obtained as a uniform limit of $\epsilon$-approximate solutions. Namely, for every $\epsilon>0$ there exists an $\epsilon$-discretization $\mathcal{D}_\epsilon$ of $([0,T]; f)$, as in Definition \ref{def:epsilon-discretization}, and an $\epsilon$-approximate solution $u_\epsilon$ subordinate to $\mathcal{D}_\epsilon$, as in \eqref{epsilon_approximation}, such that \begin{equation*} \left\|u(t) - u_\epsilon(t) \right\| < \epsilon \qquad \mbox{for every } t\in [0,t_n]\subseteq [0,T]. \end{equation*} If $T = +\infty$, then we say that $u$ is a mild solution of the \ref{eq:C-D} if the restriction of $u$ to each compact subinterval $[0,a]\subset [0,+\infty)$ is a mild solution of the \ref{eq:C-D} on $[0,a]$. \end{definition} We next introduce two further classes of solutions, namely, strong and classic solutions. Following the definitions, we will discuss the relationship between these notions. \begin{definition}[Strong solution]\label{def:strong_solution} We say that $u \colon [0,T] \to \ell^1\left(X,\mu\right)$ is a \emph{strong solution} of the \ref{eq:C-D} problem if \begin{itemize} \item $u(t) \in \overline{\textnormal{dom}(\mathcal{L})}$ for every $t \in [0,T]$; \item $u \in C\left([0,T]; \ell^1\left(X,\mu\right)\right)\cap W^{1,1}_{\textnormal{loc}}((0,T);\ell^1\left(X,\mu\right))$; \item $\partial_t u(t,x) + \Delta\Phi u(t,x) = f(t,x)$ for almost every $t\in (0,T)$; \item $u(0)=u_0$. \end{itemize} \end{definition} \begin{definition}[Classic solution]\label{def:classical_solution} We say that $u \colon [0,T] \to \ell^1\left(X,\mu\right)$ is a \emph{classic solution} of the \ref{eq:C-D} problem if \begin{itemize} \item $u(t) \in \overline{\textnormal{dom}(\mathcal{L})}$ for every $t \in [0,T]$; \item $u \in C\left( [0,T]; \ell^1\left(X,\mu\right)\right)\cap C^1\left( (0,T); \ell^1\left(X,\mu\right)\right)$; \item $\partial_t u(t,x) + \Delta\Phi u(t,x) = f(t,x)$ for every $t\in (0,T)$; \item $u(0)=u_0$. \end{itemize} \end{definition} In the literature, mild solutions are also known by other names: they are called \emph{limit solutions} in \cite{lakshmikantham1981nonlinear} and \emph{weak solutions} in \cite{kobayasi1984nonlinear}. In \cite{benilan1972solutions}, P. B{\'e}nilan and H. Br{\'e}zis introduced the definition of \emph{faible} (i.e., \emph{weak}) solutions of the abstract Cauchy problem \begin{equation}\label{abstract_Cauchy} \begin{cases} \partial_tu(t) + \mathcal{A}u(t) = f(t) &\mbox{for } t\in (0, T),\\ u(0)= u_0 \end{cases} \end{equation} as a uniform limit of strong solutions $u_n$ of \eqref{abstract_Cauchy} with $f$ replaced by $f_n$ where $f_n\to f$ in $L^1([0,T]; E)$. Clearly, a classic solution is a strong solution. The fact that a strong solution is a mild solution assuming that $f \in L^1_{\textnormal{loc}}([0,T] ; \ell^1\left(X,\mu\right))$ is a standard result, e.g. \cite[Theorem 1.4]{benilan1988evolution}. Therefore, assuming $f$ is strongly measurable and locally Bochner integrable, we have compatibility of the three different definitions in the sense that classic solution $\Rightarrow$ strong solution $\Rightarrow$ mild solution in order of descending ``regularity.'' As a final remark, we observe that a mild solution may not be differentiable and does not necessarily satisfy the \ref{eq:C-D} in a pointwise sense. 
Nonetheless, this notion is known as the most natural one of the generalized notions of solutions of \eqref{abstract_Cauchy}. \section{Existence and uniqueness of mild solutions}\label{sec:existence_weak}\label{sec:exisntence&uniqueness} The theory of nonlinear operators on Banach spaces is well-established. We refer the interested reader to \cite[Chapter 3]{lakshmikantham1981nonlinear}, \cite[Chapter 4]{barbu2010nonlinear} or \cite[Appendix A]{andreu2004parabolic} and references therein for well-organized summaries of all of the main results. If the operator $\mathcal{L}=\Delta\Phi$ is accretive and such that for every $\epsilon>0$ there exists an $\epsilon$-approximate solution as in \eqref{epsilon_approximation}, then it is possible to infer the existence and uniqueness of mild solutions for the \ref{eq:C-D}, relying on some consequences of a result due to P. B{\'e}nilan, see \cite{benilan1972equations} and \cite[Theorem 3.3]{benilan1988evolution}, which is an extension of the famous Crandall-Liggett theorem, see \cite{crandall1971generation}. The main idea is the following: Given $u_0\in \ell^1\left(X,\mu\right)$ and an $\epsilon$-discretization $\mathcal{D}_\epsilon$ as in Definition~\ref{def:epsilon-discretization}, solving system \eqref{implicit_Euler} means solving the equation \begin{equation*} (\operatorname{id} + \lambda_k \Delta \Phi)u_k= u_{k-1} + \lambda_kf_k \end{equation*} with $$ u_k \in \ell^1\left(X,\mu\right), \qquad \Phi u_k\in\textnormal{dom}(\Delta), \qquad \Delta\Phi u_k\in \ell^1(X,\mu) $$ for every $k=1,\ldots,n$, where $\lambda_k>0$ and $f_k\in \ell^1\left(X,\mu\right)$. This is doable, in particular, if \begin{equation}\label{eq:semi_lin_elliptic} (\operatorname{id} + \lambda \Delta \Phi) u = g \end{equation} is solvable for any $g \in \ell^1(X,\mu)$, $\lambda>0$ and the solution $u \in \ell^1(X,\mu)$ is such that $\Phi u\in\textnormal{dom}(\Delta)$ and $\Delta\Phi u\in \ell^1(X,\mu)$. Therefore, if $\mathcal{L}$ is $m$-accretive, then we would get existence and uniqueness of mild solutions in one step. For example, in the Euclidean case, when $X=\Omega$ is a bounded domain in $\mathbb{R}^n$, the accretivity property holds for $\mathcal{L}\coloneqq \Delta\Phi$ defined on $$ \textnormal{dom}\left(\mathcal{L}\right)\coloneqq\left\{u \in L^1\left(\Omega\right) \mid \Phi(u) \in W^{1,1}_0(\Omega), \Delta\Phi u\in L^1(\Omega) \right\} $$ where the Laplace operator $\Delta$ is understood in the sense of distributions. The difficult part is to prove the $m$-accretivity, i.e., to prove that for every $g \in L^1(\Omega)$ there exists $u\in \textnormal{dom}\left(\mathcal{L}\right)$ that is a solution of equation \eqref{eq:semi_lin_elliptic} for any $\lambda >0$. To circumvent the direct approach, it is common to switch to an equivalent formulation, namely, having defined $v\coloneqq\Phi u$ and $u=\Psi v=\Phi^{-1} v$, the question is whether the equation $$ (\Psi + \lambda \Delta) v=g $$ admits a solution $v$ in $$ \left\{v \in W^{1,1}_0(\Omega) \mid \Delta v\in L^1(\Omega) \right\}. $$ The positive answer to this question in the Euclidean case was given by H. Br{\'e}zis and W. Strauss in \cite{brezis1973semi}. In particular, the trick of this approach is to relate the $m$-accretive property of the nonlinear operator $\mathcal{L}$ to suitable properties of the linear operator $\Delta$, which is easier to handle. The main issue in the discrete setting is to prove the existence of solutions under minimal assumptions on the underlying graph $G$.
Indeed, while the accretivity of $\mathcal{L}$ can be established for any finite graph, see Corollary \ref{cor:accretivityL-finite} in Appendix \ref{sec:appendix2}, it can be a tricky property to prove for more general graphs. Moreover, the hypotheses required to make use of the result in \cite{brezis1973semi}, which are satisfied by the Euclidean Laplacian $\Delta$ over bounded domains $\Omega$, are, in general, not satisfied by the graph Laplacian. Consequently, we are forced to step back and to prove ``by hand'' the existence of $\epsilon$-approximate solutions $u_\epsilon(t)$ for every $\epsilon>0$ with the property that $u_\epsilon(t)$ belongs to a dense subset of $\textnormal{dom}(\mathcal{L})$ where $\mathcal{L}$ is accretive. The idea is to find a solution $u$ to $(\operatorname{id}+\lambda \Delta\Phi) u=g$ by building $u$ as a limit of a sequence $\{ \Psi v_n\}$, where the $v_n$ are solutions of $(\Psi + \lambda \Delta_n) v_n=g_n$ on suitable restrictions of the graph $G$ to finite subgraphs. In particular, we will see that this can be achieved by decomposing $G$ as an infinite ascending chain $\{G_{\textnormal{dir},n}\}_{n=1}^\infty$ of finite connected Dirichlet subgraphs. After this introduction, we are now ready to prove our main results. In Theorem \ref{thm:main1} we will prove the accretivity of the operator $\mathcal{L}$ on a suitable dense subset of $\textnormal{dom}(\mathcal{L})$ and the surjectivity of $\operatorname{id}+\lambda\mathcal{L}$ onto the nonnegative/nonpositive cones of $\ell^1(X,\mu)$ and, under three different additional hypotheses, onto the entire space $\ell^1(X,\mu)$. In Theorem \ref{thm:main2} we will then establish the existence and uniqueness of mild solutions for the \ref{eq:C-D} problem. As a concluding application, in Corollary \ref{cor:application}, we prescribe some hypotheses on the graph that guarantee that a mild solution is indeed a classic solution. \subsection{Proofs of the main theorems}\label{ssec:proofs} Let us recall that $\mathcal{L}$ is the operator \begin{align*} &\mathcal{L} \colon \textnormal{dom}\left( \mathcal{L} \right)\subseteq \ell^{1}\left(X,\mu\right) \to \ell^{1}\left(X,\mu\right) , \\ &\textnormal{dom}\left( \mathcal{L} \right)=\left\{ u \in \ell^{1}\left(X,\mu\right) \mid \Phi u\in \textnormal{dom}\left(\Delta\right),\, \Delta\Phi u \in \ell^{1}\left(X,\mu\right) \right\} \end{align*} whose action is given by $$ \mathcal{L} u= \Delta\Phi u, $$ and that for a subset $\Omega \subseteq \textnormal{dom}\left( \mathcal{L} \right)$, we write $\mathcal{L}_{|\Omega}$ for the restriction of $\mathcal{L}$ to $\Omega$. The extra hypotheses listed in Theorem \ref{thm:main1} are \begin{enumerate}[label={\upshape(\bfseries H\arabic*)},wide = 0pt, leftmargin = 3em] \item\label{m-accretivity_A2} $G$ is locally finite; \item\label{m-accretivity_B2} $\inf_{x \in X}\mu(x)>0$; \item\label{m-accretivity_C2} $\sup_{x \in X}\frac{\sum_{y \in X}w(x,y)}{\mu(x)}<\infty$ and $\Phi(\ell^1(X,\mu)) \subseteq \ell^1(X,\mu)$. \end{enumerate} \vspace{0.2cm} \noindent\textbf{Proof of Theorem~\ref{thm:main1}.} We divide the proof into several steps. From \textbf{Steps 0} to \textbf{IV} we will assume that $G$ is connected.
To help orient the reader, we first give a brief outline of the structure of the proof: In \textbf{Step 0}, we will introduce a sequence of operators $\mathcal{L}_{n} \colon C_c(X) \to \ell^1(X,\mu)$ and discuss that for every graph $G$ there exists a dense subset $\Omega \subseteq \textnormal{dom}(\mathcal{L})$ where $\mathcal{L}$ is accretive. Later, we will show that all solutions of the equation $(\operatorname{id} + \lambda \mathcal{L})u=g$ that we construct along the way belong to $\Omega$. In \textbf{Step I}, assuming that $G$ is finite, we will prove that $\operatorname{id}+\lambda\mathcal{L}$ is surjective and preserves nonnegativity/nonpositivity and also give an upper bound of the norm of the solutions with respect to $g$. This step plays a crucial role in proving the surjectivity of $\operatorname{id}+\lambda\mathcal{L}$ in the infinite case where we will approximate the graph $G$ by an ascending chain of finite Dirichlet subgraphs. In \textbf{Step II}, given $\lambda>0$ and $g\in \ell^1(X,\mu)$, we will show that there exists a sequence of compactly supported functions $u_n$ and $u\in \ell^1(X,\mu)$ such that $\lim_{n\to \infty}\|u_n-u\|=0$ and $\lim_{n\to \infty}\|(\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_n+\lambda\mathcal{L}_{n})u_n- g\|=0$. Using this construction, in \textbf{Step III} we will show that $u$ solves $(\operatorname{id} + \lambda \mathcal{L})u=g$ for every $g\in\ell^{1,\pm}(X,\mu)$ and that $u\in \Omega$. In \textbf{Step IV-H1,-H2,-H3} we will prove that $u$ solves $(\operatorname{id} + \lambda \mathcal{L})u=g$ for every $g\in\ell^{1}(X,\mu)$ and that $u\in \Omega$ under any one of the three different assumptions \ref{m-accretivity_A2}, \ref{m-accretivity_B2} and \ref{m-accretivity_C2}. Finally, in \textbf{Step V} we remove the assumption of connectedness that we used while proving \textbf{Steps 0} to \textbf{IV}. \vspace{0.5cm} \noindent\textbf{Step 0} (When $G$ is connected, there exists a dense subset $\Omega$ of $\textnormal{dom}(\mathcal{L})$ where $\mathcal{L}_{|\Omega}$ is accretive)\textbf{:} This is exactly the content of Lemma \ref{lem:L_Omega_accretive} in Appendix \ref{sec:appendix2}. To help orient the reader, we recall here the notations involved and the definition of $\Omega$. If $G$ is finite, then $\Omega=\textnormal{dom}(\mathcal{L})=C(X)$. If $G$ is infinite, we take an exhaustion $\{X_n\}_{n=1}^\infty$ of $X$, i.e., a sequence of subsets $X_n$ of $X$ such that $X_n \subseteq X_{n+1}$ and $X = \cup_{n=1}^\infty X_n$, where we assume that each $X_n$ is additionally finite, along with the canonical embedding $\boldsymbol{\mathfrak{i}}_{n,\infty}$ and the canonical projection $\boldsymbol{\pi}_n$ for each $X_n$: \begin{align*} &\boldsymbol{\mathfrak{i}}_{n,\infty} \colon C(X_n)\to C(X) \quad &\boldsymbol{\mathfrak{i}}_{n,\infty}u(x) \coloneqq \begin{cases} u(x) & \mbox{if } x \in X_n,\\ 0 & \mbox{if } x \in X\setminus X_n; \end{cases}\\ &\boldsymbol{\pi}_n \colon C(X) \to C(X_n) \quad &\boldsymbol{\pi}_nu(x) \coloneqq u(x) \mbox{ for every } x \in X_n. \end{align*} At this point, the exhaustion $\{X_n\}_{n=1}^\infty$ can be arbitrary but should consist of finite sets. 
We then define the operators $\mathcal{L}_{n}$ as in Definition \ref{def:Lmin}, namely, $$ \mathcal{L}_{n} \colon \textnormal{dom}\left( \mathcal{L}_{n} \right)\subseteq \ell^{1}\left(X,\mu\right) \to \ell^{1}\left(X,\mu\right) $$ with \begin{align*} \textnormal{dom}\left( \mathcal{L}_{n} \right)\coloneqq C_c(X), \quad \mathcal{L}_{n} u\coloneqq \boldsymbol{\mathfrak{i}}_{n,\infty}\Delta_{\textnormal{dir},n}\Phi\boldsymbol{\pi}_{n} u \end{align*} where $\Delta_{\textnormal{dir},n}$ is the graph Laplacian associated to the Dirichlet subgraph $G_{\textnormal{dir},n} \subseteq G$ on the node set $X_n$. Then, the set $\Omega$ is defined as \begin{equation*}\label{eq:Omega} \Omega\coloneqq \{u \in \textnormal{dom}(\mathcal{L}) \mid \exists\, \{u_n\}_n \mbox{ s.t. } \operatorname{supp}u_n\subseteq X_n,\; \lim_{n\to \infty}\|u_n -u \|=0,\; \lim_{n\to \infty}\|\mathcal{L}_{n} u_n -\mathcal{L}u \|=0 \} \end{equation*} where $\operatorname{supp}u_n$ denotes the support of the function $u_n$. We have $\overline{\Omega}=\overline{\textnormal{dom}(\mathcal{L})}=\ell^1(X,\mu)$ by Lemma \ref{lem:FC} and that $\mathcal{L}_{|\Omega}$ is accretive by Lemma \ref{lem:L_Omega_accretive}. \vspace{0.5cm} \noindent\textbf{Step I} (When $G$ is finite and connected, $\operatorname{id} +\lambda\mathcal{L}$ is bijective and preserves nonnegativity/nonpositivity)\textbf{:} Assume that $G$ is finite and connected, i.e., $|X|=n$. In this case, $$ \textnormal{dom}(\Delta)=C(X)\simeq \mathbb{R}^n, \quad \ell^1\left(X,\mu\right)= \left(C(X), \|\cdot\| \right) \quad \mbox{ with } \|u\|= \sum_{x\in X}|u(x)|\mu(x). $$ Fix now $\lambda >0$. Writing $\psi \coloneqq \phi^{-1}$ and $v\coloneqq \Phi u$, we can rewrite equation \begin{equation}\label{eq:step-I-1.0} (\operatorname{id} + \lambda \Delta\Phi) u=g \end{equation} in the equivalent form \begin{equation}\label{eq:step-I-1} (\Psi + \lambda \Delta) v = g. \end{equation} Clearly, $\psi$ is strictly monotone increasing, $\psi(\mathbb{R})=\mathbb{R}$ and $\psi(0)=0$. Let us enumerate the nodes of $X$, that is, we write $X= \{ x_1, x_2, \ldots, x_n \}$. Owing to the isomorphism between $C(X)$ and $\mathbb{R}^n$, we identify $n$-dimensional vectors and real-valued functions on $X$ in the standard way, that is, given $v \in C(X)$ we associate to $v$ the vector $\boldsymbol{v}=(v_1,\ldots,v_n)\coloneqq(v(x_1),\ldots, v(x_n))$ and vice-versa. Define $M \colon \mathbb{R}^n \to \mathbb{R}^n$ by $$ M\boldsymbol{v}\coloneqq (\Psi + \lambda \Delta)\boldsymbol{v}. $$ Let us observe that: \begin{enumerate}[i)] \item $(\Psi\boldsymbol{v})_i=\psi(v_i)$ for every $i=1,\ldots,n$ where $\psi \colon \mathbb{R} \to \mathbb{R}$ is surjective and strictly monotone increasing; \item For every $\lambda>0$, $\lambda\Delta$ is a diagonally dominant matrix (e.g., \cite[Definition 6.1.9]{horn2012matrix}), i.e., $$ \left(\lambda\Delta\right)_{i,i}= \frac{\lambda}{\mu(x_i)}\left(\kappa(x_i) + \sum_{\substack{j=1\\j\neq i}}^n w(x_i,x_j)\right) \geq \frac{\lambda}{\mu(x_i)}\sum_{\substack{j=1\\j\neq i}}^n w(x_i,x_j)= \sum_{\substack{j=1\\j\neq i}}^n \left|\left(\lambda\Delta\right)_{i,j}\right|, \quad \forall\; i=1,\ldots,n. $$ \end{enumerate} Therefore, by \cite[Theorem 1]{willson1968solutions}, for every $\boldsymbol{g} \in \mathbb{R}^n$ there exists a unique solution $\boldsymbol{v}$ to the equation $$ M\boldsymbol{v} = \boldsymbol{g}. $$ We now show that the norm of the solution is bounded above by the norm of $g$. Let $v$ be the solution of \eqref{eq:step-I-1} with right-hand side $g$. 
Since $\psi$ is strictly monotone increasing and $\psi(0)=0$, we get $$ \operatorname{sgn}\left(v(x)\right) = \operatorname{sgn}\left(\Psi v(x)\right). $$ By Proposition \ref{prop:nonnegativity}, and recalling that $v=\Phi u$, i.e., $u = \Psi v$, it follows that $$ \sum_{x\in X} \Delta v (x) \operatorname{sgn}\left(\Psi v(x)\right)\mu(x)= \sum_{\substack{x\in X\colon\\u(x)\neq 0}} \Delta \Phi u(x) \operatorname{sgn}\left(u(x)\right)\mu(x)\geq0. $$ Therefore, we conclude \begin{align}\label{eq:step-I-3} \|u\| =\| \Psi v \| &= \sum_{x\in X} \left|\Psi v(x) \right|\mu(x)\\\nonumber &= \sum_{x\in X} \Psi v(x)\operatorname{sgn}\left(\Psi v(x)\right)\mu(x)\\\nonumber &= \sum_{x\in X} g(x)\operatorname{sgn}\left(\Psi v(x)\right)\mu(x) - \lambda\sum_{x\in X} \Delta v(x) \operatorname{sgn}\left(\Psi v(x)\right)\mu(x)\\\nonumber &\leq \sum_{x\in X} g(x)\operatorname{sgn}\left(\Psi v(x)\right)\mu(x)\\\nonumber &\leq \| g\|. \end{align} By Case 2) of Theorem~\ref{thm:min_principle}, if $g\geq 0$, then $v\geq 0$, and if $g\leq 0$, then $v\leq 0$. Consequently, $u=\Psi v$ has the same sign as $g$. Therefore, if $G$ is finite we can conclude that for every $g \in \ell^{1,\pm}(X,\mu)$, the unique solution $u$ of \eqref{eq:step-I-1.0} belongs to $\ell^{1,\pm}(X,\mu)$ and satisfies $\|u \| \leq \|g\|$. \vspace{0.5cm} \noindent\textbf{Step II} (Constructing a solution when $G$ is infinite and connected)\textbf{:} We want to show that if $G$ is infinite and connected, then for every fixed $\lambda>0$ and $g \in \ell^1(X,\mu)$ there exists $u\in \ell^1(X,\mu)$ and a sequence $\{u_n\}_n$ such that \begin{subequations} \begin{equation}\label{eq:l1convergence2} \operatorname{supp}u_n\subseteq X_n; \end{equation} \begin{equation}\label{eq:l1convergence} \lim_{n\to \infty}\|u_n -u\|=0; \end{equation} \begin{equation}\label{eq:l1convergence0} \lim_{n\to \infty}\|\left(\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_{n} + \lambda \mathcal{L}_{n}\right) u_n- g\|=0 \end{equation} \end{subequations} where $\operatorname{id}_n$ is the identity operator on $C(X_n)$. We divide this step into two sub-steps consisting of the cases when $g$ is nonnegative (or nonpositive) and then general $g$. \vspace{0.2cm} \noindent\textbf{Step II-1} ($g\in \ell^{1,\pm}(X,\mu)$)\textbf{:} Assume that $g\in \ell^{1,+}(X,\mu)$. By Lemma \ref{lem:chain1}, we can choose the exhaustion $\{X_n\}_{n=1}^{\infty}$ with the following additional properties: \begin{equation}\label{eq:property_sets1} X_{n}\subset X_{n+1}, \qquad X=\bigcup_{n=1}^\infty X_n \end{equation} and such that the set \begin{equation}\label{eq:property_sets2} \{ x \in X_n \mid x\sim y \mbox{ for some } y \in X_{n+1}\setminus X_n \} \end{equation} is not empty for all $n$. For each $n$, we define the subgraph \begin{equation}\label{eq:def_Dir_subgraph} G_{\textnormal{dir},n} = (X_n, w_n, \kappa_{\textnormal{dir},n}, \mu_{n})\subset G \end{equation} as a Dirichlet subgraph of $G$, see Definition \ref{def:dir_subgraph}. That is, \begin{itemize} \item $w_n \equiv w_{|X_{n}\times X_{n}}$;\ \item $\mu_n \equiv \mu_{|X_{n}}$; \item for every $x \in X_n$, $b_{\textnormal{dir},n}(x)= \sum_{y \in \mathbullet{\partial}X_n}w(x,y)$; \item for every $x \in X_n$, $\kappa_{\textnormal{dir},n}(x) = \kappa_{|X_n}(x) + b_{\textnormal{dir},n}(x)$. 
\end{itemize} If we define \begin{align*} &\mathring{\partial}X_{n,n+1}\coloneqq\{ x \in X_{n} \mid \exists y \in X_{n+1}\setminus X_n \mbox{ such that } x \sim y \}, \\ &\mathbullet{\partial}X_{n,n+1}\coloneqq\{ y \in X_{n+1} \setminus X_n \mid \exists x \in X_n \mbox{ such that } x \sim y \} \end{align*} which are not empty by construction and \begin{equation*}\label{eq:thm_existence_weak_1} b'_{\textnormal{dir},n}(x)= \sum_{y \in \mathbullet{\partial}X_{n,n+1}}w(x,y) \end{equation*} then, for every $x\in X_n$, it holds that \begin{align*} \kappa_{\textnormal{dir},n}(x) &= \kappa_{|X_n}(x) + b_{\textnormal{dir},n}(x)\\ &=\kappa_{|X_n}(x) + \sum_{y \in \mathbullet{\partial}X_n}w(x,y)\\ &= \kappa_{|X_{n+1}}(x) +\sum_{y \in X\setminus X_n}w(x,y)\\ &= \kappa_{|X_{n+1}}(x) + \sum_{y \in X\setminus X_{n+1}}w(x,y) + \sum_{y \in X_{n+1}\setminus X_{n}}w(x,y)\\ &= \kappa_{|X_{n+1}}(x) + \sum_{y \in \mathbullet{\partial}X_{n+1}}w(x,y) + \sum_{y \in \mathbullet{\partial}X_{n,n+1}}w(x,y)\\ &= \kappa_{|X_{n+1}}(x) + b_{\textnormal{dir},n+1}(x) + b'_{\textnormal{dir},n}(x)\\ &= \kappa_{\textnormal{dir},n+1}(x) + b'_{\textnormal{dir},n}(x). \end{align*} So, the collection $\{G_{\textnormal{dir},n}\}_{n\in \mathbb{N}}$ is a sequence of connected finite Dirichlet subgraphs such that each subgraph $G_{\textnormal{dir},n}$ is a Dirichlet subgraph of $G_{\textnormal{dir},n+1}$, that is, $$G_{\textnormal{dir},1}\subset \ldots \subset G_{\textnormal{dir},n}\subset G_{\textnormal{dir},n+1} \subset \ldots \subset G.$$ Denoting by \begin{equation*}\label{eq:embedding-projection2} \boldsymbol{\mathfrak{i}}_{n} \colon C(X_n) \hookrightarrow C(X_{n+1}), \quad \boldsymbol{\mathfrak{i}}_{n,\infty} \colon C(X_{n}) \hookrightarrow C(X), \quad \boldsymbol{\pi}_n \colon C(X) \to C(X_n) \end{equation*} the canonical embeddings and projections, respectively, define $$ g_n \coloneqq \boldsymbol{\pi}_{n}g \quad \mbox{where } g \in \ell^{1,+}(X,\mu). $$ From \textbf{Step I}, for every $n\in \mathbb{N}$ there exist $\hat{v}_n \in C(X_n)$ such that $\hat{v}_{n} \geq 0$ and \begin{equation}\label{def:hatv_n} (\Psi + \lambda \Delta_{\textnormal{dir}, n})\hat{v}_{n} = g_n. \end{equation} Setting $$ q_n(x)\coloneqq ( \Psi +\lambda\Delta_{\textnormal{dir},n+1}) \boldsymbol{\mathfrak{i}}_{n}\hat{v}_{n}(x) \in C(X_{n+1}) $$ by Lemma \ref{lem:A1}, and the fact that every $G_{\textnormal{dir},n}$ is a Dirichlet subgraph of $G_{\textnormal{dir}, n+1}$, we have $$ (\Psi + \lambda \Delta_{\textnormal{dir}, n+1})\boldsymbol{\mathfrak{i}}_{n}\hat{v}_{n}(x) =(\Psi + \lambda \Delta_{\textnormal{dir}, n})\hat{v}_{n}(x) \quad \forall x \in X_n $$ and \begin{equation*} q_n(x)=\begin{cases} g_n(x) & \mbox{if } x \in X_n,\\ -\frac{\lambda}{\mu(x)}\sum_{y \in X_n}w(x,y)\hat{v}_{n}(y)& \mbox{if } x \in \mathbullet{\partial}X_{n,n+1},\\ 0 & \mbox{if } x \in X_{n+1}\setminus (X_n \cup \mathbullet{\partial}X_{n,n+1}). \end{cases} \end{equation*} Since $0\leq \hat{v}_n$ and $0\leq g_{n+1}$, it follows that $q_n(x) \leq g_{n+1}(x)$ for every $x \in X_{n+1}\setminus X_n$. In particular, from the fact that $g_{n+1}(x)=g_n(x)$ for every $x \in X_n$, we get $q_n\leq g_{n+1}$. 
By Corollary \ref{cor:min}, we have $\boldsymbol{\mathfrak{i}}_{n}\hat{v}_{n}\leq \hat{v}_{n+1}$, and by the fact that $\psi$ is monotone increasing and $\psi(0)=0$, we have \begin{equation*} \Psi\boldsymbol{\mathfrak{i}}_{n}\hat{v}_{n}\leq \Psi \hat{v}_{n+1}\quad \mbox{and}\quad \Psi\boldsymbol{\mathfrak{i}}_{n}\hat{v}_{n}(x)=\boldsymbol{\mathfrak{i}}_{n}\Psi \hat{v}_{n}(x)= \begin{cases} \psi (\hat{v}_n(x)) &\mbox{if } x \in X_n,\\ 0 &\mbox{if } x \in X_{n+1}\setminus X_n. \end{cases} \end{equation*} Therefore, $\boldsymbol{\mathfrak{i}}_{n}\Psi \hat{v}_{n}\leq \Psi \hat{v}_{n+1}$. In particular, we get \begin{align} &\boldsymbol{\mathfrak{i}}_{n,\infty}\hat{v}_{n}(x)=\boldsymbol{\mathfrak{i}}_{n+1,\infty}\boldsymbol{\mathfrak{i}}_{n}\hat{v}_{n}(x) \leq \boldsymbol{\mathfrak{i}}_{n+1,\infty}\hat{v}_{n+1}(x),\label{eq:monotonicity1}\\ &\boldsymbol{\mathfrak{i}}_{n,\infty}\Psi \hat{v}_{n}(x)=\boldsymbol{\mathfrak{i}}_{n+1,\infty}\boldsymbol{\mathfrak{i}}_{n}\Psi \hat{v}_{n}(x) \leq \boldsymbol{\mathfrak{i}}_{n+1,\infty}\Psi \hat{v}_{n+1}(x)\label{eq:monotonicity2} \end{align} for every $x \in X$. Moreover, writing $\hat{u}_{n}(x)=\psi\left(\hat{v}_{n}(x)\right)\geq 0$ for every $x \in X_n$ and indicating by $\|\cdot \|_n$ the restriction of $\|\cdot\|$ to $C(X_n)$ from \eqref{eq:step-I-3} we have \begin{align*} 0\leq \hat{u}_{n}(x) \mu(x)&\leq \sum_{x \in X_n} \hat{u}_{n}(x) \mu(x)\\ &= \|\hat{u}_n\|_n \leq \|g_n\|_n \leq \|g\|, \end{align*} that is, for every fixed $x$, $\hat{u}_{n}(x)$ is bounded uniformly in $n$. In particular, writing \begin{equation*}\label{re-labelling} v_n \coloneqq \boldsymbol{\mathfrak{i}}_{n,\infty}\hat{v}_{n} \in C_c(X)\qquad \mbox{and} \qquad u_n \coloneqq \boldsymbol{\mathfrak{i}}_{n,\infty}\hat{u}_n \in C_c(X) \end{equation*} it follows that \begin{equation}\label{eq:boundedness} \Psi v_{n}(x) = u_n(x) \in \left[0, \frac{\|g\|}{\mu(x)}\right]. \end{equation} Consequently, by \eqref{eq:monotonicity2} and \eqref{eq:boundedness}, for every fixed $x \in X$ we have a sequence \begin{equation}\label{eq:sequence} \left\{u_{n}(x) \right\}_n \coloneqq \left\{\Psi v_{n}(x) \right\}_n \end{equation} that is monotonic and bounded. We can then define $u,v \in C(X)$ such that \begin{align} &u(x) \coloneqq \lim_{n\to \infty} u_{n}(x) \quad \mbox{for } x\in X,\label{eq:limit1} \\ &v\coloneqq \Phi u.\label{eq:limit2} \end{align} Notice that, by construction, $x \in X_n$ eventually so that $u_{n}(x) = \hat{u}_n(x)$ eventually. In particular, by the continuity of $\phi$ and the fact that $\psi=\phi^{-1}$ $$ v(x) = \lim_{n\to \infty} v_{n}(x) $$ and the limit is monotone. Moreover, $u_n$ satisfies \eqref{eq:l1convergence2}, i.e., $\operatorname{supp}u_n\subseteq X_n$ for every $n$ and \begin{equation*} (\operatorname{id}_n + \lambda\Delta_{\textnormal{dir},n}\Phi)\boldsymbol{\pi}_nu_n= g_n. \end{equation*} Therefore, \begin{align*} (\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_{n} + \lambda \mathcal{L}_{n} )u_n &= (\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_{n} + \lambda \boldsymbol{\mathfrak{i}}_{n,\infty}\Delta_{\textnormal{dir},n}\Phi\boldsymbol{\pi}_{n} ) u_n\\ &= \boldsymbol{\mathfrak{i}}_{n,\infty}(\operatorname{id}_n + \lambda \Delta_{\textnormal{dir},n}\Phi )\boldsymbol{\pi}_{n}u_n\\ &= \boldsymbol{\mathfrak{i}}_{n,\infty} g_n. 
\end{align*} In particular, since $g \in \ell^1(X,\mu)$, $$ \lim_{n\to \infty}\|\left(\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_{n} + \lambda \mathcal{L}_{n}\right) u_n- g\|=0, $$ which is exactly \eqref{eq:l1convergence0}. Recall that $\|\cdot \|_n$ denotes the restriction of $\|\cdot\|$ to $C(X_n)$. Since every $X_n$ is finite, from \eqref{eq:step-I-3} in \textbf{Step I} we obtain $$ \| \Psi \hat{v}_{n}\|_n\leq \| g_n\|_n $$ and, consequently, by Fatou's lemma \begin{align} \|u \| &\leq \liminf_{n\to \infty}\| u_n\| =\liminf_{n\to \infty} \| \Psi v_{n} \| = \liminf_{n\to \infty} \| \Psi \hat{v}_{n} \|_n \leq \liminf_{n\to \infty} \| g_n \|_n = \| g \|.\label{eq:contractivity} \end{align} In particular, $u =\Psi v$ is in $\ell^{1,+}(X,\mu)$. Finally, by dominated convergence, we get \eqref{eq:l1convergence}, i.e., $\lim_{n\to \infty} \| u_n - u \| =0$. \vspace{0.2cm} \noindent\textbf{Step II-2} ($g\in \ell^1(X,\mu)$)\textbf{:} Using the same notation as in \textbf{Step II-1}, we define for $g \in \ell^1(X,\mu)$ \begin{equation*} g_n \coloneqq \boldsymbol{\pi}_{n}g,\quad g^+_n\coloneqq \max\{0, \; g_n \},\quad g^-_n\coloneqq \min\{0, \; g_n \}. \end{equation*} From \textbf{Step I} there exist $\hat{v}_{n}, \hat{v}^+_{n}, \hat{v}^-_{n}\in C(X_n)$ that satisfy \begin{equation}\label{eq:Laccretive} \begin{cases} (\Psi + \lambda\Delta_{\textnormal{dir},n})\hat{v}_{n} = g_n,\\ (\Psi +\lambda\Delta_{\textnormal{dir},n})\hat{v}^+_{n} = g^+_n,\\ (\Psi +\lambda\Delta_{\textnormal{dir},n})\hat{v}^-_{n} = g^-_n. \end{cases} \end{equation} Define \begin{equation}\label{eq:def_u_n} \hat{u}_n\coloneqq \Psi \hat{v}_n, \quad u_n \coloneqq \mathbf{\mathfrak{i}}_{n,\infty}\hat{u}_n. \end{equation} Clearly, $u_n$ satisfies \eqref{eq:l1convergence2} and, by construction, \begin{equation*} (\operatorname{id}_n + \lambda\Delta_{\textnormal{dir},n}\Phi)\boldsymbol{\pi}_nu_n= g_n. \end{equation*} Therefore, \begin{align*} (\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_{n} + \lambda \mathcal{L}_{n} )u_n &= (\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_{n} + \lambda \boldsymbol{\mathfrak{i}}_{n,\infty}\Delta_{\textnormal{dir},n}\Phi\boldsymbol{\pi}_{n} ) u_n\\ &= \boldsymbol{\mathfrak{i}}_{n,\infty}(\operatorname{id}_n + \lambda \Delta_{\textnormal{dir},n}\Phi )\boldsymbol{\pi}_{n}u_n\\ &= \boldsymbol{\mathfrak{i}}_{n,\infty} g_n, \end{align*} that is, $$ \lim_{n\to \infty}\|\left(\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_{n} + \lambda \mathcal{L}_{n}\right) u_n- g\|=0, $$ which is exactly \eqref{eq:l1convergence0}. Define now $$\hat{u}^+_n\coloneqq \Psi \hat{v}^+_n,\quad u_n^+\coloneqq\mathbf{\mathfrak{i}}_{n,\infty}\hat{u}^+_n, \quad u^+\coloneqq \lim_{n\to \infty} u^+_n.$$ In particular, $u^+ \in \ell^1(X,\mu)$ is the monotone limit solution of \eqref{eq:step-I-1.0} obtained in \eqref{eq:sequence} and \eqref{eq:limit1} of \textbf{Step II-1}. In the same way, define $\hat{u}^-_n$, $u^-_n$ and $u^-$. Finally, define $$v^+\coloneqq \Phi u^+, \quad v^-\coloneqq\Phi u^-$$ as in \eqref{eq:limit2} of \textbf{Step II-1}.
Let us observe that, by the definitions \eqref{eq:Laccretive}-\eqref{eq:def_u_n}, Corollary~\ref{cor:min}, and monotone limits, it holds that \begin{align*} & u_n(x) = \hat{u}_n(x)\leq \hat{u}^+_n(x)\leq u^+(x) \quad \mbox{if } x\in X_n,\\ & u_n(x) = 0\leq u^+(x) \quad \mbox{if } x\notin X_n, \end{align*} and \begin{align*} & u^-(x)\leq \hat{u}^-_n(x)\leq \hat{u}_n(x)=u_n(x) \quad \mbox{if } x\in X_n,\\ & u^-(x)\leq 0= u_n(x) \quad \mbox{if } x\notin X_n. \end{align*} In particular, $$ u^-(x)\leq u_n(x)\leq u^+(x), \quad \forall\, x\in X,\; n\in \mathbb{N}, $$ that is, for every fixed $x\in X$ the sequence $u_n(x)$ is uniformly bounded in $n$ with \begin{equation}\label{eq:uniform_bound} |u_n(x)| \leq c_x\coloneqq \max\left\{|u^-(x)|, |u^+(x)|\right\}<\infty \quad \forall\, n\in \mathbb{N}. \end{equation} Therefore, by passing to a subsequence using a diagonal sequence argument, the limit functions \begin{align*} &u(x)\coloneqq \lim_{n\to \infty} u_{n}(x) = \lim_{n\to \infty} \Psi v_{n}(x),\\ & v \coloneqq \Phi u \end{align*} exist and are well-defined on $X$. By the same arguments as in \eqref{eq:contractivity}, it follows that \begin{equation}\label{eq:contractivity2} \|u\| \leq \|g\| \quad \mbox{and} \quad u =\Psi v \in \ell^1(X,\mu). \end{equation} Moreover, from the previous \textbf{Step II-1} we know that $u^+,u^- \in \ell^1(X,\mu)$ and then from \eqref{eq:uniform_bound} it follows that $|u_n|$ is bounded above by an integrable function. By dominated convergence, we get \eqref{eq:l1convergence}, i.e., $\lim_{n\to \infty} \| u_n - u \| =0$. \vspace{0.5cm} \noindent\textbf{Step III} (When $G$ is infinite and connected, $\operatorname{id}+ \lambda \mathcal{L}_{|\Omega}$ maps bijectively onto $\ell^{1,\pm}(X,\mu)$)\textbf{:} We want to show that the function $u \in \ell^{1,\pm}(X,\mu)$ that we constructed in \textbf{Step II-1} as the limit of a sequence of finitely supported functions $\{u_n\}_n$ is a solution of \eqref{eq:step-I-1.0} which belongs to $\Omega$. In order to do so, it remains to show that: \begin{subequations} \begin{equation}\label{eq:1} u\in \textnormal{dom}(\mathcal{L}); \end{equation} \begin{equation}\label{eq:3} (\operatorname{id} + \lambda\mathcal{L})u=g; \end{equation} \begin{equation}\label{eq:2} \lim_{n\to \infty}\|\mathcal{L}_{n} u_n - \mathcal{L}u \|=0. \end{equation} \end{subequations} Let us now highlight that $v_{n}=\Phi u_n \in C_c(X) \subseteq \textnormal{dom}\left(\Delta\right)$ for every $n$, that is, $\Delta v_{n}$ is well-defined. Since, for every $x \in X$, $$ \sum_{y\in X} \lim_{n\to \infty}w(x,y) v_{n}(y)= \lim_{n\to \infty}\sum_{y\in X}w(x,y) v_{n}(y) $$ by \eqref{eq:monotonicity1} and monotone convergence, and recalling that every $G_{\textnormal{dir},n}$ is a Dirichlet subgraph of $G$ for every $n\in \mathbb{N}$, by Lemma \ref{lem:A1} we get \begin{align}\label{eq:thm_existence_5} \nonumber \Psi v(x)&+\frac{\lambda}{\mu(x)}\left[ \deg(x)v(x) - \sum_{y\in X}w(x,y)v(y) \right] \\\nonumber &= \lim_{n\to \infty}\left( \Psi v_{n}(x) + \frac{\lambda}{\mu(x)}\left[\deg(x)v_{n}(x) - \sum_{y\in X}w(x,y) v_{n}(y) \right] \right) \\\nonumber &= \lim_{n\to \infty} (\Psi + \lambda \Delta)v_{n} (x)\\\nonumber &= \lim_{n\to \infty} \left( \Psi + \lambda\Delta_{\textnormal{dir},n} \right)\hat{v}_{n} (x)\\ &= \lim_{n\to \infty} g_n (x)= g(x). \end{align} Notice that along the way we used the fact that $\psi$ is continuous.
Moreover, by Remark \ref{rem:1}, we observe that $v \in \textnormal{dom}\left( \Delta \right)$, that is, $\Phi u \in \textnormal{dom}\left( \Delta \right)$ and $$ (\Psi + \lambda\Delta)v(x) = g(x), $$ namely, $v$ is a nonnegative solution of \eqref{eq:step-I-1} and thus $u$ is a nonnegative solution of \eqref{eq:3}. By the fact that $\lambda \Delta\Phi u = g - u$ and $g, u \in \ell^1(X,\mu)$, we obtain $\Delta\Phi u \in \ell^{1}(X,\mu)$. We can then conclude that $u \in \textnormal{dom}\left(\mathcal{L}\right)$, i.e., \eqref{eq:1}. Let us prove \eqref{eq:2}: By the fact that $\lambda\mathcal{L}u= g-u$ we obtain \begin{align}\label{eq:thm_existence_Alb} \|\mathcal{L}_{n} u_n - \mathcal{L}u\| & \leq \frac{1}{\lambda} \left( \|(\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_n + \lambda\mathcal{L}_{n})u_n - g\| + \| \boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_n u_n - u \| \right) \nonumber \\ &= \frac{1}{\lambda} \left( \|(\boldsymbol{\mathfrak{i}}_{n,\infty}\operatorname{id}_n\boldsymbol{\pi}_n + \lambda\mathcal{L}_{n})u_n - g\| + \| u_n - u \| \right) \end{align} and we conclude \eqref{eq:2} by using \eqref{eq:l1convergence} and \eqref{eq:l1convergence0}. In particular, we have shown that for every $\lambda >0$ and $g \in \ell^{1,+}(X,\mu)$ there exists a unique $u \in \Omega\cap \ell^{1,+}(X,\mu)$ such that $(\operatorname{id}+\lambda\mathcal{L})u=g$, and $\|u\|\leq \|g\|$. If $g\in \ell^{1,-}(X,\mu)$, then the arguments of the proof are completely symmetric and the (nonpositive) solution $u$ can be built as monotone decreasing limit. \vspace{0.5cm} \noindent\textbf{Step IV} (When $G$ is infinite, connected and satisfies assumptions \ref{m-accretivity_A2}, \ref{m-accretivity_B2} or \ref{m-accretivity_C2}, then $\operatorname{id} +\lambda\mathcal{L}_{|\Omega}$ maps bijectively onto $\ell^{1}(X,\mu)$)\textbf{:} The statement follows immediately by the same arguments in \textbf{Step III} if we can show that $u\in \textnormal{dom}(\mathcal{L})$ and $$ (\operatorname{id}+\lambda\mathcal{L})u=g. $$ We divide this step into three sub-steps. \vspace{0.2cm} \noindent\textbf{Step IV-H1} ($G$ is locally finite)\textbf{:} Let us highlight that $\textnormal{dom}(\Delta)=C(X)$ because of the local finiteness of $G$ so that $v_{n}, v \in \textnormal{dom}\left(\Delta\right)$, that is, $\Delta v_{n}$ and $\Delta v=\Delta\Phi u$ are well-defined. Furthermore, by the local finiteness of $G$, for every fixed $x$ there exists a finite number of nodes $y\in X$ such that $w(x,y)\neq 0$. By Lemma~\ref{lem:chain2}, we can assume moreover that the sequence $\{X_n\}_{n=1}^\infty$ in addition to \eqref{eq:property_sets1} and \eqref{eq:property_sets2} also satisfies $$ \mathring{X}_{n}\subset \mathring{X}_{n+1}\quad \mbox{and}\quad\bigcup_{n=1}^\infty \mathring{X}_n =X. $$ Let $N=N(x)$ be such that $x \in \mathring{X}_n$ for every $n \geq N$. As a consequence, for every fixed $x \in X$, the series are in fact finite sums and, passing to the limit, we get \begin{align*} \lim_{n\to \infty} \sum_{y \in X} w(x,y) |v_{n}(y)|& =\lim_{n\to \infty}\sum_{y \in X_N} w(x,y) |v_{n}(y)|\\ &=\sum_{y \in X} \lim_{n\to \infty} w(x,y) |v_{n}(y)|. 
\end{align*} Therefore, by the above considerations, we have \begin{align*} (\Psi +\lambda\Delta) v(x) & =\lim_{n\to \infty} \left( \Psi + \lambda\Delta \right)v_{n} (x)\\ &= \lim_{n\to \infty} \left( \Psi + \lambda\Delta_{\textnormal{dir},n} \right)\hat{v}_{n} (x)\\ &= \lim_{n\to \infty} g_n (x)= g(x) \end{align*} where the second equality follows from Lemma \ref{lem:A1} (and the fact that every $G_{\textnormal{dir},n}$ is a Dirichlet subgraph of $G$ for every $n\in \mathbb{N}$). Since $\lambda \Delta\Phi u = g - u$, we get $\Delta\Phi u \in \ell^{1}(X,\mu)$, i.e., $u\in \textnormal{dom}(\mathcal{L})$. By the same arguments as in \textbf{Step III}, see \eqref{eq:thm_existence_Alb}, we can check that $u \in \Omega$. By the accretivity of $\mathcal{L}_{|\Omega}$ we obtain the uniqueness of $u$. In particular, we have proven that for every $\lambda >0$ and $g \in \ell^{1}(X,\mu)$ there exists a unique $u \in \Omega$ such that $(\operatorname{id}+\lambda\mathcal{L})u=g$, and $\|u\|\leq \|g\|$. \vspace{0.2cm} \noindent\textbf{Step IV-H2} ($\inf_{x\in X}\mu(x)>0$)\textbf{:} Once we show that for every fixed $x \in X$ \begin{equation}\label{eq:thm_existence_4} \sum_{y \in X} \lim_{n\to \infty} w(x,y) |v_{n}(y)|=\lim_{n\to \infty} \sum_{y \in X} w(x,y) |v_{n}(y)|<\infty, \end{equation} that is, $v \in \textnormal{dom}(\Delta)$ and $\lim_{n\to \infty} \Delta v_{n}(x) = \Delta\left(\lim_{n\to \infty}v_{n}\right)(x)$ by dominated convergence, then we can conclude the proof as in the final part of \textbf{Step IV-H1}. Indeed, one of the main issues in the previous steps was to show that the solution $v$ of equation \eqref{eq:step-I-1} belongs to $\textnormal{dom}(\Delta)$ so that $\left(\Psi + \lambda\Delta\right)v$ is well-defined. If this is established, we can apply the same arguments as in \eqref{eq:thm_existence_5}. However, here \eqref{eq:thm_existence_4} is immediate: By the uniform lower boundedness of the measure $\mu$, it follows that $\ell^1(X,\mu)\subseteq \ell^\infty(X,\mu)$ and, since $\Psi v=u \in \ell^1(X,\mu)$, we get $v\in \ell^\infty(X,\mu)$ by the surjectivity of $\psi$. So, $v \in \textnormal{dom}(\Delta)$ and \eqref{eq:thm_existence_4} follows. \vspace{0.2cm} \noindent\textbf{Step IV-H3} ($\sup_{x\in X}\frac{\sum_{y\in X}w(x,y)}{\mu(x)}\leq c <\infty$ and $\Phi(\ell^1) \subseteq \ell^1$)\textbf{:} The reasoning of the previous step applies here as well. By \eqref{eq:contractivity2} we have $u \in \ell^1(X,\mu)$ and from the hypothesis on $\Phi$ it follows that $v=\Phi u \in \ell^1(X,\mu)$. Therefore, $$ \sum_{y \in X} w(x,y)|v(y)|= \sum_{y \in X} \frac{w(x,y)}{\mu(y)}|v(y)|\mu(y)\leq \sum_{y \in X} \sup_{z}\frac{\sum_{x\in X}w(x,z)}{\mu(z)}|v(y)|\mu(y) \leq c\|v\| < \infty. $$ Thus, $v \in \textnormal{dom}(\Delta)$ and \eqref{eq:thm_existence_4} holds. \vspace{0.5cm} \noindent\textbf{Step V} (Constructing a solution when $G$ is not connected)\textbf{:} Assume now that $G$ is not connected and write $X$ as a disjoint union of connected components, that is, $X=\bigsqcup_{k=1}^KY_k$, where the $Y_k$ are the connected components of $X$ and $K\in \mathbb{N} \cup \{\infty\}$. We first observe that, if $u \in \textnormal{dom}(\Delta)$ and $x \in Y_k$, then $\Delta u(x) = \Delta_{k} \boldsymbol{\pi}_k u(x)$, where $\Delta_{k}$ is the formal graph Laplacian associated to the canonical induced subgraph $G_{k}=(Y_k,w_{|Y_k\times Y_k},\kappa_{|Y_k}, \mu_{|Y_k})$ and $\boldsymbol{\pi}_{k}$ is the projection onto $C(Y_k)$.
We then write $\mathcal{L}_k \colon \textnormal{dom}(\mathcal{L}_k)\subseteq \ell^1(Y_k,\mu_{|Y_k}) \to \ell^1(Y_k,\mu_{|Y_k})$ where \begin{align*} &\textnormal{dom}(\mathcal{L}_k)\coloneqq \{v \in \ell^1(Y_k,\mu_{|Y_k}) \mid \Phi v \in \textnormal{dom}(\Delta_k),\; \Delta_k\Phi v\in \ell^1(Y_k,\mu_{|Y_k})\},\\ &\mathcal{L}_k v\coloneqq \Delta_k\Phi v. \end{align*} Notice that if $u \in \textnormal{dom}(\mathcal{L})$, then $\boldsymbol{\pi}_{k}u \in \textnormal{dom}(\mathcal{L}_k)$ for every $k$ and $\mathcal{L}u(x)= \mathcal{L}_k\boldsymbol{\pi}_{k}u(x)$ for every $x \in Y_k$. Now, for every $Y_k$, we fix an exhaustion $\{Y_{k,n}\}_n$ as in Lemma \ref{lem:chain1} and define the set $\Omega_{k}$ associated to the subgraph $G_{k}$ and $\{Y_{k,n}\}_n$ as in Definition \ref{def:Omega}. As we already know from \textbf{Step 0}, $\mathcal{L}_k$ is accretive on $\Omega_k$. We next define \begin{equation*} \Omega \coloneqq \{ u \in \textnormal{dom}(\mathcal{L}) \mid \exists\, \{u_k\}_k \mbox{ s.t. } u_k \in \Omega_k \mbox{ for } k=1,\ldots,K,\, \boldsymbol{\pi}_{k}u=u_k\}. \end{equation*} By Lemma \ref{lem:FC0}, $C_c(X)\subseteq \textnormal{dom}(\mathcal{L})$. Furthermore, by Lemma \ref{lem:FC}, for every $u \in C_c(X)$, $u_k\coloneqq \boldsymbol{\pi}_{k}u \in C_c(Y_k)\subseteq \Omega_k$. It follows that $C_c(X) \subseteq \Omega$ and, in particular, $\overline{\Omega}=\overline{\textnormal{dom}(\mathcal{L})}=\ell^1(X,\mu)$. It is not difficult to show that $\mathcal{L}$ is accretive on $\Omega$, as each $\mathcal{L}_k$ is accretive on $\Omega_k$. Indeed, for every $u,v \in \Omega$, \begin{align*} \left\|(u - v) + \lambda \left(\mathcal{L}u - \mathcal{L}v\right) \right\|&=\sum_{k=1}^K \left\|(\boldsymbol{\pi}_{k}u - \boldsymbol{\pi}_{k}v) + \lambda \left(\mathcal{L}_k \boldsymbol{\pi}_{k}u - \mathcal{L}_k \boldsymbol{\pi}_{k}v\right) \right\|_{Y_k}\\ &= \sum_{k=1}^K \left\|(u_k - v_k) + \lambda \left(\mathcal{L}_k u_k - \mathcal{L}_k v_k\right) \right\|_{Y_k}\\ &\geq \sum_{k=1}^K\|u_k - v_k\|_{Y_k}= \|u - v\|. \end{align*} Finally, fix $g \in \ell^1(X,\mu)$ and $\lambda>0$. Clearly, if $g \in \ell^{1,\pm}(X,\mu)$, then $\boldsymbol{\pi}_{k}g\in \ell^{1,\pm}(Y_k,\mu_{|Y_k})$ for every $k$ and, if $G$ satisfies one of the assumptions \ref{m-accretivity_A2}, \ref{m-accretivity_B2} or \ref{m-accretivity_C2}, then $G_k$ satisfies the same property for every $k$. Therefore, for every $k$, let $u_k\in C(Y_k)$ be the unique function in $\Omega_k$ which solves \begin{equation*} (\operatorname{id}_{k}+\lambda\Delta_{k}\Phi)u_k=\boldsymbol{\pi}_{k}g \end{equation*} as constructed in \textbf{Steps I} to \textbf{IV} above. Now, define \begin{equation*} u(x) \coloneqq u_k(x) \mbox{ if } x \in Y_k. \end{equation*} The function $u$ has the following properties: \begin{enumerate}[i)] \item $\Phi u \in \textnormal{dom}(\Delta)$ since $\Phi u_k \in \textnormal{dom}(\Delta_{k})$ for every $k$; \item $u$ solves $(\operatorname{id}+\lambda\Delta\Phi)u=g$. \end{enumerate} Moreover, by \eqref{eq:contractivity2}, \begin{equation*} \sum_{k=1}^K \|u_k\|_{Y_k}\leq \sum_{k=1}^K\|\boldsymbol{\pi}_{k}g\|_{Y_k}= \|g\| \end{equation*} and thus $\|u\|\leq \|g\|$. Therefore, $u\in \ell^1(X,\mu)$ and $\Delta\Phi u=g-u \in\ell^1(X,\mu)$, that is, $u \in \textnormal{dom}(\mathcal{L})$. In particular, $u \in \Omega$. This completes the proof of Theorem~\ref{thm:main1}.
\qed \vspace{0.2cm} \begin{remark} As the proof shows, the conclusion of \textbf{Step V} follows under the weaker assumption that at least one of the conditions $g\geq 0$, $g\leq 0$, \ref{m-accretivity_A2}, \ref{m-accretivity_B2} or \ref{m-accretivity_C2} in the statement of Theorem \ref{thm:main1} holds in each connected component $Y_k$ of $X$, not necessarily the same condition for different $Y_k$. \end{remark} Using the constructions carried out in the proof of Theorem~\ref{thm:main1}, we can now prove the existence and uniqueness of solutions for the \ref{eq:C-D}. \vspace{0.3cm} \noindent\textbf{Proof of Theorem \ref{thm:main2}.} From the definition of mild solutions, Definition \ref{def:weak_solution}, without loss of generality we can suppose that $T<\infty$. Then, since $f\in L^1_{\textnormal{loc}}\left([0,T]; \ell^1\left(X,\mu\right)\right)= L^1\left([0,T]; \ell^1\left(X,\mu\right)\right)$, there exists an $\epsilon$-discretization $\mathcal{D}_\epsilon$ of $([0,T];f)$ for every $\epsilon >0$. Let us observe that solving \eqref{implicit_Euler}, i.e., \begin{equation*} \frac{u_k - u_{k-1}}{\lambda_k} + \Delta\Phi u_k = f_k, \qquad \lambda_k\coloneqq t_k - t_{k-1} \end{equation*} for $k=1,\ldots,n$ means to solve at each step the equation \begin{equation*} (\operatorname{id} + \lambda_k\Delta\Phi) u_k= u_{k-1} + \lambda_kf_k \end{equation*} in such a way that $$ u_k \in \ell^1\left(X,\mu\right), \qquad \Phi u_k\in\textnormal{dom}(\Delta), \qquad \Delta\Phi u_k\in \ell^1(X,\mu) $$ where $\lambda_k>0$ and $f_k\in \ell^1\left(X,\mu\right)$. Therefore, given $u_0$ and $\{f_k\}_{k=1}^n$, the solution $\{u_k\}_{k=1}^n$ (if any) of \eqref{implicit_Euler} is computed recursively starting from \begin{equation}\label{eq:step1} (\operatorname{id} + \lambda_1\Delta\Phi) u_1= u_{0} + \lambda_1f_1. \end{equation} If $u_0, f_1 \in \ell^1(X,\mu)$ are nonnegative (nonpositive), then $g\coloneqq u_0+\lambda_1f_1 \in \ell^{1,\pm}(X,\mu)$ and by Theorem \ref{thm:main1} there exists a unique nonnegative (nonpositive) solution $u_1\in \Omega$ of \eqref{eq:step1}. Iterating the procedure, each $u_{k-1} + \lambda_kf_k \in \ell^{1,\pm}(X,\mu)$. Therefore, for every $\epsilon>0$ there exists an $\epsilon$-approximate solution $u_\epsilon$ of the \ref{eq:C-D} (see \eqref{epsilon_approximation}), such that $u_\epsilon(t)\geq 0$ and $u_\epsilon(t)\in \Omega$ for every $t\in(0,T]$. In Theorem \ref{thm:main1} we also proved that $\mathcal{L}_{|\Omega}$ is accretive and $\overline{\Omega}=\overline{\textnormal{dom}(\mathcal{L})}=\ell^1(X,\mu)$ by Lemma \ref{lem:FC}. Therefore, summarizing, we have that: \begin{enumerate}[1)] \item By hypotheses \ref{item:nonnegativity/nonpositivity}, \ref{hp1}, and \ref{hp2} we have $u_0 \geq 0$, $u_0 \in \ell^1(X,\mu)=\overline{\textnormal{dom}\left(\mathcal{L}_{|\Omega}\right)}=\overline{\Omega}$, and $f \in L^1\left([0,T]; \ell^1\left(X,\mu\right)\right)$, respectively; \item $\mathcal{L}_{|\Omega}$ is accretive; \item For every $u_0\geq 0$ and $f(t)\geq 0$, there exists an $\epsilon$-approximate solution $u_\epsilon$ such that $u_\epsilon(t)\geq 0$ and $u_\epsilon(t)\in \textnormal{dom}\left(\mathcal{L}_{|\Omega}\right)=\Omega$ for every $t\in (0,T]$. \end{enumerate} Then, by standard results (see \cite[Theorem 3.3]{benilan1988evolution} or \cite[Theorem 4.1]{barbu2010nonlinear}), there exists a unique mild solution $u$ of the \ref{eq:C-D} which satisfies \eqref{uniform_limit}. 
Since the limit is uniform and $u_\epsilon(t)\geq 0$, we have $u(t)\geq 0$ and $u(t) \in \ell^1(X,\mu)$ for every $t \in [0,T]$. The validity of \eqref{contraction_of_solutions} is again standard, see \cite[Theorem 4.1]{barbu2010nonlinear}. If $u_0 \leq 0$ and $f(t)\leq 0$, then we get the same results in a completely analogous way. Under the extra hypothesis \ref{m-accretivity_A}, \ref{m-accretivity_B} or \ref{m-accretivity_C} in Theorem \ref{thm:main1}, we have established the $m$-accretivity of $\mathcal{L}_{|\Omega}$, which implies the existence of $\epsilon$-approximate solutions for every $\epsilon$ as above. Therefore, under the hypotheses \ref{hp1}, \ref{hp2} and \ref{hpA}, there exists a unique mild solution $u$ of the \ref{eq:C-D} which satisfies \eqref{uniform_limit} and \eqref{contraction_of_solutions}, see \cite[Corollary 4.1]{barbu2010nonlinear}. \qed \vspace{0.2cm} We now recall the following general result, see, e.g., \cite[Proposition 3]{crandall1986nonlinear} or \cite[Theorem 1.6]{benilan1988evolution}. \begin{proposition}\label{prop:continuity} Let $f \in L^1_{\textnormal{loc}}([0,T] \, ;\, \ell^1\left(X,\mu\right))$. Let $\textnormal{dom}(\mathcal{L})$ be closed and let $\mathcal{L}$ be continuous on $\textnormal{dom}(\mathcal{L})$. If $u$ is a mild solution on $(0,T)$, then $u$ is a strong solution and $u$ satisfies, for every $0<t<T$, \begin{equation*} u(t) = u(0) - \int_0^t \mathcal{L} u(s) ds +\int_0^t f(s)ds. \end{equation*} Moreover, if $f \in C([0,T] \, ;\, \ell^1\left(X,\mu\right))$, then $u$ is a classic solution. \end{proposition} We now provide a direct application of the proposition above to the graph setting. \begin{corollary}\label{cor:application} Let $G=(X,w,\kappa,\mu)$ be a graph. If \begin{enumerate}[(i)] \item\label{cor_item1} $ \sup_{x\in X} \operatorname{Deg}(x) < \infty; $ \item\label{cor_item2} $\Phi \colon \ell^1(X,\mu) \to \ell^1(X,\mu)$ is continuous; \end{enumerate} then $\textnormal{dom}(\mathcal{L})=\ell^1(X,\mu)$ and $\mathcal{L}$ is continuous. In particular, the conclusions of Proposition \ref{prop:continuity} hold. \end{corollary} \begin{proof} Let $u\in \ell^1(X,\mu)$. By \ref{cor_item2} we have that $\Phi u \in \ell^1(X,\mu)$ and then by \ref{cor_item1} \begin{equation*} \sum_{y\in X} w(x,y) |\phi(u(y))| \leq c_1\sum_{\substack{y\in X}} |\phi(u(y))| \mu(y) <\infty \end{equation*} for some $c_1>0$, that is, $\Phi u \in \textnormal{dom}\left(\Delta\right)\cap \ell^1(X,\mu)$. Let us recall from \cite[Theorem 9.2]{haeseler2012laplacians} or \cite[Theorem 2.15]{keller2021graphs} that the formal graph Laplacian $\Delta$ is bounded on $\ell^1(X,\mu)$ (indeed, on $\ell^p(X, \mu)$ for all $p\in [1,\infty]$) if and only if \ref{cor_item1} holds. Therefore, $$ \|\Delta\Phi u\| \leq c_2 \|\Phi u\| < \infty $$ for some $c_2>0$, namely, $\Delta\Phi u \in \ell^1(X,\mu)$ and $\textnormal{dom}(\mathcal{L})=\ell^1(X,\mu)$. Therefore, by \ref{cor_item2}, $\mathcal{L}$ is continuous, as the composition of continuous operators is continuous. \end{proof} \begin{remark} Observe that, if $G$ is finite, then both hypotheses \ref{cor_item1} and \ref{cor_item2} in Corollary~\ref{cor:application} are trivially satisfied and, if $f$ is continuous, then the \ref{eq:C-D} always has a unique classic solution for any $\phi$. Regarding hypothesis \ref{cor_item2}, if $G$ is not finite, one condition that ensures the continuity of $\Phi$ is that $\phi$ is Lipschitz continuous with a uniform Lipschitz constant.
Another sufficient condition for the continuity of the operator $\Phi$ on $\ell^1(X,\mu)$ is that $\mu$ is bounded away from zero, i.e., assumption \ref{m-accretivity_B}, and that $\phi$ is uniformly Lipschitz on every interval $[-R,R]$. This is, for instance, the case of the PME, where $\phi(s)=s|s|^{m-1}$ with $m> 1$. To prove these statements, recall first that we are assuming $\Phi (\ell^1(X,\mu))\subseteq \ell^1(X,\mu)$ and that if $\inf_{x\in X}\mu(x)\geq c>0$, then $\ell^1(X,\mu)\subseteq \ell^\infty(X,\mu)$ and $\|u\|_\infty \leq c^{-1}\|u\|_1$ for every $u \in \ell^1(X,\mu)$. Therefore, if $\|u_n - u\|_1 \to 0$, then $|u_n-u|$ is uniformly bounded. In particular, there exists $R>0$ such that $u_n(x), u(x) \in [-R,R]$ for every $x\in X$ and for every $n\in \mathbb{N}$. Consequently, $|\phi(u_n(x)) - \phi(u(x))| \leq L_R|u_n(x) - u(x)|$, where $L_R$ is the Lipschitz constant of $\phi$ on $[-R,R]$, and then $\|\Phi u_n - \Phi u\|_1\to 0$. \end{remark}
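\vspace{0.2cm} \noindent\textbf{A numerical illustration.} To complement the abstract constructions of this section, the following is a minimal numerical sketch (purely illustrative and not used anywhere in the proofs; the graph, the measure, the exponent $m$ and the step size are arbitrary choices, and the helper name \texttt{euler\_step} is ours) of the implicit Euler iteration \eqref{implicit_Euler} for the PME nonlinearity $\phi(s)=s|s|^{m-1}$ on a finite path graph. Each step solves the resolvent equation $(\operatorname{id}+\lambda_k\Delta\Phi)u_k=u_{k-1}+\lambda_k f_k$ from \textbf{Step I} with a generic root finder.
\begin{verbatim}
# Minimal sketch: implicit Euler steps for  d/dt u + Delta(phi(u)) = f
# with phi(s) = s|s|^(m-1) on a finite path graph (illustrative parameters).
import numpy as np
from scipy.optimize import root

n, m = 20, 2                                  # number of nodes, PME exponent
mu = np.ones(n)                               # node measure mu
kappa = np.zeros(n)                           # killing term kappa
w = np.zeros((n, n))                          # symmetric edge weights of a path
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0

# (Delta u)(x) = ( kappa(x) u(x) + sum_y w(x,y) (u(x) - u(y)) ) / mu(x)
Delta = (np.diag(kappa + w.sum(axis=1)) - w) / mu[:, None]
phi = lambda s: s * np.abs(s) ** (m - 1)

def euler_step(u_prev, f_k, lam):
    # one implicit Euler step: solve (id + lam * Delta Phi) u = u_prev + lam * f_k
    g = u_prev + lam * f_k
    return root(lambda u: u + lam * Delta @ phi(u) - g, u_prev).x

u = np.zeros(n)
u[n // 2] = 1.0                               # nonnegative initial datum u_0
lam = 0.05                                    # constant step size lambda_k
for _ in range(20):                           # forcing f = 0 for simplicity
    u = euler_step(u, np.zeros(n), lam)

print("min u =", u.min())                     # nonnegativity is preserved (Step I)
print("||u|| =", np.sum(np.abs(u) * mu))      # the ell^1 norm does not increase
\end{verbatim}
Since $w$ is symmetric and $\kappa\equiv 0$ in this toy example, summing the scheme over $x$ shows that $\sum_{x}u_k(x)\mu(x)$ is conserved along the iteration (up to the tolerance of the root finder), while the estimate of \textbf{Step I} gives $\|u_k\|\leq \|u_{k-1}+\lambda_k f_k\|$; both serve as elementary consistency checks on the output.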
\section{Introduction} The $L^2$ discrepancy of a finite point set $P \subset [0,1)^2$ in the unit square is defined as \[ D_2(P) = \left( \int_{[0,1]^2} \left( B(x,y) -|P|xy \right)^2 \, \mathrm{d} x \, \mathrm{d} y \right)^{1/2} , \] where $B(x,y)=|P \cap ([0,x) \times [0,y))|$ is the number of points of $P$ which fall in the rectangle $[0,x) \times [0,y)$. The $L^2$ discrepancy is a common measure of equidistribution, with direct applications to numerical integration; for a general introduction we refer to the monograph Drmota--Tichy \cite{DT}. A seminal result of K.\ Roth \cite{RO} states that every finite point set $P$ satisfies $D_2(P) \gg \sqrt{\log |P|}$ with a universal implied constant. This is known to be sharp, with several explicit constructions e.g.\ based on digital nets attaining the optimal order $D_2(P) \ll \sqrt{\log |P|}$, see \cite{DP}. In this paper we undertake a detailed study of the $L^2$ discrepancy of $2$-dimensional lattices. Given $\alpha \in \mathbb{R}$ and $N \in \mathbb{N}$, we will consider the $N$-element set \[ L(\alpha, N) = \left\{ \left( \{ n \alpha \}, \frac{n}{N} \right) \in [0,1)^2 \, : \, 0 \le n \le N-1 \right\} , \] where $\{ \cdot \}$ denotes fractional part, and the $2N$-element set \[ S(\alpha, N) = \left\{ \left( \{ \pm n \alpha \}, \frac{n}{N} \right) \in [0,1)^2 \, : \, 0 \le n \le N-1 \right\} . \] Note that $L(\alpha, N)$ is the intersection of the unit square $[0,1)^2$ and the lattice spanned by the vectors $(\alpha, 1/N)$ and $(1,0)$. We call $S(\alpha, N)$ the symmetrization of $L(\alpha, N)$; more precisely, $S(\alpha, N)$ is the union of $L(\alpha, N)$ and its reflection about the vertical line $x=1/2$. We study both rational and irrational values of $\alpha$. The equidistribution properties of $S(\alpha, N)$ and $L(\alpha, N)$, in particular their $L^2$ discrepancy, are closely related to the Diophantine approximation properties of $\alpha$. Throughout this paper, $\alpha=[a_0;a_1,a_2,\dots]$ will denote the (finite or infinite) continued fraction expansion of $\alpha$, and $p_k/q_k=[a_0;a_1,\dots, a_k]$ its convergents. In the rational case it will not matter which of the two possible expansions is chosen. Roughly speaking, we will show that for $N \approx q_K$, \[ D_2^2 (S(\alpha, N)) \approx \sum_{k=1}^K a_k^2 \quad \textrm{and} \quad D_2^2(L(\alpha, N)) \approx \sum_{k=1}^K a_k^2 + \left( \sum_{k=1}^K (-1)^k a_k \right)^2 . \] See Propositions \ref{parsevalprop} and \ref{simpleparsevalprop} below for a precise formulation. Our first result characterizes all irrationals for which $S(\alpha, q_K)$ resp.\ $L(\alpha, q_K)$ attains optimal $L^2$ discrepancy as $K \to \infty$. We also consider the same problem for $S(\alpha, N)$ and $L(\alpha, N)$ as $N \to \infty$. The first equivalence below generalizes a result of Davenport \cite{DA}, who showed that $S(\alpha, N)$ attains optimal $L^2$ discrepancy whenever $\alpha$ is badly approximable, i.e.\ $a_k \ll 1$. \begin{thm}\label{optimalirrationaltheorem} Let $\alpha =[a_0;a_1,a_2, \dots]$ be irrational. We have \[ \begin{split} D_2 (S(\alpha,N)) \ll \sqrt{\log N} \,\, &\Longleftrightarrow \,\, D_2 (S(\alpha,q_K)) \ll \sqrt{\log q_K} \,\, \Longleftrightarrow \,\, \frac{1}{K} \sum_{k=1}^K a_k^2 \ll 1 , \\ D_2 (L(\alpha,q_K)) \ll \sqrt{\log q_K} \,\, &\Longleftrightarrow \,\, \frac{1}{K} \sum_{k=1}^K a_k^2 \ll 1 \textrm{ and } \frac{1}{\sqrt{K}} \left| \sum_{k=1}^K (-1)^k a_k \right| \ll 1. 
\end{split} \] \end{thm} \begin{remark}\label{LalphaNremark} We also give an almost complete answer for the unsymmetrized lattice $L(\alpha, N)$ with general $N$: under the assumption $a_k \ll \sqrt{k}/\log^2 k$, we have \[ D_2 (L(\alpha,N)) \ll \sqrt{\log N} \,\, \Longleftrightarrow \,\, \frac{1}{K} \sum_{k=1}^K a_k^2 \ll 1 \textrm{ and } \frac{1}{\sqrt{K}} \left| \sum_{k=1}^K (-1)^k a_k \right| \ll 1. \] In the special case of a badly approximable $\alpha$, this equivalence was observed in \cite{BI,BTY2}. Note that $K^{-1} \sum_{k=1}^K a_k^2 \ll 1$ implies that $a_k \ll \sqrt{k}$; we do not know whether the slightly stronger extra assumption $a_k \ll \sqrt{k}/\log^2 k$ can be removed. \end{remark} More precise results can be deduced for an irrational $\alpha$ whose continued fraction expansion is explicitly known. The most interesting case is that of quadratic irrationals, whose continued fractions are of the form $\alpha=[a_0;a_1,\dots, a_r,\overline{a_{r+1}, \dots, a_{r+p}}]$, where the overline denotes the period. Note that in this case $\sum_{k=1}^K (-1)^k a_k = A(\alpha) K +O(1)$ with some constant $A(\alpha)$. In fact, $A(\alpha)=0$ if $p$ is odd, and $A(\alpha)=p^{-1} \sum_{k=1}^p (-1)^{r+k} a_{r+k}$ (possibly zero) if $p$ is even. We also have $\log q_K=\Lambda(\alpha) K+O(1)$ with some constant $\Lambda(\alpha )>0$. In fact, $\Lambda(\alpha) = p^{-1} \log \eta$, where $\eta>1$ is the larger of the two eigenvalues of the matrix \[ \left( \begin{array}{cc} 0 & 1 \\ 1 & a_{r+1} \end{array} \right) \left( \begin{array}{cc} 0 & 1 \\ 1 & a_{r+2} \end{array} \right) \cdots \left( \begin{array}{cc} 0 & 1 \\ 1 & a_{r+p} \end{array} \right) . \] \begin{thm}\label{quadraticirrationaltheorem} Let $\alpha$ be a quadratic irrational, and let $A(\alpha)$ and $\Lambda (\alpha)$ be as above. There exists a constant $c(\alpha)>0$ such that \[ D_2^2(S(\alpha, N)) = c(\alpha) \log N +O(1) , \] and \[ D_2^2(L(\alpha, N)) = \left\{ \begin{array}{ll} \frac{3}{2} c(\alpha) \log N + O((\log \log N)^4) & \textrm{if } A(\alpha)=0, \\ \frac{A(\alpha)^2}{144 \Lambda(\alpha )^2} \log^2 N + O(\log N) & \textrm{if } A(\alpha) \neq 0 . \end{array} \right. \] The implied constants depend only on $\alpha$. \end{thm} \noindent We proved the same result for $S(\alpha, N)$ with the slightly worse error term $O(\log \log N)$ in a previous paper \cite{BO2}. In contrast to $A(\alpha)$ and $\Lambda(\alpha)$, there seems to be no simple way to compute the value of $c(\alpha)$ directly from the continued fraction expansion. The latter constant first appeared in certain lattice point counting problems studied in detail by Beck \cite{BE1,BE2,BE3}, who showed that it is related to the arithmetic of the ring of algebraic integers of the real quadratic field $\mathbb{Q}(\alpha)$, and computed its explicit value for any quadratic irrational; for instance, \[ c \left( \frac{1+\sqrt{5}}{2} \right) = \frac{1}{30 \sqrt{5} \log \frac{1+\sqrt{5}}{2}} \quad \textrm{and} \quad c(\sqrt{3}) = \frac{1}{12 \sqrt{3} \log (2+\sqrt{3})} . \] Precise results also follow for non-badly approximable irrationals whose continued fraction expansions are explicitly known. Consider Euler's number $e=[2;1,2,1,1,4,1,\dots, 1,2n,1,\dots ]$ as an illustration. Since the ``period length'' is odd, the square of the alternating sum $(\sum_{k=1}^K (-1)^k a_k)^2 \ll K^2$ is negligible compared to $\sum_{k=1}^K a_k^2 = (4/81)K^3+O(K^2)$. 
Thus from our general results it easily follows that \[ D_2 (S(e,N)) = \frac{1}{3\sqrt{30}} \left( \frac{\log N}{\log \log N} \right)^{3/2} \left( 1 + O \left( \frac{\log \log \log N}{\log \log N} \right) \right) , \] and \[ D_2 (L(e,N)) = \frac{1}{6\sqrt{5}} \left( \frac{\log N}{\log \log N} \right)^{3/2} \left( 1 + O \left( \frac{\log \log \log N}{\log \log N} \right) \right) . \] In contrast, e.g.\ for $\tan 1 = [1;1,1,3,1,5,1,\dots, 2n-1,1, \dots ]$, the ``period length'' is even, and the alternating sum $(\sum_{k=1}^K (-1)^k a_k)^2=K^4/16+O(K^3)$ dominates $\sum_{k=1}^K a_k^2 = K^3/6 + O(K^2)$. Consequently, \[ D_2 (S(\tan 1,N)) = \frac{1}{3\sqrt{30}} \left( \frac{\log N}{\log \log N} \right)^{3/2} \left( 1 + O \left( \frac{\log \log \log N}{\log \log N} \right) \right) , \] but for the unsymmetrized lattice we have the larger order of magnitude \[ D_2 (L(\tan 1, N)) = \frac{1}{12} \left( \frac{\log N}{\log \log N} \right)^2 \left( 1 + O \left( \frac{\log \log \log N}{\log \log N} \right) \right) . \] We also establish precise results for randomly chosen $\alpha$, starting with the asymptotics a.e.\ in the sense of the Lebesgue measure. \begin{thm}\label{aeasymptotictheorem} Let $\varphi$ be a positive nondecreasing function on $(0,\infty)$. \begin{enumerate} \item[(i)] If $\sum_{n=1}^{\infty} 1/\varphi(n) < \infty$, then for a.e.\ $\alpha$, \[ \begin{split} D_2(S(\alpha, N)) &\le \varphi (\log N) + O(\log N \log \log N), \\ D_2(L(\alpha, N)) &\le \varphi (\log N) + O(\log N \log \log N) \end{split} \] with implied constants depending only on $\alpha$ and $\varphi$. \item[(ii)] If $\sum_{n=1}^{\infty} 1/\varphi (n) = \infty$, then for a.e.\ $\alpha$, \[ D_2 (S(\alpha, N)) \ge \varphi (\log N) \quad \textrm{and} \quad D_2 (L(\alpha, N)) \ge \varphi (\log N) \quad \textrm{for infinitely many } N. \] \end{enumerate} \end{thm} \noindent In particular, for a.e.\ $\alpha$ we have $D_2 (S(\alpha, N)) \ll \log N (\log \log N)^{1+\varepsilon}$ and $D_2 (L(\alpha, N)) \ll \log N (\log \log N)^{1+\varepsilon}$ with any $\varepsilon >0$, but these fail with $\varepsilon =0$. Our next result is the distributional analogue of Theorem \ref{aeasymptotictheorem}, stating that if $\alpha$ is chosen randomly from $[0,1]$ with an absolutely continuous distribution, then after suitable normalization $D_2^2(S(\alpha, N))$ converges to the standard L\'evy distribution. If $\alpha$ is chosen randomly with the Lebesgue measure $\lambda$ or the Gauss measure $\nu (B)=(1/\log 2) \int_B 1/(1+x) \, \mathrm{d}x$ ($B \subseteq [0,1]$ Borel) as distribution, then we also estimate the rate of convergence in the Kolmogorov metric. \begin{thm}\label{irrationallimitdistributiontheorem} If $\mu$ is a Borel probability measure on $[0,1]$ which is absolutely continuous with respect to the Lebesgue measure, then for any $t \ge 0$, \[ \mu \left( \left\{ \alpha \in [0,1] \, : \, 5 \pi^3 \frac{D_2^2 (S(\alpha, N))}{\log^2 N} \le t \right\} \right) \to \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x \qquad \textrm{as } N \to \infty . \] If $\mu$ is either the Lebesgue measure $\lambda$ or the Gauss measure $\nu$, then for any $N \ge 3$, \[ \sup_{t \ge 0} \left| \mu \left( \left\{ \alpha \in [0,1] \, : \, 5 \pi^3 \frac{D_2^2 (S(\alpha, N))}{\log^2 N} \le t \right\} \right) - \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x \right| \ll \frac{(\log \log N)^{1/3}}{(\log N)^{1/3}} \] with a universal implied constant. 
\end{thm} \noindent We conjecture that a similar result holds for the unsymmetrized lattice as well, i.e.\ if $\alpha$ is chosen randomly from $[0,1]$ with an absolutely continuous distribution, then $D_2^2(L(\alpha, N))/\log^2 N$ has a nondegenerate limit distribution as $N \to \infty$. Our results, especially Theorems \ref{optimalirrationaltheorem}, \ref{aeasymptotictheorem} and \ref{irrationallimitdistributiontheorem} should be compared to the corresponding properties of the discrepancy of the classical sequence $\{ n \alpha \}$, defined as \[ \mathrm{Disc}_N(n \alpha ) = \sup_{[a,b] \subset [0,1)} \left| \sum_{n=1}^N I_{[a,b]}(\{ n \alpha \}) - N (b-a) \right| . \] Here and for the rest of the paper, $I_S$ denotes the indicator function of a set $S$. Note that $\max_{1 \le \ell \le N} \mathrm{Disc}_{\ell}(n \alpha)$ is, up to a factor of $2$, equal to $D_{\infty}(L(\alpha, N))$, where the $L^{\infty}$ discrepancy (also called star-discrepancy) $D_{\infty}$ of a finite point set is defined as $D_2$ with the $L^2$ norm replaced by the $L^{\infty}$ norm. Roughly speaking, for $N \approx q_K$ we have $\max_{1 \le \ell \le N} \mathrm{Disc}_{\ell}( n \alpha ) \approx \sum_{k=1}^K a_k$. By a classical theorem of W.\ Schmidt \cite[p.\ 41]{DT}, the optimal rate for the discrepancy is $\log N$, and we can characterize all irrationals for which the optimum is attained \cite[p.\ 53]{DT} as \[ \mathrm{Disc}_N (n \alpha ) \ll \log N \,\, \Longleftrightarrow \,\, \frac{1}{K} \sum_{k=1}^K a_k \ll 1 . \] The discrepancy $\mathrm{Disc}_N(n \alpha )$ is also known to satisfy the same asymptotics a.e.\ as in Theorem \ref{aeasymptotictheorem} \cite[p.\ 63]{DT}. A fortiori, the previous two results apply also to $\max_{1 \le \ell \le N} \mathrm{Disc}_{\ell}(n \alpha)$, and hence to $D_{\infty}(L(\alpha, N))$. We mention two distributional analogues due to Kesten \cite{KE}: \[ \begin{split} \frac{\mathrm{Disc}_N(n \alpha)}{\log N \log \log N} &\to \frac{2}{\pi^2} \quad \textrm{in measure,} \\ \frac{\max_{1 \le \ell \le N}\mathrm{Disc}_{\ell}(n \alpha)}{\log N \log \log N} &\to \frac{3}{\pi^2} \quad \textrm{in measure.} \end{split} \] As a curious observation, we mention that there exists an irrational $\alpha$ such that \[ \log N \ll D_2(S(\alpha, N)) \le D_{\infty} (S(\alpha, N)) \ll \log N , \] and \[ \log N \ll D_2(L(\alpha, N)) \le D_{\infty} (L(\alpha, N)) \ll \log N , \] i.e.\ both $S(\alpha, N)$ and $L(\alpha, N)$ have optimal $L^{\infty}$ discrepancy, but neither has optimal $L^2$ discrepancy. Indeed, it is easy to construct\footnote{E.g.\ let $a_k=k$ if $k$ is a power of $2$, and $a_k=1$ otherwise.} a sequence of positive integers $a_k$ such that $K^{-1} \sum_{k=1}^K a_k \ll 1$ but $\sum_{k=1}^K a_k^2 \gg K^2$. Consider now the case of a rational $\alpha$. For the sake of simplicity, we will always assume that $N$ is the denominator of $\alpha$. That is, given a reduced fraction $p/q$, we study the $q$-element set \[ L(p/q,q) = \left\{ \left( \left\{ \frac{np}{q} \right\}, \frac{n}{q} \right) \in [0,1)^2 \, : \, 0 \le n \le q-1 \right\} , \] and the $2q$-element set \[ S(p/q,q) = \left\{ \left( \left\{ \pm \frac{np}{q} \right\}, \frac{n}{q} \right) \in [0,1)^2 \, : \, 0 \le n \le q-1 \right\} . \] The characterization of all rationals for which the $L^2$ discrepancy is optimal is exactly the same as in the irrational case. \begin{thm}\label{optimalrationaltheorem} Let $p/q=[a_0;a_1,\dots, a_r]$ be a reduced rational. 
We have \[ \begin{split} D_2(S(p/q,q)) \ll \sqrt{\log q} \,\, &\Longleftrightarrow \,\, \frac{1}{r} \sum_{k=1}^r a_k^2 \ll 1, \\ D_2(L(p/q,q)) \ll \sqrt{\log q} \,\, &\Longleftrightarrow \,\, \frac{1}{r} \sum_{k=1}^r a_k^2 \ll 1 \textrm{ and } \frac{1}{\sqrt{r}} \left| \sum_{k=1}^r (-1)^k a_k \right| \ll 1. \end{split} \] \end{thm} As an analogue of the metric results on typical values of $\alpha$ in the sense of the Lebesgue measure above, we also study the $L^2$ discrepancy for typical values of rationals. In this case, ``typical'' means choosing a reduced fraction $p/q$ randomly from the set of all reduced rationals with bounded denominator. \begin{thm}\label{rationallimitdistributiontheorem} Let $F_Q$ denote the set of all reduced fractions in $[0,1]$ with denominator at most $Q$. For any $Q \ge 2$, \[ \sup_{t \ge 0} \left| \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, 5 \pi^3 \frac{D_2^2 (S(p/q,q))}{\log^2 q} \le t \right\} \right| - \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x \right| \ll \frac{1}{(\log Q)^{1/2}} \] with a universal implied constant. \end{thm} \noindent We conjecture that a similar result holds for the unsymmetrized lattice as well, i.e.\ if $p/q$ is chosen randomly from $F_Q$, then $D_2^2(L(p/q,q))/\log^2 q$ has a nondegenerate limit distribution as $Q \to \infty$. In Section \ref{parsevalsection}, we derive an explicit formula for $D_2(S(\alpha, N))$ and $D_2(L(\alpha, N))$ in terms of the partial quotients of $\alpha$, see Propositions \ref{parsevalprop} and \ref{simpleparsevalprop}. Theorems \ref{optimalirrationaltheorem}, \ref{quadraticirrationaltheorem} and \ref{optimalrationaltheorem} are proved in Section \ref{optimalsubsection}. In Section \ref{typicalirrationalsection}, we show how Theorems \ref{aeasymptotictheorem} and \ref{irrationallimitdistributiontheorem} follow from classical results on the metric theory of continued fractions and $\psi$-mixing random variables. The proof of Theorem \ref{rationallimitdistributiontheorem} in Section \ref{typicalrationalsection}, on the other hand, relies on recent results of Bettin and Drappeau \cite{BD} on the statistics of partial quotients of random rationals. \section{$L^2$ discrepancy via the Parseval formula}\label{parsevalsection} \subsection{The main estimates} We remind that $\alpha = [a_0;a_1,a_2, \dots]$ is the (finite or infinite) continued fraction expansion of a real number $\alpha$, and $p_k/q_k=[a_0;a_1,\dots, a_k]$ denotes its convergents. For the rest of the paper, we also use the notation \[ T_n=\sum_{\ell=0}^n \left( \frac{1}{2} - \{ \ell \alpha \} \right) \quad \textrm{and} \quad E_N=\frac{1}{N}\sum_{n=0}^{N-1} T_n . \] For the sake of readability, $a=b \pm c$ denotes $|a-b| \le c$, and $\zeta$ is the Riemann zeta function. Our main tool is an evaluation of the $L^2$ discrepancy up to a small error, based on the Parseval formula. This method goes back to Davenport \cite{DA}, and more recently has also been used in \cite{BI,BTY1,BTY2,HKP,RS}. We follow the steps in our previous paper \cite{BO2}, where we considered irrationals whose sequence of partial quotients is reasonably well-behaved (e.g.\ bounded, or increasing at a regular rate such as for Euler's number). Here we shall need a more refined analysis in order to study arbitrary reals without any assumption on the partial quotients. 
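Although no numerics are needed anywhere in the proofs, the quantities introduced above are easy to experiment with. The following Python sketch (an illustration only; the function names are ad hoc, and floating point arithmetic restricts it to moderate $N$) computes the convergents $p_k/q_k$, the sums $T_n$ and $E_N$, and the exact value of $D_2^2$ for $L(\alpha,N)$ and $S(\alpha,N)$ directly from the definition, by expanding the $L^2$ integral termwise (a Warnock-type formula); it also evaluates the Diophantine sum appearing in Proposition \ref{parsevalprop} below, so that the two can be compared on small examples.
\begin{verbatim}
import math

def convergents(a):
    # convergents p_k/q_k of the continued fraction [a_0; a_1, a_2, ...]
    p0, q0, p1, q1 = 0, 1, 1, 0
    out = []
    for ak in a:
        p0, q0, p1, q1 = p1, q1, ak * p1 + p0, ak * q1 + q0
        out.append((p1, q1))
    return out

def frac(x):
    return x - math.floor(x)

def T(alpha, n):
    # T_n = sum_{l=0}^{n} (1/2 - {l*alpha})
    return sum(0.5 - frac(l * alpha) for l in range(n + 1))

def E(alpha, N):
    return sum(T(alpha, n) for n in range(N)) / N

def d2sq(pts):
    # exact squared L^2 discrepancy of a finite point (multi)set in [0,1)^2:
    # int_0^1 int_0^1 (#{(u,v) in pts : u < x, v < y} - |pts|*x*y)^2 dx dy,
    # expanded termwise (Warnock-type formula), O(|pts|^2) operations
    M = len(pts)
    s1 = sum((1 - max(u1, u2)) * (1 - max(v1, v2))
             for (u1, v1) in pts for (u2, v2) in pts)
    s2 = sum((1 - u * u) * (1 - v * v) for (u, v) in pts)
    return s1 - 0.5 * M * s2 + M * M / 9.0

def dist_to_int(x):
    # ||x||, the distance from x to the nearest integer
    return abs(x - round(x))

alpha = math.sqrt(2) - 1          # = [0; 2, 2, 2, ...]
N = 200
L_pts = [(frac(n * alpha), n / N) for n in range(N)]
S_pts = L_pts + [(frac(-n * alpha), n / N) for n in range(N)]
qs = [q for (_, q) in convergents([0] + [2] * 20)]
q_prev = max(q for q in qs if q <= N)    # q_{K-1} with q_{K-1} <= N <= q_K
main = sum(1.0 / (4 * math.pi**4 * m * m * dist_to_int(m * alpha)**2)
           for m in range(1, q_prev))
print(E(alpha, N), d2sq(L_pts), d2sq(S_pts), main)
\end{verbatim}
The quadratic-time pair sum in \texttt{d2sq} keeps the code short; for larger $N$ one would instead use one of the standard $O(M \log M)$ algorithms for the $L^2$ discrepancy.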
\begin{prop}\label{parsevalprop} For any $q_{K-1} \le N \le q_K$, we have \[ D_2^2 (S(\alpha, N)) = \sum_{m=1}^{q_{K-1}-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} + \xi_S(\alpha, N) \pm \left( \sum_{k=0}^{K-1} \frac{a_{k+1}}{2q_k} + \frac{\zeta(3)}{16 \pi^4 N} \sum_{k=0}^{K-2} (a_{k+1}+2)^3 q_k + 6.28 \right) \] with some $\xi_S(\alpha, N)$ which satisfies both $0 \le \xi_S(\alpha, N) \le \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{2 \pi^4 m^2 \| m \alpha \|^2}$ and \[ \xi_S(\alpha, N) = \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \pm \left( \frac{\zeta(3)}{16 \pi^4 N} (a_K+2)^3 q_{K-1} +0.07 \right) . \] Similarly, for any $q_{K-1} \le N \le q_K$, we have \[ \begin{split} D_2^2(L(\alpha, N)) = &\frac{1}{N} \sum_{n=0}^{N-1} \left( T_n^2 + \frac{1}{2}T_n \right) + \left( 1-\frac{1}{2N} \right) \sum_{m=1}^{q_{K-1}-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \\ &+\xi_L(\alpha, N) \pm \left( \sum_{k=0}^{K-1} \frac{a_{k+1}}{8 q_k} + \frac{\zeta(3)}{16 \pi^4 N} \sum_{k=0}^{K-2} (a_{k+1}+2)^3 q_k + 2.78 \right) \end{split} \] with some $\xi_L(\alpha, N)$ which satisfies both $0 \le \xi_L(\alpha, N) \le \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{2 \pi^4 m^2 \| m \alpha \|^2}$ and \[ \xi_L(\alpha, N) = \left( 1-\frac{1}{2N} \right) \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \pm \frac{\zeta(3)}{16 \pi^4 N} (a_K+2)^3 q_{K-1} . \] \end{prop} \noindent We also prove a simpler form which is sharp up to a constant factor. \begin{prop}\label{simpleparsevalprop} For any $q_{K-1} \le N \le q_K$, we have $D_2^2 (S(\alpha, N)) \ll \sum_{k=1}^K a_k^2$. For $N=q_K$, we also have $D_2^2 (S(\alpha, q_K)) \gg \sum_{k=1}^K a_k^2$, and \[ \sum_{k=1}^K a_k^2 + \left( \sum_{k=1}^K (-1)^k a_k \right)^2 \ll D_2^2 (L(\alpha, q_K)) \ll \sum_{k=1}^K a_k^2 + \left( \sum_{k=1}^K (-1)^k a_k \right)^2 . \] The implied constants are universal. \end{prop} \noindent We postpone the proofs to Sections \ref{section2.3} and \ref{section2.4}, and now comment on the main terms. The contribution of the sums $T_n$ can be written as \[ \frac{1}{N} \sum_{n=0}^{N-1} \left( T_n^2 + \frac{1}{2} T_n \right) = \frac{1}{N} \sum_{n=0}^{N-1} (T_n-E_N)^2 + E_N^2 + \frac{1}{2} E_N . \] Observing a connection with Dedekind sums, Beck showed \cite[p.\ 79 and p.\ 91]{BE1} (see also \cite{SCH}) that for any $q_{K-1} \le N \le q_K$, the ``expected value'' $E_N$ is \begin{equation}\label{EN} E_N = \frac{1}{12} \sum_{k=1}^K (-1)^k a_k +O \left( \max_{1 \le k \le K} a_k \right) . \end{equation} For $N=q_K$, the error term can be improved to \begin{equation}\label{EqK} E_{q_K} = \frac{1}{12} \sum_{k=1}^K (-1)^k a_k +O(1) . \end{equation} Both implied constants are universal. Generalizing results of Beck, in a recent paper \cite{BO1} we proved that if $a_k \le c k^d$ with some constants $c>0$ and $d \ge 0$, then for any $q_{K-1} \le N \le q_K$, the ``variance'' is \begin{equation}\label{Tnvariance} \frac{1}{N} \sum_{n=0}^{N-1} (T_n-E_N)^2 = \sum_{m=1}^{q_K-1} \frac{1}{8 \pi^4 m^2 \| m \alpha \|^2} +O \left( \max_{|k-K| \ll \log K} a_k^2 \cdot (\log \log N)^4 \right) \end{equation} with implied constants depending only on $c$ and $d$. See also Lemma \ref{Tnlemma} below. Finally, we will need two different evaluations of the Diophantine sum appearing in Proposition \ref{parsevalprop}. On the one hand, for general $\alpha$ we have \cite[p.\ 110]{BO3}, \cite{BO2} \begin{equation}\label{diophantineevaluation} \sum_{m=1}^{q_K-1} \frac{1}{m^2 \| m \alpha \|^2} = \frac{\pi^4}{90} \sum_{k=1}^K a_k^2 \pm 152 \sum_{k=1}^K a_k . 
\end{equation} On the other hand, Beck \cite[p.\ 176]{BE1} proved that if $\alpha$ is quadratic irrational, then for any $M \ge 1$, \begin{equation}\label{diophantinesumbeck} \sum_{m=1}^M \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} = c(\alpha) \log M +O(1) \end{equation} with some constant $c(\alpha)>0$ and an implied constant depending only on $\alpha$. \subsection{Optimal lattices}\label{optimalsubsection} In this section, we deduce Theorems \ref{optimalirrationaltheorem}, \ref{quadraticirrationaltheorem} and \ref{optimalrationaltheorem} from Propositions \ref{parsevalprop} and \ref{simpleparsevalprop}. \begin{proof}[Proof of Theorem \ref{optimalirrationaltheorem}] Consider first the symmetrized lattice $S(\alpha, N)$. We will show the implications \[ \frac{1}{K} \sum_{k=1}^K a_k^2 \ll 1 \,\, \Longrightarrow \,\, D_2(S(\alpha, N)) \ll \sqrt{\log N} \,\, \Longrightarrow \,\, D_2(S(\alpha, q_K)) \ll \sqrt{\log q_K} \,\, \Longrightarrow \,\, \frac{1}{K} \sum_{k=1}^K a_k^2 \ll 1 . \] Assume that $K^{-1} \sum_{k=1}^K a_k^2 \ll 1$ as $K \to \infty$. By Proposition \ref{simpleparsevalprop}, for any $q_{K-1} \le N \le q_K$ we have $D_2^2 (S(\alpha, N)) \ll \sum_{k=1}^K a_k^2 \ll K \ll \log N$, as claimed. The second implication is trivial. Next, assume that $D_2 (S(\alpha, N)) \ll \sqrt{\log N}$ as $N \to \infty$. By Proposition \ref{simpleparsevalprop}, for $N=q_K$ we have \[ \sum_{k=1}^K a_k^2 \ll D_2^2 (S(\alpha, q_K)) \ll \log q_K \le \sum_{k=1}^K \log (a_k+1) \ll \sum_{k=1}^K a_k \le \sqrt{K \sum_{k=1}^K a_k^2}, \] and the claim follows. This finishes the proof of the equivalence for $S(\alpha, N)$. Consider now the unsymmetrized lattice $L(\alpha, q_K)$. Assume that $K^{-1} \sum_{k=1}^K a_k^2 \ll 1$ and $K^{-1/2} \left| \sum_{k=1}^K (-1)^k a_k \right| \ll 1$ as $K \to \infty$. By Proposition \ref{simpleparsevalprop}, for $N=q_K$ we have \[ D_2^2 (L(\alpha, q_K)) \ll \sum_{k=1}^K a_k^2 + \left( \sum_{k=1}^K (-1)^k a_k \right)^2 \ll K \ll \log q_K, \] as claimed. Next, assume that $D_2(L(\alpha, q_K)) \ll \sqrt{\log q_K}$ as $K \to \infty$. By Proposition \ref{simpleparsevalprop}, for $N=q_K$ we have \[ \sum_{k=1}^K a_k^2 + \left( \sum_{k=1}^K (-1)^k a_k \right)^2 \ll D_2^2(L(\alpha, q_K)) \ll \log q_K . \] Hence both $\sum_{k=1}^K a_k^2 \ll \log q_K$ and $\left( \sum_{k=1}^K (-1)^k a_k \right)^2 \ll \log q_K$. As above, the former estimate shows that $K^{-1} \sum_{k=1}^K a_k^2 \ll 1$. In particular, $\log q_K \le \sum_{k=1}^K \log (a_k+1) \ll \sum_{k=1}^K a_k^2 \ll K$, therefore $\left( \sum_{k=1}^K (-1)^k a_k \right)^2 \ll K$, as claimed. This finishes the proof of the equivalence for $L(\alpha, q_K)$. \end{proof} \begin{proof}[Proof of Theorem \ref{optimalrationaltheorem}] As Proposition \ref{simpleparsevalprop} applies to both rationals and irrationals, the proof is identical to that of Theorem \ref{optimalirrationaltheorem}. \end{proof} \begin{proof}[Proof of Theorem \ref{quadraticirrationaltheorem}] Let $\alpha$ be a quadratic irrational. By Proposition \ref{parsevalprop} and formula \eqref{diophantinesumbeck}, for any $q_{K-1} \le N \le q_K$, \[ D_2^2 (S(\alpha, N)) = \sum_{m=1}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} +O(1)=c(\alpha) \log N +O(1) , \] as claimed. Using also formula \eqref{Tnvariance}, we similarly get \[ D_2^2 (L(\alpha, N)) = \frac{3}{2} c(\alpha) \log N + E_N^2+\frac{1}{2} E_N +O((\log \log N)^4). \] Formula \eqref{EN} shows that here $E_N= \frac{A(\alpha)}{12} K+O(1)=\frac{A(\alpha)}{12 \Lambda(\alpha)} \log N +O(1)$, and the claim follows. 
\end{proof} \subsection{Proof of Proposition \ref{parsevalprop}}\label{section2.3} \begin{lem}\label{diophantinelemma}\hspace{1mm} \begin{enumerate} \item[(i)] For any $K \ge 1$, \[ \sum_{m=1}^{q_K-1} \frac{1}{\pi^2 m^2 \| m \alpha \|} \le \sum_{k=0}^{K-1} \frac{a_{k+1}}{2 q_k} + 3.12. \] \item[(ii)] For any $K \ge 1$ and $n \ge 0$, \[ \sum_{m=q_K}^{\infty} \frac{1}{2 \pi^2 m^2} \min \left\{ \frac{1}{4 \| m \alpha \|^2}, n^2 \right\} \le 1.12 \frac{n}{q_K} + 0.61 \frac{n^2}{q_K^2}. \] \item[(iii)] For any $K \ge 1$ and $N \ge q_{K-1}$, \[ \sum_{m=1}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \min \left\{ \frac{1}{4 N \| 2 m \alpha \|}, 1 \right\} \le \frac{\zeta(3)}{16 \pi^4 N} \sum_{k=0}^{K-1} (a_{k+1}+2)^3 q_k+ 0.07. \] \end{enumerate} \end{lem} \begin{proof} The proof of all three claims is based on the following simple observations. Let $k \ge 1$, or $k=0$ and $a_1>1$. For any integer $a \ge 1$ let $J_{k,a}=[aq_k, (a+1)q_k) \cap [q_k,q_{k+1})$ be a (possibly empty) index set. Let $\delta_k=q_k \alpha -p_k$, and recall from the general theory of continued fractions that $1/(q_{k+1}+q_k) \le |\delta_k|=\| q_k \alpha \| \le 1/q_{k+1}$. For any integer $m \in J_{k,a}$, we have $m \alpha = mp_k/q_k + m \delta_k/q_k$, and here the second term is negligible as $m|\delta_k|/q_k<1/q_k$. Since $p_k$ and $q_k$ are relatively prime, as $m$ runs in the index set $J_{k,a}$, the numbers $mp_k$ attain each mod $q_k$ residue class at most once. If $mp_k \not\equiv 0, \pm 1 \pmod{q_k}$, then \[ \| m \alpha \| = \left\| \frac{mp_k}{q_k} + \frac{m \delta_k}{q_k} \right\| \ge \left\| \frac{mp_k}{q_k} \right\| - \frac{1}{q_k} \ge \frac{1}{2} \left\| \frac{mp_k}{q_k} \right\| . \] Therefore for any nondecreasing function $f: [2,\infty ) \to [0,\infty )$, we have \begin{equation}\label{fdiophantinesum} \sum_{m \in J_{k,a}} f \left( \frac{1}{\| m \alpha \|} \right) \le 3 f \left( \frac{1}{\| q_k \alpha \|} \right) + \sum_{j=2}^{q_k-2} f \left( \frac{2}{\| j/q_k \|} \right) \le 3f \left( \frac{1}{\| q_k \alpha \|} \right) + 2 \sum_{2 \le j \le q_k/2} f \left( \frac{2q_k}{j} \right) . \end{equation} Note that $3 f(1/\| q_k \alpha \| )$ is an upper bound to the contribution of the three terms for which $m p_k \equiv 0, \pm 1 \pmod{q_k}$. We also have the simpler estimate \begin{equation}\label{fdiophantinesumsimple} \sum_{1 \le m < q_{k+1}} f \left( \frac{1}{\| m \alpha \|} \right) \le 2 \sum_{1 \le j \le q_{k+1}/2} f \left( \frac{1}{j \| q_k \alpha \|} \right) . \end{equation} Indeed, consider the points $m \alpha \pmod{1}$, $1 \le m < q_{k+1}$ and the intervals $H_j=[j \| q_k \alpha \|, (j+1)\| q_k \alpha \|)$, $j \ge 1$ and $H_j = ((j-1) \| q \alpha \|, j \| q_k \alpha \| ]$, $j \le -1$. Since $\| (m_1-m_2) \alpha \| \ge \| q_k \alpha \|$ for any $m_1, m_2 \in [1,q_{k+1})$, $m_1 \neq m_2$, each interval $H_j$ contains at most one point $m \alpha \pmod{1}$, and \eqref{fdiophantinesumsimple} follows. \noindent\textbf{(i)} Estimate \eqref{fdiophantinesum} yields \[ \sum_{m \in J_{k,a}} \frac{1}{\pi^2 m^2 \| m \alpha \|} \le \frac{1}{\pi^2 a^2 q_k^2} \left( \frac{3}{\| q_k \alpha \|} + 2 \sum_{2 \le j \le q_k/2} \frac{2q_k}{j} \right) \le \frac{1}{\pi^2 a^2 q_k^2} \left( 3(q_{k+1}+q_k) + 4 q_k \log \frac{q_k}{2} \right) . 
\] Summing over $a \ge 1$ and\footnote{If $a_1=1$, then the term $k=0$ can be removed.} $0 \le k \le K-1$ leads to \[ \begin{split} \sum_{m=1}^{q_K-1} \frac{1}{\pi^2 m^2 \| m \alpha \|} &\le \sum_{k=0}^{K-1} \frac{3 q_{k+1} + 3q_k + 4 q_k \log (q_k/2)}{6 q_k^2} \\ &\le \sum_{k=0}^{K-1} \frac{a_{k+1}}{2 q_k} + \sum_{k=0}^{K-1} \frac{3+2\log (q_k/2)}{3 q_k} \\ &\le \sum_{k=0}^{K-1} \frac{a_{k+1}}{2 q_k} + \sum_{k=0}^{\infty} \frac{3+2 \log (F_{k+1}/2)}{3 F_{k+1}} , \end{split} \] where $F_{k+1}$ are the Fibonacci numbers. The numerical value of the series in the previous line is $3.1195\dots$, as claimed. \noindent\textbf{(ii)} Estimate \eqref{fdiophantinesum} yields \[ \sum_{m \in J_{k,a}} \frac{1}{2 \pi^2 m^2} \min \left\{ \frac{1}{4 \| m \alpha \|^2}, n^2 \right\} \le \frac{1}{2 \pi^2 a^2 q_k^2} \left( 3n^2 + 2 \sum_{j=2}^{\infty} \min \left\{ \frac{q_k^2}{j^2} ,n^2 \right\} \right) \le \frac{1}{2 \pi^2 a^2 q_k^2} \left( 3n^2 + 4 n q_k \right) . \] Note that the contribution of the terms $2 \le j \le \lfloor q_k/n \rfloor +1$ and $j \ge \lfloor q_k/n \rfloor +2$ is at most $n q_k$ each. Summing over $a \ge 1$ and $k \ge K$ leads to \[ \sum_{m=q_K}^{\infty} \frac{1}{2 \pi^2 m^2} \min \left\{ \frac{1}{4 \| m \alpha \|^2}, n^2 \right\} \le \sum_{k=K}^{\infty} \frac{3n^2 + 4nq_k}{12 q_k^2} . \] From the recursion satisfied by $q_k$ one readily sees that $q_{K+\ell} \ge F_{\ell+1} q_K$ for all $\ell \ge 0$, hence the right hand side of the previous formula is at most $c_1 n/q_K + c_2n^2/q_K^2$ with $c_1=\sum_{\ell =0}^{\infty} 1/(3F_{\ell +1}) = 1.1199\dots$ and $c_2=\sum_{\ell =0}^{\infty} 1/(4F_{\ell +1}^2)=0.6065\dots$, as claimed. \noindent\textbf{(iii)} The contribution of all $m$ such that $\| m \alpha \| > 1/4$ is negligible: \[ \sum_{\substack{1 \le m \le q_K-1 \\ \| m \alpha \| > 1/4}} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \min \left\{ \frac{1}{4 N \| 2 m \alpha \|}, 1 \right\} < \sum_{m=1}^{\infty} \frac{4}{\pi^4 m^2} = \frac{2}{3 \pi^2} . \] On the other hand, $\| m \alpha \| \le 1/4$ implies $\| 2m \alpha \| = 2 \| m \alpha \|$, hence the contribution of all such terms is \[ \sum_{\substack{1 \le m \le q_K-1 \\ \| m \alpha \| \le 1/4}} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \min \left\{ \frac{1}{4 N \| 2 m \alpha \|}, 1 \right\} \le \sum_{m=1}^{q_K-1} \frac{1}{32 \pi^4 N m^2 \| m \alpha \|^3} . \] Estimate \eqref{fdiophantinesumsimple} gives \[ \sum_{q_k \le m < q_{k+1}} \frac{1}{32 \pi^4 N m^2 \| m \alpha \|^3} \le \frac{1}{16 \pi^4 N q_k^2} \sum_{j=1}^{\infty} \frac{1}{j^3 \| q_k \alpha \|^3} \le \frac{\zeta(3) (a_{k+1}+2)^3 q_k}{16 \pi^4 N} . \] Summing over $0 \le k \le K-1$, we thus obtain \[ \sum_{m=1}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \min \left\{ \frac{1}{4 N \| 2 m \alpha \|}, 1 \right\} \le \sum_{k=0}^{K-1} \frac{\zeta(3) (a_{k+1}+2)^3 q_k}{16 \pi^4 N} + \frac{2}{3 \pi^2} . \] Here $2/(3 \pi^2)=0.06754\dots$, as claimed. \end{proof} \begin{proof}[Proof of Proposition \ref{parsevalprop}] We give a detailed proof for the symmetrized lattice $S(\alpha, N)$, and then indicate at the end how to modify the proof for the unsymmetrized lattice $L(\alpha, N)$. Let $B(x,y)=|S(\alpha, N) \cap ([0,x) \times [0,y))|$ denote the number of points of $S(\alpha, N)$ which fall into the box $[0,x) \times [0,y)$. 
Integrating on the strips $[0,1) \times [n/N,(n+1)/N)$ separately leads to \[ D_2^2 (S(\alpha, N)) = \sum_{n=0}^{N-1} \int_0^1 \int_{\frac{n}{N}}^{\frac{n+1}{N}} \left( B(x,y) - 2N xy \right)^2 \, \mathrm{d}y \, \mathrm{d}x = M+R+\frac{4}{9} \] with \[ \begin{split} M&:=\frac{1}{N} \sum_{n=0}^{N-1} \int_0^1 \left( B \left( x, \frac{n+1}{N} \right) - 2(n+1)x \right)^2 \, \mathrm{d}x, \\ R&:= \frac{2}{N} \sum_{n=0}^{N-1} \int_0^1 \left( B \left( x, \frac{n+1}{N} \right) - 2(n+1)x \right) x \, \mathrm{d}x . \end{split} \] The function \[ B \left( x, \frac{n+1}{N} \right) -2(n+1)x = \sum_{\ell =0}^{n} \left( I_{[0,x)}(\{ \ell \alpha \}) + I_{[0,x)}(\{ -\ell \alpha \}) -2x \right) \] is mean zero, and has Fourier coefficients \[ \begin{split} \int_0^1 \left( B \left( x, \frac{n+1}{N} \right) -2(n+1)x \right) e^{-2 \pi i m x} \, \mathrm{d}x &= \sum_{\ell =0}^n \frac{\cos (2 \ell m \pi \alpha)}{\pi i m} \\ &= \frac{1}{2 \pi i m} \left( \frac{\sin ((2n+1)m \pi \alpha )}{\sin (m \pi \alpha )} + 1 \right) . \end{split} \] The Fourier coefficients of $x$ are $\int_0^1 x e^{-2\pi i m x} \, \mathrm{d}x=-1/(2 \pi i m)$, thus by the Parseval formula we have \[ R = \frac{2}{N} \sum_{n=0}^{N-1} 2 \sum_{m=1}^{\infty} \frac{1}{2 \pi i m} \left( \frac{\sin ((2n+1)m \pi \alpha )}{\sin (m \pi \alpha)} + 1 \right) \cdot \frac{-1}{2 \pi i m} = \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{\infty} \frac{\sin ((2n+1) m \pi \alpha )}{\pi^2 m^2 \sin (m \pi \alpha)} +\frac{1}{6} . \] The Parseval formula similarly gives \[ \begin{split} M &= \frac{1}{N} \sum_{n=0}^{N-1} 2 \sum_{m=1}^{\infty} \frac{1}{4 \pi^2 m^2} \left( \frac{\sin((2n+1)m \pi \alpha)}{\sin (m \pi \alpha)} + 1 \right)^2 \\ &= \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{\infty} \frac{\sin^2((2n+1)m \pi \alpha)}{2\pi^2 m^2 \sin^2 (m \pi \alpha)} + \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{\infty} \frac{\sin ((2n+1)m \pi \alpha )}{\pi^2 m^2 \sin (m \pi \alpha )} + \frac{1}{12} . \end{split} \] We can estimate the total error in the previous two formulas using \[ \left| \frac{\sin ((2n+1)m \pi \alpha )}{\sin (m \pi \alpha)} \right| \le \min \left\{ \frac{1}{2 \| m \alpha \|}, 2n+1 \right\} \] and Lemma \ref{diophantinelemma} (i) as \[ \begin{split} \bigg| \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{\infty} \frac{2 \sin ((2n+1)m \pi \alpha )}{\pi^2 m^2 \sin (m \pi \alpha )} \bigg| &\le \frac{1}{N} \sum_{n=0}^{N-1} \left( \sum_{m=1}^{q_K-1} \frac{1}{\pi^2 m^2 \| m \alpha \|} + \sum_{m=q_K}^{\infty} \frac{2(2n+1)}{\pi^2 m^2} \right) \\ &\le \sum_{k=0}^{K-1} \frac{a_{k+1}}{2q_k} +3.12+ \frac{4N}{\pi^2 q_K} . \end{split} \] By the assumption $N \le q_K$ and the fact $3.12+4/\pi^2+4/9+1/6+1/12 <4.22$, we thus obtain \[ D_2^2 (S(\alpha, N)) = \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{\infty} \frac{\sin^2((2n+1)m \pi \alpha)}{2\pi^2 m^2 \sin^2 (m \pi \alpha)} \pm \left( \sum_{k=0}^{K-1} \frac{a_{k+1}}{2q_k} + 4.22 \right) . \] Lemma \ref{diophantinelemma} (ii) estimates the tail of the infinite series in the previous formula as \[ \sum_{m=q_K}^{\infty} \frac{\sin^2((2n+1)m \pi \alpha)}{2\pi^2 m^2 \sin^2 (m \pi \alpha)} \le \sum_{m=q_K}^{\infty} \frac{1}{2 \pi^2 m^2} \min \left\{ \frac{1}{4 \| m \alpha \|^2}, (2n+1)^2 \right\} \le 1.12 \frac{2n+1}{q_K} + 0.61 \frac{(2n+1)^2}{q_K^2} . 
\] By the assumption $N \le q_K$ and the facts $\sum_{n=0}^{N-1}(2n+1)^2 \le (4/3)N^3$ and $4.22+1.12+(4/3)\cdot 0.61<6.16$, we immediately get \[ D_2^2 (S(\alpha, N)) = \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{q_K-1} \frac{\sin^2((2n+1)m \pi \alpha)}{2\pi^2 m^2 \sin^2 (m \pi \alpha)} \pm \left( \sum_{k=0}^{K-1} \frac{a_{k+1}}{2q_k} + 6.16 \right) . \] Elementary calculations show that the function $1/\sin^2 (\pi x) - 1/(\pi^2 \| x \|^2)$ is increasing on $(0,1/2]$, hence $1/(\pi^2 \| x \|^2) \le 1/\sin^2 (\pi x) \le 1/(\pi^2 \| x \|^2) + 1-4/\pi^2$ for all $x$. The error of replacing $\sin^2 (m \pi \alpha)$ by $\pi^2 \| m \alpha \|^2$ in the denominator of the previous formula is thus at most \[ \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{q_K-1} \frac{\sin^2 ((2n+1)m \pi \alpha) (1-4/\pi^2)}{2 \pi^2 m^2} \le \sum_{m=1}^{\infty} \frac{1-4/\pi^2}{2 \pi^2 m^2} = \frac{1-4/\pi^2}{12} . \] Since $6.16+(1-4/\pi^2)/12 <6.21$, we obtain \begin{equation}\label{dsquareformula} D_2^2 (S(\alpha, N)) = \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{q_{K-1}-1} \frac{\sin^2((2n+1)m \pi \alpha)}{2\pi^4 m^2 \| m \alpha \|^2} + \xi_S(\alpha, N) \pm \left( \sum_{k=0}^{K-1} \frac{a_{k+1}}{2q_k} + 6.21 \right) , \end{equation} where we define \[ \xi_S(\alpha, N) := \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=q_{K-1}}^{q_K-1} \frac{\sin^2((2n+1)m \pi \alpha)}{2\pi^4 m^2 \| m \alpha \|^2} . \] Using the trigonometric identity \[ \frac{1}{N} \sum_{n=0}^{N-1} \sin^2 ((2n+1)x) = \frac{1}{2} - \frac{\sin(4Nx)}{4N\sin(2x)}, \] the first term in \eqref{dsquareformula} simplifies to \[ \sum_{m=1}^{q_{K-1}-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} - \sum_{m=1}^{q_{K-1}-1} \frac{\sin (4Nm \pi \alpha)}{8 \pi^4 N m^2 \| m \alpha \|^2 \sin(2m\pi \alpha )} . \] Here second term can be estimated using Lemma \ref{diophantinelemma} (iii) as \[ \begin{split} \left| \sum_{m=1}^{q_{K-1}-1} \frac{\sin (4Nm \pi \alpha)}{8 \pi^4 N m^2 \| m \alpha \|^2 \sin(2m\pi \alpha )} \right| &\le \sum_{m=1}^{q_{K-1}-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \min \left\{ \frac{1}{4N \| 2m \alpha \|} , 1 \right\} \\ &\le \frac{\zeta(3)}{16 \pi^4 N} \sum_{k=0}^{K-2} (a_{k+1}+2)^3 q_k + 0.07. \end{split} \] Therefore \eqref{dsquareformula} simplifies to \[ D_2^2 (S(\alpha, N)) = \sum_{m=1}^{q_{K-1}-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} + \xi_S(\alpha, N) \pm \left( \sum_{k=0}^{K-1} \frac{a_{k+1}}{2q_k} + \frac{\zeta(3)}{16 \pi^4 N} \sum_{k=0}^{K-2} (a_{k+1}+2)^3 q_k + 6.28 \right) , \] and it remains to prove the properties of $\xi_S(\alpha, N)$. Clearly, $0 \le \xi_S(\alpha, N) \le \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{2 \pi^4 m^2 \| m \alpha \|^2}$. On the other hand, repeating arguments from above and from Lemma \ref{diophantinelemma} (iii), we can also write \[ \begin{split} \xi_S(\alpha, N) &= \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} - \sum_{m=q_{K-1}}^{q_K-1} \frac{\sin (4Nm \pi \alpha)}{8 \pi^4 N m^2 \| m \alpha \|^2 \sin(2m\pi \alpha )} \\ &= \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \pm \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \min \left\{ \frac{1}{4N \| 2m \alpha \|}, 1 \right\} \\ &= \sum_{m=q_{K-1}}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} \pm \left( \frac{\zeta(3)}{16 \pi^4 N} (a_K+2)^3 q_{K-1} +0.07 \right) . \end{split} \] This finishes the proof for $S(\alpha, N)$. The proof for $L(\alpha, N)$ is entirely analogous. 
The only difference is that the number of points $B(x,y):=|L(\alpha, N) \cap ([0,x) \times [0,y))|$ which fall into the box $[0,x) \times [0,y)$ satisfies \[ B \left( x, \frac{n+1}{N} \right) - (n+1) x = \sum_{\ell =0}^n \left( I_{[0,x)}(\{ \ell \alpha \}) -x \right) , \] which is not a mean zero function. Its integral ($0$th Fourier coefficient) is \[ \int_0^1 \left( B \left( x, \frac{n+1}{N} \right) - (n+1) x \right) \, \mathrm{d} x = \sum_{\ell =0}^n \left( \frac{1}{2} - \{ \ell \alpha \} \right) =T_n, \] which introduces the extra terms $N^{-1} \sum_{n=0}^{N-1} T_n/2$ resp.\ $N^{-1} \sum_{n=0}^{N-1} T_n^2$ when the Parseval formula is applied to the analogue of $R$ resp.\ $M$ as above. For the convenience of the reader we mention that the analogue of formula \eqref{dsquareformula} is \[ \begin{split} D_2^2 (L(\alpha, N)) = &\frac{1}{N} \sum_{n=0}^{N-1} \left( T_n^2+\frac{1}{2} T_n \right) + \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=1}^{q_{K-1}-1} \frac{\sin^2((n+1)m \pi \alpha)}{2\pi^4 m^2 \| m \alpha \|^2} \\ &+ \xi_L(\alpha, N) \pm \left( \sum_{k=0}^{K-1} \frac{a_{k+1}}{8q_k} + 2.78 \right) , \end{split} \] where \[ \xi_L (\alpha, N) := \frac{1}{N} \sum_{n=0}^{N-1} \sum_{m=q_{K-1}}^{q_K-1} \frac{\sin^2((n+1)m \pi \alpha)}{2\pi^4 m^2 \| m \alpha \|^2} . \] \end{proof} \subsection{Proof of Proposition \ref{simpleparsevalprop}}\label{section2.4} The following lemma is a simpler form of formula \eqref{Tnvariance}, but it applies without any assumption on the partial quotients. As modifying the proof of \eqref{Tnvariance} is not entirely straightforward, we include the details. \begin{lem}\label{Tnlemma} For any $K \ge 1$, \[ \frac{1}{q_K} \sum_{n=0}^{q_K-1} (T_n-E_{q_K})^2 \ll \sum_{k=1}^K a_k^2 \] with a universal implied constant. \end{lem} \begin{proof} For the sake of readability, set $p=p_K$ and $q=q_K$. For any integer $1 \le \ell \le q-1$, we have $\| \ell p/q \| \ge 1/q$ and $|\ell \alpha - \ell p/q| \le q |\alpha -p/q| < 1/q$. Thus there is no integer between $\ell p/q$ and $\ell \alpha$, hence \[ \left| \{ \ell \alpha \} - \left\{ \frac{\ell p}{q} \right\} \right| \le \left| \ell \alpha - \frac{\ell p}{q} \right| < \frac{1}{q} . \] Consequently, for all $0 \le n \le q-1$, \[ T_n=\sum_{\ell=0}^n \left( \frac{1}{2} - \{ \ell \alpha \} \right) = \sum_{\ell =0}^n \left( \frac{1}{2} - \frac{1}{2q} - \left\{ \frac{\ell p}{q} \right\} \right) + O(1) . \] Introducing \[ T_n^* := \sum_{\ell =0}^n \left( \frac{1}{2} - \frac{1}{2q} - \left\{ \frac{\ell p}{q} \right\} \right) \quad \textrm{and} \quad E_q^* := \frac{1}{q} \sum_{n=0}^{q-1} T_n^*, \] we thus have $T_n -E_q = T_n^* - E_q^*+O(1)$. Therefore $q^{-1} \sum_{n=0}^{q-1} (T_n-E_q)^2 \ll q^{-1} \sum_{n=0}^{q-1} (T_n^*-E_q^*)^2 +1$, and it remains to estimate the latter. The rest of the proof is based on Fourier analysis on the finite cyclic group $\mathbb{Z}_q$, which we identify by $\{ 0,1,\dots, q-1 \}$. Elementary calculations show that \[ \sum_{x=0}^{q-1} \left( \frac{1}{2} - \frac{1}{2q} - \left\{ \frac{x}{q} \right\} \right) e^{-2 \pi i m x/q} = \left\{ \begin{array}{ll} 0 & \textrm{if } m=0, \\ 1/(1-e^{-2 \pi i m/q}) & \textrm{if } 1 \le m \le q-1 . \end{array} \right. \] Therefore by Fourier inversion on $\mathbb{Z}_q$, \[ \frac{1}{2} - \frac{1}{2q} - \left\{ \frac{x}{q} \right\} = \frac{1}{q} \sum_{m=1}^{q-1} \frac{e^{2 \pi i m x/q}}{1-e^{-2 \pi i m /q}} , \qquad x \in \mathbb{Z} . 
\] We can thus write $T_n^*$ as \[ T_n^* = \frac{1}{q} \sum_{m=1}^{q-1} \sum_{\ell =0}^n \frac{e^{2 \pi i m \ell p /q}}{1-e^{-2 \pi i m/q}} = \frac{1}{q} \sum_{m=1}^{q-1} \frac{1-e^{2 \pi i m (n+1)p/q}}{(1-e^{- 2 \pi i m/q}) (1-e^{2 \pi i m p/q})} . \] Letting $B=q^{-1} \sum_{m=1}^{q-1} 1/(1-e^{-2 \pi i m/q})(1-e^{2 \pi i mp/q})$, we have \[ \frac{1}{q} \sum_{n=0}^{q-1} (T_n^*-E_q^*)^2 \le \frac{1}{q} \sum_{n=0}^{q-1} |T_n^*-B|^2 = \frac{1}{q} \sum_{n=0}^{q-1} \frac{1}{q^2} \left| \sum_{m=1}^{q-1} \frac{e^{2 \pi i m (n+1)p/q}}{(1-e^{-2 \pi i m/q})(1-e^{2 \pi i m p/q})} \right|^2 . \] Expanding the square shows that here \[ \begin{split} \bigg| \sum_{m=1}^{q-1} &\frac{e^{2 \pi i m (n+1)p/q}}{(1-e^{-2 \pi i m/q})(1-e^{2 \pi i m p/q})} \bigg|^2 \\ & \hspace{20mm} = \sum_{m=1}^{q-1} \frac{1}{|1-e^{-2 \pi i m/q}|^2 |1-e^{2 \pi i m p/q}|^2} \\ & \hspace{25mm} + \sum_{\substack{m_1, m_2=1 \\ m_1 \neq m_2}}^{q-1} \frac{e^{2 \pi i (m_1-m_2)(n+1)p/q}}{(1-e^{- 2 \pi i m_1 /q})(1-e^{2 \pi i m_1 p/q})(1-e^{2 \pi i m_2 /q})(1-e^{-2 \pi i m_2 p/q})}. \end{split} \] As $\sum_{n=0}^{q-1} e^{2 \pi i (m_1-m_2)(n+1)p/q} =0$ for all $m_1 \neq m_2$, the contribution of the off-diagonal terms is zero. Formula \eqref{diophantineevaluation} thus leads to \[ \frac{1}{q} \sum_{n=0}^{q-1} (T_n^*-E_q^*)^2 \le \frac{1}{q^2} \sum_{m=1}^{q-1} \frac{1}{|1-e^{-2 \pi i m/q}|^2 |1-e^{2 \pi i mp/q}|^2} \ll \sum_{m=1}^{q-1} \frac{1}{m^2 \| mp/q \|^2} \ll \sum_{k=1}^K a_k^2, \] as claimed. \end{proof} \begin{proof}[Proof of Proposition \ref{simpleparsevalprop}] By Proposition \ref{parsevalprop}, for any $q_{K-1} \le N \le q_K$ we have \[ D_2^2(S(\alpha, N)) \ll \sum_{m=1}^{q_K-1} \frac{1}{m^2 \| m \alpha \|^2} + \sum_{k=0}^{K-1} \frac{a_{k+1}}{q_k} + \sum_{k=0}^{K-2} \frac{a_{k+1}^3 q_k}{N} . \] Here $a_{k+1}^3 q_k /N \le a_{k+1}^2$, hence formula \eqref{diophantineevaluation} yields $D_2^2(S(\alpha, N)) \ll \sum_{k=1}^K a_k^2$, as claimed. Using Lemma \ref{Tnlemma} and formula \eqref{EqK} we also deduce that for $N=q_K$, \[ \frac{1}{q_K} \sum_{n=0}^{q_K-1} \left( T_n^2+\frac{1}{2} T_n \right) = \frac{1}{q_K} \sum_{n=0}^{q_K-1} (T_n-E_{q_K})^2 + E_{q_K}^2 + \frac{1}{2} E_{q_K} \ll \sum_{k=1}^K a_k^2 + \left( \sum_{k=1}^K (-1)^k a_k \right)^2 , \] and the upper bound for $D_2^2(L(\alpha, q_K))$ follows. Next, we prove the lower bounds. Let $c>0$ resp.\ $C>0$ denote suitably small resp.\ large universal constants whose values change from line to line. By Proposition \ref{parsevalprop} and formula \eqref{diophantineevaluation}, for $N=q_K$ we have \[ \begin{split} D_2^2 (S(\alpha, q_K)) &\ge \sum_{m=1}^{q_K-1} \frac{1}{4 \pi^4 m^2 \| m \alpha \|^2} - \frac{\zeta (3)}{16 \pi^4 q_K} \sum_{k=0}^{K-1} (a_{k+1}+2)^3 q_k - C \sum_{k=1}^K a_k \\ &\ge \left( \frac{1}{360} - \frac{\zeta (3)}{16 \pi^4} \right) \sum_{k=1}^K a_k^2 -C \sum_{k=1}^K a_k . \end{split} \] The point is that $1/360>\zeta (3)/(16 \pi^4)$, i.e.\ the coefficient of $a_k^2$ is positive. The contribution of all $k$ such that $a_k \ll 1$ is $\ll K$, and for all other terms $a_k^2$ dominates $a_k$. Therefore $D_2^2 (S(\alpha, q_K)) \ge c \sum_{k=1}^K a_k^2 -CK$. On the other hand, by Roth's theorem we also have $D_2^2 (S(\alpha, q_K)) \gg \log q_K \gg K$. Taking a suitable weighted average of the previous two inequalities establishes the lower bound $D_2^2(S(\alpha, q_K)) \ge c \sum_{k=1}^K a_k^2$. 
From Proposition \ref{parsevalprop} we similarly deduce \[ D_2^2(L(\alpha, q_K)) \ge \frac{1}{q_K} \sum_{n=1}^{q_K-1} \left( T_n-E_{q_K} \right)^2 + E_{q_K}^2 + c \sum_{k=1}^K a_k^2 . \] Here $q_K^{-1} \sum_{n=0}^{q_K-1} (T_n-E_{q_K})^2 \ge 0$, and the lower bound for $D_2^2(L(\alpha, q_K))$ follows from formula \eqref{EqK}. \end{proof} \subsection{Proof of Remark \ref{LalphaNremark}} Let $\alpha$ be an irrational such that $a_k \ll \sqrt{k}/\log^2 k$. For any $q_{K-1} \le N \le q_K$ we then have $\max_{|k-K| \ll \log K} a_k^2 \cdot (\log \log N)^4 \ll K$, hence formulas \eqref{Tnvariance} and \eqref{diophantineevaluation} give \[ \frac{1}{N} \sum_{n=0}^{N-1} (T_n-E_N)^2 = \sum_{m=1}^{q_K-1} \frac{1}{8 \pi^4 m^2 \| m \alpha \|^2} +O(K) \ll \sum_{k=1}^K a_k^2 . \] Using this fact instead of Lemma \ref{Tnlemma} in the proof of Proposition \ref{simpleparsevalprop}, we deduce that $D_2^2(L(\alpha, N)) \ll \sum_{k=1}^K a_k^2 + (\sum_{k=1}^K (-1)^k a_k)^2$ holds for all $q_{K-1} \le N \le q_K$ (instead of only for $N=q_K$). In particular, the equivalence stated in Remark \ref{LalphaNremark} follows. \section{Typical irrationals}\label{typicalirrationalsection} \subsection{Asymptotics almost everywhere} Let us recall certain basic facts about the statistics of the partial quotients of a typical irrational number. Let $\varphi$ be a positive nondecreasing function on $(0,\infty)$, and let $A_K=\max_{1 \le k \le K} a_k$. It is well known that for a.e.\ $\alpha$ we have $\log q_k \sim \frac{\pi^2}{12 \log 2} k$, and that $a_k \le \varphi (k)$ for all but finitely many $k$ if and only if $\sum_{n=1}^{\infty} 1/\varphi (n)< \infty$. A classical result of Diamond and Vaaler \cite{DV} on trimmed sums states that for a.e.\ $\alpha$, \begin{equation}\label{diamondvaaler} \frac{\sum_{k=1}^K a_k - A_K}{K \log K} \to \frac{1}{\log 2} \qquad \textrm{as } K \to \infty . \end{equation} \begin{proof}[Proof of Theorem \ref{aeasymptotictheorem}] For any $N \ge 2$, let $K_N(\alpha)$ be the positive integer for which $q_{K_N(\alpha) -1} < N \le q_{K_N(\alpha )}$. In particular, for a.e.\ $\alpha$ we have $K_N(\alpha ) \sim \frac{12 \log 2}{\pi^2} \log N$, where $\frac{12 \log 2}{\pi^2}=0.8427\dots$. \noindent\textbf{(i)} Assume that $\sum_{n=1}^{\infty} 1/\varphi(n)<\infty$. As observed in the Introduction, by a classical discrepancy estimate for the sequence $\{ n \alpha \}$ \cite[p.\ 52]{DT}, we have \[ \begin{split} &D_2 (S(\alpha, N)) \ll D_{\infty} (L(\alpha, N)) \ll \sum_{k=1}^{K_N(\alpha)} a_k, \\ &D_2 (L(\alpha, N)) \ll D_{\infty} (L(\alpha, N)) \ll \sum_{k=1}^{K_N(\alpha)} a_k . \end{split} \] The asymptotic relation \eqref{diamondvaaler} of Diamond and Vaaler shows that for a.e.\ $\alpha$, \[ \begin{split} &D_2 (S(\alpha, N)) \le C \sum_{k=1}^{K_N(\alpha)} a_k = C A_{K_N(\alpha)} + O(K_N(\alpha) \log K_N(\alpha)) , \\ &D_2 (L(\alpha, N)) \le C \sum_{k=1}^{K_N(\alpha)} a_k = C A_{K_N(\alpha)} + O(K_N(\alpha) \log K_N(\alpha) ) \end{split} \] with a universal constant $C>0$. Here $A_{K_N(\alpha)} \le \varphi(K_N(\alpha))$ and $K_N(\alpha) \le \log N$ for all but finitely many $N$. Therefore $D_2(S(\alpha, N)) \le C \varphi (\log N)+O(\log N \log \log N)$ and $D_2(L(\alpha, N)) \le C \varphi (\log N)+O(\log N \log \log N)$ with implied constants depending only on $\alpha$ and $\varphi$. The factor $C$ can be removed by repeating the argument with $\varphi(x)/C$ instead of $\varphi(x)$. \noindent\textbf{(ii)} Assume that $\sum_{n=1}^{\infty} 1/\varphi(n)=\infty$. 
By Proposition \ref{simpleparsevalprop}, we have \[ D_2(S(\alpha, q_K)) \ge c \bigg( \sum_{k=1}^K a_k^2 \bigg)^{1/2} \ge c A_K \quad \textrm{and} \quad D_2(L(\alpha, q_K)) \ge c \bigg( \sum_{k=1}^K a_k^2 \bigg)^{1/2} \ge c A_K \] with a universal constant $c>0$. Here $A_K \ge \varphi(K)$ for infinitely many $K$, and $K \ge (\log q_K)/2$ for all but finitely many $K$. Hence $D_2(S(\alpha, q_K)) \ge c \varphi ((\log q_K)/2)$ and $D_2(L(\alpha, q_K)) \ge c \varphi ((\log q_K)/2)$ for infinitely many $K$. Repeating the argument with $\varphi (2x)/c$ instead of $\varphi (x)$, we deduce that $D_2(S(\alpha, q_K)) \ge \varphi (\log q_K)$ and $D_2(L(\alpha, q_K)) \ge \varphi (\log q_K)$ for infinitely many $K$, as claimed. \end{proof} \subsection{Limit distribution}\label{limitdistributionsection} Let $\lambda$ be the Lebesgue measure, and $\nu (B)= (1/\log 2) \int_B 1/(1+x) \, \mathrm{d} x$ ($B \subseteq [0,1]$ Borel) the Gauss measure. If $\alpha$ is chosen randomly from $[0,1]$ with distribution $\nu$, then its partial quotients are identically distributed random variables with distribution \[ \nu \left( \left\{ \alpha \in [0,1] \, : \, a_k = n \right\} \right) = \frac{1}{\log 2} \log \left( 1+\frac{1}{n(n+2)} \right) , \qquad k,n \ge 1. \] If $\alpha$ is chosen randomly from $[0,1]$ with distribution either $\lambda$ or $\nu$, then the sequence $a_k$ is $\psi$-mixing with exponential rate \cite[p.\ 119]{IK}. To find the limit distribution of $D_2^2(S(\alpha, N))/\log^2 N$, we shall need more sophisticated facts about the partial quotients of a typical irrational, which we now gather. Most importantly, a special case of a limit distribution theorem of Samur \cite{SA} (see also \cite{BO1}) states that if $\mu$ is a Borel probability measure on $[0,1]$ which is absolutely continuous with respect to the Lebesgue measure, then for any $t \ge 0$, \begin{equation}\label{Samur} \mu \left( \left\{ \alpha \in [0,1] \, : \, \frac{2 \log^2 2}{\pi K^2} \sum_{k=1}^K a_k^2 \le t \right\} \right) \to \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d}x \qquad \textrm{as } K \to \infty . \end{equation} If $\mu$ is either $\lambda$ or $\nu$, then general results of Heinrich \cite{HE} on $\psi$-mixing random variables imply the rate of convergence \begin{equation}\label{Heinrich1} \sup_{t \ge 0} \left| \mu \left( \left\{ \alpha \in [0,1] \, : \, \frac{2 \log^2 2}{\pi K^2} \sum_{k=1}^K a_k^2 \le t \right\} \right) - \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d}x \right| \ll \frac{1}{K^{1-\varepsilon}} \end{equation} with an arbitrary $\varepsilon >0$ and an implied constant depending only on $\varepsilon$. The corresponding result for $\sum_{k=1}^K a_k$ in the Gauss measure is also due to Heinrich: \[ \sup_{t \in \mathbb{R}} \left| \nu \left( \left\{ \alpha \in [0,1] \, : \, \frac{1}{K} \sum_{k=1}^K a_k - \frac{\log K -\gamma}{\log 2} \le t \right\} \right) - F(t) \right| \ll \frac{\log^2 K}{K} , \] where $\gamma$ is the Euler--Mascheroni constant, and $F(t)$ is the distribution function of the law with characteristic function \[ \int_{\mathbb{R}} e^{itx} \, \mathrm{d}F(t) = \exp \left( - \frac{\pi}{2 \log 2} |x| \left( 1+\frac{2i}{\pi} \mathrm{sgn}(x) \log |x| \right) \right) . \] Note that this is a stable law with stability parameter $1$ (and skewness parameter $1$). 
Hence $1-F(t) \ll 1/t$ as $t \to \infty$, and we immediately obtain \begin{equation}\label{sumakbound} \nu \left( \left\{ \alpha \in [0,1] \, : \, \frac{1}{K} \sum_{k=1}^K a_k \ge t + \frac{\log K}{\log 2} \right\} \right) \ll \frac{1}{t} + \frac{\log^2 K}{K} \qquad \textrm{as } t \to \infty . \end{equation} The final ingredient is a similar estimate for the convergent denominators: with a large enough universal constant $C>0$, \begin{equation}\label{qKbound} \nu \left( \left\{ \alpha \in [0,1] \, : \, \left| \log q_K - \frac{\pi^2}{12 \log 2} K \right| \ge C \sqrt{K \log K} \right\} \right) \ll \frac{1}{\sqrt{K}} . \end{equation} This follows from the fact that $\log q_K$ satisfies the central limit theorem with rate $O(1/\sqrt{K})$, as shown by Morita \cite{MO}. We mention that a better upper bound can be deduced from the large deviation inequality of Takahasi \cite{TA}, but \eqref{qKbound} suffices for our purposes. \begin{proof}[Proof of Theorem \ref{irrationallimitdistributiontheorem}] Throughout the proof, $C>0$ is a large universal constant whose value changes from line to line, and $Y_i=Y_i(\alpha, N)$, $i=1,2,\ldots$ are error terms. For any $N \ge 2$, let $K_N(\alpha)$ be the positive integer for which $q_{K_N(\alpha) -1} < N \le q_{K_N(\alpha )}$. Proposition \ref{parsevalprop} and formula \eqref{diophantineevaluation} show that we can write \[ D_2^2 (S(\alpha, N)) = \frac{1}{360} \sum_{k=1}^{K_N(\alpha)-1} a_k^2 + Y_1, \quad \textrm{where} \quad |Y_1| \le \frac{1}{180} a_{K_N(\alpha)}^2+ C \sum_{k=1}^{K_N(\alpha)} a_k + \frac{C}{N} \sum_{k=0}^{K_N(\alpha) -2} a_{k+1}^3 q_k . \] Using the general fact $q_{k+2}/q_k \ge 2$, we estimate the last error term as \[ \begin{split} \frac{1}{N} \sum_{k=0}^{K_N(\alpha)-2} a_{k+1}^3 q_k &\le \frac{1}{N} \sum_{k=1}^{K_N(\alpha)-1} a_k^2 q_k \\ &\le \sum_{k=1}^{K_N(\alpha) - 100 \log K_N(\alpha)} a_k^2 \frac{q_k}{q_{K_N(\alpha)-1}} + \sum_{k=K_N(\alpha) - 100 \log K_N(\alpha)}^{K_N(\alpha) -1} a_k^2 \frac{q_k}{q_{K_N(\alpha) -1}} \\ &\le \frac{1}{K_N(\alpha)^{10}} \sum_{k=1}^{K_N(\alpha )} a_k^2 + \sum_{k=K_N(\alpha) - 100 \log K_N(\alpha)}^{K_N(\alpha) -1} a_k^2 . \end{split} \] This leads to the simplified form $D_2^2(S(\alpha, N)) = (1/360) \sum_{k=1}^{K_N(\alpha)} a_k^2 + Y_2$, where \[ |Y_2| \le \frac{C}{K_N(\alpha)^{10}} \sum_{k=1}^{K_N(\alpha)} a_k^2 + C\sum_{k=K_N(\alpha) -100 \log K_N(\alpha)}^{K_N(\alpha)} a_k^2 + C\sum_{k=1}^{K_N(\alpha)} a_k . \] Set $\overline{K}=\lceil \frac{12 \log 2}{\pi^2} \log N \rceil$. The estimate \eqref{qKbound} states that \[ \nu \left( \left\{ \alpha \in [0,1] \, : \, \left| \log q_{\overline{K}} - \frac{\pi^2}{12 \log 2} \overline{K} \right| \ge C \sqrt{\overline{K} \log \overline{K}} \right\} \right) \ll \frac{1}{\sqrt{\overline{K}}} . \] By the definition of $K_N(\alpha)$ and $\overline{K}$, this immediately gives \[ \nu \left( \left\{ \alpha \in [0,1] \, : \, |K_N(\alpha ) - \overline{K}| \ge C \sqrt{\overline{K} \log \overline{K}} \right\} \right) \ll \frac{1}{\sqrt{\overline{K}}} . \] Roughly speaking, this means that we can replace $K_N(\alpha)$ by $\overline{K}$ in the above formulas; the point is that the latter does not depend on $\alpha$. 
More precisely, outside a set of $\nu$-measure $\ll 1/\sqrt{\overline{K}}$, we have $D_2^2(S(\alpha, N)) = (1/360) \sum_{k=1}^{\overline{K}} a_k^2+Y_3$, where \[ |Y_3| \le \frac{C}{\overline{K}^{10}} \sum_{k=1}^{2 \overline{K}} a_k^2 + C \sum_{k=\overline{K}-C\sqrt{\overline{K} \log \overline{K}}}^{\overline{K} + C \sqrt{\overline{K} \log \overline{K}}} a_k^2 +C \sum_{k=1}^{2\overline{K}} a_k . \] Since $5 \pi^3 /\log^2 N = 720 \log^2 2 /(\pi \overline{K}^2) +O(1/\overline{K}^3)$, normalizing the previous formula leads to the fact that outside a set of $\nu$-measure $\ll 1/\sqrt{\overline{K}}$, \[ 5 \pi^3 \frac{D_2^2 (S(\alpha, N))}{\log^2 N} = \frac{2 \log^2 2}{\pi \overline{K}^2} \sum_{k=1}^{\overline{K}} a_k^2 +Y_4 , \] where \[ |Y_4| \le \frac{C}{\overline{K}^3} \sum_{k=1}^{2 \overline{K}} a_k^2 + \frac{C}{\overline{K}^2} \sum_{k=\overline{K}-C\sqrt{\overline{K} \log \overline{K}}}^{\overline{K} + C \sqrt{\overline{K} \log \overline{K}}} a_k^2 +\frac{C}{\overline{K}^2} \sum_{k=1}^{2\overline{K}} a_k . \] We now estimate the three error terms in the previous formula. The limit distribution with rate of Heinrich \eqref{Heinrich1} gives \[ \nu \left( \left\{ \alpha \in [0,1] \, : \, \frac{1}{\overline{K}^3} \sum_{k=1}^{2 \overline{K}} a_k^2 \ge \frac{1}{\overline{K}^{1/3}} \right\} \right) \ll \int_{\mathrm{const} \cdot \overline{K}^{2/3}}^{\infty} \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x + \frac{1}{\overline{K}^{1-\varepsilon}} \ll \frac{1}{\overline{K}^{1/3}} . \] Since the sequence $a_k$ is strictly stationary, we similarly deduce \[ \begin{split} \nu \Bigg( \Bigg\{ \alpha \in [0,1] \, : \, \frac{1}{\overline{K}^2} \sum_{k=\overline{K}-C \sqrt{\overline{K} \log \overline{K}}}^{\overline{K}+C \sqrt{\overline{K} \log \overline{K}}} &a_k^2 \ge \frac{(\log \overline{K})^{1/3}}{\overline{K}^{1/3}} \Bigg\} \Bigg) \\ &= \nu \left( \left\{ \alpha \in [0,1] \, : \, \frac{1}{\overline{K}^2} \sum_{k=1}^{C \sqrt{\overline{K} \log \overline{K}}} a_k^2 \ge \frac{(\log \overline{K})^{1/3}}{\overline{K}^{1/3}} \right\} \right) \\ &\ll \int_{\mathrm{const} \cdot \overline{K}^{2/3}/(\log \overline{K})^{2/3}}^{\infty} \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x + \frac{1}{\overline{K}^{1/2-\varepsilon}} \\ &\ll \frac{(\log \overline{K})^{1/3}}{\overline{K}^{1/3}}. \end{split} \] Finally, formula \eqref{sumakbound} gives \[ \nu \left( \left\{ \alpha \in [0,1] \, : \, \frac{1}{\overline{K}^2} \sum_{k=1}^{2\overline{K}} a_k \ge \frac{1}{\overline{K}^{1/3}} \right\} \right) \ll \frac{1}{\overline{K}^{2/3}} . \] By the previous three estimates, we can finally write \begin{equation}\label{D2inmeasure} 5 \pi^3 \frac{D_2^2 (S(\alpha, N))}{\log^2 N} = \frac{2 \log^2 2}{\pi \overline{K}^2} \sum_{k=1}^{\overline{K}} a_k^2 +Y_5, \end{equation} where \begin{equation}\label{Y5bound} \nu \left( \left\{ \alpha \in [0,1] \, : \, |Y_5| \ge C \frac{(\log \overline{K})^{1/3}}{\overline{K}^{1/3}} \right\} \right) \le C \frac{(\log \overline{K})^{1/3}}{\overline{K}^{1/3}} . \end{equation} The proof of the theorem is now immediate. Assume first, that $\mu$ is absolutely continuous with respect to the Lebesgue measure. The theorem of Samur \eqref{Samur} ensures that the main term in \eqref{D2inmeasure} converges in distribution to the standard L\'evy distribution as $N$, and hence $\overline{K}$, goes to infinity. 
Since $Y_5 \to 0$ in $\nu$-measure, the same holds also in $\mu$-measure, and the convergence to the standard L\'evy distribution remains true for the left hand side of \eqref{D2inmeasure}. This finishes the proof for a general absolutely continuous measure $\mu$. Next, let $\mu$ be either $\lambda$ or $\nu$. Then the sequence $a_k$ is $\psi$-mixing with exponential rate, and the limit distribution with rate of Heinrich \eqref{Heinrich1} ensures that the main term in \eqref{D2inmeasure} converges to the standard L\'evy distribution with rate $\ll 1/\overline{K}^{1-\varepsilon}$. The estimate \eqref{Y5bound}, which holds also with $\lambda$ in place of $\nu$, together with the trivial fact that the distribution function of the L\'evy distribution is Lipschitz, shows that this convergence remains true for the left hand side of \eqref{D2inmeasure} with the rate $\ll (\log \overline{K})^{1/3} / \overline{K}^{1/3}$. This finishes the proof of the rate of convergence for $\lambda$ and $\nu$. \end{proof} \section{Typical rationals}\label{typicalrationalsection} Let $F_Q$ denote the set of all reduced fractions in $[0,1]$ with denominator at most $Q$, and let us write every $p/q \in F_Q$ in the form $p/q=[0;a_1, \ldots, a_r]$. It does not matter which of the two possible expansions is chosen. Note that the partial quotients $a_1=a_1(p/q), \ldots, a_r=a_r(p/q)$ as well as the length $r=r(p/q)$ are functions of $p/q$. For the sake of simplicity, we use the convention $a_k=0$ if $k>r$. The proof of Theorem \ref{rationallimitdistributiontheorem} is based on recent results of Bettin and Drappeau on the limit distribution of power sums of the partial quotients; they are perfect analogues of the results for typical irrationals mentioned in Section \ref{limitdistributionsection}. \begin{lem}[Bettin--Drappeau \cite{BD}]\label{BDlemma} For any $Q \ge 2$ and $\varepsilon>0$, \begin{equation}\label{Bettin1} \sup_{t \ge 0} \left| \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \frac{\pi^3}{72 \log^2 Q} \sum_{k=1}^r a_k^2 \le t \right\} \right| - \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x \right| \ll \frac{1}{(\log Q)^{1-\varepsilon}} \end{equation} and \[ \sup_{t \in \mathbb{R}} \left| \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \frac{1}{\log Q} \sum_{k=1}^r a_k - \frac{\log \log Q - \gamma}{\pi^2/12} \le t \right\} \right| - G(t) \right| \ll \frac{1}{(\log Q)^{1-\varepsilon}} \] with implied constants depending only on $\varepsilon$. Here $\gamma$ is the Euler--Mascheroni constant, and $G(t)$ is the distribution function of the law with characteristic function \[ \int_{\mathbb{R}} e^{itx} \, \mathrm{d} G(t) = \exp \left( - \frac{6}{\pi} |x| \left( 1 + \frac{2i}{\pi} \mathrm{sgn}(x) \log |x| \right) \right) . \] \end{lem} \noindent The second limit distribution in Lemma \ref{BDlemma} immediately yields \begin{equation}\label{Bettin2} \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \frac{1}{\log Q} \sum_{k=1}^r a_k \ge t + \frac{\log \log Q}{\pi^2/12} \right\} \right| \ll \frac{1}{t} + \frac{1}{(\log Q)^{1-\varepsilon}} \quad \textrm{as } t \to \infty . \end{equation} Note that \eqref{Bettin1} was stated in \cite{BD} with the rate $\ll 1/(\log \log Q)^{1-\varepsilon}$, but the methods of that paper actually give $\ll 1/(\log Q)^{1-\varepsilon}$. For the sake of completeness, we deduce \eqref{Bettin1} as stated here in Section \ref{section4.1}. 
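The statistics described in Lemma \ref{BDlemma} are also easy to observe empirically. The following Python sketch (again an illustration only, not used in the proofs) enumerates $F_Q$, computes $\frac{\pi^3}{72 \log^2 Q} \sum_{k=1}^r a_k^2$ for every reduced fraction, and compares the empirical distribution function with the standard L\'evy distribution function, which has the closed form $\int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d}x = \mathrm{erfc}(1/\sqrt{2t})$. Since the rate of convergence is only of order $(\log Q)^{-1/2}$, the agreement for computationally feasible $Q$ is rough.
\begin{verbatim}
import math
from fractions import Fraction

def partial_quotients(x):
    # continued fraction [a_0; a_1, ..., a_r] of a nonnegative rational x
    a, p, q = [], x.numerator, x.denominator
    while q:
        a.append(p // q)
        p, q = q, p % q
    return a

def levy_cdf(t):
    # int_0^t e^{-1/(2x)} / (sqrt(2 pi) x^{3/2}) dx = erfc(1/sqrt(2t))
    return math.erfc(1.0 / math.sqrt(2.0 * t)) if t > 0 else 0.0

Q = 400
vals = []
for q in range(2, Q + 1):        # 0/1 and 1/1 omitted; irrelevant for the statistics
    for p in range(1, q):
        if math.gcd(p, q) == 1:
            a = partial_quotients(Fraction(p, q))
            vals.append(math.pi**3 / (72 * math.log(Q)**2)
                        * sum(ak * ak for ak in a[1:]))
for t in (0.5, 1.0, 2.0, 4.0):
    print(t, sum(v <= t for v in vals) / len(vals), levy_cdf(t))
\end{verbatim}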
We now prove a lemma which will serve as a substitute for the fact that the partial quotients are not exactly identically distributed, and then prove Theorem \ref{rationallimitdistributiontheorem}. \begin{lem}\label{partialquotientlemma} For any positive integers $Q,k,t$, we have \[ \left| \left\{ \frac{p}{q} \in F_Q \, : \, a_k \ge t \right\} \right| \le \frac{2Q^2}{t} . \] \end{lem} \begin{proof} Assume first, that $k=1$. Note that $a_1 \ge t$ implies that $0 < p/q \le 1/t$. In particular, for each $1 \le q \le Q$ there are at most $q/t$ possible numerators $p$, hence \begin{equation}\label{k=1case} \left| \left\{ \frac{p}{q} \in F_Q \, : \, a_1 \ge t \right\} \right| \le \sum_{q=1}^Q \frac{q}{t} \le \frac{Q^2}{t} . \end{equation} Next, assume that $k \ge 2$. Let $\mathrm{denom}(x)$ denote the denominator of a rational $x$ (in its reduced form). From the recursion satisfied by the denominator of the convergents one readily deduces the supermultiplicative property \[ \mathrm{denom}([0;a_1, \ldots, a_r]) \ge \mathrm{denom}([0;a_1, \ldots,a_{k-1}]) \cdot \mathrm{denom}([0;a_k, \ldots , a_r]) . \] For any fixed positive integers $b_1, \ldots, b_{k-1}$ we thus obtain \[ \left| \left\{ \frac{p}{q} \in F_Q \, : \, a_1=b_1, \ldots, a_{k-1}=b_{k-1}, \,\, a_k \ge t \right\} \right| \le \left| \left\{ \frac{p}{q} \in F_{Q/\mathrm{denom}([0;b_1, \ldots, b_{k-1}])} \, : \, a_1 \ge t \right\} \right| . \] Summing over $b_1, \ldots, b_{k-1}$ and applying \eqref{k=1case} leads to \[ \left| \left\{ \frac{p}{q} \in F_Q \, : \, a_k \ge t \right\} \right| \le \sum_{b_1, \ldots, b_{k-1}=1}^{\infty} \frac{Q^2}{t (\mathrm{denom}([0;b_1, \dots, b_{k-1}]))^2} . \] Recall that the set of real numbers $[0;c_1, c_2, \ldots ] \in [0,1]$ such that $c_1=b_1, \ldots, c_{k-1}=b_{k-1}$ is an interval whose length is at least $1/(2 \, \mathrm{denom}([0;b_1, \ldots, b_{k-1}])^2)$. Since these are pairwise disjoint intervals, we have \[ \sum_{b_1, \ldots, b_{k-1}=1}^{\infty} \frac{1}{(\mathrm{denom}([0;b_1, \ldots, b_{k-1}]))^2} \le 2, \] and the claim follows. \end{proof} \begin{proof}[Proof of Theorem \ref{rationallimitdistributiontheorem}] Throughout the proof, $C>0$ is a large universal constant whose value changes from line to line, and $Z_i=Z_i (p/q)$, $i=1,2$ are error terms. Proposition \ref{parsevalprop} and formula \eqref{diophantineevaluation} show that we can write \[ D_2^2 (S(p/q,q)) = \frac{1}{360} \sum_{k=1}^r a_k^2 +Z_1, \quad \textrm{where} \quad |Z_1| \le C \sum_{k=1}^r a_k + \frac{C}{q} \sum_{k=0}^{r-1} a_{k+1}^3 q_k . \] Here $a_{k+1}^3 q_k \le a_{k+1}^2 q_{k+1}$, and $q_k/q=q_k/q_r \le 1/F_{r-k+1}$, where $F_{r-k+1}$ are the Fibonacci numbers. Hence normalizing the previous formula leads to \[ 5 \pi^3 \frac{D_2^2 (S(p/q,q))}{\log^2 Q} = \frac{\pi^3}{72 \log^2 Q} \sum_{k=1}^r a_k^2 + Z_2, \quad \textrm{where} \quad |Z_2| \le \frac{C}{\log^2 Q} \sum_{k=1}^r a_k + \frac{C}{\log^2 Q} \sum_{k=1}^r \frac{a_k^2}{F_{r-k+1}} . \] The first error term can be estimated in measure using formula \eqref{Bettin2} as \[ \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \frac{1}{\log^2 Q} \sum_{k=1}^r a_k \ge \frac{1}{(\log Q)^{1/2}} \right\} \right| \ll \frac{1}{(\log Q)^{1/2}} . \] Note that the map reversing the order of the partial quotients $F_Q \to F_Q$, $[0;a_1, a_2, \ldots, a_r] \mapsto [0; a_r, \ldots, a_2, a_1]$ is a bijection; in fact, $[0;a_r,\ldots, a_2,a_1]$ is the reduced fraction $q_{r-1}/q_r$. 
Therefore the distribution of $(a_r, \ldots, a_2, a_1)$ is identical to that of $(a_1, a_2, \ldots, a_r)$, and we can apply Lemma \ref{partialquotientlemma} to estimate the second error term in measure as \[ \begin{split} \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \frac{1}{\log^2 Q} \sum_{k=1}^r \frac{a_k^2}{F_{r-k+1}} \ge \frac{1}{(\log Q)^{1/2}} \right\} \right| &= \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \sum_{k=1}^r \frac{a_k^2}{F_k} \ge (\log Q)^{3/2} \right\} \right| \\ &\le \frac{1}{|F_Q|} \sum_{k=1}^{\infty} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \frac{a_k^2}{F_k} \ge (\log Q)^{3/2} \right\} \right| \\ &\le \frac{1}{|F_Q|} \sum_{k=1}^{\infty} \frac{2Q^2}{F_k^{1/2} (\log Q)^{3/4}} \\ &\ll \frac{1}{(\log Q)^{3/4}} . \end{split} \] Note that we used the convention $a_k=0$ if $k>r$, and the fact that $|F_Q| \gg Q^2$. In particular, \[ \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, |Z_2| \ge \frac{1}{(\log Q)^{1/2}} \right\} \right| \ll \frac{1}{(\log Q)^{1/2}} , \] and the limit distribution theorem \eqref{Bettin1} of Bettin and Drappeau yields \[ \sup_{t \ge 0} \left| \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, 5 \pi^3 \frac{D_2^2 (S(p/q,q))}{\log^2 Q} \le t \right\} \right| - \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x \right| \ll \frac{1}{(\log Q)^{1/2}} . \] The error of replacing $\log^2 Q$ by $\log^2 q$ is easily seen to be negligible compared to $1/(\log Q)^{1/2}$. \end{proof} \subsection{Proof of Lemma \ref{BDlemma}}\label{section4.1} We now deduce the rate $\ll 1/(\log Q)^{1-\varepsilon}$ in \eqref{Bettin1}. Fix $\varepsilon >0$. Applying the main result \cite[Theorem 1.1]{BD} of Bettin and Drappeau to, in their notation, $\phi(x)=\lfloor 1/x \rfloor^2$ with $\alpha_0=1/2-\varepsilon$, we conclude that there exist constants $t_0,\delta>0$ such that for all $|t| < t_0$, \begin{equation}\label{charfunction} \frac{1}{|F_Q|} \sum_{p/q \in F_Q} \exp \left( it \sum_{k=1}^r a_k^2 \right) = \exp \left( U(t) \log Q + O \left( |t|^{1/2-\varepsilon} + Q^{-\delta} \right) \right) , \end{equation} where \[ U(t) = \frac{12}{\pi^2} \int_{0}^{1} \frac{e^{it\lfloor 1/x \rfloor^2} -1}{1+x} \, \mathrm{d}x + O \left( |t|^{1-\varepsilon} \right) = \frac{12}{\pi^2} \int_{1}^{\infty} \frac{e^{it \lfloor x \rfloor^2}-1}{x^2+x} \, \mathrm{d}x + O \left( |t|^{1-\varepsilon} \right) . \] Here $t_0, \delta$ and the implied constants depend only on $\varepsilon$. Our improvement in \eqref{Bettin1} comes from a more careful estimate for $U(t)$. Assume that $0<t<t_0$. Since $|\lfloor x \rfloor^2 -x^2| \le 2x$, the error of removing the integer part function is negligible: \[ \left| \int_{1}^{\infty} \frac{e^{i t \lfloor x \rfloor^2} - e^{i t x^2}}{x^2+x} \, \mathrm{d}x \right| \le \int_{1}^{\infty} \frac{\min \{ 2 t x , 2 \}}{x^2+x} \, \mathrm{d} x \ll t \log \frac{1}{t} . \] Therefore \[ U(t) = \frac{12}{\pi^2} \int_{1}^{\infty} \frac{e^{itx^2}-1}{x^2+x} \, \mathrm{d} x +O(t^{1-\varepsilon}) = \frac{12 \sqrt{t}}{\pi^2} \int_{\sqrt{t}}^{\infty} \frac{e^{ix^2}-1}{x^2+\sqrt{t}x} \, \mathrm{d} x +O(t^{1-\varepsilon}) . \] We now compare the remaining integral to its limit, the Fresnel-type integral $\int_{0}^{\infty} (e^{ix^2}-1)/x^2 \, \mathrm{d} x = (i-1) \sqrt{2 \pi}/2$. 
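As a side check (no part of the argument depends on it), the value of this Fresnel-type integral can be confirmed numerically. In the sketch below, which uses SciPy's adaptive quadrature, the integral is truncated at $T=40$ and the tail is replaced by $-1/T$; by integration by parts the resulting error is $O(1/T^3)$.
\begin{verbatim}
import math
from scipy.integrate import quad

def f_re(x):
    return (math.cos(x * x) - 1.0) / (x * x) if x > 0 else 0.0

def f_im(x):
    return math.sin(x * x) / (x * x) if x > 0 else 1.0

T = 40
# integrate over unit subintervals to keep the oscillations resolved
re = sum(quad(f_re, k, k + 1, limit=200)[0] for k in range(T))
im = sum(quad(f_im, k, k + 1, limit=200)[0] for k in range(T))
approx = complex(re, im) - 1.0 / T   # tail contributes -1/T + O(1/T^3)
exact = (1j - 1.0) * math.sqrt(2.0 * math.pi) / 2.0
print(approx, exact)                 # both close to -1.2533 + 1.2533i
\end{verbatim}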
We have \[ \begin{split} \left| \int_{\sqrt{t}}^{\infty} \frac{e^{ix^2}-1}{x^2+\sqrt{t}x} \, \mathrm{d} x - \int_{0}^{\infty} \frac{e^{ix^2}-1}{x^2} \, \mathrm{d} x \right| &\le \left| \int_{0}^{\sqrt{t}} \frac{e^{ix^2}-1}{x^2} \, \mathrm{d} x \right| + \int_{\sqrt{t}}^{\infty} |e^{ix^2}-1| \cdot \left| \frac{1}{x^2+\sqrt{t}x} - \frac{1}{x^2} \right| \, \mathrm{d} x \\ &\le \int_{0}^{\sqrt{t}} 1 \, \mathrm{d} x + \int_{\sqrt{t}}^{\infty} \min \{ x^2, 2 \} \frac{\sqrt{t}}{x^3} \, \mathrm{d} x \\ &\ll \sqrt{t} \log \frac{1}{t} , \end{split} \] hence $U(t)= \frac{6 \sqrt{2} \sqrt{t}}{\pi^{3/2}} (i-1) + O(t^{1-\varepsilon})$. The case of negative $t$ follows from complex conjugation, thus for $|t|<t_0$, \begin{equation}\label{Utestimate} U(t) = - \frac{6 \sqrt{2} |t|^{1/2}}{\pi^{3/2}} (1- i \mathrm{sgn}(t)) + O(|t|^{1-\varepsilon}) . \end{equation} Now let \[ \varphi_1(t)= \frac{1}{|F_Q|} \sum_{p/q \in F_Q} \exp \left( i t \frac{\pi^3}{72 \log^2 Q} \sum_{k=1}^r a_k^2 \right) \] and $\varphi_2(t)=\exp (-|t|^{1/2}(1-i \mathrm{sgn}(t)))$; the latter is the characteristic function of the standard L\'evy distribution. The Berry--Esseen inequality \cite[p.\ 142]{PE} states that the distance of these two distributions in the Kolmogorov metric is, with any $T>0$, \[ \sup_{t \ge 0} \left| \frac{1}{|F_Q|} \left| \left\{ \frac{p}{q} \in F_Q \, : \, \frac{\pi^3}{72 \log^2 Q} \sum_{k=1}^r a_k^2 \le t \right\} \right| - \int_0^t \frac{e^{-1/(2x)}}{\sqrt{2 \pi} x^{3/2}} \, \mathrm{d} x \right| \ll \frac{1}{T} + \int_{0}^{T} \frac{|\varphi_1(t)-\varphi_2(t)|}{t} \, \mathrm{d} t . \] Choose $T = \log Q$. Formulas \eqref{charfunction} and \eqref{Utestimate} show that for $|t| \le \log Q$, \[ \begin{split} \varphi_1(t) &= \varphi_2(t) \exp \left( O \left( \left( \frac{|t|}{\log^2 Q} \right)^{1-\varepsilon} \log Q + \left( \frac{|t|}{\log^2 Q} \right)^{1/2-\varepsilon} + Q^{-\delta} \right) \right) \\ &= \varphi_2(t) \left( 1 + O \left( \frac{|t|^{1-\varepsilon}+|t|^{1/2-\varepsilon}}{(\log Q)^{1-2 \varepsilon}} + Q^{-\delta} \right) \right) . \end{split} \] Using $|\varphi_2(t)|= e^{-|t|^{1/2}}$, this immediately yields \[ |\varphi_1(t)-\varphi_2(t)| \ll e^{-|t|^{1/2}} \left( \frac{|t|^{1-\varepsilon}+|t|^{1/2-\varepsilon}}{(\log Q)^{1-2 \varepsilon}} + Q^{-\delta} \right) . \] It is now easy to see that \[ \int_{Q^{-100}}^{1} \frac{|\varphi_1(t)-\varphi_2(t)|}{t} \, \mathrm{d} t \ll \frac{1}{(\log Q)^{1-2 \varepsilon}} \quad \textrm{and} \quad \int_{1}^{\log Q} \frac{|\varphi_1(t)-\varphi_2(t)|}{t} \, \mathrm{d} t \ll \frac{1}{(\log Q)^{1-2 \varepsilon}} . \] On the other hand, by a very rough estimate we have $\sum_{k=1}^r a_k^2 \le Q^3$, hence $|\varphi_1(t)-1| \ll |t| Q^3$. Clearly $|\varphi_2(t)-1| \ll |t|^{1/2}$, thus \[ \int_{0}^{Q^{-100}} \frac{|\varphi_1(t)-\varphi_2(t)|}{t} \, \mathrm{d} t \ll \int_{0}^{Q^{-100}} \frac{t Q^3 + t^{1/2}}{t} \, \mathrm{d} t \ll Q^{-50} . \] Therefore \[ \frac{1}{\log Q} + \int_{0}^{\log Q} \frac{|\varphi_1(t)-\varphi_2(t)|}{t} \, \mathrm{d} t \ll \frac{1}{(\log Q)^{1-2 \varepsilon}} , \] as claimed. \section*{Acknowledgments} The author is supported by the Austrian Science Fund (FWF), project F-5510. I would like to thank Sary Drappeau for helpful discussions on Lemma \ref{BDlemma}.
\section{Introduction} Graph coloring has been well-studied in mathematics since the nineteenth century and has widespread applications in day-to-day life, including scheduling problems, register allocation, radio frequency assignments and sudoku solutions \cite{coloring-applications}. Traditionally, the coloring of a graph refers to an assignment of labels (called colors) to the vertices of a graph such that no two adjacent vertices share the same color. The chromatic number of a graph is defined to be the minimum number of colors for which such an assignment is possible. More recently, a quantum generalization of the chromatic number was introduced within the framework of non-local games in quantum information theory \cite{cameron}. The quantum chromatic number of a graph is defined as the minimal number of colors necessary in a quantum protocol in which two separated players, who cannot communicate with each other but share an entangled quantum state, try to convince an interrogator with certainty that they have a coloring for the given graph. There are known examples of graphs whose quantum chromatic number is strictly smaller than their classical chromatic number \cite{mr1, cameron}, thus exhibiting the power of quantum entanglement. Quantum chromatic numbers of classical graphs have close connections to Tsirelson's conjecture and the Connes embedding problem and have been extensively studied in the past decade \cite{rank1_qcn, mr1, paulsen-ivan, paulsen-ivan-sev-winter-stahlke}. In general, computing the chromatic number of a graph is an NP-hard problem. However, several lower bounds on the (quantum) chromatic number have been shown using spectral graph theory \cite{EW1}. In this paper, we are interested in generalizing these spectral bounds to the setting of \textit{quantum graphs} and estimating the quantum chromatic number of a quantum graph. Quantum graphs are a non-commutative generalization of classical graphs that have attracted significant attention in recent years due to their intriguing connections with several areas of mathematics, physics and computer science. They first appeared in \cite{shulman}, and have independently emerged in other disguises thereafter. In information theory, quantum graphs were introduced as a quantum analogue of the confusability graph of classical channels \cite{dsw}. Another definition was proposed in the context of quantum relations \cite{weaver-qg}, which describes a quantum graph as a reflexive and symmetric quantum relation on a finite dimensional von Neumann algebra. In \cite{oxford}, an equivalent perspective on quantum graphs was developed in a categorical framework for quantum functions, using a quantum adjacency matrix, and was further generalized in \cite{kari}. In recent years, research in quantum graph theory has undergone vast developments and quantum graphs have been explored in the context of zero-error quantum information theory, quantum error correction, operator algebras, non-local games, quantum symmetries, non-commutative topology and other fields \cite{kari2, randomqg, ramsey1, ramsey2, matsuda, gromada}. In particular, there have been multiple studies on the coloring of quantum graphs \cite{paulsen-ortiz, mehta, stahlke, bgh, ivan-lyudmila} leading to different variants of the chromatic number of a quantum graph, in both the classical and quantum sense. One recent approach developed in \cite{bgh} defines the coloring of a quantum graph using a two-player non-local game involving quantum inputs and classical outputs.
This game generalizes the coloring game for classical graphs \cite{cameron} and introduces chromatic numbers of a quantum graph in different mathematical models: $loc, q, qa, qc, C^*, hered, alg$. It was shown that the chromatic numbers defined in this framework agree nicely with other versions in the literature \cite{mr1, stahlke, mehta}, and also lead to a four-coloring theorem for quantum graphs in the algebraic model. We adopt this formalism of quantum graph coloring in the present paper. The goal of this paper is to obtain bounds for the quantum chromatic number of quantum graphs. Chromatic numbers of quantum graphs are closely related to the zero-error capacity of quantum channels \cite{dsw}. Hence, estimating these numbers is useful for the development of zero-error quantum communication. In \cite{EW1}, the authors proved many lower bounds on the quantum chromatic number of \textit{classical graphs} using an algebraic characterization of graph coloring. We extend their results to the setting of quantum graphs using a combinatorial definition of quantum graph coloring developed in \cite{bgh}. Our approach uses the quantum adjacency matrix, defined in \cite{oxford, kari}, to associate a spectrum with the given quantum graph. We use this spectrum and techniques adapted from \cite{EW1} to achieve the spectral estimates. In this process, we naturally get lower bounds for the classical chromatic numbers of quantum graphs since the classical chromatic number is greater than or equal to the quantum chromatic number. Our main result can be summarized as follows: \begin{thm} \label{main result} Let $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ be an irreflexive quantum graph, and let $\chi(\mathcal{G})$ and $\chi_q(\mathcal{G})$ denote the classical and quantum chromatic numbers of $\mathcal{G}$ respectively. Then, \[ 1 + \max \left\{ \dfrac{\lambda_{\max}}{|\lambda_{\min}|}, \dfrac{\dim(S)}{\dim(S) - \dim(\mathcal{M}) \gamma_{\min}}, \dfrac{s^\pm}{s^\mp}, \dfrac{n^\pm}{n^\mp} , \dfrac{\lambda_{\max}}{\lambda_{\max} - \gamma_{\max}+ \theta_{max}} \right \} \le \chi_q(\mathcal{G}) \le \chi(\mathcal{G}). \] \end{thm} Specifically, we prove that Hoffman's bound \cite{hoffman} holds in the case of quantum graphs. We also introduce quantum analogues for the edge number, inertia, Laplacian and signless Laplacian of a quantum graph along the way. Further, we demonstrate the tightness of all the bounds in the case of irreflexive complete quantum graphs. \noindent Our paper is organized as follows: Section \S \ref{background} provides the necessary background on quantum graphs and the connections between different perspectives. We also review the notion of quantum graph coloring and chromatic numbers in this section. Section \S \ref{pinching and twirling} introduces the spectrum of a quantum graph and develops algebraic results connecting the quantum adjacency operator to quantum graph coloring. In section \S \ref{bounds}, we prove the spectral lower bounds listed in theorem \ref{main result} for the quantum chromatic numbers of quantum graphs. We conclude with an illustration of the bounds in the case of complete quantum graphs in section \S \ref{application}. The appendix \S \ref{appendix} presents a table translating the different definitions of quantum graphs. \section{Preliminaries} \label{background} In this section, we review some definitions and results on quantum graphs and quantum coloring which will be required for our discussion. We begin by listing some notations used in the paper. 
\subsection{Notations} \begin{itemize}[noitemsep] \item $[n]$ denotes the discrete set $\{1,2, \ldots, n\}$. \item $\ket{\cdot}$ denotes a column vector, while $\bra{\cdot}$ denotes its conjugate transpose. \item $M_n$ denotes the set of all $ n \times n$ complex matrices. \item $e_{ij}$ denotes the matrix whose $i^{th}$ row - $j^{th}$ column has entry $1$ and all other entries are 0. \item $\Tr$ denotes the natural trace, given by summing all diagonal terms of a matrix. \item $B(\mathcal{H} )$ denotes the algebra of bounded linear operators on a Hilbert space $\mathcal{H} $. \item If $T$ is a subset of an algebra $\mathcal{A}$, then the commutant of $T$ is denoted by $T' = \{ a \in \mathcal{A}: at = ta, \; \forall \; t \in T\}$. \item The spectrum of an operator $A$ is denoted by $\sigma(A)$. \item $G$ denotes a classical graph, $\chi(G)$ denotes the classical chromatic number of $G$ and $\chi_q(G)$ denotes the quantum chromatic number of $G$. \end{itemize} \subsection{Quantum graphs as operator spaces} Quantum graphs can be defined in different ways, as mentioned in the introduction. One way to describe them is as operator spaces satisfying a certain bimodule property \cite{weaver-qg}. This approach is commonly used for studying quantum coloring problems. We discuss this formalism here: \begin{defn} \label{qg1} Let $\mathcal{H} $ be a complex Hilbert space and $\mathcal{M} \subseteq B(\mathcal{H} )$ be a (non-degenerate) von Neumann algebra. A \textit{quantum graph} on $\mathcal{M}$ is an operator space $S \subseteq B(\mathcal{H} )$ that is closed under adjoint and is a bimodule over the commutant of $\mathcal{M}$, that is $\mathcal{M}' S \mathcal{M}' \subseteq S$. We denote this quantum graph by the tuple $\mathcal{G} = (S, \mathcal{M}, B(\mathcal{H} ))$. \end{defn} Motivated by confusability graphs in information theory, quantum graphs are generally assumed to be reflexive ($I \in S$) and hence, $S$ is an operator system in $B(\mathcal{H} )$. But for the purposes of graph coloring, we will only consider irreflexive quantum graphs, that is quantum analogues of graphs without loops. \begin{defn} A quantum graph $(S, \mathcal{M}, B(\mathcal{H} ))$ is said to be \textit{irreflexive} if $S \subseteq (\mathcal{M}')^{\perp}$. \end{defn} In particular, an irreflexive quantum graph on $M_n$ (with the standard representation $M_n = B(\mathbb{C}^n)$) is simply a self-adjoint traceless operator subspace in $M_n$. This is sometimes used as the definition of non-commutative graphs in the literature \cite{stahlke}. \noindent It can be shown that the operator space $S$ associated to a quantum graph $(S, \mathcal{M}, B(\mathcal{H} ))$ is essentially independent of the representation of $\mathcal{M}$ \cite{weaver1}. The intuition is that $S$ contains operators that represent edges in the graph, as illustrated by the following example. \begin{eg} \label{classical S} Let $G$ be a classical graph on $n$ vertices. One can identify the vertex set of $G$ with the algebra of diagonal matrices $D_n \subseteq M_n$, by identifying each vertex $i$ with the diagonal matrix $e_{ii} \in D_n$. Then, $S_G = span\{ e_{ij}: (i,j) \mbox{ is an edge in } G\} \subseteq M_n$ is a quantum graph over $D_n$. \end{eg} \begin{rem} Any quantum graph over $D_n$ is necessarily of the form $S_G$ for some classical graph $G$ \cite{weaver-qg}. 
Also, two reflexive classical graphs $G_1, G_2$ are isomorphic if and only if their corresponding operator systems $S_{G_1}, S_{G_2}$ are unitally completely order isomorphic \cite{paulsen-ortiz}. \end{rem} A ``purely quantum" example is the following one: \begin{eg} Let $\mathcal{M} = M_2$ and $S = \left \{ \begin{bmatrix} a & b \\ c & a \end{bmatrix} : a, b, c \in \mathbb{C} \right \}$. Then $(S, M_2, B(\mathbb{C}^2))$ is a quantum graph on $M_2$ that doesn't arise from any classical graph. \end{eg} \subsection{Quantum graphs with a quantum adjacency matrix} In this paper, we take advantage of an alternate (but equivalent) definition of a quantum graph, which involves quantizing the vertex set and the adjacency matrix. This formalism was first introduced in \cite{oxford} using the language of special symmetric dagger Frobenius algebras, and was later generalized to the non-tracial case in \cite{kari, matsuda}. In this perspective, the non-commutative analogue of a vertex set is played by a C*-algebra, which also carries the structure of a Hilbert space. It is defined as follows: \begin{defn}[Quantum set] A quantum set is a pair $(\mathcal{M}, \psi)$, where $\mathcal{M}$ is a finite dimensional C*-algebra and $\psi: \mathcal{M} \to \mathbb{C}$ is a faithful state. \end{defn} Using $\psi$, one can view $\mathcal{M}$ as a Hilbert space $L^2(\mathcal{M}) = L^2(\mathcal{M}, \psi)$ obtained from the GNS representation of $\mathcal{M}$ with respect to $\psi$. That is, $L^2(\mathcal{M})$ is the vector space $\mathcal{M}$ equipped with the inner product $\langle x, y \rangle = \psi(y^*x)$. Let $m: \mathcal{M} \otimes \mathcal{M} \to \mathcal{M}$ denote the multiplication map and $m^*$ denote the adjoint of $m$ when viewed as a linear operator from $L^2(\mathcal{M}) \otimes L^2(\mathcal{M}) \to L^2(\mathcal{M})$. Further, we denote the unit of $\mathcal{M}$ by $\mathbb{1}$ and let $\eta: \mathbb{C} \to \mathcal{M}$ be the unit map $\lambda \mapsto \lambda \mathbb{1}$. The adjoint of $\eta$ (as an operator on Hilbert spaces) is denoted by $\eta^*$ and is equal to $\psi$. While there are many choices for a faithful state $\psi$ on $\mathcal{M}$, we will restrict our attention to $\delta$-forms, as done in \cite{kari}. \begin{defn} For $\delta > 0$, a state $\psi: \mathcal{M} \to \mathbb{C}$ is called a \textit{$\delta$-form} if $mm^* = \delta^2 I$. \end{defn} \begin{eg} Let $X$ be a finite set and $\mathcal{M} = C(X)$ be the algebra of continuous complex valued functions on $X$ . Then the uniform measure $\psi (f) = \frac{1}{|X|} \sum_{x \in X} f(x)$ is a $\delta$-form on $C(X)$ with $\delta^2 = |X|$. In this case, $m^*$ is given by $e_i \mapsto |X| (e_i \otimes e_i)$, where $e_i$ is the characteristic function on the set $\{i \} \subseteq X$. \end{eg} \begin{eg} Let $\mathcal{M}$ be $M_n$ equipped with the canonical normalized trace $\psi = \frac{1}{n} \Tr$. Then $m^*(e_{ij}) = n \sum_{k=1}^n e_{ik} \otimes e_{kj}$, and $\psi$ is an $n$-form on $M_n$. \end{eg} The $\delta$-forms in the above examples are tracial, that is $\psi(xy) = \psi(yx)$ for all $x,y \in \mathcal{M}$. A tracial $\delta$-form on a finite dimensional C*-algebra is unique and has a nice form, which will be used in later sections. We recall this now: \begin{prop} \label{plancheral} Let $\mathcal{M}$ be a finite dimensional C*-algebra, decomposed as $\mathcal{M} \cong \bigoplus_{i=1}^N M_{n_i}$, where $N, n_1, n_2, \ldots, n_N$ are some positive integers. 
Then, there exists a \textit{unique} tracial $\delta$-form on $\mathcal{M}$ given by \begin{equation} \label{plancheral trace} \psi = \dfrac{1}{dim(\mathcal{M})} \bigoplus_{i=1}^N n_i \Tr( \cdot ) \end{equation} In this case, $\delta^2 = dim(\mathcal{M})$ and the state $\psi$ is called the \textit{Plancherel trace}. Moreover, $\psi = \frac{1}{\dim(\mathcal{M})} \Tr\vert_{\mathcal{M}}$, where $\Tr: B(L^2(\mathcal{M})) \to \mathbb{C}$ is the canonical trace. \end{prop} A quantum set endowed with an additional structure of a quantum adjacency matrix yields a quantum graph. \begin{defn}[\cite{kari}] \label{qg2} Let $\mathcal{M}$ be a finite dimensional C*-algebra equipped with a $\delta$-form $\psi$. A self-adjoint linear map $A: L^2(\mathcal{M}) \to L^2(\mathcal{M})$ is called a \textit{quantum adjacency matrix} if it satisfies the following conditions: \begin{enumerate} \item $m(A \otimes A)m^* = \delta^2 A$, \item $(I \otimes \eta^*m )(I \otimes A \otimes I) ( m^* \eta \otimes I) = A $. \end{enumerate} The tuple $\mathcal{G} = (\mathcal{M}, \psi, A)$ is called an (undirected) quantum graph. The quantum graph $(\mathcal{M}, \psi, A)$ is said to be \textit{reflexive} if it further satisfies the condition $m(A \otimes I)m^* = \delta^{2} I$ or is said to be \textit{irreflexive} if it satisfies the condition $m(A \otimes I)m^* = 0$. \end{defn} The motivation for the above definition comes from the commutative setting where $\mathcal{M} = C(X)$ and $\psi$ is the uniform measure on $X$. In this case, the quantum adjacency matrix $A: L^2(\mathcal{M}) \to L^2(\mathcal{M})$ can be identified with a matrix in $M_{|X|}(\mathbb{C})$, and the operation $\delta^{-2} m(P \otimes Q)m^*$ is simply the Schur product of the matrices $P$ and $Q$, given by entrywise multiplication. So, the first condition in definition \ref{qg2} says that $A$ must be an idempotent with respect to Schur multiplication, which is equivalent to saying that $A$ has entries in $\{0, 1\}$. The second condition says $ A = A^T$. If we drop the second condition in definition \ref{qg2}, it is called a \textit{directed} quantum graph \cite{kari2}. \begin{rem} \label{* preserve} The self-adjointness of $A$ along with condition (2) in definition \ref{qg2} implies that $A$ is also *-preserving \cite{matsuda}, that is $Ax^* = (Ax)^*$ for all $x \in \mathcal{M}$. \end{rem} Every quantum set can be easily equipped with an adjacency operator to obtain a quantum graph. An example is the complete quantum graph.
\subsection{Translation between different perspectives of quantum graphs} \label{translation} The two definitions of quantum graphs given in \ref{qg1} and \ref{qg2} can be shown to be equivalent \cite{oxford}, using a bijective correspondence between linear operators on $L^2(\mathcal{M})$ and elements in $\mathcal{M} \otimes \mathcal{M}^{op}$. A thorough algebraic proof for the correspondence between different definitions of quantum graphs is given in \cite{larissa}. We summarize this connection below: \begin{enumerate} \item Given a quantum graph $(\mathcal{M}, \psi, A)$, define $P: B(L^2(\mathcal{M})) \to B(L^2(\mathcal{M}))$ as \begin{equation} \label{a to p} P(X) = \delta^{-2} m(A \otimes X)m^* . \end{equation} Then, range$(P)$ is a self-adjoint operator subspace in $B(L^2(\mathcal{M}))$ that is a bimodule over $\mathcal{M}'$. \item Given a quantum graph $( S, \; (\mathcal{M}, \psi), \; B(L^2(\mathcal{M})) \;)$, let $P: B(L^2(\mathcal{M})) \to B(L^2(\mathcal{M}))$ denote a self-adjoint $\mathcal{M}'-\mathcal{M}'$ bimodule projection with $range(P) = S$. That is, $P(axb) = aP(x)b$, for all $x \in B(L^2(\mathcal{M}))$, $a,b \in \mathcal{M}'$ and $P^2 = P = P^*$, where the adjoint is taken with respect to the trace inner product on $B(L^2(\mathcal{M}))$. (Such a $P$ always exists and is unique for the given $S$ \cite{weaver1}.) Then, $A: L^2(\mathcal{M}) \to L^2(\mathcal{M})$ defined by \begin{equation} \label{p to a} A(x) = \delta^2 (\psi \otimes I) P ( x \otimes 1) \end{equation} is a quantum adjacency matrix on $(\mathcal{M},\psi)$. \end{enumerate} \begin{rem} \label{cbm} In \eqref{p to a} $P$ is interpreted as an element of $\mathcal{M} \otimes \mathcal{M}^{op}$ using the following well-known *-isomorphism in finite dimensions: \begin{eqnarray*} \label{p and P} \mathcal{M} \otimes \mathcal{M}^{op} & \cong & {}_{\mathcal{M}'}CB_{\mathcal{M}'} (B(L^2(\mathcal{M}))), \mbox{ given by } \\ x \otimes y^{op} & \longleftrightarrow & x(\cdot)y. \end{eqnarray*} Here, $\mathcal{M}^{op}$ denotes the opposite algebra of $\mathcal{M}$ and ${}_{\mathcal{M}'}CB_{\mathcal{M}'} (B(L^2(\mathcal{M})))$ denotes the set of completely bounded maps $P$ on $B(L^2(\mathcal{M}))$ with the property $P(axb) = aP(x)b$, for all $x \in B(L^2(\mathcal{M}))$, $a,b \in \mathcal{M}'$. An infinite dimensional version of this result can be found in \cite{effros-ruan}. \end{rem} \begin{rem} We also note that the expressions \eqref{a to p} and \eqref{p to a} are inverses of each other. \end{rem} In general, the above correspondence between $S$ and linear operators $A$ is not one-one since there are several different bimodule idempotents $P$ with the same range $S$. However, there is a \textit{unique} self-adjoint quantum adjacency matrix $A$ for a given $S$, which corresponds to the unique orthogonal bimodule projection onto $S$. In this case, $A$ is also completely positive, which was used as an alternate definition of quantum adjacency matrix in \cite{randomqg}. \subsection{Chromatic number of quantum graphs} In this section, we review a notion of quantum graph coloring that was developed in \cite{bgh} using a two-player quantum-to-classical nonlocal game, generalizing the coloring game for classical graphs \cite{cameron}. The inputs for the quantum graph coloring game are elements from a suitably chosen basis for the graph operator space (known as quantum edge basis) and the outputs are classical color assignments. 
The inputs are quantum in the sense that they are tensor product states, where one player receives the left leg of the tensor and the other player receives the right leg. The players win the game if their responses jointly satisfy a synchronicity condition and respect the adjacency structure of the quantum graph. We refer the reader to \cite{bgh} for more details on the game, and for the results presented in this section. Using the winning strategies for the quantum graph coloring game, the chromatic number of a quantum graph can be defined in different mathematical models: $loc, q, qa, qc, C^*, hered, alg$. In this paper, we will restrict our discussion to the classical $(loc)$ and quantum $(q)$ chromatic numbers. We begin by recalling an algebraic definition of quantum graph coloring that arises from the non-local game in \cite{bgh}. \begin{defn}[ \cite{bgh}] \label{qcoloring} Let $\mathcal{G} = (S, \mathcal{M}, B(\mathcal{H} ))$ be an irreflexive quantum graph. We say that there is a $c$-coloring of $\mathcal{G}$ if there exists a finite von Neumann algebra $\mathcal{N}$ with a faithful normal trace and projections $\{P_a\}_{a=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ such that \begin{enumerate} \item $P_a^2 = P_a = P_a^*$, for $1 \le a \le c$, \item $\sum_{a=1}^c P_a = I_{\mathcal{M} \otimes \mathcal{N}}$, \end{enumerate} satisfying the following condition: \begin{equation} \label{annihilate S} P_a (X \otimes I_{\mathcal{N}}) P_a = 0, \; \forall X \in S \mbox{ and } 1 \le a \le c. \end{equation} If $\dim(\mathcal{N}) = 1$, we call it a \textit{classical} $(loc)$ coloring of $\mathcal{G}$ and if $\dim(\mathcal{N}) < \infty$, we call it a \textit{quantum} $(q)$ coloring of $\mathcal{G}$. More generally, when $\mathcal{N}$ is a finite von Neumann algebra (possibly infinite dimensional), it is called a \textit{quantum commuting} $(qc)$ coloring of $\mathcal{G}$. \end{defn} The projections $\{P_a\}_{a=1}^c$ are obtained from the winning strategies of the non-local quantum graph coloring game. In particular, when $\mathcal{M} = D_n$, we recover the usual classical and quantum coloring of classical graphs on $n$ vertices. \begin{defn}[ \cite{bgh}] \label{qc-qg} Let $\mathcal{G} = (S, \mathcal{M}, B(\mathcal{H} ))$ be an irreflexive quantum graph. The \textit{quantum chromatic number} of $\mathcal{G}$ is defined to be the least $c$ such that there exists a $c$-coloring of $\mathcal{G}$ in the sense of definition \ref{qcoloring}, with $\dim(\mathcal{N}) < \infty$. We denote this quantum chromatic number by $\chi_q(\mathcal{G})$. Moreover, when $\dim(\mathcal{N}) = 1$, it is called the \textit{classical} chromatic number $\chi(\mathcal{G}) = \chi_{loc}(\mathcal{G})$ and when $dim(\mathcal{N}) = \infty$, it is called the \textit{quantum commuting} chromatic number $\chi_{qc}(\mathcal{G})$. \end{defn} It was shown in \cite{bgh} that every quantum graph $\mathcal{G} = (S, \mathcal{M}, M_n)$ has a finite quantum coloring and $\chi_q(\mathcal{G}) \le \dim(\mathcal{M})$. Further, for all quantum graphs $\mathcal{G}$, we have \begin{equation} \chi_{qc} (\mathcal{G}) \le \chi_{q} (\mathcal{G}) \le \chi (\mathcal{G}). \end{equation} Also, if $(S, \mathcal{M}, M_n)$ and $(T, \mathcal{M}, M_n)$ are two quantum graphs such that $S \subseteq T$, then $\chi_t(S, \mathcal{M}, M_n) \le \chi_t(T, \mathcal{M}, M_n)$, where $t \in \{loc, q, qc\}$.
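To see how condition \eqref{annihilate S} recovers proper colorings in the commutative case, let $G$ be a classical graph on $n$ vertices with edge space $S_G \subseteq M_n$ as in example \ref{classical S}, take $\mathcal{N} = \mathbb{C}$, and let $f: [n] \to [c]$ be an assignment of colors to the vertices. The diagonal projections $P_a = \sum_{f(i) = a} e_{ii}$ satisfy $\sum_{a=1}^c P_a = I$, and for an edge $(i,j)$ of $G$, \[ P_a \, e_{ij} \, P_a = \begin{cases} e_{ij}, & \mbox{ if } f(i) = f(j) = a, \\ 0, & \mbox{ otherwise.} \end{cases} \] Hence condition \eqref{annihilate S} holds for all $X \in S_G$ precisely when $f$ is a proper coloring of $G$, which leads to the following example.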
\begin{eg} Let $G$ be a classical graph on $n$ vertices and $\mathcal{G} = (S_G, D_n, M_n)$ be the quantum graph associated with $G$, as in example \ref{classical S}. Then, \begin{equation} \chi(\mathcal{G}) = \chi(G) \mbox{ and } \chi_q(\mathcal{G}) = \chi_q(G). \end{equation} \end{eg} \begin{eg} For complete quantum graphs, the quantum chromatic number is the full dimension of the quantum vertex set. That is, $\chi_q(M_n, \mathcal{M}, M_n) = dim(\mathcal{M})$. \end{eg} \begin{rem} Indeed, $\chi(M_n, \mathcal{M}, M_n) < \infty$ if and only if $\mathcal{M}$ is abelian. In particular, if $\mathcal{M}$ is non-abelian, then $\chi(M_n, \mathcal{M}, M_n) \ne \chi_q(M_n, \mathcal{M}, M_n)$. \end{rem} It is also useful to note that definition \ref{qc-qg} is a special case of Stahlke's entanglement-assisted chromatic number \cite{stahlke}. Also, when $\mathcal{N} = \mathbb{C}$, it is equivalent to Kim \& Mehta's strong chromatic numbers of non-commutative graphs \cite{mehta}. \section{Use of quantum adjacency matrix in coloring} \label{pinching and twirling} While definition \ref{qg1} was used in \cite{bgh} for developing the chromatic number of quantum graphs, definition \ref{qg2} offers the advantage of associating a spectrum with the quantum graph, which is useful for estimating these chromatic numbers. We introduce this now: \begin{defn} Let $\mathcal{M}$ be a finite dimensional C*-algebra equipped with its tracial $\delta$-form $\psi$, and let $\mathcal{G} = (S, \mathcal{M}, B(L^2(\mathcal{M},\psi)))$ be an (undirected) quantum graph on $(\mathcal{M}, \psi)$. The \textit{spectrum of $\mathcal{G}$} is defined to be the spectrum of the quantum adjacency operator $A$, defined by \begin{equation} \label{fix A} A = \delta^{2} (\psi \otimes I) P_S (I \otimes \eta), \end{equation} where $P_S$ is the orthogonal bimodule projection onto $S$. \end{defn} Note that $A$ is \textit{self-adjoint} and so, the spectrum of an undirected quantum graph is real. \begin{conv} \label{assumption} For the remainder of this paper, $\mathcal{M}$ denotes a finite dimensional C*-algebra equipped with its \textit{tracial} $\delta$-form $\psi$, as given in \ref{plancheral}. We assume that our quantum graph $(S, \mathcal{M}, B(L^2(\mathcal{M},\psi)))$ is irreflexive. Further, $A$ always refers to the unique self-adjoint quantum adjacency matrix associated with $S$. We denote this quantum graph by $\mathcal{G} = (\mathcal{M}, \psi, A, S)$. \end{conv} We now show the connection between quantum adjacency matrix and quantum graph coloring by generalizing some algebraic results in \cite{EW1} to the quantum graph setting. The following lemma proves that the ``pinching" operation annihilates the quantum adjacency matrix and leaves the commutant of the quantum vertex set invariant. \begin{lem} \label{pinching1} Let $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ be an irreflexive quantum graph. If $\{P_k\}_{k=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ is an arbitrary $c$-quantum coloring of $\mathcal{G}$, then \begin{equation} \label{annihilate A} \sum_{k =1}^c P_k (A \otimes I_{\mathcal{N}}) P_k = 0, \end{equation} \begin{equation} \label{p commute d} \sum_{k = 1}^c P_k (E \otimes I_{\mathcal{N}}) P_k = E \otimes I_{\mathcal{N}}, \;\; \forall E \in \mathcal{M}'. \end{equation} \end{lem} \begin{proof} We first show that $A \in S$. Recall that $A$ is given by \eqref{fix A}, using the orthogonal bimodule projection onto $S$.
Using the inverse relations \eqref{a to p} and \eqref{p to a}, it can be shown that $P_S$ must be of the form $\delta^{-2} m( A \otimes (\cdot) )m^*$. In particular, $P_S(A) = \delta^{-2} m(A \otimes A)m^* = A$ by the Schur idempotent property of $A$. So, $A \in range(P_S) = S$. Now, by \eqref{annihilate S}, we get that $\sum_{k =1}^c P_k (A \otimes I_{\mathcal{N}}) P_k = 0$. \noindent Equation \eqref{p commute d} follows from the fact that the projections $P_k \in \mathcal{M} \otimes \mathcal{N}$ commute with $E \otimes I_{\mathcal{N}} \in \mathcal{M}' \otimes \mathcal{N}'$, and $\sum_{k=1}^c P_k = I_{\mathcal{M} \otimes \mathcal{N}}$. \end{proof} The next lemma is a corresponding result for the ``twirling" operation. \begin{lem} \label{pinching2} Suppose $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ is an irreflexive quantum graph and $\{P_k\}_{k=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ is a $c$-quantum coloring of $\mathcal{G}$. Define $U := \sum_{l =1}^c \omega^l P_l$, where $\omega = e^{2 \pi i / c}$ is a $c^{th}$ root of unity. Then, \begin{equation} \sum_{k =1}^c P_k (X \otimes I_{\mathcal{N}}) P_k = \dfrac{1}{c} \sum_{k =1}^c U^k (X \otimes I_{\mathcal{N}}) (U^*)^k, \;\;\; \forall \; X \in B(L^2(\mathcal{M})). \end{equation} In particular, \begin{equation} \label{l7} \sum_{k =1}^c U^k (A \otimes I_{\mathcal{N}}) (U^*)^k = 0, \end{equation} \begin{equation} \sum_{k =1}^c U^k (E \otimes I_{\mathcal{N}}) (U^*)^k = c \; (E \otimes I_{\mathcal{N}}), \;\; \forall E \in \mathcal{M}'. \end{equation} \end{lem} \begin{proof} Note that $U^* = \sum_{l=1}^c \omega^{-l} P_l$ since $\{P_l\}_{l =1}^c$ are self-adjoint. Also, the $k^{th}$ power of $U$ is given by \[ U^k = \sum_{l=1}^c \omega^{lk} P_l \] as the projections $\{P_l\}_{l =1}^c$ are mutually orthogonal, that is $P_i P_j = 0$ if $i \ne j$. Now, for $X \in B(L^2(\mathcal{M}))$, we obtain: \begin{eqnarray*} \sum_{k =1}^c U^k (X \otimes I_{\mathcal{N}}) (U^*)^k & = & \sum_{k=1}^c \sum_{l,l' =1}^c \omega^{(l - l')k} P_l (X \otimes I_{\mathcal{N}}) P_{l'} \\ & = & \sum_{l,l' =1}^c ( \sum_{k=1}^c \omega^{(l - l')k} ) P_l (X \otimes I_{\mathcal{N}}) P_{l'} \\ & = & \sum_{l,l' =1}^c ( c \; \delta_{l,l'} ) P_l (X \otimes I_{\mathcal{N}}) P_{l'}, \mbox{ where $\delta_{l,l'}$ denotes the Kronecker delta}\\ & = & c \sum_{l =1}^c P_l (X \otimes I_{\mathcal{N}}) P_l \end{eqnarray*} Hence, we get the result. The rest follows from lemma \ref{pinching1}. \end{proof} Next, we note some obvious properties of $A \otimes I_{\mathcal{N}}$ for future reference. \begin{prop} \label{A tilda} Suppose $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ is an irreflexive quantum graph and $\{P_k\}_{k=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ is an arbitrary $c$-quantum coloring of $\mathcal{G}$. Assume that $2 \le \dim(\mathcal{M}) < \infty$ and $\mathcal{N} \subseteq B(\mathcal{H} )$ for some Hilbert space $\mathcal{H} $, say $\dim(\mathcal{H} ) = d$. Define $\tilde{A} = A \otimes I_{\mathcal{N}}$. Then \begin{enumerate} \item $\tilde{A}$ is self-adjoint and has real eigenvalues. \item The spectrum of $\tilde{A}$ has the same elements as the spectrum of $A$, but each with a multiplicity of $d$. In particular, the largest and smallest eigenvalue of $\tilde{A}$ coincide with the largest and smallest eigenvalue of $A$, respectively. \item $\tilde{A} = \sum_{a,b=1}^c P_a \tilde{A} P_b$.
\item $\tilde{A}$ can be expressed as a block partitioned matrix $\begin{bmatrix} \widehat{A}_{11} & \widehat{A}_{12} & \ldots & \widehat{A}_{1c} \\ \widehat{A}_{21} & \widehat{A}_{22} & \ldots & \widehat{A}_{2c} \\ \vdots & \vdots & \vdots & \vdots \\ \widehat{A}_{c1} & \widehat{A}_{c2} & \ldots & \widehat{A}_{cc} \end{bmatrix}$, such that $\widehat{A}_{ii} = 0$ for all $i \in [c]$. In particular, $\Tr(A) = \dfrac{1}{d} \Tr(\tilde{A}) = 0$. \end{enumerate} \end{prop} \begin{proof} The first two statements are evident since $A$ is self-adjoint and tensoring with identity only produces more copies of the same eigenvalues. The third statement follows from the fact that $\sum_{k=1}^c P_k = I_{\mathcal{M} \otimes \mathcal{N}}$. To see the last statement, note that $\tilde{A}$ can be interpreted as a giant matrix over complex numbers as $\mathcal{M}$ and $\mathcal{N}$ are finite dimensional. Choose an orthonormal basis for $L^2(\mathcal{M}) \otimes \mathcal{H} $ such that all the projections $P_k$ are represented as diagonal matrices. Identify $\widehat{A}_{ab}$ with the matrix $P_a \tilde{A} P_b$. Then, we get the desired block partition. From \eqref{annihilate A}, it follows that $\widehat{A}_{ii} = 0$ for $1 \le i \le c$. \end{proof} \section{Spectral lower bounds for $\chi_q(\mathcal{G})$ and $\chi(\mathcal{G})$} \label{bounds} In this section, we obtain spectral lower bounds for the quantum chromatic number of quantum graphs, generalizing results from \cite{EW1}. Since $\chi_q(\mathcal{G}) \le \chi(\mathcal{G})$ \cite{bgh}, our estimates are also lower bounds for the classical chromatic number of quantum graphs. \noindent Our spectral bounds for an undirected quantum graph $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ can be summarized as follows: \begin{equation} \label{all bounds} 1 + \max \left\{ \dfrac{\lambda_{\max}}{|\lambda_{\min}|}, \dfrac{\dim(S)}{\dim(S) - \dim(\mathcal{M}) \gamma_{\min}}, \dfrac{s^\pm}{s^\mp}, \dfrac{n^\pm}{n^\mp} , \dfrac{\lambda_{\max}}{\lambda_{\max} - \gamma_{\max}+ \theta_{max}} \right \} \le \chi_q(\mathcal{G}) \le \chi(\mathcal{G}). \end{equation} \noindent Here, $\lambda_{\max}, \lambda_{\min}$ denote the maximum and minimum eigenvalues of $A$; $s^+, s^-$ denote the sum of the squares of the positive and negative eigenvalues of $A$ respectively; $n^+, n^-$ are the number of positive and negative eigenvalues of $A$ including multiplicities; $\gamma_{\max}, \gamma_{\min}$ denote the maximum and minimum eigenvalues of the signless Laplacian operator (definition \ref{q-defn}); and $\theta_{\max}$ denotes the maximum eigenvalue of the Laplacian operator (definition \ref{q-defn}). The key ingredients in proving these bounds are lemmas \ref{pinching1} and \ref{pinching2}. Using these, the proofs of the corresponding classical bounds can essentially be adapted to our setting. Throughout our discussion, we follow convention \ref{assumption}. So, $A$ always refers to the unique self-adjoint quantum adjacency matrix associated with $(S, \mathcal{M}, B(L^2(\mathcal{M},\psi)))$, as in \eqref{fix A}. \subsection{Hoffman's bound} One of the well-known spectral bounds in graph theory is Hoffman's bound \cite{hoffman}. This is a lower bound on the chromatic number of a graph using the largest and smallest eigenvalues of the adjacency matrix.
The classical bound is as follows: If $G$ is an irreflexive classical graph whose adjacency matrix $A$ has eigenvalues $\lambda_{\max} = \lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_n = \lambda_{\min}$, then \begin{equation} 1 + \dfrac{\lambda_{\max}}{| \lambda_{\min} |} \le \chi(G). \end{equation} \noindent We can prove a quantum version of this bound using the following result from linear algebra. \begin{lem} \label{l2} Let $A$ be a self-adjoint matrix, block partitioned as $\begin{bmatrix} A_{11} & A_{12} & \ldots & A_{1n} \\ A_{21} & A_{22} & \ldots & A_{2n} \\ \vdots & \vdots & \vdots & \vdots \\ A_{n1} & A_{n2} & \ldots & A_{nn} \end{bmatrix}$. Then, \[ (n-1) \lambda_{\min}(A) + \lambda_{\max}(A) \le \sum_{i=1}^n \lambda_{\max}(A_{ii}),\] where $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$ represent the maximum and minimum eigenvalues of that matrix. \begin{proof} We start with the case $n=2$. \noindent Let $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ be a normalized eigenvector ($\|x_1\|^2 + \|x_2\|^2 = 1$) corresponding to $\lambda_{\max}(A)$. Define $y = \begin{bmatrix} \frac{\|x_2\|}{\|x_1\|} x_1 \\ - \frac{\|x_1\|}{\|x_2\|} x_2 \end{bmatrix}$. Then, we have \[ \lambda_{\max}(A) + \lambda_{\min}(A) \le \bra{x}A\ket{x} + \bra{y}A\ket{y} = \dfrac{\bra{x_1} A_{11} \ket{x_1}}{\|x_1\|^2} + \dfrac{\bra{x_2} A_{22} \ket{x_2}}{\|x_2\|^2} \le \lambda_{\max}(A_{11}) + \lambda_{\max}(A_{22}).\] The general case follows by induction on $n$. \end{proof} \end{lem} \noindent The generalization of Hoffman's bound to quantum graphs is as follows: \begin{thm} Let $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ be an irreflexive quantum graph and $\lambda_{\max} = \lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_{\dim(\mathcal{M})} = \lambda_{\min}$ be all the eigenvalues of $A$. Then \begin{equation} 1 + \dfrac{\lambda_{\max}}{| \lambda_{\min} |} \le \chi_q(\mathcal{G}). \end{equation} \begin{proof} Let $\{P_k\}_{k=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ be a $c$-quantum coloring of $\mathcal{G}$ and $\tilde{A} = A \otimes I_{\mathcal{N}}$. Partition $\tilde{A}$ as $[ \widehat{A}_{ab} ]_{a,b=1}^c$, as in proposition \ref{A tilda}. Applying lemma \ref{l2}, we get \begin{equation} \label{eq5} (c-1) \lambda_{\min}(\tilde{A}) + \lambda_{\max}(\tilde{A}) \le \sum_{i=1}^c \lambda_{\max}(\widehat{A}_{ii}). \end{equation} But $\widehat{A}_{ii} = 0$ for all $1 \le i \le c$. Hence equation \eqref{eq5} reduces to \[ (c-1) \lambda_{\min}(\tilde{A}) + \lambda_{\max}(\tilde{A}) \le 0.\] Recall that $\lambda_{\min}(\tilde{A}) = \lambda_{\min}(A)$ and $\lambda_{\max}(\tilde{A}) = \lambda_{\max}(A)$. So, we get $(c-1) \lambda_{\min}(A) + \lambda_{\max}(A) \le 0$. On rearranging and taking minimum over all $c$, we get \[ 1 + \dfrac{\lambda_{\max}(A)}{| \lambda_{\min} (A) |} \le \chi_q(\mathcal{G}).\] \end{proof} \end{thm} \subsection{Lower bound using edge number} In this section, we prove a spectral lower bound on the quantum chromatic number using a quantum analogue for the number of edges in the graph. For a classical graph $G$ with $n$ vertices and $m$ edges, it was shown \cite{edge} that \begin{equation} \label{b3} 1 + \dfrac{2m}{2m - n \gamma_{\min}} \le \chi(G), \end{equation} where $\gamma_{\min}$ is the minimum eigenvalue of the signless Laplacian of $G$. To prove a generalization of this bound to arbitrary quantum graphs $(\mathcal{M}, \psi, A, S)$, we first introduce a quantum analogue for $m, n$ and $\gamma_{\min}$. 
Recall that the degree matrix for classical graphs is a diagonal matrix obtained from the action of the adjacency matrix on the all 1s vector. This can be extended to quantum graphs as follows: \begin{defn} Let $\mathcal{G} = (\mathcal{M},\psi,A, S)$ be a quantum graph and $\mathbb{1}$ denote the unit in $\mathcal{M}$. Then the \textit{quantum degree matrix} of $\mathcal{G}$ is a linear operator $D \in B(L^2(\mathcal{M}))$ given by \[ D: \mathcal{M} \longrightarrow \mathcal{M} \mbox{ as } x \mapsto x (A \mathbb{1}), \forall x \in \mathcal{M}.\] In other words, $D$ can be interpreted as $A \mathbb{1} \in \mathcal{M}$ viewed as an element of $B(L^2(\mathcal{M}))$ under the \textit{right} regular representation. \end{defn} \begin{rem}The definition $D = A \mathbb{1}$ was also used in \cite{randomqg} and \cite{matsuda}. The only difference in our case is that we view $D$ under the \textit{right} regular representation, instead of the usual left regular representation of $\mathcal{M}$. The advantage of using right regular representation is that $D$ then belongs to $\mathcal{M}'$. \end{rem} Our next goal is to define a quantum analogue for the ``number of edges" in the graph. To do that, we need the following result: \begin{prop} \label{value of 2m} Let $\mathcal{M}$ be a finite dimensional C*-algebra, equipped with its tracial $\delta$-form $\psi$. If $(\mathcal{M},\psi,A, S)$ is a quantum graph with degree matrix $D$, then, \begin{equation} \Tr(D) = \delta^2 \psi(A \mathbb{1}) = \dim(S). \end{equation} \begin{proof} Let $P_S: B(L^2(\mathcal{M})) \to B(L^2(\mathcal{M}))$ denote the orthogonal bimodule projection onto $S$. We can express $P_S$ as an element $\sum_{i=1}^t x_i \otimes y_i^{op} \in \mathcal{M} \otimes \mathcal{M}^{op}$, such that $P_S (a \otimes b) = \sum_{i=1}^{t} x_ia \otimes by_i$, for all $a,b \in \mathcal{M}$ using the correspondence mentioned in remark \ref{cbm}. Now, $A = \delta^2 (\psi \otimes I)P_S(I \otimes \eta)$ implies \begin{equation} A(\mathbb{1}) = \delta^2 (\psi \otimes I) P_S (\mathbb{1} \otimes \mathbb{1}) = \delta^2 (\psi \otimes I) (\s{t} x_i \otimes y_i) = \delta^2 \s{t} \psi(x_i) y_i. \end{equation} Thus, \vspace{-0.5cm} \begin{eqnarray*} \psi(A \mathbb{1}) & = & \psi( \delta^2 \s{t} \psi(x_i) y_i) = \delta^2 \s{t} \psi(x_i) \psi(y_i) \\ & = & \delta^2 \s{t} \inner{x_i}{\mathbb{1}}\inner{y_i}{\mathbb{1}} \\ & = & \delta^2 \s{t} \inner{x_i \otimes y_i}{\mathbb{1} \otimes \mathbb{1}} = \delta^2 \inner{ \s{t} x_i \otimes y_i}{\mathbb{1} \otimes \mathbb{1}} \\ & = & \delta^2 \inner{P_S}{I}, \mbox{ when viewed as operators on $B(L^2(\mathcal{M}))$} \\ & = & \delta^2 \dfrac{\Tr(P_S)}{\dim(B(L^2(\mathcal{M})))} \\ & = & \dim(\mathcal{M}) \dfrac{\dim(S)}{\dim(\mathcal{M})^2} = \dfrac{\dim(S)}{\dim(\mathcal{M})} \end{eqnarray*} where we have used the fact that $\psi$ is a tracial state and $\delta^2 = \dim(\mathcal{M})$. Also, the trace on $B(L^2(\mathcal{M}))$ restricted to $\mathcal{M}$ (or $\mathcal{M}'$ by symmetry) is just $\dim(\mathcal{M}) \psi$. So, \[ \Tr(D) = \dim(\mathcal{M}) \; \psi(A \mathbb{1}). \] Hence, $\Tr(D) = \dim(S)$. \end{proof} \end{prop} We now define quantum analogues of some classical quantities: \begin{defn} \label{q-defn} Let $\mathcal{G} = (\mathcal{M},\psi,A, S)$ be an irreflexive quantum graph with degree matrix $D$. \begin{enumerate} \item The \textit{quantum vertex number} for $\mathcal{G}$ is defined to be $\dim(\mathcal{M})$. 
\item The \textit{quantum edge number} for $\mathcal{G}$ is defined to be $\dfrac{\Tr(D)}{2} = \dfrac{\dim(S)}{2}.$ \item The \textit{Laplacian} of $\mathcal{G}$ is the linear operator $L = D - A \in B(L^2(\mathcal{M}))$. \item The \textit{signless Laplacian} of $\mathcal{G}$ is the linear operator $Q = D + A \in B(L^2(\mathcal{M}))$. \end{enumerate} \end{defn} For a classical irreflexive graph $G = (V,E)$, these definitions clearly coincide with the usual values. In particular, if $\mathcal{G} = (S_G, D_{|V|}, M_{|V|})$, then the quantum vertex number is $|V|$ and the quantum edge number is $|E|$ since $2 |E| = \sum_{v \in V} deg(v) = \Tr(D)$. \begin{rem} The quantum edge number need not be an integer in general. But for most purposes, we will only need $2m = \Tr(D) = \dim(S)$. \end{rem} We are now ready to prove a quantum version of the spectral bound in \eqref{b3}. \begin{thm} Let $\mathcal{G} = (\mathcal{M},\psi, A, S)$ be an irreflexive quantum graph . Then \begin{equation} 1 + \dfrac{2m}{2m - n \gamma_{\min}} \le \chi_q(\mathcal{G}), \end{equation} where $m$ is the quantum edge number, $n$ is the quantum vertex number and $\gamma_{\min}$ is the minimum eigenvalue of the signless Laplacian of $\mathcal{G}$, in the sense of definition \ref{q-defn}. More precisely, \begin{equation} 1 + \frac{\dim(S)}{\dim(S) -\dim(\mathcal{M}) \gamma_{\min}} \le \chi_q(\mathcal{G}). \end{equation} \begin{proof} Let $\{P_k\}_{k=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ be a $c$-quantum coloring of $\mathcal{G}$ and let $U$ be defined as in lemma \ref{pinching2}. Then, \eqref{l7} can be rearranged as $ U^c (A \otimes I_{\mathcal{N}})(U^*)^c = - \sum_{k=1}^{c-1} U^k (A \otimes I_{\mathcal{N}}) (U^*)^k$. Using $D - Q = - A$ and $U^c = I_{\mathcal{M} \otimes \mathcal{N}}$, we get \begin{eqnarray*} A \otimes I_{\mathcal{N}} & = & \sum_{k=1}^{c-1} U^k ((D-Q) \otimes I_{\mathcal{N}}) (U^*)^k \\ & = & \sum_{k=1}^{c-1} U^k (D \otimes I_{\mathcal{N}}) (U^*)^k - \sum_{k=1}^{c-1} U^k (Q \otimes I_{\mathcal{N}}) (U^*)^k \\ \label{eq1} & = & (D \otimes I_{\mathcal{N}}) \sum_{k=1}^{c-1} U^k (U^*)^k - \sum_{k=1}^{c-1} U^k (Q \otimes I_{\mathcal{N}}) (U^*)^k \\ & = & (c-1) (D \otimes I_{\mathcal{N}}) - \sum_{k=1}^{c-1} U^k (Q \otimes I_{\mathcal{N}}) (U^*)^k \end{eqnarray*} where we have used the fact that $D \in \mathcal{M}'$ and hence $D \otimes I_{\mathcal{N}}$ commutes with $U \in \mathcal{M} \otimes \mathcal{N}$. Let $\mathcal{N}$ be represented in some $B(\mathcal{H} )$ and let $u$ denote a unit vector in $\mathcal{H} $ such that $\inner{u}{u} = 1$. Further, let $\ket{\xi} = \mathbb{1} \otimes u$ denote a column vector in $L^2(\mathcal{M}) \otimes \mathcal{H} $ and $\bra{\xi}$ denote its corresponding conjugate row vector. Multiplying the left and right most sides of the above equation by $\bra{\xi}$ from the left and by $\ket{\xi}$ from the right, we obtain \begin{equation} \label{eq2} \bra{\xi} A \otimes I_{\mathcal{N}} \ket{\xi} = (c-1) \bra{\xi} D \otimes I_{\mathcal{N}} \ket{\xi} - \sum_{k=1}^{c-1} \bra{\xi} U^k (Q \otimes I_{\mathcal{N}}) (U^*)^k \ket{\xi}. \end{equation} Now, $\bra{\xi}A \otimes I_{\mathcal{N}} \ket{ \xi} = \inner{\mathbb{1}}{A \mathbb{1}} \inner{u}{u} = \psi( (A\mathbb{1})^*)= \psi( A\mathbb{1}) = \dfrac{\dim(S)}{\dim(\mathcal{M})}$, where we use the *-preserving property of $A$ (remark \ref{* preserve}) and proposition \ref{value of 2m}. Similarly, $\langle \xi | D \otimes I_{\mathcal{N}} | \xi \rangle = \dfrac{\dim(S)}{\dim(\mathcal{M})}$. 
To estimate the last term, recall that eigenvalues are invariant under unitary conjugation and tensoring with identity only changes their multiplicity. So, \begin{eqnarray*} \gamma_{\min} & = & \min \left \{ \bra{w} Q \ket{w} : w \in L^2(\mathcal{M}), \inner{w}{w} = 1 \right\} \\ & = & \min \left \{ \langle v | Q \otimes I_{\mathcal{N}} | v \rangle : v \in L^2(\mathcal{M}) \otimes \mathcal{H} , \inner{v}{v} = 1 \right \} \\ & = & \min \left \{ \langle v | U^k(Q \otimes I_{\mathcal{N}})(U^*)^k | v \rangle : v \in L^2(\mathcal{M}) \otimes \mathcal{H} , \inner{v}{v} = 1 \right \} \\ & \le & \langle \xi | U^k (Q \otimes I_{\mathcal{N}}) (U^*)^k | \xi \rangle, \;\; \forall k \in [c]. \end{eqnarray*} Hence, \eqref{eq2} leads to \begin{equation} \dfrac{\dim(S)}{\dim(\mathcal{M})} \le (c-1) \dfrac{\dim(S)}{\dim(\mathcal{M})} - (c-1) \gamma_{\min}, \end{equation} which upon rearranging yields $1 + \dfrac{\dim(S)}{\dim(S) - \dim(\mathcal{M}) \gamma_{\min}} \le c$. Taking minimum over all $c$, we get the desired bound. \end{proof} \end{thm} \subsection{Bound using the sum of squares of eigenvalues} In \cite{ando-lin}, it was proved that for a classical graph $G$, \begin{equation} 1 + \max \left \{ \dfrac{s^+}{s^-} , \dfrac{s^-}{s^+} \right \} \le \chi(G), \end{equation} where $s^+$ is the sum of the squares of the positive eigenvalues of the adjacency matrix and $s^-$ is the sum of the squares of its negative eigenvalues. In this section, we show that the above bound also works in the setting of quantum graphs. We first recall the following result from linear algebra, whose proof can be found in \cite{ando-lin}. \begin{lem} \label{l1} Let $X = [X_{ij}]_{i,j=1}^r$ and $Y = [Y_{ij}]_{i,j=1}^r$ be two positive semidefinite matrices conformally partitioned. If $X_{ii} = Y_{ii}$ for $1 \le i \le r$ and $XY =0$, then $\Tr(X^*X) \le (r-1) \Tr(Y^*Y)$. \end{lem} We now adapt the proof of the classical bound in \cite{ando-lin} to the quantum case. \begin{thm} \label{bound3} Let $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ be an irreflexive quantum graph and $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_{\dim (\mathcal{M})}$ be all the eigenvalues of $A$. Let $s^+ = \sum_{\lambda_i > 0} (\lambda_i)^2$ and $s^- = \sum_{\lambda_i < 0} (\lambda_i)^2$. Then, \begin{equation} 1 + \max \left \{ \dfrac{s^+}{s^-} , \dfrac{s^-}{s^+} \right \} \le \chi_q(\mathcal{G}). \end{equation} \begin{proof} Let $\{P_k\}_{k=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ be a $c$-quantum coloring of $\mathcal{G}$. Further, let $\tilde{A} = A \otimes I_{\mathcal{N}}$ and let $\mu_1 \ge \mu_2 \ge \ldots \ge \mu_t$ be all the eigenvalues of $\tilde{A}$. Consider a spectral decomposition of $\tilde{A}$, \begin{equation} \tilde{A} = \s{t} \mu_i (v_i v_i^*), \mbox{ where } v_i \in L^2(\mathcal{M}) \otimes \mathcal{H} , \end{equation} and write $\tilde{A} = \tilde{B} - \tilde{C}$, where \begin{equation} \label{eq3} \tilde{B} = \sum_{ \mu_i > 0} \mu_i (v_i v_i^*) \hspace{1cm} \tilde{C} = \sum_{ \mu_i < 0} - \mu_i (v_i v_i^*). \end{equation} Suppose $\mathcal{N} \subseteq B(\mathcal{H} )$ for some Hilbert space $\mathcal{H} $. Then, \begin{equation} \label{eq4} \Tr( {\tilde{B}}^* \tilde{B}) = \sum_{ \mu_i > 0} \mu_i^2 = \dim(\mathcal{H} ) \; s^+ \mbox{ and } \Tr( \tilde{C}^* \tilde{C}) = \sum_{ \mu_i < 0} \mu_i^2 = \dim(\mathcal{H} ) \; s^- . \end{equation} Partition $\tilde{A}$ as $ [ \widehat{A}_{ab} ]_{a,b=1}^c$ as in proposition \ref{A tilda}.
Similarly, let \[ \tilde{B} = [\widehat{B}_{ab}]_{a,b=1}^c = \sum_{a,b =1}^c P_a \tilde{B} P_b \mbox{ and } \tilde{C} = [\widehat{C}_{ab}]_{a,b=1}^c = \sum_{a,b =1}^c P_a \tilde{C} P_b.\] Now, $\tilde{B}$ and $\tilde{C}$ are positive semidefinite matrices that are conformally partitioned. Further, $\widehat{B}_{ii} = \widehat{C}_{ii}$ since $0 = P_i \tilde{A} P_i = P_i \tilde{B} P_i - P_i \tilde{C} P_i$ for all $1 \le i \le c$. Also $\tilde{B}\tilde{C} = \tilde{C}\tilde{B} = 0$. So, by lemma \ref{l1} and \eqref{eq4}, it follows that $\dfrac{s^+}{s^-} \le c - 1$ and $\dfrac{s^-}{s^+} \le c - 1$. Taking minimum over all $c$, we get $1 + \max \left \{ \dfrac{s^+}{s^-} , \dfrac{s^-}{s^+} \right \} \le \chi_q(\mathcal{G})$. \end{proof} \end{thm} \subsection{Inertial lower bound} In this section, our goal is to generalize the following inertial bound \cite{inertial} to quantum graphs: \begin{equation} 1 + \max \left \{ \dfrac{n^+}{n^-}, \dfrac{n^-}{n^+} \right \} \le \chi(G), \end{equation} where $(n^+, n^0, n^-)$ is the inertia of $G$. We begin by defining the inertia of a quantum graph: \begin{defn} Let $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ be a quantum graph and $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_{\dim(\mathcal{M})}$ denote the eigenvalues of $A$. The \textit{inertia} of $\mathcal{G}$ is the ordered triple $(n^+, n^0, n^-)$, where $n^+$, $n^0$ and $n^-$ are the numbers of positive, zero and negative eigenvalues of $A$ including multiplicities. \end{defn} \begin{thm} Let $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ be an irreflexive quantum graph with inertia $(n^+, n^0, n^-)$. Then, \begin{equation} 1 + \max \left \{ \dfrac{n^+}{n^-}, \dfrac{n^-}{n^+} \right \} \le \chi_q(\mathcal{G}). \end{equation} \end{thm} \begin{proof} Let $\{P_k\}_{k=1}^c \subseteq \mathcal{M} \otimes \mathcal{N}$ be a $c$-quantum coloring of $\mathcal{G}$. Let $U$ be defined as in lemma \ref{pinching2} and $\tilde{A}, \tilde{B}$ and $\tilde{C}$ be defined as in the proof of theorem \ref{bound3}. Then, we have \begin{equation} \label{l5} \sum_{k=1}^{c-1} U^k \tilde{B} (U^*)^k - \sum_{k=1}^{c-1} U^k \tilde{C} (U^*)^k = \sum_{k=1}^{c-1} U^k \tilde{A} (U^*)^k = -\tilde{A} = \tilde{C} - \tilde{B}. \end{equation} Note that $\tilde{B}$ and $\tilde{C}$ are positive semidefinite operators with $rank(\tilde{B}) = n^+$ and $rank(\tilde{C}) = n^-$. Further let \[ P^+ = \sum_{\mu_i > 0} v_i v_i^* \mbox{ and } P^- = \sum_{\mu_i < 0} v_i v_i^* \] denote the orthogonal projectors onto the subspaces spanned by the eigenvectors corresponding to the positive and negative eigenvalues of $\tilde{A}$ respectively. Observe that $\tilde{B} = P^+ \tilde{A} P^+$ and $\tilde{C} = - P^- \tilde{A} P^-$. Multiplying \eqref{l5} by $P^-$ on both sides, we obtain: \begin{equation} \label{l6} P^- \sum_{k=1}^{c-1} U^k \tilde{B} (U^*)^k P^- - P^- \sum_{k=1}^{c-1} U^k \tilde{C} (U^*)^k P^- = \tilde{C} \end{equation} Now we use the fact that if $X,Y$ are two positive semidefinite matrices such that $X-Y$ is positive semidefinite, then $rank(X) \ge rank(Y)$ (indeed, $\ker(X) \subseteq \ker(Y)$ in this case). By applying this to \eqref{l6}, we get \[ rank(P^- \sum_{k=1}^{c-1} U^k \tilde{B} (U^*)^k P^-) \ge rank(\tilde{C}). \] Recall that the rank of a sum is less than or equal to the sum of the ranks of the summands, and that the rank of a product is less than or equal to the minimum of the ranks of the factors. So, we get $(c-1)n^+ \ge n^-$. Similarly, it can be shown that $(c-1)n^- \ge n^+$. Hence, $ \max \left \{ \dfrac{n^+}{n^-}, \dfrac{n^-}{n^+} \right \} \le c-1$. Taking minimum over all $c$, we get the desired bound.
\end{proof} \subsection{Bound using maximum eigenvalue of the Laplacian and signless Laplacian} Let $L$ and $Q$ denote the Laplacian and signless Laplacian of $\mathcal{G} = (\mathcal{M}, \psi, A, S)$ in the sense of definition \ref{q-defn}. Further, let $\lambda_{\max}, \theta_{\max}$ and $\gamma_{\max}$ denote the largest eigenvalue of $A, L$ and $Q$ respectively. Then \begin{equation} 1 + \dfrac{\lambda_{\max}}{\lambda_{\max} - \gamma_{\max}+ \theta_{max}} \le \chi_q(\mathcal{G}). \end{equation} Like the previous cases, this bound can also be shown by adapting the classical proof \cite{kolotilina} and applying lemma \ref{pinching2}. \section{Illustration} \label{application} In this section, we illustrate the tightness of these bounds in the case of complete quantum graphs. Let $K_{\mathcal{M}}$ denote the irreflexive complete quantum graph on $(\mathcal{M}, \psi)$. The quantum adjacency matrix in this case is given by $A = \delta^2 \psi( \cdot) \mathbb{1} - I$. For $x \in \mathcal{M}$, we have \begin{eqnarray*} A(x) & = & \delta^2 \psi(x) \mathbb{1} - x \\ & = & (\dim \mathcal{M}) \inner{x}{\mathbb{1}} \mathbb{1} - x \\ & = & \left( (\dim \mathcal{M}) P_{\mathbb{1}} - I \right)(x), \end{eqnarray*} where $P_{\mathbb{1}}: \mathcal{M} \to \mathcal{M}$ denotes the orthogonal projection onto $\mathbb{1}$, given by $x \mapsto \inner{x}{\mathbb{1}} \mathbb{1}$. Since $P_{\mathbb{1}}$ is a rank-1 projection, its spectrum is precisely $\{0,1\}$, where $0$ has a multiplicity of $\dim(\mathcal{M}) - 1$. Using functional calculus, we get \begin{equation} \sigma(A) = \{ \dim(\mathcal{M}) - 1, \;\; -1 \}, \end{equation} where $-1$ has a multiplicity of $\dim(\mathcal{M}) - 1$. Similarly, we get \begin{equation} \sigma(Q) = \{ 2 \dim(\mathcal{M}) - 2, \; \dim(\mathcal{M}) - 2 \}, \end{equation} where $\dim(\mathcal{M}) - 2$ has a multiplicity of $\dim(\mathcal{M}) - 1$, and \begin{equation} \sigma(L) = \{ \dim(\mathcal{M}) , 0 \}, \end{equation} where $\dim(\mathcal{M})$ has a multiplicity of $\dim(\mathcal{M}) - 1$. \noindent Thus, for an irreflexive complete quantum graph, we have: \vspace{-0.4cm} \begin{itemize}[noitemsep] \item $\lambda_{\max} = \dim \mathcal{M} - 1$, $\lambda_{\min} = - 1$ \item $\gamma_{\max} = 2 \dim(\mathcal{M}) - 2$, $\gamma_{\min} = \dim \mathcal{M} - 2$ \item $\theta_{\max} = \dim \mathcal{M}$ \item $s^+ = (\dim(\mathcal{M}) - 1)^2$, $s^- = \dim(\mathcal{M}) - 1$ \item $n^+ = 1$, $n^- = \dim(\mathcal{M}) -1$ \item $2m = \dim(\mathcal{M})^2 - \dim(\mathcal{M})$ \end{itemize} \noindent On applying these to theorem \ref{main result}, we see that all the five spectral bounds give the same result, namely: \begin{equation} \dim(\mathcal{M}) \le \chi_q(K_{\mathcal{M}}). \end{equation} The reverse inequality $\chi_q(K_{\mathcal{M}}) \le \dim(\mathcal{M})$ was proved in \cite{bgh}, and hence $\chi_q(K_{\mathcal{M}}) = \dim(\mathcal{M})$. So, we conclude that all the bounds in theorem \ref{main result} are tight in the case of complete quantum graphs. \section{Conclusion and future directions} \vspace{-0.3cm} In this work, we have shown that several spectral lower bounds for the chromatic number of classical graphs are also lower bounds for the classical and quantum chromatic numbers of quantum graphs. We believe that quantum graph spectral theory is a promising field of study. As a next step, it would be interesting to find bounds that exhibit a separation between the different variants of chromatic numbers of quantum graphs.
Alternatively, investigating examples of quantum graphs that show a separation between these spectral bounds would also be helpful. We hope to explore these in a future work. \section{Acknowledgments} \vspace{-0.3cm} The author is grateful to her PhD supervisor, Michael Brannan, for his valuable guidance and insights on this project. The author would also like to thank Larissa Kroell for useful discussions on quantum graph theory. This work was partially supported by NSF grants DMS-2000331 and DMS-1700267. \section{Appendix} \label{appendix} \vspace{-0.3cm} \begin{comment} If $\mathcal{M}$ is a unital finite dimensional C*-algebra, then by the structure theorem [REF] we have \begin{equation} \mathcal{M} \cong \bigoplus_{i=1}^N M_{n_i}, \end{equation} for some positive integers $N, n_1, n_2 \ldots n_N$. A faithful state on $M_n$ can be written uniquely of the form $\Tr(Q \;\; \cdot)$ for some positive matrix $Q \in M_n$ satisfying $\Tr(Q) = 1$. In particular, faithful states on $\mathcal{M}$ are of the form $\psi = \bigoplus_{i=1}^N \Tr(Q_i \;\; \cdot)$, where $Q_i \in M_{n_i}$ are positive matrices satisfying $\sum_{i=1}^N \Tr(Q_i) = 1$. We state the following characterization of $\delta$-forms on $\mathcal{M}$, given by [REF]: \begin{prop} Let $\psi = \bigoplus_{i=1}^N \Tr(Q_i \;\; \cdot)$ be a faithful state on a unital finite dimensional C*-algebra $\mathcal{M}$. Then $\psi$ is a $\delta$-form if and only if each $Q_i$ is invertible and $\Tr(Q_i^{-1}) = \delta^2$ for all $1 \le i \le N$. In particular, $\mathcal{M}$ admits a \textit{unique} tracial $\delta$-form with $\delta^2 = dim(\mathcal{M})$ given by \begin{equation} \label{plancheral trace} \psi = \dfrac{1}{dim(\mathcal{M})} \bigoplus_{i=1}^N n_i \Tr( \cdot ) \end{equation} \end{prop} \end{comment} Let $\mathcal{M}$ be a finite dimensional C*-algebra, equipped with its tracial $\delta$-form $\psi$. The properties of a quantum graph on $\mathcal{M} \subseteq B(L^2(\mathcal{M}, \psi))$ in the different perspectives is summarized in the following table. Here, $p = \sum_{i=1}^t a_i \otimes b_i \in \mathcal{M} \otimes \mathcal{M}^{op}$ and $m, \sigma$ denote the multiplication map and swap map on $\mathcal{M} \otimes \mathcal{M}^{op}$ respectively. Further, $\mathcal{H} = L^2(\mathcal{M}, \psi)$, $T \in B(L^2(\mathcal{M}))$, $ \xi \in L^2(\mathcal{M})$ and $x,y \in \mathcal{M}'$. 
\def\arraystretch{2} \begin{center} \begin{tabular}{| p{1.8cm} || p{1.8cm} | p{1.8cm} | p{2.5cm} | p{3.4cm} | p{3.2cm} |} \hline \textsc{property} & \textsc{classical graph} & $S \subseteq B(\mathcal{H} )$ & $p \in \mathcal{M} \otimes \mathcal{M}^{op}$ & $A: \mathcal{M} \to \mathcal{M}$ & $P: B(\mathcal{H} ) \to B(\mathcal{H} )$ \\ \hline \hline Bimodule structure & Relations on a set & $\mathcal{M}'S\mathcal{M}' \subseteq S$ & $\sum_{i} a_i (xTy) b_i = x(\sum_{i} a_i T b_i )y$ & $m(A \otimes xTy) m^* = \newline x (m(A \otimes T ) m^*) y$ & $P(xTy) = xP(T)y$ \\ \hline Schur \newline idempotent & $A \in M_n(\{0,1\})$ & $A \in S$ & $p^2 = p$ & $m(A \otimes A)m^* = \delta^{2} A$ & $P^2 = P$ \\ \hline Reflexive & $I \in S_{G}$ & $\mathcal{M}' \subseteq S$ & $m(p) = \mathbb{1}$ & $m(A \otimes I)m^* = \delta^{2} I$ & $P(I) = I$ \\ \hline Irreflexive & $ I \notin S_G$ & $\mathcal{M}' \perp S$ & $m(p) = 0$ & $m(A \otimes I)m^* = 0$ & $P(I) = 0$ \\ \hline Undirected & $A = A^T$ & $S = S^*$ & $\sigma(p) = p$ & $(I \otimes \eta^*m )(I \otimes A \otimes I)\newline ( m^* \eta \otimes I) = A $ \newline Alternatively, $A(\xi^*) = [A^*(\xi)]^* $ & $P^*(T) = P(T^*)^*$, \newline {\small ( * denotes adjoint as an operator on Hilbert spaces)} \\ \hline Self \newline adjoint & $A = A^*$ & & $\sigma(p) = p^*$ & $A(\xi) = A^*(\xi)$ & $P(T^*) = P(T)^*$ \\ \hline Real & $A = \overline{A}$ & & $p^* = p$ & $A(\xi^*) = (A(\xi))^*$ & $P^*(T) = P(T)$ \\ \hline Positivity & $A$ is C.P & & $p$ is positive (i.e. $p = g^*g$) & $A$ is completely \newline positive (C.P) & $P$ is positive \newline (i.e. $P = G^*G$) \\ \hline \end{tabular} \end{center} In particular, for undirected quantum graphs: \begin{eqnarray*} P^2 = P = P^* & \iff & p^2 = p = p^* \\ & \iff & A \mbox{ is Schur-idempotent and real } \\ & \iff & A \mbox{ is Schur-idempotent and self-adjoint.} \end{eqnarray*} \bibliographystyle{amsalpha}
\section{Introduction} The free transport equation (or free transport operator) is one of the most important equations in a wide range of areas of mathematics. When we consider a probability density function $f : \mathbb{R}_{+}\times \Omega \times\mathbb{R}^{d}\rightarrow \mathbb{R}_{+}$, the free transport equation is written as \[ \partial_{t}f + v\cdot\nabla_{x}f = 0. \] The above equation is very simple and has the explicit solution $f(t,x,v) = f_0(x-vt, v)$ when the initial data $f_0$ is smooth and the spatial domain is $\mathbb{R}^{d}$ or $\mathbb{T}^{d}$. However, if we consider general boundary problems, the situation becomes much more complicated. One of the most important and ideal boundary conditions in kinetic theory is the specular reflection boundary condition, \begin{equation} \label{BC} f(t,x,v) = f(t,x,R_{x}v), \quad R_{x} = I - 2n(x)\otimes n(x),\quad x\in \partial\Omega, \end{equation} where $n(x)$ is the outward unit normal vector on the boundary $\partial\Omega$ when $\partial\Omega$ is smooth. The condition \eqref{BC} is motivated by the billiard model, and we usually analyze the problem through the characteristics: \begin{equation} \label{XV heu} \begin{split} X(s;t,x,v) &:= \text{position at time $s$ of a particle which is at the phase point $(x,v)$ at time $t$}, \\ V(s;t,x,v) &:= \text{velocity at time $s$ of a particle which is at the phase point $(x,v)$ at time $t$}, \\ \end{split} \end{equation} where $X(s;t,x,v)$ and $V(s;t,x,v)$ satisfy the following Hamiltonian structure, \begin{equation} \label{Ham} \frac{d}{ds}X(s;t,x,v) = V(s;t,x,v),\quad \frac{d}{ds}V(s;t,x,v) = 0, \\ \end{equation} under a billiard-like reflection condition on the boundary. The explicit formulation of $(X(s;t,x,v), V(s;t,x,v))$ will be given right after Definition \ref{notation}. Since $X(t;t,x,v) = x$ and $V(t;t,x,v)=v$ by definition, we can easily guess the following solution, \[ f(t,x,v) = f_{0}(X(0;t,x,v), V(0;t,x,v)), \] which is the same as $f_0(x-vt, v)$ when $\Omega=\mathbb{R}^{d}$. However, unlike the whole space case, the regularity of the solution $f(t,x,v)$ depends on the regularity of the trajectory \eqref{XV heu}. More precisely, when $X(0;t,x,v)\in \partial\Omega$, the differentiability of \eqref{XV heu} breaks down in general. This means that for any time $t>0$, there exists a corresponding $(x_{*},v_{*}) \in \Omega\times \mathbb{R}^{d}$ such that $f(t, \cdot, \cdot)$ is not differentiable at that point. Equivalently, for any $(x,v)\in \Omega\times \mathbb{R}^{d}$, there exists some corresponding time $t$ such that $f(\cdot, x, v)$ is not differentiable at that time. \\ Now let us consider a general kinetic model with hyperbolic structure, such as the hard sphere or general cut-off Boltzmann equations. (Of course, there is a large body of other kinetic literature which considers various boundary condition problems.) Although the Boltzmann equation (or other general kinetic equations) is much more complicated than the free transport equation, the recent developments on Boltzmann (or kinetic) boundary problems illustrate the regularity issues of these problems very well. \\ \indent In $\mathbb{T}^{3}$ or $\mathbb{R}^{3}$, many results are known in high order regularity function spaces. We refer to some classical works such as \cite{DV, StrainJAMS,GuoVPB,GuoVMB}. (We note that the a priori assumption of \cite{DV} also covers some boundary condition problems, including specular reflection \eqref{BC}.)
More recently, in the case of the non-cutoff Boltzmann equation (which has a regularizing effect), it is known that the solution is $C^{\infty}$ by \cite{CILS}.\\ \indent However, when it comes to general boundary condition problems, a way of obtaining sufficiently high order regularity estimates is not known, and low regularity approaches have been widely used. By defining mild solutions, low regularity $L^{\infty}$ solutions have been studied since \cite{Guo10}. In \cite{LY2004}, the authors studied the pointwise estimate for the Green function of the linearized Boltzmann equation in $\mathbb{R}$. They also obtained weighted $L^\infty$ decay of the Boltzmann equation using the Green function approach. In \cite{UY2006}, the authors constructed an $L^2\cap L^\infty_{\beta}$ solution to the Boltzmann equation in the whole space using Duhamel's principle and spectral theory. Recently, the authors of \cite{LLWW2022} provided a new analysis to derive the $L^\infty$ estimate without relying on the Green function and Duhamel's principle. In addition, there are many references \cite{CKLVPB,DHWY2017,DHWZ2019,DKL2020,DW2019,KimLee,KimLeeNonconvex,KLP2022,LY2017} where low regularity solutions were studied for the cut-off type Boltzmann equation. We also refer to the recent results \cite{AMSY2020,DLSS2021,GHJO2020,KGH2020}, etc., which deal with low regularity solutions of kinetic equations whose collision operators have a regularizing effect, such as the non-cutoff Boltzmann or Landau equations. In fact, however, there are only a few results known about the regularity of the Boltzmann equation with boundary conditions. We refer to \cite{GKTT2016,GKTT2017,Kim2011,KimLee2021}. \\ \indent As briefly explained above, the regularity issue for boundary condition problems is a very fundamental one. In fact, even without complicated collision type operators, the regularity of the free transport equation with boundary conditions has not been studied thoroughly, to the best of the author's knowledge. \\ \subsection{Statements of main theorems} In this paper, we study the classical $C^{2}_{t,x,v}$ regularity of the free transport equation in a 2D disk, \begin{equation} \label{eq} \partial_{t}f + v\cdot\nabla_{x}f = 0,\quad x\in \Omega:=\{ x\in\mathbb{R}^{2} \ : \ |x| < 1\}, \end{equation} with the specular reflection boundary condition \eqref{BC}. Note that $n(x)=x$ for $x \in \partial\Omega$, since we consider the unit disk, i.e., $\partial\Omega = \{x\in\mathbb{R}^{2} : |x|=1 \}$. Our aim is to find initial-boundary compatibility conditions on the initial data $f_0$ for $C^{1}$ and $C^{2}$ regularity of the solution $f(t,x,v)$. We expect the solution to be the mild solution $f(t,x,v)=f_0(X(0;t,x,v), V(0;t,x,v))$ (see Definition \ref{notation} for $X$ and $V$). By differentiating $f$ directly in $t,x,v$ (up to second order), we will find conditions on $f_0$ which contain first and second derivatives in both $x$ and $v$. (See \eqref{C1 cond}, \eqref{C2 cond34}, \eqref{C2 cond 1}, and \eqref{C2 cond 2}.) \\ \hide \begin{equation} f(t,x,v) = f(t,x,R_{x}v), \quad R_{x} = I - 2n(x)\otimes n(x),\quad x\in \partial\Omega \end{equation} where $n(x)$ is outward unit normal vector \unhide In general, for a smooth bounded domain $\Omega$, we define \begin{equation*} \Omega = \{ x\in \mathbb{R}^{2} : \xi(x) < 0 \},\quad \partial\Omega = \{ x\in \mathbb{R}^{2} : \xi(x) = 0 \}. \end{equation*} In the case of the unit disk, we may choose \begin{equation*} \xi(x) = \frac{1}{2}|x|^{2} - \frac{1}{2}, \end{equation*} and hence \begin{equation*} \nabla\xi(x) = x,\quad \nabla^{2}\xi(x) = I.
\\ \end{equation*} Now let us define some notation to precisely describe the characteristics $X(s;t,x,v)$ and $V(s;t,x,v)$. \begin{definition} \label{notation} Considering \eqref{Ham}, we define the basic notations \begin{equation} \notag \begin{split} t_{\mathbf{b}}(x,v) &:= \sup \big\{ s \geq 0 : x - sv \in \Omega \big\} , \\ x_{\mathbf{b}}(x,v) &:= x - t_{\mathbf{b}}(x,v)v = X ( t- t_{\mathbf{b}}(x,v);t,x,v) \ \text{1st bouncing point backward in time}, \\ v_{\mathbf{b}}(x,v) &:= v = \lim_{s\rightarrow t_{\mathbf{b}}(x,v)}V ( t- s;t,x,v), \\ t^{k}(t,x,v) & := t^{k-1} - t_{\mathbf{b}} (x^{k-1}, v^{k-1}), \ \text{k-th bouncing time backward in time}, \ t^{1}(t,x,v) := t-t_{\mathbf{b}}(x,v),\\ x^{k}(x,v) &:= x^{k-1} - t_{\mathbf{b}}(x^{k-1}, v^{k-1}) v^{k-1} = X(t^{k}; t^{k-1}, x^{k-1}, v^{k-1}) \\ &= \text{k-th bouncing point backward in time},\quad x_{\mathbf{b}} := x^{1}, \\ v^{k} &= R_{x^{k}} v^{k-1} = R_{x^{k}} \lim_{s\rightarrow t^{k}-} V(s; t^{k-1}, x^{k-1}, v^{k-1}), \\ \end{split} \end{equation} where $R_{x^{k}}$ is defined in \eqref{BC}. We set $(t^0,x^0,v^0)=(t,x,v)$ and define the specular characteristics as \begin{equation}\label{XV} \begin{split} X(s;t,x,v) &= \sum_{k} \mathbf{1}_{s \in ( t^{k+1}, t^{k}]} X(s;t^{k}, x^{k}, v^{k}), \\ V(s;t,x,v) &= \sum_{k} \mathbf{1}_{s \in ( t^{k+1}, t^{k}]} V(s;t^{k}, x^{k}, v^{k}). \end{split} \end{equation} \end{definition} We also use the $\gamma_{\pm}$ and $\gamma_{0}$ notation to denote \begin{equation} \notag \begin{split} \gamma_{+} &:= \{ (x,v)\in \partial\Omega\times \mathbb{R}^{2} : v\cdot n(x) > 0 \}, \\ \gamma_{0} &:= \{ (x,v)\in \partial\Omega\times \mathbb{R}^{2} : v\cdot n(x) = 0 \}, \\ \gamma_{-} &:= \{ (x,v)\in \partial\Omega\times \mathbb{R}^{2} : v\cdot n(x) < 0 \}. \\ \end{split} \end{equation} Note that the unit disk $\Omega$ is uniformly convex and its linear trajectory \eqref{XV} is well-defined if $x\in\Omega$ (see the velocity lemma in \cite{Guo10}, for example). However, we want to investigate regularity up to the boundary $\overline{\Omega}$, so we carefully exclude $\gamma_0$ from $\overline{\Omega}\times \mathbb{R}^{2}$ since we do not define characteristics starting from (backward in time) $\gamma_0$. Hence, using \eqref{XV} and \eqref{Ham}, it is natural to write \eqref{eq} as the following mild formulation, \begin{equation} \label{solution} f(t,x,v) = f_{0}(X(0;t,x,v), V(0;t,x,v)),\quad (x,v) \in \mathcal{I} := \{\overline{\Omega}\times \mathbb{R}^{2} \}\backslash \gamma_{0}. \\ \end{equation} \\ \indent Meanwhile, to study the regularity of \eqref{solution}, the following quantity is very important, \begin{equation} \label{def A} \begin{split} A_{v, y} &:= \left[\left((v\cdot n(y))I+(n(y) \otimes v) \right)\left(I-\frac{v\otimes n(y)}{v\cdot n(y)}\right)\right] ,\quad (y,v)\in \{\partial\Omega\times \mathbb{R}^2\} \backslash \gamma_0. \\ \end{split} \end{equation} Notice that $A_{v,y}$ is a matrix-valued function $A_{v,y}: \{\partial \Omega\times\mathbb{R}^d\}\backslash \gamma_0 \rightarrow \mathbb{R}^d\times \mathbb{R}^d $ (with $d=2$ in this paper). In fact, $A_{v, y}$ can be written as \begin{equation*} \begin{split} A_{v, y} &= \nabla_{y}\big( (v\cdot n(y))n(y)\big) =\left( (v \cdot n(y) ) \nabla_y n(y) + (n(y)\otimes v ) \nabla_y n(y)\right),\\ \end{split} \end{equation*} which is identical to \eqref{def A} by \eqref{normal}. \\ Throughout this paper, we denote the $v$-derivative of the $i$-th column of the matrix $A_{v,y}$ by $\nabla_v A^i_{v,y}$, where $A^i$ denotes the $i$-th column of a matrix $A$ for $1\leq i \leq d$.
For fixed $i$, it means that \begin{equation*} \nabla_{v} A^i_{v,x^1} =\left. \left(\nabla_v A^i_{v,y}\right)\right|_{y=x^1}. \end{equation*} It is important to note that we carefully distinguish between $\nabla_v A^i_{v,x^1}$ and $\nabla_v (A^i_{v,x^1(x,v)})$. \\ \hide \begin{proposition} [Faa di Bruno formula] For higher order $n$-derivatives, the following formula would be useful. \[ (f\circ H)^{(n)} = \sum_{\sum_{j=1}^{n}j m_{j}=n} \frac{n!}{m_{1}!\cdots m_{n}!} \big( f^{(m_{1}+\cdots+m_{n})}\circ H \big) \prod_{j=1}^{n} \Big( \frac{H^{(j)}}{j!} \Big)^{m_{j}} \] \end{proposition} \unhide \begin{remark} Assume $f_{0}$ satisfies \eqref{BC}. If $f_{0} \in C^{1} _{x,v}( \overline{\Omega}\times \mathbb{R}^{2})$, then \begin{equation} \label{C1_v trivial} \nabla_{v}f_{0}(x,v) = \nabla_{v}f_{0}(x,R_{x}v)R_{x},\quad \forall x\in\partial\Omega,\quad \forall v\in\mathbb{R}^{2}, \end{equation} also hold. Similalry, if $f_{0} \in C^{2}_{x,v}( \overline{\Omega}\times \mathbb{R}^{2})$, then \begin{equation} \label{C2_v trivial} \nabla_{vv}f_{0}(x,v) = R_{x}\nabla_{vv}f_{0}(x,R_{x}v)R_{x},\quad \forall x\in\partial\Omega,\quad \forall v\in\mathbb{R}^{2}, \end{equation} also holds as well as \eqref{C1_v trivial}. \\ \end{remark} \begin{theorem} [$C^{1}$ regularity] \label{thm 1} Let $f_{0}$ be $C^{1}_{x,v}( \overline{\Omega}\times\mathbb{R}^{2})$ which satisfies \eqref{BC}. If initial data $f_{0}$ satisfies \begin{equation} \label{C1 cond} \Big[ \nabla_x f_0( x,v) + \nabla_v f_0(x, v) \frac{ (Qv)\otimes (Qv) }{v\cdot n} \Big] R_{x} = \nabla_x f_0(x, R_{x}v) + \nabla_v f_0(x, R_{x}v) \frac{ (QR_{x}v)\otimes (QR_{x}v) }{R_{x}v\cdot n},\quad (x,v)\in \gamma_{-}, \end{equation} then $f(t,x,v)$ defined in \eqref{solution} is a unique $C^{1}_{t,x,v}(\mathbb{R}_{+}\times \mathcal{I})$ solution of \eqref{eq}. We also note that if \eqref{C1 cond} holds, then it also holds for $(x,v)\in \gamma_{+}$. Here, $Q$ is counterclockwise rotation by $\frac{\pi}{2}$ in $\mathbb{R}^{2}$. Moreover, if the initial condition \eqref{C1 cond} does not hold, then $f(t,x,v)$ is not of class $C^1_{t,x,v}$ at time $t$ such that $t^k(t,x,v)=0$ for some $k$. \begin{remark}[Example of initial data satisfying \eqref{C1 cond}] \label{example} In \eqref{C1 cond}, we consider the following special case \begin{equation} \label{specialcase} \nabla_x f_0(x,v)R_x = \nabla_x f_0(x,R_xv) \quad \textrm{and} \quad \nabla_vf_0(x,v)\frac{(Qv)\otimes (Qv)}{v\cdot n}R_x = \nabla_v f_0(x,R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n}, \end{equation} for $(x,v)\in \gamma_-$. Since $Q^TR_xQ=-R_x$ and $v\cdot n=-R_xv\cdot n$, we derive \begin{equation*} \nabla_v f_0(x,v) \cdot (Qv) = \nabla_v f_0(x,R_xv)\cdot (QR_xv), \end{equation*} from the second condition above. Here, $A^T$ means transpose of a matrix $A$. From \eqref{C1_v trivial}, we get \begin{equation*} \nabla_v f_0(x,R_xv)\cdot (R_xQv) = \nabla_v f_0(x,R_xv) \cdot (QR_xv), \end{equation*} which implies that $\nabla_vf_0(x,v)\cdot (Qv) = \nabla_v f_0(x,R_xv)\cdot (R_xQv) =0$ because $R_xQ= -QR_x$. It means that $\nabla_v f_0(x,v)$ is parallel to $v$. Then, $ f_0(x,v)$ is a radial function with respect to $v$. Since the second condition in \eqref{specialcase} also holds for $(x,v)\in \gamma_+$, we deduce that a direction of $\nabla_v f_0(x,v)$ is $v^T$ for $v\in \gamma_- \cup\gamma_+$. In other words, \begin{equation*} f_0(x,v)=G(x,\vert v \vert), \quad (x,v)\in \gamma_-\cup \gamma_+, \end{equation*} where $G$ is a real-valued $C^1_{x,v}$ function. 
Moreover, $f_0$ can be continuously extended to $\gamma_0$ to satisfy $f_0 \in C^1_{x,v}(\partial \Omega\times \mathbb{R}^2)$. From the first condition $\nabla_x f_0 (x,v)R_x = \nabla_x f_0(x,R_xv)$ in \eqref{specialcase}, we have \begin{equation*} \nabla_x G(x,\vert v \vert) R_x = \nabla_x G(x,\vert v \vert). \end{equation*} Thus, $\nabla_x G(x,\vert v \vert)$ is orthogonal to $n(x)=x$, which means that the directional derivative $\nabla_x f_0(x,v) \cdot n(x)$ be 0 for $x\in \partial \Omega$. In conclusion, $f_0(x,v)=G(x,\vert v \vert)$ such that $\nabla_x f_0(x,v) \cdot n(x)=0$ for all $(x,v)\in \partial \Omega\times \mathbb{R}^2$ whenever $f_0$ satisfies \eqref{specialcase} for $(x,v)\in \gamma_-$. \end{remark} \hide \[ Q = \begin{pmatrix} \vert & \vert & \vert \\ \hat{x}\times \widehat{v\times x} & \hat{x} & \widehat{v\times x} \\ \vert & \vert & \vert \end{pmatrix} \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \vert & \vert & \vert \\ \hat{x}\times \widehat{v\times x} & \hat{x} & \widehat{v\times x} \\ \vert & \vert & \vert \end{pmatrix}^{-1}. \] \unhide \end{theorem} \begin{theorem} [$C^{2}$ regularity] \label{thm 2} Let $f_{0}$ be $C^{2}_{x,v}( \overline{\Omega}\times\mathbb{R}^{2})$ which satisfies \eqref{BC} and \eqref{C1 cond}. (The condition \eqref{C1 cond} was necessary to satisfy $f(t,x,v)\in C^1_{t,x,v}$ in Theorem \ref{thm 1}). If we assume \begin{equation} \label{C2 cond34} \nabla_{x}f_0(x, R_{x}v) \parallel (R_{x}v)^{T},\quad \nabla_{v}f_0(x, R_{x}v) \parallel (R_{x}v)^{T}, \end{equation} and \begin{eqnarray} &&R_{x} \Big[ \nabla_{xv}f_{0}(x,v) + \nabla_{vv}f_{0}(x,v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} \Big] R_{x} = \nabla_{xv}f_{0}(x, R_xv) + \nabla_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} \notag \\ &&\quad\hspace{7.5cm} + R_{x} \begin{bmatrix} \nabla_{v}f_{0}(x , R_xv) \mathcal{J}_1 \\ \nabla_{v}f_{0}(x , R_xv) \mathcal{J}_2 \end{bmatrix} R_{x}, \label{C2 cond 1} \\ &&R_{x}\Big[ \nabla_{xx}f_{0}(x,v) + \nabla_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} + \frac{ (Qv)\otimes (Qv)}{v\cdot n} \nabla_{xv}f_{0}(x, v) \Big] R_{x} \notag \\ &&\quad = \nabla_{xx}f_{0}(x, R_xv) + \nabla_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} + \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} \nabla_{xv}f_{0}(x, R_xv) \notag \\ &&\quad \quad -2R_x \begin{bmatrix} \nabla_{v}f_{0}(x, R_xv) \nabla_{v}A^{1}_{v,x} \\ \nabla_{v}f_{0}(x, R_xv) \nabla_{v}A^{2}_{v,x} \end{bmatrix} R_xA_{v,x}R_x + A_{v,x}\begin{bmatrix} \nabla_{v}f_{0}(x, R_xv) \mathcal{J}_1 \\ \nabla_{v}f_{0}(x, R_xv) \mathcal{J}_2 \end{bmatrix}R_x \notag \\ &&\quad \quad - 2 R_x \begin{bmatrix} \nabla_{v}f_{0}(x, R_xv) \mathcal{K}_1 \\ \nabla_{v}f_{0}(x, R_xv) \mathcal{K}_2 \end{bmatrix} R_x, \label{C2 cond 2} \end{eqnarray} where $x=(x_1,x_2), \; v=(v_1,v_2)$, and \begin{align*} &\mathcal{J}_1:=\frac{1}{v\cdot x} \begin{bmatrix} -4v_2x_1x_2 & 4v_1x_1x_2 \\ -2v_2(x_2^2-x_1^2) & 2v_1(x_2^2-x_1^2) \end{bmatrix}, \quad \mathcal{J}_2:= \frac{1}{v\cdot x}\begin{bmatrix} -2v_2(x_2^2-x_1^2) & 2v_1(x_2^2-x_1^2)\\ 4v_2x_1x_2 & -4v_1x_1x_2 \end{bmatrix},\\ &\mathcal{K}_1:=\begin{bmatrix} \dfrac{4v_1^2v_2^2x_1^3 +2v_1v_2^3(3x_1^2x_2-x_2^3)+ 2v_2^4(3x_1x_2^2+x_1^3)}{(v\cdot x)^3} & \dfrac{-4v_1^3v_2x_1^3-2v_1^2v_2^2(3x_1^2x_2-x_2^3)-2v_1v_2^3(3x_1x_2^2+x_1^3)}{(v\cdot x)^3}\\ \dfrac{4v_2^4x_2^3+2v_1v_2^3(3x_1x_2^2-x_1^3)+2v_1^2v_2^2(3x_1^2x_2+x_2^3)}{(v\cdot x)^3} & \dfrac{-4v_1v_2^3x_2^3-2v_1^2v_2^2(3x_1x_2^2-x_1^3)-2v_1^3v_2(3x_1^2x_2+x_2^3)}{(v\cdot x)^3} 
\end{bmatrix},\\ &\mathcal{K}_2 := \begin{bmatrix} \dfrac{-4v_1^3v_2x_1^3-2v_1v_2^3(3x_1x_2^2+x_1^3) -2v_1^2v_2^2(3x_1^2x_2-x_2^3)}{(v\cdot x)^3} & \dfrac{4v_1^4x_1^3 +2v_1^2v_2^2(3x_1x_2^2+x_1^3)+2v_1^3v_2 (3x_1^2x_2-x_2^3)}{(v \cdot x)^3}\\ \dfrac{-4v_1v_2^3x_2^3 -2v_1^3v_2(3x_1^2x_2+x_2^3) -2v_1^2v_2^2(3x_1x_2^2-x_1^3)}{(v\cdot x)^3} & \dfrac{4v_1^2 v_2^2 x_2^3 +2v_1^4(3x_1^2x_2+x_2^3)+2v_1^3v_2(3x_1x_2^2-x_1^3)}{(v \cdot x)^3} \end{bmatrix}, \end{align*} for all $(x,v)\in \gamma_-$, then $f(t,x,v)$ defined in \eqref{solution} is a unique $C^{2}_{t,x,v}(\mathbb{R}_{+}\times \mathcal{I})$ solution of \eqref{eq}. In this case, $f_0(x,v)=G(x,\vert v \vert)$ satisfying $\nabla_x f_0(x, v)=0$ for $x \in \partial \Omega$, where $G$ is a real-valued $C^2_{x,v}$ function. Additionally, $f(t,x,v)$ is not of class $C^2_{t,x,v}$ at time $t$ such that $t^k(t,x,v)=0$ for some $k$ if one of the initial conditions \eqref{C2 cond34}, \eqref{C2 cond 1}, and \eqref{C2 cond 2} for $(x,v)\in \gamma_-$ is not satisfied. \end{theorem} \begin{remark} (Higher regularity) If we want higher regularity such as $C^{3}$ and $C^{4}$, we should assume additional initial-boundary compatibility conditions for those regularities as we assumed \eqref{C2 cond34}-\eqref{C2 cond 2} in Theorem \ref{thm 2} for $C^{2}$ as well as \eqref{C1 cond}. Although the computation for higher regularity is available in principle, we should carefully check whether the additional conditions for higher regularity make lower regularity conditions trivial or not. Here, the trivial condition for \eqref{C2 cond 2} means \[ \nabla_{x,v}f_0(x,v) = 0,\quad \forall (x,v)\in\gamma_{-}. \] In fact, the answer is given in Section 1.2. Because of very nontrivial null structure of \eqref{1st order}, imposing \eqref{C2 cond34}-\eqref{C2 cond 2} does not make \eqref{C1 cond} trivial, fortunately. Once we find a new initial-boundary compatibility condition for $C^{3}$, for example, we also have to check \begin{equation} \notag \begin{split} &\text{Do additional compatibility conditions for $C^{3}$ regularity make } \\ &\quad \text{\eqref{C1 cond} or \eqref{C2 cond34}-\eqref{C2 cond 2} trivial, e.g. $\nabla f_0 = \nabla^{2}f_0=0$ on $\gamma_{-}$?} \end{split} \end{equation} Whenever we gain conditions for higher order regularity, initial-boundary compatibility conditions are stacked and they might make lower order compatibility conditions just trivial ones. It is a very interesting question, but they require very complicated geometric considerations and obtaining higher order condition itself will be also very painful. But, if we impose very strong (trivial) high order initial-boundary compatibility conditions \[ \nabla_{x,v}^{i}f_0(x,v) = 0,\quad \forall(x,v)\in\gamma_{-},\quad 1\leq i \leq k, \] then we will get $C^{k}$ regularity of the solution. \end{remark} \begin{remark} (Necessary conditions for $C^{2}$ regularity) In Theorem \ref{thm 2}, initial conditions \eqref{C2 cond 1} and \eqref{C2 cond 2} are sufficient conditions for $f \in C^2_{t,x,v}$. Although these contain non-symmetric complicated first-order terms, we can obtain simpler necessary conditions. Observe that the null space of $\mathcal{J}_i, \mathcal{K}_i$ is spanned by $v$, i.e., \begin{equation} \label{null J,K} \mathcal{J}_i v =0, \quad \mathcal{K}_i v =0, \quad i=1,2. 
\end{equation} Multiplying the reflection matrix $R_x$ on both sides in \eqref{C2 cond 1} and \eqref{C2 cond 2}, we get necessary conditions for $C^{2}$ solution, \hide \begin{align*} &\nabla_{xv} f_0(x,v) +\nabla_{vv} f_0(x,v)\frac{ (Qv)\otimes (Qv)}{v\cdot n} = R_x \left[\nabla_{xv}f_{0}(x, R_xv) + \nabla_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} \right]R_x \\ &\hspace{6.5cm} + \begin{bmatrix} \nabla_{v}f_{0}(x , R_xv) \mathcal{J}_1 \\ \nabla_{v}f_{0}(x , R_xv) \mathcal{J}_2 \end{bmatrix},\\ &\nabla_{xx}f_{0}(x,v) + \nabla_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} + \frac{ (Qv)\otimes (Qv)}{v\cdot n} \nabla_{xv}f_{0}(x, v)\\ &\quad = R_x \left[ \nabla_{xx}f_{0}(x, R_xv) + \nabla_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} + \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} \nabla_{xv}f_{0}(x, R_xv)\right]R_x \\ &\quad \quad -2 \begin{bmatrix} \nabla_{v}f_{0}(x, R_xv) \nabla_{v}A^{1}_{v,x} \\ \nabla_{v}f_{0}(x, R_xv) \nabla_{v}A^{2}_{v,x} \end{bmatrix} R_xA_{v,x} + R_xA_{v,x}\begin{bmatrix} \nabla_{v}f_{0}(x, R_xv) \mathcal{J}_1 \\ \nabla_{v}f_{0}(x, R_xv) \mathcal{J}_2 \end{bmatrix} - 2 \begin{bmatrix} \nabla_{v}f_{0}(x, R_xv) \mathcal{K}_1 \\ \nabla_{v}f_{0}(x, R_xv) \mathcal{K}_2 \end{bmatrix} , \end{align*} \unhide \begin{equation} \label{C2 nec cond} \begin{split} &v^T \left[ \nabla_{xv} f_0(x,v) +\nabla_{vv} f_0(x,v)\frac{ (Qv)\otimes (Qv)}{v\cdot n}\right] v=(R_xv)^T \left[\nabla_{xv}f_{0}(x, R_xv) + \nabla_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} \right](R_xv),\\ &v^T \left[ \nabla_{xx}f_{0}(x,v) + \nabla_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} + \frac{ (Qv)\otimes (Qv)}{v\cdot n} \nabla_{xv}f_{0}(x, v)\right]v \\ &=(R_xv)^T \left[\nabla_{xx}f_{0}(x, R_xv) + \nabla_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} + \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdot n} \nabla_{xv}f_{0}(x, R_xv)\right](R_xv), \end{split} \end{equation} for all $(x,v) \in \gamma_-$, where we used $R_x^2 =I$, \eqref{null J,K}, and $A_{v,x}v=0$ in Lemma \ref{lem_RA}. \end{remark} \begin{remark}\label{extension C2 cond34} Using \eqref{C1_v trivial} and \eqref{C2 cond34} yields that \begin{equation} \label{f0 gamma+} \nabla_v f_0(x,v) \parallel v^T, \end{equation} for all $(x,v) \in \gamma_-\cup \gamma_+$. From \eqref{f0 gamma+}, we have \begin{equation*} \nabla_v f_0(x,v) \frac{(Qv) \otimes (Qv)}{v\cdot n} R_x = \nabla_v f_0(x,R_xv)\frac{(QR_xv)\otimes(QR_xv)}{R_x v\cdot n}. \end{equation*} Thus, the condition \eqref{C1 cond} in Theorem \ref{thm 1} becomes \begin{equation*} \nabla_x f_0(x,v) R_x = \nabla_x f_0(x,R_xv). \end{equation*} Similarly, by \eqref{C2 cond34} and the above result, we have \begin{equation*} \nabla_x f_0(x,v) \parallel v^T, \end{equation*} for all $(x,v) \in \gamma_-\cup \gamma_+$. Hence, we conclude that \eqref{C2 cond34} can be extended to $\gamma_-\cup\gamma_+$ under conditions \eqref{C1 cond} and \eqref{C2 cond34} for $(x,v)\in\gamma_-$. \end{remark} \begin{remark}[Extension to 3D sphere] By symmetry, Theorem \ref{thm 1} and \ref{thm 2} also hold for three dimensional sphere if the rotation operator $Q$ is properly redefined in the plane spanned by $\{x, v\}$ for $x\in \partial\Omega$, $x\nparallel v\neq 0$. 
\\ \end{remark} \begin{theorem} [Regularity estimates] \label{thm 3} The $C^{1}(\mathbb{R}_{+}\times \mathcal{I})$ and $C^{2}(\mathbb{R}_{+}\times \mathcal{I})$ solutions of Theorem \ref{thm 1} and \ref{thm 2} enjoy the following regularity estimates : \begin{equation} \label{C1 bound} \|f\|_{C^{1}_{t,x,v}} \lesssim \|f_0\|_{C^{1}} \frac{|v|}{|v\cdot n(x_{\mathbf{b}})|^{2}} \langle v \rangle^{2}(1 + |v|t), \end{equation} \begin{equation} \label{C2 bound} \|f\|_{C^{2}_{t,x,v}} \lesssim \|f_0\|_{C^{2}} \frac{|v|^{2}}{|v\cdot n(x_{\mathbf{b}})|^{4}} \langle v \rangle^{4}(1 + |v|t)^{2}, \end{equation} where $x_{\mathbf{b}} = x_{\mathbf{b}}(x,v)$ and $\langle v \rangle := 1 + |v|$. \end{theorem} \subsection{Brief sketch of proofs and some important remarks} In this paper, our aim is to analyze regularity of mild form \eqref{solution} where characteristic $(X(0;t, x, v), V(0;t, x,v))$ is well-defined (by excluding $\gamma_0$). If backward in time position $X(0;t,x,v) \notin \partial\Omega$, the characteristic is also a smooth function and we expect that the regularity of \eqref{solution} will be the same as initial data $f_0$ by the chain rule. When $X(0;t,x,v) \in \partial\Omega$, however, the derivative via the chain rule does not work anymore because of discontinuous behavior of velocity $V(0;t,x,v)$. Depending on perturbed directions, we obtain different directional derivatives. In fact, we can split directions into two pieces: one gives bouncing and the other does not. See \eqref{R12_v} and \eqref{set R_vel} for $C^{1}_{v}$ for example. By matching these directional derivatives and performing some symmetrization, we obtain symmetrized initial-boundary compatibility condition \eqref{C1 cond}. Of course, \eqref{C1_v trivial} also holds, but \eqref{C1_v trivial} is gained by taking the $v$-derivative of \eqref{BC} directly. We note that both $C^{1}_{x}$ and $C^{1}_{v}$ conditions yield identical initial compatibility condition \eqref{C1 cond}, and the condition for $C^{1}_{t}$ is just a necessary condition for \eqref{C1 cond}. \\ The analysis becomes much more complicated when we study $C^{2}$ conditions. Nearly all of our analysis consist of precise equalities, instead of estimates. This makes our business much harder. First, let us consider four cases: $\nabla_{xx}, \nabla_{xv}, \nabla_{vx}, \nabla_{vv}$. These yield very complicated initial-boundary compatibility conditions and in particular they contain derivatives of each column of reflection operator $R_x$ or $\nabla_{x,v}((n(x)\otimes n(x))v)$. It is nearly impossible to give proper geometric interpretation for each term. See \eqref{xv star1} and \eqref{xv star2} for example. \\ Nevertheless, it is quite interesting that the four conditions from $\nabla_{xx}, \nabla_{xv}, \nabla_{vx}, \nabla_{vv}$ can be rearranged with respect to the order of time $t$. By matching all directional derivatives, we obtain \eqref{Cond2 1}--\eqref{Cond2 4} which contain both second-order terms and first-order terms. However, the conditions from $\nabla_{xx}, \nabla_{xv}, \nabla_{vx}, \nabla_{vv}$ must satisfy transpose compatibility condition \begin{equation} \label{trans comp} \nabla_{xv}^{T}=\nabla_{vx} \ \ \text{and} \ \ \nabla_{xx}^{T} = \nabla_{xx}, \end{equation} since we hope the solution to be $C^{2}$. 
However, it is extremely hard to find any good geometric meaning or properties of some terms like \begin{equation} \label{1st order} \nabla_{x}(R^{i}_{x^{1}(x,v)}),\quad \nabla_{x}(A^{i}_{v, x^{1}(x,v)}),\quad \text{for}\quad i=1,2, \end{equation} in \eqref{Cond2 1}--\eqref{Cond2 4}. If they do not have any special structures, the only way to get compatibility \eqref{trans comp} is to impose $\nabla_{x,v}f_0(x, Rv) = 0$ for all $(x,v)\in \gamma_{-}$. Then $C^{1}$ compatibility condition \eqref{C1 cond} becomes just trivial. Fortunately, however, the matrices of \eqref{1st order} have a rank $1$ structure. {\bf More surprisingly, all the null spaces are spanned by velocity $v$!} That is, from Lemma \ref{d_RA} and Lemma \ref{dx_A}, \[ \nabla_{x}(R_{x^1(x,v)}^1)v =0, \quad \nabla_{x}(R_{x^1(x,v)}^2) v =0,\quad \nabla_x(-2A_{v,x^1(x,v)}^1)v =0, \quad \nabla_x (-2A_{v,x^1(x,v)}^2)v=0. \] From these interesting results, we can derive necessary conditions \eqref{C2 cond34} for transpose compatibility \eqref{trans comp}. By imposing \eqref{C2 cond34}, we derive $C^{2}$ conditions as in Theorem \ref{thm 2}, while keeping $C^{1}$ condition \eqref{C1 cond} nontrivial. We note that all the conditions that include $\partial_{t}$ are repetitions of \eqref{Cond2 1}--\eqref{Cond2 4}. \\ In the last section, we study $C^{1}$ and $C^{2}$ regularity estimates of the solution \eqref{solution}. Essentially the regularity estimates of the solution come from the regularity estimates of characteristic $(X(0;t,x,v), V(0;t,x,v))$. For $C^{1}$ of $(X(0;t,x,v), V(0;t,x,v))$, we obtain Lemma \ref{est der X,V}. Note that we can find some cancellation that gives no singular bound for $\nabla_{v}X(0;t,x,v)$ which was found in \cite{GKTT2017} for general 3D convex domains. Growth in time need not to be exponential, but it is just linear in time $t$. The second derivative of characteristic is much more complicated and nearly impossible to try to find any cancellation, because of too many terms and combinations that appear. Instead, by studying the most singular terms only, we obtain rough bounds in Lemma \ref{2nd est der X,V}. \section{Preliminaries} Now, let us recall standard matrix notations which will be used in this paper. \\ \begin{definition} When we perform matrix multiplications throughout this paper, we basically treat a n-dimensional vector $v$ as a {\it column} vector \[ v = \begin{pmatrix} v_{1} \\ \vdots \\ v_{n} \end{pmatrix}. \] For about gradient of a smooth scalar function $a(x)$, however, we treat n-dimensional vector $\nabla a$ as a {\it row} vector, \[ \nabla a(x) := (\partial_{x_{1}} a, \partial_{x_{2}} a, \cdots, \partial_{x_{n}} a). \] For a smooth vector function $v : \mathbb{R}^{n}\rightarrow \mathbb{R}^{m}$ with $v(x)= \begin{pmatrix} v_{1}(x) \\ \vdots \\ v_{m}(x) \end{pmatrix}$, we define $\nabla_{x}v(x)$ as $m\times n$ matrix, \[ \nabla_{x}v := \begin{pmatrix} \partial_{1} v_{1} & \cdots & \partial_{n} v_{1} \\ \partial_{1} v_{2} & \cdots & \partial_{n} v_{2} \\ \vdots & \vdots & \vdots \\ \partial_{1} v_{m} & \cdots & \partial_{n} v_{m} \\ \end{pmatrix}_{m\times n} = \begin{pmatrix} & \nabla_{x} v_{1} &\\ &\vdots& \\ &\nabla_{x} v_{m}& \\ \end{pmatrix}_{m\times n} . \] We use $\otimes$ to denote tensor product \begin{equation*} a\otimes b := \begin{pmatrix} a_{1} \\ \vdots \\ a_{m} \end{pmatrix} \begin{pmatrix} b_{1} & \cdots & b_{n} \end{pmatrix}. 
\\ \end{equation*} \end{definition} \begin{lemma}\label{matrix notation} (1) (Product rule) For scalar function $a(x)$ and vector function $v(x)$, \[ \nabla (a(x)v(x)) = a(x)\nabla v(x) + v\otimes \nabla a(x). \] (2) (Chain rule) For vector functions $v(x)$ and $w(x)$, \[ \nabla (v(w(x))) = \nabla v(w(x)) \nabla w(x). \] (3) (Product rule) For vector functions, \[ \nabla(v(x)\cdot w(x)) = v(x)\nabla w(x) + w(x)\nabla v(x). \] (4) For matrix $d\times d$ matrix $A(x)$ and $d\times 1$ vector $v(x)$, \begin{equation} \label{d_matrix} \begin{split} \nabla_{x} (A(x)v(x)) &= A(x)\nabla v(x) + \begin{pmatrix} v(x)\nabla A^{1}(x) \\ \vdots \\ v(x)\nabla A^{d}(x) \end{pmatrix} \\ &= A(x)\nabla v(x) + \sum_{k=1}^{d} \partial_{k}A(x) E_{k}, \end{split} \end{equation} where $A^{i}(x)$ is $i$-th row of $A(x)$ and $E_{k}$ is $d\times d$ matrix whose $k$th column is $v$ and others are zero. (Here $\partial_{k}A(x)$ means elementwise $x_{k}$-derivative of $A(x)$.) Moreover, if $A = A(\theta(x))$ for some smooth $\theta:\Omega\rightarrow \mathbb{R}$, \\ \begin{equation} \label{d_matrix_theta} \nabla_{x} (A(\theta)v(x)) = A(\theta)\nabla v(x) + \partial_{\theta}A(\theta)v \otimes \nabla_{x}\theta. \end{equation} \end{lemma} \begin{proof} Only \eqref{d_matrix_theta} needs some explanation. When $A=A(\theta(x))$, \begin{equation*} \begin{split} \nabla_{x} (A(\theta)v(x)) &= A(\theta)\nabla v(x) + \sum_{k=1}^{d} \partial_{k}A(\theta) E_{k} = A(\theta)\nabla v(x) + \sum_{k=1}^{d} \partial_{\theta}A(\theta) \partial_{k}\theta(x)E_{k} \\ &= A(\theta)\nabla v(x) + \partial_{\theta}A(\theta) \begin{pmatrix} & & \\ \partial_{1}\theta(x) v & \cdots & \partial_{d}\theta(x)v \\ & & \\ \end{pmatrix} \\ &= A(\theta)\nabla v(x) + \partial_{\theta}A(\theta)v \otimes \nabla_{x}\theta(x). \end{split} \end{equation*} \end{proof} \begin{lemma} \label{nabla xv b} We have the following computations where $x_{\mathbf{b}} = x_{\mathbf{b}}(x,v)$ and $t_{\mathbf{b}}=t_{\mathbf{b}}(x,v)$. \\ \begin{equation*} \begin{split} \nabla_{x}t_{\mathbf{b}} &= \frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} , \\ \nabla_{v}t_{\mathbf{b}} &= -t_{\mathbf{b}}\nabla_{x}t_{\mathbf{b}} = -t_{\mathbf{b}}\frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} , \\ \nabla_{x}x_{\mathbf{b}} &= I - \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}, \\ \nabla_{v}x_{\mathbf{b}} &= -t_{\mathbf{b}}\Big(I - \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} \Big). \\ \end{split} \end{equation*} \end{lemma} \begin{proof} Remind the definition of $x_{\mathbf{b}}$ and $t_{\mathbf{b}}$ \begin{equation*} x_{\mathbf{b}}=x-t_{\mathbf{b}} v, \quad t_{\mathbf{b}}=\sup\{ s \; \vert \; x-sv \in \Omega\}. \end{equation*} Since $\xi(x) =0$ for $x \in \partial\Omega$, we have $\xi(x_{\mathbf{b}}) = \xi(x-t_{\mathbf{b}} v)=0$. Taking the $x\mbox{-}$derivative $\nabla_x$, we get \begin{align*} \nabla_x(\xi(x_{\mathbf{b}}))&= (\nabla\xi)(x_{\mathbf{b}}) -[ (\nabla \xi)(x_{\mathbf{b}}) \cdot v]\nabla_x t_{\mathbf{b}} \\ &= 0, \end{align*} where the first equality comes from product rule in Lemma \ref{matrix notation}. Thus, we can derive \begin{equation*} \nabla_x t_{\mathbf{b}} = \frac{( \nabla \xi)(x_{\mathbf{b}})}{[ (\nabla \xi)(x_{\mathbf{b}}) \cdot v]} = \frac{ n(x_{\mathbf{b}})}{v \cdot n(x_{\mathbf{b}}) }. 
\end{equation*} Similarly, taking the $v\mbox{-}$derivative $\nabla_v$ and product rule in Lemma \ref{matrix notation} yields \begin{equation*} \nabla_v(\xi(x_{\mathbf{b}})) = (\nabla \xi)(x_{\mathbf{b}})(-t_{\mathbf{b}} I - v \otimes \nabla_v t_{\mathbf{b}})= 0, \end{equation*} which implies $\nabla_v t_{\mathbf{b}} = - t_{\mathbf{b}} \frac{n(x_{\mathbf{b}})}{ v\cdot n(x_{\mathbf{b}})}.$ It follows from the calculation of $\nabla_x t_{\mathbf{b}}$ and $\nabla_v t_{\mathbf{b}}$ above that \begin{align*} \nabla_x x_{\mathbf{b}} &= \nabla_x ( x- t_{\mathbf{b}} v) = I - v \otimes \nabla_x t_{\mathbf{b}} = I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdot n (x_{\mathbf{b}})} \\ \nabla_v x_{\mathbf{b}} &= \nabla_v (x-t_{\mathbf{b}} v) = -t_{\mathbf{b}} I - v \otimes \nabla_v t_{\mathbf{b}} = -t_{\mathbf{b}} \left(I - \frac{ v \otimes n(x_{\mathbf{b}})}{ v \cdot n(x_{\mathbf{b}})}\right). \end{align*} \end{proof} \begin{lemma} \label{d_n} For $n(x_{\mathbf{b}}(x,v))$, we have the following derivative rules, \begin{equation} \label{normal} \nabla_x [n(x_{\mathbf{b}})] = I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdot n (x_{\mathbf{b}})}, \quad \nabla_v [n(x_{\mathbf{b}})] = -t_{\mathbf{b}} \Big( I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdot n (x_{\mathbf{b}})} \Big), \end{equation} where $x_{\mathbf{b}}=x_{\mathbf{b}}(x,v)$. \end{lemma} \begin{proof} For $\nabla_x n(x_{\mathbf{b}})$, we apply the chain rule in Lemma \ref{matrix notation} to $(\nabla \xi)(x_{\mathbf{b}})$ and $\frac{1}{\vert (\nabla \xi)(x_{\mathbf{b}})\vert }$ respectively. Because $\nabla \xi (x) \neq 0 $ at the boundary $x \in \partial\Omega$ in a circle, it is possible to apply the chain rule to $\frac{1}{\vert (\nabla \xi)(x_{\mathbf{b}})\vert }$. Taking $x\mbox{-}$derivative $\nabla_x$, one obtains \begin{align*} \nabla_x [(\nabla \xi)(x_{\mathbf{b}})] &= (\nabla ^2 \xi)(x_{\mathbf{b}})\nabla_x x_{\mathbf{b}}, \\ \nabla_x \left[ \frac{1}{ \vert (\nabla \xi)(x_{\mathbf{b}}) \vert} \right] & =- \frac{(\nabla\xi)(x_{\mathbf{b}}) (\nabla^2\xi)(x_{\mathbf{b}}) \nabla_xx_{\mathbf{b}}}{\vert (\nabla \xi)(x_{\mathbf{b}}) \vert^3}. \end{align*} Hence, \begin{align*} \nabla_x [n(x_{\mathbf{b}})] = \nabla_x \left[ \frac{ (\nabla \xi)(x_{\mathbf{b}})}{ \vert (\nabla \xi)(x_{\mathbf{b}}) \vert } \right] &= \frac{1}{ \vert (\nabla \xi)(x_{\mathbf{b}}) \vert } \nabla_x [ (\nabla \xi)(x_{\mathbf{b}}) ] + (\nabla \xi)(x_{\mathbf{b}}) \otimes \nabla_x \left [ \frac{1}{\vert (\nabla \xi)(x_{\mathbf{b}}) \vert} \right] \\ & = \frac{1}{ \vert (\nabla \xi)(x_{\mathbf{b}}) \vert }(\nabla ^2 \xi)(x_{\mathbf{b}})\nabla_x x_{\mathbf{b}} - \nabla \xi(x_{\mathbf{b}}) \otimes \frac{(\nabla\xi)(x_{\mathbf{b}}) (\nabla^2\xi)(x_{\mathbf{b}}) \nabla_xx_{\mathbf{b}}}{\vert (\nabla \xi)(x_{\mathbf{b}}) \vert^3}\\ &= \frac{1}{ \vert (\nabla \xi)(x_{\mathbf{b}}) \vert }\Big( I - n(x_{\mathbf{b}}) \otimes n(x_{\mathbf{b}})\Big) (\nabla^2 \xi)(x_{\mathbf{b}}) \nabla_x x_{\mathbf{b}}. \end{align*} Since $|\nabla\xi(x_{\mathbf{b}})| =1 $ and $\nabla^{2}\xi = I_{2}$, we deduce \begin{align*} \nabla_x [n(x_{\mathbf{b}})] &= \Big( I - n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) \Big) \Big( I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdot n (x_{\mathbf{b}})} \Big) \\ &= I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdot n (x_{\mathbf{b}})} - n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) + n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) \\ &= I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdot n (x_{\mathbf{b}})}. 
\end{align*} The case for $\nabla_v [n(x_{\mathbf{b}})]$ is nearly same with extra term $-t_{\mathbf{b}}$ which comes from Lemma \ref{nabla xv b}. \\ \end{proof} \hide \begin{lemma} For fixed $x\in\Omega$, we can classify direction $\S^{2}$ into three parts, \begin{equation*} \begin{split} R_{0} &:= \{ \hat{r}\in \S^{2} : \nabla_{x}t_{\mathbf{b}}(x,v)\} \\ R_{1} &:= \{ \hat{r}\in \S^{2} : \} \\ R_{2} &:= \{ \hat{r}\in \S^{2} : \} \end{split} \end{equation*} \end{lemma} \begin{proof} (i) From Proposition \eqref{nabla xv b} \[ \frac{\partial}{\partial\varepsilon}t_{\mathbf{b}}(x+\varepsilon\hat{r}, v)\vert_{\varepsilon=0} = \nabla_{x}t_{\mathbf{b}}(x,v)\cdot\hat{r} = \frac{\hat{r}\cdot n(x_{\mathbf{b}}(x,v))}{v\cdot n(x_{\mathbf{b}}(x,v))} \] \end{proof} \unhide \section{Initial-boundary compatibility condition for $C^{1}_{t,x,v}$} \begin{lemma} \label{lem_RA} Recall definition \eqref{def A} of the matrix $A_{v,x}$. We have the following identities, for $(x,v) \in \{\partial\Omega \times \mathbb{R}^d\} \backslash \gamma_0$, \begin{equation} \label{RA} \begin{split} R_xA_{v,x} &= \frac{1}{v\cdot n(x)} Q(v\otimes v)Q^{T} = \frac{1}{v\cdot n(x)} (Qv)\otimes (Qv), \\ A_{v,x} R_x &= \frac{1}{v\cdot n(x)} R_xQ(v\otimes v)Q^{T}R_x = -\frac{1}{R_xv\cdot n(x)} (QR_xv)\otimes (QR_xv), \\ \end{split} \end{equation} \begin{equation} \label{A2} \begin{split} A^{2}_{v,x} &= \frac{1}{(v\cdot n(x))^{2}} (QR_xv\otimes QR_xv)(Qv\otimes Qv), \end{split} \end{equation} \begin{equation} \label{Av=0} A_{v,x}v =0, \end{equation} where $Q := Q_{\frac{\pi}{2}}$ is counterclockwise rotation by angle $\frac{\pi}{2}$. \end{lemma} \begin{proof} We compute \begin{equation*} \begin{split} R_xA_{v,x}R_x &:= \left[\left((v\cdot n(x))I - (n(x) \otimes v) \right)\left(I + \frac{v\otimes n(x)}{v\cdot n(x)}\right)\right] \\ &= \big(Qv \otimes Qn(x)\big)\left(I + \frac{v\otimes n(x)}{v\cdot n(x)}\right). \end{split} \end{equation*} Now let us define $\tau(x)= Q_{-\frac{\pi}{2}}n(x)$ as tangential vector at $x\in\partial\Omega$. ($n$ as y-axis and $\tau$ as x-axis) Then, \begin{equation*} \begin{split} R_xA_{v,x}R_x &:= Qv \otimes \Big( -\tau - \frac{v\cdot\tau}{v\cdot n(x)}n(x) \Big) \\ &= -\frac{1}{v\cdot n(x)} Qv\otimes \Big( (v\cdot n(x))\tau + (v\cdot\tau)n(x) \Big) \\ &= -\frac{1}{v\cdot n(x)} Qv\otimes \big( R_xQ^{T}v \big) \\ &= \frac{1}{v\cdot n(x)} Qv\otimes \big( R_xQv \big) \\ &= \frac{1}{v\cdot n(x)} Q(v\otimes v)Q^{T}R_x, \\ \end{split} \end{equation*} and we get \eqref{RA} using $R_xQ=-R_xQ^T$, because \[ Q^{T}R_xQ = I - 2Q^{T}(n(x)\otimes n(x))Q = I - 2\tau\otimes\tau = -R_x. \\ \] \eqref{A2} is simply obatined by \eqref{RA}. By definition of $A_{v,x}$ in \eqref{def A}, one obtains that \begin{align*} A_{v,x}v = \left[\left((v\cdot n(x))I+(n(x) \otimes v) \right)\left(I-\frac{v\otimes n(x)}{v\cdot n(x)}\right)\right]v=\left((v\cdot n(x))I+(n(x)\otimes v))\right)(v-v)=0. \end{align*} \end{proof} Now, throughout this section, we study $C^{1}_{t,x,v}(\mathbb{R}_{+}\times \Omega\times \mathbb{R}^{2})$ of $f(t,x,v)$ of \eqref{solution} when \begin{equation} \label{t1 zero} 0 = t^{1}(t,x,v) \ \text{or equivalently} \ t = t_{\mathbf{b}}(x,v). \end{equation} \subsection{$C^{1}_{v}$ condition of $f$} Since we assume \eqref{t1 zero}, $X(0;t,x,v) = x^{1}(x,v) = x_{\mathbf{b}}(x,v) \in \partial \Omega$. 
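Before computing directional derivatives, it may be helpful to record a concrete instance of Lemma \ref{lem_RA}; this is only a sanity check, and the particular boundary point below is chosen for convenience and is not used elsewhere. Take $x^{1} = (1,0)$, so that $n(x^{1}) = (1,0)$ and $R_{x^{1}} = \mathrm{diag}(-1,1)$. For $v=(v_{1},v_{2})$ with $v_{1} = v\cdot n(x^{1}) \neq 0$, the definition \eqref{def A} gives \begin{equation*} A_{v,x^{1}} = \begin{pmatrix} 2v_{1} & v_{2} \\ 0 & v_{1} \end{pmatrix} \begin{pmatrix} 0 & 0 \\ -v_{2}/v_{1} & 1 \end{pmatrix} = \begin{pmatrix} -v_{2}^{2}/v_{1} & v_{2} \\ -v_{2} & v_{1} \end{pmatrix}, \end{equation*} so that $A_{v,x^{1}}v = 0$ and \begin{equation*} R_{x^{1}}A_{v,x^{1}} = \frac{1}{v_{1}} \begin{pmatrix} v_{2}^{2} & -v_{1}v_{2} \\ -v_{1}v_{2} & v_{1}^{2} \end{pmatrix} = \frac{(Qv)\otimes (Qv)}{v\cdot n(x^{1})}, \end{equation*} in agreement with \eqref{Av=0} and \eqref{RA}.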
To derive the compatibility condition for $C^{1}_{v}$ regularity of $f(t,x,v)$, we consider a $v$-perturbation and use the following notation for the perturbed trajectory: \begin{equation} \label{XV epsilon v} X^{\epsilon}(0) := X(0;t,x,v+\epsilon \hat{r}) , \quad V^{\epsilon}(0):=V(0;t,x,v+\epsilon \hat{r} ), \end{equation} where $\hat{r}\in\mathbb{R}^{2}$ is a unit vector. As $\epsilon \rightarrow 0$, we simply get \begin{equation*} \lim_{\epsilon \rightarrow 0} X(0;t,x,v+\epsilon \hat{r}) = x^{1}(x,v) = x_{\mathbf{b}}(x,v), \end{equation*} from the continuity of $X(0;t,x,v)$ in $v$. However, $V(0;t,x,v)$ is not continuous in $v$ because of \eqref{BC}. Explicitly, from Lemma \ref{nabla xv b}, \begin{equation} \label{R12_v} \frac{\partial}{\partial\varepsilon}t_{\mathbf{b}}(x, v+\varepsilon\hat{r})\vert_{\varepsilon=0} = \nabla_{v}t_{\mathbf{b}}(x,v)\cdot\hat{r} = -t_{\mathbf{b}}\frac{\hat{r}\cdot n(x_{\mathbf{b}}(x,v))}{v\cdot n(x_{\mathbf{b}}(x,v))},\quad \text{where}\quad v\cdot n(x_{\mathbf{b}}(x,v)) < 0. \end{equation} So we define, for fixed $(x,v)$, $v\neq 0$, \begin{equation} \label{set R_vel} \begin{split} R_{vel, 1} &:= \{ \hat{r}\in \S^{1} : \hat{r}\cdot n(x_{\mathbf{b}}(x,v)) > 0 \}, \\ R_{vel, 2} &:= \{ \hat{r}\in \S^{1} : \hat{r}\cdot n(x_{\mathbf{b}}(x,v)) \leq 0 \}. \\ \end{split} \end{equation} Then from \eqref{R12_v}, $\nabla_{v}t_{\mathbf{b}}(x,v)\cdot\hat{r} > 0$ when $\hat{r}\in R_{vel, 1}$ and $\nabla_{v}t_{\mathbf{b}}(x,v)\cdot\hat{r} \leq 0$ when $\hat{r}\in R_{vel, 2}$. Therefore, for two unit vectors $\hat{r}_1\in R_{vel, 1}$ and $\hat{r}_2\in R_{vel, 2}$, by a continuity argument, \begin{equation*} \lim_{\epsilon \rightarrow 0+} V(0;t,x,v+\epsilon \hat{r}_1) = v, \quad \lim_{\epsilon \rightarrow 0+} V (0;t,x,v+\epsilon \hat{r}_2) = v^1=R_{x^{1}}v. \\ \end{equation*} We consider directional derivatives with respect to $\hat{r}_1$ and $\hat{r}_2$. If $f$ belongs to the $C^1_v$ class, then $\nabla_v f(t,x,v)$ exists and the directional derivatives of $f$ with respect to $\hat{r}_1,\hat{r}_2$ are $\nabla_v f(t,x,v) \hat{r}_1,\;\nabla_v f(t,x,v) \hat{r}_2$.
Using \eqref{BC}, we have $f_{0}(x^{1}, v) = f_{0}(x^{1}, v^{1})$ and hence \begin{align*} \nabla_v f(t,x,v) \hat{r}_1 &= \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( f(t,x,v+\epsilon \hat{r}_1) - f(t,x,v) \right )\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon}\left( f_0(X(0;t,x,v+\epsilon \hat{r}_1),V(0;t,x,v+\epsilon \hat{r}_1)) - f_0(X(0;t,x,v),V(0;t,x,v)) \right)\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( f_0(X^{\epsilon}(0), V^{\epsilon}(0))- f_0 (X^{\epsilon}(0),v)+f_0(X^{\epsilon}(0),v) -f_0(X(0),v) \right) \\ &=\nabla_x f_0(X(0),v) \cdot \lim_{s\rightarrow 0+} \nabla_v X(s) \hat{r}_1+ \nabla_v f_0(X(0),v) \lim_{s \rightarrow 0+} \nabla_v V(s)\hat{r}_1, \\ \nabla_v f(t,x,v) \hat{r}_2 &= \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( f(t,x,v+\epsilon \hat{r}_2) - f(t,x,v) \right )\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon}\left( f_0(X(0;t,x,v+\epsilon \hat{r}_2),V(0;t,x,v+\epsilon \hat{r}_2)) - f_0(X(0;t,x,v),V(0;t,x,v)) \right)\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( f_0(X^{\epsilon}(0), V^{\epsilon}(0))- f_0 (X^{\epsilon}(0),v^{1})+f_0(X^{\epsilon}(0),v^{1}) -f_0(X(0),v^{1}) \right) \\ &=\nabla_x f_0(X(0),v^{1}) \cdot \lim_{s\rightarrow 0-} \nabla_v X(s) \hat{r}_2+ \nabla_v f_0(X(0),v^{1}) \lim_{s \rightarrow 0-} \nabla_v V(s)\hat{r}_2, \end{align*} which implies \begin{eqnarray} && \nabla_v f(t,x,v) =\nabla_x f_0(X(0),v) \lim_{s\rightarrow 0+} \nabla_v X(s)+ \nabla_v f_0(X(0),v) \lim_{s \rightarrow 0+} \nabla_v V(s), \label{case12 r1}\\ && \nabla_v f(t,x,v) =\nabla_x f_0(X(0),v^{1}) \lim_{s\rightarrow 0-} \nabla_v X(s)+ \nabla_v f_0(X(0),v^{1}) \lim_{s \rightarrow 0-} \nabla_v V(s). \label{case12 r2} \end{eqnarray} \noindent Since \begin{equation} \label{nabla XV_v+} \lim_{s\rightarrow 0+} \nabla_v X(s)= \lim_{s\rightarrow 0+}\nabla_{v}(x-v(t-s)) = -t I_{2\times 2}, \quad \lim_{s \rightarrow 0+} \nabla_v V(s) = \lim_{s \rightarrow 0+} \nabla_v v = I_{2\times 2}, \end{equation} $\nabla_v f(t,x,v)$ of \eqref{case12 r1} becomes \begin{equation} \label{c_1} \nabla_v f(t,x,v) = -t \nabla_x f_0(X(0),v) + \nabla_v f_0(X(0),v). \\ \end{equation} For \eqref{case12 r2}, using the product rule in Lemma \ref{matrix notation} and \eqref{normal} in Lemma \ref{d_n}, we have \begin{equation} \label{nabla XV_v-} \begin{split} \lim_{s\rightarrow 0-} \nabla_v X(s)& = \lim_{s\rightarrow 0-} \nabla_v (x^1 - (t^1+s)v^1)= \lim_{s\rightarrow 0-} \nabla_v x^1 + v^{1}\otimes\nabla_{v}t_{\mathbf{b}} \\ &= -t\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right) - t\frac{v^1 \otimes n(x^1)}{ v \cdot n(x^1)} \\ &= -t \Big( I -\frac{1}{v\cdot n(x^1)} \big( 2(v\cdot n(x^{1}))n(x^{1}) \big)\otimes n(x^{1}) \Big) = -tR_{x^{1}}, \\ \lim_{s \rightarrow 0-} \nabla_v V(s) &= \lim_{s \rightarrow 0-} \nabla_v (R_{x^{1}}v) \\ &=\lim_{s \rightarrow 0-} \left( I- 2(v \cdot n(x^1) ) \nabla_v n(x^1) -2 n(x^1)\otimes n(x^1) -2 (n(x^1)\otimes v ) \nabla_v n(x^1)\right)\\ &=R_{x^{1}} +2t(v\cdot n(x^1)) \left( I - \frac{ v \otimes n(x^1)}{ v \cdot n(x^1)} \right) +2t (n(x^1) \otimes v)\left( I - \frac{ v \otimes n(x^1)}{ v \cdot n(x^1)} \right) \\ &= R_{x^{1}} + 2t A_{v, x^{1}}, \end{split} \end{equation} where $A_{v, x^{1}}$ is defined in \eqref{def A}. 
Hence, using \eqref{nabla XV_v-}, $\nabla_v f(t,x,v)$ in \eqref{case12 r2} becomes \begin{align} \label{c_2} \begin{split} \nabla_v f(t,x,v) &= -t \nabla_x f_0(X(0),R_{x^{1}}v) R_{x^{1}} \\ &\quad +\nabla_v f_0(X(0),R_{x^{1}}v)R_{x^{1}} + t\nabla_v f_0(X(0),R_{x^{1}}v)\left[2\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right] \\ &= -t \nabla_x f_0(x^{1},R_{x^1}v) R_{x^{1}} + \nabla_v f_0(x^{1},R_{x^{1}}v) (R_{x^{1}} + 2tA_{v, x^{1}}), \end{split} \end{align} where we used $v\cdot n (x^1) = - v^1 \cdot n(x^1)$. Meanwhile, taking $\nabla_{v}$ of the specular reflection condition \eqref{BC} directly, we get \begin{equation} \label{comp_v} \nabla_{v}f_{0}(x,v) =\nabla_{v}f_{0}(x,R_{x}v)R_{x}, \quad \forall x\in\partial\Omega. \end{equation} Comparing \eqref{c_1}, \eqref{c_2}, and \eqref{comp_v}, we deduce \begin{align} \label{c_v} \begin{split} \nabla_x f_0( x^{1},v) &= \nabla_x f_0(x^{1},R_{x^1}v) R_{x^{1}} - 2\nabla_v f_0(x^{1}, R_{x^{1}}v)A_{v,x^{1}},\quad (x^{1}, v)\in \gamma_{-}. \end{split} \end{align} \subsection{$C^{1}_{x}$ condition of $f$} Recall that we assumed \eqref{t1 zero}. Similar to the previous subsection, we define the $x$-perturbed trajectory, \begin{equation} \label{XV epsilon x} X^{\epsilon}(0) :=X(0;t,x+\epsilon \hat{r}, v ), \quad V^{\epsilon}(0) := V(0;t,x+\epsilon \hat{r}, v), \end{equation} where $\hat{r}\in\mathbb{R}^{2}$ is a unit vector. As $\epsilon \rightarrow 0$, we simply get \begin{equation*} \lim_{\epsilon \rightarrow 0} X(0;t,x+\epsilon\hat{r},v) = x^{1}(x,v). \end{equation*} Similar to the previous subsection, using Lemma \ref{nabla xv b}, \begin{equation} \label{R12_x} \frac{\partial}{\partial\varepsilon}t_{\mathbf{b}}(x+\varepsilon\hat{r}, v)\big\vert_{\varepsilon=0} = \nabla_{x}t_{\mathbf{b}}(x,v)\cdot\hat{r} = \frac{\hat{r}\cdot n(x_{\mathbf{b}}(x,v))}{v\cdot n(x_{\mathbf{b}}(x,v))},\quad \text{where}\quad v\cdot n(x_{\mathbf{b}}(x,v)) < 0. \end{equation} So we define, for fixed $(x,v)$, $v\neq 0$, \begin{equation} \label{set R_sp} \begin{split} R_{sp, 1} &:= \{ \hat{r}\in \S^{1} : \hat{r}\cdot n(x_{\mathbf{b}}(x,v)) < 0 \}, \\ R_{sp, 2} &:= \{ \hat{r}\in \S^{1} : \hat{r}\cdot n(x_{\mathbf{b}}(x,v)) \geq 0 \}. \\ \end{split} \end{equation} Then from \eqref{R12_x}, $\nabla_{x}t_{\mathbf{b}}(x,v)\cdot\hat{r} > 0$ when $\hat{r}\in R_{sp, 1}$ and $\nabla_{x}t_{\mathbf{b}}(x,v)\cdot\hat{r} \leq 0$ when $\hat{r}\in R_{sp, 2}$. Therefore, for two unit vectors $\hat{r}_1\in R_{sp, 1}$ and $\hat{r}_2\in R_{sp, 2}$, by a continuity argument, \begin{equation*} \lim_{\epsilon \rightarrow 0+} V(0;t,x+\epsilon \hat{r}_1,v) = v, \quad \lim_{\epsilon \rightarrow 0+} V (0;t,x+\epsilon \hat{r}_2,v) = v^1=R_{x^{1}}v. \end{equation*} Using similar arguments as in the previous subsection, we obtain \begin{eqnarray} && \nabla_x f(t,x,v) \hat{r}_{1} =\nabla_x f_0(X(0),v) \lim_{s\rightarrow 0+} \nabla_x X(s)\hat{r}_{1} + \nabla_v f_0(X(0),v) \lim_{s \rightarrow 0+} \nabla_x V(s)\hat{r}_{1}, \label{case12 r1 x}\\ && \nabla_x f(t,x,v)\hat{r}_{2} =\nabla_x f_0(X(0),R_{x^{1}}v) \lim_{s\rightarrow 0-} \nabla_x X(s) \hat{r}_{2} + \nabla_v f_0(X(0),R_{x^{1}}v) \lim_{s \rightarrow 0-} \nabla_x V(s) \hat{r}_{2}. \label{case12 r2 x} \end{eqnarray} Since \begin{equation} \label{nabla XV_x+} \lim_{s\rightarrow 0+} \nabla_x X(s) = I_{2\times 2},\quad \lim _{s \rightarrow 0+} \nabla_x V(s)=0_{2 \times 2}, \end{equation} $\nabla_{x}f(t,x,v)$ of \eqref{case12 r1 x} becomes \begin{equation} \label{c_3} \nabla_x f(t,x,v) = \nabla_x f_0(X(0),v).
\end{equation} For $\nabla_{x}f(t,x,v)$ of \eqref{case12 r2 x}, we apply the product rule in Lemma \ref{matrix notation} and \eqref{normal} in Lemma \ref{d_n} to get \begin{equation} \label{nabla XV_x-} \begin{split} \lim_{s\rightarrow 0-} \nabla_x X(s)& = \lim_{s\rightarrow 0-} \nabla_x (x^1 - (t^1+s)v^1)=\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right) + \frac{v^1 \otimes n(x^1)}{ v \cdot n(x^1)} = R_{x^{1}},\\ \lim_{s \rightarrow 0-} \nabla_x V(s) &= \lim_{s \rightarrow 0-} \nabla_x (R_{x^{1}}v) \\ &=\lim_{s \rightarrow 0-} \left( - 2(v \cdot n(x^1) ) \nabla_x n(x^1) -2 (n(x^1)\otimes v ) \nabla_x n(x^1)\right)\\ &=-2(v\cdot n(x^1)) \left( I - \frac{ v \otimes n(x^1)}{ v \cdot n(x^1)} \right) -2 (n(x^1) \otimes v)\left( I - \frac{ v \otimes n(x^1)}{ v \cdot n(x^1)} \right) \\ &= -2 A_{v,x^{1}}. \end{split} \end{equation} Hence, using \eqref{nabla XV_x-}, $\nabla_x f(t,x,v)$ in \eqref{case12 r2 x} becomes \begin{align} \label{c_4} \begin{split} \nabla_x f(t,x,v) &= \nabla_x f_0(X(0), R_{x^{1}}v) R_{x^{1}} - 2\nabla_v f_0(X(0),R_{x^{1}}v) A_{v,x^{1}}. \end{split} \end{align} Combining \eqref{c_3} and \eqref{c_4}, \begin{align}\label{c_x} \begin{split} \nabla_x f_0( x^{1},v) &= \nabla_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}} - 2\nabla_v f_0(x^{1}, R_{x^{1}}v) A_{v,x^{1}},\quad (x^{1}, v)\in \gamma_{-}, \end{split} \end{align} which is identical to \eqref{c_v}. \\ \hide which exactly coincides with \eqref{c_v}. We rewrite compatibility condition as \begin{equation} \begin{split} \nabla_x f_0( x^{1},v) &= \nabla_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}} - \nabla_v f_0(x^{1}, R_{x^{1}}v)\left[2\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right],\quad (x^{1},v) \in \gamma_{-}. \end{split} \end{equation} \unhide \subsection{$C^{1}_{t}$ condition of $f$} To check the $C^1_t$ condition, we define \begin{align}\label{Perb_t} X^\epsilon(0):=X(0;t+\epsilon, x,v), \quad V^\epsilon(0):= V(0;t+\epsilon,x,v). \end{align} More specifically, \begin{align*} X^\epsilon(0)=x^1-(t^1+\epsilon) R_{x^{1}}v, \quad V^\epsilon(0)= R_{x^{1}}v,\quad \epsilon > 0, \end{align*} and \begin{align*} X^\epsilon(0)=x-(t+\epsilon)v,\quad V^\epsilon(0)=v,\quad \epsilon < 0. \end{align*} Thus, the case ($\epsilon>0$) describes the situation after bounce (backward in time) and the case ($\epsilon<0$) describes the situation just before bounce (backward in time). Then, for $\epsilon>0$, \begin{align*} f_t(t,x,v)&= \lim_{\epsilon\rightarrow0+}\frac{f(t+\epsilon,x,v)-f(t,x,v)}{\epsilon} \\ &=\lim_{\epsilon\rightarrow 0+} \frac{f_0(X^\epsilon(0),V^\epsilon(0))-f_0(X(0),V(0))}{\epsilon}\\ &=\lim_{\epsilon\rightarrow 0+} \frac{f_0(X^\epsilon(0),R_{x^{1}}v)-f_0(X(0),R_{x^{1}}v)}{\epsilon}\\ &=\nabla_x f_0(x^1,R_{x^{1}}v) \lim_{\epsilon \rightarrow 0+} \frac{X^\epsilon(0)-X(0)}{\epsilon}\\ &=-\nabla_x f_0(x^1,R_{x^{1}}v)R_{x^{1}}v. \end{align*} We only consider the situation just before collision and then \begin{align*} f_t(t,x,v)&= \lim_{\epsilon\rightarrow0-}\frac{f(t+\epsilon,x,v)-f(t,x,v)}{\epsilon} \\ &= \lim_{\epsilon\rightarrow0-}\frac{f_0(X^\epsilon(0),v)-f_0(X(0),v)}{\epsilon}\\ &= \nabla_xf_0(x^1,v) \lim_{\epsilon\rightarrow0-} \frac{X^\epsilon(0)-X(0)}{\epsilon}\\ &=-\nabla_x f_0(x^1,v)v. \end{align*} Thus, we derive a $C^1_t$ condition \begin{equation} \label{c_t} \nabla_x f_0(x^1,v)v = \nabla_x f_0(x^1,R_{x^{1}}v)R_{x^{1}}v,\quad (x^{1}, v)\in \gamma_{-}. \end{equation} Actually, \eqref{c_t} is just particular case of \eqref{c_v}, because of \eqref{Av=0}. 
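To spell this out (a short verification for completeness, using only \eqref{c_v} and \eqref{Av=0}), multiplying \eqref{c_v} from the right by the column vector $v$ gives \begin{equation*} \nabla_x f_0(x^{1},v)v = \nabla_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}}v - 2\nabla_v f_0(x^{1}, R_{x^{1}}v)A_{v,x^{1}}v = \nabla_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}}v, \end{equation*} which is exactly \eqref{c_t}.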
\hide {\color{blue} \begin{remark} \label{trivial case} Let us consider trivial case : $f(t,x,v) = f_{0}(v)$, spatially independent case. Since specular reflection holds for all $x\in \partial\Omega$, $f_{0}$ should be radial function, $f_{0}(v) = f_{0}(|v|)$. \eqref{Cond} also holds for this case, because vector $\nabla_{v}f_{0}(x^{1},Rv)$ has $Rv$ direction and \begin{equation} \begin{split} &\underbrace{(Rv)}_{\text{row vector}}\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right) \\ &= \left((v\cdot n(x^1)) (Rv) + (Rv\cdot n(x^1)) v \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right) \\ &= (v\cdot n(x^1)) Rv - (v\cdot n(x^{1}))v - (Rv\cdot v)n(x^{1}) + |v|^{2}n(x^{1}) \\ &= (v\cdot n(x^1)) \big(v - 2(v\cdot n(x^{1}))n(x^{1}) \big)- (v\cdot n(x^{1}))v - \big(|v|^{2} - 2|v\cdot n(x^{1})|^{2}\big) n(x^{1}) + |v|^{2}n(x^{1}) \\ &= 0. \end{split} \end{equation} \end{remark} } \unhide \subsection{Proof of Theorem \ref{thm 1}} \begin{proof} [Proof of Theorem \ref{thm 1}] If $t^{k} \neq 0$ for all $k\in \mathbb{N}$, then $X(0;t,x,v)$ and $V(0;t,x,v)$ are both smooth functions of $(t,x,v)$. By the chain rule and $f_0\in C^{1}_{x,v}$, $f(t,x,v)$ of \eqref{solution} is also $C^{1}_{t,x,v}$. \\ Now let us assume $t^{k}(t,x,v) = 0$ for some $k\in \mathbb{N}$. Because of the discontinuous behavior of $V(0;t,x,v)$, we consider the following two one-sided limits: \begin{align*} & \quad \lim_{s\rightarrow 0+} \nabla_v V(s) \textcolor{blue}{ ( \text{or} \ \nabla_vX(s))} = \underbrace{ \lim_{s\rightarrow 0+} \frac{\partial V(s)\textcolor{blue}{( \text{or} \ \partial X(s))}}{\partial(t^{k-1},x^{k-1}, v^{k-1})} } \frac{\partial(t^{k-1},x^{k-1}, v^{k-1})}{\partial v},\\ & \quad \lim_{s\rightarrow 0-} \nabla_v V(s) \textcolor{blue}{( \text{or} \ \nabla_vX(s))}= \underbrace{ \lim_{s\rightarrow 0-} \frac{\partial V(s)\textcolor{blue}{( \text{or} \ \partial X(s))}}{\partial(t^{k},x^{k}, v^{k})}\frac{\partial(t^{k},x^k,v^k)}{\partial(t^{k-1},x^{k-1},v^{k-1})} } \frac{\partial(t^{k-1},x^{k-1}, v^{k-1})}{\partial v}. \end{align*} First, we note that the factor $\displaystyle \frac{\partial(t^{k-1},x^{k-1}, v^{k-1})}{\partial v}$, which is common to both expressions above, is smooth. From Lemma \ref{nabla xv b}, $t^{1}(t,x,v) = t - t_{\mathbf{b}}(x,v)$, $x^{1}(x,v) = x - t_{\mathbf{b}}(x,v) v$, and $v^{1}(x,v) = R_{x_{\mathbf{b}}(x,v)}v$ are all smooth functions of $(x,v)$ if $(t^{1}, x^{1}, v^{1})$ is nongrazing at $x^{1}$. Now, let us consider the mapping \[ (t^{1}, x^{1}, v^{1}) \mapsto (t^{2}, x^{2}, v^{2}) \] which is smooth by \[ t^{2} = t^{1} - t_{\mathbf{b}}(x^{1}, v^{1}),\quad x^{2} = x^{1} - v^{1}t_{\mathbf{b}}(x^{1}, v^{1}),\quad v^{2} = R_{x^{2}}v^{1}. \] (Note that the derivative of $t_{\mathbf{b}}$ on $\partial\Omega \times \mathbb{R}^{3}_{v}$ can be computed via a local parametrization.) By the chain rule, we easily derive that $(t^{k}, x^{k}, v^{k})$ is smooth in $(x, v)$. For the explicit computations and the corresponding Jacobians, we refer to \cite{KimLee}. Now, it suffices to compare only the two underbraced terms above. {\bf This means that no generality is lost by setting $k=1$.} \\ The initial-boundary compatibility conditions for $C^{1}_{t,x,v}$ were obtained in \eqref{c_v}, \eqref{c_x}, and \eqref{c_t}. Since the compatibility conditions \eqref{c_x} and \eqref{c_t} are covered by \eqref{c_v}, we have $f(t,x,v)\in C^{1}_{t,x,v}$ once \eqref{c_v} holds.
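To illustrate the reduction to $k=1$ above: for $k=2$, the chain rule used there factorizes as \begin{equation*} \frac{\partial(t^{2},x^{2}, v^{2})}{\partial v} = \frac{\partial(t^{2},x^{2}, v^{2})}{\partial(t^{1},x^{1}, v^{1})}\,\frac{\partial(t^{1},x^{1}, v^{1})}{\partial v}, \end{equation*} where each factor is smooth as long as the corresponding bounce is nongrazing, and the same factorization applies inductively for general $k$.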
To rewrite \eqref{c_v} in the more symmetric form \eqref{C1 cond}, we apply \eqref{comp_v} and multiply both sides by the invertible matrix $R_{x^{1}}$ from the right to obtain \begin{equation*} \begin{split} &\big( \nabla_x f_0(x^{1},v) + \nabla_v f_0(x^{1}, v) R_{x^1}A_{v,x^{1}} \big) R_{x^1} = \nabla_x f_0(x^1, R_{x^1}v) - \nabla_v f_0(x^1, R_{x^1}v) A_{v,x^{1}}R_{x^1}. \\ \end{split} \end{equation*} This yields \begin{equation*} \begin{split} \Big[ \nabla_x f_0( x,v) + \nabla_v f_0(x, v) \frac{ (Qv)\otimes (Qv) }{v\cdot n(x)} \Big]R_x &= \nabla_x f_0(x, R_xv) + \nabla_v f_0(x, R_xv) \frac{ (QR_xv)\otimes (QR_xv) }{R_xv\cdot n(x)}, \end{split} \end{equation*} by \eqref{RA}. \\ Now we claim that the compatibility condition \eqref{c_v} also holds for $(x^{1}, v)\in \gamma_{+}$. By multiplying both sides by $R_{x^{1}}$ and using $R^2_{x^1} = I$, $R_{x^1}n(x^1) = -n(x^1)$, and \eqref{comp_v}, we obtain \begin{equation} \label{C1 gamma+} \begin{split} \nabla_x f_0( x^{1}, R_{x^1}v) &= \nabla_x f_0(x^{1}, v) R_{x^{1}} + 2 \nabla_v f_0(x^{1}, v) \underbrace{ R_{x^1}\left[\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right] R_{x^1} }. \end{split} \end{equation} Since $R_{x^1}$ is symmetric, i.e., $R_{x^1}=R^{T}_{x^1}$, the underbraced term is written as \begin{equation} \begin{split} &R_{x^{1}} A_{v,x^{1}} R_{x^{1}} \\ &= R_{x^{1}} \left[\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right] R_{x^{1}} \\ &= -(R_{x^1}v\cdot n(x^{1}))I - R_{x^1}v\otimes R_{x^1}n(x^{1}) + R_{x^1}n(x^{1})\otimes R_{x^1}v - \frac{R_{x^1}n(x^{1})\otimes R_{x^1}n(x^{1})}{v\cdot n(x^{1})}|R_{x^1}v|^{2} \\ &= - \left[\left((R_{x^1}v\cdot n(x^1))I+(n(x^1) \otimes R_{x^1}v) \right)\left(I-\frac{R_{x^1}v\otimes n(x^1)}{R_{x^1}v\cdot n(x^1)}\right)\right] \\ &= -A_{R_{x^1}v,x^{1}}, \end{split} \end{equation} and hence \eqref{C1 gamma+} is identical to \eqref{c_v} when $(x^{1},v)\in \gamma_{+}$. Finally, we prove that $f(t,x,v)$ is not of class $C^1_{t,x,v}$ at time $t$ such that $t^k(t,x,v)=0$ for some $k$ if \eqref{C1 cond} does not hold. As before, using the chain rule, we may set $t^1(t,x,v)=0$. Thus, it suffices to prove that $f(t,x,v)$ is not of class $C^1_{t,x,v}$ at time $t$ which satisfies $t^1(t,x,v)=0$ if \eqref{C1 cond} is not satisfied for $(X(0;t,x,v),v)\in \gamma_-$. Recall the directional derivatives with respect to $\hat{r}_1$ and $\hat{r}_2$ used to obtain $f \in C^1_{t,x,v}(\mathbb{R}_+\times \mathcal{I})$. In the $C^1_v$ case, we deduced the two expressions \eqref{case12 r1} and \eqref{case12 r2} from the directional derivatives. However, if the initial data $f_0$ does not satisfy condition \eqref{C1 cond} at $(X(0;t,x,v),v)\in \gamma_-$, the two expressions cannot coincide. This means that $f(t,x,v)$ is not $C^1_v$ at $t$ such that $t^1(t,x,v)=0$. The $C^1_{t}$ and $C^1_{x}$ cases follow in the same way. \end{proof} \section{Initial-boundary compatibility condition for $C^{2}_{t,x,v}$} As mentioned at the beginning of the previous section, we treat the problem \eqref{eq} as a 2D problem in the unit disk $\{x\in\mathbb{R}^{2} : |x| < 1 \}$. Throughout this section, we use the following notation to interchange columns and rows for notational convenience: \[ \begin{pmatrix} a \\ b \end{pmatrix} \stackrel{c\leftrightarrow r}{=} \begin{pmatrix} a & b \end{pmatrix} ,\quad \begin{pmatrix} a & b \end{pmatrix} \stackrel{r\leftrightarrow c}{=} \begin{pmatrix} a \\ b \end{pmatrix}. \\ \] Similar to the previous section, we assume \eqref{t1 zero}, i.e., $t^{1}(t,x,v) = 0$.
We also assume $f_0$ satisfies specular reflection \eqref{BC} and $C^{1}_{t,x,v}$ compatibility condition \eqref{c_v} (or \eqref{C1 cond}) in this section. \\ \subsection{Condition for $\nabla_{xv}$} Similar to previous section, we split perturbed direction into \eqref{set R_sp}. We also note that $\nabla_{v}f(t,x,v)$ can be written as \eqref{c_1} or \eqref{c_2}, which are identical by assuming \eqref{c_v}. First, using \eqref{c_1}, $\hat{r}_{1}$ of \eqref{set R_sp}, and notation \eqref{XV epsilon x} \begin{equation} \label{nabla_xv f case1} \begin{split} &\nabla_{xv} f(t,x,v) \hat{r}_1 \stackrel{c\leftrightarrow r}{=} \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( \nabla_{v}f(t,x+\epsilon \hat{r}_1,v) - \nabla_{v}f(t,x,v) \right ) \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon}\Big( \nabla_{v}\big[ f_0(X(0;t,x+\epsilon \hat{r}_1,v),V(0;t,x+\epsilon \hat{r}_1,v)) \big] - \big( -t \nabla_x f_0(X(0),v) + \nabla_v f_0(X(0),v) \big)\Big) \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) \\ &\quad\quad\quad\quad - \big( -t \nabla_x f_0(X(0),v) + \nabla_v f_0(X(0),v) \big) \Big\} \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{x}f_{0}(X(0), v) \big] + \big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), v) - \nabla_{x}f_{0}(X(0), v) \big] + \big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), v) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\ &\stackrel{r\leftrightarrow c}{=} \nabla_{xx}f_{0}(X(0),v)\lim_{s\rightarrow 0+} \nabla_{x}X(s) (-t )\hat{r}_1 + \nabla_{xv}f_{0}(X(0),v)\lim_{s\rightarrow 0+}\nabla_{x}X(s) \hat{r}_1 \\ &= \Big( \nabla_{xx}f_{0}(x^{1},v) (-t ) + \nabla_{xv}f_{0}(x^{1},v) \Big) \hat{r}_1, \end{split} \end{equation} where we have used \eqref{nabla XV_v+}, \eqref{nabla XV_x+}, $\nabla_{v}X^{\varepsilon}(0)=-t I_{2}$., and $\nabla_{v}V^{\varepsilon}(0)= I_{2}$. Similarly, using \eqref{c_2} and $\hat{r}_{2}$ of \eqref{set R_sp}, \begin{equation} \notag \begin{split} &\nabla_{xv} f(t,x,v) \hat{r}_2 \stackrel{c\leftrightarrow r}{=} \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( \nabla_{v}f(t,x+\epsilon \hat{r}_2,v) - \nabla_{v}f(t,x,v) \right )\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) \\ &\quad\quad\quad - \big( -t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} + \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}})\big) \Big\} \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}}) \Big\} \\ &:= I_{xv,1} + I_{xv,2} . 
\end{split} \end{equation} Using \eqref{nabla XV_v-} and \eqref{nabla XV_x-}, \begin{equation*} \begin{split} I_{xv,1} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}X(s) \\ &\quad\quad \quad\quad\quad + \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-} \nabla_{v}X(s) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} ,\quad\quad \lim_{s\rightarrow 0-} \nabla_{v}X(s) = -tR_{x^{1}}, \\ &= \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon}\Big( \nabla_{x}f_{0}(X^{\varepsilon}(0),V^{\varepsilon}(0)) - \nabla_{x}f_{0}(X(0), R_{x^1}v) \Big) (-tR_{x^1}) \\ &\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} \\ &\quad + (-tR_{x^1}) \Big( \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_{x}X(s) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{x}V(s) \Big) \hat{r}_{2} \\ &= \underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}X(0;t, x+\epsilon \hat{r}_{2}, v) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} }_{:=(*)_{xv,1}\hat{r}_{2} } \\ &\quad + (-tR_{x^1})\big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^1} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \hat{r}_{2}, \\ \end{split} \end{equation*} and \begin{equation*} \begin{split} I_{xv,2} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}V(s) \\ &\quad\quad\quad\quad + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) (R_{x^{1}} + 2tA_{v,x^{1}}) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}})\Big\} \\ &= \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}V^{\varepsilon}(0) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \\ &\quad\quad\quad\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \Big) (R_{x^{1}} + 2tA_{v,x^{1}}) \\ &\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}V^{\varepsilon}(0) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} \\ &\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}})\Big( \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_{x}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{x}V(s) \Big) \hat{r}_{2} \\ &= \underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}V(0;t,x+\epsilon \hat{r}_{2}, v) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} }_{:=(*)_{xv,2} \hat{r}_{2}} \\ &\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) \big] \hat{r}_{2}. 
\\ \end{split} \end{equation*} Now we compute two underbraced $(*)_{xv,1}$ and $(*)_{xv,2}$ \\ \begin{equation} \label{xv star1} \begin{split} (*)_{xv,1} \hat{r}_{2} &= \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} \\ &= \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{1}}X(s)) \hat{r}_{2}, \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{2}}X(s)) \hat{r}_{2} \Big]^{T} \\ &= \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{1}}X(s)) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{2}}X(s)) \end{bmatrix} \hat{r}_{2} \\ &= \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-t R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-t R_{x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2}. \\ \end{split} \end{equation} Similarly, \begin{equation} \label{xv star2} \begin{split} (*)_{xv,2} \hat{r}_{2} &= \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{1}}V(s)) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{2}}V(s)) \end{bmatrix} \hat{r}_{2} \\ &= \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1 + 2t A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2 + 2t A_{v,x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2}, \\ \end{split} \end{equation} where $A^i$ means $i$th column of matrix $A$. Therefore, \begin{equation} \label{nabla_xv f case2} \begin{split} \nabla_{xv}f(t,x,v) &= \underline{(*)_{xv,1}}_{\eqref{xv star1}} + \underline{(*)_{xv,2}}_{\eqref{xv star2}} \\ &\quad + (-tR_{x^1})\big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^1} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \\ &\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) \big]. \\ \end{split} \end{equation} From \eqref{nabla_xv f case1} and \eqref{nabla_xv f case2}, we get the following compatibility condition \begin{equation} \label{xv comp} \begin{split} &(-t)\nabla_{xx}f_{0}(x^{1},v) + \nabla_{xv}f_{0}(x^{1},v) \\ &= \underline{(*)_{xv,1}}_{\eqref{xv star1}} + \underline{(*)_{xv,2}}_{\eqref{xv star2}} \\ &\quad + (-tR_{x^1}) \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^1} + (-tR_{x^1})\nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\ &\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) . \end{split} \end{equation} \subsection{Condition for $\nabla_{vv}$} We split perturbed direction into \eqref{set R_vel}. $\nabla_{v}f(t,x,v)$ can be written as \eqref{c_1} or \eqref{c_2}. 
Using \eqref{c_1}, $\hat{r}_{1}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v}, \\ \begin{equation} \label{nabla_vv f case1} \begin{split} &\nabla_{vv} f(t,x,v) \hat{r}_1 \\ &\stackrel{c\leftrightarrow r}{=} \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{x}f_{0}(X(0), v) \big] + \big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), v+\epsilon \hat{r}_{1} ) - \nabla_{x}f_{0}(X(0), v) \big] + \big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), v+\epsilon \hat{r}_{1} ) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\ &\stackrel{r\leftrightarrow c}{=} \Big[ -t \nabla_{xx}f_{0}(X(0),v)\lim_{s\rightarrow 0+} \nabla_{v}X(s) -t \nabla_{vx}f_{0}(X(0),v)\lim_{s\rightarrow 0+} \nabla_{v}V(s) \\ &\quad + \nabla_{xv}f_{0}(X(0),v)\lim_{s\rightarrow 0+}\nabla_{v}X(s) + \nabla_{vv}f_{0}(X(0),v)\lim_{s\rightarrow 0+}\nabla_{v}V(s) \Big] \hat{r}_1 \\ &= \Big[ (-t ) \nabla_{xx}f_{0}(x^{1},v) (-t) + (-t )\nabla_{vx}f_{0}(x^{1},v) + \nabla_{xv}f_{0}(x^{1},v) (-t ) + \nabla_{vv}f_{0}(x^{1},v) \Big] \hat{r}_1, \end{split} \end{equation} where we have used \eqref{nabla XV_v+} and note that we have $\nabla_{v}X^{\varepsilon}(0)=-t I_{2}$ and $\nabla_{v}V^{\varepsilon}(0)= I_{2}$ for $v+\varepsilon\hat{r}_1$ case also. \\ Similarly, using \eqref{c_2}, $\hat{r}_{2}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v}, \begin{equation*} \begin{split} &\nabla_{vv} f(t,x,v) \hat{r}_{2} \\ &\stackrel{c\leftrightarrow r}{=} \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}}) \Big\} \\ &:= I_{vv,1} + I_{vv,2}, \end{split} \end{equation*} and each $I_{vv,1},I_{vv,2}$ are estimated by \begin{equation*} \begin{split} I_{vv,1} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}X(s) \\ &\quad\quad \quad\quad\quad + \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-} \nabla_{v}X(s) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} ,\quad \lim_{s\rightarrow 0-} \nabla_{v}X(s) = -tR_{x^{1}}, \\ &\stackrel{r \leftrightarrow c}{= } \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} \\ &\quad + (-tR_{x^1}) \Big( \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_{v}X(s) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \hat{r}_{2} \\ &= \underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}X(0; t, x, v+\epsilon \hat{r}_{2}) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} }_{(*)_{vv,1}\hat{r}_{2} } \\ &\quad + (-tR_{x^1}) \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^1} + 2tA_{v,x^{1}}) \big] \hat{r}_{2}, \\ \end{split} \end{equation*} \begin{equation*} \begin{split} 
I_{vv,2} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}V(s) \\ &\quad\quad\quad\quad + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) (R_{x^{1}} + 2tA_{v,x^{1}}) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}}) \Big\} \\ &\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}V^{\varepsilon}(0) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} \\ &\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \Big( \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_{v}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \hat{r}_{2} \\ &= \underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{v}V(0; t, x, v+\epsilon \hat{r}_{2}) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} }_{(*)_{vv,2} \hat{r}_{2} } \\ &\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big] \hat{r}_{2}. \\ \end{split} \end{equation*} Similar to \eqref{xv star1} and \eqref{xv star2}, using Lemma \ref{nabla xv b}, \begin{equation} \label{vv star1} \begin{split} (*)_{vv,1} \hat{r}_{2} &= \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(-t R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(-t R_{x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2} = t^{2} \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2}, \\ \end{split} \end{equation} \begin{equation} \label{vv star2} \begin{split} (*)_{vv,2} \hat{r}_{2} &= \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^1 + 2t A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^2 + 2t A_{v,x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2} \\ &= -t \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2} - 2t^{2} \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2} \\ &\quad + 2t \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}} \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}} \end{bmatrix} \hat{r}_{2}. \\ \end{split} \end{equation} Hence, we get \begin{equation} \label{nabla_vv f case2} \begin{split} &\nabla_{vv} f(t,x,v) \\ &= \underline{(*)_{vv,1}}_{\eqref{vv star1}} + \underline{(*)_{vv,2}}_{\eqref{vv star2}} \\ &\quad + (-tR_{x^1}) \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^1} + 2tA_{v,x^{1}}) \big] \\ &\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big]. \\ \end{split} \end{equation} Then from \eqref{nabla_vv f case1} and \eqref{nabla_vv f case2} we get the following compatibility condition. 
\begin{equation} \label{vv comp} \begin{split} &(-t ) \nabla_{xx}f_{0}(x^{1},v) (-t) + (-t )\nabla_{vx}f_{0}(x^{1},v) + \nabla_{xv}f_{0}(x^{1},v) (-t ) + \nabla_{vv}f_{0}(x^{1},v) \\ &= \underline{(*)_{vv,1}}_{\eqref{vv star1}} + \underline{(*)_{vv,2}}_{\eqref{vv star2}} \\ &\quad + (-tR_{x^1}) \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + (-tR_{x^1})\nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^1} + 2tA_{v,x^{1}}) \\ &\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}). \\ \end{split} \end{equation} \subsection{Condition for $\nabla_{xx}$} We split perturbed direction into \eqref{set R_sp}. $\nabla_{x}f(t,x,v)$ can be written as \eqref{c_3} or \eqref{c_4}, which are identical due to \eqref{c_v}. Using \eqref{c_3}, $\hat{r}_{1}$ of \eqref{set R_sp}, and notation \eqref{XV epsilon x}, \\ \begin{equation} \label{nabla_xx f case1} \begin{split} &\nabla_{xx} f(t,x,v) \hat{r}_1 \stackrel{c \leftrightarrow r}{=} \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( \nabla_{x}f(t,x+\epsilon \hat{r}_1,v) - \nabla_{x}f(t,x,v) \right ) \\ &= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon}\Big( \nabla_{x}\big[ f_0(X(0;t,x+\epsilon \hat{r}_1,v),V(0;t,x+\epsilon \hat{r}_1,v)) \big] - \nabla_{x}f_{0}(X(0),v) \Big) \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \underbrace{\nabla_{x}V^{\varepsilon}(0)}_{=0} - \nabla_{x}f_{0}(X(0),v) \Big\} \\ &\stackrel{r \leftrightarrow c}{=} \nabla_{xx}f_{0}(x^{1},v) \lim_{s \rightarrow 0+}\nabla_{x}X(s) \hat{r}_1 = \nabla_{xx}f_{0}(x^{1},v) \hat{r}_1, \end{split} \end{equation} where we have used \eqref{nabla XV_v+}, \eqref{nabla XV_x+}, $\nabla_{x}X^{\varepsilon}(0) = I_{2}$, and $\nabla_{x}V^{\varepsilon}(0)= 0$. 
Similarly, using \eqref{c_4}, $\hat{r}_{2}$ of \eqref{set R_sp}, and notation \eqref{XV epsilon x}, \\ \begin{equation} \notag \begin{split} &\nabla_{xx} f(t,x,v) \hat{r}_2 \stackrel{c \leftrightarrow r}{=} \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( \nabla_{x}f(t,x+\epsilon \hat{r}_2,v) - \nabla_{x}f(t,x,v) \right )\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) \\ &\quad\quad\quad - \big( \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} - 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \big) \Big\} \\ &= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big\} \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\ &:= I_{xx,1} + I_{xx,2} , \end{split} \end{equation} where \begin{equation*} \begin{split} I_{xx,1} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s \rightarrow 0-}\nabla_{x}X(s) \\ &\quad + \Big( \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) R_{x^{1}} - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big) \Big\} \\ &\stackrel{r \leftrightarrow c}{=} \underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{x}X(0; t, x+\epsilon\hat{r}_{2}, v) - \lim_{s \rightarrow 0-}\nabla_{x}X(s) \Big) \Big]^{T} }_{(*)_{xx,1}\hat{r}_{2}} \\ &\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \hat{r}_{2}, \end{split} \end{equation*} \begin{equation*} \begin{split} I_{xx,2} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0))\lim_{s \rightarrow 0-}\nabla_{x}V(s) \\ &\quad - 2\nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) A_{v,x^{1}} + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\ &\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{x}V^{\varepsilon}(0) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} \\ &\quad + (- 2A^{T}_{v,x^{1}}) \Big\{ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{x}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big\} \hat{r}_{2} \\ &= \underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{x}V(0; t, x+\epsilon\hat{r}_{2}, v) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} }_{(*)_{xx,2}\hat{r}_{2}} \\ &\quad + (- 2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2 A_{v,x^{1}}) \big] \hat{r}_{2}. 
\\ \end{split} \end{equation*} Similar to \eqref{xv star1} and \eqref{xv star2}, \begin{equation} \label{xx star1} \begin{split} (*)_{xx,1} \hat{r}_{2} &= \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2}, \\ \end{split} \end{equation} \begin{equation} \label{xx star2} \begin{split} (*)_{xx,2} \hat{r}_{2} &= \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(- 2 A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(- 2 A_{v,x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2}. \\ \end{split} \end{equation} Hence, \begin{equation} \label{nabla_xx f case2} \begin{split} &\nabla_{xx} f(t,x,v) \\ &= \underline{(*)_{xx,1}}_{\eqref{xx star1}} + \underline{(*)_{xx,2}}_{\eqref{xx star2}} \\ &\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \\ &\quad + (- 2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2 A_{v,x^{1}}) \big]. \end{split} \end{equation} Then, from \eqref{nabla_xx f case1} and \eqref{nabla_xx f case2}, we get the following compatibility condition \begin{equation} \label{xx comp} \begin{split} &\nabla_{xx}f_{0}(x^{1},v) \\ &= \underline{(*)_{xx,1}}_{\eqref{xx star1}} + \underline{(*)_{xx,2}}_{\eqref{xx star2}} \\ &\quad + R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\ &\quad + (- 2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (- 2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2 A_{v,x^{1}}). \\ \end{split} \end{equation} \subsection{Condition for $\nabla_{vx}$} We split perturbed direction into \eqref{set R_vel}. $\nabla_{x}f(t,x,v)$ can be written as \eqref{c_3} or \eqref{c_4}. Using \eqref{c_3}, $\hat{r}_{1}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v}, \\ \begin{equation} \label{nabla_vx f case1} \begin{split} &\nabla_{vx} f(t,x,v) \hat{r}_1 \stackrel{c\leftrightarrow r}{=} \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( \nabla_{x}f(t,x,v+\epsilon \hat{r}_1) - \nabla_{x}f(t,x,v) \right ) \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon}\Big( \nabla_{x}\big[ f_0(X(0;t,x ,v+\epsilon \hat{r}_1),V(0;t,x ,v+\epsilon \hat{r}_1)) \big] - \nabla_{x}f_{0}(X(0),v) \Big) \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) - \nabla_{x}f_{0}(X(0),v) \Big\} \\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), v+\epsilon \hat{r}_{1} ) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), v+\epsilon \hat{r}_{1} ) \underbrace{ \nabla_{x}V^{\varepsilon}(0) }_{=0} - \nabla_{x}f_{0}(X(0),v) \Big\} \\ &\stackrel{r\leftrightarrow c}{=} \nabla_{xx}f_{0}(x^{1}, v)\lim_{s \rightarrow 0+}\nabla_{v}X(s)\hat{r}_{1} + \nabla_{vx}f_{0}(x^{1}, v) \lim_{s \rightarrow 0+}\nabla_{v}V(s)\hat{r}_{1} \\ &= \big( \nabla_{xx}f_{0}(x^{1}, v)(-t) + \nabla_{vx}f_{0}(x^{1}, v) \big) \hat{r}_{1} , \\ \end{split} \end{equation} where we have used \eqref{nabla XV_v+}, \eqref{nabla XV_x+}, $\nabla_{x}X^{\varepsilon}(0) = I_{2}$, and $\nabla_{x}V^{\varepsilon}(0)= 0$. 
Similarly, using \eqref{c_4}, $\hat{r}_{2}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v}, \\ \begin{equation} \notag \begin{split} &\nabla_{vx} f(t,x,v) \hat{r}_2 \stackrel{c\leftrightarrow r}{=} \lim _{\epsilon\rightarrow 0+} \frac{1}{\epsilon}\left ( \nabla_{x}f(t,x,v+\epsilon \hat{r}_2) - \nabla_{x}f(t,x,v) \right )\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) \\ &\quad\quad\quad - \big( \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} - 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \big) \Big\} \\ &= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big\} \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\ &:= I_{vx,1} + I_{vx,2}, \end{split} \end{equation} where \begin{equation*} \begin{split} I_{vx,1} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s \rightarrow 0-}\nabla_{x}X(s) \\ &\quad + \Big( \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) R_{x^{1}} - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big) \Big\} \\ &\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{x}X^{\varepsilon}(0) - \lim_{s \rightarrow 0-}\nabla_{x}X(s) \Big) \Big]^{T} \\ &\quad + R_{x^{1}} \Big\{ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}X(s) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}V(s) \Big\} \hat{r}_{2} \\ &= \underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{x}X(0; t, x, v+\epsilon \hat{r}_{2}) - \lim_{s \rightarrow 0-}\nabla_{x}X(s) \Big) \Big]^{T} }_{(*)_{vx,1} \hat{r}_{2} } \\ &\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)(-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big] \hat{r}_{2}, \end{split} \end{equation*} \begin{equation*} \begin{split} I_{vx,2} &:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0))\lim_{s \rightarrow 0-}\nabla_{x}V(s) \\ &\quad - 2\nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) A_{v,x^{1}} + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\ &\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{x}V^{\varepsilon}(0) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} \\ &\quad + (-2A^{T}_{v,x^{1}}) \Big\{ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}V(s) \Big\} \hat{r}_{2} \\ &= \underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \Big( \nabla_{x}V(0; t, x, v+\epsilon \hat{r}_{2}) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} }_{(*)_{vx,2}\hat{r}_{2}} \\ &\quad + (-2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v)
(-tR_{x^{1}}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) ( R_{x^{1}} + 2tA_{v,x^{1}}) \big] \hat{r}_{2}. \end{split} \end{equation*} Similar to \eqref{xv star1} and \eqref{xv star2}, \begin{equation} \label{vx star1} \begin{split} (*)_{vx,1} \hat{r}_{2} &= \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2} = -t \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2}, \\ \end{split} \end{equation} \begin{equation} \label{vx star2} \begin{split} (*)_{vx,2} \hat{r}_{2} &= \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(- 2 A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(- 2 A_{v,x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2} \\ &= 2t \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2) \end{bmatrix} \hat{r}_{2} -2 \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}} \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}} \end{bmatrix} \hat{r}_{2}. \\ \end{split} \end{equation} Hence, \begin{equation} \label{nabla_vx f case2} \begin{split} &\nabla_{vx} f(t,x,v) \\ &= \underline{(*)_{vx,1}}_{\eqref{vx star1}} + \underline{(*)_{vx,2}}_{\eqref{vx star2}} \\ &\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)(-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big] \\ &\quad + (-2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^{1}}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) ( R_{x^{1}} + 2tA_{v,x^{1}}) \big]. \end{split} \end{equation} Then from \eqref{nabla_vx f case1} and \eqref{nabla_vx f case2} we get the following compatibility condition \begin{equation} \label{vx comp} \begin{split} &\nabla_{xx}f_{0}(x^{1}, v)(-t) + \nabla_{vx}f_{0}(x^{1}, v) \\ &= \underline{(*)_{vx,1}}_{\eqref{vx star1}} + \underline{(*)_{vx,2}}_{\eqref{vx star2}} \\ &\quad + R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)(-tR_{x^1}) + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \\ &\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^{1}}) + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) ( R_{x^{1}} + 2tA_{v,x^{1}}). 
\\ \end{split} \end{equation} \subsection{Compatibility conditions for transpose : $\nabla_{xv}^{T} = \nabla_{vx}$ and $\nabla_{xx}^{T} = \nabla_{xx}$} First, we claim that \eqref{xv comp}, \eqref{vv comp}, \eqref{xx comp}, and \eqref{vx comp} imply the following four conditions for $(x^{1}, v)\in \gamma_{-}$ \begin{eqnarray} \nabla_{xv}f_{0}(x^{1},v) &=& R_{x^{1}}\nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}}\nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) \notag \\ &&\quad + \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} ,\quad x^{1}=x^{1}(x,v), \label{Cond2 1} \\ \nabla_{xx}f_{0}(x^{1},v) &=& R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) \notag \\ &&\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \notag \\ &&\quad + \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} - 2 \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2) \end{bmatrix}, \label{Cond2 2} \\ \nabla_{vv}f_{0}(x^{1},v) &=& R_{x^{1}}\nabla_{vv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}}, \label{Cond2 3} \\ \nabla_{vx}f_{0}(x^{1},v) &=& R_{x^{1}}\nabla_{vx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}})\nabla_{vv}f_{0}(x^{1}, R_{x^1}v)R_{x^{1}} \notag \\ &&\quad -2 \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}} \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}} \end{bmatrix}. \label{Cond2 4} \end{eqnarray} \eqref{xx comp} is just identical to \eqref{Cond2 2}. Then applying \eqref{xx comp} to \eqref{vx comp} and \eqref{xv comp}, we obtain \eqref{Cond2 1} and \eqref{Cond2 4}, respectively. Finally, applying \eqref{Cond2 1}, \eqref{Cond2 2}, and \eqref{Cond2 4} to \eqref{vv comp}, we obtain \eqref{Cond2 3} which is true by taking $\nabla_{v}^{2}$ to \eqref{BC} directly. \\ From \eqref{Cond2 1}--\eqref{Cond2 4}, we must check conditions to guarantee necessary conditions, $\nabla_{xv}^{T} = \nabla_{vx}$ and $\nabla_{xx}^{T} = \nabla_{xx}$. \\ \subsubsection{$\nabla_{xv}^T=\nabla_{vx}$} From \eqref{Cond2 1} and \eqref{Cond2 4}, we need \begin{equation} \label{T invariant} \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix}^{T} = -2 \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}} \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}} \end{bmatrix}. \end{equation} To check \eqref{T invariant}, we explicitly compute $\nabla_x(R_{x^1(x,v)}^1),\nabla_x(R_{x^1(x,v)}^2),\nabla_v(-2A^1_{v,x^1}),$ and $\nabla_v(-2A_{v,x^1}^2)$ in the following Lemma. \begin{lemma} \label{d_RA} Recall reflection operator $R_{x^1}$ in \eqref{BC} and $A_{v,x^1}$ in \eqref{def A}, \begin{equation*} A_{v,x^1} := \left[ \left((v\cdot n(x^1))I +(n(x^1)\otimes v)\right) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right]. \end{equation*} We write that $A^i$ is the $i$th column of matrix $A$ and $\nabla_vA^i_{v,y}$ be the $v$-derivative of $A_{v,y}^i$ for $1\leq i \leq 2$ and $(v,y) \in \mathbb{R}^2 \times \partial \Omega$. 
Then, \begin{align*} &\nabla_x (R_{x^1(x,v)}^1) = \begin{bmatrix} \dfrac{-4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{4v_1n_1n_2}{v\cdot n(x^1)} \\ \dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)} \end{bmatrix}, \quad \nabla_x (R_{x^1(x,v)}^2)= \begin{bmatrix} \dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}\\ \dfrac{4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{-4v_1n_1n_2}{v\cdot n(x^1)} \end{bmatrix},\\ &\nabla_v(-2A_{v,x^1}^1)= \begin{bmatrix} -\dfrac{2v_2^2n_1}{(v\cdot n(x^1))^2} & -2n_2-\dfrac{2v_1^2n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2n_1^3}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2} \\ -\dfrac{2v_2^2n_2}{(v\cdot n(x^1))^2} & 2n_1 -\dfrac{2v_1^2 n_1n_2^2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2 n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2 n_1n_2^2}{(v\cdot n(x^1))^2} \end{bmatrix},\\ &\nabla_v(-2A_{v,x^1}^2)= \begin{bmatrix} 2n_2+\dfrac{2v_1^2n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2n_1n_2^2}{(v\cdot n(x^1))^2} - \dfrac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2} & -\dfrac{2v_1^2n_1}{(v\cdot n(x^1))^2} \\ -2n_1 -\dfrac{2v_2^2 n_1n_2^2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2 n_2^3}{(v\cdot n(x^1))^2} + \dfrac{2v_1^2 n_1n_2^2}{(v\cdot n(x^1))^2} & -\dfrac{2v_1^2n_2}{(v\cdot n(x^1))^2} \end{bmatrix}, \end{align*} where $v_i$ be the $i$th component of $v$. We denote the $i$th component $n_i(x,v)$ of $n(x^1)$ as $n_i$, that is, $n_i$ depends on $x,v$. Moreover, the following identity holds that \begin{equation}\label{prop d_R} \nabla_{x}(R_{x^1(x,v)}^1)v =0, \quad \nabla_{x}(R_{x^1(x,v)}^2) v =0. \end{equation} \end{lemma} \begin{proof} Recall the definition of the reflection matrix $R_{x^1}$ and $-2A_{v,x^1}$: \begin{align*} R_{x^1}&=I-2n(x^1)\otimes n(x^1) = \begin{bmatrix} 1-2n_1^2 & -2n_1n_2 \\ -2n_1n_2 & 1-2n_2^2 \end{bmatrix},\\ -2A_{v,x^1}&= -2 \left[ \left((v\cdot n(x^1))I +(n(x^1)\otimes v)\right) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right]\\ &=\begin{bmatrix} -2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} & 2v_1n_2 + \dfrac{2v_1^2n_1n_2}{v\cdot n(x^1)} -\dfrac{2v_1v_2n_1^2}{ v\cdot n(x^1)} \\ 2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)} & -2v_1n_1 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} + \dfrac{2v_1^2n_2^2}{v \cdot n(x^1)} \end{bmatrix}. \end{align*} To find $\nabla_x (R_{x^1(x,v)}^1),\nabla_x (R_{x^1(x,v)}^2)$, we use \eqref{normal} in Lemma \ref{d_n}: \begin{equation} \label{comp_dn} \nabla_x [n(x^1(x,v))] = I-\frac{v\otimes n(x^1)}{ v\cdot n(x^1)}= \begin{bmatrix} \dfrac{v_2n_2}{v\cdot n(x^1)} & -\dfrac{v_1n_2}{v\cdot n(x^1)} \\ -\dfrac{v_2n_1}{v \cdot n(x^1)} & \dfrac{v_1n_1}{v\cdot n(x^1)} \end{bmatrix}. \end{equation} Firstly, we directly calculate $\nabla_x (R_{x^1(x,v)}^1)$ and $\nabla_x(R_{x^1(x,v)}^2)$ using \eqref{comp_dn}: \begin{align*} \nabla_x (R_{x^1(x,v)}^1) = \nabla_x \begin{bmatrix} 1-2n_1^2 \\ -2n_1n_2 \end{bmatrix}= \begin{bmatrix} \dfrac{-4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{4v_1n_1n_2}{v\cdot n(x^1)} \\ \dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)} \end{bmatrix},\\ \nabla_x (R_{x^1(x,v)}^2) = \nabla_x \begin{bmatrix} -2n_1n_2 \\ 1-2n_2^2 \end{bmatrix}= \begin{bmatrix} \dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}\\ \dfrac{4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{-4v_1n_1n_2}{v\cdot n(x^1)} \end{bmatrix}. 
\end{align*} Next, we calculate the $v$-derivative of $[-2A_{v,x^1}^1]$: \begin{align*} (\nabla_v (-2A_{v,x^1}^1))_{(1,1)} &= -\frac{2v_2n_1n_2 (v \cdot n(x^1))-2v_1v_2n_1^2n_2}{(v\cdot n(x^1))^2}-\frac{2v_2^2n_1^3}{(v\cdot n(x^1))^2}=-\frac{2v_2^2n_1}{(v\cdot n(x^1))^2}, \\ (\nabla_v (-2A_{v,x^1}^1))_{(1,2)} &=-2n_2-\frac{2v_1n_1n_2(v\cdot n(x^1))-2v_1v_2n_1n_2^2}{(v\cdot n(x^1))^2} +\frac{4v_2n_1^2 (v\cdot n(x^1)) -2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2}\\ &= -2n_2-\dfrac{2v_1^2n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2n_1^3}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2},\\ (\nabla_v (-2A_{v,x^1}^1))_{(2,1)}&=-\frac{2v_2n_2^2 (v\cdot n(x^1)) -2v_1v_2n_1n_2^2}{(v\cdot n(x^1))^2}-\frac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2}=-\frac{2v_2^2n_2}{(v\cdot n(x^1))^2},\\ (\nabla_v (-2A_{v,x^1}^1))_{(2,2)}&=2n_1-\frac{2v_1n_2^2(v \cdot n(x^1))-2v_1v_2n_2^3}{(v\cdot n(x^1))^2}+\frac{4v_2n_1n_2 (v\cdot n(x^1)) -2v_2^2n_1n_2^2}{(v\cdot n(x^1))^2}\\ &= 2n_1 -\dfrac{2v_1^2 n_1n_2^2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2 n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2 n_1n_2^2}{(v\cdot n(x^1))^2}. \end{align*} Similarly, we deduce the $v$-derivative of $[-2A_{v,x^1}^2]$. We derived $\nabla_x(R_{x^1(x,v)}^1)$ and $\nabla_x(R_{x^1(x,v)}^2)$, and then \eqref{prop d_R} follows from direct calculation that \begin{align*} \nabla_{x}(R_{x^1(x,v)}^1) v = \begin{bmatrix} \dfrac{-4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{4v_1n_1n_2}{v\cdot n(x^1)} \\ \dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)} \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = 0, \\ \nabla_x(R_{x^1(x,v)}^2) v =\begin{bmatrix} \dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}\\ \dfrac{4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{-4v_1n_1n_2}{v\cdot n(x^1)} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}=0. \end{align*} \end{proof} Back to the point, we find the condition of $\nabla_v f_0(x^1,Rv)$ satisfying \eqref{T invariant}. Since \begin{align*} \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, Rv) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, Rv) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix}^{T}=\begin{bmatrix} \nabla_vf_0(x^1,Rv) \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^1) & \nabla_vf_0(x^1,Rv) \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^2)\\ \nabla_vf_0(x^1,Rv) \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^1) & \nabla_vf_0(x^1,Rv) \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^2) \end{bmatrix}, \\ -2 \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, Rv) \nabla_{v}A^{1}_{v,x^{1}} \\ \nabla_{v}f_{0}(x^{1}, Rv) \nabla_{v}A^{2}_{v,x^{1}} . 
\end{bmatrix}=\begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_1}(-2A^1_{v,x^1}) & \nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_2}(-2A^1_{v,x^1})\\ \nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_1}(-2A^2_{v,x^1}) & \nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_2}(-2A^2_{v,x^1}) \end{bmatrix}, \end{align*} it suffices to find the condition of $\nabla_v f_0(x^1,R_{x^1}v)$ such that \begin{align*} \nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^1)\right) =0, \quad \nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^2)\right) =0,\\ \nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^1)\right) =0,\quad \nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^2)\right) =0. \end{align*} We denote column vectors \begin{align*} K_1 := \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^1), \quad K_2:= \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^2),\\ K_3 := \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^1), \quad K_4:=\dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^2). \end{align*} To determine whether $\nabla_v f_0(x^1,R_{x^1}v)$ is a nonzero vector or not for \eqref{T invariant}, we need to calculate the following determinant \begin{align*} \det \begin{bmatrix} \vert & \vert \\ K_i & K_j \\ \vert & \vert \end{bmatrix}, \quad 1\leq i < j \leq 4. \end{align*} If every determinant has a value of zero, then $\nabla_v f_0(x^1,R_{x^1}v)$ satisfying \eqref{T invariant} is not the zero vector. We now show that every determinant is $0$ and $\nabla_v f_0(x^1,R_{x^1}v)$ is parallel to a particular direction to satisfy \eqref{T invariant}. Using Lemma \ref{d_RA} and $\vert n(x^1) \vert = n_1^2 +n_2^2=1$,\\ \textrm{(Case 1)} $(K_1 \leftrightarrow K_4) $ \begin{align*} \det \begin{bmatrix} \vert & \vert \\ K_1 & K_4 \\ \vert & \vert \end{bmatrix}&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \det \begin{bmatrix} 2v_2n_1n_2 -\dfrac{v_2^2n_1}{v\cdot n(x^1)} & v_1 (n_1^2-n_2^2)-\dfrac{v_1^2n_1}{v\cdot n(x^1)}\\ v_2(n_2^2-n_1^2)-\dfrac{v_2^2n_2}{v\cdot n(x^1)} & 2v_1n_1n_2 -\dfrac{v_1^2n_2}{v\cdot n(x^1)} \end{bmatrix}\\ &= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[\left( 4v_1v_2n_1^2n_2^2-\frac{2v_1^2v_2n_1n_2^2}{v\cdot n(x^1)} -\frac{2v_1v_2^2n_1^2n_2}{v\cdot n(x^1)} +\dfrac{v_1^2v_2^2n_1n_2}{(v\cdot n(x^1))^2}\right) \right. \\ &\quad \left. -\left(-v_1v_2(n_2^2-n_1^2)^2-\frac{v_1v_2^2n_2(n_1^2-n_2^2)}{v\cdot n(x^1)} -\frac{v_1^2v_2n_1(n_2^2-n_1^2)}{v\cdot n(x^1)}+\frac{v_1^2v_2^2n_1n_2}{(v\cdot n(x^1))^2}\right) \right]\\ \hide &=\left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[\left( 4v_1v_2n_1^2n_2^2 +v_1v_2(n_2^2-n_1^2)^2\right)+\left(-\frac{2v_1^2v_2n_1n_2^2}{v\cdot n(x^1)}+\frac{v_1^2v_2n_1(n_2^2-n_1^2)}{v \cdot n(x^1)}\right) \right. 
\\ &\quad \left.+\left(\frac{v_1v_2^2n_2(n_1^2-n_2^2)}{v\cdot n(x^1)} -\frac{2v_1v_2^2n_1^2n_2}{v\cdot n(x^1)} \right) \right]\\ \unhide &=\left(\frac{-2}{v\cdot n(x^1)}\right)^2\left( v_1v_2 -\frac{v_1^2v_2 n_1}{ v\cdot n(x^1)} -\frac{v_1v_2^2n_2}{v\cdot n(x^1)}\right)\\ &=0, \end{align*} \textrm{(Case 2)} $(K_1 \leftrightarrow K_2)$ \begin{align*} \det \begin{bmatrix} \vert & \vert \\ K_1 & K_2 \\ \vert & \vert \end{bmatrix}&=\left(\frac{-2}{v\cdot n(x^1)}\right)^2 \det \begin{bmatrix} 2v_2n_1n_2 -\dfrac{v_2^2n_1}{v\cdot n(x^1)} & -v_1n_1n_2+v_2n_2^2 -\dfrac{(v_2^2-v_1^2)n_1^2n_2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_1n_2^2}{v\cdot n(x^1)} \\ v_2(n_2^2-n_1^2)-\dfrac{v_2^2n_2}{v\cdot n(x^1)} & -v_1n_2^2 -v_2n_1n_2 -\dfrac{(v_2^2-v_1^2)n_1n_2^2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_2^3}{v \cdot n(x^1)} \end{bmatrix}\\ &= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(-2v_1v_2 n_1 n_2^3 -2v_2^2 n_1^2 n_2^2 -\dfrac{2(v_2^2-v_1^2)v_2n_1^2n_2^3}{v\cdot n(x^1)}+\dfrac{4v_1v_2^2n_1n_2^4}{ v \cdot n(x^1)} \right. \right. \\ &\quad+ \left. \left.\dfrac{v_1v_2^2n_1n_2^2}{v\cdot n(x^1)} +\dfrac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\dfrac{(v_2^2-v_1^2)v_2^2n_1^2n_2^2}{(v \cdot n(x^1))^2} -\dfrac{2v_1v_2^3n_1n_2^3}{(v\cdot n(x^1))^2} \right) \right. \\ &\quad \left. -\left(-v_1v_2n_1n_2(n_2^2-n_1^2)+v_2^2n_2^2(n_2^2-n_1^2)-\dfrac{(v_2^2-v_1^2)v_2n_1^2n_2(n_2^2-n_1^2)}{v\cdot n(x^1)}+\dfrac{2v_1v_2^2n_1n_2^2(n_2^2-n_1^2)}{v\cdot n(x^1)} \right. \right.\\ &\quad \left. \left. +\dfrac{v_1v_2^2n_1n_2^2}{v\cdot n(x^1)} -\dfrac{v_2^3n_2^3}{v\cdot n(x^1)} +\dfrac{(v_2^2-v_1^2)v_2^2n_1^2n_2^2}{(v\cdot n(x^1))^2}-\dfrac{2v_1v_2^3n_1n_2^3}{(v\cdot n(x^1))^2} \right) \right] \\ \hide &= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(-v_1v_2n_1n_2-v_2^2n_2^2\right)+\left(-\frac{(v_2^2-v_1^2)v_2n_1^2n_2}{v\cdot n(x^1)}+\frac{2v_1v_2^2n_1n_2^2}{v\cdot n(x^1)}+\frac{v_2^3n_2}{v \cdot n(x^1)} \right) \right]\\ \unhide &=\left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[ -\frac{v_2^3n_2^3}{v\cdot n(x^1)} -\frac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\frac{v_2^3n_2}{v\cdot n(x^1)}\right]\\ &=0, \end{align*} \textrm{(Case 3)} $(K_1 \leftrightarrow K_3)$ \begin{align*} \det \begin{bmatrix} \vert & \vert \\ K_1 & K_3 \\ \vert & \vert \end{bmatrix}&=\left(\frac{-2}{v\cdot n(x^1)}\right)^2 \det \begin{bmatrix} 2v_2n_1n_2 -\dfrac{v_2^2n_1}{v\cdot n(x^1)} &-v_2n_1^2 -v_1n_1n_2-\dfrac{(v_1^2-v_2^2)n_1^2n_2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_1^3}{v\cdot n(x^1)}\\ v_2(n_2^2-n_1^2)-\dfrac{v_2^2n_2}{v\cdot n(x^1)} & v_1n_1^2 -v_2n_1n_2 -\dfrac{(v_1^2-v_2^2)n_1n_2^2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_1^2n_2}{v\cdot n(x^1)} \end{bmatrix}\\ &= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(2v_1v_2 n_1^3 n_2 -2v_2^2 n_1^2 n_2^2 -\dfrac{2(v_1^2-v_2^2)v_2n_1^2n_2^3}{v\cdot n(x^1)}+\dfrac{4v_1v_2^2n_1^3n_2^2}{ v \cdot n(x^1)} \right. \right. \\ &\quad- \left. \left.\dfrac{v_1v_2^2n_1^3}{v\cdot n(x^1)} +\dfrac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\dfrac{(v_1^2-v_2^2)v_2^2n_1^2n_2^2}{(v \cdot n(x^1))^2} -\dfrac{2v_1v_2^3n_1^3n_2}{(v\cdot n(x^1))^2} \right) \right. \\ &\quad \left. -\left(-v_2^2n_1^2(n_2^2-n_1^2)-v_1v_2n_1n_2(n_2^2-n_1^2)-\dfrac{(v_1^2-v_2^2)v_2n_1^2n_2(n_2^2-n_1^2)}{v\cdot n(x^1)}+\dfrac{2v_1v_2^2n_1^3(n_2^2-n_1^2)}{v\cdot n(x^1)} \right. \right.\\ &\quad \left. \left. 
+\dfrac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\dfrac{v_1v_2^2n_1n_2^2}{v\cdot n(x^1)} +\dfrac{(v_1^2-v_2^2)v_2^2n_1^2n_2^2}{(v\cdot n(x^1))^2}-\dfrac{2v_1v_2^3n_1^3n_2}{(v\cdot n(x^1))^2} \right) \right]\\ \hide &= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(v_1v_2n_1n_2-v_2^2n_1^2\right)+\left(-\frac{(v_1^2-v_2^2)v_2n_1^2n_2}{v\cdot n(x^1)} +\frac{2v_1v_2^2n_1^3}{v \cdot n(x^1)} -\frac{v_1v_2^2n_1}{v\cdot n(x^1)}\right)\right]\\ \unhide &= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[ \frac{v_1v_2^2n_1^3}{v\cdot n(x^1)} +\frac{v_1v_2^2n_1n_2^2}{v \cdot n(x^1)} -\frac{v_1v_2^2n_1}{v\cdot n(x^1)}\right]\\ &=0. \end{align*} Moreover, from (Case 1) and (Case 2), we deduce \begin{align*} \det \begin{bmatrix} \vert & \vert \\ K_2 & K_4 \\ \vert & \vert \end{bmatrix}=0. \end{align*} Likewise, it holds that \begin{align*} \det \begin{bmatrix} \vert & \vert \\ K_2 & K_3 \\ \vert & \vert \end{bmatrix}=0, \quad \det \begin{bmatrix} \vert & \vert \\ K_3 & K_4 \\ \vert & \vert \end{bmatrix}=0. \end{align*} Therefore, it means that we can find a nonzero vector $\nabla_v f_0(x^1,R_{x^1}v)$ satisfying \eqref{T invariant}. Since \begin{align*} \nabla_v f_0(x^1,R_{x^1}v) \begin{bmatrix} \vert \\ K_1 \\ \vert \end{bmatrix} = 0, \end{align*} $\nabla _v f_0 (x^1,R_{x^1}v)$ is orthogonal to the column vector $K_1$. More specifically, $\nabla_v f_0(x^1,R_{x^1}v)^T$ has the following direction \begin{align*} \frac{-2}{v\cdot n(x^1)} \begin{bmatrix} -v_2(n_2^2-n_1^2) + \dfrac{v_2^2 n_2}{v\cdot n(x^1)} \\ 2v_2n_1n_2-\dfrac{v_2^2n_1}{v\cdot n(x^1)} \end{bmatrix}&=\frac{-2}{(v\cdot n(x^1))^2} \begin{bmatrix} -v_1v_2n_1(n_2^2-n_1^2) +2v_2^2n_1^2n_2\\ 2v_1v_2n_1^2n_2 +v_2^2n_1(n_2^2-n_1^2) \end{bmatrix}\\ &=\frac{2v_2n_1}{(v\cdot n(x^1))^2} \begin{bmatrix} n_2^2-n_1^2 & -2n_1n_2\\ -2n_1n_2 & n_1^2 -n_2^2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}= \frac{2v_2n_1}{(v\cdot n(x^1))^2} R_{x^1}v. \end{align*} Consequently, for \eqref{T invariant}, we get the following condition \begin{align} \label{Cond3} \nabla _v f_0(x,R_xv) \parallel (R_xv)^T, \end{align} for any $x \in \partial \Omega$. \\ \subsubsection{$\nabla_{xx}^T =\nabla_{xx}$} From \eqref{Cond2 2}, we need \begin{equation*} \begin{split} &\left(\begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} + \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}\right)^T \\ &= \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} + \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}. \end{split} \end{equation*} Thus, it suffices to check that \begin{align*} &\nabla_x f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1) +\nabla_v f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_2} (-2A_{v,x^1(x,v)}^1)\\ &= \nabla_x f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2) +\nabla_v f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_1} (-2A_{v,x^1(x,v)}^2). 
\end{align*} In other words, we have to find the condition of $\nabla_x f_0 (x^1,R_{x^1}v)$ to satisfy \begin{align}\label{xx_sym2} \nabla_xf_0(x^1,R_{x^1}v) \left[\frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1)-\frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2) \right] = \nabla_v f_0(x^1,R_{x^1}v) \left[ \frac{\partial}{\partial x_1} (-2A_{v,x^1(x,v)}^2)-\frac{\partial}{\partial x_2} (-2A_{v,x^1(x,v)}^1)\right]. \end{align} Since we computed $\nabla_x (R_{x^1(x,v)}^1), \nabla_x (R_{x^1(x,v)}^2)$ in Lemma \ref{d_RA}, we represent $\nabla_x (-2A_{v,x^1(x,v)}^1)$ and $\nabla_x (-2A_{v,x^1(x,v)}^2)$ by components. \begin{lemma} \label{dx_A} Recall the matrix $A_{v,x}$ defined in \eqref{def A}, and then \begin{equation*} A_{v,x^1} = \left[ \left((v\cdot n(x^1))I +(n(x^1)\otimes v)\right) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right]. \end{equation*} If we write that $A^i$ is the $i$th column of matrix $A$, then \begin{align*} &\nabla_x(-2A_{v,x^1(x,v)}^1) \\ &= \begin{bmatrix} \dfrac{4v_1^2v_2^2n_1^3 +2v_1v_2^3(3n_1^2n_2-n_2^3)+ 2v_2^4(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1^3v_2n_1^3-2v_1^2v_2^2(3n_1^2n_2-n_2^3)-2v_1v_2^3(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3}\\ \dfrac{4v_2^4n_2^3+2v_1v_2^3(3n_1n_2^2-n_1^3)+2v_1^2v_2^2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1v_2^3n_2^3-2v_1^2v_2^2(3n_1n_2^2-n_1^3)-2v_1^3v_2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3} \end{bmatrix},\\ &\nabla_x(-2A_{v,x^1(x,v)}^2)\\ &= \begin{bmatrix} \dfrac{-4v_1^3v_2n_1^3-2v_1v_2^3(3n_1n_2^2+n_1^3) -2v_1^2v_2^2(3n_1^2n_2-n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^4n_1^3 +2v_1^2v_2^2(3n_1n_2^2+n_1^3)+2v_1^3v_2 (3n_1^2n_2-n_2^3)}{(v \cdot n(x^1))^3}\\ \dfrac{-4v_1v_2^3n_2^3 -2v_1^3v_2(3n_1^2n_2+n_2^3) -2v_1^2v_2^2(3n_1n_2^2-n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^2 v_2^2 n_2^3 +2v_1^4(3n_1^2n_2+n_2^3)+2v_1^3v_2(3n_1n_2^2-n_1^3)}{(v \cdot n(x^1))^3} \end{bmatrix}, \end{align*} where $v_i$ be the $i$th component of $v$. We denote the $i$th component $n_i(x,v)$ of $n(x^1)$ as $n_i$, that is, $n_i$ depends on $x,v$. Furthermore, it holds that \begin{equation} \label{prop d_A} \nabla_x(-2A_{v,x^1(x,v)}^1)v =0, \quad \nabla_x (-2A_{v,x^1(x,v)}^2)v=0. \end{equation} \end{lemma} \begin{proof} We write the matrix $-2A_{v,x^1}$ by components: \begin{align*} -2A_{v,x^1}=\begin{bmatrix} -2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} & 2v_1n_2 + \dfrac{2v_1^2n_1n_2}{v\cdot n(x^1)} -\dfrac{2v_1v_2n_1^2}{ v\cdot n(x^1)} \\ 2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)} & -2v_1n_1 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} + \dfrac{2v_1^2n_2^2}{v \cdot n(x^1)} \end{bmatrix}. 
\end{align*} For $\nabla_x(-2A_{v,x^1(x,v)}^1)$, we firstly take a derivative of $(1,1)$ component of $-2A_{v,x^1}$ with respect to $x_1$ \begin{align*} &\frac{\partial}{\partial x_1} \left(-2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} \right)\\ &=-2v_2 \frac{\partial n_2}{\partial x_1} + \dfrac{\left (-2v_1v_2 n_2\frac{\partial n_1}{\partial x_1} - 2 v_1v_2n_1 \frac{\partial n_2}{\partial x_1}\right)(v\cdot n(x^1))+2v_1v_2n_1n_2\left(v_1\frac{\partial n_1}{\partial x_1}+v_2\frac{\partial n_2}{\partial x_1}\right)}{(v\cdot n(x^1))^2}\\ & \quad+ \dfrac{\left(4v_2^2n_1\frac{\partial n_1}{\partial x_1}\right)(v\cdot n(x^1))-2v_2^2n_1^2\left(v_1\frac{\partial n_1}{\partial x_1}+v_2\frac{\partial n_2}{\partial x_1}\right)}{(v \cdot n(x^1))^2}\\ \hide &=\frac{4v_1v_2^2n_1^2-2v_1v_2^2n_2^2+6v_2^3n_1n_2}{(v\cdot n(x^1))^2} +\frac{2v_1^2v_2^2n_1n_2^2-4v_1v_2^3n_1^2n_2+2v_2^4n_1^3}{(v\cdot n(x^1))^3}\\ \unhide &=\dfrac{4v_1^2v_2^2n_1^3 +2v_1v_2^3(3n_1^2n_2-n_2^3)+ 2v_2^4(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3}, \end{align*} where we used \eqref{normal} in Lemma \ref{d_n}. Similarly, using \eqref{normal} in Lemma \ref{d_n}, we get \begin{align*} &\frac{\partial}{\partial x_2} \left(-2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} \right)\\ &=-2v_2 \frac{\partial n_2}{\partial x_2} + \dfrac{\left (-2v_1v_2 n_2\frac{\partial n_1}{\partial x_2} - 2 v_1v_2n_1 \frac{\partial n_2}{\partial x_2}\right)(v\cdot n(x^1))+2v_1v_2n_1n_2\left(v_1\frac{\partial n_1}{\partial x_2}+v_2\frac{\partial n_2}{\partial x_2}\right)}{(v\cdot n(x^1))^2}\\ & \quad+ \frac{\left(4v_2^2n_1\frac{\partial n_1}{\partial x_2}\right)(v\cdot n(x^1))-2v_2^2n_1^2\left(v_1\frac{\partial n_1}{\partial x_2}+v_2\frac{\partial n_2}{\partial x_2}\right)}{(v \cdot n(x^1))^2}\\ \hide &=\frac{-4v_1^2v_2n_1^2-6v_1v_2^2n_1n_2+2v_1^2v_2n_2^2}{(v \cdot n(x^1))^2} + \frac{4v_1^2v_2^2n_1^2n_2-2v_1^3v_2n_1n_2^2-2v_1v_2^3n_1^3}{(v\cdot n(x^1))^3} \\ \unhide &= \dfrac{-4v_1^3v_2n_1^3-2v_1^2v_2^2(3n_1^2n_2-n_2^3)-2v_1v_2^3(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3},\\ &\frac{\partial}{\partial x_1} \left(2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)}\right)\\ &=2v_2\frac{\partial n_1}{\partial x_1} - \frac{\left(4v_1v_2n_2\frac{\partial n_2}{\partial x_1}\right)(v\cdot n(x^1))-2v_1v_2n_2^2\left(v_1\frac{\partial n_1}{\partial x_1}+v_2\frac{\partial n_2}{\partial x_1}\right) }{(v \cdot n(x^1))^2}\\ &\quad+ \frac{\left( 2v_2^2n_2\frac{\partial n_1}{\partial x_1} +2v_2^2n_1\frac{\partial n_2}{\partial x_1}\right) (v\cdot n(x^1))-2v_2^2n_1n_2\left( v_1 \frac{\partial n_1}{\partial x_1}+v_2 \frac{\partial n_2}{\partial x_1}\right)}{(v\cdot n(x^1))^2}\\ \hide &=\frac{6v_1v_2^2n_1n_2+4v_2^3n_2^2-2v_2^3n_1^2}{(v\cdot n(x^1))^2} +\frac{2v_1^2v_2^2n_2^3-4v_1v_2^3n_1n_2^2+2v_2^4n_1^2n_2}{(v\cdot n(x^1))^3}\\ \unhide &=\dfrac{4v_2^4n_2^3+2v_1v_2^3(3n_1n_2^2-n_1^3)+2v_1^2v_2^2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3},\\ &\frac{\partial}{\partial x_2} \left(2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)}\right)\\ &=2v_2\frac{\partial n_1}{\partial x_2} - \frac{\left(4v_1v_2n_2\frac{\partial n_2}{\partial x_2}\right)(v\cdot n(x^1))-2v_1v_2n_2^2\left(v_1\frac{\partial n_1}{\partial x_2}+v_2\frac{\partial n_2}{\partial x_2}\right) }{(v \cdot n(x^1))^2}\\ &\quad+ \frac{\left( 2v_2^2n_2\frac{\partial n_1}{\partial x_2} +2v_2^2n_1\frac{\partial n_2}{\partial x_2}\right) (v\cdot n(x^1))-2v_2^2n_1n_2\left( v_1 \frac{\partial n_1}{\partial x_2}+v_2 
\frac{\partial n_2}{\partial x_2}\right)}{(v\cdot n(x^1))^2}\\ \hide &\quad -\frac{2v_1v_2^2n_2^2}{(v\cdot n(x^1))^2} +\frac{2v_1v_2^2n_1^2}{(v\cdot n(x^1))^2} +\frac{2v_1^2v_2^2n_1n_2^2}{(v\cdot n(x^1))^3} -\frac{2v_1v_2^3n_1^2n_2}{(v\cdot n(x^1))^3}\\ \unhide &=\dfrac{-4v_1v_2^3n_2^3-2v_1^2v_2^2(3n_1n_2^2-n_1^3)-2v_1^3v_2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3}. \end{align*} Thus, we derived $\nabla_x(-2A_{v,x^1(x,v)}^1)$. Similar to $\nabla_x(-2A_{v,x^1(x,v)}^1)$, we can obtain $\nabla_x(-2A_{v,x^1(x,v)}^2)$, and the details are omitted. By the $\nabla_x(-2A_{v,x^1(x,v)}^1)$ and $\nabla_x(-2A_{v,x^1(x,v)}^2)$ formula above, direct calculation gives \eqref{prop d_A}: \begin{footnotesize} \begin{align*} &\nabla_x(-2A_{v,x^1(x,v)}^1) v \\ &= \begin{bmatrix} \dfrac{4v_1^2v_2^2n_1^3 +2v_1v_2^3(3n_1^2n_2-n_2^3)+ 2v_2^4(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1^3v_2n_1^3-2v_1^2v_2^2(3n_1^2n_2-n_2^3)-2v_1v_2^3(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3}\\ \dfrac{4v_2^4n_2^3+2v_1v_2^3(3n_1n_2^2-n_1^3)+2v_1^2v_2^2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1v_2^3n_2^3-2v_1^2v_2^2(3n_1n_2^2-n_1^3)-2v_1^3v_2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}=0,\\ &\nabla_x(-2A_{v,x^1(x,v)}^2) v \\ &=\begin{bmatrix} \dfrac{-4v_1^3v_2n_1^3-2v_1v_2^3(3n_1n_2^2+n_1^3) -2v_1^2v_2^2(3n_1^2n_2-n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^4n_1^3 +2v_1^2v_2^2(3n_1n_2^2+n_1^3)+2v_1^3v_2 (3n_1^2n_2-n_2^3)}{(v \cdot n(x^1))^3}\\ \dfrac{-4v_1v_2^3n_2^3 -2v_1^3v_2(3n_1^2n_2+n_2^3) -2v_1^2v_2^2(3n_1n_2^2-n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^2 v_2^2 n_2^3 +2v_1^4(3n_1^2n_2+n_2^3)+2v_1^3v_2(3n_1n_2^2-n_1^3)}{(v \cdot n(x^1))^3} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}=0. \end{align*} \end{footnotesize} \end{proof} Now, back to our consideration \eqref{xx_sym2}. By Lemma \ref{dx_A}, we have \begin{align*} \frac{\partial}{\partial x_2} (-2A_{v,x^1(x,v)}^1)= \frac{\partial}{\partial x_1}(-2A_{v,x^1(x,v)}^2), \end{align*} which implies that \begin{align*} \nabla_xf_0(x^1,R_{x^1}v) \left[\frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1)-\frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2) \right]=\frac{2}{v\cdot n(x^1)}\nabla_xf_0(x^1,R_{x^1}v) \begin{bmatrix} 2v_1n_1n_2 +v_2(n_2^2-n_1^2) \\ v_1(n_2^2-n_1^2)-2v_2n_1n_2 \end{bmatrix}=0. \end{align*} It means that $\nabla_x f_0(x^1,R_{x^1}v)$ is orthogonal to $\frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1)-\frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2)$ and $\nabla_xf_0(x^1,R_{x^1}v)^T$ has the following direction \begin{align*} \begin{bmatrix} -v_1(n_2^2-n_1^2)+2v_2n_1n_2 \\ 2v_1n_1n_2+v_2(n_2^2-n_1^2) \end{bmatrix}=-\begin{bmatrix} n_2^2-n_1^2 & -2n_1n_2 \\ -2n_1n_2 & n_1^2-n_2^2 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}=-R_{x^1}v. \end{align*} To hold $\nabla_{xx} f_0(x^1,R_{x^1}v)^T = \nabla_{xx} f_0 (x^1,R_{x^1}v)$, the following condition \begin{align} \label{Cond4} \nabla_xf_0(x,R_xv) \parallel (R_xv)^T, \end{align} must be satisfied for $x \in \partial \Omega$. \\ \subsection{Conditions including $\partial_{t}$} In this subsection, we find conditions for $\partial_{tt}, \partial_{t}\nabla_{x}, \partial_{t}\nabla_{v}, \nabla_{x}\partial_{t}, \nabla_{v}\partial_{t}$. In the last subsubsection, we show that all these $\partial_{t}$ including compatibility conditions are covered by \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. \\ \subsubsection{$\partial_{tt}$} Using the same perturbation \eqref{Perb_t} in $C^1_t$ compatibility condition, we derive $C^2_t$ compatibility condition. 
For $\epsilon>0$, \begin{align*} \partial_t(f(t+\epsilon,x,v)-f(t,x,v))&= \partial_t (f_0(X^\epsilon(0),R_{x^1}v)-f_0(X(0),R_{x^1}v))\\ &=\left( \nabla_x f_0(X^\epsilon(0),R_{x^1}v)-\nabla_xf_0(X(0),R_{x^1}v)\right) (-R_{x^1}v) \\ &=(-R_{x^1}v)^T \left (\nabla_x f_0(X^\epsilon(0),R_{x^1}v) -\nabla_xf_0(X(0),R_{x^1}v) \right)^T, \end{align*} which implies \begin{align*} f_{tt}(t,x,v) &= \lim_{\epsilon \rightarrow 0+}\frac{ \partial_t f(t+\epsilon,x,v)-\partial_t f(t,x,v)}{\epsilon}\\ &=(-R_{x^1}v)^T \nabla_{xx} f_0(x^1,R_{x^1}v) \lim_{\epsilon\rightarrow 0+}\frac{ X^\epsilon(0)-X(0)}{\epsilon}\\ &=(-R_{x^1}v)^T \nabla_{xx} f_0(x^1,R_{x^1}v) (-R_{x^1}v). \end{align*} On the other hand, for $\epsilon<0$, it holds that \begin{align*} \partial_t(f(t+\epsilon,x,v)-f(t,x,v))= \partial_t (f_0(X^\epsilon(0),v)-f_0(X(0),v))&=\left( \nabla_x f_0(X^\epsilon(0),v)-\nabla_xf_0(X(0),v)\right) (-v)\\ &=(-v)^T \left( \nabla_x f_0(X^\epsilon(0),v)-\nabla_xf_0(X(0),v)\right)^T. \end{align*} Thus, we have \begin{align*} f_{tt}(t,x,v) &= \lim_{\epsilon \rightarrow 0-}\frac{ \partial_t f(t+\epsilon,x,v)-\partial_t f(t,x,v)}{\epsilon}\\ &=(-v)^T \nabla_{xx} f_0(x^1,v) \lim_{\epsilon\rightarrow 0-}\frac{ X^\epsilon(0)-X(0)}{\epsilon}\\ &=(-v)^T \nabla_{xx} f_0(x^1,v) (-v). \end{align*} To sum up, the condition \begin{align} \label{time cond} v^T \nabla_{xx}f_0(x^1,v)v = (R_{x^1}v)^T \nabla_{xx}f_0(x^1,R_{x^1}v)(R_{x^1}v), \end{align} must be satisfied to $f \in C^2_t$. \\ \subsubsection{$C^2_{t,x}$} We firstly use the perturbation \eqref{Perb_t} for $\epsilon <0$. From \eqref{c_3}, it holds that \begin{equation} \label{nabla_tx f case1} \begin{split} \partial_t [\nabla_xf(t,x,v)]&= \lim_{\epsilon \rightarrow 0-} \frac{ \nabla_x f(t+\epsilon,x,v) - \nabla_xf(t,x,v)}{\epsilon}\\ &=\lim_{\epsilon \rightarrow 0-} \frac{1}{\epsilon} \left( \nabla_x \left[ f_0(X(0;t+\epsilon,x,v),V(0;t+\epsilon,x,v))\right]-\nabla_xf_0(X(0),v)\right)\\ &=\lim_{\epsilon \rightarrow 0-} \frac{1}{\epsilon} \left( \nabla_x f_0(X^\epsilon(0),v)-\nabla_x f_0(X(0),v)\right)\\ &=-v^T \nabla_{xx} f_0(x^1,v), \end{split} \end{equation} where we used $\nabla_x X^{\epsilon}(0) = I_2$ and $\nabla_x V^{\epsilon}(0)=0$. On the other hand, for $\epsilon>0$, \begin{align*} X^{\epsilon}(0):= X(0;t+\epsilon,x,v)=X(0;t,x-\epsilon v, v), \quad V^{\epsilon}(0):=V(0;t+\epsilon,x,v)=R_{x^1}v. \end{align*} Similar to previous case $\nabla_{xx}$, using \eqref{nabla XV_x-} and \eqref{c_4}, \begin{align*} \partial_t [\nabla_xf(t,x,v)]&= \lim_{\epsilon \rightarrow 0+} \frac{ \nabla_x f(t+\epsilon,x,v) - \nabla_xf(t,x,v)}{\epsilon}\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x \left[ f_0(X(0;t+\epsilon,x,v),V(0;t+\epsilon,x,v))\right] \right. \\ &\left.\quad - \left(\nabla_x f_0(X(0),R_{x^1}v)R_{x^1} -2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1} \right) \right)\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_x X^{\epsilon}(0) +\nabla_v f_0(X^{\epsilon}(0),V^{\epsilon}(0)) \nabla_x V^{\epsilon}(0)\right. \\ &\quad \left. 
-\left( \nabla_x f_0(X(0),R_{x^1}v)R_{x^1} -2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1}\right) \right)\\ &=\lim _{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x f_0 (X^{\epsilon}(0),R_{x^1}v) \nabla_x X^{\epsilon}(0) -\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right) \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_v f_0(X^{\epsilon}(0),R_{x^1}v) \nabla_xV^{\epsilon}(0) +2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1}\right) \\ &:= I_{tx,1}+I_{tx,2}, \end{align*} where \begin{align*} I_{tx,1}&:=\lim_{\epsilon \rightarrow 0+}\frac{1}{\epsilon} \left(\nabla_x f_0 (X^{\epsilon}(0),R_{x^1}v) \nabla_x X^{\epsilon}(0) - \nabla_xf_0(X^\epsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_x X(s) \right. \\ &\quad \left. +\nabla_x f_0(X^\epsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_x X(s) -\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right)\\ &\stackrel{r\leftrightarrow c}{=} \left[\nabla_xf_0(x^1,R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x X^{\epsilon}(0)-\lim_{s \rightarrow 0-} \nabla_x X(s) \right)\right]^T\\ &\quad + R_{x^1}\nabla_{xx}f_0(x^1,R_{x^1}v)(-R_{x^1}v)\\ &=\begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v)\nabla_x(R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2) \end{bmatrix} (-v)+R_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v) (-R_{x^1}v), \\ I_{tx,2}&:= \lim_{\epsilon \rightarrow 0+}\frac{1}{\epsilon} \left(\nabla_v f_0 (X^{\epsilon}(0),R_{x^1}v) \nabla_x V^{\epsilon}(0) - \nabla_vf_0(X^\epsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_x V(s) \right. \\ &\quad \left. +\nabla_v f_0(X^\epsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_x V(s) -2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1}\right)\\ &\stackrel{r\leftrightarrow c}{=} \left[\nabla_vf_0(x^1,R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x V^{\epsilon}(0)-\lim_{s \rightarrow 0-} \nabla_x V(s) \right)\right]^T\\ &\quad +(-2A^T_{v,x^1})\nabla_{xv} f_0(x^1,R_{x^1}v)\lim_{\epsilon \rightarrow 0+} \frac{ X^{\epsilon}(0)-X(0)}{\epsilon}\\ &= \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2) \end{bmatrix} (-v)+ (-2A^T_{v,x^1}) \nabla_{xv}f_0(x^1,R_{x^1}v) (-R_{x^1}v). \end{align*} Thus, \begin{equation}\label{nabla_tx f case2} \begin{split} \partial_t [\nabla_x f(t,x,v)] &= (-v)^T \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2) \end{bmatrix}^T+(-v)^T \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2) \end{bmatrix}^T\\ & \quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}). \end{split} \end{equation} From \eqref{nabla_tx f case1} and \eqref{nabla_tx f case2}, we have the following condition \begin{equation} \label{tx comp} \begin{split} (-v^T) \nabla_{xx}f_0(x^1,v) &= (-v)^T \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2) \end{bmatrix}^T+(-v)^T \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2) \end{bmatrix}^T\\ & \quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v) (-2A_{v,x^1}). 
\end{split} \end{equation} \subsubsection{$C^2_{t,v}$} Similar to $C^2_{t,x}$, we use \eqref{c_1} and the perturbation \eqref{Perb_t} for $\epsilon<0$ to obtain \begin{equation}\label{nabla_tv f case1} \begin{split} \partial_{t}[\nabla_{v}f(t,x,v)] &= \lim_{\epsilon \rightarrow 0-} \frac{ \nabla_v f(t+\epsilon,x,v) -\nabla_v f(t,x,v)}{ \epsilon}\\ &= \lim_{\epsilon \rightarrow 0-} \frac{1}{\epsilon} \left( \nabla_v \left[ f_0(X(0;t+\epsilon,x,v),V(0;t+\epsilon,x,v))\right]-(-t\nabla_x f_0(X(0),v)+\nabla_vf_0(X(0),v)) \right)\\ &= \lim_{\epsilon \rightarrow 0-} \frac{1}{\epsilon} \left( -(t+\epsilon) \nabla_x f_0(X^{\epsilon}(0),v) +\nabla_v f_0(X^{\epsilon}(0),v) +t\nabla_x f_0(X(0),v) -\nabla_v f_0(X(0),v) \right)\\ &=-\nabla_x f_0(x^1,v) -t(-v^T) \nabla_{xx}f_0(x^1,v) + (-v^T) \nabla_{vx}f_0(x^1,v), \end{split} \end{equation} where we have used $\nabla_v X^{\epsilon}(0) = -(t+\epsilon) I_2, \nabla_v V^{\epsilon}(0) = I_2$. For $\epsilon>0$, the perturbation \eqref{Perb_t} becomes \begin{equation*} X^{\epsilon}(0):=X(0;t+\epsilon,x,v) =X(0;t,x-\epsilon v,v) =x^1 -(t^1+\epsilon)R_{x^1}v, \quad V^{\epsilon}(0):=V(0;t+\epsilon,x,v)=R_{x^1}v. \end{equation*} By the product rule, Lemma \ref{nabla xv b} and Lemma \ref{d_n}, one obtains that \begin{align*} \nabla_v [X^{\epsilon}(0)]&=\nabla_v [x^1-(t^1+\epsilon)R_{x^1}v] =-t\left(I-\frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -R_{x^1}v \otimes \nabla_v t^1 -\epsilon \nabla_v (R_{x^1}v)\\ &=-t \left(I-\frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right)-t R_{x^1}v \otimes \frac{n(x^1)}{v\cdot n(x^1)} -\epsilon (R_{x^1} + 2t A_{v,x^1})\\ &= -tR_{x^1} -\epsilon (R_{x^1}+2tA_{v,x^1}), \\ \nabla_v[V^{\epsilon}(0)]&= \nabla_v [R_{x^1}v] = R_{x^1}+2tA_{v,x^1}. \end{align*} Through the $v$-derivative of $X^{\epsilon}(0),V^{\epsilon}(0)$ above and \eqref{c_2}, \begin{equation} \label{nabla_tv f case2} \begin{split} \partial_t[\nabla_{v} f(t,x,v)]&= \lim_{\epsilon \rightarrow 0+} \frac{\nabla_v f(t+\epsilon,x,v) -\nabla_v f(t,x,v)}{\epsilon}\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} (\nabla_v [f_0(X(0;t+\epsilon,x,v),V(0;t+\epsilon,x,v))]\\ &\quad -(-t\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}+ \nabla_v f_0(X(0),R_{x^1}v)(R_{x^1}+2tA_{v,x^1})))\\ &=-\nabla_x f_0(x^1,R_{x^1}v)\left(R_{x^1}+2tA_{v,x^1}\right) -t \left[ \lim_{\epsilon\rightarrow 0+} \frac{1}{\epsilon} \left(\nabla_xf_0(X^{\epsilon}(0),R_{x^1}v) -\nabla_xf_0(X(0),R_{x^1}v)\right)\right] R_{x^1} \\ &\quad + \left [\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon}\left( \nabla_v f_0(X^{\epsilon}(0),R_{x^1}v) -\nabla_v f_0(X(0),R_{x^1}v) \right)\right]\left(R_{x^1}+2tA_{v,x^1}\right)\\ &\stackrel{r\leftrightarrow c}{=} -\left(R_{x^1}+2tA_{v,x^1}\right)^T\nabla_x f_0(x^1,R_{x^1}v)^T -tR_{x^1} \nabla_{xx}f_0(x^1,R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{X^{\epsilon}(0)-X(0)}{\epsilon} \\ &\quad + \left(R_{x^1}+2tA_{v,x^1}\right)^T \nabla_{xv} f_0(x^1,R_{x^1}v) \lim_{\epsilon \rightarrow 0+} \frac{X^{\epsilon}(0)-X(0)}{\epsilon}\\ &\stackrel{c\leftrightarrow r}{=} -\nabla_xf_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}) -t(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\ &\quad +(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}).
\end{split} \end{equation} Summing \eqref{nabla_tv f case1} and \eqref{nabla_tv f case2} yields that \begin{equation} \label{tv comp} \begin{split} &-\nabla_x f_0(x^1,v) -t(-v^T) \nabla_{xx}f_0(x^1,v) + (-v^T) \nabla_{vx}f_0(x^1,v)\\ &= -\nabla_xf_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1})-t(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\ &\quad +(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}). \end{split} \end{equation} \subsubsection{$C^2_{x,t}$} Similar to the $\nabla_{xv}$ case, using the same perturbation $\hat{r}_1$ of \eqref{set R_sp} and \eqref{c_3}, we have \begin{equation*} \begin{split} \nabla_x[\partial_t f(t,x,v)]\hat{r}_1&=\lim_{\epsilon \rightarrow 0+} \frac{ \partial_t f(t,x+\epsilon \hat{r}_1,v)- \partial_t f(t,x,v)}{\epsilon} \\ &= \lim_{\epsilon\rightarrow 0+} \left(\frac{ \nabla_x f(t,x+\epsilon \hat{r}_1,v) - \nabla_x f(t,x,v)}{\epsilon}\right)(-v)\\ &= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left(\nabla_x [f_0(X(0;t,x+\epsilon \hat{r}_1,v),V(0;t,x+\epsilon \hat{r}_1,v))] -\nabla_x f_0(X(0),v)\right)(-v)\\ &= (-v^T) \nabla_{xx} f_0(x^1,v) \hat{r}_1, \end{split} \end{equation*} where we have used $\nabla_x X^{\epsilon}(0)=I_2, \nabla_x V^{\epsilon}(0)=0$. Next, for $\hat{r}_2$ of \eqref{set R_sp}, using \eqref{Av=0} in Lemma \ref{lem_RA}, \eqref{c_4},\eqref{xx star1}, and \eqref{xx star2} gives \begin{equation*} \begin{split} \nabla_x[\partial_t f(t,x,v)] \hat{r}_2 &= \lim_{\epsilon \rightarrow 0+} \frac{ \partial_t f(t,x+\epsilon \hat{r}_2,v)- \partial_t f(t,x,v)}{\epsilon}\\ &= \lim_{\epsilon\rightarrow 0+} \left(\frac{ \nabla_x f(t,x+\epsilon \hat{r}_2,v) - \nabla_x f(t,x,v)}{\epsilon}\right)(-v)\\ &= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} (\nabla_x [f_0(X(0;t,x+\epsilon \hat{r}_2,v),V(0;t,x+\epsilon \hat{r}_2,v))] \\ &\quad - (\nabla_x f_0(X(0),R_{x^1}v)R_{x^1} -2 \nabla_v f_0(X(0),R_{x^1}v) A_{v,x^1}) )(-v)\\ &=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_ xX^{\epsilon}(0) - \nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right) (-v) \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_v f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_x V^{\epsilon}(0) -\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1}) \right) (-v) \\ &:=I_{xt,1}+I_{xt,2}, \end{split} \end{equation*} where \begin{align*} I_{xt,1}&=\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_ xX^{\epsilon}(0) - \nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0)) \lim_{s \rightarrow 0-} \nabla_x X(s) \right. \\ &\quad + \left. \nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))\lim_{s\rightarrow 0-} \nabla_x X(s) - \nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right)(-v)\\ &= (-v^T) \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2) \end{bmatrix} \hat{r}_2\\ &\quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}\hat{r}_2 +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\hat{r}_2,\\ I_{xt,2} &= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_v f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_x V^{\epsilon}(0) -\nabla_v f_0(X^\epsilon(0),V^{\epsilon}(0))\lim_{s\rightarrow 0-} \nabla_x V(s) \right. \\ &\quad + \left. 
\nabla_v f_0(X^\epsilon(0),V^{\epsilon}(0)) \lim_{s \rightarrow 0-} \nabla_x V(s) -\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1})\right)(-v) \\ &=(-v^T) \begin{bmatrix} \nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\hat{r}_2 \\ & \quad +(-v^T)(-2A^T_{v,x^1}) \left( \nabla_{xv} f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_{vv} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) \right) \hat{r}_2,\\ &=(-v^T) \begin{bmatrix} \nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\hat{r}_2. \end{align*} To sum up the above, we get the following condition: \begin{equation} \label{xt comp} \begin{split} (-v^T) \nabla_{xx} f_0(x^1,v)&= (-v^T) \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1)\\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)\end{bmatrix} +(-v^T) \begin{bmatrix} \nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\\ &\quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}). \end{split} \end{equation} \subsubsection{$C^2_{v,t}$} Using the perturbation $\hat{r}_1$ of \eqref{set R_sp} and \eqref{c_3}, \begin{equation*} \begin{split} \nabla_v [\partial_t f(t,x,v)] \hat{r}_1 &=\lim_{\epsilon\rightarrow 0+} \frac{ \partial_t f(t,x,v+\epsilon \hat{r}_1) -\partial_t f(t,x,v)}{\epsilon}\\ &=\lim_{\epsilon \rightarrow 0+} \left(\frac{\nabla_x f(t,x,v+\epsilon \hat{r}_1) (-(v+\epsilon \hat{r}_1)) -\nabla_x f(t,x,v) (-v)}{\epsilon} \right) \\ &=-\lim_{\epsilon \rightarrow 0+} \nabla_x [f_0(X(0;t,x,v+\epsilon \hat{r}_1), V(0;t,x,v+\epsilon \hat{r}_1))]\hat{r}_1 \\ &+\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} ( \nabla_x [f_0(X(0;t,x,v+\epsilon \hat{r}_1), V(0;t,x,v+\epsilon \hat{r}_1))]-\nabla_x f_0(X(0),v))(-v)\\ &=-\nabla_x f_0(X(0),v) \hat{r}_1 +\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} (\nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))-\nabla_x f_0(X(0),v))(-v)\\ &=-\nabla_x f_0(x^1,v)\hat{r}_1 +(-v^T) \nabla_{xx} f_0(x^1,v) (-t\hat{r}_1) +(-v^T) \nabla_{vx} f_0(x^1,v) \hat{r}_1, \end{split} \end{equation*} where $X^{\epsilon}(0):=X(0;t,x,v+\epsilon \hat{r}_1) = x-t(v+\epsilon \hat{r}_1), V^{\epsilon}(0):=V(0;t,x,v+\epsilon \hat{r}_1) =v+\epsilon \hat{r}_1$. Similar to the case $\nabla_{vx}$, for the perturbation $\hat{r}_2$ of \eqref{set R_sp}, using \eqref{nabla XV_x-}, \eqref{c_4} and \eqref{Av=0} in Lemma \ref{lem_RA} yields: \begin{equation*} \begin{split} \nabla_v[\partial_t f(t,x,v)] \hat{r}_2 &= \lim_{\epsilon \rightarrow 0+} \frac{\partial_t f(t,x,v+\epsilon \hat{r}_2) -\partial_t f(t,x,v)}{\epsilon}\\ &=\lim_{\epsilon \rightarrow 0+} \left( \frac{ \nabla_x f(t,x,v+\epsilon \hat{r}_2)(-(v+\epsilon \hat{r}_2))-\nabla_x f(t,x,v)(-v)}{\epsilon}\right)\\ &=-\lim_{\epsilon \rightarrow 0+} \nabla_x [f_0(X(0;t,x,v+\epsilon \hat{r}_2),V(0;t,x,v+\epsilon \hat{r}_2))]\hat{r}_2 \\ &\quad +\lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x [f_0(X(0;t,x,v+\epsilon \hat{r}_2), V(0;t,x,v+\epsilon \hat{r}_2))] \right. \\ &\quad - \left. 
\left(\nabla_x f_0(x^1,R_{x^1}v)R_{x^1}-2\nabla_v f_0(x^1,R_{x^1}v)A_{v,x^1} \right)\right)(-v)\\ &=-\left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right) \hat{r}_2 \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_x X^{\epsilon}(0) -\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right)(-v) \\ &\quad + \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_v f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_x V^{\epsilon}(0)-\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1})\right)(-v)\\ &:=-\left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right) \hat{r}_2 + I_{vt,1}+I_{vt,2}, \end{split} \end{equation*} where \begin{align*} I_{vt,1}&:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_x X^{\epsilon}(0)-\nabla_x f_0(X^{\epsilon}(0),V^{\epsilon}(0))\lim_{s\rightarrow 0-} \nabla_x X(s) \right. \\ &\quad + \left. \nabla_x f_0(X^\epsilon(0),V^\epsilon(0))\lim_{s \rightarrow 0-} \nabla_x X(s) - \nabla_xf_0(X(0),R_{x^1}v)R_{x^1}\right) (-v) \\ &=(-v^T) \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^2) \end{bmatrix}\hat{r}_2 \\ &\quad + (-v^T)R_{x^1} \left( \nabla_{xx} f_0(x^1,R_{x^1}v) (-tR_{x^1}) +\nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1})\right)\hat{r}_2,\\ I_{vt,2}&:= \lim_{\epsilon \rightarrow 0+} \frac{1}{\epsilon} \left( \nabla_v f_0(X^{\epsilon}(0),V^{\epsilon}(0))\nabla_x V^{\epsilon}(0)-\nabla_v f_0(X^\epsilon(0),V^\epsilon(0))\lim_{s\rightarrow 0-} \nabla_xV(s) \right. \\ & \quad \left.+ \nabla_v f_0(X^\epsilon(0),V^\epsilon(0))\lim_{s\rightarrow 0-} \nabla_xV(s) -\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1}) \right)(-v)\\ &=(-v^T)\begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\hat{r}_2\\ &\quad +(-v^T)(-2A^T_{v,x^1}) \left( \nabla_{xv} f_0(x^1,R_{x^1}v)(-tR_{x^1}) +\nabla_{vv} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1}) \right) \hat{r}_2\\ &=(-v^T)\begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\hat{r}_2. \end{align*} Thus, we have the following compatibility condition: \begin{equation} \label{vt comp} \begin{split} &-\nabla_x f_0(x^1,v) +tv^T\nabla_{xx} f_0(x^1,v) +(-v^T) \nabla_{vx} f_0(x^1,v) \\ &=- \left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right)\\ &\quad +(-v^T) \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^2) \end{bmatrix} + (-v^T) \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\\ &\quad +tv^T R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1}). \end{split} \end{equation} \subsubsection{Derive $C^2_{tt},C^2_{tx}, C^2_{tv},C^2_{xt},C^2_{vt}$ compatibility conditions from \eqref{Cond2 1}--\eqref{Cond2 4},\eqref{Cond3} and \eqref{Cond4}} So far, we have derived \eqref{Cond2 1}--\eqref{Cond2 4} to satisfy $f\in C^2_{xv},C^2_{xx},C^2_{vx},C^2_{vv}$. In \eqref{Cond2 1}--\eqref{Cond2 4}, since $\nabla_{xv} f_0(x^1,v)$ is the same as $\nabla_{vx} f_0(x^1,v)^T$, we need to assume \eqref{Cond3}. 
Similarly, we obtained \eqref{Cond4} because $\nabla_{xx} f_0(x^1,v)$ is a symmetric matrix. In this subsection, we will show that the compatibility conditions $C^2_{tt}$ \eqref{time cond}, $C^2_{tx}$ \eqref{tx comp}, $C^2_{tv}$ \eqref{tv comp}, $C^2_{xt}$ \eqref{xt comp}, and $C^2_{vt}$ \eqref{vt comp} are induced under \eqref{Cond2 1}--\eqref{Cond2 4},\eqref{Cond3}, and \eqref{Cond4}. First, we consider the $C^2_{tt}$ compatibility condition. Using \eqref{Av=0} in Lemma \ref{lem_RA}, \eqref{prop d_R}, and \eqref{prop d_A}, one has \begin{equation*} \begin{split} v^T \nabla_{xx}f_0(x^1,v) v &= v^T \Bigg(R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) \\ &\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\ &\quad + \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} - 2 \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2) \end{bmatrix} \Bigg) v\\ &=v^TR_{x^1}\nabla_{xx}f_0(x^1,R_{x^1}v)R_{x^1} v +v^T \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} v \\ &\quad + v^T \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}v\\ &= (R_{x^1}v)^T \nabla_{xx} f_0(x^1,R_{x^1}v) (R_{x^1}v). \end{split} \end{equation*} In \eqref{tx comp}, the left-hand side is \begin{align*} (-v^T) \nabla_{xx} f_0(x^1,v) &= (-v^T) \Bigg(R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) \\ &\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\ &\quad + \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} - 2 \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2) \end{bmatrix} \Bigg)\\ &= (-v^T) R_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} + (-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) \\ &\quad + (-v^T) \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} +(-v^T) \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}, \end{align*} where we have used \eqref{Av=0}. When we assume \eqref{Cond4}, it holds that $\nabla_{xx}f_0(x^1,v)$ is a symmetric matrix.
In other words, \begin{align*} &\left(\begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} + \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}\right)^T \\ &= \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} + \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}, \end{align*} which implies that \begin{equation} \label{vRA prop} \begin{split} &(-v^T) \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} +(-v^T) \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}\\ &=\left( \left( \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2) \end{bmatrix} + \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2) \end{bmatrix}\right)(-v)\right)^T=0, \end{split} \end{equation} due to \eqref{prop d_R} and \eqref{prop d_A}. Therefore, the left-hand side in \eqref{tx comp} becomes \begin{equation} \label{tx comp left} (-v^T) \nabla_{xx} f_0(x^1,v) = (-v^T) R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} + (-v^T) \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}). \end{equation} Using \eqref{vRA prop}, the right-hand side in \eqref{tx comp} is \begin{equation}\label{tx comp right} \begin{split} &(-v)^T \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2) \end{bmatrix}^T+(-v)^T \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2) \end{bmatrix}^T\\ & \quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\ & =(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v) (-2A_{v,x^1}).\\ \end{split} \end{equation} From \eqref{tx comp left} and \eqref{tx comp right}, we derive \eqref{tx comp} under the assumption \eqref{Cond2 1}--\eqref{Cond2 4},\eqref{Cond3}, and \eqref{Cond4}. For the left-hand side in \eqref{tv comp}, we use \eqref{Av=0}, the $C^1$ compatibility condition \eqref{c_x}, \eqref{Cond2 1}--\eqref{Cond2 4}, and \eqref{vRA prop}: \begin{align*} &-\nabla_x f_0(x^1,v) +tv^T \nabla_{xx} f_0(x^1,v) + (-v^T) \nabla_{vx} f_0(x^1,v) \\ &= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} - \nabla_v f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) \\ &\quad +tv^TR_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\ &\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1} +(-v)^T \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^2) \end{bmatrix}. 
\end{align*} Since $\nabla_{xv}f_0(X(0),v)^T = \nabla_{vx} f_0(X(0),v)$ under \eqref{Cond3}, it holds that \begin{equation} \label{RA prop} \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^2) \end{bmatrix}^T = \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1)\\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2) \end{bmatrix}. \end{equation} Since \eqref{RA} in Lemma \ref{lem_RA}, \eqref{prop d_R}, \eqref{Cond3}, and the formula \eqref{RA prop} above, it follows that \begin{equation} \label{A prop} \begin{split} &\nabla_v f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) = C(R_{x^1}v)^T (-2A_{v,x^1})=-\frac{2C}{v\cdot n(x^1)} v^T (Qv) \otimes (Qv) =0, \\ &(-v)^T\begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^2) \end{bmatrix}= \left( \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2) \end{bmatrix} (-v) \right)^T=0, \end{split} \end{equation} where $C$ is an arbitrary constant. And then, one obtains that \begin{equation} \label{tv comp left} \begin{split} &-\nabla_x f_0(x^1,v) +tv^T \nabla_{xx} f_0(x^1,v) + (-v^T) \nabla_{vx} f_0(x^1,v) \\ &= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^TR_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\ &\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1}. \end{split} \end{equation} By \eqref{Av=0} and \eqref{Cond4}, the right-hand side in \eqref{tv comp} is \begin{equation*} \begin{split} & -\nabla_xf_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1})-t(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\ &\quad +(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}) \\ &=-\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} -2Ct (R_{x^1}v)^TA_{v,x^1} +tv^T R_{x^1} \nabla_{xx}f_0(x^1,R_{x^1}v) R_{x^1} \\ &\quad +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) + (-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) R_{x^1}\\ & = -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^TR_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\ &\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1}, \end{split} \end{equation*} where $C$ is an arbitrary constant. Thus, the left-hand side in \eqref{tv comp} is the same as the right-hand side in \eqref{tv comp} under \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. The left-hand side in \eqref{xt comp} is as follows: \begin{equation*} (-v^T) \nabla_{xx}f_0(x^1,v) = (-v^T) R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} +(-v^T) \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}), \end{equation*} by \eqref{tx comp left}. Using \eqref{vRA prop}, the right-hand side in \eqref{xt comp} can be further computed by \begin{equation*} \begin{split} &(-v^T) \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)\end{bmatrix} +(-v^T) \begin{bmatrix} \nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\\ &\quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\\ &= (-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}). \end{split} \end{equation*} Hence, the \eqref{xt comp} condition can be deduced by \eqref{Cond2 1}--\eqref{Cond2 4},\eqref{Cond3}, and \eqref{Cond4}. 
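The reductions above use the identities $A_{v,x^1}v=0$ from \eqref{Av=0} and \eqref{prop d_A} repeatedly, through \eqref{vRA prop} and \eqref{A prop}. Since the entries of $\nabla_x(-2A^{i}_{v,x^1(x,v)})$ in Lemma \ref{dx_A} are lengthy, a quick symbolic check is convenient; the following minimal sketch (written in Python with SymPy, taking the matrix $A_{v,x^1}$ from \eqref{def A} and the entries exactly as stated in Lemma \ref{dx_A}) verifies both identities.
\begin{verbatim}
import sympy as sp

v1, v2, n1, n2 = sp.symbols('v1 v2 n1 n2', real=True)
v = sp.Matrix([v1, v2]); n = sp.Matrix([n1, n2])
d = (v.T * n)[0]                                   # v . n(x^1)

# A_{v,x^1} as in (def A): ((v.n) I + n (x) v)(I - (v (x) n)/(v.n))
A = (d * sp.eye(2) + n * v.T) * (sp.eye(2) - (v * n.T) / d)
assert (A * v).applyfunc(sp.simplify) == sp.zeros(2, 1)          # (Av=0)

# entries of nabla_x(-2A^1) and nabla_x(-2A^2) as stated in Lemma dx_A
dxA1 = sp.Matrix([
    [ 4*v1**2*v2**2*n1**3 + 2*v1*v2**3*(3*n1**2*n2 - n2**3) + 2*v2**4*(3*n1*n2**2 + n1**3),
     -4*v1**3*v2*n1**3 - 2*v1**2*v2**2*(3*n1**2*n2 - n2**3) - 2*v1*v2**3*(3*n1*n2**2 + n1**3)],
    [ 4*v2**4*n2**3 + 2*v1*v2**3*(3*n1*n2**2 - n1**3) + 2*v1**2*v2**2*(3*n1**2*n2 + n2**3),
     -4*v1*v2**3*n2**3 - 2*v1**2*v2**2*(3*n1*n2**2 - n1**3) - 2*v1**3*v2*(3*n1**2*n2 + n2**3)]]) / d**3
dxA2 = sp.Matrix([
    [-4*v1**3*v2*n1**3 - 2*v1*v2**3*(3*n1*n2**2 + n1**3) - 2*v1**2*v2**2*(3*n1**2*n2 - n2**3),
      4*v1**4*n1**3 + 2*v1**2*v2**2*(3*n1*n2**2 + n1**3) + 2*v1**3*v2*(3*n1**2*n2 - n2**3)],
    [-4*v1*v2**3*n2**3 - 2*v1**3*v2*(3*n1**2*n2 + n2**3) - 2*v1**2*v2**2*(3*n1*n2**2 - n1**3),
      4*v1**2*v2**2*n2**3 + 2*v1**4*(3*n1**2*n2 + n2**3) + 2*v1**3*v2*(3*n1*n2**2 - n1**3)]]) / d**3

# (prop d_A): both matrices annihilate v
assert (dxA1 * v).applyfunc(sp.simplify) == sp.zeros(2, 1)
assert (dxA2 * v).applyfunc(sp.simplify) == sp.zeros(2, 1)
print("(Av=0) and (prop d_A) verified symbolically")
\end{verbatim}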
Finally, the \eqref{vt comp} condition is the last remaining case. The left-hand side in \eqref{vt comp} comes from \eqref{tv comp left}: \begin{align*} &-\nabla_x f_0(x^1,v) +tv^T \nabla_{xx} f_0(x^1,v) + (-v^T) \nabla_{vx} f_0(x^1,v) \\ &= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^TR_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\ &\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1}. \end{align*} Since \eqref{Av=0} in Lemma \ref{lem_RA}, \eqref{vRA prop}, \eqref{A prop}, and \begin{align*} \quad \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_v(R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v(R_{x^1(x,v)}^2) \end{bmatrix}&= (-t)\begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^1) \\\nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^2) \end{bmatrix},\\ \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2) \end{bmatrix}&=(-t) \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2) \end{bmatrix}+ \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^2) \end{bmatrix}, \end{align*} the right-hand side in \eqref{vt comp} can be simplified as \begin{equation*} \begin{split} & - \left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right)\\ &\quad +(-v^T) \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^2) \end{bmatrix} + (-v^T) \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2) \end{bmatrix}\\ &\quad + tv^T R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1}) \\ &=-\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^T \begin{bmatrix} \nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^1) \\\nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^2) \end{bmatrix}+tv^T \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2) \end{bmatrix} \\ &\quad +(-v^T) \begin{bmatrix} \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^2) \end{bmatrix} + tv^T R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\ &\quad +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1})\\ &= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1}+tv^T R_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1}+tv^T R_{x^1}\nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}) \\ &\quad +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) R_{x^1}. \end{split} \end{equation*} Hence, the \eqref{vt comp} condition can be obtained under \eqref{Cond2 1}--\eqref{Cond2 4},\eqref{Cond3}, and \eqref{Cond4}. \\ \hide \subsubsection{Symmetric presentation of \eqref{Cond} under assumption \eqref{if}} Above can be more simplified. Note that the 3rd condition of \eqref{Cond2 1}--\eqref{Cond2 4} is just obvious by specular reflection BC (taking $\nabla_{v}$ twice). 
From 1st and 4th condition, we have \begin{equation} \begin{split} (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} &= (-2A^{T}_{v,x^{1}}) R_{x^{1}} \nabla_{xv}f_{0}(x^{1}, v) - (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\ R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) &= \nabla_{vx}f_{0}(x^{1}, v)R_{x^{1}}(-2A_{v,x^{1}}) - (-2A^{T}_{v,x^{1}})\nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}). \end{split} \end{equation} Plugging into 2nd condition, \eqref{Cond2 2} is rewritten as \begin{equation} \label{re 2} \begin{split} & \nabla_{xx}f_{0}(x^{1},v) + \nabla_{vx}f_{0}(x^{1}, v)R_{x^{1}}A_{v,x^{1}} + (R_{x^{1}}A_{v,x^{1}})^{T} \nabla_{xv}f_{0}(x^{1}, v) \\ &= R_{x^{1}}\nabla_{xx}f_{0}(x^{1}, R_{x^1}v)R_{x^{1}} + R_{x^{1}}\nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-A_{v,x^{1}} ) \\ &\quad + (-A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + {\color{blue} \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{1} \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{2} \end{bmatrix} } \end{split} \end{equation} \\ {\bf Conclusion} From \eqref{Cond2 3} and Lemma \ref{lem_RA}, \\ the \eqref{Cond2 1} can be written as symmetric form \\ \begin{equation} \label{sym Cond2_1} \begin{split} R_{x^{1}} \Big[ \nabla_{xv}f_{0}(x^{1},v) + \nabla_{vv}f_{0}(x^{1},v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} \Big] R_{x^{1}} &= \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} . \end{split} \end{equation} Similarly, 4th one give \begin{equation} \label{sym Cond2_2} \begin{split} R_{x^{1}} \Big[ \nabla_{vx}f_{0}(x^{1},v) + \frac{ (Qv)\otimes (Qv)}{v\cdot n} \nabla_{vv}f_{0}(x^{1}, v) \Big] R_{x^{1}} &= \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) + \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) . \end{split} \end{equation} \eqref{re 2} condition (\eqref{Cond2 2}) yields \begin{equation} \label{sym Cond2_3} \begin{split} &R_{x^{1}}\Big[ \nabla_{xx}f_{0}(x^{1},v) + \nabla_{vx}f_{0}(x^{1}, v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} + \frac{ (Qv)\otimes (Qv)}{v\cdot n} \nabla_{xv}f_{0}(x^{1}, v) \Big] R_{x^{1}} \\ &= \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} + \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) \\ &\quad + {\color{blue} \underbrace{ R_{x^{1}} \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{1} \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{2} \end{bmatrix} R_{x^{1}} }_{(?)} } \end{split} \end{equation} \unhide \subsection{Proof of Theorem \ref{thm 2}} \begin{proof} [Proof of Theorem \ref{thm 2}] By the same argument of the proof of Theorem \ref{thm 1}, it suffices to set $k=1$. Through this section, we have shown that \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4} yield $C^{2}_{t,x,v}$ regularity of $f(t,x,v)$ of \eqref{solution}. However, \eqref{Cond2 3} is just an obvious consequence of \eqref{BC} and \eqref{Cond2 4} is identical to \eqref{Cond2 1} since we assume \eqref{C2 cond34} which is the same as \eqref{Cond3} and \eqref{Cond4}. So, we omit \eqref{Cond2 3} and \eqref{Cond2 4} in the statement. 
In Remark \ref{extension C2 cond34}, under \eqref{C2 cond34}, we derived that \begin{equation*} \nabla_x f_0(x,v)R_x = \nabla_x f_0(x,R_xv) \quad \textrm{and} \quad \nabla_v f_0(x,v) \frac{(Qv) \otimes (Qv)}{v\cdot n} R_x = \nabla_v f_0(x,R_xv)\frac{(QR_xv)\otimes(QR_xv)}{R_x v\cdot n}, \end{equation*} for all $(x,v) \in \gamma_- \cup \gamma_+$. In Remark \ref{example}, we showed that \begin{equation*} f_0(x,v)=G(x,\vert v \vert), \quad (x,v) \in \partial \Omega \times \mathbb{R}^2, \end{equation*} where $G$ is a $C^1_{x,v}$ function. Notice that the function $G$ must be $C^2_{x,v}$ in order that $f_0 \in C^2_{x,v}(\bar\Omega\times \mathbb{R}^2)$ in Theorem \ref{thm 2}. Since $f_0(x,v)=G(x,\vert v \vert)$ is a radial function with respect to $v$ and $\nabla_x f_0(x,v) \parallel v^T$ for all $(x,v)\in \gamma_-\cup \gamma_+$, $\nabla_x f_0(x,v)$ must be $0$ on $\partial \Omega$. Now let us change \eqref{Cond2 1} and \eqref{Cond2 2} into symmetric forms. First, we multiply \eqref{Cond2 1} by $R_{x^1}$ from both left and right. Then applying \eqref{Cond2 3} and \eqref{RA}, we obtain \begin{align*} R_{x^1} \Big[ \nabla_{xv}f_{0}(x^1,v) + \nabla_{vv}f_{0}(x^1,v) \frac{ (Qv)\otimes (Qv)}{v\cdot n(x^1)} \Big] R_{x^1} &= \nabla_{xv}f_{0}(x^1, R_{x^1}v) + \nabla_{vv}f_{0}(x^1, R_{x^1}v) \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \notag \\ &\quad+ R_{x^1} \begin{bmatrix} \nabla_{v}f_{0}(x^1 , R_{x^1}v) \nabla_x(R^1_{x^1(x,v)}) \\ \nabla_{v}f_{0}(x^1, R_{x^1}v) \nabla_x(R^2_{x^1(x,v)}) \end{bmatrix} R_{x^1}. \end{align*} Also, plugging the above into \eqref{Cond2 2} and using \eqref{RA} again, we obtain \begin{align*} &R_{x^1}\Big[ \nabla_{xx}f_{0}(x^1,v) + \nabla_{vx}f_{0}(x^1, v) \frac{ (Qv)\otimes (Qv)}{v\cdot n(x^1)} + \frac{ (Qv)\otimes (Qv)}{v\cdot n(x^1)} \nabla_{xv}f_{0}(x^1, v) \Big] R_{x^1} \\ &= \nabla_{xx}f_{0}(x^1, R_{x^1}v) + \nabla_{vx}f_{0}(x^1, R_{x^1}v)\frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} + \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \nabla_{xv}f_{0}(x^1, R_{x^1}v) \\ &\quad -2R_{x^1} \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^1} \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^1} \end{bmatrix} R_{x^1}A_{v,x^1}R_{x^1} + A_{v,x^1}\begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(R^1_{x^1(x,v)}) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(R^2_{x^1(x,v)}) \end{bmatrix}R_{x^1} \\ &\quad + R_{x^1} \begin{bmatrix} \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_x (R^1_{x^1(x,v)}) \\ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_x(R^2_{x^1(x,v)}) \end{bmatrix} R_{x^1} - 2 R_{x^1} \begin{bmatrix} \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(A^1_{v,x^1(x,v)}) \\ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(A^2_{v,x^1(x,v)}) \end{bmatrix} R_{x^1}. \end{align*} By Lemma \ref{d_RA} and Lemma \ref{dx_A}, $\nabla_x(R^1_{x^1(x,v)}), \nabla_x(R^2_{x^1(x,v)}), \nabla_x(A^1_{v,x^1(x,v)})$, and $\nabla_x(A^2_{v,x^1(x,v)})$ depend only on $n(x^1)$ and $v$. We rewrite $x^1$ as $x$ for $(x,v) \in \gamma_-$ because $n(x^1)=x^1$. Since $\nabla_x f_0(x,v)=0$ for $x\in \partial \Omega$, we obtain \eqref{C2 cond 1} and \eqref{C2 cond 2}. Lastly, we will prove that $f(t,x,v)$ is not of class $C^2_{t,x,v}$ at time $t$ such that $t^k(t,x,v)=0$ for some $k$ if one of these conditions \eqref{C2 cond34}, \eqref{C2 cond 1}, and \eqref{C2 cond 2} for $(x,v)\in \gamma_-$ does not hold. Similar to the proof of Theorem \ref{thm 1}, it suffices to set $k=1$ and prove that $f(t,x,v)$ is not of class $C^2_{t,x,v}$ at time $t$ satisfying $t^1(t,x,v)=0$.
Let $t^*$ be the time $t$ such that $t^1(t,x,v)=0$. Recall that the condition \eqref{C2 cond34} was necessary to satisfy $\nabla_{xv}^Tf_0(x,v)=\nabla_{vx}f_0(x,v)$ and $\nabla_{xx}^T f_0(x,v) = \nabla_{xx}f_0(x,v)$ for $x\in \partial \Omega$. In other words, $\nabla_{xv}f_0(x,v)^T \neq \nabla_{vx} f_0(x,v)$ and $\nabla_{xx}^Tf_0(x,v)\neq \nabla_{xx}f_0(x,v)$ without \eqref{C2 cond34}. For $\nabla_{xv}$ and $\nabla_{vx}$ in the direction $\hat{r}_1$, we derived \eqref{nabla_xv f case1} and \eqref{nabla_vx f case1} at $t^*$: \begin{equation*} \nabla_{xv}f(t,x,v) = (-t) \nabla_{xx} f_0(x^1,v) + \nabla_{xv} f_0(x^1,v), \quad \nabla_{vx}f(t,x,v)=(-t) \nabla_{xx}f_0(x^1,v)+\nabla_{vx}f_0(x^1,v). \end{equation*} Thus, if \eqref{C2 cond34} does not hold, $\nabla_{xv}^T f(t,x,v) \neq \nabla_{vx}f(t,x,v)$. This implies that $f(t,x,v)$ is not $C^2_{t,x,v}$ at time $t^*$. Next, we do not assume \eqref{C2 cond 1} for $(x,v)\in \gamma_-$. The condition \eqref{C2 cond 1} is derived from the $\nabla_{xv}$ compatibility condition \eqref{Cond2 1}. Therefore, the directional derivatives \eqref{nabla_xv f case1} and \eqref{nabla_xv f case2} with respect to $\hat{r}_1$ and $\hat{r}_2$ are not the same. This means that $f(t,x,v)$ is not $C^2_{t,x,v}$ at time $t^*$. Finally, we assume that \eqref{C2 cond 2} does not hold for $(x,v)\in \gamma_-$. The condition \eqref{C2 cond 2} comes from the $\nabla_{xx}$, $\nabla_{xv}$, and $\nabla_{vx}$ compatibility conditions \eqref{Cond2 1}, \eqref{Cond2 2}, and \eqref{Cond2 4}. One may assume without loss of generality that the initial data $f_0$ satisfies \eqref{C2 cond34} and \eqref{C2 cond 1}. Then, only the $\nabla_{xx}$ compatibility condition \eqref{Cond2 2} is not satisfied. Similar to the above, the directional derivatives $\nabla_{xx}$ with respect to $\hat{r}_1$ and $\hat{r}_2$ are not the same. Then, $f(t,x,v)$ is not $C^2_{t,x,v}$ at time $t^*$ without \eqref{C2 cond 2}. This finishes the proof. \end{proof} \section{Regularity estimate of $f$} \subsection{First order estimates of characteristics} Using Definition \ref{notation}, \begin{equation*} V(0;t,x,v) = R_{\ell} R_{\ell-1} \cdots R_{2} R_{1} v, \quad \text{for some $\ell$ such that}\quad t^{\ell+1} < 0 \leq t^{\ell}, \end{equation*} where \[ R_{j} = I - 2 n(x^{j})\otimes n(x^{j}). \] For the above $\ell$, \begin{equation*} X(0;t,x,v) = x^{\ell} - v^{\ell}t^{\ell}, \end{equation*} where inductively, \[ x^{k} = x^{k-1} - v^{k-1}(t^{k-1} - t^{k}),\quad 2\leq k \leq \ell, \] and \[ x^{1} = x - v(t-t^{1}) = x - vt_{\mathbf{b}}. \] Alternatively, using rotational symmetry, we can also express \[ x^{\ell} = Q_{\theta}^{\ell-1}x^{1}, \] where $Q_{\theta}$ is the rotation matrix (acting on the boundary of the disk) by the angle $\theta$, and $\theta$ is uniquely determined by the first (backward in time) bounce angle through $v\cdot n(x_{\mathbf{b}})$. \\ \begin{lemma} \label{der theta} Here, $\theta$ is the angle by which $v$ is rotated to $v^1$. Moreover, $\theta>0$ is the same as the angle of rotation from $x^{k}$ to $x^{k+1}$ for $k=1,2,\cdots,l-1$. Then, the derivatives of $\theta$ with respect to $x$ and $v$ are \begin{equation} \label{d_theta} \nabla_x \theta =-\frac{2}{\sin \frac{\theta}{2}} Q_{-\frac{\theta}{2}}n(x^{1}), \quad \nabla_v \theta = 2\left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} - \frac{1}{\vert v \vert}\right) Q_{-\frac{\theta}{2}}n(x^{1}), \end{equation} provided $n(x^1)\cdot v \neq0$.
\end{lemma} \begin{proof} From the definition of $\theta$, \begin{equation} \label{theta} \cos \left( \frac{\pi}{2} - \frac{\theta}{2} \right) = \sin \left( \frac{\theta}{2} \right) = - \left[ n(x^1) \cdot \frac{v}{\vert v \vert} \right]. \end{equation} Thus, taking $\nabla_x$ yields \begin{align*} \frac{1}{2} \cos \frac{\theta}{2} \nabla_x \theta= -\frac{v}{\vert v \vert} \nabla_x \left( n(x^1)\right)=-\frac{v}{\vert v \vert} \left( I - \frac{v \otimes n(x^1)}{v \cdot n(x^1)} \right)=-\frac{v}{\vert v \vert}+ \frac{\vert v \vert}{v\cdot n(x^1)} n(x^1), \end{align*} where we used the product rule in Lemma \ref{matrix notation} and \eqref{normal} in Lemma \ref{d_n}. Note that rotating an angle $\phi=\frac{\pi}{2}-\frac{\theta}{2}>0$ on a normal vector $n(x^1)$ gives the vector $- \frac{v}{\vert v \vert}$. In other words, it holds that \begin{equation} \label{v_n} -\frac{v}{\vert v \vert} = Q_{\phi} n (x^1), \end{equation} where $Q_{\phi}= \begin{bmatrix} \cos \phi & -\sin \phi \\ \sin \phi & \cos \phi \end{bmatrix} =\begin{bmatrix} \sin \frac{\theta}{2} & -\cos \frac{\theta}{2} \\ \cos \frac{\theta}{2} & \sin \frac{\theta}{2} \end{bmatrix}$. Thus, \begin{align*} \nabla_x \theta &= \frac{2}{\cos \frac{\theta}{2}} \left( Q_{\phi} - \frac{1}{\sin \frac{\theta}{2}} I \right) n(x^1)=\frac{2}{\cos \frac{\theta}{2}\sin \frac{\theta}{2}} \begin{bmatrix} \sin^2 \frac{\theta}{2} -1 & -\cos \frac{\theta}{2} \sin \frac{\theta}{2} \\ \sin\frac{\theta}{2}\cos \frac{\theta}{2}& \sin^2 \frac{\theta}{2} -1\end{bmatrix}n(x^1) \\ &= -\frac{2}{\sin \frac{\theta}{2}} \begin{bmatrix} \cos \frac{\theta}{2} & \sin \frac{\theta}{2} \\ -\sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix}n(x^1) = -\frac{2}{\sin \frac{\theta}{2}} Q_{-\frac{\theta}{2}}n(x^{1}). \end{align*} Similarly, taking the derivative $\nabla_v$ of both sides in \eqref{theta}: \begin{align*} \frac{1}{2} \cos \frac{\theta}{2} \nabla_v \theta=-\frac{v}{\vert v \vert} \nabla_v \left( n(x^1)\right) -n(x^1) \left( \frac{1}{\vert v \vert} I- \frac{v \otimes v}{\vert v \vert^3} \right)&=t_{\mathbf{b}}\frac{v}{\vert v \vert} \Big(I - \frac{v\otimes n(x^1)}{v\cdot n(x^1)} \Big)-n(x^1) \left( \frac{1}{\vert v \vert} I- \frac{v \otimes v}{\vert v \vert^3} \right)\\ &=t_{\mathbf{b}} \frac{v}{\vert v \vert} -t_{\mathbf{b}} \frac{ \vert v \vert}{v\cdot n(x^1)} n (x^1) - \frac{1}{\vert v \vert} n(x^1) + \frac{v \cdot n(x^1)}{\vert v \vert^2} \frac{v}{\vert v \vert}, \end{align*} where we used the product rule in Lemma \ref{matrix notation} and \eqref{normal} in Lemma \ref{d_n}. 
From \eqref{v_n}, \begin{align*} \nabla_v \theta &= \frac{2}{\cos \frac{\theta}{2} \sin \frac{\theta}{2}} \left( -t_{\mathbf{b}}\sin\frac{\theta}{2} \left[Q_{\phi}-\frac{1}{\sin \frac{\theta}{2}}I \right]n(x^1)+\frac{\sin^2\frac{\theta}{2}}{\vert v \vert} \left[ Q_{\phi} - \frac{1}{\sin \frac{\theta}{2}} I\right]n(x^1) \right)\\ &=\frac{2 t_{\mathbf{b}}}{\sin \frac{\theta}{2}} \begin{bmatrix} \cos \frac{\theta}{2}& \sin \frac{\theta}{2} \\ -\sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix}n(x^1) -\frac{2}{\vert v \vert} \begin{bmatrix} \cos \frac{\theta}{2} & \sin\frac{\theta}{2} \\ -\sin\frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix} n(x^1)\\ &=2\left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} - \frac{1}{\vert v \vert}\right) \begin{bmatrix} \cos \frac{\theta}{2} & \sin\frac{\theta}{2} \\ -\sin\frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix} n(x^1) = 2\left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} - \frac{1}{\vert v \vert}\right) Q_{-\frac{\theta}{2}}n(x^{1}). \end{align*} \end{proof} \begin{lemma} \label{X,V} Let $(t,x,v) \in \mathbb{R}_+\times \Omega\times \mathbb{R}^2$. The specular characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$ are defined in Definition \ref{notation}. Whenever $n(x^1)\cdot v\neq0$, we have derivatives of the characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$: \begin{align} \label{n_x,v} \begin{split} \nabla_x X(0;t,x,v) &= Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) \\ &\quad -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x^1) \otimes \nabla_x \theta \right), \\ \nabla_v X(0;t,x,v)&=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right)+\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right) \\ &\quad - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right) -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta - \pi}\left(n(x^1) \otimes \nabla_v \theta\right), \\ \nabla_x V(0;t,x,v)&= -lQ_{l\theta-\frac{\pi}{2}} \left( v\otimes \nabla_x \theta \right),\\ \nabla_v V(0;t,x,v)&= Q_{\theta} ^l -lQ_{l\theta-\frac{\pi}{2}} \left( v\otimes \nabla_v \theta \right), \end{split} \end{align} where $\theta$ is the angle given in Lemma \ref{der theta}, $t_{\mathbf{b}}$ is the backward exit time defined in Definition \ref{notation}, $l$ is the bouncing number, and $Q_\theta$ is a rotation matrix by $\theta$. \end{lemma} \begin{proof} Recall \begin{align*} X(0;t,x,v) = x^l - v^l t^l , \quad V(0;t,x,v) = v^l. \end{align*} Using the rotation matrix $Q_\theta$, $x^l$ and $v^l$ can be expressed by \begin{align} \label{x,v_l} x^l = Q_{\theta}^{l-1} x^1, \quad v^l = Q_{\theta}^l v. \end{align} By the chain rule, \begin{align*} \frac{\partial{(X(0;t,x,v),V(0;t,x,v)})}{\partial{(x,v)}}= \frac{\partial{(X(0;t,x,v),V(0;t,x,v))}}{\partial{(t^l,x^l,v^l)}} \frac{\partial(t^l,x^l,v^l)}{\partial(x,v)}= \begin{bmatrix} -v^l & I & -t^l I \\ \textbf{0}_{2\times 1} & \textbf{0}_{2 \times 2} & I\end{bmatrix} \begin{bmatrix} \nabla_x t^l & \nabla_v t^l \\ \nabla_x x^l & \nabla_v x^l \\ \nabla_x v^l & \nabla_v v^l \end{bmatrix}, \end{align*} where $I$ is a $2\times 2$ identity matrix. 
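The rotation structure \eqref{x,v_l}, the angle relation \eqref{theta}, and the bounce-time identity $t^{k}-t^{k+1}=\frac{2\sin\frac{\theta}{2}}{\vert v \vert}$ used below can also be checked numerically; the following minimal sketch (written in Python with NumPy, assuming the unit disk so that $n(x)=x$ on $\partial\Omega$, with an arbitrarily chosen sample point and velocity, and comparing unsigned angles only, so the orientation of $Q_{\theta}$ is not tracked) follows the backward characteristic through several bounces.
\begin{verbatim}
import numpy as np

def backward_exit(x, w):
    # from x, travel along -w until reaching the unit circle:
    # the positive s with |x - s w| = 1
    a, b, c = w @ w, -2 * x @ w, x @ x - 1.0
    s = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s, x - s * w

def R(p):     # specular reflection R_p = I - 2 n(p) (x) n(p), with n(p) = p on the unit circle
    return np.eye(2) - 2.0 * np.outer(p, p)

def angle(a, b):   # unsigned angle between two vectors
    return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0))

x = np.array([0.3, -0.2])                                # interior point of the unit disk
v = 1.7 * np.array([np.cos(0.9), np.sin(0.9)])           # arbitrary velocity

tb, x1 = backward_exit(x, v)                             # backward exit time t_b and point x^1
theta = 2.0 * np.arcsin(-(x1 @ v) / np.linalg.norm(v))   # eq. (theta): sin(theta/2) = -n(x^1).v/|v|

xs, vs = [x1], [v]                                       # v^0 = v, v^k = R_{x^k} v^{k-1}
for k in range(1, 6):
    vk = R(xs[-1]) @ vs[-1]
    dt, xk = backward_exit(xs[-1], vk)                   # x^{k+1} = x^k - v^k (t^k - t^{k+1})
    assert np.isclose(dt, 2.0 * np.sin(theta / 2) / np.linalg.norm(v))   # bounce-time identity
    assert np.isclose(angle(xk, xs[-1]), theta)          # x^{k+1} = Q_theta x^k   (up to orientation)
    assert np.isclose(angle(vk, vs[-1]), theta)          # v^{k}   = Q_theta v^{k-1} (up to orientation)
    xs.append(xk); vs.append(vk)
print("rotation structure (x,v_l) and eq. (theta) verified, theta =", theta)
\end{verbatim}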
For the derivative of $X(0;t,x,v),V(0;t,x,v)$, it is necessary to find the derivative of $t^l,x^l,$ and $v^l$. Using the expression \eqref{x,v_l} and \eqref{d_matrix} in Lemma \ref{matrix notation}, we derive \begin{align*} \nabla_x x^l &= \nabla_x \left[ Q_\theta ^{l-1} x^1 \right]=Q_{\theta}^{l-1} \nabla_x x^1 -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_x \theta\\ &\hspace{.3cm} \qquad \qquad \qquad =Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_x \theta,\\ \nabla_v x^l &= \nabla_v \left[ Q_\theta ^{l-1} x^1 \right]=Q_{\theta}^{l-1} \nabla_v x^1 -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_v \theta\\ &\hspace{.3cm} \qquad \qquad \qquad =-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_v \theta,\\ \nabla_x v^l &= \nabla_x \left[ Q_\theta^l v \right]= -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_x \theta, \\ \nabla_v v^l &= \nabla_v \left [ Q_\theta^l v \right]= Q_{\theta} ^l -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_v \theta. \end{align*} For the derivative of $t^l$, we rewrite $t^l$ as \begin{align*} t^l = t-(t-t^1) - \sum_{k=1}^{l-1}(t^k-t^{k+1})= t- t_{\mathbf{b}} -\sum_{k=1}^{l-1} (t^k-t^{k+1}). \end{align*} Since $\displaystyle t^k-t^{k+1}=\frac{2\sin\frac{\theta}{2}}{\vert v \vert}$ for all $k=1,2,\dots,l-1$, it holds that \begin{align} \label{t_ell} t^l = t-t_{\mathbf{b}}- \frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert}, \quad l-1 = \frac{\vert v \vert}{ 2 \sin \frac{\theta}{2}} \left(t-t_{\mathbf{b}} -t^l\right). \end{align} Taking the derivative of $t^l$ with respect to $x,v$ \begin{align} \label{nabla x,v t_ell} \begin{split} \nabla_x t^l &= -\nabla_x t_{\mathbf{b}} -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta=-\frac{n(x^1)}{v \cdot n(x^1)}-\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta=\frac{1}{\vert v \vert \sin\frac{\theta}{2}} n(x^1) -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta, \\ \nabla_v t^l &= -\nabla_v t_{\mathbf{b}} + \frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta=t_{\mathbf{b}} \frac{n(x^1)}{v \cdot n(x^1)} +\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta\\ &\hspace{7.1cm}=-\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} n(x^1) +\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta. 
\end{split} \end{align} Also note that, from \eqref{v_n} and \eqref{t_ell}, we have \begin{equation} \label{cancel} \begin{split} &-(l-1)Q_{(l-1)\theta-\frac{\pi}{2}} \left(x^1 \otimes \nabla \theta\right) +\frac{(l-1)\cos\frac{\theta}{2}}{\vert v \vert} Q_{\theta}^l \left(v \otimes \nabla \theta\right) \\ &= -(l-1)\left(Q_{(l-1)\theta -\frac{\pi}{2}} +\cos \frac{\theta}{2} Q_{l\theta}Q_{\frac{\pi}{2}-\frac{\theta}{2}}\right) \left(n(x^1) \otimes \nabla \theta \right) \\ &= - (l-1) Q_{(l-1)\theta -\frac{\pi}{2}} \begin{bmatrix} \sin^2 \frac{\theta}{2} & \sin \frac{\theta}{2} \cos \frac{\theta}{2} \\ -\sin \frac{\theta}{2} \cos \frac{\theta}{2} & \sin^2 \frac{\theta}{2} \end{bmatrix} \left(n(x^1) \otimes \nabla \theta \right) \\ &= -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x^1) \otimes \nabla \theta \right). \end{split} \end{equation} Hence, using \eqref{cancel} and $x^{1}=n(x^{1})$, \begin{align*} \begin{split} \nabla_x X(0;t,x,v) &= \nabla_x x^l - t^l \nabla_x v^l -v^l \otimes \nabla_x t^l\\ &= Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_x \theta \\ & \quad +t^l l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_x \theta-\frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) +\frac{(l-1)\cos\frac{\theta}{2}}{\vert v \vert} Q_{\theta}^l \left(v \otimes \nabla_x \theta\right) \\ &=Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) \\ &\quad -(l-1)Q_{(l-1)\theta-\frac{\pi}{2}} \left(x^1 \otimes \nabla_x \theta\right) +\frac{(l-1)\cos\frac{\theta}{2}}{\vert v \vert} Q_{\theta}^l \left(v \otimes \nabla_x \theta\right) \\ &=Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) \\ &\quad -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x^1) \otimes \nabla_x \theta \right), \\ \nabla_v X(0;t,x,v)&=\nabla_v x^l -t^l \nabla_v v^l-v^l \otimes \nabla_v t^l\\ &=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_v \theta \\ &\quad -t^l Q_\theta ^l+t^l l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_v \theta\\ &\quad +\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right) - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right)+ \frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} Q_\theta^l\left( v \otimes \nabla_v \theta \right),\\ &= -t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right) +\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right)\\ &\quad - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert 
v \vert^3} Q_\theta^l \left(v \otimes v\right) -(l-1)Q_{(l-1)\theta-\frac{\pi}{2}}\left(x^1 \otimes \nabla_v \theta \right) + \frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} Q_\theta^l\left( v \otimes \nabla_v \theta \right)\\ &=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) - t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right)+\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right) \\ &\quad - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right) -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta - \pi}\left(n(x^1) \otimes \nabla_v \theta\right) ,\\ \nabla_x V(0;t,x,v)&=\nabla_x v^l = -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_x \theta=-lQ_{l\theta-\frac{\pi}{2}} \left( v\otimes \nabla_x \theta \right),\\ \nabla_v V(0;t,x,v)&= \nabla_v v^l = Q_{\theta} ^l -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_v \theta=Q_{\theta}^l -l Q_{l \theta-\frac{\pi}{2}} \left( v\otimes \nabla_v \theta \right). \end{split} \end{align*} \end{proof} \begin{lemma} The exit backward time $t_{\mathbf{b}}$ and the $l$-th bouncing backward time $t^l$ are defined in Definition \ref{notation}. Then, it holds that \begin{align}\label{tb esti} t_{\mathbf{b}} \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}, \quad t^l \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}. \end{align} \end{lemma} \begin{proof} Note that \begin{align*} t_{\mathbf{b}} = t-t^1= \frac{\vert x - x^1\vert}{ \vert v \vert }, \quad t^l = \frac{\vert x^l - X(0;t,x,v) \vert}{\vert v^l \vert}. \end{align*} Whenever $\theta$ is the angle at which $v$ is rotated to $v^1$, one obtains that \begin{align*} \vert x-x^1 \vert\leq 2 \sin \frac{\theta}{2}, \quad \vert x^l - X(0;t,x,v)\vert \leq 2 \sin \frac{\theta}{2}. \end{align*} From the above inequalities and $\vert v^l \vert = \vert v \vert $, we obtain \begin{align*} t_{\mathbf{b}} \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}, \quad t^l \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}. \end{align*} \end{proof} \begin{lemma} \label{est der X,V} Under the same assumption in Lemma \ref{X,V}, we have estimates of derivatives for the characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$ \begin{align*} \begin{split} \vert \nabla_x X(0;t,x,v) \vert &\lesssim \frac{\vert v \vert} { \vert v \cdot n(x_{\mathbf{b}}) \vert}\left( 1 + \vert v \vert t\right),\\ \vert \nabla_v X(0;t,x,v) \vert &\lesssim \frac{1} { \vert v \vert}\left( 1 + \vert v \vert t \right), \\ \vert \nabla_x V(0;t,x,v) \vert & \lesssim \frac{\vert v \vert^3}{ \vert v \cdot n(x_{\mathbf{b}}) \vert^2} \left( 1+ \vert v \vert t \right), \\ \vert \nabla_v V(0;t,x,v) \vert & \lesssim \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \left( 1+ \vert v \vert t \right), \end{split} \end{align*} where $n(x_{\mathbf{b}})$ is outward unit normal vector at $x_{\mathbf{b}} = x-t_{\mathbf{b}} v \in \partial\Omega$. \\ \end{lemma} \begin{remark} First-order derivatives of characteristics $(X,V)$ for general 3D convex domain were obtained in \cite{GKTT2017}. Lemma \ref{est der X,V} is simple version in 2D disk and its singular orders coincide with the results of \cite{GKTT2017}. 
\\ \end{remark} \begin{proof} By \eqref{n_x,v} in Lemma \ref{X,V}, we have \begin{align*} \nabla_x X(0;t,x,v) &= Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} \right)-\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x_{\mathbf{b}}) \otimes \nabla_x \theta \right)\\ &\quad +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x_{\mathbf{b}})\right). \end{align*} We define a matrix norm by \begin{equation*} \vert A \vert = \max _{i,j} a_{i,j}, \end{equation*} where $a_{i,j}$ is the $(i,j)$ component of the matrix $A$. Then, we can easily check that \begin{equation*} \vert a \otimes b \vert \leq \vert a \vert \vert b \vert, \end{equation*} for any $a,b \in \mathbb{R}^n$. To find upper bound of $\nabla_x X(0;t,x,v)$, we only need to consider $\nabla_x \theta$ and $t^l \times l$. By \eqref{d_theta},\eqref{t_ell}, and \eqref{tb esti}, \begin{align} \label{e_1} \vert \nabla_x \theta \vert =\left \vert \frac{2}{\sin \frac{\theta}{2}} Q_{-\frac{\theta}{2}} n(x_{\mathbf{b}}) \right \vert \leq \frac{2 \vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert}, \quad t^l \times l \leq \frac{2 \sin\frac{\theta}{2}}{\vert v \vert}\times \left(\frac{\vert v \vert}{2\sin \frac{\theta}{2}}t +1 \right) \leq t+\frac{2}{\vert v \vert}. \end{align} Using the above inequalities, we derive that \begin{align*} \vert \nabla_x X(0;t,x,v) \vert &\leq 1+ \frac{ \vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} + \frac{ \vert v \vert t}{2} \vert \nabla_x \theta \vert + t^l l \vert v \vert \vert \nabla_x \theta \vert + \frac{1}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \vert v \vert \\ &\leq 1+\frac{ \vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert } + \frac{ \vert v \vert^2}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} t + \frac{ 2\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert } \left( t + \frac{2}{\vert v \vert} \right) +\frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert}\\ &\lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} \left( 1+ \vert v \vert t\right). \end{align*} \end{proof} Recall the derivative $\nabla_v X(0;t,x,v)$ in Lemma \ref{X,V}. \begin{align*} \nabla_v X(0;t,x,v)&=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} \right)-\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta - \pi}\left(n(x_{\mathbf{b}}) \otimes \nabla_v \theta\right) \\ &\quad -t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right)+\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x_{\mathbf{b}})\right)- \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right). \end{align*} Similarly, to estimate $\nabla_v X(0;t,x,v)$, we need to estimate $\nabla_v \theta$. From \eqref{d_theta} and \eqref{tb esti}, we directly compute \begin{align} \label{e_2} \vert \nabla_v \theta \vert = 2 \left \vert \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}}- \frac{1}{\vert v \vert} \right) Q_{-\frac{\theta}{2}}n(x_{\mathbf{b}}) \right \vert \leq \frac{6}{\vert v \vert}. 
\end{align} Thus, \begin{align*} \vert \nabla_v X(0;t,x,v) \vert &\leq t_{\mathbf{b}} \left( 1+ \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \right) +\frac{\vert v \vert t}{2} \vert \nabla_v \theta \vert + t^l + \left( t^l \times l \right) \vert v \vert \vert \nabla_v \theta \vert + \frac{ t_{\mathbf{b}}}{ \vert v \vert \sin \frac{\theta}{2}} \vert v \vert + \frac{2(l-1) \sin\frac{\theta}{2}}{\vert v \vert^3} \vert v \vert^2 \\ &\leq \frac{2\sin \frac{\theta}{2}}{\vert v \vert} \left( 1+ \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \right)+\frac{\vert v \vert t}{2} \times \frac{6}{\vert v \vert}+\frac{2\sin \frac{\theta}{2}}{\vert v \vert} + 6\left(t +\frac{2}{\vert v \vert} \right)\\ &\quad + \frac{2\sin \frac{\theta}{2}}{\vert v \vert} \frac{1}{\vert v \vert \sin \frac{\theta}{2}} \vert v \vert + \frac{(t-t_{\mathbf{b}}-t^l)}{\vert v \vert^2} \vert v \vert^2\\ &\lesssim \frac{1}{\vert v \vert} \left (1+ \vert v \vert t \right), \end{align*} where we used \eqref{tb esti} and \eqref{e_1}. For $\nabla_{x,v} V(0;t,x,v)$, using \eqref{n_x,v}, \eqref{t_ell}, \eqref{e_1}, and \eqref{e_2} gives \begin{align*} \vert \nabla_x V(0;t,x,v) \vert &= \left \vert -l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) \right \vert \leq \left(\frac{\vert v \vert}{\vert 2\sin \frac{\theta}{2}\vert} t +1\right) \vert v \vert \vert \nabla_x \theta \vert \lesssim \frac{ \vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2}(1+ \vert v \vert t), \\ \vert \nabla_v V(0;t,x,v) \vert&= \left \vert Q_{\theta}^l - lQ_{l\theta -\frac{\pi}{2}} (v \otimes \nabla_v \theta) \right \vert \leq 1+ \left(\frac{\vert v \vert}{\vert 2 \sin \frac{\theta}{2}\vert} t +1\right) \vert v \vert \vert \nabla_v \theta \vert\lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} (1+ \vert v \vert t). \end{align*} \subsection{Second-order estimates of characteristics} \begin{lemma} $n(x_{\mathbf{b}})$ is outward unit normal vector at $x_{\mathbf{b}}\in \partial \Omega$. For $(x_{\mathbf{b}},v) \notin \gamma_0$, it follows that \begin{equation} \label{est der n} \vert \nabla_x [n(x_{\mathbf{b}})] \vert \lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert}, \quad \vert \nabla_v [n(x_{\mathbf{b}})] \vert \lesssim \frac{1}{\vert v \vert}. \end{equation} \end{lemma} \begin{proof} We denote the components of $v$ and $n(x_{\mathbf{b}})$ by $(v_1,v_2)$ and $(n_1,n_2)$. By \eqref{normal} in Lemma \ref{d_n} and \eqref{tb esti}, we have \begin{align*} \nabla_x[n(x_{\mathbf{b}})] &= I- \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} = \frac{1}{v\cdot n(x_{\mathbf{b}})}\begin{bmatrix} v_2n_2 & -v_1n_2 \\ -v_2n_1 & v_1n_1 \end{bmatrix},\\ \nabla_v [n(x_{\mathbf{b}})]&=-t_{\mathbf{b}}\left(I- \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}\right)=\frac{-t_{\mathbf{b}}}{v\cdot n(x_{\mathbf{b}})} \begin{bmatrix} v_2n_2 & -v_1n_2 \\ -v_2n_1 & v_1n_1 \end{bmatrix}, \end{align*} which is further bounded by \begin{equation*} \vert \nabla_x [n(x_{\mathbf{b}})] \vert \lesssim \frac{\vert v \vert }{\vert v \cdot n(x_{\mathbf{b}})\vert}, \quad \vert \nabla_v [n(x_{\mathbf{b}})] \vert \lesssim \frac{1}{\vert v \vert}. \end{equation*} \end{proof} \begin{lemma} The exit backward time $t_{\mathbf{b}}$ and the $l$-th bouncing backward time $t^l$ are defined in Definition \ref{notation}. 
Then, we have the following estimates \begin{equation} \label{est der t_ell} \begin{split} &\vert \nabla_x t^1 \vert \lesssim \frac{1}{\vert v \vert \vert \sin\frac{\theta}{2} \vert}, \quad \vert \nabla_v t^1 \vert \lesssim \frac{1}{\vert v \vert^2},\\ &\vert \nabla_x t^l \vert \lesssim \frac{1}{\vert v \vert \sin^2\frac{\theta}{2}}(1+\vert v\vert t), \quad \vert \nabla_v t^l \vert \lesssim \frac{1}{\vert v \vert^2 \vert \sin \frac{\theta}{2} \vert }(1+\vert v \vert t), \end{split} \end{equation} whenever $v\cdot n(x_{\mathbf{b}}) \neq 0$. \end{lemma} \begin{proof} Since $t^1 = t-t_b$, it follows from Lemma \ref{nabla xv b} that \begin{align*} \nabla_x t^1 = -\nabla_x t_{\mathbf{b}} = -\frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}, \quad \nabla_v t^1 = -\nabla_v t_{\mathbf{b}} = t_{\mathbf{b}} \frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}. \end{align*} Using the above and \eqref{tb esti} implies that \begin{align*} \vert \nabla_x t^1 \vert \lesssim \frac{1}{\vert v \vert \left \vert \sin \frac{\theta}{2}\right \vert}, \quad \vert \nabla_v t^1 \vert \lesssim \frac{1}{\vert v \vert^2}. \end{align*} By \eqref{nabla x,v t_ell} in the proof of Lemma \ref{X,V}, we have \begin{align*} \nabla_x t^l &= \frac{1}{\vert v \vert \sin \frac{\theta}{2}} n(x_{\mathbf{b}}) - \frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta, \\ \nabla_v t^l &= -\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} n(x_{\mathbf{b}}) +\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta. \end{align*} By \eqref{t_ell} in the proof of Lemma \ref{X,V}, the bouncing number $l$ can be bounded by \begin{equation} \label{ell est} l= 1+ \frac{\vert v \vert}{2\sin \frac{\theta}{2}} (t-t_{\mathbf{b}}-t^l) \leq 1+ \frac{\vert v \vert }{2\left \vert \sin \frac{\theta}{2}\right \vert} t \lesssim \frac{1}{\left \vert \sin\frac{\theta}{2}\right \vert} (1+\vert v \vert t). \end{equation} Then, from \eqref{tb esti}, \eqref{e_1},\eqref{e_2}, and \eqref{ell est}, one obtains that \begin{align*} \vert \nabla_x t^l \vert &\lesssim \frac{1}{\vert v \vert \vert \sin \frac{\theta}{2}\vert} + \frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}}(1+\vert v \vert t)\lesssim \frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}} (1+\vert v \vert t), \\ \vert\nabla_v t^l \vert &\lesssim \frac{1}{\vert v \vert^2}+\frac{1}{\vert v \vert^2}(1+\vert v \vert t) + \frac{1}{\vert v \vert^2}(1+\vert v \vert t) \lesssim \frac{1}{\vert v \vert^2} (1+\vert v \vert t). \end{align*} \end{proof} \begin{lemma} \label{2nd est der X,V} The characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$ are defined in Definition \ref{notation}. 
Under the same assumption in Lemma \ref{X,V}, we have estimates for the second derivatives of characteristics \begin{equation*} \begin{split} &\vert \nabla_{xx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^4}{\vert v \cdot n(x_{\mathbf{b}})\vert^4}(1+\vert v \vert^2 t^2), \quad \vert \nabla_{vx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3}(1+\vert v \vert^2 t^2), \\ &\vert \nabla_{xv} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3}(1+\vert v \vert^2 t^2), \quad \vert \nabla_{vv}X(0;t,x,v) \vert \lesssim \frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2}(1+\vert v \vert^2 t^2),\\ &\vert \nabla_{xx} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^5}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2t^2), \quad \vert \nabla_{vx} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}})\vert^3}(1+\vert v \vert^2 t^2),\\ &\vert \nabla_{xv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}})\vert^3}(1+\vert v \vert^2 t^2), \quad \vert \nabla_{vv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}})\vert^2} (1+\vert v \vert^2 t^2), \end{split} \end{equation*} where $\vert \nabla_{xx,xv,vv} X(0;t,x,v) (or \; V(0;t,x,v))\vert $ is given by $\sup_{i,j} \vert \nabla_{ij}X(0;t,x,v) (or \; \nabla_{ij}V(0;t,x,v))\vert$ for $i,j \in \{x_1,x_2,v_1,v_2\}$. \end{lemma} \begin{proof} We denote the components of $v$ and $n(x_{\mathbf{b}})$ by $(v_1,v_2)$ and $(n_1,n_2)$. To estimate $\vert \nabla_{xx} X(0;t,x,v)\vert$, we need to determine which component in the matrix $\nabla_x X(0;t,x,v)$ has the highest singularity $\frac{1}{\sin \frac{\theta}{2}}$ and travel length $(1+\vert v \vert t)$ order when we take the derivative with respect to $x$. In estimates \eqref{e_1},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est}, we already checked singularity and travel length order for some terms. Considering these estimates, we get the highest singularity and travel length order in the $x$-derivative of the (1,1) component of the matrix $\nabla_x X(0;t,x,v)$. Hence, we only consider the (1,1) component among components in the matrix $\nabla_x X(0;t,x,v)$. In fact, from Lemma \ref{X,V}, the (1,1) component $[\nabla_x X(0;t,x,v)]_{(1,1)}$ of the matrix $\nabla_x X(0;t,x,v)$ is \begin{align*} &[\nabla_x X(0;t,x,v)]_{(1,1)}\\ &= \cos((l-1)\theta) \frac{v_2n_2}{v\cdot n(x_{\mathbf{b}})} + \sin((l-1)\theta) \frac{v_2 n_1}{v\cdot n(x_{\mathbf{b}})}\\ &\quad +\frac{\vert v \vert(t^1-t^l)}{\sin \frac{\theta}{2}}\left(-n_1^2 \cos ((l-\frac{1}{2})\theta) \cos \frac{\theta}{2} -n_1n_2 \cos ((l-\frac{1}{2})\theta) \sin \frac{\theta}{2}+n_1n_2 \sin((l-\frac{1}{2})\theta)\cos \frac{\theta}{2} \right. \\ &\left. 
\qquad \qquad \qquad \qquad +n_2^2 \sin ((l-\frac{1}{2})\theta) \sin \frac{\theta}{2} \right)\\ &\quad-\frac{2t^l l}{\sin \frac{\theta}{2}} \left( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\ &\quad -\frac{1}{\vert v \vert \sin \frac{\theta}{2}}\left( v_1 n_1 \cos l\theta -v_2n_1 \sin l\theta\right)\\ &\lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} + \frac{1}{\vert \sin \frac{\theta}{2} \vert} (1+\vert v \vert t) +\frac{1}{\vert \sin \frac{\theta}{2}\vert} \lesssim \frac{\vert v \vert }{\vert v \cdot n(x_{\mathbf{b}}) \vert}(1+\vert v \vert t) , \end{align*} where the first inequality comes from \eqref{tb esti}, \eqref{ell est}, and \begin{equation*} t^1-t^l= \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert} \lesssim \frac{1}{\vert v \vert}(1+\vert v \vert t). \end{equation*} Similarly, the $(1,1)$ components of matrices $\nabla_v X(0;t,x,v), \nabla_x V(0;t,x,v)$, and $\nabla_v V(0;t,x,v)$ satisfy inequalities in Lemma \ref{est der X,V}. Similar as estimate $\vert \nabla_{xx} X(0;t,x,v)\vert$, we only consider $(1,1)$ components of derivative matrices for $X(0;t,x,v)$ and $V(0;t,x,v)$ to get estimates. When we differentiate $[\nabla_x X(0;t,x,v)]_{(1,1)}$ with respect to $x$, the terms containing $\frac{t^l l}{\sin \frac{\theta}{2}}$ are main terms that increase the singularity $\frac{1}{\sin \frac{\theta}{2}}$ and travel length $(1+\vert v \vert t)$ order. $\frac{t^l l}{\sin \frac{\theta}{2}}$ has a singularity order 1 and travel length order 1 because \begin{equation*} \left \vert \frac{t^l l}{\sin \frac{\theta}{2}}\right \vert \lesssim \frac{1}{\vert \sin \frac{\theta}{2}\vert } \times \frac{\vert \sin \frac{\theta}{2}\vert }{\vert v \vert} \times \frac{1}{\vert \sin \frac{\theta}{2}\vert}(1+\vert v \vert t)=\frac{ \vert v \vert}{\vert v \cdot n(x_{\mathbf{b}})\vert}(1+\vert v \vert t), \end{equation*} where we have used \eqref{tb esti} and \eqref{ell est}. On the other hand, if we take of the term $\frac{t^l l}{\sin \frac{\theta}{2}}$ with respect to $x$, the singularity and travel length order become $4$ and $2$ respectively: \begin{align*} \left \vert \nabla_x \left(\frac{t^l l}{\sin \frac{\theta}{2}}\right)\right \vert =\left \vert \frac{l}{\sin \frac{\theta}{2}} \nabla_x t^l -\frac{t^l l\cos\frac{\theta}{2}}{2\sin^2 \frac{\theta}{2}} \nabla_x \theta\right \vert &\lesssim \frac{1}{\vert v \vert \sin^4 \frac{\theta}{2}}(1+\vert v \vert^2t^2) + \frac{1}{\vert v \vert \vert \sin^3 \frac{\theta}{2}\vert}(1+\vert v \vert t)\\ &\lesssim \frac{\vert v \vert^3}{\vert v\cdot n(x_{\mathbf{b}})\vert^4} (1+\vert v \vert ^2 t^2), \end{align*} where \eqref{tb esti}, \eqref{e_1}, \eqref{est der t_ell}, and \eqref{ell est} have been used. Hence, it suffices to estimate the following terms in $[\nabla_x X(0;t,x,v)]_{(1,1)}$ \begin{equation*} -\frac{2t^l l}{\sin \frac{\theta}{2}} \left( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right):=I_{1}, \end{equation*} to obtain estimate for $\vert \nabla_{xx} X(0;t,x,v) \vert$. 
Taking the $x$-derivative to the above terms, one obtains \begin{align*} \nabla_x I_{1} &= \left ( \frac{-2l\nabla_x t^l}{\sin \frac{\theta}{2}} +\frac{2t^l l\cos\frac{\theta}{2}\nabla_x \theta}{2\sin^2 \frac{\theta}{2}} \right )\Big( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\Big)\\ &\quad -\frac{2t^l l}{\sin \frac{\theta}{2}} \left( v_1 \sin l \theta \cos \frac{\theta}{2}\nabla_x n_1 +lv_1 n_1 \cos l\theta \cos \frac{\theta}{2}\nabla_x \theta -\frac{1}{2} v_1 n_1 \sin l\theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\ &\qquad \qquad \quad \left. + v_1 \sin l \theta \sin \frac{\theta}{2}\nabla_x n_2 +lv_1 n_2 \cos l\theta \sin \frac{\theta}{2}\nabla_x \theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_x \theta \right. \\ &\qquad \qquad \quad \left. + v_2 \cos l\theta \cos \frac{\theta}{2}\nabla_x n_1 -lv_2 n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2n_1\cos l\theta \sin \frac{\theta}{2}\nabla_x \theta \right. \\ &\qquad \qquad \quad \left. + v_2 \cos l\theta \sin \frac{\theta}{2}\nabla_x n_2 -lv_2 n_2 \sin l\theta \sin \frac{\theta}{2} \nabla_x \theta + \frac{1}{2} v_2 n_2 \cos l \theta \cos \frac{\theta}{2}\nabla_x \theta \right). \end{align*} Using \eqref{tb esti},\eqref{e_1},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est}, one can further bound the above as \begin{align*} \vert \nabla_x I_{1}\vert &\lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}})\vert^4} (1+\vert v \vert^2t^2) \times \vert v \vert + \frac{1}{\vert v \cdot n(x_{\mathbf{b}})\vert}(1+\vert v \vert t) \times \left( \vert v \vert \vert \nabla_x n(x_{\mathbf{b}}) \vert + l\vert v \vert\vert \nabla_x \theta \vert\right )\\ &\lesssim \frac{\vert v \vert^4}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2 t^2). \end{align*} Therefore, we get \begin{align*} \vert \nabla_{xx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^4}{\vert v\cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2t^2). \end{align*} For estimate of $\vert \nabla_{vx} X(0;t;x,v)\vert$, similar to the case $\vert \nabla_{xx} X(0;t,x,v) \vert$, we only consider terms $I_1$. By taking the $v$-derivative to $I_1$, we obtain \begin{align*} \nabla_v I_1&=\left ( \frac{-2l \nabla_v t^l}{\sin \frac{\theta}{2}} +\frac{2t^l l\cos\frac{\theta}{2}\nabla_v \theta}{2\sin^2 \frac{\theta}{2}} \right )\Big( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\Big)\\ &\quad -\frac{2t^l l}{\sin \frac{\theta}{2}} \left( n_1 \sin l \theta \cos \frac{\theta}{2}\nabla_v v_1 + v_1 \sin l \theta \cos \frac{\theta}{2}\nabla_v n_1 +lv_1 n_1 \cos l\theta \cos \frac{\theta}{2}\nabla_v \theta -\frac{1}{2} v_1 n_1 \sin l\theta \sin \frac{\theta}{2} \nabla_v \theta \right. \\ &\qquad \qquad \quad \left. + n_2\sin l\theta \sin \frac{\theta}{2} \nabla_v v_1+ v_1\sin l \theta \sin \frac{\theta}{2} \nabla_v n_2 +lv_1 n_2 \cos l\theta \sin \frac{\theta}{2}\nabla_v \theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_v \theta \right. \\ &\qquad \qquad \quad \left. + n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_v v_2+v_2\cos l\theta \cos \frac{\theta}{2} \nabla_v n_1 -lv_2 n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_v \theta -\frac{1}{2} v_2n_1\cos l\theta \sin \frac{\theta}{2}\nabla_v \theta \right. \\ &\qquad \qquad \quad \left. 
+n_2\cos l \theta \sin \frac{\theta}{2} \nabla_v v_2+ v_2\cos l\theta \sin \frac{\theta}{2} \nabla_v n_2 -lv_2 n_2 \sin l\theta \sin \frac{\theta}{2} \nabla_v \theta + \frac{1}{2} v_2 n_2 \cos l \theta \cos \frac{\theta}{2}\nabla_v \theta \right). \end{align*} Using \eqref{tb esti},\eqref{e_2},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est} yields that \begin{align*} \vert \nabla_v I_1 \vert &\lesssim \left( \frac{1}{\vert v \vert^2 \vert \sin^3 \frac{\theta}{2}\vert} (1+\vert v \vert^2 t^2)+\frac{1}{\vert v \vert^2 \vert \sin ^2 \frac{\theta}{2}\vert}(1+\vert v \vert t) \right)\times \vert v \vert+\frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert} ( 1+ \vert v \vert \vert \nabla_v n(x_{\mathbf{b}}) \vert +l\vert v \vert \vert \nabla_v \theta\vert)\\ &\lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert ^2 t^2). \end{align*} Hence, one obtains that \begin{align*} \vert \nabla_{vx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert ^2 t^2). \end{align*} By Lemma \ref{X,V}, we write the $(1,1)$ component of $\nabla_v X(0;t,x,v)$: \begin{align*} &[\nabla_v X(0;t,x,v)]_{(1,1)}\\ &=-t_{\mathbf{b}} \left(\cos (l-1)\theta \frac{v_2n_2}{v\cdot n(x_{\mathbf{b}})} +\sin (l-1)\theta \frac{v_2n_1}{v \cdot n(x_{\mathbf{b}})} \right) -t^l \cos l\theta \\ &\quad +2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right) \\ &\quad + \frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} (v_1n_1\cos l\theta -v_2 n_1 \sin l \theta) -\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3}(v_1^2\cos l\theta - v_1v_2 \sin l \theta) \\ &\quad -\vert v \vert (t^1-t^l) \left(\frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}}-\frac{1}{\vert v \vert}\right) \left(-n_1^2 \cos (l-\frac{1}{2})\theta \cos \frac{\theta}{2} +n_1n_2 \sin (l-1)\theta +n_2^2 \sin (l-\frac{1}{2})\theta \sin \frac{\theta}{2}\right). \end{align*} Similar to $\nabla_x X(0;t,x,v)$, main terms in $\nabla_v X(0;t,x,v)$ are \begin{align*} 2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right):=I_2. \end{align*} As we take derivative to $\nabla_v X(0;t,x,v)$ with respect to $x$ and $v$, $I_2$ mainly contributes to increase singularity and travel length order. Thus, we only differentiate terms $I_2$ to get estimate for $\vert \nabla_{xv} X(0;t,x,v) \vert $ and $\vert \nabla_{vv} X(0;t,x,v)\vert$. 
Firstly, taking $x$ derivative to $I_2$ gives \begin{align*} \nabla_x I_2 &= 2l \nabla_x t^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\ &\quad + 2lt^l \left( \frac{\nabla_x t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{t_{\mathbf{b}}\cos \frac{\theta}{2}\nabla_x \theta}{2\sin^2 \frac{\theta}{2}} \right)\Big( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\Big)\\ &\quad +2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left( v_1 \sin l\theta \cos \frac{\theta}{2} \nabla_xn_1+lv_1 n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2}v_1 n_1 \sin l \theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\ &\qquad \qquad \qquad \qquad \qquad \quad +\left. v_1\sin l \theta \sin \frac{\theta}{2} \nabla_x n_2+lv_1n_2\cos l \theta \sin \frac{\theta}{2} \nabla_x \theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_x \theta \right.\\ &\qquad \qquad \qquad \qquad \qquad \quad +\left. v_2\cos l\theta \cos \frac{\theta}{2}\nabla_x n_1 -lv_2n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2 n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\ &\qquad \qquad \qquad \qquad \qquad \quad +\left. v_2 \cos l\theta \sin \frac{\theta}{2} \nabla_x n_2 -lv_2n_2 \sin l \theta \sin \frac{\theta}{2} \nabla_x \theta +\frac{1}{2} v_2n_2 \cos l\theta \cos \frac{\theta}{2} \nabla_x \theta \right). \end{align*} Hence, it follows from \eqref{tb esti},\eqref{e_1},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est} that \begin{align*} \vert \nabla_x I_2 \vert &\lesssim \frac{1}{\vert v \vert \vert \sin ^3 \frac{\theta}{2} \vert} (1+\vert v \vert^2 t^2) \times \frac{1}{\vert v \vert} \times \vert v \vert + \frac{1}{\vert v\vert}(1+\vert v \vert t) \times\frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}} \times \vert v \vert \\ &\quad + \frac{1}{\vert v \vert}(1+\vert v \vert t)\times \frac{1}{\vert v \vert} \times \frac{\vert v \vert}{\sin ^2\frac{\theta}{2}} (1+\vert v \vert t)\\ &\lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2 t^2), \end{align*} which yields $\vert \nabla_{xv} X(0;t,x,v) \vert$ estimate \begin{align*} \vert \nabla_{xv} X(0;t,x,v) \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2 t^2). \end{align*} Similarly, we consider $\nabla_v I_2$: \begin{align*} \nabla_v I_2 &= 2l \nabla_v t^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\ &\quad + 2lt^l \left( \frac{\nabla_v t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{t_{\mathbf{b}}\cos \frac{\theta}{2}}{2\sin^2 \frac{\theta}{2}} \nabla_v \theta+\frac{v}{\vert v \vert^3}\right)\left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} \right.\\ &\left. 
\qquad \hspace{5.7cm} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\ &\quad +2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left( n_1 \sin l\theta \cos \frac{\theta}{2} \nabla_v v_1+v_1 \sin l\theta \cos \frac{\theta}{2} \nabla_vn_1+lv_1 n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_v \theta \right.\\ &\qquad \qquad \qquad \qquad \qquad \quad\left. -\frac{1}{2}v_1 n_1 \sin l \theta \sin \frac{\theta}{2} \nabla_v \theta +n_2 \sin l \theta \sin \frac{\theta}{2} \nabla_v v_1 +v_1\sin l \theta \sin \frac{\theta}{2} \nabla_v n_2 \right. \\ &\qquad \qquad \qquad \qquad \qquad \quad\left. +lv_1n_2\cos l \theta \sin \frac{\theta}{2} \nabla_v\theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_v \theta +n_1\cos l\theta \cos \frac{\theta}{2} \nabla_v v_2 \right.\\ &\qquad \qquad \qquad \qquad \qquad \quad +\left. v_2\cos l\theta \cos \frac{\theta}{2}\nabla_v n_1 -lv_2n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_v \theta -\frac{1}{2} v_2 n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_v \theta \right. \\ &\qquad \qquad \qquad \qquad \qquad \quad +\left. n_2 \cos l\theta \sin \frac{\theta}{2} \nabla_v v_2+v_2 \cos l\theta \sin \frac{\theta}{2} \nabla_v n_2 -lv_2n_2 \sin l \theta \sin \frac{\theta}{2} \nabla_v \theta \right.\\ &\qquad \qquad \qquad \qquad \qquad \quad \left.+\frac{1}{2} v_2n_2 \cos l\theta \cos \frac{\theta}{2} \nabla_v \theta \right). \end{align*} By \eqref{tb esti},\eqref{e_2},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est}, the above can be further bounded by \begin{align*} \vert \nabla_v I_2 \vert &\lesssim \frac{1}{\vert v \vert^2 \sin^2 \frac{\theta}{2}} (1+\vert v \vert^2 t^2) \times \frac{1}{\vert v \vert} \times \vert v \vert + \frac{1}{\vert v \vert}(1+\vert v \vert t)\times \frac{1}{\vert v \vert^2 \vert \sin \frac{\theta}{2}\vert} \times \vert v \vert +\frac{1}{\vert v \vert}(1+\vert v \vert t)\times \frac{1}{\vert v \vert \vert \sin \frac{\theta}{2} \vert} (1+\vert v \vert t)\\ &\lesssim \frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2} (1+\vert v \vert^2 t^2) . \end{align*} Hence, $\vert \nabla_{vv} X(0;t,x,v)\vert$ is bounded by \begin{align*} \vert \nabla_{vv} X(0;t,x,v) \vert \lesssim \frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2} (1+\vert v \vert^2 t^2). \end{align*} To get estimate for $\vert \nabla_{xx} V(0;t,x,v)\vert$ and $\vert \nabla_{vx} V(0;t,x,v)\vert$, we now consider $[\nabla_{x} V(0;t,x,v)]_{(1,1)}$: \begin{align*} [\nabla_x V(0;t,x,v)]_{(1,1)} = \frac{2l}{\sin \frac{\theta}{2}} \left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_1n_2 \sin l \theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} + v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right), \end{align*} by Lemma \ref{X,V}. In $[\nabla_x V(0;t,x,v)]_{(1,1)}$, the main terms are \begin{align*} \frac{2l}{\sin \frac{\theta}{2}} v_1n_1 \sin l \theta \cos \frac{\theta}{2} \quad \textrm{and} \quad \frac{2l}{\sin \frac{\theta}{2}} v_2 n_1 \cos l\theta \cos \frac{\theta}{2}, \end{align*} because these terms have the highest singularity order in $[\nabla_x V(0;t,x,v)]_{(1,1)}$. 
Thus, for $\vert \nabla_{xx}V(0;t,x,v) \vert$, we now take the $x$-derivative for main terms: \begin{align*} &\nabla_x \left(\frac{2l}{\sin \frac{\theta}{2}}\left(v_1n_1\sin l \theta \cos \frac{\theta}{2} + v_2n_1 \cos l \theta \cos \frac{\theta}{2}\right)\right)\\ &= -\frac{l\cos \frac{\theta}{2}}{\sin ^2 \frac{\theta}{2}}\nabla_x \theta \left(v_1n_1\sin l \theta \cos \frac{\theta}{2} + v_2n_1 \cos l \theta \cos \frac{\theta}{2}\right)\\ &\quad + \frac{2l}{\sin \frac{\theta}{2}} \left(v_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x n_1 +lv_1 n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_1 n_1 \sin l\theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\ &\qquad \qquad \quad + \left. v_2\cos l \theta \cos \frac{\theta}{2} \nabla_x n_1 -lv_2 n_1 \sin l\theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_x \theta \right)\\ &:=I_3. \end{align*} By \eqref{e_1},\eqref{est der n}, and \eqref{ell est}, $I_3$ can be further bounded by \begin{align*} \vert I_3 \vert \lesssim \frac{\vert v \vert}{\sin^4 \frac{\theta}{2}}(1+\vert v \vert t) + \frac{\vert v \vert }{ \sin^4\frac{\theta}{2}}(1+\vert v \vert^2 t^2)\lesssim \frac{\vert v \vert^5}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4}(1+\vert v \vert^2 t^2), \end{align*} which implies that \begin{align*} \vert \nabla_{xx} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^5}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2t^2). \end{align*} Similarly, we firstly take the $v$-derivative for main terms in $[\nabla_x V(0;t,x,v)]_{(1,1)}$ and then estimate $v$-derivatives. Then, we deduce \begin{align*} \vert \nabla_{vx} V(0;t,x,v) \vert \lesssim \frac{ \vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+ \vert v \vert^2 t^2), \end{align*} where we have used \eqref{e_2},\eqref{est der n}, and \eqref{ell est}. Lastly, it remains to estimate $\vert \nabla_{xv}V(0;t,x,v)\vert$ and $\vert \nabla_{vv} V(0;t,x,v)\vert$. Let us consider the $(1,1)$ component of $\nabla_v V(0;t,x,v)$: \begin{align*} [\nabla_v V(0;t,x,v)]_{(1,1)}&=\cos l\theta -2l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_1 n_2 \sin l \theta \sin \frac{\theta}{2} \right.\\ &\left. \qquad \qquad \qquad \qquad \qquad \qquad \quad + v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right), \end{align*} by Lemma \ref{X,V}. Similar to previous cases, main terms in $[\nabla_v V(0;t,x,v)]_{(1,1)}$ are \begin{align*} -2l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2}\right):=I_4, \end{align*} by the same reason. Taking the $x$-derivative for $I_4$, we get \begin{align*} \nabla_x I_4 &= -2l \left( \frac{\nabla_x t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{t_{\mathbf{b}} \cos \frac{\theta}{2}}{2\sin^2\frac{\theta}{2}} \nabla_x \theta\right)\left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2}\right)\\ &\quad -2l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left(v_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x n_1 + lv_1n_1 \cos l\theta \cos \frac{\theta}{2}\nabla_x \theta -\frac{1}{2} v_1n_1 \sin l \theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\ & \qquad \qquad \qquad \qquad \qquad +\left. 
v_2 \cos l \theta \cos \frac{\theta}{2} \nabla_x n_1 -l v_2n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2 n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_x \theta\right). \end{align*} Using \eqref{tb esti},\eqref{e_1},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est}, one obtains that \begin{align*} \vert \nabla_x I_4 \vert \lesssim \frac{1}{\vert \sin\frac{\theta}{2}\vert } (1+\vert v \vert t)\times \frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}} \times \vert v \vert +\frac{1}{\vert \sin \frac{\theta}{2} \vert}(1+\vert v \vert t) \times \frac{1}{\vert v \vert} \times \frac{\vert v \vert}{ \sin^2 \frac{\theta}{2}}(1+\vert v \vert t)\lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2t^2). \end{align*} Hence, we get estimate for $\vert\nabla_{xv}V(0;t,x,v)\vert$ \begin{align*} \vert \nabla_{xv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2 t^2). \end{align*} Similarly, we take the $v$-derivative to main terms $I_4$ and estimate $\nabla_v I_4$ to get $\vert\nabla_{vv} V(0;t,x,v)\vert$. From \eqref{tb esti},\eqref{e_2},\eqref{est der n}, \eqref{est der t_ell}, and \eqref{ell est}, we derive \begin{align*} \vert \nabla_{vv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert^2} (1+\vert v \vert^2 t^2). \end{align*} \end{proof} \subsection{Proof of Theorem \ref{thm 3}} \begin{proof} [Proof of Theorem \ref{thm 3}] \textit{Step 1} First, we prove $C^{1}$ estimate. Note that it is easy to derive \begin{equation} \label{dt XV} \partial_{t}X(0;t,x,v) = -v^{k},\quad \partial_{tt}X(0;t,x,v) = 0,\quad \partial_{t}V(0;t,x,v) = 0, \quad \partial_{tt}V(0;t,x,v) = 0, \end{equation} where we assumed $t^{k+1} < 0 < t^{k}$ for some integer $k$. For $i\in\{t,x,v\}$, \begin{equation} \label{chain} \nabla_{i}f(t,x,v) = \nabla_{x}f_0 \nabla_{i}X(0;t,x,v) + \nabla_{v}f_0 \nabla_{i} V(0;t,x,v). \end{equation} Hence using Lemma \ref{est der X,V} and \eqref{dt XV}, we obtain \begin{equation*} \begin{split} |\partial_{t}f| &\lesssim \|f_0\|_{C^{1}}|v|, \\ |\nabla_{x}f| &\lesssim \|f_0\|_{C^{1}} \frac{|v|^{2}}{|v\cdot n(x_{\mathbf{b}})|^{2}} \langle v \rangle (1 + |v|t), \\ |\nabla_{v}f| &\lesssim \|f_0\|_{C^{1}} \frac{1}{|v\cdot n(x_{\mathbf{b}})|} \langle v \rangle (1 + |v|t), \\ \end{split} \end{equation*} where $x_{\mathbf{b}} = x_{\mathbf{b}}(x,v)$ and $\langle v \rangle := 1 + |v|$. So we obtain \eqref{C1 bound}. \\ \textit{Step 2} Now we compute second-order estimate. 
For $\nabla_{xx}f$, from \eqref{chain}, Lemma\ref{est der X,V}, and Lemma \ref{2nd est der X,V}, we obtain \begin{equation*} \begin{split} |\nabla_{xx}f| &= |\nabla_{x} \big( \nabla_{x}f_0 \nabla_{x}X(0;t,x,v) + \nabla_{v}f_0 \nabla_{x} V(0;t,x,v) \big)| \\ &\lesssim \|f_0\|_{C^{1}} \big( |\nabla_{xx}X(0) | + |\nabla_{xx}V(0)| \big) + \|f_0\|_{C^{2}} \big( |\nabla_{x}X(0)| + |\nabla_{x}V(0)| \big)^{2} \\ &\lesssim \|f_0\|_{C^{2}} \frac{|v|^{4}}{|v\cdot n(x_{\mathbf{b}})|^{4}} \langle v \rangle^{2} (1 + |v|t)^{2}, \end{split} \end{equation*} \begin{equation*} \begin{split} |\nabla_{vx}f| &= |\nabla_{v} \big( \nabla_{x}f_0 \nabla_{x}X(0;t,x,v) + \nabla_{v}f_0 \nabla_{x} V(0;t,x,v) \big)| \\ &\lesssim \|f_0\|_{C^{1}} \big( |\nabla_{vx}X(0) | + |\nabla_{vx}V(0)| \big) + \|f_0\|_{C^{2}} \big( |\nabla_{x}X(0)| + |\nabla_{x}V(0)| \big)\big( |\nabla_{v}X(0)| + |\nabla_{v}V(0)| \big) \\ &\lesssim \|f_0\|_{C^{2}} \frac{|v|^{2}}{|v\cdot n(x_{\mathbf{b}})|^{3}} \langle v \rangle^{2} (1 + |v|t)^{2}, \end{split} \end{equation*} and \begin{equation*} \begin{split} |\nabla_{vv}f| &= |\nabla_{v} \big( \nabla_{x}f_0 \nabla_{v}X(0;t,x,v) + \nabla_{v}f_0 \nabla_{v} V(0;t,x,v) \big)| \\ &\lesssim \|f_0\|_{C^{1}} \big( |\nabla_{vv}X(0) | + |\nabla_{vv}V(0)| \big) + \|f_0\|_{C^{2}} \big( |\nabla_{v}X(0)| + |\nabla_{v}V(0)| \big)^{2} \\ &\lesssim \|f_0\|_{C^{2}} \frac{1}{|v\cdot n(x_{\mathbf{b}})|^{2}} \langle v \rangle^{2} (1 + |v|t)^{2}, \end{split} \end{equation*} where $|\nabla_{xx, vx, vv}X|$ means $\sup_{i,j,k}|\nabla_{ij}X_{k}(0;t,x,v)|$ for $i,j \in\{ x_{1}, x_{2}, v_{1}, v_{2}\}$ and $k \in \{1,2\}$. (Also similar for $\nabla_{ij}V$.) Combining above three estimates, we obtain \eqref{C2 bound}. Second derivative estimates which contain at least one $\partial_{t}$ also yield the same upper bound from \eqref{dt XV}. We omit the details. \\ \end{proof} \noindent{\bf Acknowledgments.} The authors thank Haitao Wang for suggestion and fruitful discussion. Their research is supported by the National Research Foundation of Korea(NRF) grant funded by the Korean government(MSIT)(No. NRF-2019R1C1C1010915). DL is also supported by the POSCO Science Fellowship of POSCO TJ Park Foundation. The authors sincerely appreciate the anonymous referees for their valuable comments and suggestions on the paper. \bibliographystyle{plain}
\section{Implementation Details} We provide more implementation details on how each probing experiment is conducted, as well as the software libraries that it depends on. \paragraph{Software Library} For all the probing experiments, we rely on an NLP toolkit called JIANT \cite{phang2020jiant}. JIANT is a framework that supports both multi-task learning and transfer learning. We followed the edge probing experiment guide provided by the library and expanded the edge probing tasks with our own tasks. For classifiers, we used the same implementations of the linear and MLP classifiers as the original edge probing paper \cite{edge_probing_paper}. Our pre-trained language models are from huggingface's transformers library \cite{wolf2020huggingfaces}. \paragraph{Hyperparameters} We briefly describe our selection process for the key hyperparameters of all the probing tasks. For the learning rate, we follow \citet{edge_probing_paper}'s practice, where 0.0001 is used for training. We performed an empirical selection of the batch size and the number of epochs, and set the batch size to 4 and the number of epochs to 10. We observed that each probing classifier requires enough training iterations and training batches to fully extract and interpret the corresponding linguistic information from the pre-trained language model representations. At the same time, we need to avoid exposing the classifier to too many training examples, to ensure that the classifier does not simply learn the task itself. Combining these two criteria, we empirically found that a batch size of 4 with 10 epochs is the best among the combinations we tried. Overall, our selection of the important hyperparameters is summarized below: \begin{itemize} \item Learning rate: $1\times10^{-4}$ \item Batch size: 4 \item Epochs: 10 \end{itemize} \paragraph{Model Freezing and Training} For the probing experiments, we follow the conventional way of probing language models by freezing the parameters of each pre-trained language model encoder. This way, the pre-trained model parameters are not updated through gradient propagation, which prevents the encoder from learning the task. We train each probing classifier by minimizing a loss function through gradient descent. For binary classification we use the binary cross-entropy loss. For multi-class classification, we use the softmax loss, which enforces an exclusivity constraint. For gradient descent, we use the Adam optimizer \cite{Kingma2015AdamAM}, following \citet{edge_probing_paper}'s choice. \section{Dataset Construction} In this section we provide more details on the construction of some datasets. \paragraph{SemGraph} To construct the dataset used for the SemGraph task, we first generated the universal dependencies of the sentences. Then we developed an algorithm that draws edges between two words and identifies these words' roles (i.e., concept, relation, or modifier) based on the type of dependency between them. Finally, we used role-labeling tools to examine the roles of the vertices and refine our data. \paragraph{Contradiction Signature} Leveraging the fact that a pair of sentences has a similar context, we automatically marked the changing parts and then manually verified that the spans indeed contradict each other. \paragraph{Lexical (SA-Lex)} Since the two sentences in a pair of the Breaking-NLI dataset have very similar contexts, the parts that change constitute our target spans. We automatically marked such places and verified that a lexical semantic change happens from the first span to the second.
Then we randomly picked one word from each sentence to form an unaligned pair, after checking that the two words are not related. \paragraph{Anaphora (SA-AP)} Before manually annotating which entity the pronoun refers to, we utilized universal dependencies to decide the locations of the potential entities and the target pronoun. \paragraph{Sentiment (SA-ST)} We first concatenated a sample with a speech fragment from another randomly selected sample. Next, we automatically drew connections (i.e., aligned or contradict) from the original speech fragment to the hypothesis based on the relation of this pair of sentences given in the original dataset. Finally, we manually checked whether the added fragment could be seen as related to the hypothesis. If so, we assigned a specific label (aligned or contradict) to indicate their relation instead of classifying them as unaligned. To construct a vertex probing dataset, we assigned the \textbf{Unaligned} label to all the words outside of the speech fragments, and labeled the remaining words based on their relations with the hypothesis. \paragraph{Relational Knowledge (SA-RK)} To construct this dataset, we first located a span that contains the named entities appearing in the hypothesis. Next, we manually adjusted the spans to properly include the information needed to reveal the relation between these entities. \section{Introduction} \label{sec:intro} Pre-trained language models have replaced traditional symbolic-based natural language processing systems on a variety of language understanding tasks, mainly because symbolic-based NLP systems often rely on linguistic properties as features. Those features are hard to acquire. Many types of linguistic information are either hand-written rules or background knowledge extracted from traditional knowledge bases, which makes symbolic-based systems hard to scale up to large benchmarks such as GLUE \cite{wang-etal-2018-glue}. On the other hand, many recent probing studies have revealed that sentence representations of pre-trained language models encode a large amount of linguistic information and background knowledge \cite{edge_probing_paper, petroni-etal-2019-language, Bouraoui2020InducingRK}. However, it remains unknown if these representations also encode the implicit linguistic information for inference that is crucial to symbolic inference systems. \begin{figure}[tb!] \centering \includegraphics[width=6.3cm]{figs/inferencekg_probe.png} \caption{\small Given pre-trained language models, the probing classifier extracts linguistic information for a given probing task. The amount of information is measured by the probing accuracy and the information gain, compared with baseline word embeddings.} \label{fig:probe} \end{figure} In this paper, we propose an inference information probing framework (Figure \ref{fig:probe}). We define a set of probing tasks that focus on different types of linguistic information required by symbolic systems. In particular, we cover linguistic information for simple and complex semantic phenomena. Simple semantic phenomena often rely on partial or no context and do not require advanced linguistic skills like contextual understanding and reasoning. Our simple phenomena include word-to-word semantic relations, lexical semantics, and contradiction signatures. Complex phenomena depend on multiple types of reasoning skills, such as reasoning over event context, monotonicity, coreference, and commonsense knowledge. For complex phenomena, we probe sentiment, relational knowledge, anaphora resolution, and monotonicity reasoning.
We are interested in answering two questions: (1) Do pre-trained language models encode linguistic information essential to symbolic inference systems? (2) Do pre-trained language models acquire new linguistic information for inference during the fine-tuning process for the NLI task? For each task, we conducted probing experiments on multiple contextualized language models and compared the results to several strong baselines. Our analysis shows that language models encode diverse types of linguistic information for inference. In particular, they encode more information on simple semantic phenomena than on complex semantic phenomena. Our label-wise qualitative analysis revealed that the amount of information encoded by language models for each task differs across labels, which justifies our previous findings. Moreover, we found that pre-trained language models can obtain some types of the missing linguistic information through fine-tuning for the NLI task. Overall, our findings show that pre-trained language models can be potential linguistic knowledge bases supporting symbolic inference systems. \paragraph{Contributions} Our contributions are as follows: \begin{enumerate} \item Our work expands on prior probing studies by studying a wider range of linguistic information, including simple and complex semantic phenomena. \item Our experiments allow classifier expressiveness to be analyzed in a more complex setting, covering syntactic and semantic linguistic properties beyond prior work. \item Our study provides insights into what types of new linguistic information pre-trained language models obtain during fine-tuning on large NLI datasets. This contributes to the interpretability of NLI models. \end{enumerate} \section{Related Work} Recent studies have reported the existence of linguistic properties encoded in the self-attention weights and contextualized representations of language models. These linguistic properties include syntactic structure, semantic knowledge, and some world knowledge \cite{rogers-etal-2020-primer}. Several studies train and evaluate a probing classifier on top of different language models' contextualized representations to explore the existence of information about linguistic properties. These studies have shown that pre-trained language models encode some level of syntactic and semantic knowledge. \citet{hewitt-manning-2019-structural} recovered syntactic dependencies from BERT's embeddings by learning transformation matrices. \citet{edge_probing_paper}, which is more directly related to our work, proposed the edge probing framework and found that contextualized embeddings encode information about named entity types, relations, semantic roles, and proto-roles, based on the high accuracy of the probing classifier. Some probing studies focus on inducing factual knowledge captured in pre-trained language models. A majority of these studies rely on the Masked Language Modeling (MLM) component of the model, which can be adapted to induce knowledge easily, since the model only needs to fill in the blanks. \citet{petroni-etal-2019-language} showed that pre-trained BERT encodes relational knowledge competitive with that accessed from knowledge bases using traditional NLP methods. They also found that BERT has a strong ability to recall factual knowledge prior to any fine-tuning, making it a good candidate for unsupervised open-domain QA systems. \citet{Bouraoui2020InducingRK} proposed a method to induce relations from pre-trained language models.
They first found potential sentences that express a relation in a large corpus. A subset of sentences was used as templates. They then fine-tuned a language model to predict whether a given word pair form some relation. They found strong evidence that relations can be obtained from language models. Compared to existing work, we extend common syntactic and semantic tasks to a range of tasks that focus on more complex linguistic phenomenons. Some of our tasks, such as semantic graph construction and monotonicity polarization, require both syntactic and semantic information. Probing for more complex linguistic tasks allows us to diagnose the particular advantages of language models over conventional NLP systems. It also allows us to study the expressiveness of probing classifiers in a more complex setting beyond syntactic tasks. Moreover, our experiments on fine-tuned NLI language models provides insights into the type of linguistic information they capture through fine-tuning. \section{Probing Methodology} \subsection{Edge Probing and Vertex Probing} Edge probing is a simple and useful probing framework proposed by \citet{edge_probing_paper}. It can provide a uniform set of metrics and architectures across diverse task types. Formally, a sentence is defined as a list of tokens [$t_0, t_1, t_2, ..., t_n$] and an edge target as \{$s_1, s_2, \mathcal{L}$\} where $s_1$ and $s_2$ are two end exclusive spans with $s_1$ = \{$i^{s1}, j^{s1}$) and $s_2$ = \{$i^{s2}, j^{s2}$). $\mathcal{L}$ is a label assigned to the pair of spans which the classifier needs to accurately predict. The label set for $\mathcal{L}$ is different across tasks including both binary labels and multi labels. Each sentence [$t_0, t_1, t_2, ..., t_n$] is encoded by a language model into contextualized sentence representation [$e_0, e_1, e_2, ..., e_n$]. A projection layer concatenated with a self-attention pooling operator will be applied to the representation to extract span representations according to the index position of two spans $s_1$ = \{$i^{s1}, j^{s1}$) and $s_2$ = \{$i^{s2}, j^{s2}$). As \citet{edge_probing_paper} mentioned, the pooling is fixed-length and only operates within the bounds of a span to ensure that the classifier can only access information of the rest of the sentence from the contextual sentence representation. The two span representations are concatenated and passed to the classifier to predict the label. To ensure we only probe a pre-trained language model without modifying its parameters, we freeze its parameters to not allow for gradient updates. The vertex probing framework has the same settings and formulations as the edge probing framework, except that vertex probing operates on every token in a sentence. The classifier receives only a single span representation as input. Formally, the definition is very similar to that of the sequence tagging task. With a list of tokens [$t_0, t_1, t_2, ..., t_n$], we define each token as a single span target $s$ = \{($i^{s}, j^{s}$), $\mathcal{L}$\}. The vertex probing is used to predict which words belong to a category in the label set. \input{tables/data_example} \subsection{Classifier Selection} Selecting a good probing classifier is essential to the probing process. We first choose the linear classifier. According to \citet{hewitt-liang-2019-designing}, the linear classifier is less expressive and thus is prevented from memorizing the task. 
However, \citet{pimentel-etal-2020-information} uses probing to estimate the mutual information between a representation-valued and a linguistic-property–valued random variable. They argue that the most optimal probe should be used to minimize the chance of misinterpreting a representation's encoded information, and therefore achieve the optimal estimate of mutual information. To lessen the chance of misinterpretation, we conducted probing using a Multi-layer Perceptron (MLP) classifier with one hidden layer. \subsection{Experiment Setup} To answer both questions 1 and 2 in the introduction, we experiment with five pre-trained language models. We selected BERT-base and BERT-large \cite{devlin-etal-2019-bert}, RoBERTa-base and RoBERTa-large \cite{Liu2019RoBERTaAR}, and DeBERTa \cite{he2021deberta}. All five models can provide contextualized sentence representations and have shown impressive performance on the GLUE \cite{wang-etal-2018-glue} benchmark. Our experiment setup follows three types of evaluation methods to interpret the probing performance. \paragraph{Probing Accuracy} We probe pre-trained language models and the baseline word embeddings using linear and MLP classifiers. Then, we compare their performance to determine if the pre-trained models improve over the baselines. If such improvement is significant, the pre-trained models contain more information about a task than the baseline. Otherwise, they do not contain enough information to benefit a task. We select four uncontextualized word embeddings as our baselines, including random embedding, FastText \cite{joulin2017bag}, Glove \cite{pennington-etal-2014-glove}, and Word2Vec \cite{word2vec2013}. We also conduct a label-wise qualitative analysis for each task to explore if the amount of information is encoded differently across the task-specific labels. Finally, to determine if language models can learn the missing linguistic information for inference, we evaluate NLI models fine-tuned on MultiNLI \cite{williams-etal-2018-broad}, using probing tasks that do not benefit from the pre-trained models. \paragraph{Control Task} \citet{hewitt-liang-2019-designing} argue that accuracy cannot fully validate that a representation encodes linguistic information since a highly expressive classifier could have memorized the task. They proposed the use of control tasks to complement probings. The main idea is to validate if a classifier could predict task outputs independently of a representation's linguistic properties. If the classifier can achieve high accuracy in this setting, the accuracy does not necessarily reflect the properties of the representation. A control task has the same input as the associated linguistic task, but it assigns random labels to the input data. The selectivity, which is the difference between the linguistic task accuracy and the control task accuracy, is used to measure the quality of a probe. A good probe should have high selectivity meaning that it has low control task accuracy but high linguistic task accuracy. \paragraph{Information-theoretic Probing} \citet{pimentel-etal-2020-information} argue that the task of supervised probing is an attempt to measure how much information a neural representation can provide for a task. They operationalized probing as estimating the mutual information between a representation and a probing task. The mutual information from a target representation is compared to the information estimated from a control function’s representation, which serves as a baseline for comparison. 
In our experiments, we use uncontextualized baselines as control functions. We estimate the information gain between a contextualized embedding and a baseline. Information gain measures how much more information about a task a contextualized embedding has over a baseline. In addition, we transform the gain into a percentage measurement to make the results more interpretable. \section{Inference Information Probes} In this section, we introduce a list of edge and vertex probing tasks for probing implicit linguistic information for symbolic inference methods in pre-trained language model representations. To discover potential tasks that can provide essential linguistic information for symbolic inferences, we studied four major logical systems for NLI, all with high accuracy on SICK \cite{Marelli2014ASC}, and several challenge datasets for the NLI task. They include NLI systems based on natural logic \cite{abzianidze-2020-learning}, monotonicity reasoning \cite{hu-etal-2020-monalog, chen-etal-2021-neurallog}, and theorem proving \cite{yanaka-etal-2018-acquisition}. \subsection{Semantic Graph Construction (SemGraph)} This task probes the graph-based abstract meaning representation for sentences, a type of knowledge found effective in symbolic systems for acquiring paraphrase pairs and selecting correct inference steps \cite{yanaka-etal-2018-acquisition, chen-etal-2021-neurallog}. The task is to construct a semantic graph that captures connections between concepts, modifiers, and relations in a sentence. Relations are words that form a connection, including verbs and prepositions. Concepts are arguments connected by a relation such as objects and subjects. Each concept connects to a set of modifiers that attribute to it. An example semantic graph is shown in Table \ref{tab:data_exp}. We define this as an edge probing task and assign a label to a pair of tokens. A label is selected from the label set: concept-to-relation, concept-to-modifier, relation-to-concept, relation-to-modifier, relation-to-relation, modifier-to-relation, modifier-to-concept. To construct the dataset, we use dependency parsing and semantic role labeling tools to identify concepts, modifiers, and relations in a sentence and the connection between them. We selected premises from the SNLI test set as our inputs and split them into training and testing sets. \input{tables/dataset} \subsection{Semantic Alignment (SA)} This set of tasks probes the linguistic information for inference involving semantically aligned phrase or word pairs. These aligned pairs can often serve as an explanation of the entailment gold-label \cite{abzianidze-2020-learning, chen-etal-2021-neurallog}. We cover a wide range of semantic phenomena common in natural language inference including lexical (SA-Lex), anaphora (SA-AP), sentiment (SA-ST), and relational knowledge (SA-RK). Table \ref{tab:data_exp} lists each type of semantic alignment with associated examples. Probing data are first collected from multiple challenge datasets for NLU and then are manually annotated for the edge and vertex probing framework. For the sentiment task, we noticed that the aligned phrases are always part of a person's saying, leading a model to solve the task quickly by memorization. To avoid this, we concatenated each premise with speech fragments from another randomly selected premise to build a more complex premise. For instance, in the example in Table \ref{tab:data_exp}, \textit{I found this product to be way too big} is a speech fragment from another premise sample. 
\input{tables/instance_accuracy} We formulate each task as either an edge probing or a vertex probing task during annotation. For edge probing tasks, we assign either \textbf{Aligned} or \textbf{Unaligned} to a pair of spans. For example, in the Lexical example in Table \ref{tab:data_exp}, ($s^2$: [\textit{saxophone}], $s^3$: [\textit{instrument}]) are aligned, and ($s^1$: [\textit{man}], $s^3$: [\textit{instrument}]) are unaligned. In a vertex probing task, we label a token as either \textbf{Aligned$_1$} (the token belongs to the first phrase of the aligned pair), \textbf{Aligned$_2$} (the token belongs to the second phrase of the aligned pair), or \textbf{Unaligned} (the token is not in any aligned phrases). For example, in the relational-knowledge example in Table \ref{tab:data_exp}, \{\textit{Dirk}, \textit{Nowitski}, \textit{is}, \textit{a}, \textit{current}, \textit{NBA}, \textit{star}\} are tokens in the first phrase of the aligned pair, \{\textit{Dirk}, \textit{Nowitski}, \textit{plays}, \textit{in}, \textit{the}, \textit{NBA}\} are tokens in the second phrase of the aligned pair, and \{\textit{Dallas}, \textit{Mavericks}, \textit{as}, \textit{an}, \textit{all-purpose}, \textit{forward}\} are unaligned tokens. We apply edge probing to tasks with single word spans (lexical and anaphora) and vertex probing to tasks involving multi-word spans (sentiment and relational knowledge). In general, vertex probing adds more distractors into the task to increase the difficulty level, ensuring that the models are using the correct types of reasoning when making a decision. \subsection{Contradiction Signature (ContraSig)} Being able to reason about contradictions between a pair of sentences is a fundamental requirement of Natural Language Inference. To determine a contradictory relationship, systems often rely on contradiction signatures, or possible rationales of contradiction in the sentence pair. Contradiction signature detection tests for both syntax and fundamental semantics. We define this task as vertex probing and manually annotated one dataset for detecting contradiction in text \cite{Marelli2014ASC} by labeling tokens in the first phrase of a contradiction signature as \textbf{Contra-sig$_1$}, tokens in the second phrase of a contradiction signature as \textbf{Contra-sig$_2$}, and irrelevant tokens as \textbf{None}. Table \ref{tab:data_exp} shows an example, with \{\textit{have}, \textit{each}, \textit{played}, \textit{twice}\} as irrelevant tokens, \{\textit{defeats}, \textit{Germany}\} as tokens in the first phrase of the contradiction signature, and \{\textit{haven't}, \textit{beaten}, \textit{anybody}, \textit{yet}\} as tokens in the second phrase of the contradiction signature. \subsection{Monotonicity Polarization (Monotonicity)} Monotonicity information supports word-replacement-based logical inferences that NLI systems can use. For each token, we assign a monotonicity mark that is either \textbf{Monotone} ($\uparrow$), \textbf{Antitone} ($\downarrow$), or \textbf{None} ($=$). To construct our dataset, we annotated monotonicity information on all sentences in the MED dataset \cite{yanaka-etal-2019-neural} as training examples using a monotonicity annotation tool called Udep2Mono \cite{chen-gao-2021-IWCS}. For testing, we extended a challenging gold-label monotonicity dataset used by Udep2Mono that includes multiple levels of monotonicity changes from different quantifiers and logical operators. For each sentence, we replicate ten sentences following the same syntactic format. 
Vertex probing is used for monotonicity polarization because a model must predict every token's monotonicity information. \section{Experiment Results and Findings} \subsection{Do LMs encode information for inference?} Here we evaluate the degree to which pre-trained language models encode implicit information of linguistic properties that is essential to logical inference systems. We conducted probes on the five pre-trained language models. Table \ref{tab:instance} shows results from the probing experiments. \paragraph{Semantic Graph Construction} With the linear classifier, all language models achieve high probing accuracy that outperforms the baselines. Together, with high selectivity, this is strong evidence that the information in these models' representations allows the classifier to recover the connectives of concepts, relations, and modifiers. Interestingly, the MLP classifier did not improve on the linear classifier significantly, suggesting that the information is easy to interpret. The performance here is consistent with language models' good performance on dependency parsing, semantic role labeling, and part-of-speech tagging \cite{edge_probing_paper} which are related to semantic graph construction. \paragraph{Semantic Alignment} We observe that on lexical and anaphora based alignments (SA-Lex, SA-AP), language models show high probing accuracy that improves over the baselines significantly when using the MLP Classifier. This is evidence that language models encode linguistic information of these two types of semantic alignment since they also show high selectivity. Language models do not improve over the baselines when using the linear classifier, suggesting that these types of linguistic information might be hard to interpret. Language models only improved trivially over the baselines for alignments involving sentiment and relational knowledge (SA-ST, SA-RK). The insignificant improvement suggests that models weakly capture the information on these complex semantic alignment. Overall, the trend is that pre-trained models tend to show poor performance on complex semantic alignment that requires understanding and reasoning. This behavior is consistent with \citet{edge_probing_paper}'s finding that language models only encode a small amount of information on semantic reasoning. \paragraph{Contradiction Signature and Monotonicity} For the task on contradiction signature detection, all language models show poor performance with linear classifier except DeBERTa, which has relatively high accuracy (78.5\%), validating that information on contradiction signatures is more accessible from DeBERTa than the other four models. After using the MLP classifier, all models' accuracy increased significantly (above 90\%) while maintaining very high selectivity. We attribute this partly to the fact that many contradictions are simple morphological negation and antonyms, which can be largely detected by using lexical semantics and syntax. The high accuracy is thus strong evidence that language models do encode a good amount of information on syntax and lexical semantics. For the monotonicity polarization task, language models show low accuracy with both the linear classifier and the MLP classifier. This suggests that these language models may not encode much monotonicity information that can support the polarization. Again, the results here show that pre-trained models encode more information on simple semantic phenomena (contradiction signature) than complex ones (monotonicity). 
\subsection{Label-wise Qualitative Analysis} To further understand the amount of linguistic information that pre-trained language models capture, we analyze label-wise probing quality for each task. Label-wise accuracy per task are shown in figure \ref{fig:acc_label}. We first observe that on some tasks (SA-Lex, SA-AP, ContraSig, SemGraph), pre-trained language models show high and balanced accuracy across labels. These behaviors are strong evidence that these models encode rich information on simple semantic phenomena. On the semantic graph construction task, the accuracy distribution is similar across models. The relatively low accuracy on modifier-to-relation (m-r) and modifier-to-concept (m-c) show the incompleteness of information in language models that support the linking of modifiers to words being modified. Language models seem to encode information for linking concepts to corresponding words since the accuracy is consistently high. Note that although anaphora (SA-AP) is a complex phenomena, models show decent performance which validates the finding from the previous experiment. For other complex semantic alignment (SA-ST, SA-RK), the heatmaps show highly imbalanced label-wise accuracy. The models have higher accuracy on words in the hypothesis of an aligned pair than words in the premise, which has a much more complicated context. Since vertex-probing needs to locate phrases in a premise contributing to an entailment, the low accuracy on predicting span locations in a premise suggests that language models only encode very little linguistic information on complex semantic phenomena. For monotonicity polarization, the accuracy on each label is very different. Across language models, the accuracy on monotone polarity is higher than that on antitone and neutral polarity. This is consistent with other findings from other probing studies on monotonicity reasoning. \cite{yanaka-etal-2019-neural, geiger-etal-2020-neural}. The results from this analysis validates our main finding from the previous experiment: models tend to encode less information on complex semantic phenomena. \begin{figure}[t!] \centering \includegraphics[width=7cm]{plots/labelwise.drawio.png} \caption{\small Plots here shows label-wise accuracy across models for each inference information probing task. Here LM1-5 stands for the five language models in order (BERT-base, BERT-large, RoBERTa-base, RoBERTa-large, DeBERTa).} \label{fig:acc_label} \end{figure} \subsection{Discussions} \paragraph{Information-Theoretic Probing} We conducted additional experiments on information-theoretical probing to validate our findings based on probing accuracy. Recall that here we want to estimate the information gain between a pre-trained representation and a baseline representation. We followed \citet{pimentel-etal-2020-information}'s practice by using a more powerful probing classifier (MLP). We compared each language model to the best-performed baseline embedding. As table \ref{tab:information_gain} shows, pre-trained language models on average encode more than 50\% more information on the eight probing tasks. Overall, pre-trained language models encode more information than baseline embedding consistently across all tasks. Among these, the highest information gain of language models is on lexical alignment (more than 100\% of increase). This is surprising since baseline word embeddings are a representation of word semantics. We hypothesize that this is due to the proximity of words that contradicts each other in the embedding space. 
Based on the results, we conclude that pre-trained language models encode significantly more information on linguistic information of logical inference than conventional word embeddings. \paragraph{Classifier Expressiveness} Some of our findings contradict several statements made by \citet{hewitt-liang-2019-designing} regarding classifier expressiveness. First, they claim that one should choose a less expressive classifier over a highly expressive one (linear over MLP) since the prior one has higher selectivity. However, based on the accuracy, we observe that the linear classifier has worse performance for semantically-oriented tasks than the MLP classifier. We also found that the MLP classifier can achieve similar selectivity as a linear classifier for these tasks while achieving higher accuracy. These findings suggest that linear classifiers could misinterpret semantic information from a representation, which supports \citet{pimentel-etal-2020-information}'s arguments on using a more powerful classifier. Secondly, they claim that a probing classifier with sufficient expressiveness can learn any task on top of a lossless representation with enough training examples. However, we found that even high expressive classifiers like MLP fail to perform well on tasks with monotonicity and higher-level semantic reasoning. \input{tables/information_gain} \input{tables/finetune} \paragraph{Can LMs learn missing information?} Here we evaluate whether pre-trained language models can obtain linguistic information for inference, missing from their pre-trained representations, through fine-tuning for the NLI task. We select a version fine-tuned on the MultiNLI dataset for each language model. We probed these fine-tuned models for three tasks that did not benefit from the pre-trained language models. Our probing results are shown in Table \ref{tab:finetune}, and we only record probings with the best performance here. We observe that all language models' fine-tuned representations improve over their pre-trained representations significantly on the sentiment (SA-ST) and relational-knowledge-based (SA-RK) semantic alignment. In contrast, they do not improve the performance on monotonicity polarization (Monotonicity). The results show that language models can capture linguistic information on some types of semantic reasoning but not on monotonicity. Possible explanations are that these models could not obtain monotonicity information during fine-tuning, or the training data of MultiNLI does not contain enough examples about monotonicity. \section{Conclusions and Future Work} We presented a systematic study on determining if pre-trained language models encode implicit linguistic information essential to symbolic inference methods. For each probing task, we constructed associating datasets. We then conducted probings on pre-trained language models using both linear and MLP classifiers for each task. In general, we first found that baseline word embeddings do not contain much linguistic information for inference and contextualized representations of language models encode some levels of linguistic information. However, they encode more information on simple semantic phenomena than on complex phenomena. Moreover, we found that linear classifiers can predict correctly under syntactical information but often misinterpret information on semantics resulting in low classification accuracy. 
\section{Introduction} \label{sec:intro} Pre-trained language models have replaced traditional symbolic-based natural language processing systems on a variety of language understanding tasks, mainly because symbolic-based NLP systems often rely on linguistic properties as features. Those features are hard to acquire. Many types of linguistic information are either hand-written rules or background knowledge extracted from traditional knowledge bases, which makes symbolic-based systems hard to scale up on large benchmarks such as GLUE \cite{wang-etal-2018-glue}. On the other hand, many recent probing studies have revealed that sentence representations of pre-trained language models encode a large amount of linguistic information and background knowledge \cite{edge_probing_paper, petroni-etal-2019-language, Bouraoui2020InducingRK}. However, it remains unknown if these representations also encode implicit linguistic information for inference crucial to symbolic inference systems. \begin{figure}[tb!] \centering \includegraphics[width=6.3cm]{figs/inferencekg_probe.png} \caption{\small Given pre-trained language models, the probing classifier extracts linguistic information for a given probing task. The amount of information is measured by the probing accuracy and the information gain, compared with baseline word embeddings.} \label{fig:probe} \end{figure} In this paper, we propose an inference information probing framework (Figure \ref{fig:probe}). We define a set of probing tasks that focus on different types of linguistic information required by symbolic systems. In particular, we cover linguistic information for simple and complex semantic phenomena. Simple semantic phenomena often rely on partial or no context and do not require advanced linguistic skills such as contextual understanding and reasoning. 
Our simple phenomena include word-to-word semantic relations, lexical semantics, and contradiction signatures. Complex phenomena depends on multiple types of reasoning skills like reasoning on event context, monotonicity, coreference, and commonsense knowledge. For complex phenomena, we probe sentiment, relational knowledge, anaphora resolution, and monotonicity reasoning. We are interested in answering two questions: (1) Do pre-trained language models encode linguistic information essential to symbolic inference systems? (2) Do pre-trained language models acquire new linguistic information for inference during the fine-tuning process for the NLI task? For each task, we conducted probing experiments on multiple contextualized language models and compared results to several strong baselines. Our analysis shows that language models encode diverse types of linguistic information for inference. In particular, they encode more information on simple semantic phenomena than complex semantic phenomena. Our label-wise qualitative analysis revealed that the amount of information encoded by language models for each task is different across labels which justifies our previous findings. Moreover, we found that pre-trained language models can obtain some types of the missing linguistic information through fine-tuning for the NLI task. Overall, our findings show that pre-trained language models can be potential linguistic knowledge bases supporting symbolic inference systems. \paragraph{Contributions} Our contributions are as follows: \begin{enumerate} \item Our work expands on prior probing studies by studying a wider range of linguistic information, including simple and complex semantic phenomena. \item Our experiments allow classifier expressiveness to be analyzed in a more complex setting covering syntactic and semantic linguistic properties beyond prior works. \item Our study provides insights into what types of new linguistic information pre-trained language models obtain during fine-tuning on large NLI datasets. This contributes to the interpretability of NLI models. \end{enumerate} \section{Related Work} Recent studies have reported the existence of linguistic properties encoded in the self-attention weights of language models' contextualized representations. These linguistic properties include syntactic structure, semantic knowledge, and some world knowledge \cite{rogers-etal-2020-primer}. Several studies train and evaluate a probing classifier on top of different language models' contextualized representations to explore the existence of information about linguistic properties. These studies have shown that pre-trained language models encode some levels of syntactic and semantic knowledge. \citet{hewitt-manning-2019-structural} recovered syntactic dependencies from BERT's embeddings by learning transformation matrices. \citet{edge_probing_paper}, which is more directly related to our work, proposed the edge probing framework and found that contextualized embeddings encode information about named entity types, relations, semantic roles and proto roles based on the high accuracy of the probing classifier. Some probing studies focus on inducing factual knowledge captured in pre-trained language models. A majority of the studies rely on the Masked Language Modeling (MLM) component of the model which can be adapted to induce knowledge easily, since the model only needs to fill in the blanks. 
\citet{petroni-etal-2019-language} showed that pre-trained BERT encodes relational knowledge competitive with knowledge accessed from knowledge bases using traditional NLP methods. They also found that BERT has a strong ability to recall factual knowledge prior to any fine-tuning, making it a good candidate for unsupervised open-domain QA systems. \citet{Bouraoui2020InducingRK} proposed a method to induce relations from pre-trained language models. They first found potential sentences that express a relation in a large corpus. A subset of sentences was used as templates. They then fine-tuned a language model to predict whether a given word pair forms some relation. They found strong evidence that relations can be obtained from language models. Compared to existing work, we extend common syntactic and semantic tasks to a range of tasks that focus on more complex linguistic phenomena. Some of our tasks, such as semantic graph construction and monotonicity polarization, require both syntactic and semantic information. Probing for more complex linguistic tasks allows us to diagnose the particular advantages of language models over conventional NLP systems. It also allows us to study the expressiveness of probing classifiers in a more complex setting beyond syntactic tasks. Moreover, our experiments on fine-tuned NLI language models provide insights into the types of linguistic information they capture through fine-tuning. \section{Probing Methodology} \subsection{Edge Probing and Vertex Probing} Edge probing is a simple and useful probing framework proposed by \citet{edge_probing_paper}. It provides a uniform set of metrics and architectures across diverse task types. Formally, a sentence is defined as a list of tokens [$t_0, t_1, t_2, ..., t_n$] and an edge target as \{$s_1, s_2, \mathcal{L}$\}, where $s_1$ and $s_2$ are two end-exclusive spans with $s_1$ = [$i^{s1}, j^{s1}$) and $s_2$ = [$i^{s2}, j^{s2}$). $\mathcal{L}$ is a label assigned to the pair of spans, which the classifier needs to accurately predict. The label set for $\mathcal{L}$ differs across tasks and includes both binary and multi-class labels. Each sentence [$t_0, t_1, t_2, ..., t_n$] is encoded by a language model into a contextualized sentence representation [$e_0, e_1, e_2, ..., e_n$]. A projection layer followed by a self-attention pooling operator is applied to the representation to extract span representations according to the index positions of the two spans $s_1$ = [$i^{s1}, j^{s1}$) and $s_2$ = [$i^{s2}, j^{s2}$). As \citet{edge_probing_paper} mention, the pooling is fixed-length and only operates within the bounds of a span, so that the classifier can access information about the rest of the sentence only through the contextualized sentence representation. The two span representations are concatenated and passed to the classifier to predict the label. To ensure we only probe a pre-trained language model without modifying its parameters, we freeze its parameters so that no gradient updates are applied. The vertex probing framework has the same settings and formulation as the edge probing framework, except that vertex probing operates on every token in a sentence. The classifier receives only a single span representation as input. Formally, the definition is very similar to that of a sequence tagging task. With a list of tokens [$t_0, t_1, t_2, ..., t_n$], we define each token as a single span target $s$ = \{[$i^{s}, j^{s}$), $\mathcal{L}$\}. 
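To make the edge probing architecture described above more concrete, the following is a minimal PyTorch sketch, not our exact JIANT-based implementation: contextualized token vectors from a frozen encoder are projected, attention-pooled within each span, concatenated, and classified. The dimensions and module names are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of an edge-probing span-pair classifier (illustrative only).
import torch
import torch.nn as nn

class SpanPool(nn.Module):
    """Project token vectors, then attention-pool within a [start, end) span."""
    def __init__(self, enc_dim, proj_dim):
        super().__init__()
        self.proj = nn.Linear(enc_dim, proj_dim)
        self.score = nn.Linear(proj_dim, 1)  # self-attention scores

    def forward(self, token_reprs, start, end):
        span = self.proj(token_reprs[start:end])          # (len, proj_dim)
        weights = torch.softmax(self.score(span), dim=0)  # (len, 1)
        return (weights * span).sum(dim=0)                # (proj_dim,)

class EdgeProbe(nn.Module):
    """Concatenate two pooled span vectors and classify the pair's label."""
    def __init__(self, enc_dim, proj_dim, num_labels, hidden=256):
        super().__init__()
        self.pool = SpanPool(enc_dim, proj_dim)
        self.clf = nn.Sequential(            # MLP probe with one hidden layer
            nn.Linear(2 * proj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_labels))

    def forward(self, token_reprs, span1, span2):
        s1 = self.pool(token_reprs, *span1)
        s2 = self.pool(token_reprs, *span2)
        return self.clf(torch.cat([s1, s2], dim=-1))      # label logits

# Example: 12 tokens from a 768-dim encoder, spans [2, 4) and [7, 8).
probe = EdgeProbe(enc_dim=768, proj_dim=256, num_labels=7)
logits = probe(torch.randn(12, 768), (2, 4), (7, 8))
\end{verbatim}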
The vertex probing is used to predict which words belong to a category in the label set. \input{tables/data_example} \subsection{Classifier Selection} Selecting a good probing classifier is essential to the probing process. We first choose the linear classifier. According to \citet{hewitt-liang-2019-designing}, the linear classifier is less expressive and thus is prevented from memorizing the task. However, \citet{pimentel-etal-2020-information} uses probing to estimate the mutual information between a representation-valued and a linguistic-property–valued random variable. They argue that the most optimal probe should be used to minimize the chance of misinterpreting a representation's encoded information, and therefore achieve the optimal estimate of mutual information. To lessen the chance of misinterpretation, we conducted probing using a Multi-layer Perceptron (MLP) classifier with one hidden layer. \subsection{Experiment Setup} To answer both questions 1 and 2 in the introduction, we experiment with five pre-trained language models. We selected BERT-base and BERT-large \cite{devlin-etal-2019-bert}, RoBERTa-base and RoBERTa-large \cite{Liu2019RoBERTaAR}, and DeBERTa \cite{he2021deberta}. All five models can provide contextualized sentence representations and have shown impressive performance on the GLUE \cite{wang-etal-2018-glue} benchmark. Our experiment setup follows three types of evaluation methods to interpret the probing performance. \paragraph{Probing Accuracy} We probe pre-trained language models and the baseline word embeddings using linear and MLP classifiers. Then, we compare their performance to determine if the pre-trained models improve over the baselines. If such improvement is significant, the pre-trained models contain more information about a task than the baseline. Otherwise, they do not contain enough information to benefit a task. We select four uncontextualized word embeddings as our baselines, including random embedding, FastText \cite{joulin2017bag}, Glove \cite{pennington-etal-2014-glove}, and Word2Vec \cite{word2vec2013}. We also conduct a label-wise qualitative analysis for each task to explore if the amount of information is encoded differently across the task-specific labels. Finally, to determine if language models can learn the missing linguistic information for inference, we evaluate NLI models fine-tuned on MultiNLI \cite{williams-etal-2018-broad}, using probing tasks that do not benefit from the pre-trained models. \paragraph{Control Task} \citet{hewitt-liang-2019-designing} argue that accuracy cannot fully validate that a representation encodes linguistic information since a highly expressive classifier could have memorized the task. They proposed the use of control tasks to complement probings. The main idea is to validate if a classifier could predict task outputs independently of a representation's linguistic properties. If the classifier can achieve high accuracy in this setting, the accuracy does not necessarily reflect the properties of the representation. A control task has the same input as the associated linguistic task, but it assigns random labels to the input data. The selectivity, which is the difference between the linguistic task accuracy and the control task accuracy, is used to measure the quality of a probe. A good probe should have high selectivity meaning that it has low control task accuracy but high linguistic task accuracy. 
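As a concrete reading of the control-task setup above, the sketch below shows how selectivity is computed: the same probe is trained once on the real task labels and once on randomly assigned control labels, and selectivity is the difference between the two accuracies. The accuracy values here are placeholders, not reported results.
\begin{verbatim}
# Illustrative sketch of control-task construction and selectivity.
import random

def make_control_labels(examples, label_set, seed=0):
    """Assign each example a random label from the task's label set."""
    rng = random.Random(seed)
    return [rng.choice(label_set) for _ in examples]

def selectivity(linguistic_task_acc, control_task_acc):
    """High selectivity = high task accuracy but low control accuracy."""
    return linguistic_task_acc - control_task_acc

examples = ["pair_1", "pair_2", "pair_3"]
control = make_control_labels(examples, ["Aligned", "Unaligned"])
print(selectivity(linguistic_task_acc=0.91, control_task_acc=0.55))  # 0.36
\end{verbatim}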
\paragraph{Information-theoretic Probing} \citet{pimentel-etal-2020-information} argue that the task of supervised probing is an attempt to measure how much information a neural representation can provide for a task. They operationalized probing as estimating the mutual information between a representation and a probing task. The mutual information from a target representation is compared to the information estimated from a control function’s representation, which serves as a baseline for comparison. In our experiments, we use uncontextualized baselines as control functions. We estimate the information gain between a contextualized embedding and a baseline. Information gain measures how much more information about a task a contextualized embedding has over a baseline. In addition, we transform the gain into a percentage measurement to make the results more interpretable. \section{Inference Information Probes} In this section, we introduce a list of edge and vertex probing tasks for probing implicit linguistic information for symbolic inference methods in pre-trained language model representations. To discover potential tasks that can provide essential linguistic information for symbolic inferences, we studied four major logical systems for NLI, all with high accuracy on SICK \cite{Marelli2014ASC}, and several challenge datasets for the NLI task. They include NLI systems based on natural logic \cite{abzianidze-2020-learning}, monotonicity reasoning \cite{hu-etal-2020-monalog, chen-etal-2021-neurallog}, and theorem proving \cite{yanaka-etal-2018-acquisition}. \subsection{Semantic Graph Construction (SemGraph)} This task probes the graph-based abstract meaning representation for sentences, a type of knowledge found effective in symbolic systems for acquiring paraphrase pairs and selecting correct inference steps \cite{yanaka-etal-2018-acquisition, chen-etal-2021-neurallog}. The task is to construct a semantic graph that captures connections between concepts, modifiers, and relations in a sentence. Relations are words that form a connection, including verbs and prepositions. Concepts are arguments connected by a relation such as objects and subjects. Each concept connects to a set of modifiers that attribute to it. An example semantic graph is shown in Table \ref{tab:data_exp}. We define this as an edge probing task and assign a label to a pair of tokens. A label is selected from the label set: concept-to-relation, concept-to-modifier, relation-to-concept, relation-to-modifier, relation-to-relation, modifier-to-relation, modifier-to-concept. To construct the dataset, we use dependency parsing and semantic role labeling tools to identify concepts, modifiers, and relations in a sentence and the connection between them. We selected premises from the SNLI test set as our inputs and split them into training and testing sets. \input{tables/dataset} \subsection{Semantic Alignment (SA)} This set of tasks probes the linguistic information for inference involving semantically aligned phrase or word pairs. These aligned pairs can often serve as an explanation of the entailment gold-label \cite{abzianidze-2020-learning, chen-etal-2021-neurallog}. We cover a wide range of semantic phenomena common in natural language inference including lexical (SA-Lex), anaphora (SA-AP), sentiment (SA-ST), and relational knowledge (SA-RK). Table \ref{tab:data_exp} lists each type of semantic alignment with associated examples. 
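Returning to the information-theoretic probing setup described above, one hedged way to picture the gain measurement is to treat each probe's cross-entropy as an estimate of the conditional entropy of the labels given a representation, and to report the contextualized model's advantage over the baseline as a percentage. The estimator below is a simplified sketch under that assumption, not the exact procedure of \citet{pimentel-etal-2020-information}.
\begin{verbatim}
# Simplified sketch of the information-gain comparison (assumed procedure).
# label_entropy : entropy of the label distribution, H(L)
# ce_baseline   : probe cross-entropy with a baseline (control) embedding
# ce_contextual : probe cross-entropy with a contextualized embedding
# Mutual information is estimated as I = H(L) - cross-entropy, so the gain of
# the contextualized embedding over the baseline, as a percentage, is:

def information_gain_pct(label_entropy, ce_baseline, ce_contextual):
    mi_baseline = label_entropy - ce_baseline
    mi_contextual = label_entropy - ce_contextual
    return 100.0 * (mi_contextual - mi_baseline) / max(mi_baseline, 1e-9)

# Hypothetical numbers: a gain of roughly 60% over the baseline embedding.
print(information_gain_pct(label_entropy=1.0,
                           ce_baseline=0.75, ce_contextual=0.60))
\end{verbatim}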
Probing data are first collected from multiple challenge datasets for NLU and then are manually annotated for the edge and vertex probing framework. For the sentiment task, we noticed that the aligned phrases are always part of a person's saying, leading a model to solve the task quickly by memorization. To avoid this, we concatenated each premise with speech fragments from another randomly selected premise to build a more complex premise. For instance, in the example in Table \ref{tab:data_exp}, \textit{I found this product to be way too big} is a speech fragment from another premise sample. \input{tables/instance_accuracy} We formulate each task as either an edge probing or a vertex probing task during annotation. For edge probing tasks, we assign either \textbf{Aligned} or \textbf{Unaligned} to a pair of spans. For example, in the Lexical example in Table \ref{tab:data_exp}, ($s^2$: [\textit{saxophone}], $s^3$: [\textit{instrument}]) are aligned, and ($s^1$: [\textit{man}], $s^3$: [\textit{instrument}]) are unaligned. In a vertex probing task, we label a token as either \textbf{Aligned$_1$} (the token belongs to the first phrase of the aligned pair), \textbf{Aligned$_2$} (the token belongs to the second phrase of the aligned pair), or \textbf{Unaligned} (the token is not in any aligned phrases). For example, in the relational-knowledge example in Table \ref{tab:data_exp}, \{\textit{Dirk}, \textit{Nowitski}, \textit{is}, \textit{a}, \textit{current}, \textit{NBA}, \textit{star}\} are tokens in the first phrase of the aligned pair, \{\textit{Dirk}, \textit{Nowitski}, \textit{plays}, \textit{in}, \textit{the}, \textit{NBA}\} are tokens in the second phrase of the aligned pair, and \{\textit{Dallas}, \textit{Mavericks}, \textit{as}, \textit{an}, \textit{all-purpose}, \textit{forward}\} are unaligned tokens. We apply edge probing to tasks with single word spans (lexical and anaphora) and vertex probing to tasks involving multi-word spans (sentiment and relational knowledge). In general, vertex probing adds more distractors into the task to increase the difficulty level, ensuring that the models are using the correct types of reasoning when making a decision. \subsection{Contradiction Signature (ContraSig)} Being able to reason about contradictions between a pair of sentences is a fundamental requirement of Natural Language Inference. To determine a contradictory relationship, systems often rely on contradiction signatures, or possible rationales of contradiction in the sentence pair. Contradiction signature detection tests for both syntax and fundamental semantics. We define this task as vertex probing and manually annotated one dataset for detecting contradiction in text \cite{Marelli2014ASC} by labeling tokens in the first phrase of a contradiction signature as \textbf{Contra-sig$_1$}, tokens in the second phrase of a contradiction signature as \textbf{Contra-sig$_2$}, and irrelevant tokens as \textbf{None}. Table \ref{tab:data_exp} shows an example, with \{\textit{have}, \textit{each}, \textit{played}, \textit{twice}\} as irrelevant tokens, \{\textit{defeats}, \textit{Germany}\} as tokens in the first phrase of the contradiction signature, and \{\textit{haven't}, \textit{beaten}, \textit{anybody}, \textit{yet}\} as tokens in the second phrase of the contradiction signature. \subsection{Monotonicity Polarization (Monotonicity)} Monotonicity information supports word-replacement-based logical inferences that NLI systems can use. 
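To illustrate the vertex-probing format used for the contradiction-signature labels described above, the snippet below encodes the token groups from the example in Table \ref{tab:data_exp} as one tag per token; the surrounding sentence is abridged and the encoding is only a schematic data-structure sketch.
\begin{verbatim}
# Illustrative vertex-probing target for the ContraSig example above
# (token groups taken from the example; the full sentence is abridged).
contra_sig_example = [
    ("have",    "None"),         ("each",    "None"),
    ("played",  "None"),         ("twice",   "None"),
    ("defeats", "Contra-sig1"),  ("Germany", "Contra-sig1"),
    ("haven't", "Contra-sig2"),  ("beaten",  "Contra-sig2"),
    ("anybody", "Contra-sig2"),  ("yet",     "Contra-sig2"),
]

# A vertex probe then predicts one label per single-token span {[i, i+1), L}.
label_set = ["None", "Contra-sig1", "Contra-sig2"]
targets = [label_set.index(tag) for _, tag in contra_sig_example]
\end{verbatim}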
For each token, we assign a monotonicity mark that is either \textbf{Monotone} ($\uparrow$), \textbf{Antitone} ($\downarrow$), or \textbf{None} ($=$). To construct our dataset, we annotated monotonicity information on all sentences in the MED dataset \cite{yanaka-etal-2019-neural} as training examples using a monotonicity annotation tool called Udep2Mono \cite{chen-gao-2021-IWCS}. For testing, we extended a challenging gold-label monotonicity dataset used by Udep2Mono that includes multiple levels of monotonicity changes from different quantifiers and logical operators. For each sentence, we replicate ten sentences following the same syntactic format. Vertex probing is used for monotonicity polarization because a model must predict every token's monotonicity information. \section{Experiment Results and Findings} \subsection{Do LMs encode information for inference?} Here we evaluate the degree to which pre-trained language models encode implicit information of linguistic properties that is essential to logical inference systems. We conducted probes on the five pre-trained language models. Table \ref{tab:instance} shows results from the probing experiments. \paragraph{Semantic Graph Construction} With the linear classifier, all language models achieve high probing accuracy that outperforms the baselines. Together, with high selectivity, this is strong evidence that the information in these models' representations allows the classifier to recover the connectives of concepts, relations, and modifiers. Interestingly, the MLP classifier did not improve on the linear classifier significantly, suggesting that the information is easy to interpret. The performance here is consistent with language models' good performance on dependency parsing, semantic role labeling, and part-of-speech tagging \cite{edge_probing_paper} which are related to semantic graph construction. \paragraph{Semantic Alignment} We observe that on lexical and anaphora based alignments (SA-Lex, SA-AP), language models show high probing accuracy that improves over the baselines significantly when using the MLP Classifier. This is evidence that language models encode linguistic information of these two types of semantic alignment since they also show high selectivity. Language models do not improve over the baselines when using the linear classifier, suggesting that these types of linguistic information might be hard to interpret. Language models only improved trivially over the baselines for alignments involving sentiment and relational knowledge (SA-ST, SA-RK). The insignificant improvement suggests that models weakly capture the information on these complex semantic alignment. Overall, the trend is that pre-trained models tend to show poor performance on complex semantic alignment that requires understanding and reasoning. This behavior is consistent with \citet{edge_probing_paper}'s finding that language models only encode a small amount of information on semantic reasoning. \paragraph{Contradiction Signature and Monotonicity} For the task on contradiction signature detection, all language models show poor performance with linear classifier except DeBERTa, which has relatively high accuracy (78.5\%), validating that information on contradiction signatures is more accessible from DeBERTa than the other four models. After using the MLP classifier, all models' accuracy increased significantly (above 90\%) while maintaining very high selectivity. 
We attribute this partly to the fact that many contradictions are simple morphological negations and antonyms, which can largely be detected using lexical semantics and syntax. The high accuracy is thus strong evidence that language models do encode a good amount of information on syntax and lexical semantics. For the monotonicity polarization task, language models show low accuracy with both the linear classifier and the MLP classifier. This suggests that these language models may not encode much monotonicity information that can support polarization. Again, the results here show that pre-trained models encode more information on simple semantic phenomena (contradiction signatures) than complex ones (monotonicity). \subsection{Label-wise Qualitative Analysis} To further understand the amount of linguistic information that pre-trained language models capture, we analyze label-wise probing quality for each task. Label-wise accuracy per task is shown in Figure \ref{fig:acc_label}. We first observe that on some tasks (SA-Lex, SA-AP, ContraSig, SemGraph), pre-trained language models show high and balanced accuracy across labels. This behavior is strong evidence that these models encode rich information on simple semantic phenomena. On the semantic graph construction task, the accuracy distribution is similar across models. The relatively low accuracy on modifier-to-relation (m-r) and modifier-to-concept (m-c) shows the incompleteness of the information in language models that supports the linking of modifiers to the words being modified. Language models seem to encode information for linking concepts to corresponding words, since the accuracy is consistently high. Note that although anaphora (SA-AP) is a complex phenomenon, models show decent performance, which validates the finding from the previous experiment. For the other complex semantic alignments (SA-ST, SA-RK), the heatmaps show highly imbalanced label-wise accuracy. The models have higher accuracy on words in the hypothesis of an aligned pair than on words in the premise, which has a much more complicated context. Since vertex probing needs to locate phrases in a premise contributing to an entailment, the low accuracy on predicting span locations in a premise suggests that language models encode very little linguistic information on complex semantic phenomena. For monotonicity polarization, the accuracy on each label is very different. Across language models, the accuracy on monotone polarity is higher than that on antitone and neutral polarity. This is consistent with findings from other probing studies on monotonicity reasoning \cite{yanaka-etal-2019-neural, geiger-etal-2020-neural}. The results from this analysis validate our main finding from the previous experiment: models tend to encode less information on complex semantic phenomena. \begin{figure}[t!] \centering \includegraphics[width=7cm]{plots/labelwise.drawio.png} \caption{\small Label-wise accuracy across models for each inference information probing task. LM1--5 stand for the five language models in order (BERT-base, BERT-large, RoBERTa-base, RoBERTa-large, DeBERTa).} \label{fig:acc_label} \end{figure} \subsection{Discussions} \paragraph{Information-Theoretic Probing} We conducted additional experiments on information-theoretic probing to validate our findings based on probing accuracy. Recall that here we want to estimate the information gain between a pre-trained representation and a baseline representation. 
We followed \citet{pimentel-etal-2020-information}'s practice of using a more powerful probing classifier (MLP). We compared each language model to the best-performing baseline embedding. As Table \ref{tab:information_gain} shows, pre-trained language models on average encode more than 50\% more information on the eight probing tasks. Overall, pre-trained language models encode more information than the baseline embeddings consistently across all tasks. Among these, the highest information gain of language models is on lexical alignment (more than a 100\% increase). This is surprising since baseline word embeddings are a representation of word semantics. We hypothesize that this is due to the proximity of words that contradict each other in the embedding space. Based on the results, we conclude that pre-trained language models encode significantly more linguistic information for logical inference than conventional word embeddings. \paragraph{Classifier Expressiveness} Some of our findings contradict several statements made by \citet{hewitt-liang-2019-designing} regarding classifier expressiveness. First, they claim that one should choose a less expressive classifier over a highly expressive one (linear over MLP) since the former has higher selectivity. However, based on the accuracy, we observe that the linear classifier performs worse on semantically oriented tasks than the MLP classifier. We also found that the MLP classifier can achieve selectivity similar to that of a linear classifier on these tasks while achieving higher accuracy. These findings suggest that linear classifiers could misinterpret semantic information in a representation, which supports \citet{pimentel-etal-2020-information}'s argument for using a more powerful classifier. Secondly, they claim that a probing classifier with sufficient expressiveness can learn any task on top of a lossless representation given enough training examples. However, we found that even highly expressive classifiers like the MLP fail to perform well on tasks involving monotonicity and higher-level semantic reasoning. \input{tables/information_gain} \input{tables/finetune} \paragraph{Can LMs learn missing information?} Here we evaluate whether pre-trained language models can obtain linguistic information for inference, missing from their pre-trained representations, through fine-tuning for the NLI task. We select a version fine-tuned on the MultiNLI dataset for each language model. We probed these fine-tuned models on the three tasks that did not benefit from the pre-trained language models. Our probing results are shown in Table \ref{tab:finetune}, and we only report the best-performing probes here. We observe that all language models' fine-tuned representations improve significantly over their pre-trained representations on the sentiment (SA-ST) and relational-knowledge-based (SA-RK) semantic alignments. In contrast, they do not improve the performance on monotonicity polarization (Monotonicity). The results show that language models can capture linguistic information on some types of semantic reasoning but not on monotonicity. Possible explanations are that these models could not obtain monotonicity information during fine-tuning, or that the training data of MultiNLI does not contain enough examples involving monotonicity. \section{Conclusions and Future Work} We presented a systematic study on determining whether pre-trained language models encode implicit linguistic information essential to symbolic inference methods. 
For each probing task, we constructed corresponding datasets. We then conducted probing experiments on the pre-trained language models using both linear and MLP classifiers for each task. In general, we first found that baseline word embeddings do not contain much linguistic information for inference, while contextualized representations of language models encode some level of linguistic information. However, they encode more information on simple semantic phenomena than on complex phenomena. Moreover, we found that linear classifiers can correctly predict syntactic information but often misinterpret semantic information, resulting in low classification accuracy. Our label-wise qualitative analysis found that the amount of linguistic information being encoded differs across task-specific labels. In particular, language models encode more linguistic information for some labels than for others. This label-wise information difference again reflects the absence and incompleteness of some linguistic information for inference in language models. Furthermore, we found that language models can effectively learn some types of missing information on complex semantic reasoning through fine-tuning for the NLI task. Overall, language models show potential to serve as knowledge bases of linguistic information that support robust symbolic reasoning. We believe that our probing and analysis provide an adequate picture of whether critical linguistic information for inference exists in pre-trained language models. Moreover, our probing can inspire future systems that combine neural and symbolic NLP methods. For future work, one could conduct further analysis on each type of linguistic information in language models by constructing more detailed probing datasets. One could also design logical systems that access linguistic information from pre-trained language models and apply it in the inference process for improved performance on large benchmarks. \section{Acknowledgments} We thank the anonymous reviewers for their thoughtful and constructive comments. Thanks also to our advisors Laurence S. Moss and Michael Wollowski for their feedback on earlier drafts of this work. Special thanks to the Machine Learning for Language Group at NYU for their wonderful NLP toolkit, JIANT \cite{phang2020jiant}. \section{Implementation Details} We provide more implementation details on how each probing experiment is conducted, as well as the software libraries it depends on. \paragraph{Software Library} For all the probing experiments, we rely on an NLP toolkit called JIANT \cite{phang2020jiant}. JIANT is a framework that supports both multi-task learning and transfer learning. We followed the edge probing experiment guide provided by the library and expanded the edge probing tasks with our own tasks. For classifiers, we used the same implementations of the linear and MLP classifiers as the original edge probing paper \cite{edge_probing_paper}. Our pre-trained language models are from huggingface's transformers library \cite{wolf2020huggingfaces}. \paragraph{Hyperparameters} We briefly describe our selection process for the key hyperparameters of all the probing tasks. For the learning rate, we follow \citet{edge_probing_paper}'s practice, where 0.0001 is used for training. We performed an empirical selection of the batch size and epoch number. We set the batch size to 4 and the number of epochs to 10. 
We observed that each probing classifier requires enough training iterations and training batches to fully extract and interpret the corresponding linguistic information from the pre-trained language model representations. We also need to avoid exposing the classifier to a large number of training examples, to ensure that the classifier does not simply learn the task itself. Combining these two criteria, we observed that, empirically, a batch size of 4 and 10 epochs is the best combination among those we tried. Overall, our selection of the important hyperparameters is summarized below: \begin{itemize} \item Learning rate: $1\times10^{-4}$ \item Batch Size: 4 \item Epochs: 10 \end{itemize} \paragraph{Model Freezing and Training} For probing experiments, we follow the conventional way of probing language models by freezing the parameters of each pre-trained language model encoder. This way, the pre-trained model parameters are not updated through gradient propagation, which prevents the encoder from learning the task. We train each probing classifier by minimizing a loss function through gradient descent. For binary classification, we use the binary cross-entropy loss. For multi-label classification, we use the softmax loss, which enforces an exclusivity constraint. For gradient descent, we use the Adam optimizer \cite{Kingma2015AdamAM} following \citet{edge_probing_paper}'s choice. \section{Dataset Construction} In this section, we provide more details on the construction of some datasets. \paragraph{SemGraph} To construct the dataset used for the SemGraph task, we first generated the universal dependencies of sentences. Then we developed an algorithm to draw edges between two words, as well as to identify these words' roles (i.e. concept, relation, or modifier), based on the type of dependency between them. Finally, we used role-labeling tools to examine the roles of vertices and refine our data. \paragraph{Contradiction Signature} Leveraging the fact that a pair of sentences has a similar context, we automatically marked the changing parts and then manually verified that the spans indeed contradict each other. \paragraph{Lexical (SA-Lex)} Since two sentences in a pair of the Breaking-NLI dataset have very similar contexts, the parts that change constitute our target spans. We automatically marked such places and verified that a lexical semantic change happens from the first place to the second one. Then we randomly picked one word from each sentence to form an unaligned pair after checking that the two words are not related. \paragraph{Anaphora (SA-AP)} Before manually annotating which entity the pronoun refers to, we utilized universal dependencies to locate potential entities and the target pronoun. \paragraph{Sentiment (SA-ST)} We first concatenated a sample with a speech fragment from another randomly selected sample. Next, we automatically drew connections (i.e. aligned or contradict) from the original speech fragment to the hypothesis based on the relation of this pair of sentences given in the original dataset. Finally, we manually checked if the added fragment could be seen as related to the hypothesis. If so, we assigned a specific label (aligned or contradict) to indicate their relation instead of classifying them as unaligned. To construct a vertex probing dataset, we assigned the \textbf{Unaligned} label to all the words outside of the speech fragments and labeled the words inside the fragments based on their relations with the hypothesis.
\paragraph{Relational Knowledge (SA-RK)} To construct this dataset, we first located a span that contains the named entities that appear in the hypothesis. Next, we manually adjusted the spans to properly include the information needed to reveal the relation between these entities.
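\paragraph{Probing Sketch} To make the freezing-and-training procedure from the Model Freezing and Training paragraph concrete, we include a minimal, illustrative PyTorch sketch below. It is not the JIANT implementation: the encoder name, the mean-pooling step, the number of classes, and the \texttt{ProbeMLP} class are simplifications of our own (the actual edge probing setup operates on span representations inside JIANT), but the frozen encoder, the MLP probe, the Adam optimizer, and the hyperparameters mirror the description above.

\begin{verbatim}
# Illustrative sketch only: a frozen encoder with a trainable MLP probe.
# Assumptions: sentence-level labels and mean pooling; the real setup
# uses span representations and the JIANT training loop.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

encoder_name = "bert-base-uncased"          # any pre-trained LM
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
encoder = AutoModel.from_pretrained(encoder_name)
for p in encoder.parameters():              # freeze the encoder so that
    p.requires_grad = False                 # gradients never update it

class ProbeMLP(nn.Module):                  # the expressive (MLP) probe
    def __init__(self, dim, n_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_classes))
    def forward(self, x):
        return self.net(x)

probe = ProbeMLP(encoder.config.hidden_size, n_classes=3)  # e.g. aligned /
                                                           # contradict / unaligned
optim = torch.optim.Adam(probe.parameters(), lr=1e-4)      # lr from above
loss_fn = nn.CrossEntropyLoss()             # softmax loss (exclusive labels)

def train_epoch(sentences, labels, batch_size=4):
    for i in range(0, len(sentences), batch_size):
        batch = tokenizer(sentences[i:i + batch_size], padding=True,
                          truncation=True, return_tensors="pt")
        with torch.no_grad():               # frozen encoder forward pass
            hidden = encoder(**batch).last_hidden_state
        pooled = hidden.mean(dim=1)         # simplification: mean pooling
        logits = probe(pooled)
        loss = loss_fn(logits, torch.tensor(labels[i:i + batch_size]))
        optim.zero_grad()
        loss.backward()
        optim.step()

# for epoch in range(10):
#     train_epoch(train_sentences, train_labels)
\end{verbatim}

Training this loop for 10 epochs with batch size 4 mirrors the hyperparameter choices listed above, while the frozen encoder guarantees that only the probe learns the task.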
\section{Introduction} Blazars form a peculiar class of active galactic nuclei (AGNs). They are radio-loud objects pointing their relativistic jets towards an observer and having a non-thermal continuum along the entire electromagnetic spectrum. One of their properties is rapid variability in different energy bands, on timescales ranging from months down to minutes. Generally, blazars are split into two groups, namely flat spectrum radio quasars (FSRQs) and BL Lacertae (BL Lac) objects, based on characteristics visible in their optical spectra, wherein FSRQs have prominent emission lines and BL Lacs show featureless spectra or weak lines only. We aim to conduct a comprehensive analysis of the temporal properties of the light curves (LCs) of blazars in our sample, in particular constraining the shape and features of the power spectral density (PSD) to look for short- and long-lasting features. This can be used to establish the variability regions and physical processes responsible for variability and, in some cases, to estimate the black hole (BH) mass of blazars. First of all, we employ standard and well-established methods to study characteristics of PSDs, such as breaks, which can point to regions responsible for variability. Subsequently, we want to verify the existence of quasi-periodic oscillations (QPOs), defined as \textit{``concentration of variability power over a limited frequency range''} \citep{Vaughan2005}. This can shed additional light on the structure of blazars. \section{The sample} We analysed, with a number of techniques, the \textit{Fermi}-LAT $\gamma$-ray LCs of 11 well-known blazars, including six FSRQs, PKS 1510$-$089, 3C~279, B2~1520+31, B2~1633+38, 3C~454.3, and PKS 1830$-$211, and five BL Lacs, Mrk~421, Mrk~501, PKS~0716+714, PKS 2155$-$304, and TXS 0506+056. We performed a standard binned maximum likelihood analysis\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned\_likelihood\_tutorial.html}}, using the \textsc{Fermitools}\footnote{\url{https://github.com/fermi-lat/Fermitools-conda/wiki}} and the \textsc{fermipy}\footnote{\url{https://fermipy.readthedocs.io/en/latest/}} packages. In this analysis, we used data from the \textit{LAT 8-year Source Catalog} \citep[4FGL; ][]{Fermi2019}, spanning the time range of 239557417--577328234 MET, which corresponds to $\sim$11 years, in the energy range of 100~MeV up to 300~GeV. We generated a set of three LCs for each blazar, using 7, 10, and 14~day binning. Only the observations with the test statistic $TS>25$ (significance of $\gtrsim 5\sigma$) were taken into account. Eventually, 33 LCs were generated and then analysed. Since the fraction of missing points can reach 13\%, we utilised the method of interpolation by autoregressive and moving average \citep[MIARMA; ][]{granado2015}. Figure~\ref{LCs} presents examples of the LCs. \begin{figure}[!h] \centering \includegraphics[width=0.8\textwidth]{LC_3C454_7d_5sigma.png} \includegraphics[width=0.8\textwidth]{Texas_14d.png} \caption{Logarithmic LCs of 3C~454.3 (top panel) and TXS 0506+056 (bottom panel). The red points are the observed data and the green points are the interpolations done with MIARMA. } \label{LCs} \end{figure} \section{Methodology} This proceedings paper is based on the publication by \cite{Tarnopolski2020}, where the methodology is described in detail.
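As a rough illustration of the first step described below (estimating the periodogram of an LC and selecting between the PL and PLC models with the $AIC_c$), we provide a short Python sketch. It is not the pipeline used in \cite{Tarnopolski2020}: the simple least-squares fit of the periodogram in linear space is our own simplification (in practice one would rather fit in log-space or use a likelihood suited to periodogram statistics), and the function names and starting values are assumptions made for readability.

\begin{verbatim}
# Illustrative sketch only: Lomb-Scargle periodogram plus AICc-based
# comparison of a pure power law (PL) and a power law with a constant
# Poisson-noise level (PLC).
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import curve_fit

def pl(f, a, beta):            # pure power law
    return a * f**(-beta)

def plc(f, a, beta, c):        # power law plus constant noise level
    return a * f**(-beta) + c

def aicc(rss, n, k):           # AICc for a least-squares fit with k params
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def compare_models(t, flux):
    freq, power = LombScargle(t, flux).autopower()
    mask = freq > 0
    f, p = freq[mask], power[mask]
    results = {}
    for name, model, p0 in [("PL", pl, (p.mean(), 1.5)),
                            ("PLC", plc, (p.mean(), 1.5, p.min()))]:
        popt, _ = curve_fit(model, f, p, p0=p0, maxfev=10000)
        rss = np.sum((p - model(f, *popt))**2)
        results[name] = aicc(rss, len(f), len(popt))
    # Delta_i < 2: models comparable, so the simpler PL is preferred
    delta = results["PLC"] - results["PL"]
    return results, delta
\end{verbatim}

In an analysis of this kind, the periodogram of each LC would be fitted with all competing models (here PL and PLC; the SBPL could be added analogously) and the model with the lowest $AIC_c$ retained, as described next.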
\\ \textbf{Fourier transform and Lomb-Scargle periodogram} \citep{Lomb1976,Scar82} are methods to generate a global PSD for a given time series, to calculate spectral indices, to check their relation to coloured noise, and to investigate global components of the LCs, such as short- and long-lasting variations. We fitted three models to the generated PSDs, namely a pure power law (PL), a PL with Poisson noise (PLC), and a smoothly broken PL (SBPL). The best model was chosen based on the Akaike Information Criterion \citep[$AIC_c$; ][]{Akai74}, which we evaluated via the difference, $\Delta_i=AIC_{c,i}-AIC_{c,\rm min}$, between the $AIC_c$ of the $i$-th model and the minimal value ($AIC_{c,\rm min}$). If $\Delta_i<2$, the models are considered equally good and we chose the pure PL model as the adequate one since it is simpler. \\ \textbf{Wavelet scalogram} \citep{Lenoir2018} is a two-dimensional time-frequency representation of the energy-density map, showing the temporal localisation of a frequency present in the signal and allowing us to study the local components and their time evolution. Significance testing at the $\geq3\sigma$ level was employed within this method to search for QPOs.\\ \textbf{ARMA and CARMA modelling}: the autoregressive moving average process \citep[ARMA;][]{Scargle1981} and the continuous-time ARMA process \citep[CARMA;][]{kelly2014} are stochastic processes applied to detect different types of variability in the data, to uncover QPOs, and to determine the variability-based classification of the astrophysical sources. In the case of the CARMA processes, a PSD can be composed of a number of zero-centered Lorentzians, which define breaks, while the non-zero-centered Lorentzians are used to model QPOs. Moreover, the CARMA modelling allows us to handle irregular sampling and measurement errors.\\ \textbf{Hurst exponent} \citep{Hurs51} measures the statistical self-similarity of a time series. The self-similarity is connected to the long-range dependence, referred to as \textit{memory}, of a process via the autocorrelation function. The properties of $H$ can be summarized as follows: $H$ takes values between 0 and 1; $H=0.5$ is for an uncorrelated process (white noise or Brownian motion); if $H>0.5$, then a process is persistent, i.e. exhibits long-term memory; while in the opposite case, if $H<0.5$, one deals with an anti-persistent (short-term memory) process.\\ \textbf{The $\mathcal{A-T}$ plane} \citep{tarnopolski2016} is used to differentiate various types of coloured noise. The plane is spanned by the fraction of \textit{turning points} ($\mathcal{T}$), which quantifies the noisiness of a time series, and the \textit{Abbe value} ($\mathcal{A}$), which quantifies its smoothness. If $\mathcal{T}$ is asymptotically equal to 2/3, the time series constitutes a purely random time series or white noise. A process with $\mathcal{T}>2/3$ is noisier than white noise. For $\mathcal{T}<2/3$, a process is less noisy than white noise. \section{Results} We analysed the \textit{Fermi}-LAT $\gamma$-ray LCs of 11 blazars, six FSRQs and five BL Lacs, employing a number of techniques. We found the following results. \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{PKS2155_7day_cut.png} \caption{Wavelet scalogram of the 7-day binned LC of PKS~2155$-$304. The right panel displays the global wavelet periodogram. The magenta contours in the scalogram and the magenta line in the periodogram denote 3$\sigma$ local and global confidence levels, respectively. 
The shadowed area shows the cone of influence, i.e. the zone where edge effects make the estimates unreliable.} \label{Scalogram} \end{figure} \begin{enumerate} \item The $\beta$ values calculated with the Fourier spectra and the LSP are consistent with each other for the majority of blazars; however, we noticed a discrepancy for 3C~454.3. In this case, the Fourier PSD is described by a pure PL, while the LSP is fitted better by a PLC. The former PSD is flatter than the latter. The fit of the SBPL model was not competitive in any of the cases under consideration. Overall, the shapes of the PSDs indicate a coloured noise with $1\lesssim\beta\lesssim 2$, i.e. between pink and red noise. We suggest that each object can be treated as a realisation of a single stochastic process underlying the observed variability. \item The only significant ($\geq3\sigma$) QPO we found using the wavelet scalograms is the well-known QPO in PKS~2155$-$304, with a period of $612\pm 42$~days (Figure~\ref{Scalogram}). Moreover, we noticed a QPO candidate in the B2~1633+38 data, evolving from $P\sim 500$~days to $P>1000$~days, and in PKS~0716+714 at $P\gtrsim 1000$~days, lasting 2 to 3 cycles only. These objects require additional observations to actually conclude whether a QPO exists in their data. We did not find significant QPOs in the studied LCs of the remaining blazars in our sample. \item ARMA and CARMA models suggest breaks in the PSDs on timescales of a few hundred days in all blazars in the sample except 3C~454.3 and B2~1520+31. We searched for the best CARMA$(p,q)$ model with $1\leqslant p\leqslant 7$ and $0\leqslant q\leqslant 6$, $q < p$. We obtained the orders $(1,0)$ or $(2,1)$ for the majority of cases. In general, FSRQs are described with the former model, while the latter represents BL Lacs. We do not observe any additional features in the data of the analysed objects. \item The Hurst exponents are $>0.5$ for the majority of BL Lacs in our sample, indicating the presence of long-term memory. The FSRQs swing back and forth between $H\lesssim 1$ and $H\gtrsim 0$. Only 3C~454.3 keeps $H<0.5$, being an anti-persistent system. Mrk~421 and PKS~0716+714 also oscillate over the entire allowed range of $H$. This evolution behaviour does not allow us to formulate an unambiguous claim about the persistence of these objects. \item The FSRQs are characterised by lower values of $\mathcal{A}$ than BL Lacs, and these two classes of blazars are clearly separated on the $\mathcal{A-T}$ plane (Figure~\ref{fig_AT_separation}). This was earlier discovered by \cite{zywucka2020} in the I-band optical LCs of blazars and blazar candidates behind the Magellanic Clouds \citep{zywucka2018}. This finding shows that the flux changes are different for the two blazar classes, and thus they should be driven by different physical mechanisms or take place in different blazar components. The separation allows us to distinguish blazar classes based on LCs without including multiwavelength, polarimetric, and spectroscopic properties. Furthermore, the location in the $\mathcal{A}-\mathcal{T}$ plane indicates properties of the structure of the LCs that are not revealed by the other methods used in this work. \end{enumerate} \begin{figure}[!h] \centering \includegraphics[width=0.7\textwidth]{AT_plot.png} \caption{Locations in the $\mathcal{A}-\mathcal{T}$ plane of blazars from our sample. The FSRQs are denoted with blue and green colors, while BL Lacs are marked with yellow, orange, and red. The separation between FSRQs and BL Lacs is denoted with the dashed vertical gray line. 
The dark gray area is the region between the pure PL line and $\mathcal{T}=2/3$, while the light gray regions represent the error bars of the simulations. } \label{fig_AT_separation} \end{figure} \section{Conclusions} In our research, we considered a stochastic description to model the variability of blazars. All blazars in our sample are characterized by long timescales consistent with the conclusion that their variability originates in the accretion disk. The timescales also point to the physical processes responsible for $\gamma$-ray production, i.e. external Compton in the case of FSRQs and synchrotron self-Compton for the BL Lacs. A detailed elaboration of the results and conclusions is presented in \cite{Tarnopolski2020}. \acknowledgments The work of N.\.{Z}. is supported by the South African Research Chairs Initiative (grant no. 64789) of the Department of Science and Innovation and the National Research Foundation\footnote{Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors and the NRF does not accept any liability in this regard.} of South Africa. M.T. acknowledges support by the Polish National Science Center (NSC) through the OPUS grant No. 2017/25/B/ST9/01208. V.M. is supported by the NSC grant No. 2016/22/E/ST9/00061. J.P.-G. acknowledges financial support from the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award to the Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709) and from Spanish public funds for research under project ESP2017-87676-C5-5-R.
\section{Introduction}\label{sec1} Let $\Omega\subseteq \mathbb{R}^{n}~(n\geq 2)$ be a domain and $1<p<\infty$. In the monograph \cite{HKM} by Heinonen, Kilpel\"{a}inen, and Martio, the authors studied a nonlinear potential theory for the second-order quasilinear elliptic operator~$\dive(\mathcal{A}(x,\nabla u))$, which is called the $\mathcal{A}$-Laplace operator (or in short, the $\mathcal{A}$-Laplacian). We recall that the $\mathcal{A}$-Laplacian might be a degenerate or singular elliptic operator that satisfies some natural local regularity assumptions. In addition, it is assumed that the $\mathcal{A}$-Laplacian is $(p-1)$-homogeneous and monotone in its second variable (for details, see Assumption \ref{ass8}). Prototypes of the $\mathcal{A}$-Laplace operator are the $p$-Laplacian $\dive{\left(|\nabla u|^{p-2}\nabla u\right)}$ and the $(p,A)$-Laplacian $$\dive{\left(|\nabla u|^{p-2}_{A}A\nabla u\right)}\triangleq \mathrm{div}\,\left((A(x)\nabla u\cdot\nabla u)^{(p-2)/2}A(x)\nabla u\right),$$ where $A$ is a locally bounded and locally uniformly elliptic matrix (see \cite{HKM,Pinchover,Regev,Tintarev}). A systematic criticality theory has been developed for the $p$-Laplace operator and the~$(p,A)$-Laplace operator with a locally bounded potential in \cite{Tintarev} and \cite{Regev}, respectively. Furthermore, in \cite{Pinchover}, Pinchover and Psaradakis have extended the theory to the case of the $(p,A)$-Laplace operator with a potential in the local Morrey space. See \cite{Murata, Pinchoverlinear} for the criticality theory for the second-order linear elliptic (not necessarily symmetric) case. We refer also to Pinsky's book \cite{Pinsky}, where the author studies this topic from the probabilistic point of view. Moreover, a criticality theory for Schr\"{o}dinger operators on graphs has also been established by Keller, Pinchover, and Pogorzelski in \cite{Keller}. The theory has found applications in the works of Murata and Pinchover and their collaborators (see recent examples in \cite{Beckus, KellerHardy, MT}). For the case of generalized Schr\"{o}dinger forms, we refer to \cite{Takeda2014, Takeda2016}. Criticality theory has applications in a number of areas of analysis, for example, in the spectral theory of Schr\"odinger operators \cite{Pinchoverlinear}, variational inequalities (like Hardy, Rellich, and Hardy-Sobolev-Maz'ya type inequalities) \cite{Kovarik,HSM}, and stochastic processes \cite{Pinsky}. Among the applications in PDE, we mention results concerning the large time behavior of the heat kernel \cite{PinchoverGreen}, Liouville-type theorems \cite{Lioupincho}, the behavior of the minimal positive Green function \cite{PinchoverGreen2,Pinchoverlinear}, and the asymptotic behavior of positive solutions near an isolated singularity \cite{Fraas}. 
The goal of the present paper is to extend the results in \cite{Pinchover,Regev,Tintarev} concerning positive solutions of the homogeneous quasilinear equation $$Q'_{p,A,V}[u]\triangleq -\dive{\left(|\nabla u|^{p-2}_{A(x)}A(x)\nabla u\right)}+V(x)|u|^{p-2}u=0\quad \mbox{ in } \Omega,$$ to the equation $$Q'_{p,\mathcal{A},V}[u]\triangleq -\dive{\mathcal{A}(x,\nabla u)}+V(x)|u|^{p-2}u=0\quad \mbox{ in } \Omega.$$ The latter equation is the {\em local} Euler-Lagrange equation of the energy functional $$Q_{p,\mathcal{A},V}[\vgf]\triangleq Q_{p,\mathcal{A},V}[\vgf;\Omega]\triangleq \int_{\Omega}\left(\mathcal{A}(x,\nabla \vgf)\cdot\nabla \vgf + V(x)|\vgf|^{p}\right)\,\mathrm{d}x \qquad \vgf\in C_c^{\infty}(\Omega).$$ Note that the equation $Q'_{p,\mathcal{A},V}[u]=0$ (and in particular, $Q'_{p,A,V}[u]=0$) is {\em half-linear}, that is, if $v$ is a solution of this equation, then for every $c\in\mathbb{R}$, $cv$ is also a solution. We assume that the potential $V$ belongs to the local Morrey space $M^{q}_{{\rm loc}}(p;\Omega)$ associated with the exponent $p$ (see Definitions~\ref{Morreydef1} and \ref{Morreydef2}), which is almost the largest class of potentials that guarantees the validity of the Harnack inequality and the H\"older continuity of solutions. The assumptions on the $\mathcal{A}$-Laplacian are as in \cite{HKM} (see Assumption \ref{ass8}). In addition, a strong convexity assumption on $\mathcal{A}$ (Assumption \ref{ass2}) is imposed in order to prove certain important results in Sections~\ref{sec_eigenvalue}, \ref{criticality}, and \ref{minimal}. In fact, the local strong convexity of $\mathcal{A}$ is utilized in two different ways. One is direct (see Proposition \ref{mainlemma}). The other is indirect, i.e., via the D\'{\i}az-Sa\'{a}-type inequality (Lemma~\ref{elementary}), see Theorem \ref{maximum}. Our main results include the existence, uniqueness, and simplicity of the principal eigenvalue of the operator~$Q'_{p,\mathcal{A},V}$ in a domain~$\omega\Subset\Omega$, a weak comparison principle, and the criticality theory for $Q'_{p,\mathcal{A},V}$. Moreover, based on a Picone-type identity and a generalized H\"older inequality (see Lemma~\ref{ass1}), two alternative proofs of the Agmon-Allegretto-Piepenbrink-type (AAP) theorem are given (see Lemma \ref{lem_alter} and Theorem \ref{thm_AAP}; see also \cite{Agmon, Allegretto1974}, and \cite{Pinchover} for a short updated review on the AAP theorem). In addition, we characterize in a Lipschitz domain~$\omega\Subset\Omega$ the validity of the generalized strong/weak maximum principles and the unique solvability in $W^{1,p}_0(\gw)$ of a nonnegative solution of the Dirichlet problem $Q'_{p,\mathcal{A},V}[v]=g\geq 0$ with~$g\in L^{p'}(\omega)$ via the strict positivity of the principal eigenvalue. The paper is organized as follows. In Section \ref{back}, we introduce a variational Lagrangian $F$ and then obtain from $F$ the operator~$\mathcal{A}$ by virtue of \cite[Lemma 5.9]{HKM}. We establish a generalized H\"{o}lder inequality (Lemma~\ref{ass1}), which is a key tool used to prove several fundamental results, and formulate the additional assumption discussed above (Assumption~\ref{ass2}). In addition, we recall the definition of the associated local Morrey spaces, and the Morrey-Adams theorem, which is an essential tool for our study. Finally, we define the notion of weak solutions of the quasilinear equation $Q'_{p,\mathcal{A},V}[u]=0$. 
In Section \ref{toolbox}, we present certain a priori properties of weak solutions of the quasilinear equation $Q'_{p,\mathcal{A},V}[u]=0$, including Harnack-type inequalities, a local H\"{o}lder estimate, and the Harnack convergence principle. In Section \ref{sec_eigenvalue}, we first extend D\'{\i}az-Sa\'{a} type inequalities, and then prove the coercivity and weak lower semicontinuity of certain related functionals. We also establish a Picone-type identity. Then we show that in a domain~$\omega\Subset\Omega$, the generalized principal eigenvalue is a principal eigenvalue, that is, a Dirichlet eigenvalue with a nonnegative eigenfunction. Moreover, we prove that the generalized principal eigenvalue is simple. With these preliminaries, we also study the generalized weak and strong maximum principles, the positivity of the generalized principal eigenvalue, and other related properties. Furthermore, we establish a weak comparison principle by virtue of the super/sub-solution technique. In Section \ref{AP}, we prove for our setting the corresponding AAP-type theorem, which turns out to be closely related to the existence of solutions of a certain nonlinear first-order equation of the divergence type. As a result, we show that the AAP theorem implies the uniqueness of the principal eigenvalue in a domain~$\omega\Subset\Omega$. In Section \ref{criticality}, we establish a systematic criticality theory for the operator $Q'_{p,\mathcal{A},V}$ with applications to a Hardy-Sobolev-Maz'ya inequality and the $(\mathcal{A},V)$-capacity. In Section \ref{minimal}, we study the removability of an isolated singularity. We also show that the criticality of~$Q_{p,\mathcal{A},V}$ is equivalent to the existence of a global minimal positive solution. Moreover, we prove that the existence of a minimal positive Green function, with an additional assumption in the case of~$p>n$, implies the subcriticality of $Q'_{p,\mathcal{A},V}$. Finally, we extend the results in \cite{Kovarik} and answer the question: how large can Hardy-weights be? \section{$\mathcal{A}$-Laplacian and Morrey potentials}\label{back} In this section, we introduce the~$\mathcal{A}$-Laplace operator. We recall the local Morrey space where our potential~$V$ lies and the Morrey-Adams theorem, both of which are defined and proved in \cite{Pinchover}. Finally, we define weak solutions and supersolutions of the quasilinear elliptic equation~$Q'_{p,\mathcal{A},V}[v]=g$. Let $g_1,g_2$ be two positive functions defined in $\Omega$. We use the notation $g_1\asymp g_2$ in $\Omega$ if there exists a positive constant $C$ such that $C^{-1}g_{2}(x)\leq g_{1}(x) \leq Cg_{2}(x)$ for all $x\in \Omega$. \subsection{Variational Lagrangian $F$ and its gradient $\mathcal{A}$} In this subsection, we present a variational Lagrangian $F$ which satisfies certain desired conditions. Then we define the $\mathcal{A}$-Laplacian as the divergence of the gradient of $F$. \subsubsection{Variational Lagrangian $F$} Following the assumptions in {\cite[Page 97]{HKM}}, we list below our structural and regularity assumptions on the variational Lagrangian $F.$ \begin{assumptions}\label{ass9} {\em \label{assump1} Let~$\Omega\! \subseteq \!\mathbb{R}^{n}$ be a nonempty domain, let $F:\Omega\times \mathbb{R}^{n} \! \rightarrow \!\mathbb{R}_+$, and let $1\!<\!p\!<\!\infty$. 
We assume that $F$ satisfies the following conditions: \begin{itemize} \item {\bf Measurability:} For all~$\xi\in\mathbb{R}^{n}$, the mapping $x\mapsto F(x,\xi)$ is measurable in $\Omega$. \item {\bf Ellipticity:} For all $\omega\Subset \Omega$ there exist $0<\kappa_\omega\leq\nu_\omega<\infty$ such that for almost all $x\in \omega$ and all $\xi\in \mathbb{R}^n$, $\kappa_\omega|\xi|^{p}\leq F(x,\xi)\leq\nu_\omega|\xi|^{p}$. \item {\bf Convexity and differentiability with respect to $\xi$}: For a.e.~$x\in \Omega$, the mapping $\xi\mapsto F(x,\xi)$ is strictly convex and continuously differentiable in $\mathbb{R}^n$. \item {\bf Homogeneity:} $F(x,\lambda\xi)=|\lambda|^{p}F(x,\xi)$ for a.e.~$x\in \Omega$, all~$\lambda\in\mathbb{R}$, and all~$\xi\in\mathbb{R}^{n}$. \end{itemize} } \end{assumptions} The following is a useful inequality derived directly from the strict convexity of $F$. \begin{lemma}[{\cite[Lemma 5.6]{HKM}}]\label{strictconvexity} For a.e.~$x\in\Omega$ and all~$\xi_{1},\xi_{2}\in\mathbb{R}^{n}$ with~$\xi_{1}\neq\xi_{2}$, we have: $$F(x,\xi_{1})-F(x,\xi_{2})>\nabla_{\xi}F(x,\xi_{2})\cdot(\xi_{1}-\xi_{2}).$$ \end{lemma} \subsubsection{$\mathcal{A}$-Laplacian} \begin{Def} {\em Let~$\Omega\subseteq \mathbb{R}^{n}$ be a nonempty domain and $F(x,\xi)$ satisfy Assumptions \ref{ass9}. For a.e.~$x\in \Omega$, we denote by $\mathcal{A}(x,\xi) \triangleq \nabla_{\xi}F(x,\xi)$ the classical gradient of $F(x,\xi)$ with respect to~$\xi$. The \emph{$\mathcal{A}$-Laplacian} is defined as the divergence of~$\mathcal{A}$. } \end{Def} \begin{remark} \emph{ By Euler's homogeneous function theorem, for a.e.~$x\in\omega$, $$\mathcal{A}(x,\xi)\cdot\xi = p F(x,\xi) \geq p\gk_\gw |\xi|^p \qquad \forall \xi\in \mathbb{R}^n .$$ Moreover, since for a.e. $x\in \Omega$ the nonnegative function \begin{equation}\label{newformula} \vert\xi\vert_{\mathcal{A}}=\vert\xi\vert_{\mathcal{A}(x)}\triangleq (\mathcal{A}(x,\xi)\cdot\xi)^{1/p} \end{equation} is positively homogeneous of degree $1$ in $\xi$, and $\{\xi \in \mathbb{R}^n \mid \vert\xi\vert_{\mathcal{A}}\leq 1\}$ is a convex set, it follows that for a.e. $x\in \Omega$, $\vert\xi\vert_{\mathcal{A}}$ is a norm on $\mathbb{R}^n$ (see, for example, \cite[Theorem 1.9]{Simon}).} \end{remark} \begin{Thm}[{\cite[Lemma 5.9]{HKM}}]\label{thm_1} Let~$\Omega\subseteq \mathbb{R}^{n}$ be a nonempty domain. For every domain $\omega\Subset\Omega$, denote $\alpha_{\omega}=\kappa_{\omega}$, $\beta_{\omega}=2^{p}\nu_{\omega}$. Then the vector-valued function~$\mathcal{A}(x,\xi): \Omega\times \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ satisfies the following conditions: \begin{itemize} \item {\bf Regularity:} For a.e. $x\in \Omega$, the function $\mathcal{A}(x,\xi ): \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ is continuous with respect to $\xi$, and $x \mapsto \mathcal{A}(x,\xi)$ is Lebesgue measurable in $\Omega$ for all~$\xi\in \mathbb{R}^{n}$. \item {\bf Homogeneity:} For all~$\lambda\in {\mathbb{R}\setminus\{0\}}$, $\mathcal{A}(x,\lambda \xi)=\lambda\,|\lambda|^{p-2}\,\mathcal{A}(x,\xi).$ \item {\bf Ellipticity:} For all domains $\omega\Subset \Omega$, all $\xi \in \mathbb{R}^{n}$, and a.e. 
$x\in \omega$, \begin{equation}\label{structure} \alpha_\omega|\xi|^{p}\le\mathcal{A}(x,\xi)\cdot\xi, \quad |\mathcal{A}(x,\xi)|\le \beta_\omega\,|\xi|^{{p}-1}. \end{equation} \item {\bf Monotonicity:} For a.e.~$x\!\in\! \Omega$ and all~$\xi\!\neq \! \eta \! \in \! \mathbb{R}^{n}$, $\big(\mathcal{A}(x,\xi)-\mathcal{A}(x,\eta)\big) \! \cdot \! (\xi-\eta)> 0.$ \end{itemize} \end{Thm} \begin{ass}\label{ass8} {\em Throughout the paper we assume that $\mathcal{A}(x,\xi)=\nabla_{\xi}F(x,\xi)$, where $F$ satisfies Assumptions~\ref{ass9}. In particular, we assume that $\mathcal{A}$ satisfies all the conditions mentioned in Theorem~\ref{thm_1}. } \end{ass} \begin{comment} \begin{Rem}{\em \red{This remark is now redundant} It can be easily checked that the equation $$-\dive{\mathcal{A}(x,\nabla u)}=0\qquad \mbox {in } \Omega $$ is the Euler-Lagrange equation of the functional $\int_\Omega F(x,\nabla u)\,\mathrm{d}x$, see \cite[theorems~5.13 and 5.18]{HKM}. } \end{Rem} \end{comment} \subsubsection{Generalized H\"older inequality} In the proof of the AAP Theorem (Theorem~\ref{thm_AAP}), we use the following generalized H\"older inequality. The inequality follows similarly to the proof of \cite[Lemma 2.2]{newpicone}, where the case $\mathcal{A}=\mathcal{A}(\xi)$ is considered. Nevertheless, since the generalized H\"older inequality is a pointwise inequality with respect to $x$, the proof holds also for $\mathcal{A}=\mathcal{A}(x,\xi)$. \begin{lemma}[Generalized H\"older inequality]\label{ass1} Let~$p'$ be the conjugate exponent of~$1<p<\infty$. Then the following inequality holds: $$\big|\mathcal{A}(x,\xi)\cdot\eta\big| \leq\big(\mathcal{A}(x,\xi)\cdot\xi\big)^{1/p'}\big(\mathcal{A}(x,\eta)\cdot\eta\big)^{1/p} =\vert\xi\vert_{\mathcal{A}}^{p-1}\vert\eta\vert_{\mathcal{A}}, \qquad \forall \xi,\eta\in\mathbb{R}^{n} \mbox{ and a.e. } x\in \Omega.$$ \end{lemma} \subsubsection{Local strong convexity of $|\xi|_{\mathcal{A}}^p$} By our assumptions, for a.e. $x\in \Omega$, the function $\xi\mapsto |\xi|_{\mathcal{A}}^p$ defined by \eqref{newformula} is strictly convex. For certain important results in Sections~\ref{sec_eigenvalue}, \ref{criticality}, and \ref{minimal} we need to assume: \begin{ass}[Local strong convexity of $|\xi|_{\mathcal{A}}^p$]\label{ass2} {\em We suppose that $|\xi|_{\mathcal{A}}^p$ is a locally strongly convex function with respect to~$\xi$, that is, there exists $\bar{p}\geq p$ such that for every subdomain $\gw\Subset \Omega$ there exists a positive constant $C_\gw(\bar{p}, \mathcal{A})$ such that $$|\xi|^{p}_{\mathcal{A}}-|\eta|^{p}_{\mathcal{A}}-p\mathcal{A}(x,\eta)\cdot(\xi-\eta)\geq C_\gw(\bar{p}, \mathcal{A}) |\xi-\eta|^{\bar{p}}_\mathcal{A} \qquad \forall \xi,\eta\in \mathbb{R}^n \mbox{ and a.e. } x\in \gw.$$ } \end{ass} \begin{Rem}\label{pAlaplacian} {\em See \cite[Lemma 3.4]{Regev} and \cite[Lemma 2.2]{Lioupincho} for a similar inequality for $Q'_{A,p,V}$, and in particular, for the~$(p,A)$-Laplacian. Note that in \cite{Pinchover}, for $p<2$ the authors assume the local boundedness of a positive supersolution of $Q'_{A,p,V}[u]=0$ in $\Omega$ and its gradient, and use the H\"{o}lder inequality to obtain the desired result. } \end{Rem} \subsubsection{Pseudo-$p$-Laplacian} We present further examples of operators which fulfill the assumptions above. 
\begin{Def} \emph{A measurable matrix function $A:\Omega\to \mathbb{R}^{n^2}$ is called \emph{locally bounded} if for every subdomain $\omega\Subset\Omega$, there exists a positive constant~$C(\omega)$ such that~$|A(x)\xi|\leq C(\omega)|\xi|$ for all $\xi\in\mathbb{R}^n$ and a.e. $x\in \omega$.} \end{Def} \begin{exa}\label{exa} \emph{For a.e.~$x\in\Omega$ and every~$\xi =(\xi_1,\ldots,\xi_n)\in\mathbb{R}^{n}$, let $$F(x,\xi)\triangleq \frac{1}{p}\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p},$$ where~$p\geq 2$ and the Lebesgue measurable functions $a_{i}$ locally satisfy $a_{i}\asymp 1$. } \end{exa} \begin{lemma}\label{pseudo} Let~$F$ be as in Example \ref{exa}. For a.e.~$x\in\Omega$ and every~$\xi =(\xi_1,\ldots,\xi_n)\in\mathbb{R}^{n}$, we have \begin{enumerate} \item[$(1)$] $\nabla_{\xi}F(x,\xi)=\mathcal{A}(x,\xi)=(a_{1}(x)|\xi_{1}|^{p-2}\xi_{1},\ldots,a_{n}(x)|\xi_{n}|^{p-2}\xi_{n})$, $|\xi|_{\mathcal{A}}^{p}=\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}$; \item[$(2)$] the operator $\mathcal{A}$ satisfies Assumptions~\ref{ass8} and \ref{ass2}. \end{enumerate} Furthermore, \begin{enumerate} \item[$(3)$] for $0\leq t\leq 1$, consider the Lagrangian $F_{t,A}\triangleq tF+((1-t)/p)|\xi|_{A}^{p}$, where $A$ is a locally bounded, symmetric, and locally uniformly positive definite matrix function. Then $\mathcal{A}_{t,A}$, the gradient of $F_{t,A}$, satisfies Assumptions~\ref{ass8} and \ref{ass2}. \end{enumerate} \end{lemma} \begin{remark} \emph{If $a_{i}= 1$ for all~$i=1,2,\ldots,n$, then the operator $\mathrm{div}\,\!(\mathcal{A})$ is called the {\em pseudo-$p$-Laplacian}.} \end{remark} \begin{proof}[Proof of Lemma \ref{pseudo}] Part $(1)$ is obtained by a straightforward differentiation. $(2)$ Our proof is inspired by \cite[Lemma 4.2]{Lindqvist}. Since $\displaystyle{\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}}$ is convex with respect to $\xi$, we get by Lemma \ref{strictconvexity}, $$\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}\geq \sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}+p\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p-2}\eta_{i}(\xi_{i}-\eta_{i}),$$ for a.e.~$x\in\Omega$ and all~$\xi,\eta\in\mathbb{R}^{n}$. Hence, $$\sum_{i=1}^{n}a_{i}(x)\left\vert\frac{\xi_{i}+\eta_{i}}{2}\right\vert^{p}\geq \sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}+\frac{p}{2}\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p-2}\eta_{i}(\xi_{i}-\eta_{i}).$$ By Clarkson's inequality for $p\geq 2$ \cite[Theorem 4.10]{Brezis}, we obtain $$\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}+\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}\geq 2\sum_{i=1}^{n}a_{i}(x)\left\vert\frac{\xi_{i}+\eta_{i}}{2}\right\vert^{p} +2\sum_{i=1}^{n}a_{i}(x)\left\vert\frac{\xi_{i}-\eta_{i}}{2}\right\vert^{p}.$$ Then $$\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}\geq \sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p}+ p\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p-2}\eta_{i}(\xi_{i}-\eta_{i}) +2^{1-p}\sum_{i=1}^{n}a_{i}(x)\left\vert\xi_{i}-\eta_{i}\right\vert^{p},$$ which gives Assumption \ref{ass2} for $p\geq 2$ because locally~$a_{i}\asymp 1$ for all~$i=1,2,\ldots,n$. Moreover, on the unit Euclidean sphere in~$\mathbb{R}^{n}$, the function~$f(\xi)\triangleq\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p}$ has a positive lower bound and a finite upper bound, and therefore, the local ellipticity conditions follow. $(3)$ For all~$\xi\in\mathbb{R}^{n}$ and a.e.~$x\in\Omega$, $|\xi|_{\mathcal{A}_{t,A}}^{p}= t|\xi|_{\mathcal{A}}^{p}+(1-t)|\xi|_{A}^{p}$. Hence, the local strong convexity and the ellipticity conditions of~$|\cdot|_{\mathcal{A}_{t,A}}^{p}$ follow from Remark \ref{pAlaplacian} and~$(2)$. 
\end{proof} \begin{comment} \begin{Rem} \red{Please add a remark on the convex combination of the functional $\frac{1}{p}\sum_{i=1}^{n}A_i(x)|\xi_{i}|^{p}$ and the $(p,A)$-Dirichlet functional}. \end{Rem}cl \end{comment} \subsection{Morrey potentials} In this subsection, we give a short review of the local Morrey space $M^{q}_{\mathrm{loc}}(p;\Omega)$, the functional space where the potential~$V$ belongs to, and recall the Morrey-Adams theorem. \subsubsection{Local Morrey space $M^{q}_{\mathrm{loc}}(p;\Omega)$} The following is a revised definition of the local Morrey space $M^{q}_{\mathrm{loc}}(p;\Omega)$, where $q=q(p)$. \begin{Def}[{\cite[definitions 2.1 and 2.3]{Pinchover}}]\label{Morreydef1}{\em Let $\omega\Subset \Omega$ be a domain and $f\in L^1_{\rm loc}(\omega)$ be a real-valued function. Then \begin{itemize} \item for $p<n$, we say that $f\in M^{q}(p;\omega)$ if $q>n/p$ and $$\Vert f\Vert_{M^{q}(p;\omega)}\triangleq \sup_{\substack{y\in\gw\\0<r<\diam(\gw)}} \frac{1}{r^{n/q'}}\int_{\omega\cap B_{r}(y)}|f|\,\mathrm{d}x<\infty,$$ where $\mathrm{diam}(\omega)$ is the diameter of~$\omega$; \item for $p=n$, we say that~$f\in M^{q}(n;\omega)$ if $q>n$ and $$\Vert f\Vert_{M^{q}(n;\omega)}\triangleq \sup_{\substack{y\in\gw\\0<r<\diam(\gw)}} \varphi_{q}(r)\int_{\omega\cap B_{r}(y)}|f|\,\mathrm{d}x<\infty,$$ where $\varphi_{q}(r)\triangleq \Big(\log\big(\mathrm{diam}(\omega)/r\big)\Big)^{n/q'};$ \item for $p>n$ and $q=1$, we define~$M^{q}(p;\omega)\triangleq L^{1}(\omega)$. \end{itemize} } \end{Def} \begin{Def}[{\cite[Definition 2.3]{Pinchover}}]\label{Morreydef2}{\em For every real-valued function $f\in L^1_{\rm loc}(\Omega)$ and~$1<p<\infty$, we say that $f\in M^{q}_{{\rm loc}}(p;\Omega)$ if $f\in M^{q}(p;\omega)$ for every domain~$\omega\Subset\Omega$. } \end{Def} For a more detailed discussion on Morrey spaces, see \cite{Maly,Pinchover} and references therein. \subsubsection{Morrey-Adams theorem} We present the Morrey-Adams theorem proved in \cite{Pinchover}, which is crucial when dealing with the potential term. See \cite{Maly,Morrey1966,Rakotoson1990,Trudinger1967} for relevant earlier results. \begin{Thm}[{\cite[Theorem 2.4]{Pinchover}}]\label{MA_thm} Let~$\omega\Subset\mathbb{R}^{n}$ be a domain and~$V\in M^{q}(p;\omega)$. \begin{enumerate} \item[$(1)$] There exists a constant~$C(n,p,q)>0$ such that for any~$\delta>0$ and all~$u\in W^{1,p}_{0}(\omega)$, \begin{equation*} \int_{\omega}|V||u|^{p}\,\mathrm{d}x\leq \delta\Vert\nabla u\Vert^{p}_{L^{p}(\omega;\mathbb{R}^{n})}+\frac{C(n,p,q)}{\delta^{n/(pq-n)}}\Vert V\Vert^{pq/(pq-n)}_{M^{q}(p;\omega)}\Vert u\Vert^{p}_{L^{p}(\omega)}. \end{equation*} \item[$(2)$] For any~$\omega'\Subset\omega$ with Lipschitz boundary, there exists $\delta_{0}$ such that for any~$0<\delta\leq \delta_{0}$ and all~$u\in W^{1,p}(\omega')$, \begin{equation*} \int_{\omega'}|V||u|^{p}\,\mathrm{d}x\leq \delta\Vert\nabla u\Vert^{p}_{L^{p}(\omega';\mathbb{R}^{n})}+C\left(n,p,q,\omega',\omega,\delta,\Vert V\Vert_{M^{q}(p;\omega)}\right)\Vert u\Vert^{p}_{L^{p}(\omega')}. \end{equation*} \end{enumerate} \end{Thm} \subsection{Weak solutions of $Q'_{p,\mathcal{A},V}[u]=g$} With the preliminaries of the previous subsections in hand, we may define weak solutions of the equation~$Q'_{p,\mathcal{A},V}[u]=g$. \begin{Def}\label{def_sol} {\em Suppose that $\mathcal{A}$ satisfies Assumption~\ref{ass8} and~$V, g\in M^{q}_{\mathrm{loc}}(p;\Omega)$. 
A function~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ is a {\em (weak) solution} of the equation \begin{equation}\label{half} Q'_{p,\mathcal{A},V}[v]\triangleq -\dive\mathcal{A}(x,\nabla v)+V|v|^{p-2}v=g, \end{equation} in~$\Omega$ if for all~$\vgf \in C_{c}^{\infty}(\Omega)$,$$\int_{\Omega}\mathcal{A}(x,\nabla v)\cdot \nabla \vgf\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v \vgf\,\mathrm{d}x=\int_{\Omega} g\vgf\,\mathrm{d}x,$$ and a \emph{supersolution} of \eqref{half} if for all nonnegative~$\vgf \in C_{c}^{\infty}(\Omega)$,$$\int_{\Omega}\mathcal{A}(x,\nabla v)\cdot \nabla \vgf\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v \vgf\,\mathrm{d}x\geq \int_{\Omega} g\vgf\,\mathrm{d}x.$$ A supersolution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of \eqref{half} is said to be \emph{proper} if~$v$ is not a solution of \eqref{half}. } \end{Def} By virtue of the Morrey-Adams theorem and an approximation argument, we obtain: \begin{lemma}\label{lem4.11} Suppose that $\mathcal{A}$ satisfies Assumption~\ref{ass8} and~$V\in M^{q}_{\mathrm{loc}}(p;\Omega)$. \begin{enumerate} \item[$(1)$] All the integrals in Definition \ref{def_sol} are well defined. \item[$(2)$] The test function space~$C_{c}^{\infty}(\Omega)$ in Definition \ref{def_sol} can be replaced with~$W^{1,p}_{c}(\Omega)$. \end{enumerate} \end{lemma} \section{Properties of weak solutions of~$Q'_{p,\mathcal{A},V}[u]=0$}\label{toolbox} In this section, we present various properties of weak solutions of~$Q'_{p,\mathcal{A},V}[u]=0$ which are frequently used subsequently, including Harnack and weak Harnack inequalities, standard elliptic H\"{o}lder estimates, and a Harnack convergence principle. \subsection{Harnack inequality} By \cite[Theorem 3.14]{Maly} for~$p\!\leq \!n$ and \cite[Theorem 7.4.1]{Pucci} for~$p\!>\!n$, we have the following local Harnack inequality for nonnegative solutions of $Q'_{p,\mathcal{A},V}[u]\!=\!0$. See \cite{Trudinger,Maly,Moser,Serrin1964} for Harnack's inequalities for linear and quasilinear equations in divergence form. \begin{Thm} Assume that~$\mathcal{A}$ satisfies Assumption \ref{ass8} and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let~$v$ be a nonnegative solution of $Q'_{p,\mathcal{A},V}[u]=0$ in a domain $\gw\Subset \Omega$. Then for any~$\omega'\Subset\omega$, $$\sup_{\omega'}v\leq C\inf_{\omega'} v,$$ where~$C$ is a positive constant depending only on~$n,p,q,\omega',\omega,\alpha_{\omega},\beta_{\omega},$ and $\Vert V\Vert_{M^{q}(p;\omega)}$. \end{Thm} \subsection{H\"{o}lder estimate} Let $v$ be a H\"{o}lder continuous function of order $0<\gamma\leq 1$ in $\gw$. We denote $$[v]_{\gamma,\omega}\triangleq\sup_{x,y\in\omega,x\neq y}\frac{\big|v(x)-v(y)\big|}{|x-y|^{\gamma}}\,.$$ The H\"{o}lder continuity of solutions of $Q'_{p,\mathcal{A},V}[u]=0$ follows from \cite[Theorem 4.11]{Maly} for~$p\leq n$ and \cite[Theorem 7.4.1]{Pucci} for~$p>n$. For further regularity of solutions of quasilinear elliptic equations, see \cite{Trudinger, Maly, Pucci}. We need the following result: \begin{Thm} Assume that~$\mathcal{A}$ satisfies Assumption \ref{ass8} and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let~$v$ be a solution of $Q'_{p,\mathcal{A},V}[u]=0$ in a domain $\gw\Subset \Omega$. Then~$v$ is locally H\"{o}lder continuous of order $0<\gamma\leq 1$ (depending on~$n,p,q,\alpha_{\omega}$, and~$\beta_{\omega}$). Furthermore, for any~$\omega'\Subset\omega$,$$[v]_{\gamma,\omega'}\leq C\sup_{\omega}|v|,$$ where~$C$ is a positive constant depending only on~$n,p,q,\omega',\omega,\alpha_{\omega},\beta_{\omega}$, and~$\Vert V\Vert_{M^{q}(p;\omega)}$. 
\end{Thm} \subsection{Weak Harnack inequality} If $v$ is a nonnegative supersolution of \eqref{half}, then the Harnack inequality still holds for $p>n$ by \cite[Theorem 7.4.1]{Pucci} (see also \cite{Trudinger1967}). On the other hand, for~$p\leq n$, we have: \begin{Thm}[{\cite[Theorem 3.13]{Maly}}] Assume that~$\mathcal{A}$ satisfies Assumption \ref{ass8} and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let~$p\leq n$ and~$s=n(p-1)/(n-p)$. For any nonnegative supersolution~$v$ of $Q'_{p,\mathcal{A},V}[u]=0$ in a domain $\omega\Subset \Omega$, any~$\omega'\Subset\omega$, and any~$0<t<s$, $$\Vert v\Vert_{L^{t}(\omega')}\leq C\inf_{\omega'}v,$$ where~$C$ is a positive constant depending only on~$n,p,t,\omega,\omega',$ and~$\Vert V\Vert_{M^{q}(p;\omega)}$. In particular, such a supersolution is either strictly positive in the domain $\gw$ or vanishes identically. \end{Thm} \subsection{Harnack convergence principle} In this subsection, we generalize the Harnack convergence principle \cite[Proposition 2.11]{Pinchover} to our setting. See \cite[Proposition 2.7]{Giri} for a slightly more general Harnack convergence principle in the sense that the second-order term is also not fixed but a sequence. \begin{Def} \emph{ By a \emph{Lipschitz exhaustion} of~$\Omega$, we mean a sequence of Lipschitz domains~$\{\omega_{i}\}_{i\in\mathbb{N}}$ satisfying for all~$i\in\mathbb{N}$,~$\omega_{i}\Subset\omega_{i+1}\Subset\Omega$ and~$\cup_{i=1}^{\infty}\omega_{i}=\Omega.$} \end{Def} For the existence of a Lipschitz exhaustion of~$\Omega$, see for example \cite[Proposition 8.2.1]{smooth}. \begin{Thm}[Harnack convergence principle]\label{HCP} Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $\{\omega_{i}\}_{i\in\mathbb{N}}$ be a Lipschitz exhaustion of~$\Omega$ and~$x_{0}\in \omega_{1}$. Assume that~$\mathcal{V}_{i}\in M^{q}(p;\omega_{i})$ converges weakly in~$M^{q}_{{\rm loc}}(p;\Omega)$ to~$\mathcal{V}\in M^{q}_{{\rm loc}}(p;\Omega)$ as~$i\rightarrow\infty$. For every~$i\in\mathbb{N}$, suppose that~$v_{i}$ is a positive solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}_{i}}[u]=0$ in~$\omega_{i}$ with~$v_{i}(x_{0})=1$. Then there exists a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ converging weakly in $W^{1,p}_{{\rm loc}}(\Omega)$ and locally uniformly in~$\Omega$ to a positive weak solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\Omega$. \end{Thm} \begin{proof} We use the same approach as in the proof of \cite[Proposition 2.11]{Pinchover}. Note our convention throughout the proof: when extracting a suitable subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$, we keep denoting the obtained subsequence by~$\{v_{i}\}_{i\in\mathbb{N}}$ without stating it. By the local H\"{o}lder continuity,~$v_{i}$ are continuous in~$\omega_{i}$ for all~$i\in\mathbb{N}$. Fix a subdomain $\gw_1\Subset \gw\Subset \Omega$. By the local Harnack inequality, $\{v_{i}\}_{i\in\mathbb{N}}$ is uniformly bounded in $\omega$. Therefore, the local H\"{o}lder continuity guarantees that $\{v_{i}\}_{i\in\mathbb{N}}$ is equicontinuous over~$\omega$. Applying the Arzel\`{a}-Ascoli theorem, we obtain a subsequence converging uniformly in $\gw$ to a positive continuous function $v$. Now we aim to find a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ converging weakly in~$W^{1,p}(\omega)$ to a positive solution of~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\Omega$. Fix $k\in\mathbb{N}$. 
Then for any~$\varphi\in C^{\infty}_{c}(\omega_{k})$, we have~$v_{i}|\varphi|^{p}\in W^{1,p}_{c}(\omega_{k})$ for $i>k$. Testing~$v_{i}|\varphi|^{p}$ in the definition of~$v_{i}$ being a positive weak solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}_{i}}[u]=0$ in $\gw_k$, we obtain % $$\left\Vert|\nabla v_{i}|_{\mathcal{A}}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})}\leq p\int_{\omega_{k}}|\nabla v_{i}|_{\mathcal{A}}^{p-1}|\varphi|^{p-1}v_{i}|\nabla \varphi|_{\mathcal{A}}\,\mathrm{d}x+\int_{\omega_{k}}|\mathcal{V}_{i}|v_{i}^{p}|\varphi|^{p}\,\mathrm{d}x.$$ % Applying Young inequality $pab\leq \varepsilon a^{p'}+\left((p-1)/\varepsilon\right)^{p-1}b^{\,p},$ on $p\!\int_{\omega_{k}}\!|\nabla v_{i}|_{\mathcal{A}}^{p-1}|\varphi|^{p-1}v_{i}|\nabla \varphi|_{\mathcal{A}}\!\,\mathrm{d}x$ with $\varepsilon\in (0,1), a=|\nabla v_{i}|_{\mathcal{A}}^{p-1}|\varphi|^{p-1}$, and~$b=v_{i}|\nabla \varphi|_{\mathcal{A}}$, and the Morrey-Adams theorem (Theorem~\ref{MA_thm}) on $\int_{\omega_{k}}|\mathcal{V}_{i}|v_{i}^{p}|\varphi|^{p}\,\mathrm{d}x$, we conclude: \begin{equation*} (1-\varepsilon)\!\left\Vert|\nabla v_{i}|_{\mathcal{A}}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})} \leq \!\left(\frac{p-1}{\varepsilon}\right)^{p-1}\!\!\left\Vert|\nabla \varphi|_{\mathcal{A}}v_{i}\right\Vert^{p}_{L^{p}(\omega_{k})}+\delta\left\Vert\nabla(v_{i}\varphi)\right\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})} +C\left\Vert v_{i}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})}, \end{equation*} where $C=C\left(n,p,q,\delta,\Vert\mathcal{V}\Vert_{M^{q}(p;\omega_{k+1})}\right)$. By virtue of the structural properties of~$\mathcal{A}$ and the frequently used inequality: $$\big\Vert\nabla(v_{i}\varphi)\big\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})}\leq 2^{p-1}\Big(\big\Vert v_{i}\nabla \varphi\big\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})}+\big\Vert \varphi\nabla v_{i}\big\Vert^{p}_{L^{p}(\omega_{k};\mathbb{R}^{n})}\Big),$$ we observe that for all~$i>k$ and all~$\varphi\in C^{\infty}_{c}(\omega_{k})$: \begin{equation*} \left((1\!-\!\varepsilon)\alpha_{\omega_{k}} \! \!- \! 2^{p-1}\delta\right)\!\left\Vert|\nabla v_{i}|\varphi\right\Vert_{L^{p}(\omega_{k})}^{p} \!\leq \!\! \left(\!\!\left(\!\frac{p-1}{\varepsilon} \! \right)^{p-1} \!\!\beta_{\omega_{k}} \! + \! 2^{p-1}\delta \!\right)\!\!\left\Vert v_{i}|\nabla \varphi|\right\Vert_{L^{p}(\omega_{k})}^{p} \!\! + \!C\left\Vert v_{i}\varphi\right\Vert^{p}_{L^{p}(\omega_{k})} \!, \end{equation*} where~$C=C\left(n,p,q,\delta,\Vert\mathcal{V}\Vert_{M^{q}(p;\omega_{k+1})}\right)$. Let $\delta>0$ be such that~$(1-\varepsilon)\alpha_{\omega_{k}}-2^{p-1}\delta>0$, and fix $\omega\Subset\omega'\Subset\omega_{k}$. 
Choose~$\varphi\in C^{\infty}_{c}(\omega_{k})$ \cite[Theorem 1.4.1]{cutoff} such that$$\supp(\varphi)\subseteq\omega',\quad0\leq \varphi\leq 1~\mbox{in}~\omega',\quad \varphi=1~\mbox{in}~\omega, \mbox{ and } |\nabla \varphi|\leq C(\omega',\omega)~\mbox{in}~\omega'.$$ Consequently, with~$C'=C\left(n,p,q,\delta,\varepsilon,\alpha_{\omega_{k}},\Vert\mathcal{V}\Vert_{M^{q}(p;\omega_{k+1})}\right)$ and~$C''=C(p,\delta,\varepsilon,\alpha_{\omega_{k}},\beta_{\omega_{k}})$, we have \begin{eqnarray*} \Vert\nabla v_{i}\Vert_{L^{p}(\omega;\mathbb{R}^{n})}^{p}+\Vert v_{i}\Vert_{L^{p}(\omega)}^{p} &\leq& \Vert|\nabla v_{i}|\varphi\Vert_{L^{p}(\omega_{k})}^{p}+\Vert v_{i}\varphi\Vert_{L^{p}(\omega_{k})}^{p}\\ &\leq& C'\big\Vert v_{i}\varphi\big\Vert^{p}_{L^{p}(\omega_{k})} + C''\big\Vert v_{i}|\nabla \varphi|\big\Vert_{L^{p}(\omega_{k})}^{p} \leq \tilde{C} \end{eqnarray*} where the positive constant $\tilde{C}$ does not depend on~$v_{i}$. So~$\{v_{i}\}_{i\in\mathbb{N}}$ is bounded in~$W^{1,p}(\omega)$. Hence, there exists a subsequence converging weakly in~$W^{1,p}(\omega)$ to the nonnegative function~$v\in W^{1,p}(\omega)$ with~$v(x_{0})=1$ because~$\{v_{i}\}_{i\in\mathbb{N}}$ converges uniformly to~$v$ in~$\omega$. The task is now to show that~$v$ is a positive solution of~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\tilde{\omega}\Subset\omega$ such that~$x_{0}\in\tilde{\omega}$. For any~$\psi\in C^{\infty}_{c}(\tilde{\omega})$, we have \begin{eqnarray*} &&\left\vert\int_{\tilde{\omega}}\mathcal{V}_{i}v_{i}^{p-1}\psi\,\mathrm{d}x-\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x\right\vert\\ &=&\left\vert\int_{\tilde{\omega}}\mathcal{V}_{i}v_{i}^{p-1}\psi\,\mathrm{d}x-\int_{\tilde{\omega}}\mathcal{V}_{i}v^{p-1}\psi\,\mathrm{d}x+\int_{\tilde{\omega}}\mathcal{V}_{i}v^{p-1}\psi\,\mathrm{d}x-\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x\right\vert\\ &\leq& \int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert \vert v_{i}^{p-1}-v^{p-1}\vert|\psi|\,\mathrm{d}x+\left\vert\int_{\tilde{\omega}}(\mathcal{V}_{i}-\mathcal{V}) v^{p-1}\psi\,\mathrm{d}x\right\vert\\ &\leq& C(\psi)\int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert \vert v_{i}^{p-1}-v^{p-1}\vert\,\mathrm{d}x+\left\vert\int_{\tilde{\omega}}(\mathcal{V}_{i}-\mathcal{V}) v^{p-1}\psi\,\mathrm{d}x\right\vert. \end{eqnarray*} The sequence~$\{v_{i}\}_{i\in\mathbb{N}}$ is uniformly bounded by Harnack's inequality in~$\tilde{\omega}$. The limit function~$v$ is continuous in~$\omega$ and hence bounded in~$\tilde{\omega}$. The function~$f(t)\triangleq t^{p-1}$ is uniformly continuous on~$[0,L]$ for any~$L>0$. Then~$\{v_{i}^{p-1}\}_{i\in\mathbb{N}}$ converges uniformly in~$\tilde{\omega}$ to~$v^{p-1}$ as~$\{v_{i}\}_{i\in\mathbb{N}}$ converges uniformly to~$v$. Furthermore, by a standard finite covering argument, because~$\mathcal{V}_{i}$ converges weakly to~$\mathcal{V}$ in~$M^{q}_{{\rm loc}}(p;\Omega)$, we infer that~$\int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert\,\mathrm{d}x$ is bounded with respect to~$i$. 
Hence, $$\int_{\tilde{\omega}}\vert\mathcal{V}_{i}\vert \vert v_{i}^{p-1}-v^{p-1}\vert\,\mathrm{d}x\to 0 \qquad \mbox{ as } i\to \infty.$$ Moreover, by the weak convergence of $\{\mathcal{V}_{i}\}_{i\in\mathbb{N}}$ to $\mathcal{V}$, it follows that $$ \int_{\tilde{\omega}}(\mathcal{V}_{i}-\mathcal{V}) v^{p-1}\psi\,\mathrm{d}x\to 0 \qquad \mbox{ as } i\to\infty.$$ Consequently, it follows that \begin{equation}\label{potentialconvergence} \lim_{i\rightarrow\infty}\int_{\tilde{\omega}}\mathcal{V}_{i}v_{i}^{p-1}\psi\,\mathrm{d}x=\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x. \end{equation} {\bf Claim:} The sequence $\{\mathcal{A}(x,\nabla v_{i})\}$ converges weakly in~$L^{p'}(\tilde{\omega};\mathbb{R}^{n})$ to $ \mathcal{A}(x,\nabla v)$. This will imply that for any~$\psi\in C^{\infty}_{c}(\tilde{\omega})$ we have $$\int_{\tilde{\omega}}\mathcal{A}(x,\nabla v)\cdot\nabla \psi\,\mathrm{d}x+\int_{\tilde{\omega}}\mathcal{V}v^{p-1}\psi\,\mathrm{d}x=\lim_{i\to\infty}\int_{\tilde{\omega}}\left(\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi+\mathcal{V}_{i}v_{i}^{p-1}\psi\right)\,\mathrm{d}x=0.$$ In other words, $v$ is a nonnegative solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\tilde{\omega}$. To this end, choose~$\psi\in C^{\infty}_{c}(\omega)$ \cite[Theorem 1.4.1]{cutoff} such that$$\supp(\psi)\subseteq\omega,\quad0\leq \psi\leq 1~\mbox{in}~\omega,\quad \psi=1~\mbox{in}~\tilde{\omega}, \mbox{ and } |\nabla \psi|\leq C(\tilde{\omega},\omega)~\mbox{in}~\omega.$$ Testing~$\psi(v_{i}-v)$ in the definition of~$v_{i}$ being a solution of~$Q'_{p,\mathcal{A},\mathcal{V}_{i}}[w]=0$ in~$\omega_{i}$, we get$$\int_{\omega}\psi\mathcal{A}(x,\nabla v_{i})\cdot \nabla(v_{i}-v)\,\mathrm{d}x=-\int_{\omega}(v_{i}-v)\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi\,\mathrm{d}x-\int_{\omega}\mathcal{V}_{i}v_{i}^{p-1}\psi(v_{i}-v)\,\mathrm{d}x.$$ We claim that \begin{equation}\label{ineq7} \int_{\omega}\psi\mathcal{A}(x,\nabla v_{i})\cdot \nabla(v_{i}-v)\,\mathrm{d}x\rightarrow 0\mbox{ as } i\rightarrow \infty. \end{equation} As in the proof of \eqref{potentialconvergence}, $\int_{\omega}\mathcal{V}_{i}v_{i}^{p-1}\psi(v_{i}-v)\,\mathrm{d}x \to 0$ as $i\rightarrow\infty$. In addition \begin{eqnarray*} \left\vert-\int_{\omega}(v_{i}-v)\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi\,\mathrm{d}x\right\vert &\leq& \beta_{\omega}\int_{\omega}|\nabla v_{i}|^{p-1}|(v_{i}-v)\nabla \psi|\,\mathrm{d}x\\ &\leq& \beta_{\omega}\Big(\int_{\omega}|(v_{i}-v)\nabla \psi|^{p}\,\mathrm{d}x\Big)^{1/p}\Vert\nabla v_{i}\Vert^{p/p'}_{L^{p}(\omega;\mathbb{R}^{n})}\\ &\leq &C\left(\beta_{\omega},\omega,\tilde{\omega}, \psi\right)\Vert v_{i}-v\Vert_{L^{p}(\omega)}\Vert\nabla v_{i}\Vert^{p/p'}_{L^{p}(\omega;\mathbb{R}^{n})}. \end{eqnarray*} Because the norms $\Vert\nabla v_{i}\Vert^{{p}/{p'}}_{L^{p}(\omega;\mathbb{R}^{n})}$ are uniformly bounded for all~$i\in\mathbb{N}$, and~$v_{i}$ converges to~$v$ uniformly in~$\omega$ as~$i \to \infty$, we get $$\left\vert-\int_{\omega}(v_{i}-v)\mathcal{A}(x,\nabla v_{i})\cdot\nabla \psi\,\mathrm{d}x\right\vert \to 0 \mbox{ as } i\to \infty.$$ \begin{comment} For every~$X,Y\in\mathbb{R}^{n},n\geq 2$, \begin{multline} \big(|X|^{p-2}_{A}AX-|Y|^{p-2}_{A}AY\big)\cdot(X-Y)\\ =|X|^{p}_{A}-|X|^{p-2}_{A}AX\cdot Y+|Y|^{p}_{A}-|Y|^{p-2}_{A}AY\cdot X\\ \geq |X|^{p}_{A}-|X|^{p-1}_{A}|Y|_{A}+|Y|^{p}_{A}-|Y|^{p-1}_{A}|X|_{A}\\ =(|X|^{p-1}_{A}-|Y|^{p-1}_{A})(|X|_{A}-|Y|_{A})\geq 0. 
\end{multline} \end{comment} It follows that \begin{eqnarray*} 0\leq \mathcal{I}_{i}&\triangleq&\int_{\tilde{\omega}}\big(\mathcal{A}(x,\nabla v_{i})-\mathcal{A}(x,\nabla v)\big)\cdot(\nabla v_{i}-\nabla v)\,\mathrm{d}x\\ &\leq&\int_{\omega}\psi\big(\mathcal{A}(x,\nabla v_{i})-\mathcal{A}(x,\nabla v)\big)\cdot(\nabla v_{i}-\nabla v)\,\mathrm{d}x \to 0 \mbox{ as } i\to \infty, \end{eqnarray*} which is derived from \eqref{ineq7} and the weak convergence of $\{\nabla v_{i}\}_{i\in\mathbb{N}}$ to~$\nabla v$. Hence, $\lim_{i\rightarrow\infty}\mathcal{I}_{i}=0$. By means of~\cite[Lemma 3.73]{HKM}, we obtain in~$L^{p'}(\tilde{\omega};\mathbb{R}^{n})$, $$\mathcal{A}(x,\nabla v_{i})\rightharpoonup \mathcal{A}(x,\nabla v) \qquad \mbox{ as } i\to \infty.$$ Hence, $v$ is a positive solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\tilde{\omega}$ satisfying $v(x_{0})=1$. In conclusion, for any~$\tilde{\omega}\Subset\omega\Subset\Omega$ with~$x_{0}\in \tilde{\omega}$, there exists a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ converging weakly and locally uniformly in~$\tilde{\omega}$ to a positive weak solution of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\tilde{\omega}$. Using a standard diagonal argument, we may extract a subsequence of~$\{v_{i}\}_{i\in\mathbb{N}}$ which converges weakly in~$W^{1,p}(\omega_{i})$ for all~$i\in\mathbb{N}$ and locally uniformly in~$\Omega$ to a positive weak solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\Omega$. \end{proof} \begin{comment} \subsection{Gradient estimate} In the sequel, we will need for the case $1<p<2$ the local boundedness of the modulus of the gradient of a solution of \eqref{half}. The following theorem is a consequence of \red{\cite[Theorem~1.1]{DM} and} \cite[Theorem~5.3]{Lieberman}. \red{Please change the formulation of the gradient estimate using the result in \cite{DM}, and prove that this estimate implies that solutions are in fact $C^1$.} \begin{Thm}\label{localbound} Let $1<p<2$, $\mathcal{A}\triangleq(\mathcal{A}^{1},\mathcal{A}^{2},\ldots,\mathcal{A}^{n})$ satisfy \red{Assumption~\ref{ass8}} and $V\in L^{\infty}_{{\rm loc}}(\Omega)$. Suppose further that there exist positive constants $\theta\leq 1,\mu,\Lambda,\Lambda_{1}$, and a nonnegative constant~$\iota\leq 1$ such that for all~$(x,y,\eta)\in\Omega\times\Omega\times(\mathbb{R}^{n}\setminus\{0\})$ and all~$\xi\in\mathbb{R}^{n}$, $$\frac{\partial \mathcal{A}^{i}}{\partial \eta^{j}}(x,\eta)\xi_{i}\xi_{j}\geq \mu(\iota+|\eta|)^{p-2}|\xi|^{2};$$ $$\left|\frac{\partial \mathcal{A}}{\partial \eta}(x,\eta)\right|\leq \Lambda(\iota+|\eta|)^{p-2};$$ $$|\mathcal{A}(x,\eta)-\mathcal{A}(y,\eta)|\leq \Lambda_{1}(1+|\eta|)^{p-1}|x-y|^{\theta}.$$ Then there exists a positive constant~$\gamma'=\gamma'\left(p,n,\theta,\frac{\Lambda}{\mu}\right)$ such that any locally bounded weak solution~$u$ of \eqref{half} is in~$C^{1,\gamma'}_{{\rm loc}}(\Omega)$. In particular, the modulus of the gradient of such a solution is locally bounded. \end{Thm} \end{comment} \section{Generalized principal eigenvalue}\label{sec_eigenvalue} Throughout the present section we consider solutions in a fixed domain $\gw\Subset \Omega$. 
First, by virtue of the weak lower semicontinuity as well as the coercivity of certain functionals related to the functional $Q_{p,\mathcal{A},V}$, we prove that the generalized principal eigenvalue of the operator $Q'_{p,\mathcal{A},V}$ in $\gw$ is, in fact, a principal eigenvalue of $Q'_{p,\mathcal{A},V}$. Moreover, the principal eigenvalue is simple, which is proved by virtue of the Picone-type identity (Lemma \ref{Picone}). After that, we show in Theorem \ref{maximum} (together with Theorem~\ref{complement}) that the following properties are equivalent: the positivity of the principal eigenvalue, the validity of the generalized weak or strong maximum principles, and the unique solvability of the Dirichlet problem $Q'_{p,\mathcal{A},V}[u]=g$ in~$W^{1,p}_{0}(\omega)$. We also establish a weak comparison principle, which is of core importance in Section~\ref{minimal}. See \cite{Pinchovergp} for more on the generalized principal eigenvalue.
\subsection{D\'{\i}az-Sa\'a type inequalities} In this subsection, we generalize the D\'{\i}az-Sa\'a type inequalities as a counterpart of \cite[Lemma 3.3]{Pinchover}; see also \cite{Anane1987, Diaz, Lindqvist} for related results. To this end, we assume Assumption \ref{ass2}, which concerns the local strong convexity of the norm $|\cdot|_{\mathcal{A}}^p$. The D\'{\i}az-Sa\'a type inequalities are used to prove the uniqueness of solutions of two Dirichlet problems (see theorems~\ref{maximum} and \ref{5proposition}). However, in Lemma \ref{newDiaz}, and hence in Theorem \ref{5proposition}, this assumption is not needed.
\begin{Def} {\em Assume that $\mathcal{A}$ satisfies Assumption \ref{ass8}, and let $V\in M_{\rm loc}^{q}(p;\Omega)$. Let $\omega\Subset\Omega$ be a subdomain. A real number~$\lambda$ is called an \emph{eigenvalue with an eigenfunction $v$} of the Dirichlet eigenvalue problem \begin{equation}\label{evp} \begin{cases} Q'_{p,\mathcal{A},V}[u]=\lambda|u|^{p-2}u&\text{in~$\omega$},\\ u=0& \text{on~$\partial\omega$}, \end{cases} \end{equation} if~$v\in W^{1,p}_{0}(\omega)\setminus\{0\}$ is a solution of the equation $Q'_{p,\mathcal{A},V}[u]=\lambda |u|^{p-2}u$ in~$\omega$.} \end{Def}
\begin{lem}[D\'{\i}az-Sa\'a type inequalities]\label{elementary} Assume that~$\mathcal{A}$ satisfies Assumption~\ref{ass8} and Assumption \ref{ass2}. Let~$\omega\Subset\Omega$ be a domain and~$g_{i},V_{i}\in M^{q}(p;\omega)$, where~$i=1,2$. There exist an exponent $\bar p\geq p$ and a positive constant~$C(\bar p,\mathcal A,\gw)$ satisfying the following conclusions. \begin{enumerate} \item[$(1)$] Let $w_{i}\in W^{1,p}_{0}(\omega)\setminus\{0\}$ be nonnegative solutions of $Q'_{p,\mathcal{A},V_{i}}[u]=g_{i}$ in $\gw$, where $i=1,2$, and let~$w_{i,h}\triangleq w_{i}+h$, where~$h$ is a positive constant. Then \begin{multline*} I_{h,g_{1},g_{2},w_{1},w_{2}} \!\!\triangleq \!\!\int_{\omega}\!\!\left(\!\frac{g_{1} \! - \! V_{1}w_{1}^{p-1}}{w_{1,h}^{p-1}}- \frac{g_{2}\!-\!V_{2}w_{2}^{p-1}}{w_{2,h}^{p-1}}\!\right)\!\!(w^{p}_{1,h}-w^{p}_{2,h})\!\,\mathrm{d}x \!\geq \! C(\bar p,\mathcal A,\gw)L_{h,w_{1},w_{2}}, \end{multline*} where $$L_{h,w_{1},w_{2}}\triangleq\int_{\omega}(w^{p}_{1,h}+w^{p}_{2,h})\left|\frac{\nabla w_{1,h}}{w_{1,h}}-\frac{\nabla w_{2,h}}{w_{2,h}}\right|_{\mathcal{A}}^{\bar{p}}\,\mathrm{d}x.$$ \item[$(2)$] Let $w_{\lambda}$ and $w_{\mu}$ be nonnegative eigenfunctions of the operators $Q'_{p,\mathcal{A},V_{i}}$ in $\gw$ with eigenvalues $\lambda$ and $\mu$, respectively.
Then $$I_{0,g_{\lambda},g_{\mu},w_{\lambda},w_{\mu}}=\int_{\omega}\big((\lambda-\mu)-(V_{1}-V_{2})\big)(w_{\lambda}^{p}-w_{\mu}^{p})\,\mathrm{d}x\geq C(\bar p,\mathcal A,\gw)L_{0,w_{\lambda},w_{\mu}}.$$ \end{enumerate} \end{lem}
\begin{proof} $(1)$ Let~$\psi_{1,h}\triangleq(w^{p}_{1,h}-w^{p}_{2,h})w_{1,h}^{1-p}$. Then~$\psi_{1,h}\in W^{1,p}_{0}(\omega)$. It follows that \begin{equation*} \int_{\omega}\mathcal{A}(x,\nabla w_{1})\cdot \nabla \psi_{1,h}\,\mathrm{d}x+\int_{\omega}V_{1}|w_{1}|^{p-2}w_{1}\psi_{1,h}\,\mathrm{d}x=\int_{\omega}g_{1}\psi_{1,h}\,\mathrm{d}x. \end{equation*} We thus get \begin{eqnarray*}\label{w1h} &&\int_{\omega}(w^{p}_{1,h}-w^{p}_{2,h})\left\vert\frac{\nabla w_{1,h}}{w_{1,h}}\right\vert_{\mathcal{A}}^{p}\,\mathrm{d}x -p\int_{\omega}w_{2,h}^{p}\mathcal{A}\left(x,\frac{\nabla w_{1,h}}{w_{1,h}}\right)\cdot\left(\frac{\nabla w_{2,h}}{w_{2,h}}-\frac{\nabla w_{1,h}}{w_{1,h}}\right) \!\! \,\mathrm{d}x\\ &=&\int_{\omega}\frac{g_{1}-V_{1}w_{1}^{p-1}}{w_{1,h}^{p-1}}(w^{p}_{1,h}-w^{p}_{2,h}) \,\mathrm{d}x. \end{eqnarray*} Similarly, we see that \begin{eqnarray*}\label{w2h} &&\int_{\omega}(w^{p}_{2,h}-w^{p}_{1,h})\left\vert\frac{\nabla w_{2,h}}{w_{2,h}}\right\vert_{\mathcal{A}}^{p}\,\mathrm{d}x -p\int_{\omega}w_{1,h}^{p}\mathcal{A}\left(x,\frac{\nabla w_{2,h}}{w_{2,h}}\right)\cdot\left(\frac{\nabla w_{1,h}}{w_{1,h}}-\frac{\nabla w_{2,h}}{w_{2,h}}\right)\!\!\,\mathrm{d}x\\ &=&\int_{\omega}\frac{g_{2}-V_{2}w_{2}^{p-1}}{w_{2,h}^{p-1}}(w^{p}_{2,h}-w^{p}_{1,h})\,\mathrm{d}x. \end{eqnarray*} Adding the two previously derived equalities yields \begin{eqnarray*} & &I_{h,g_{1},g_{2},w_{1},w_{2}} \!=\!\int_{\omega}\!\!w^{p}_{1,h} \! \left(\left\vert\frac{\nabla w_{1,h}}{w_{1,h}}\right\vert_{\mathcal{A}}^{p}\!\!-\!\left\vert\frac{\nabla w_{2,h}}{w_{2,h}}\right\vert_{\mathcal{A}}^{p} \!-\! p\mathcal{A}\!\!\left(x,\frac{\nabla w_{2,h}}{w_{2,h}}\right)\!\!\cdot\!\!\left(\frac{\nabla w_{1,h}}{w_{1,h}}-\frac{\nabla w_{2,h}}{w_{2,h}}\right) \! \right)\!\!\,\mathrm{d}x\\[2mm] &+& \!\!\int_{\omega}w^{p}_{2,h}\left(\left\vert\frac{\nabla w_{2,h}}{w_{2,h}}\right\vert_{\mathcal{A}}^{p}-\left\vert\frac{\nabla w_{1,h}}{w_{1,h}}\right\vert_{\mathcal{A}}^{p}-p\mathcal{A}\left(x,\frac{\nabla w_{1,h}}{w_{1,h}}\right)\cdot\left(\frac{\nabla w_{2,h}}{w_{2,h}}-\frac{\nabla w_{1,h}}{w_{1,h}}\right)\right)\!\!\,\mathrm{d}x. \end{eqnarray*} Applying the local strong convexity of $\mathcal{A}$ (Assumption~\ref{ass2}), we obtain the conclusion $(1)$.
$(2)$ Using part $(1)$, we have \begin{eqnarray*} &&\left\vert\left((\lambda-V_{1})\left(\frac{w_{\lambda}}{w_{\lambda,h}}\right)^{p-1}-(\mu-V_{2})\left(\frac{w_{\mu}}{w_{\mu,h}}\right)^{p-1}\right)(w_{\lambda,h}^{p}-w_{\mu,h}^{p})\right\vert\\ &\leq& \left(|\lambda-V_{1}|+|\mu-V_{2}|\right)\left((w_{\lambda}+1)^{p}+(w_{\mu}+1)^{p}\right)\\ &\leq& 2^{p-1}\left(|\lambda-V_{1}|+|\mu-V_{2}|\right)(w_{\lambda}^{p}+w_{\mu}^{p}+2)\in L^{1}(\omega). \end{eqnarray*} On the other hand, for a.e.~$x\in\omega$, we have \begin{eqnarray*} &&\lim_{h\rightarrow 0}\left((\lambda-V_{1})\left(\frac{w_{\lambda}}{w_{\lambda,h}}\right)^{p-1}-(\mu-V_{2})\left(\frac{w_{\mu}}{w_{\mu,h}}\right)^{p-1}\right)(w_{\lambda,h}^{p}-w_{\mu,h}^{p})\\ &=&(\lambda-\mu-V_{1}+V_{2})(w_{\lambda}^{p}-w_{\mu}^{p}). \end{eqnarray*} Hence, the dominated convergence theorem and Fatou's lemma imply $(2)$. \end{proof}
\begin{lemma}\label{newDiaz} Assume that $\omega\Subset \Omega$ is a bounded Lipschitz domain, $\mathcal{A}$ satisfies Assumption~\ref{ass8}, and $g_{i},V_{i}\in M^{q}(p;\omega)$.
For $i=1,2$, let $w_{i}\in W^{1,p}(\omega)$ be, respectively, positive solutions of the equations $Q'_{p,\mathcal{A},V_{i}}[w]=g_{i}$ in $\gw$, which are bounded away from zero in~$\omega$ and satisfy $w_{1}=w_{2}>0$ on~$\partial\omega$ in the trace sense. Then $$I_{0,g_{1},g_{2},w_{1},w_{2}}=\int_{\omega}\left(\left(\frac{g_{1}}{w_{1}^{p-1}}-\frac{g_{2}}{w_{2}^{p-1}}\right)-(V_{1}-V_{2})\right)(w_{1}^{p}-w_{2}^{p})\,\mathrm{d}x \geq 0,$$ and~$I_{0,g_{1},g_{2},w_{1},w_{2}}=0$ if and only if $\nabla w_{1}/w_{1}=\nabla w_{2}/w_{2}$. \end{lemma} \begin{proof} Letting~$h=0$, we see at once that the lemma follows from the proof of $(1)$ of Lemma \ref{elementary} and Lemma \ref{strictconvexity} (without assuming local strong convexity). \end{proof}
\begin{comment} The following lemma is a direct corollary of \cite[Theorem 3.23]{HKM} which will be used only when we prove that the principal eigenvalue is isolated. This lemma is a weak counterpart of \cite[Lemma 3.4]{Pinchover}. We do not know whether such a conclusion in our setting holds if~$\mathcal{V}$ is nontrivial. \begin{lem} Let~$\mathcal{V}=0$. For any supersolution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of~$Q'_{p,\mathcal{A},\mathcal{V}}[u]=0$ in~$\Omega$,~$v^{-}\in W^{1,p}_{{\rm loc}}(\Omega)$ is a subsolution of the same equation. \end{lem} \end{comment}
\subsection{Weak lower semicontinuity and coercivity} In this subsection, we study the weak lower semicontinuity and coercivity of certain functionals related to the functional $Q_{p,\mathcal{A},V}$. See also \cite[Section 8.2]{Evans}.
\begin{Def} {\em Let~$(X,\Vert\cdot\Vert_{X})$ be a Banach space. A functional $J:X\to\mathbb{R}\cup\{\infty\}$ is said to be {\em coercive} if $J[u] \to \infty\mbox{ as }\Vert u\Vert_{X} \to \infty.$ \noindent The functional~$J$ is said to be {\em (sequentially) weakly lower semicontinuous} if $$J[u] \leq \liminf_{k\to\infty} J[u_{k}] \qquad \mbox{ whenever }u_{k}\rightharpoonup u.$$} \end{Def}
\begin{notation} \emph{For a domain~$\omega\subseteq \Omega$,~$\mathcal{A}$ satisfying Assumption~\ref{ass8}, and $V\in M^{q}_{\rm loc} (p;\Omega)$, let $Q_{p,\mathcal{A},V}[\varphi;\omega]$ be the functional on $C^{\infty}_{c}(\omega)$ given by $$\vgf\mapsto \int_{\omega}\Big(\vert\nabla \varphi\vert_{\mathcal{A}}^{p}+V\vert \varphi\vert^{p}\Big)\,\mathrm{d}x.$$ When~$\omega=\Omega$, we write $Q_{p,\mathcal{A},V}[\varphi]\triangleq Q_{p,\mathcal{A},V}[\varphi;\Omega].$} \end{notation}
The next four theorems can be proved by standard arguments which are similar to the proof of \cite[propositions 3.6 and 3.7]{Pinchover}, and therefore their proofs are omitted. We state them as four separate theorems for the sake of clarity.
\begin{Thm}\label{ThmJ} Consider the domains~$\omega\Subset\omega'\Subset \Omega$, let $\mathcal{A}$ satisfy Assumption \ref{ass8}, and let $g,\mathcal{V}\in M^{q}(p;\omega')$, where~$\omega$ is Lipschitz. Then the functional $$\bar{J}:W^{1,p}(\omega)\rightarrow\mathbb{R}\cup\{\infty\},\quad \bar{J}[u]\triangleq Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]-\int_{\omega}g\vert u\vert\,\mathrm{d}x,$$ is weakly lower semicontinuous in~$W^{1,p}(\omega)$. \end{Thm}
\begin{Thm}\label{ThmJ1} Consider the domains~$\omega\Subset\omega'\Subset \Omega$, and let $\mathcal{A}$ satisfy Assumption \ref{ass8}, $\mathcal{V}\in M^{q}(p;\omega)$, and~$g\in L^{p'}(\omega)$.
Then the functional $$J:W^{1,p}_{0}(\omega)\rightarrow\mathbb{R}\cup\{\infty\},\quad J[u]\triangleq Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]-\int_{\omega}gu\,\mathrm{d}x,$$ is weakly lower semicontinuous in~$W^{1,p}_{0}(\omega)$. \end{Thm}
\begin{Thm} Consider the domains~$\omega\Subset\omega'\Subset \Omega$, where~$\omega$ is a Lipschitz domain. Let $\mathcal{A}$ satisfy Assumption \ref{ass8}, and let $g, \mathcal{V}\in M^{q}(p;\omega')$, where~$\mathcal{V}$ is nonnegative. For any~$f\in W^{1,p}(\omega)$, the functional~$\bar{J}$ is coercive in $$\mathbf{A}\triangleq\{u\in W^{1,p}(\omega):u-f\in W^{1,p}_{0}(\omega)\}.$$ \end{Thm}
\begin{Thm}\label{thm-coercive} Consider a domain~$\omega\Subset\Omega$, and let $\mathcal{A}$ satisfy Assumption \ref{ass8}, $\mathcal{V}\in M^{q}(p;\omega)$, and~$g\in L^{p'}(\omega)$. If for some~$\varepsilon>0$ and all~$u\in W^{1,p}_{0}(\omega)$ we have $$Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]\geq\varepsilon\Vert u\Vert^{p}_{L^{p}(\omega)},$$ then~$J$ is coercive in~$W^{1,p}_{0}(\omega)$. \end{Thm}
\subsection{Picone identity} This subsection concerns a Picone-type identity for $Q_{p,\mathcal{A},V}$. Picone's identities for the $(p,A)$-Laplacian and the $p$-Laplacian are crucial tools in \cite{Regev, Tintarev} (see also \cite{Regev1, Regev2}). In the present work, the Picone-type identity will be used to give an alternative and direct proof (without Assumption~\ref{ass2}) of the AAP type theorem (see Lemma~\ref{lem_alter}), and in Theorem~\ref{newthm}.
\begin{lem}[{cf. \cite[Lemma 2.2]{newpicone}}]\label{RL} Let~$\mathcal{A}$ satisfy Assumption \ref{ass8}, and define $$L(u,v)\triangleq |\nabla u|_{\mathcal{A}}^{p}+(p-1)\frac{u^{p}}{v^{p}}|\nabla v|^{p}_{\mathcal{A}}-p\frac{u^{p-1}}{v^{p-1}}\mathcal{A}(x,\nabla v)\cdot\nabla u,$$ and $$R(u,v) \triangleq |\nabla u|_{\mathcal{A}}^{p}- \mathcal{A}(x,\nabla v) \cdot\nabla\left(\frac{u^{p}}{v^{p-1}}\right),$$ where the functions~$u\in W^{1,p}_{{\rm loc}}(\Omega)$ and~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ are respectively nonnegative and positive with $u^{p}/v^{p-1}\in W^{1,p}_{{\rm loc}}(\Omega)$ such that the product rule for $u^{p}/v^{p-1}$ holds. Then $$L(u,v)(x)=R(u,v)(x) \qquad \mbox{for a.e.~$x\in \Omega$}.$$ Furthermore, we have~$L(u,v) \geq 0$ a.e. in $\Omega$, and $L(u,v)=0$ a.e. in $\Omega$ if and only if $u=kv$ for some constant~$k\geq 0$. \end{lem}
\begin{remark} \emph{The lemma concerns pointwise equality and inequality. Therefore, the proof in \cite[Lemma 2.2]{newpicone} applies to our more general case where $\mathcal{A}$ depends also on $x$. Hence, the proof is omitted.} \end{remark}
\begin{lem}[Picone-type identity]\label{Picone} Let $\mathcal{A}$ satisfy Assumption \ref{ass8} and $V\!\in\!M^{q}_{{\rm loc}}(p;\Omega)$. For any positive solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega$, and any nonnegative function~$u\in W^{1,p}_{c}(\Omega)$ with $u^{p}/v^{p-1}\in W^{1,p}_{c}(\Omega)$ such that the product rule for $u^{p}/v^{p-1}$ holds, we have $$Q_{p,\mathcal{A},V}[u]=\int_{\Omega} L(u,v)(x)\,\mathrm{d}x.$$ If, instead,~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ is either a positive subsolution or a positive supersolution, and all the other conditions are satisfied, then, respectively, $$Q_{p,\mathcal{A},V}[u]\leq\int_{\Omega} L(u,v)(x)\,\mathrm{d}x, \quad \mbox{or} \quad Q_{p,\mathcal{A},V}[u]\geq\int_{\Omega} L(u,v)(x)\,\mathrm{d}x.$$ \end{lem}
\begin{proof} The proof is similar to that of \cite[Proposition 3.3]{Regev}, and therefore it is omitted.
\end{proof}
\begin{lem}\label{lem_alter} Let~$\mathcal{A}$ satisfy Assumption \ref{ass8}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. \begin{enumerate} \item[$(1)$] For any positive solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[\psi]=0$ in~$\Omega$ and any nonnegative function~$u\in W^{1,p}_{c}(\Omega)$ such that~$u^{p}/v^{p-1}\in W^{1,p}_{c}(\Omega)$ and the product rule for~$u^{p}/v^{p-1}$ holds, if~$vw$ satisfies the product rule for~$w\triangleq u/v$, then $$Q_{p,\mathcal{A},V}[vw]=\int_{\Omega} \left(|v\nabla w+w\nabla v|^{p}_{\mathcal{A}}-w^{p}|\nabla v|^{p}_{\mathcal{A}}-pw^{p-1}v\mathcal{A}(x,\nabla v)\cdot\nabla w\right)\,\mathrm{d}x.$$ \item[$(2)$] For a positive subsolution or a positive supersolution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[\psi]=0$ in~$\Omega$ and any nonnegative function~$u\in W^{1,p}_{c}(\Omega)$ such that~$u^{p}/v^{p-1}\in W^{1,p}_{c}(\Omega)$ and the product rule for~$u^{p}/v^{p-1}$ holds, if~$vw$ satisfies the product rule for~$w\triangleq u/v$, then, respectively, $$Q_{p,\mathcal{A},V}[vw]\leq\int_{\Omega}\left(|v\nabla w+w\nabla v|^{p}_{\mathcal{A}}-w^{p}|\nabla v|^{p}_{\mathcal{A}}-pw^{p-1}v\mathcal{A}(x,\nabla v)\cdot\nabla w \right) \,\mathrm{d}x,$$ or $$Q_{p,\mathcal{A},V}[vw]\geq\int_{\Omega}\left(|v\nabla w+w\nabla v|^{p}_{\mathcal{A}}-w^{p}|\nabla v|^{p}_{\mathcal{A}}-pw^{p-1}v\mathcal{A}(x,\nabla v)\cdot\nabla w \right) \,\mathrm{d}x.$$ \item[$(3)$] If $v\in W^{1,p}_{{\rm loc}}(\Omega)$ is either a positive solution or a positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$, then the functional $Q_{p,\mathcal{A},V}$ is nonnegative on $W^{1,p}_{c}(\Omega)$. \end{enumerate} \end{lem}
\begin{Rem} \emph{The third part of the lemma gives an alternative proof of~$(2)\Rightarrow (1)$ and~$(3)\Rightarrow (1)$ of the AAP type theorem (Theorem \ref{thm_AAP}).} \end{Rem}
\begin{proof}[Proof of Lemma~\ref{lem_alter}] For the first two parts of the lemma, we apply the product rule directly in the final equality/inequalities of Lemma \ref{Picone}. The third part follows from the first two parts, the strict convexity of the function $|\cdot|^{p}_{\mathcal{A}}$, and an approximation argument. For details, see \cite[Theorem 5.2]{Regev} and \cite[Theorem 2.3]{Tintarev}. \end{proof}
\subsection{Principal eigenvalues in domains~$\omega\Subset\Omega$}\label{eigenvalueunique}
\begin{Def}{\em Let~$\mathcal{A}$ satisfy Assumption \ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. The \emph{generalized principal eigenvalue} of $Q'_{p,\mathcal{A},V}$ in a domain $\gw \subseteq \Omega$ is defined by $$\lambda_{1}=\lambda_{1}(Q_{p,\mathcal{A},V};\omega)\triangleq\inf_{u\in C^{\infty}_{c}(\omega) \setminus\{0\}}\frac{Q_{p,\mathcal{A},V}[u;\omega]}{\Vert u\Vert_{L^{p}(\omega)}^{p}}\,.$$} \end{Def}
\begin{Rem}\label{rem_lambda1}{\em It follows that for a domain $\gw\Subset\Omega$ we have \begin{equation}\label{eq_pev} \lambda_{1}(Q_{p,\mathcal{A},V};\omega) = \inf_{u\in W^{1,p}_{0}(\omega)\setminus\{0\}}\frac{Q_{p,\mathcal{A},V}[u;\omega]}{\Vert u\Vert_{L^{p}(\omega)}^{p}}\,. \end{equation} } \end{Rem}
\begin{lemma}\label{easylemma} Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption \ref{ass8}, and let $V\in M^{q}(p;\omega)$. All eigenvalues of \eqref{evp} are larger than or equal to $\lambda_{1}$. \end{lemma} \begin{proof} Testing \eqref{evp} with the eigenfunction itself and using \eqref{eq_pev}, we get the conclusion.
\end{proof} \begin{Def} {\em A \emph{principal eigenvalue} of \eqref{evp} is an eigenvalue with a nonnegative eigenfunction, which is called a \emph{principal eigenfunction}.} \end{Def} We first state a useful lemma. \begin{lemma}\label{functionalcv} Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. For every domain~$\omega\Subset\Omega$, if~$u_{k}\rightarrow u$ as~$k\rightarrow\infty$ in~$W^{1,p}_{0}(\omega)$, then $\displaystyle{\lim_{k\rightarrow\infty}}Q_{p,\mathcal{A},V}[u_{k}]=Q_{p,\mathcal{A},V}[u]$. \end{lemma} \begin{proof} By \cite[Lemma 5.23]{HKM}, we get~$\lim_{k\rightarrow\infty}\int_{\omega}|\nabla u_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x=\int_{\omega}|\nabla u|_{\mathcal{A}}^{p}\,\mathrm{d}x$. The elementary inequality $$|x^{p}-y^{p}|\leq p|x-y|(x^{p-1}+y^{p-1}) \qquad \forall x,y\geq0,$$ the H\"{o}lder inequality, and the Morrey-Adams theorem, imply \begin{eqnarray*} \left\vert\int_{\omega}\!V(|u_{k}|^{p}-|u|^{p})\!\,\mathrm{d}x \right\vert&\!\!\leq\! \!& \int_{\omega}\!|V|||u_{k}|^{p}-|u|^{p}|\!\,\mathrm{d}x \!\leq p\!\int_{\omega}\!|V||u_{k}-u|||u_{k}|^{p-1}+|u|^{p-1}|\!\,\mathrm{d}x\\ &\!\!\leq\!\!& C(p)\!\left(\int_{\omega}\!\!|V||u_{k}\!-\! u|^{p}\!\,\mathrm{d}x\!\!\right)^{\!1/p}\!\!\left(\int_{\omega}\!|V||u_{k}|^{p} \!+\! |V||u|^{p}\!\,\mathrm{d}x\!\right)^{\!1/p'}\! \underset{ k\rightarrow\infty}{\rightarrow 0}. \end{eqnarray*} Hence, the desired convergence follows. \end{proof} Now we prove that in every domain~$\omega\Subset\Omega$, the generalized principal eigenvalue is a principal and simple eigenvalue, whose uniqueness is proved in Corollary \ref{newuniqueness}. \begin{Thm}\label{principaleigenvalue} Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $V\in M^{q}(p;\omega)$. \begin{enumerate} \item[$(1)$] The generalized principal eigenvalue is a principal eigenvalue of the operator~$Q'_{p,\mathcal{A},\mathcal{V}}$. \item[$(2)$] The principal eigenvalue is simple, i.e., for any two eigenfunctions $u$ and~$v$ associated with the eigenvalue $\gl_1$, we have $u=cv$ for some~$c\in\mathbb{R}$. \end{enumerate} \end{Thm} \begin{proof}[Proof of Theorem \ref{principaleigenvalue}] $(1)$ Applying the Morrey-Adams theorem (Theorem~\ref{MA_thm}) with the positive number~$\delta=\alpha_{\omega}$ and the ellipticity condition \eqref{structure}, we obtain that $$\lambda_{1}\geq -C(n,p,q)\alpha_{\omega}^{-n/(pq-n)}\Vert V\Vert^{pq/(pq-n)}_{M^{q}(p;\omega)}>-\infty.$$ % For any~$\varepsilon>0$, letting~$\mathcal{V}=V-\lambda_{1}+\varepsilon$, we immediately see that for all~$u\in W^{1,p}_{0}(\omega)$, $$Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]\geq \varepsilon\Vert u\Vert_{L^{p}(\omega)}^{p}.$$ Therefore, the functional~$Q_{p,\mathcal{A},V-\lambda_{1}+\varepsilon}[\;\cdot\; ;\omega]$ is coercive and weakly lower semicontinuous in $W^{1,p}_{0}(\omega)$, and hence also in~$W^{1,p}_{0}(\omega)\cap\{\Vert u\Vert_{L^{p}(\omega)}=1\}$. Therefore, the infimum in \eqref{eq_pev} is attained in~$ W^{1,p}_{0}(\omega)\setminus\{0\}$. Let~$v\in W^{1,p}_{0}(\omega)\setminus\{0\}$ be a minimizer of \eqref{eq_pev}. By standard variational calculus techniques, we conclude that the minimizer $v\in W^{1,p}_{0}(\omega)$ satisfies the equation $Q'_{p,\mathcal{A},V}[u]=\lambda_1|u|^{p-2}u$ in the weak sense. Note that~$|v|\in W^{1,p}_{0}(\omega)$. 
In addition, almost everywhere in~$\omega$, we have~$\big|\nabla(|v|)\big|=|\nabla v|$ and~$\big|\nabla(|v|)\big|_{\mathcal{A}}=|\nabla v|_{\mathcal{A}}.$ Thus~$|v|$ is also a minimizer, and therefore, it satisfies the equation $Q'_{p,\mathcal{A},V}[u]=\lambda_1|u|^{p-2}u$ in the weak sense. So~$\lambda_{1}$ is a principal eigenvalue. The Harnack inequality and H\"{o}lder estimates guarantee that~$|v|$ is strictly positive and continuous in~$\omega$. Therefore, we may assume that $v>0$.
$(2)$ The proof is inspired by \cite[Theorem 2.1]{Regev1}. Let $v,u\in W^{1,p}_{0}(\omega)$ be, respectively, a positive principal eigenfunction and any eigenfunction associated with~$\lambda_{1}$. It suffices to show~$u=cv$ for some~$c\in\mathbb{R}$. By part $(1)$ we may assume that $u>0$ in~$\omega$. Let~$\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C^{\infty}_{c}(\omega)$ be a nonnegative sequence approximating $u$ in~$W^{1,p}_{0}(\omega)$ and a.e. in~$\omega$. Then the product rule for~$\varphi_{k}^{p}/v^{p-1}$ holds for all~$k\in\mathbb{N}$. By Lemma \ref{RL}, we get, for all~$k\in\mathbb{N}$, $$\int_{\omega} L(\varphi_{k},v)(x)\,\mathrm{d}x=\int_{\omega}|\nabla \varphi_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x-\int_{\omega}\mathcal{A}(x,\nabla v) \cdot\nabla\left(\frac{\varphi_{k}^{p}}{v^{p-1}}\right)\,\mathrm{d}x.$$ Since~$\varphi_{k}^{p}/v^{p-1}\in W^{1,p}_{c}(\omega)$, we obtain $$\int_{\omega}\mathcal{A}(x,\nabla v)\cdot\nabla\left(\frac{\varphi_{k}^{p}}{v^{p-1}}\right)\,\mathrm{d}x+\int_{\omega}(V-\lambda_{1})v^{p-1}\frac{\varphi_{k}^{p}}{v^{p-1}}\,\mathrm{d}x=0.$$ It follows that~$Q_{p,\mathcal{A},V-\lambda_{1}}[\varphi_{k};\gw] =\int_{\omega} L(\varphi_{k},v)(x)\,\mathrm{d}x.$ By Fatou's lemma and Lemma \ref{functionalcv}, we obtain \begin{eqnarray*} 0 & \leq &\int_{\omega} L(u,v)(x)\,\mathrm{d}x \leq \int_{\omega}\liminf_{k\rightarrow\infty}L(\varphi_{k},v)(x)\,\mathrm{d}x \leq \liminf_{k\rightarrow\infty}\int_{\omega}L(\varphi_{k},v)(x)\,\mathrm{d}x\\ &=&\liminf_{k\rightarrow\infty}Q_{p,\mathcal{A},V-\lambda_{1}}[\varphi_{k};\gw] =Q_{p,\mathcal{A},V-\lambda_{1}}[u;\gw]=0. \end{eqnarray*} Lemma \ref{RL} and the connectedness of $\gw$ imply that $u=cv$ in $\gw$ for some~$c>0$. \end{proof}
\subsection{Positivity of the principal eigenvalues}\label{localtheory} In this subsection, we consider positivity features of the operator $Q'_{p,\mathcal{A},V}$ in a {\em Lipschitz} domain $\gw\Subset \Omega$. In particular, we study the relationship between the validity of the generalized strong/weak maximum principles, the existence of a proper positive supersolution, the unique solvability in $W^{1,p}_{0}(\omega)$ of the nonnegative Dirichlet problem $Q'_{p,\mathcal{A},V}[u]=g \geq 0$ in $\gw$, and the positivity of the principal eigenvalue.
\begin{Def} \emph{Let $\omega$ be a bounded Lipschitz domain. A function $v\in W^{1,p}(\omega)$ is said to be \emph{nonnegative} on~$\partial\omega$ if $v^{-}\in W^{1,p}_{0}(\omega)$. A function~$v$ is said to be \emph{zero} on~$\partial\omega$ if $v\in W^{1,p}_{0}(\omega)$.} \end{Def}
\begin{Def} \emph{ Let $\omega\Subset\Omega$ be a Lipschitz domain,~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $V\in M^{q}(p;\omega)$.
\begin{itemize} \item The operator~$Q'_{p,\mathcal{A},V}$ is said to satisfy the \emph{generalized weak maximum principle in $\gw$} if every solution~$v \in W^{1,p}(\omega)$ of the equation $Q'_{p,\mathcal{A},V}[u]=g$ in $\gw$ with $0\leq g\in L^{p'}(\omega)$ and $v\geq 0$ on~$\partial\omega$ is nonnegative in~$\omega$; \item the operator~$Q'_{p,\mathcal{A},V}$ satisfies the \emph{generalized strong maximum principle in $\gw$} if any such solution $v$ is either the zero function or strictly positive in~$\omega$. \end{itemize}} \end{Def}
Under Assumption~\ref{ass2}, by Theorem \ref{complement}, all the assertions in the following theorem are in fact equivalent, even though we cannot prove this completely at this point.
\begin{Thm}\label{maximum} Let~$\omega\Subset\Omega$, where~$\omega$ is a Lipschitz domain,~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and $V\in M^{q}(p;\omega)$. Consider the following assertions: \begin{enumerate} \item[$(1)$] The operator~$Q'_{p,\mathcal{A},V}$ satisfies the generalized weak maximum principle in~$\omega$. \item[$(2)$] The operator~$Q'_{p,\mathcal{A},V}$ satisfies the generalized strong maximum principle in~$\omega$. \item[$(3)$] The principal eigenvalue $\lambda_{1} =\lambda_{1}(Q_{p,\mathcal{A},V};\omega)$ is positive. \item[$(4)$] The equation $Q'_{p,\mathcal{A},V}[u]=0$ has a proper positive supersolution in~$W^{1,p}_{0}(\omega)$. \item[$(4')$] The equation $Q'_{p,\mathcal{A},V}[u]=0$ has a proper positive supersolution in~$W^{1,p}(\omega)$. \item[$(5)$] For any nonnegative~$g\in L^{p'}(\omega)$, there exists a nonnegative solution $v\in W^{1,p}_{0}(\omega)$ of the equation~$Q'_{p,\mathcal{A},V}[u]=g$ in~$\omega$ which is either zero or positive. \end{enumerate} Then $(1)\Leftrightarrow (2)\Leftrightarrow (3)\Rightarrow (4)\Rightarrow (4')$, and~$(3)\Rightarrow (5)\Rightarrow (4).$ \medskip Furthermore, \begin{enumerate} \item[$(6)$] If Assumption~\ref{ass2} is satisfied and $\gl_1>0$, then the solution in $(5)$ is unique. \end{enumerate}
\begin{comment} Assume that$$\lambda_{1}\triangleq\inf_{u\in W^{1,p}_{0}(\omega)\setminus\{0\}}\frac{Q_{p,\mathcal{A},\mathcal{V}}[u;\omega]}{\Vert u\Vert_{L^{p}(\omega)}^{p}}>0.$$ Then for~$0\leq g\in L^{p'}(\omega)$, the equation~$Q'_{p,\mathcal{A},\mathcal{V}}[v]=g$ has a nonnegative solution. Any such a solution is either strictly positive or the zero function. \end{comment} \end{Thm}
\begin{proof} $(1)\Rightarrow (2)$ The generalized weak maximum principle implies that any solution~$v$ of $Q'_{p,\mathcal{A},V}[u]=g$ with $g\geq 0$, which is nonnegative on~$\partial\omega$, is nonnegative in~$\omega$. So, $v$ is a nonnegative supersolution of \eqref{half}. The weak Harnack inequality implies that either $v>0$ or $v=0$ in $\omega$.
$(2)\Rightarrow (3)$ Assume, to the contrary, that $\lambda_{1}\leq 0$, and let $v>0$ be a principal eigenfunction. By the homogeneity, the function $w=-v$ satisfies $Q'_{p,\mathcal{A},V}[w]=\gl_1|w|^{p-2}w$, and $w=0$ on $\partial\omega$ in the weak sense, but this contradicts the generalized strong maximum principle.
$(3)\Rightarrow (1)$ Let~$v$ be a solution of $Q'_{p,\mathcal{A},V}[u]=g$ in~$\omega$ with $g\geq 0$ and $v\geq 0$ on~$\partial\omega$. Suppose that $v^{-}\neq 0$. Testing~$v^{-}$ in the definition of the solution of $Q'_{p,\mathcal{A},V}[u] =g\geq 0$, we get $$Q_{p,\mathcal{A},V}[v^{-};\omega]=\int_{\{x\,\in\,\omega\,:\,v\,<\,0\}}gv\,\mathrm{d}x\leq 0.$$ Therefore, $\lambda_{1}\leq 0$, which contradicts the assumption.
$(3)\Rightarrow (4)$ Since $\lambda_{1}>0$, its principal eigenfunction is a proper positive supersolution of \eqref{half} in~$\omega$. $(4)\Rightarrow (4')$ This implication follows from~$W^{1,p}_{0}(\omega)\subseteq W^{1,p}(\omega)$. $(3)\Rightarrow (5)$ By Theorems \ref{ThmJ1} and \ref{thm-coercive}, the functional $J[u]= Q_{p,\mathcal{A},V}[u;\omega]- p\int_{\omega}gu\,\mathrm{d}x $ is weakly lower semicontinuous and coercive in~$W^{1,p}_{0}(\omega)$ for $g\in L^{p'}(\gw)$. Therefore, the functional~$J$ has a minimizer in~$W^{1,p}_{0}(\omega)$ (see for example \cite[Theorem 1.2]{Struwe}). Consequently, the corresponding equation~$Q'_{p,\mathcal{A},V}[u]=g$ has a weak solution $v_{1}\in W^{1,p}_{0}(\omega)$. Note that~$(3)\Rightarrow (2)$. Therefore, the solution~$v_{1}$ is either zero or positive in~$\omega$. $(5)\Rightarrow (4)$ Use $(5)$ with~$g = 1$ to obtain a proper positive supersolution. $(6)$ Assume now that Assumption~\ref{ass2} is satisfied, and let us prove that $v_1=v$ is unique. If~$v_{1}=0$, then~$g=0$. Hence, $Q_{p,\mathcal{A},V}[v;\omega]=0$, but this contradicts the assumption that $\gl_1 >0$. Assume now that~$v_1>0$. Let~$v_{2}\in W^{1,p}_{0}(\omega)$ be any other positive solution. By part $(1)$ of Lemma~\ref{elementary} with~$g_{i}=g,V_{i}=V$ and $i=1,2$, we conclude \begin{equation*} \int_{\omega}V\!\!\left(\left(\frac{v_{1}}{v_{1,h}}\right)^{p-1}\!\!\! - \!\!\left(\frac{v_{2}}{v_{2,h}}\right)^{p-1}\!\right)\!\!\left(v_{1,h}^{p}-v_{2,h}^{p}\right)\!\!\,\mathrm{d}x \leq \! \int_{\omega} \! g\left(\!\frac{1}{v_{1,h}^{p-1}}-\frac{1}{v_{2,h}^{p-1}} \!\right)\!\! \left(v_{1,h}^{p}-v_{2,h}^{p}\right)\!\!\,\mathrm{d}x \leq 0. \end{equation*} We note that $$\lim_{h\rightarrow 0}V\left(\left(\frac{v_{1}}{v_{1,h}}\right)^{p-1}-\left(\frac{v_{2}}{v_{2,h}}\right)^{p-1}\right)\left(v_{1,h}^{p}-v_{2,h}^{p}\right)=0,$$ and \begin{eqnarray*} \left\vert V\left(\left(\frac{v_{1}}{v_{1,h}}\right)^{\!p-1}\!\!-\left(\frac{v_{2}}{v_{2,h}}\right)^{\!p-1}\right)\left(v_{1,h}^{p}-v_{2,h}^{p}\right)\right\vert \leq 2|V|\left(\left(v_{1}+1\right)^{p}+\left(v_{2}+1\right)^{p}\right) \in L^{1}(\omega). \end{eqnarray*} It follows that$$\lim_{h\rightarrow 0}\int_{\omega}g\left(\frac{1}{v_{1,h}^{p-1}}-\frac{1}{v_{2,h}^{p-1}}\right)\left(v_{1,h}^{p}-v_{2,h}^{p}\right)\,\mathrm{d}x=0.$$ By Fatou's lemma, and Lemma~\ref{elementary}, we infer that $L_{0,v_1,v_2}\!=\!0$. Hence, $v_{2}\!=\!v_{1}$ in~$\omega$. \begin{comment} Moreover,~$v$ is a nonnegative supersolution of the equation~\ref{half}. Assume that~$v(x_{0})>0$ and~$v(x_{1})=0$. Let~$\omega'$ containing~$x_{1}$ and~$x_{0}$ be a domain compactly included in~$\omega$. By virtue of the Harnack inequality($p\leq n$) or the weak Harnack inequality($p>n$), it must be that~$v\equiv0$ in~$\omega'$. Contradiction! So~$v$ is either strictly positive or the zero function. \end{comment} \end{proof} \subsection{Weak comparison principle}\label{WCP} \subsubsection{Super/sub-solution technique} The following two theorems can be obtained by similar arguments to those of \cite[Lemma 5.1 and Proposition 5.2]{Pinchover}. We first state a weak comparison principle under the assumption that the potential is nonnegative. \begin{lem}\label{5lemma} Let $\omega\Subset\Omega$ be a Lipschitz domain, $\mathcal{A}$ satisfy Assumption \ref{ass8}, $g\in M^{q}(p;\omega)$, and~$\mathcal{V}\in M^{q}(p;\omega)$, where $\mathcal{V}$ is nonnegative. 
For any subsolution~$v_{1}$ and any supersolution~$v_{2}$ of the equation $Q'_{p,\mathcal{A},\mathcal{V}}[u]=g$, in $\omega$ with $v_{1},v_{2}\in W^{1,p}(\omega)$ satisfying $\left(v_{2}-v_{1}\right)^{-}\in W^{1,p}_{0}(\omega)$, we have $v_{1}\leq v_{2}\quad \mbox{in}~\omega$. \end{lem} \begin{proof} Testing the integral inequalities for the subsolution~$v_{1}$ and the supersolution~$v_{2}$ with ~$\left(v_{2}-v_{1}\right)^{-}$ and then subtracting one from the other, we arrive at \begin{equation*} \int_{\omega}\left(\mathcal{A}(x,\nabla v_{1})-\mathcal{A}(x,\nabla v_{2})\right)\cdot\nabla\left((v_{2}-v_{1})^-\right)\,\mathrm{d}x +\int_{\omega}\mathcal{V}v_{1,2}\left(v_{2}-v_{1}\right)^{-}\,\mathrm{d}x\leq 0, \end{equation*} where~$v_{1,2}\triangleq|v_{1}|^{p-2}v_{1}-|v_{2}|^{p-2}v_{2}$. It follows that \begin{equation*} \int_{\{v_{2}<v_{1}\}}\left(\mathcal{A}(x,\nabla v_{1})-\mathcal{A}(x,\nabla v_{2})\right)\cdot\left(\nabla v_{1}-\nabla v_{2}\right)\,\mathrm{d}x\\ +\int_{\{v_{2}<v_{1}\}}\mathcal{V}v_{1,2}\left(v_{1}-v_{2}\right)\,\mathrm{d}x\leq 0. \end{equation*} Since the above two terms are nonnegative, the two integrals are zero. Hence, $\nabla (v_{2} - v_{1})^-\!=\!0$ a.e. in $\omega$. Therefore, the negative part of $v_{2} -v_{1}$ is a constant a.e. in $\omega$. Hence, Poincar\'{e}'s inequality implies $(v_{2}\! - \! v_{1})^{-} \! = \! 0$ a.e. in $\omega$. Namely, $v_{1}\leq v_{2}$ a.e. in $\omega$. \end{proof} In order to establish a weak comparison principle when $V$ is not necessarily nonnegative, we need the following result which is of independent interest. \begin{Thm}[Super/sub-solution technique]\label{5proposition} Let~$\omega\Subset\Omega$ be a Lipschitz domain, let~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $g,V\in M^{q}(p;\omega)$, where~$g$ is nonnegative a.e.~in~$\omega$. We assume that~$f,\varphi,\psi\in W^{1,p}(\omega)\cap C(\bar{\omega}),$ where~$f\geq 0$ a.e. in~$\omega$, and \[\begin{cases} % Q'_{p,\mathcal{A},V}[\psi]\leq g\leq Q'_{p,\mathcal{A},V}[\varphi]&\text{in~$\omega$ in the weak sense,}\\ % \psi\leq f\leq \varphi& \text{on~$\partial\omega$,}\\ 0\leq \psi\leq\varphi&\text{in~$\omega$.} \end{cases}\] Then there exists a nonnegative function $u\in W^{1,p}(\omega)\cap C(\bar{\omega})$ satisfying \[\begin{cases} % Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\ % u=f& \text{on~$\partial\omega$,} \end{cases}\] and $\psi\leq u\leq \varphi$ in~$\omega$. Moreover, if $f>0$ a.e. on~$\partial\omega$, then the above boundary value problem has a unique positive solution. \end{Thm} \begin{proof} Set$$\mathcal{K}\triangleq\left\{v\in W^{1,p}(\omega)\cap C(\bar{\omega}):0\leq \psi\leq v\leq \varphi \mbox{ in } \omega\right\}.$$ For every~$x\in\omega$ and~$v\in\mathcal{K}$, let $G(x,v)\triangleq g(x)+2V^{-}(x)v(x)^{p-1}$. Then~$G\in M^{q}(p;\omega)$ and~$G\geq 0$ a.e. in~$\omega$. % Let the functionals $J$,$\bar{J}:W^{1,p}(\omega)\rightarrow \mathbb{R}\cup\{\infty\}$, be as in Theorems \ref{ThmJ} and \ref{ThmJ1} with~$|V|$ and~$G(x,v)$ as the potential and the right hand side, respectively. Choose a sequence~$\{u_{k}\}_{k\in\mathbb{N}}$ in $$\mathbf{A}\triangleq\{u\in W^{1,p}(\omega):u=f \mbox{ on }\partial\omega\},$$ satisfying $$J[u_{k}]\downarrow m\triangleq \inf_{u\in\mathbf{A}}J[u].$$ Because~$f\geq 0$,~$\{|u_{k}|\}_{k\in\mathbb{N}}\subseteq \mathbf{A},$ we infer $$m\leq J[|u_{k}|]=\bar{J}[u_{k}]\leq J[u_{k}],$$ where the last inequality is on account of~$G(x,v)\geq 0$ a.e. in~$\omega$. Then~$\lim_{k\rightarrow\infty}\bar{J}[u_{k}]=m$. 
It is also immediate that~$\inf_{u\in\mathbf{A}}\bar{J}[u]=m$. On the other hand,~$\bar{J}$ is weakly lower semicontinuous and coercive. Noting that~$\mathbf{A}$ is weakly closed, it follows from \cite[Theorem 1.2]{Struwe} that~$m$ is attained by a nonnegative function~$u_0\in\mathbf{A}$, that is,~$\bar{J}[u_0]=m$. In addition,~$J[u_0]=\bar{J}[u_0]=m$. Then~$u_0$ is a solution of \[\begin{cases}\label{problem1} % Q'_{p,\mathcal{A},|V|}[u]=G(x,v)&\text{in~$\omega$,}\\ % u=f& \text{in the trace sense on~$\partial\omega$,} \end{cases}\] and for any~$v\in\mathcal{K}$, let~$T(v)$ be a solution of this Dirichlet problem. % % Then the map $$T:\mathcal{K}\longrightarrow W^{1,p}(\omega),$$ is increasing. Indeed, pick any~$v_{1}\leq v_{2}$ in~$\mathcal{K}$. Because~$G(x,v_{1})\leq G(x,v_{2})$, we infer that~$T(v_{1})$ and $T(v_{2})$ are respectively a solution and a supersolution of$$Q'_{p,\mathcal{A},|V|}[u]=G(x,v_{1}).$$ On~$\partial\omega$, we have~$T(v_{1})=f=T(v_{2})$. By Lemma \ref{5lemma}, we obtain~$T(v_{1})\leq T(v_{2})$ in~$\omega$. Consider any subsolution~$v\in W^{1,p}(\omega)\cap C(\bar{\omega})$ of the boundary value problem \[\begin{cases} % Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\ % u=f& \text{on~$\partial\omega$.} \end{cases}\] Then in the weak sense in~$\omega$,$$Q'_{p,\mathcal{A},|V|}[v]=Q'_{p,\mathcal{A},V}[v]+G(x,v)-g(x)\leq G(x,v).$$ Furthermore, $T(v)$ is a solution of \[\begin{cases} % Q'_{p,\mathcal{A},|V|}[u]=G(x,v)&\text{in~$\omega$,}\\ % u=f& \text{in the trace sense on~$\partial\omega$.} \end{cases}\] Invoking Lemma \ref{5lemma}, we get~$v\leq T(v)$ a.e. in~$\omega$. Furthermore, $$Q'_{p,\mathcal{A},V}[T(v)]=g+2V^{-}\left(|v|^{p-2}v-|T(v)|^{p-2}T(v)\right)\leq g\; \mbox{ in } \omega.$$ Analogously, for any supersolution $v\in W^{1,p}(\omega)\cap C(\bar{\omega})$ of the boundary value problem \[\begin{cases} Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\ u=f& \text{on~$\partial\omega$,}\\ \end{cases}\] ~$T(v)$ is a supersolution of the same problem with~$v\geq T(v)$ a.e. in~$\omega$. We define two sequences by recursion: for any~$k\in\mathbb{N}$, $$\underline{u}_{0}\triangleq \psi,\; \underline{u}_{k}\triangleq T(\underline{u}_{k-1})=T^{(k)}(\psi), \quad \mbox{ and } \; \bar{u}_{0} \triangleq \varphi,\; \bar{u}_{k}\triangleq T(\bar{u}_{k-1})=T^{(k)}(\varphi).$$ Then the monotone sequences~$\{\underline{u}_{k}\}_{k\in\mathbb{N}}$ and~$\{\bar{u}_{k}\}_{k\in\mathbb{N}}$ converge to $\underline{u}$ and~$\bar{u}$ a.e. in~$\omega$, respectively. Using \cite[Theorem 1.9]{Lieb}, we conclude that the convergence is also in~$L^{p}(\omega)$. Arguing as in the Harnack convergence principle, we may assert that~$\underline{u}$ and~$\bar{u}$ are both fixed points of~$T$ and solutions of \[\begin{cases} % Q'_{p,\mathcal{A},V}[u]=g&\text{in~$\omega$,}\\ % u=f& \text{on~$\partial\omega$,} \end{cases}\] with~$\psi\leq\underline{u}\leq \bar{u}\leq\varphi$ in~$\omega$. The uniqueness is derived from Lemma \ref{newDiaz}. \end{proof} % % % % % % % % % % \subsubsection{Weak comparison principle} The following weak comparison principle extends \cite[Theorem 5.3]{Pinchover} to our setting and has a similar proof to \cite[Theorem 5.3]{Pinchover}, and therefore it is omitted. \begin{Thm}[Weak comparison principle]\label{thm_wcp} Let~$\omega\Subset\Omega$ be a Lipschitz domain, let~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $g,V\in M^{q}(p;\omega)$ with~$g\geq 0$ a.e. in~$\omega$. Assume that~$\lambda_{1}>0$. 
If~$u_{2}\in W^{1,p}(\omega)\cap C(\bar{\omega})$ satisfies \[\begin{cases} Q'_{p,\mathcal{A},V}[u_{2}]=g&\text{in~$\omega$,}\\ u_{2}>0& \text{on~$\partial\omega$,}\\ \end{cases}\] and $u_{1}\in W^{1,p}(\omega)\cap C(\bar{\omega})$ satisfies \[\begin{cases} Q'_{p,\mathcal{A},V}[u_{1}]\leq Q'_{p,\mathcal{A},V}[u_{2}]&\text{in~$\omega$,}\\ u_{1}\leq u_{2}& \text{on~$\partial\omega$,}\\ \end{cases}\] then $u_{1}\leq u_{2}$ in $\omega$. \end{Thm} % % \section{Agmon-Allegretto-Piepenbrink (AAP) theorem}\label{AP} In this section, we establish an Agmon-Allegretto-Piepenbrink (in short, AAP) type theorem. See \cite{Agmon, Allegretto1974}, \cite{Pinchover}, and \cite{Keller, AAPform}, respectively, for the counterparts, in the linear case, the quasilinear case, and the cases of graphs and certain Dirichlet forms. \subsection{Divergence-type equation} We introduce a divergence-type equation of the first order. For a related study, see \cite{firstreference}. \begin{Def}\label{def2} {\em Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. A vector field $S\in L^{p}_{\mathrm{loc}}(\Omega;\mathbb{R}^{n})$ is a {\em solution} of the first order partial differential equation \begin{equation}\label{first} -\dive{\mathcal{A}(x,S)}+(1-p)\mathcal{A}(x,S)\cdot S+V=0 \qquad \mbox{ in } \Omega, \end{equation} if $$\int_{\Omega}\mathcal{A}(x,S)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}(\mathcal{A}(x,S)\cdot S)\psi\,\mathrm{d}x + \int_{\Omega}V\psi\,\mathrm{d}x=0,$$ for every function~$\psi\in C_{c}^{\infty}(\Omega)$, and a {\em supersolution} of the same equation if$$\int_{\Omega}\mathcal{A}(x,S)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}(\mathcal{A}(x,S)\cdot S)\psi\,\mathrm{d}x+ \int_{\Omega}V\psi\,\mathrm{d}x\geq 0,$$ for every nonnegative function~$\psi\in C_{c}^{\infty}(\Omega)$. } \end{Def} We state a straightforward assertion without proof. \begin{assertion} All the integrals in Definition~\ref{def2} are finite. Moreover, for any solution $S$ defined as above, the corresponding integral equality holds for all bounded functions in $W^{1,p}_{c}(\Omega)$, and for any supersolution~$S$, the corresponding integral inequality holds for all nonnegative bounded functions in $W^{1,p}_{c}(\Omega)$. \end{assertion} \begin{comment} \begin{proof} For any bounded function~$\psi\in W^{1,p}_{c}(\Omega)$, we can find a sequence of uniformly bounded functions~$\{\psi_{k}\}_{k\in\mathbb{N}}\subseteq C_{c}^{\infty}(\Omega)$ approximating~$\psi$ in~$W^{1,p}(\Omega)$. For any nonnegative bounded function~$\psi\in W^{1,p}_{c}(\Omega)$, we can find a sequence of nonnegative uniformly bounded functions~$\{\psi_{k}\}_{k\in\mathbb{N}}\subseteq C_{c}^{\infty}(\Omega)$ approximating~$\psi$ in~$W^{1,p}(\Omega)$. By \cite[Page 250, Theorem 1]{Evans} and \cite[Page 630, Theorem 6]{Evans}, we may assume that all the support sets of~$\psi_{k}$ and~$\psi$ are compactly included in a Lipschitz domain~$\omega$ and~$|\psi|\leq M, |\psi_{k}|\leq M$ for all~$k\in \mathbb{N}$ and some~$M>0$. 
Then \begin{eqnarray*} \int_{\omega}\!|\mathcal{A}(x,S)\cdot(\nabla \psi_{k}-\nabla \psi)|\!\,\mathrm{d}x &\!\!\leq\!\!&\beta_{\omega} \int_{\omega}|S|^{p-1}|\nabla \psi_{k}-\nabla \psi|_{A}\,\mathrm{d}x\\ &\!\!\leq\!\!&\beta_{\omega}\!\left(\!\!\int_{\omega}|S|^{p}\!\,\mathrm{d}x\!\!\right)^{\!1/p'} \!\!\left(\int_{\omega}|\nabla \psi_{k}-\nabla \psi|^{p}\!\,\mathrm{d}x\!\!\right)^{\!1/p} \!\!\rightarrow 0\mbox{ as } k\to \infty, \end{eqnarray*} and $$ \int_{\omega}|V|| \psi_{k}- \psi|\,\mathrm{d}x \leq \left(\int_{\omega}|V||\psi_{k}- \psi|^{p}\,\mathrm{d}x\right)^{1/p}\left(\int_{\omega}|V|\,\mathrm{d}x\right)^{1/p'}\rightarrow 0\mbox{ as } k\to \infty. $$ For any~$m>0$, we have~$\lim_{k\rightarrow \infty}\left|\left\{x\in\omega:|\psi_{k}(x)-\psi(x)|>m\right\}\right|=0,$ and hence, \begin{eqnarray*} &\!\!\!\!&\int_{\omega}|\psi_{k}- \psi|\mathcal{A}(x,S)\cdot S\,\mathrm{d}x \leq \beta_{\omega}\int_{\omega}|S|^{p}| \psi_{k}- \psi|\,\mathrm{d}x\\ &\!\!=\!\!&\beta_{\omega}\int_{\{x\in\omega:|\psi_{k}(x)-\psi(x)|>m\}}|S|^{p}| \psi_{k}- \psi|\,\mathrm{d}x+\beta_{\omega}\int_{\{x\in\omega:|\psi_{k}(x)-\psi(x)|\leq m\}}|S|^{p}| \psi_{k}- \psi|\,\mathrm{d}x\\ &\!\!\leq\!\!& 2M\beta_{\omega}\int_{\{x\in\omega:|\psi_{k}(x)-\psi(x)|>m\}}|S|^{p}\,\mathrm{d}x+m\beta_{\omega}\int_{\omega}|S|^{p}\,\mathrm{d}x \rightarrow 0\;\mbox{ as } k\to \infty, m\rightarrow 0. \qedhere \end{eqnarray*} \end{proof} \end{comment}
\subsection{AAP type theorem} Following the approach in \cite{Pinchover}, we establish the AAP type theorem. In other words, we prove that the nonnegativity of the functional~$Q_{p,\mathcal{A},V}$ on $C_c^{\infty}(\Omega)$ is equivalent to the existence of a positive solution or positive supersolution of the equation~$Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega$. We also obtain certain other conclusions involving the first-order equation \eqref{first} defined above. Recall that for every~$\vgf \in C_c^{\infty}(\Omega)$, $$Q_{p,\mathcal{A},V}[\vgf]=\int_{\Omega}\left(\vert\nabla \vgf\vert_{\mathcal{A}}^{p}+V\vert \vgf\vert^{p}\right)\,\mathrm{d}x.$$ The functional~$Q_{p,\mathcal{A},V}$ is said to be {\em nonnegative} in $\Omega$ if $Q_{p,\mathcal{A},V}[\varphi]\geq 0$ for all~$\varphi\in C^{\infty}_{c}(\Omega)$.
\begin{theorem}[AAP type theorem]\label{thm_AAP} Let the operator~$\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega).$ Then the following assertions are equivalent: \begin{enumerate} \item[$(1)$] the functional~$Q_{p,\mathcal{A},V}$ is nonnegative in $\Omega$; \item[$(2)$] the equation~$Q'_{p,\mathcal{A},V}[u]= 0$ admits a positive solution~$v\in W^{1,p}_{{\rm loc}}(\Omega)$; \item[$(3)$] the equation~$Q'_{p,\mathcal{A},V}[u]= 0$ admits a positive supersolution~$\tilde{v}\in W^{1,p}_{{\rm loc}}(\Omega)$; \item[$(4)$] the first-order equation \eqref{first} admits a solution~$S\in L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$; \item[$(5)$] the first-order equation \eqref{first} admits a supersolution $\tilde{S}\in L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$. \end{enumerate} \end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm_AAP}] The proof of the theorem is similar to that of \cite[Theorem 4.3]{Pinchover}, but the arguments for the implications~$(2)\Rightarrow (4)$ and~$(3)\Rightarrow (5)$ are simpler. It suffices to show~$(1)\Rightarrow (2)\Rightarrow (j)\Rightarrow (5)$, where~$j=3,4$, $(3)\Rightarrow (1)$, and~$(5)\Rightarrow (1)$.
$(1)\Rightarrow (2)$ Fix a point~$x_{0}\in \Omega$ and let~$\{\omega_{i}\}_{i\in \mathbb{N}}$ be a Lipschitz exhaustion of $\Omega$ such that~$x_{0}\in \omega_{1}$. Assertion (1) yields, for~$i\in \mathbb{N}$, $$\lambda_{1}\big(Q_{p,\mathcal{A},V+1/i};\omega_{i}\big)\triangleq\inf_{\substack{u\in W^{1,p}_{0}(\omega_{i})\setminus\{0\}}}\frac{Q_{p,\mathcal{A},V+1/i}[u;\omega_{i}]}{\Vert u\Vert^{p}_{L^{p}(\omega_{i})}}\geq \frac{1}{i},$$ which, combined with Theorem \ref{maximum}, gives a positive solution~$v_{i}\in W^{1,p}_{0}(\omega_{i})$ of the problem~$Q'_{p,\mathcal{A},V+1/i}[u]=f_{i}$ in~$\omega_{i}$ with~$u=0$ on~$\partial\omega_{i}$, where~$f_{i}\in C^{\infty}_{c}(\omega_{i}\setminus\bar{\omega}_{i-1})\setminus\{0\}, i\geq 2,$ are nonnegative, and $$Q_{p,\mathcal{A},V+1/i}[u;\omega_{i}]\triangleq\int_{\omega_{i}}(\vert\nabla u\vert_{\mathcal{A}}^{p}+(V+1/i)\vert u\vert^{p})\,\mathrm{d}x.$$ Setting~$\omega'_{i}=\omega_{i-1}$, we get for all~$u\in W^{1,p}_{c}(\omega'_{i})$, $$\int_{\omega_{i}'}\mathcal{A}(x,\nabla v_{i})\cdot \nabla u\,\mathrm{d}x+\int_{\omega_{i}'}\Big(V+\frac{1}{i}\Big)v_{i}^{p-1}u\,\mathrm{d}x=0.$$ Normalize~$f_{i}$ so that~$v_{i}(x_{0})=1$ for all~$i\geq 2$. Applying the Harnack convergence principle with~$\mathcal{V}_{i}\triangleq V+1/i$, we get the second assertion.
$(2)\Rightarrow (3)$ We may choose~$\tilde{v}=v$.
$(3)\Rightarrow (1)$ Let~$\tilde{v}$ be a positive supersolution of~$Q'_{p,\mathcal{A},V}[u]=0$ and~$T\triangleq -|\nabla \tilde{v}/\tilde{v}|_{\mathcal{A}}^{p-2}.$ For any~$\psi\in C^{\infty}_{c}(\Omega)$, picking~$|\psi|^{p}\tilde{v}^{1-p}\in W^{1,p}_{c}(\Omega)$ as a test function, we obtain: $$(p-1)\int_{\Omega}|T|_{\mathcal{A}}^{p'}|\psi|^{p}\,\mathrm{d}x\leq p\int_{\Omega}|T|_{\mathcal{A}}|\psi|^{p-1}|\nabla \psi|_{\mathcal{A}}\,\mathrm{d}x+\int_{\Omega}V|\psi|^{p}\,\mathrm{d}x.$$ Then Young's inequality~$pab\leq (p-1)a^{p'}+b^{p}$ with~$a=|T|_{\mathcal{A}}|\psi|^{p-1}$ and~$b=|\nabla \psi|_{\mathcal{A}}$ yields the first assertion. For an alternative proof, see Lemma \ref{lem_alter}.
$(2)\Rightarrow (4)$ Let~$v$ be a positive solution of~$Q'_{p,\mathcal{A},V}[u]= 0$. Then~$1/v\in L^{\infty}_{{\rm loc}}(\Omega)$ by the weak Harnack inequality (or by the Harnack inequality if~$p>n$). Let $S\triangleq \nabla v/v$. Because~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ and~$1/v\in L^{\infty}_{{\rm loc}}(\Omega)$, it follows that~$S\in L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$.
\begin{comment} Let~$u\in C^{\infty}_{c}(\Omega)$. Using~$|u|^{p}v^{1-p}\in W^{1,p}_{c}(\Omega)$ as a test function in the definition of the equation~$Q'_{p,\mathcal{A},V}[w]= 0$ with~$v$ as a (super)solution, we get $$ \int_{\Omega}\mathcal{A}(x,\nabla v)\cdot\nabla\big(|u|^{p}v^{1-p}\big)\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v\cdot|u|^{p}v^{1-p}\,\mathrm{d}x\geq 0.
$$ Noting that~$\nabla\big(|u|^{p}v^{1-p}\big)=p|u|^{p-1}v^{1-p}\nabla |u|+(1-p)|u|^{p}v^{-p}\nabla v$, we may deduce that \begin{multline} \int_{\Omega}\mathcal{A}(x,\nabla v)\cdot\big(p|u|^{p-1}v^{1-p}\nabla |u|\big)\,\mathrm{d}x+\int_{\Omega}V|u|^{p}\,\mathrm{d}x\\ \geq (p-1)\int_{\Omega}\big(\mathcal{A}(x,\nabla v)\cdot\nabla v\big)|u|^{p}v^{-p}\,\mathrm{d}x\\ =(p-1)\int_{\Omega}\Bigg(\mathcal{A}\Big(x,\frac{\nabla v}{v}\Big)\cdot \frac{\nabla v}{v}\Bigg)|u|^{p}\,\mathrm{d}x\\ \geq(p-1)\alpha\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x \end{multline} Moreover, \begin{multline} \int_{\Omega}\mathcal{A}(x,\nabla v)\cdot\Big(p|u|^{p-1}v^{1-p}\nabla |u|\Big)\,\mathrm{d}x\\ =p\int_{\Omega}|u|^{p-1}\mathcal{A}\Big(x,\frac{\nabla v}{v}\Big)\cdot \nabla|u|\,\mathrm{d}x\\ \leq p\beta\int_{\Omega} |u|^{p-1}|S|^{p-1}|\nabla u|\,\mathrm{d}x. \end{multline} Then$$(p-1)\alpha\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x\leq p\beta\int_{\Omega} |u|^{p-1}|S|^{p-1}|\nabla u|\,\mathrm{d}x+\int_{\Omega}V|u|^{p}\,\mathrm{d}x.$$ Let~$\eta>0, a,b\geq 0$. $$ \frac{\eta a^{p'}}{p}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p} =\frac{\eta}{p-1}\frac{a^{p'}}{\frac{p}{p-1}}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p} =\frac{\eta}{p-1}\frac{a^{p'}}{p'}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p}. $$ It follows that$$ab=\Big(\frac{\eta}{p-1}\Big)^{\frac{p-1}{p}}a\Big(\frac{p-1}{\eta}\Big)^{\frac{p-1}{p}}b\leq \frac{\eta a^{p'}}{p}+\Big(\frac{p-1}{\eta}\Big)^{p-1}\frac{b^{p}}{p}.$$ Applying Young's inequality$$pab\leq \eta a^{p'}+\Big(\frac{p-1}{\eta}\Big)^{p-1}b^{p},$$ where~$\eta\in \big(0,(p-1)\alpha\big), a=|u|^{p-1}|S|^{p-1}$, and ~$b=\beta|\nabla u|$, we see at once that \begin{multline} \big((p-1)\alpha-\eta\big)\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x\leq \beta^{p}\Big(\frac{p-1}{\eta}\Big)^{p-1}\int_{\Omega}|\nabla u|^{p}\,\mathrm{d}x+\int_{\Omega}|V||u|^{p}\,\mathrm{d}x \end{multline} For any~$\omega\Subset \Omega$, choose~$u\in C^{\infty}_{c}(\Omega)$ such that~$u|_{\omega}\equiv 1$ Then $$ \big((p-1)\alpha-\eta\big)\int_{\Omega}|S|^{p}|u|^{p}\,\mathrm{d}x\geq \big((p-1)\alpha-\eta\big)\int_{\omega}|S|^{p}\,\mathrm{d}x. $$ \end{comment} We claim that~$S$ is a solution of the equation \eqref{first}. For any~$\psi\in C^{\infty}_{c}(\Omega)$, we employ~$\psi v^{1-p}\in W^{1,p}_{c}(\Omega)$, with$$\nabla\big(\psi v^{1-p}\big)=v^{1-p}\nabla \psi+(1-p)\psi v^{-p}\nabla v,$$ as a test function in the definition of the equation~$Q'_{p,\mathcal{A},V}[w]= 0$ with~$v$ as a solution. \begin{eqnarray*} &&\int_{\Omega}\mathcal{A}(x,\nabla v)\!\cdot \! v^{1-p}\nabla \psi \!\,\mathrm{d}x +\int_{\Omega}\mathcal{A}(x,\nabla v) \! \cdot \! (1-p)\psi v^{-p}\nabla v\,\mathrm{d}x+\int_{\Omega}V|v|^{p-2}v\psi v^{1-p}\,\mathrm{d}x\\ &=&\int_{\Omega}\mathcal{A}\left(x,\frac{\nabla v}{v}\right)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}\psi\mathcal{A}\left(x,\frac{\nabla v}{v}\right)\cdot\frac{\nabla v}{v}\,\mathrm{d}x+\int_{\Omega}V\psi\,\mathrm{d}x\\ &=&\int_{\Omega}\mathcal{A}(x,S)\cdot\nabla \psi\,\mathrm{d}x+(1-p)\int_{\Omega}\psi\mathcal{A}(x,S)\cdot S\,\mathrm{d}x+\int_{\Omega}V\psi\,\mathrm{d}x=0. \end{eqnarray*} $(3)\Rightarrow (5)$ For a positive supersolution~$\tilde{v}$ of~$Q'_{p,\mathcal{A},V}[u]= 0$ , we adopt the same argument as above with~$S$ replaced by~$\tilde{S}\triangleq \nabla \tilde{v}/\tilde{v}$, except using nonnegative test functions~$\psi\in C^{\infty}_{c}(\Omega)$ and in the last step. $(4)\Rightarrow (5)$ We may choose~$ \tilde{S}=S$. 
$(5)\Rightarrow (1)$ For any~$\psi\in C_{0}^{\infty}(\Omega)$, we get \begin{eqnarray} \int_{\Omega}\!\!\mathcal{A}(x,\tilde{S})\! \cdot \! \nabla |\psi|^{p}\!\!\,\mathrm{d}x &\!\!=\!\!& p\!\int_{\Omega}\!|\psi|^{p-1}\mathcal{A}(x,\tilde{S}) \!\cdot \! \nabla |\psi| \!\,\mathrm{d}x\\ &\!\!\leq \!\!& p\!\int_{\Omega} \! |\psi|^{p-1}|\tilde{S}|_{\mathcal{A}}^{p-1}|\nabla \psi|_{\mathcal{A}} \!\,\mathrm{d}x\\ &\!\leq \!& (p\!-\! 1)\int_{\Omega}\!\!|\psi|^{p}|\tilde{S}|_{\mathcal{A}}^{p}\!\,\mathrm{d}x \!+ \! \int_{\Omega}\!|\nabla \psi|_{\mathcal{A}}^{p} \! \,\mathrm{d}x, \label{eqS} \end{eqnarray} where the first inequality is derived from the generalized H\"older inequality (Lemma \ref{ass1}), and in the last step, Young's inequality $pab\leq (p-1)a^{p'}+b^{p}$ is applied with~$a=|\psi|^{p-1}|\tilde{S}|_{\mathcal{A}}^{p-1}$ and~$b=|\nabla \psi|_{\mathcal{A}}$. Because $\tilde{S}$ is a supersolution of \eqref{first}, we have $$\int_{\Omega}\mathcal{A}(x,\tilde{S})\cdot\nabla |\psi|^{p}\,\mathrm{d}x+(1-p)\int_{\Omega}|\tilde{S}|_{\mathcal{A}}^{p}|\psi|^{p}\,\mathrm{d}x+ \int_{\Omega}V|\psi|^{p}\,\mathrm{d}x\geq 0, $$ which together with \eqref{eqS} implies $Q_{p,\mathcal{A},V}[\psi]\geq 0$ for all $\psi\in C_{0}^{\infty}(\Omega)$. \end{proof}
\begin{corollary}\label{newuniqueness} Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption~\ref{ass8}, and let~$V\in M^{q}(p;\omega)$. Then the principal eigenvalue is unique. \end{corollary} \begin{proof} Let~$\lambda$ be any eigenvalue with an eigenfunction~$v_{\lambda}\geq 0$. By Harnack's inequality, the eigenfunction~$v_{\lambda}$ is positive in~$\omega$. Then~$v_{\lambda}$ is a positive solution of~$Q'_{p,\mathcal{A},V-\gl}[u]=0$. By the AAP type theorem, the functional~$Q_{p,\mathcal{A},V-\gl}$ is nonnegative in~$\omega$ and hence by the definition of~$\gl_{1}$ and Lemma \ref{easylemma}, we get $\gl_{1}\leq\gl\leq \gl_{1}$. Thus, $\lambda_{1}=\gl$. \end{proof}
\section{Criticality theory}\label{criticality} In this section we introduce the notions of criticality and subcriticality and establish fundamental results in criticality theory for the functional $Q_{p,\mathcal{A},V}$ that generalize the counterpart results in \cite[Section 4B]{Pinchover} and \cite[Theorem~6.8]{Regev}.
\subsection{Characterizations of criticality}\label{subsec_null} \subsubsection{Null-sequences and ground states}
\begin{Def}{\em Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. \begin{itemize} \item If there exists a nonnegative function~$W\in M^{q}_{{\rm loc}}(p;\Omega)\setminus\{0\}$ such that \begin{equation*}\label{subcritical} Q_{p,\mathcal{A},V}[\varphi]\geq \int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x, \end{equation*} for all $\varphi\in C^{\infty}_{c}(\Omega)$, we say that the functional $Q_{p,\mathcal{A},V}$ is \emph{subcritical} in $\Omega$, and $W$ is a \emph{Hardy-weight} for $Q_{p,\mathcal{A},V}$ in $\Omega$; \item if $Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$ but $Q_{p,\mathcal{A},V}$ does not admit a Hardy-weight in $\Omega$, we say that the functional $Q_{p,\mathcal{A},V}$ is \emph{critical} in $\Omega$; \item if there exists~$\varphi\in C^{\infty}_{c}(\Omega)$ such that $Q_{p,\mathcal{A},V}[\varphi]<0$, we say that the functional~$Q_{p,\mathcal{A},V}$ is \emph{supercritical} in $\Omega$.
\end{itemize} } \end{Def} \begin{Def} \emph{ Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. A nonnegative sequence~$\{u_{k}\}_{k\in\mathbb{N}}\subseteq W^{1,p}_{c}(\Omega)$ is called a \emph{null-sequence} with respect to the nonnegative functional~$Q_{p,\mathcal{A},V}$ in~$\Omega$ if \begin{itemize} \item for every $k\in\mathbb{N}$, the function~$u_{k}$ is bounded in~$\Omega$; \item there exists a fixed open set~$U\Subset\Omega$ such that~$\Vert u_{k}\Vert_{L^{p}(U)}=1$ for all~$k\in\mathbb{N}$; \item and~$\displaystyle{\lim_{k\rightarrow\infty}}Q_{p,\mathcal{A},V}[u_{k}]=0.$ \end{itemize}} \end{Def} \begin{Def} \emph{ Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. A \emph{ground state} of the nonnegative functional~$Q_{p,\mathcal{A},V}$ is a positive function~$\phi\in W^{1,p}_{{\rm loc}}(\Omega)$, which is an~$L^{p}_{{\rm loc}}(\Omega)$ limit of a null-sequence.} \end{Def} \begin{lem}\label{simplelemma} Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. If a nonnegative sequence~$\{u_{k}\}_{k\in\mathbb{N}}\subseteq W^{1,p}_{c}(\Omega)$ satisfies: \begin{itemize} \item for every $k\in\mathbb{N}$, the function~$u_{k}$ is bounded in~$\Omega$; \item there exists a fixed open set~$U\Subset\Omega$ such that~$\Vert u_{k}\Vert_{L^{p}(U)}\asymp 1$ for all~$k\in\mathbb{N}$; \item $\displaystyle{\lim_{k\rightarrow\infty}}Q_{p,\mathcal{A},V}[u_{k}]=0;$ \item and~$\{u_{k}\}_{k\in\mathbb{N}}$ converges in~$L^{p}_{{\rm loc}}(\Omega)$ to a positive~$u\in W^{1,p}_{{\rm loc}}(\Omega)$, \end{itemize} then $u$ is a ground state up to a multiplicative constant. \end{lem} \begin{proof} By the second condition, we may assume that up to a subsequence $\displaystyle{\lim_{k\rightarrow\infty}}\Vert u_{k}\Vert_{L^{p}(U)}=C_{0}$ for some positive constant~$C_{0}$. Set $C_{k}\triangleq\Vert u_{k}\Vert_{L^{p}(U)}$. Then $\left\{u_{k}/C_{k}\right\}_{k\in\mathbb{N}}$ is a null-sequence converging in~$L^{p}_{{\rm loc}}(\Omega)$ to~${u/C_{0}}$. \end{proof} \begin{corollary}\label{nullrem} Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Then a positive principal eigenfunction~$v$ associated to the principal eigenvalue $\gl_1=\gl_1(Q_{p,\mathcal{A},V};\omega)$ is a ground state of the functional~$Q_{p,\mathcal{A},V-\lambda_{1}}$ in~$\omega$. \end{corollary} \begin{proof} Let $v'\in W^{1,p}_{0}(\omega)$ be a principal eigenfunction associated to $\gl_1$, and let $\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C^{\infty}_{c}(\omega)$ be a sequence approximating $v'$ in $W^{1,p}_{0}(\omega)$. Then Lemma~\ref{functionalcv} implies that $$\lim_{k\rightarrow\infty}Q_{p,\mathcal{A},V-\lambda_{1}}[\varphi_{k};\omega]=Q_{p,\mathcal{A},V-\lambda_{1}}[v';\omega]=0,\; \mbox{and } \; \Vert \varphi_{k}\Vert_{L^{p}(U)}\asymp 1 \;\; \forall k\in\mathbb{N},$$ where~$U\Subset\omega$ is a fixed nonempty open set. By Lemma \ref{simplelemma}, for some positive constant~$C$, the principal eigenfunction~$v\triangleq Cv'$ is a ground state of~$Q_{p,\mathcal{A},V-\lambda_{1}}$ in~$\omega$. \end{proof} \begin{comment} \begin{Def} \emph{ Let~$1<p<2$. A positive supersolution of the equation \eqref{half} is called \emph{regular} if the supersolution, together with the modulus of its gradient, is locally bounded a.e. 
in~$\Omega$.} \end{Def} \end{comment} \begin{proposition}\label{mainlemma} Let~$\{u_{k}\}_{k\in\mathbb{N}}$ be a null-sequence with respect to a nonnegative functional $Q_{p,\mathcal{A},V}$ in~$\Omega$, where $\mathcal{A}$ satisfies Assumptions~\ref{ass8} and \ref{ass2}, and $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let $v$ be a positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega$ and let $w_{k}=u_{k}/v$, where $k\in\mathbb{N}$. Then the sequence~$\{w_{k}\}_{k\in\mathbb{N}}$ is bounded in~$W^{1,p}_{{\rm loc}}(\Omega)$ and~$\nabla w_{k}\rightarrow 0$ as~$k\rightarrow\infty$ in~$L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n})$. \end{proposition} \begin{proof} Let $U$ be a fixed open set such that for all~$k\in\mathbb{N}$,~$\Vert u_{k}\Vert_{L^{p}(U)}=1$, and let $U\Subset\omega\Subset\Omega$ be a Lipschitz domain. Using the Minkowski inequality, the Poincar\'{e} inequality, and the weak Harnack inequality, we obtain for every $k\in\mathbb{N}$, \begin{eqnarray*} \Vert w_{k}\Vert_{L^{p}(\omega)}&\leq& \Vert w_{k}-\langle w_{k}\rangle_{U}\Vert_{L^{p}(\omega)}+\langle w_{k}\rangle_{U}\left(\mathcal{L}^{n}(\omega)\right)^{1/p}\\ &\leq& C(n,p,\omega,U)\Vert \nabla w_{k}\Vert_{L^{p}(\omega;\mathbb{R}^{n})}+\frac{1}{\inf_{U}v}\langle u_k\rangle_{U}\left(\mathcal{L}^{n}(\omega)\right)^{1/p}. \end{eqnarray*} By H\"{o}lder's inequality, noting that $\Vert u_{k}\Vert_{L^{p}(U)}=1$, we obtain \begin{equation}\label{estimate} \Vert w_{k}\Vert_{L^{p}(\omega)}\leq C(n,p,\omega,U)\Vert \nabla w_{k}\Vert_{L^{p}(\omega;\mathbb{R}^{n})}+\frac{1}{\inf_{U}v}\left(\frac{\mathcal{L}^{n}(\omega)}{\mathcal{L}^{n}(U)}\right)^{1/p}. \end{equation} By Lemma \ref{strictconvexity} with $\xi_{1}=\nabla(vw_{k})=\nabla(u_{k})$ and $\xi_{2}=w_{k}\nabla v$, we obtain $$|\nabla u_{k}|_{\mathcal{A}}^{p} -w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p} - p\mathcal{A}(x,w_k\nabla v)\!\cdot\! v\nabla w_{k}\geq 0,$$ which together with the local strong convexity of $|\xi|_{\mathcal{A}}^p$ (Assumption~\ref{ass2}) implies \begin{eqnarray*} C_\gw(\bar{p}, \mathcal{A})\int_{\gw}v^{\bar{p}}|\nabla w_{k}|^{\bar{p}}_{\mathcal{A}}\,\mathrm{d}x &\leq& \int_{\omega}\left(|\nabla u_{k}|_{\mathcal{A}}^{p} -w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p} - p\mathcal{A}(x,w_k\nabla v)\!\cdot\! v\nabla w_{k}\right)\,\mathrm{d}x\\ &\leq& \int_{\Omega}\left(|\nabla u_{k}|_{\mathcal{A}}^{p} - w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p} - v\mathcal{A}(x,\nabla v)\!\cdot\! \nabla\left(w_{k}^{p}\right)\right)\,\mathrm{d}x\\ &=&\int_{\Omega}|\nabla u_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x-\int_{\Omega}\mathcal{A}(x,\nabla v)\!\cdot\! \nabla\left(w_{k}^{p}v\right)\!\,\mathrm{d}x. 
\end{eqnarray*} Since $v$ is a positive supersolution, we obtain $$ C_\gw(\bar{p}, \mathcal{A})\int_{\gw}v^{\bar{p}}|\nabla w_{k}|^{\bar{p}}_{\mathcal{A}}\,\mathrm{d}x \leq \int_{\Omega}|\nabla u_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{\Omega}Vu_{k}^{p}\,\mathrm{d}x=Q_{p,\mathcal{A},V}[u_{k}].$$ By the weak Harnack inequality, and the ellipticity condition \eqref{structure}, we get for a positive constant~$c$ which does not depend on~$k$, $$c\int_{\omega}|\nabla w_{k}|^{\bar{p}}\,\mathrm{d}x\leq C_\gw(\bar{p}, \mathcal{A})\int_{\Omega}v^{\bar{p}}|\nabla w_{k}|^{\bar{p}}_{\mathcal{A}}\,\mathrm{d}x\leq Q_{p,\mathcal{A},V}[u_{k}]\rightarrow 0\; \mbox{ as } k\to \infty.$$ Consequently, by H\"{o}lder's inequality, because~$\bar{p}\geq p$, $$\nabla w_{k}\rightarrow 0\; \mbox{ as } k\to \infty\; \mbox{ in } L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n}),$$ and this and \eqref{estimate} also imply that~$\{w_{k}\}_{k\in\mathbb{N}}$ is bounded in~$W^{1,p}_{{\rm loc}}(\Omega)$. \begin{comment} Now we deal with the case of~$p<2$. Let~$q_{k}\triangleq Q_{p,\mathcal{A},V}[u_{k}]$. Applying H\"{o}lder's inequality with the conjugate indexes~$\frac{2}{p}$ and~$\frac{2}{2-p}$, we have \begin{multline*} \int_{\omega}v^{p}|\nabla w_{k}|^{p}_{\mathcal{A}}\,\mathrm{d}x\\ \leq\left(\int_{\omega}v^{2}|\nabla w_{k}|^{2}_{\mathcal{A}}\left(\left\vert\nabla(vw_{k})\right\vert_{\mathcal{A}}+w_{k}\left\vert\nabla v\right\vert_{\mathcal{A}}\right)^{p-2}\,\mathrm{d}x\right)^{\!\frac{p}{2}}\!\!\left(\int_{\omega}\left(\left\vert\nabla(vw_{k})\right\vert_{\mathcal{A}}+w_{k}\left\vert\nabla v\right\vert_{\mathcal{A}}\right)^{p}\,\mathrm{d}x\right)^{\!1-\frac{p}{2}}\\ \leq Cq_{k}^{\frac{p}{2}}\left(\int_{\omega}v^{p}|\nabla w_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{\omega}w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p}\,\mathrm{d}x\right)^{1-\frac{p}{2}}\\ \leq Cq_{k}^{\frac{p}{2}}\left(\int_{\omega}v^{p}|\nabla w_{k}|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{\omega}w_{k}^{p}|\nabla v|_{\mathcal{A}}^{p}\,\mathrm{d}x+1\right), \end{multline*} where~$C$ is a constant depending only on~$p$ but may be different from~$C(p)$. Because~$v$ is regular and locally has a positive lower bound, we may estimate, for some positive constants~$c_{j},j=1,2,3,4$ independent of~$k$, in view of \red{the ellipticity condition} \eqref{structure} and the estimate \eqref{estimate}, $$c_{1}\int_{\omega}|\nabla w_{k}|^{p}\,\mathrm{d}x\leq c_{2}q_{k}^{\frac{p}{2}}\left(\int_{\omega}|\nabla w_{k}|^{p}\,\mathrm{d}x+\int_{\omega}w_{k}^{p}\,\mathrm{d}x+1\right)\leq c_{2}q_{k}^{\frac{p}{2}}\left(c_{3}\int_{\omega}|\nabla w_{k}|^{p}\,\mathrm{d}x+c_{4}\right).$$ Once more, by virtue of~$\lim_{k\rightarrow\infty}q_{k}=0$, we get$$\nabla w_{k}\rightarrow 0\; \mbox{ as } k\rightarrow \infty \mbox{ in } L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n}).$$ Recalling \eqref{estimate}, we conclude that~$w_{k}$ is bounded in~$W^{1,p}_{{\rm loc}}(\Omega)$. \end{comment} \end{proof} \begin{comment} \begin{Rem} \emph{ One can see that in the proof, the strong convexity with the coefficient depending only on~$p$ is used.} \end{Rem} \end{comment} \begin{Thm}\label{mainthm} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$, and assume that the functional $Q_{p,\mathcal{A},V}$ is nonnegative on $C_c^{\infty}(\Omega)$. 
\begin{comment} where if~$p\geq 2$,~$A$ is symmetric, locally uniformly positive definite, and bounded and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$ or if~$1<p<2$,~$A\in C^{0,\gamma}_{{\rm loc}}(\Omega;\mathbb{R}^{n\times n})$(still symmetric, locally uniformly positive definite, and bounded) and~$V\in M^{q}_{{\rm loc}}(\Omega),q>n$. \end{comment} Then every null-sequence of~$Q_{p,\mathcal{A},V}$ converges, both in~$L^{p}_{{\rm loc}}(\Omega)$ and a.e. in~$\Omega$, to a unique (up to a multiplicative constant) positive supersolution of the equation $Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$. Furthermore, a ground state is in~$C^{\gamma}_{{\rm loc}}(\Omega)$ for some~$0<\gamma\leq 1$, and it is the unique positive solution and the unique positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$. \end{Thm} \begin{Rem} \emph{By uniqueness we mean uniqueness up to a multiplicative constant.} \end{Rem} \begin{proof}[Proof of Theorem~\ref{mainthm}] By the AAP type theorem, there exist a positive supersolution $v\in W^{1,p}_{{\rm loc}}(\Omega)$ and a positive solution~$\tilde{v}\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$. Let $\{u_{k}\}_{k\in\mathbb{N}}$ be a null-sequence of $Q_{p,\mathcal{A},V}$ in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$, and set $w_{k}=u_{k}/v$. Using Proposition \ref{mainlemma}, we obtain$$\nabla w_{k}\rightarrow 0\; \mbox{ as } k\rightarrow \infty \mbox{ in } L^{p}_{{\rm loc}}(\Omega;\mathbb{R}^{n}).$$ The Rellich-Kondrachov theorem yields a subsequence, which is still denoted by~$w_{k}$, with~$w_{k}\rightarrow c$ for some~$c\geq 0$ in~$W^{1,p}_{{\rm loc}}(\Omega)$ as~$k\rightarrow\infty$. Since $v$ is locally bounded away from zero, it follows that up to a subsequence, $u_{k}\rightarrow cv$ a.e. in~$\Omega$ and also in~$L^{p}_{{\rm loc}}(\Omega)$. Therefore, $c=1/\Vert v\Vert_{L^{p}(U)}>0$. Furthermore, any null-sequence~$\{u_{k}\}_{k\in\mathbb{N}}$ converges (up to a positive constant multiple) to the same positive supersolution~$v$. Noting that the solution~$\tilde{v}$ is also a positive supersolution, we conclude that~$v=C\tilde{v}$ for some~$C>0$. It follows that~$v$ is also the unique positive solution. \end{proof} As a corollary of the above theorem we have: \begin{Thm}\label{complement} Let $\omega\Subset\Omega$ be a domain, let $\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}(p;\omega)$. Suppose that the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\gw$ admits a proper positive supersolution in~$W^{1,p}(\omega)$. Then the principal eigenvalue $\lambda_{1} =\lambda_{1}(Q_{p,\mathcal{A},V};\omega)$ is strictly positive. Therefore, all the assertions in Theorem \ref{maximum} are equivalent if~$\mathcal{A}$ and~$V$ are as above and~$\omega\Subset\Omega$ is a Lipschitz domain. \end{Thm} \begin{proof} We need to prove $(4')\Rightarrow (3)$ in Theorem \ref{maximum}. Indeed, by the AAP type theorem,~$Q_{p,\mathcal{A},V}$ is nonnegative. In particular, $\lambda_{1}\geq 0.$ If~$\lambda_{1}=0$, then positive principal eigenfunctions are all positive solutions of the equation $Q'_{p,\mathcal{A},V}[u]=0$. By Corollary \ref{nullrem} and Theorem \ref{mainthm}, the positive principal eigenfunctions are the unique positive supersolution of $Q'_{p,\mathcal{A},V}[u]=0$ in $\gw$, but this contradicts our assumption that the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\gw$ admits a proper positive supersolution in~$W^{1,p}(\omega)$. Hence, $\lambda_{1}>0$. 
\end{proof} \subsubsection{Characterizations of criticality} The next theorem contains fundamental characterizations of criticality or subcriticality. \begin{Thm}\label{thm_Poincare} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$. Then the following assertions hold true. \begin{enumerate} \item[$(1)$] The functional~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$ if and only if~$Q_{p,\mathcal{A},V}$ has a null-sequence in~$C_c^{\infty}(\Omega)$. \item[$(2)$] The functional~$Q_{p,\mathcal{A},V}$ has a null-sequence if and only if the equation $Q'_{p,\mathcal{A},V}[u]=0$ has a unique positive supersolution. \item[$(3)$] The functional $Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$ if and only if $Q_{p,\mathcal{A},V}$ admits a strictly positive continuous Hardy-weight $W$ in~$\Omega$. \item[$(4)$] Assume that~$Q_{p,\mathcal{A},V}$ admits a ground state $\phi$ in $\Omega$. Then there exists a strictly positive continuous function~$W$ such that for $\psi\in C^{\infty}_{c}(\Omega)$ with~$\int_{\Omega}\phi\psi\,\mathrm{d}x\neq 0$ there exists a constant $C>0$ such that the following Poincar\'{e}-type inequality holds: $$Q_{p,\mathcal{A},V}[\varphi]+C\left\vert\int_{\Omega}\varphi\psi\,\mathrm{d}x\right\vert^{p}\geq \frac{1}{C}\int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x\qquad \forall \varphi\in C^{\infty}_{c}(\Omega).$$ \end{enumerate} \end{Thm} \begin{proof} $(1)$ Suppose that~$Q_{p,\mathcal{A},V}$ is critical. For every nonempty open~$U\Subset\Omega$, let $$c_{U}\triangleq\inf_{\substack{0\leq \varphi\in C^{\infty}_{c}(\Omega)\\\Vert \varphi\Vert_{L^{p}(U)}=1}}Q_{p,\mathcal{A},V}[\varphi]=\inf_{\substack{\varphi\in C^{\infty}_{c}(\Omega)\\\Vert \varphi\Vert_{L^{p}(U)}=1}}Q_{p,\mathcal{A},V}[\varphi],$$ where the equality is an immediate corollary of Lemma \ref{functionalcv}. Pick $W\in C^{\infty}_{c}(U)\setminus\{0\}$ such that~$0\leq W\leq 1$. Then for all~$\varphi\in C^{\infty}_{c}(\Omega)$ with~$\Vert \varphi\Vert_{L^{p}(U)}=1$, we have $$c_{U}\int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x\leq c_{U}\leq Q_{p,\mathcal{A},V}[\varphi].$$ Because $Q_{p,\mathcal{A},V}$ is critical in $\Omega$, it follows that $c_{U}=0$. Hence, a minimizing sequence of the above variational problem is a null-sequence of $Q_{p,\mathcal{A},V}$ in $\Omega$. Suppose that $Q_{p,\mathcal{A},V}$ admits a null-sequence in $\Omega$. By Theorem \ref{mainthm}, we get a positive solution~$v$ of~$Q'_{p,\mathcal{A},V}[u]=0.$ If $Q_{p,\mathcal{A},V}$ is subcritical in $\Omega$ with a nontrivial nonnegative Hardy-weight $W$, then the AAP type theorem gives a positive solution $w$ of the equation $Q'_{p,\mathcal{A},V-W}[u]=0$ in $\Omega$. The function $w$ is also a proper positive supersolution of the equation $Q'_{p,\mathcal{A},V}[u]=0$. Therefore, $w$ and $v$ are two positive supersolutions of the above equation, but this contradicts Theorem~\ref{mainthm}. $(2)$ Suppose that the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega$ admits a unique positive supersolution. If~$Q_{p,\mathcal{A},V}$ does not admit a null-sequence, then~$Q_{p,\mathcal{A},V}$ is subcritical by~$(1)$. The same argument as in the second part of the proof of~$(1)$ leads to a contradiction. The other direction follows from Theorem \ref{mainthm}. 
$(3)$ Suppose that~$Q_{p,\mathcal{A},V}$ is subcritical. Let~$\{U_{k}\}_{k\in\mathbb{N}}$ be an open covering of $\Omega$ with $U_k\Subset \Omega$ and $\cup_{k\in\mathbb{N}}U_{k}=\Omega$. Let $\{\chi_{k}\}_{k\in\mathbb{N}}$ be a locally finite smooth partition of unity subordinated to the covering. Then by the proof of $(1)$, for every~$k\in \mathbb{N}$, we have~$c_{U_k}>0$. Then for all $\varphi\in C^{\infty}_{c}(\Omega)$ and every~$k\in\mathbb{N}$ we have $$2^{-k}Q_{p,\mathcal{A},V}[\varphi]\geq 2^{-k}c_{U_k}\int_{U_{k}}|\varphi|^{p}\,\mathrm{d}x\geq 2^{-k}C_{k}\int_{\Omega}\chi_{k}|\varphi|^{p}\,\mathrm{d}x,$$ where~$C_{k}\triangleq \min\{c_{U_{k}},1\}$ for $k\in\mathbb{N}$. Adding together all the above inequalities over all~$k\in\mathbb{N}$, we get $$Q_{p,\mathcal{A},V}[\varphi]\geq \int_{\Omega}W|\varphi|^{p}\,\mathrm{d}x \qquad \forall \varphi\in C^{\infty}_{c}(\Omega),$$ where $W\triangleq \sum_{k=1}^{\infty}2^{-k}C_{k}\chi_{k} >0$ is smooth (recall that this series is locally finite). The other direction follows from the definition of subcriticality. $(4)$ For every nonempty open set~$U\Subset\Omega$ and every~$\varphi\in C^{\infty}_{c}(\Omega)$, let $$Q_{p,\mathcal{A},V}^{U}[\varphi]\triangleq Q_{p,\mathcal{A},V}[\varphi]+\int_{U}|\varphi|^{p}\,\mathrm{d}x,$$ which is subcritical because $Q_{p,\mathcal{A},V}$ is nonnegative. By~$(3)$, for every nonempty open set~$U\Subset\Omega$, there is a positive continuous function~$W$ in~$\Omega$ such that for all~$\varphi\in C^{\infty}_{c}(\Omega)$, \begin{equation}\label{eq_W} Q_{p,\mathcal{A},V}^{U}[\varphi]\geq \int_{\Omega}W(x)|\varphi|^{p}\,\mathrm{d}x. \end{equation} Fix $\psi\in C^{\infty}_{c}(\Omega)$ with~$\int_{\Omega}\phi\psi\,\mathrm{d}x\neq 0$. Assume that for every~$U\!\Subset\!\Omega$, there exists a nonnegative sequence $\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C^{\infty}_{c}(\Omega)$ such that $$\int_{U}|\varphi_{k}|^{p}\,\mathrm{d}x=1,\quad Q_{p,\mathcal{A},V}[\varphi_{k}]\rightarrow 0,\quad\mbox{and}~\int_{\Omega}\varphi_{k}\psi\,\mathrm{d}x\rightarrow 0,\quad\mbox{as}~k\rightarrow\infty.$$ Because~$\{\varphi_{k}\}_{k\in\mathbb{N}}$ is a null-sequence, by Theorem \ref{mainthm}, $\{\varphi_{k}\}_{k\in\mathbb{N}}$ converges (up to a multiplicative constant) in~$L^{p}_{{\rm loc}}(\Omega)$ to the ground state. Furthermore, $$\lim_{k\rightarrow\infty}\int_{\Omega}\varphi_{k}\psi\,\mathrm{d}x=\int_{\Omega}\phi\psi\,\mathrm{d}x\neq 0,$$ and we arrive at a contradiction. Therefore, there exists a nonempty open~$U\Subset\Omega$ such that for all~$\varphi\in C^{\infty}_{c}(\Omega)$ and some positive constant~$C$, $$\int_{U}|\varphi|^{p}\,\mathrm{d}x\leq C\Big(Q_{p,\mathcal{A},V}[\varphi]+\Big\vert\int_{\Omega}\varphi\psi\,\mathrm{d}x\Big\vert^{p}\Big).$$ By combining the above inequality with \eqref{eq_W}, we obtain the desired inequality. \end{proof} \begin{corollary}\label{subcriticaleg} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Then $Q_{p,\mathcal{A},V}$ is subcritical in a domain~$\omega\Subset\Omega$ if and only if $\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$. \end{corollary} \begin{proof} Suppose that $Q_{p,\mathcal{A},V}$ is subcritical in $\omega$. 
Therefore, it admits a Hardy-weight $W$ in~$\omega$. The AAP type theorem (Theorem~\ref{thm_AAP}) implies that there exists a positive solution $v$ of $Q_{p,\mathcal{A},V-W}'[u]=0$ in~$\omega$. Clearly, $v$ is a proper positive supersolution of $Q_{p,\mathcal{A},V}'[u]=0$ in~$\omega$. By Theorem \ref{complement}, we have~$\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$. On the other hand, if $\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$, then~$\lambda_{1}$ is a Hardy-weight for~$Q_{p,\mathcal{A},V}$ in~$\omega$ and hence~$Q_{p,\mathcal{A},V}$ is subcritical. \end{proof} \subsection{Perturbation results and applications}\label{ssect_pert} The present subsection is mainly intended for certain perturbation results. Our perturbation results are divided into two cases. One is a domain perturbation, and the other concerns certain potential perturbations. As an application we show that a critical operator admits a null-sequence that converges locally uniformly to its ground state. \subsubsection{Criticality theory under a domain perturbation} The following is a straightforward result (see \cite[Proposition 4.2]{Tintarev}). \begin{proposition}\label{two} Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let $\Omega_{1}\subseteq\Omega_{2} \subseteq \Omega$ be subdomains such that $\Omega_{2}\setminus \overline{\Omega_{1}}\neq \emptyset$. \begin{enumerate} \item[$(a)$] If~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega_{2}$, then~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega_{1}$. \item[$(b)$] If~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega_{1}$, then~$Q_{p,\mathcal{A},V}$ is supercritical in~$\Omega_{2}$. \end{enumerate} \end{proposition} \begin{corollary} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. If~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$, then for all domains~$\omega\Subset\Omega$, we have~$\lambda_{1}(Q_{p,\mathcal{A},V};\omega)>0$. \end{corollary} \begin{proof} The result follows directly from Proposition \ref{two} and Corollary \ref{subcriticaleg}. \end{proof} \subsubsection{Criticality theory under potential perturbations} We state here certain results on perturbations by a potential whose proofs are as in the proofs of \cite[Proposition 4.8]{Pinchover}, \cite[Corollary 4.17]{Pinchover}, \cite[Propositions 4.4 and 4.5]{Tintarev}. \begin{proposition}\label{prop1} Suppose that~$\mathcal{A}$ satisfies Assumption \ref{ass8}, $V_{2}\geq V_{1}$ a.e. in $\Omega$, where $V_i\in M^{q}_{{\rm loc}}(p;\Omega)$ for~$i=1,2$, and $\mathcal{L}^{n}(\{x\in \Omega : V_{2}(x)>V_{1}(x) \})>0$. \begin{enumerate} \item[$(1)$] If~$Q_{p,\mathcal{A},V_{1}}$ is nonnegative in~$\Omega$, then~$Q_{p,\mathcal{A},V_{2}}$ is subcritical in~$\Omega$. \item[$(2)$] If~$Q_{p,\mathcal{A},V_{2}}$ is critical in~$\Omega$, then~$Q_{p,\mathcal{A},V_{1}}$ is supercritical in~$\Omega$. \end{enumerate} \end{proposition} \begin{cor}\label{interval} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V_{i}\in M^{q}_{{\rm loc}}(p;\Omega)$, where $i=0,1$. Assume that~$Q_{p,\mathcal{A},V_{i}}$ are nonnegative for~$i=0,1$. Let~$V_{t}\triangleq(1-t)V_{0}+tV_{1}$ for~$t\in [0,1]$. Then $Q_{p,\mathcal{A},V_{t}}$ is nonnegative in~$\Omega$ for all~$t\in [0,1]$. Moreover, if $\mathcal{L}^{n}\left(\{V_{0}\neq V_{1}\}\right)\!>\!0$, then~$Q_{p,\mathcal{A},V_{t}}$ is subcritical in~$\Omega$ for every~$t\in (0,1)$. 
\end{cor} \begin{proposition}\label{prop_subcritical} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$ and $\mathbf{V}\in L_c^{\infty}(\Omega)\setminus\{0\}$ is such that~$\mathbf{V}\ngeq 0$. Then there is~$\tau_{+}>0$ and~$\tau_{-}\in[-\infty,0)$ such that $Q_{p,\mathcal{A},V+t\mathbf{V}}$ is subcritical in~$\Omega$ if and only if~$t\in(\tau_{-},\tau_{+})$. In addition,~$Q_{p,\mathcal{A},V+\tau_{+}\mathbf{V}}$ is critical in~$\Omega$. \end{proposition} \begin{proposition} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$ with a ground state $v$. Let $\mathbf{V}\in L_c^{\infty}(\Omega)$. Then there is~$0<\tau_{+}\leq \infty$ such that~$Q_{p,\mathcal{A},V+t\mathbf{V}}$ is subcritical in~$\Omega$ for~$t\in (0,\tau_{+})$ if and only if~$\int_{\Omega}\mathbf{V}|v|^{p}\,\mathrm{d}x>0.$ \end{proposition} \subsubsection{Locally uniformly convergent null-sequence} The following is an important application of the above perturbation results. \begin{lem}\label{localuniform} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$. Then~$Q_{p,\mathcal{A},V}$ admits a null-sequence $\{\phi_{i}\}_{i\in\mathbb{N}}\subseteq C^{\infty}_{c}(\Omega)$ converging locally uniformly to the ground state $\phi$. \end{lem} \begin{proof} Let~$\{\omega_{i}\}_{i\in\mathbb{N}}$ be a Lipschitz exhaustion of~$\Omega$,~$x_{0}\in\omega_{1}$, and~$\mathbf{V}\in C^{\infty}_{c}(\Omega)\setminus\{0\}$ a nonnegative function such that~$\supp(\mathbf{V})\Subset\omega_{1}$. By virtue of Proposition~\ref{prop_subcritical}, for every~$i\in\mathbb{N}$, there exists~$t_{i}>0$ such that the functional~$Q_{p,\mathcal{A},V-t_{i}\mathbf{V}}$ is critical in~$\omega_{i}$. Let $\phi_{i}'$ be the ground state of $Q_{p,\mathcal{A},V-t_{i}\mathbf{V}}$ in~$\omega_{i}$ satisfying $\phi_{i}'(x_0)=1$. Clearly. $\lim_{i\to\infty}t_i =0$, and $\lambda_{1}(Q_{p,\mathcal{A},V-t_{i}\mathbf{V}};\omega_{i})\!=\!0$, hence, $\phi_{i} \!\in\! W^{1,p}_{0}(\omega_{i})$, and~$Q_{p,\mathcal{A},V-t_{i}\mathbf{V}}[\phi_{i}']\!=\!0$. By Theorems~\ref{HCP} and \ref{thm_Poincare}, it follows that the sequence~$\{\phi_{i}'\}_{i\in\mathbb{N}}$ converges locally uniformly to $c\phi$, the ground state of $Q'_{p,\mathcal{A},V}$ in $\Omega$, where $c>0$ is a constant, and $\int_{\omega_{1}}|\phi'_i|^{p}\,\mathrm{d}x \asymp \int_{\omega_{1}}|\phi|^{p}\,\mathrm{d}x\asymp 1$. It follows that~$\displaystyle{\lim_{i\rightarrow\infty}}Q_{p,\mathcal{A},V}[\phi_{i}'] \!=\!\displaystyle{\lim_{i\rightarrow\infty}}t_{i} \! \int_{\omega_{1}}\!\mathbf{V}(\phi_{i}')^{p}\,\mathrm{d}x = 0$. By virtue of \cite[Page 250, Theorem 1]{Evans} and \cite[Page 630, Theorem 6]{Evans}, there exists a nonnegative approximating sequence~$\{\phi_{i}\}_{i\in\mathbb{N}} \! \subseteq \! C^{\infty}_{c}(\Omega)$ such that~$\displaystyle{\lim_{i\rightarrow\infty}}Q_{p,\mathcal{A},V}[\phi_{i}] \!=\!0,$ and $\{\phi_{i}\}$ converges locally uniformly to~$\phi$ in~$\Omega$. Hence, $\int_{\omega_{1}}|\phi_i|^{p}\,\mathrm{d}x \!\asymp \!1.$ By Lemma \ref{simplelemma}, the desired result follows. 
\end{proof} \subsection{Hardy–Sobolev–Maz’ya inequality and $(\mathcal{A},V)$-capacity}\label{A,V-capacity} The following definition of capacity is a counterpart of \cite[Definition 6.7]{Regev}. \begin{Def}\label{AVcapacity} \emph{ Let~$\mathcal{A}$ satisfy Assumption~\ref{ass8} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that the functional~$Q_{p,\mathcal{A},V}$ is nonnegative on~$C^{\infty}_{c}(\Omega)$. For every compact subset~$K$ of~$\Omega$, we define the \emph{$(\mathcal{A},V)$-capacity} of~$K$ in~$\Omega$ as $$\capacity_{\mathcal{A},V}(K,\Omega)\triangleq \inf\left\{Q_{p,\mathcal{A},V}[\varphi]:\varphi\in C^{\infty}_{c}(\Omega), \varphi\geq 1 \mbox{ on } K\right\}.$$ } \end{Def} \begin{remark} \emph{For the $p$-capacity and the $(p;r)$-capacity, see \cite[Chapter 2]{HKM} and \cite[Section 2.1]{Maly}. For a relationship between the~$p$-capacity and the~$p$-parabolicity in a Riemannian manifold, see \cite{parabolicity1,parabolicity2}. Recall that $|\xi|_{\mathcal{A}}^{p}=pF(x,\xi)$ for a.e.~$x\in\Omega$ and all~$\xi\in\mathbb{R}^{n}$. For the variational~$F$-capacity, which is a Choquet capacity as guaranteed by \cite[Theorem 5.31]{HKM}, we refer to \cite[Section 5.30]{HKM}. } \end{remark} The following theorem is an extension of \cite[Theorem 6.8]{Regev}, \cite[Theorem 4.5]{Regev20}, and \cite[Theorem 3.4]{Regev21}. The proof is omitted since it is similar to that of \cite[Theorem 4.5]{Regev20}. \begin{Thm}\label{newthm} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that $Q_{p,\mathcal{A},V}$ is nonnegative on~$C^{\infty}_{c}(\Omega)$. Then the following assertions are equivalent. \begin{enumerate} \item[$(1)$] The functional~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$; \item[$(2)$] there exists a positive continuous function~$W^\ast$ in~$\Omega$ such that for all~$\varphi\in C_c^{\infty}(\Omega)$, $$Q_{p,\mathcal{A},V}[\varphi]\geq \int_{\Omega}W^\ast(x)\left(|\nabla\varphi|_{\mathcal{A}}^{p}+|\varphi|^{p}\right)\,\mathrm{d}x;$$ \item[$(3)$] for every nonempty open set~$U\Subset\Omega$ there exists~$c_{U}>0$ such that for all~$\varphi\in C_c^{\infty}(\Omega)$, $$Q_{p,\mathcal{A},V}[\varphi]\geq c_{U}\left(\int_{U}|\varphi|\,\mathrm{d}x\right)^{p};$$ \item[$(4)$] the~$(\mathcal{A},V)$-capacity of all closed balls~$B\Subset\Omega$ with positive radii in~$\Omega$ is positive; \item[$(4')$] the~$(\mathcal{A},V)$-capacity of some closed ball~$B\Subset\Omega$ with a positive radius in~$\Omega$ is positive. \medskip Furthermore, in the case of~$p<n$,~$(1)$ holds if and only if \item[$(5)$] there exists a positive continuous function~$\tilde{W}$ in~$\Omega$ such that the following weighted Hardy–Sobolev–Maz’ya inequality holds true: $$Q_{p,\mathcal{A},V}[\varphi]\geq \left(\int_{\Omega}\tilde{W}(x)|\varphi|^{p^{\ast}}\,\mathrm{d}x\right)^{p/p^{\ast}} \qquad \forall~\varphi\in C_c^{\infty}(\Omega),$$ where~$p^{\ast}\triangleq pn/(n-p)$ is the critical Sobolev exponent. \end{enumerate} \end{Thm} \begin{comment} \subsection{Liouville comparison theorem}\label{Liouville} In the present section we establish a Liouville comparison theorem in our setting, following the results and methods in \cite[Theorem 8.1]{Regev} and \cite{Lioupincho}. To this end, we need the following two additional assumptions related to the {\em simplified energy} \cite{Lioupincho}. In particular, the assumptions are valid for the $(p,A)$-Laplacian in \cite{Pinchover}. 
\begin{ass}\label{ass3} \emph{We assume that there exists a constant~$C(p)>0$ such that $$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\leq C(p)|\eta|_{\mathcal{A}}^{2}\left(|\xi|_{\mathcal{A}}+|\eta|_{\mathcal{A}}\right)^{p-2},$$ for all~$\xi,\eta\in\mathbb{R}^{n}$ and a.e.~$x\in\Omega$.} \end{ass} \begin{remark}\label{1rem} \emph{The operator~$\mathcal{A}$ in Example \ref{exa} satisfies Assumption \ref{ass3} for~$p\geq 2$. Indeed, by \cite[(3.11)]{Regev}, we have for all~$i=1,2,\ldots,n$ and a.e.~$x\in\Omega$, $$a_{i}(x)|\xi_{i}+\eta_{i}|^{p}-a_{i}(x)|\xi_{i}|^{p}-pa_{i}(x)|\xi_{i}|^{p-2}\xi_{i}\eta_{i}\leq C(p)|\eta_{i}|'^{2}\left(|\xi_{i}|'+|\eta_{i}|'\right)^{p-2},$$ where~$|\xi_{i}|'\triangleq\sqrt[p]{a_{i}(x)}|\xi_{i}|$ and~$|\eta_{i}|'\triangleq\sqrt[p]{a_{i}(x)}|\eta_{i}|$. Adding these inequalities over all~$i=1,2,\ldots,n$, gives $$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\leq C(p)\sum_{i=1}^{n}|\eta_{i}|'^{2}\left(|\xi_{i}|'+|\eta_{i}|'\right)^{p-2}.$$ Noting that~$|\xi_{i}|'\leq (\sum_{i=1}^{n}a_{i}(x)|\xi_{i}|^{p})^{1/p}=|\xi|_{\mathcal{A}}$ and~$|\eta_{i}|'\leq (\sum_{i=1}^{n}a_{i}(x)|\eta_{i}|^{p})^{1/p}=|\eta|_{\mathcal{A}}$, we get $$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\leq C(p)|\eta|_{\mathcal{A}}^{2}\left(|\xi|_{\mathcal{A}}+|\eta|_{\mathcal{A}}\right)^{p-2}.$$} \end{remark} \begin{ass}\label{ass4} \emph{We assume that there exists a constant~$C(p)>0$ such that $$|\xi+\eta|_{\mathcal{A}}^{p}-|\xi|_{\mathcal{A}}^{p}-p\mathcal{A}(x,\xi)\cdot\eta\geq C(p)|\eta|_{\mathcal{A}}^{2}\left(|\xi|_{\mathcal{A}}+|\eta|_{\mathcal{A}}\right)^{p-2},$$ for all~$\xi,\eta\in\mathbb{R}^{n}$ and a.e.~$x\in\Omega$.} \end{ass} \begin{remark} \emph{If~$p\geq 2$, Assumption \ref{ass4} implies Assumption \ref{ass2}.} \end{remark} \begin{remark}\label{pless2} \emph{The operator~$\mathcal{A}$ in Example \ref{exa} satisfies Assumption \ref{ass4} for~$p<2$. This remark has a similar proof to Remark \ref{1rem} } \end{remark} \begin{lem}[Simplified energy]\label{simplified} Let~$\mathcal{A}$ satisfy assumptions \ref{ass8} and \ref{ass3}, and~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Consider any positive subsolution $v\in W^{1,p}_{{\rm loc}}(\Omega)$ of $Q'_{p,\mathcal{A},V}[f]=0$ and any nonnegative function~$u\in W^{1,p}_{{\rm loc}}(\Omega)$ such that~$u^{p}/v^{p-1} \in W^{1,p}_{c}(\Omega)$, the product rule for~$u^{p}/v^{p-1}$ holds, and~$vw$ satisfies the product rule for~$w\triangleq u/v$. Then, $$Q_{p,\mathcal{A},V}[vw]\leq C(p) \int_{\Omega}v^{2}|\nabla w|^{2}_{\mathcal{A}}\left(w|\nabla v|_{\mathcal{A}}+v|\nabla w|_{\mathcal{A}}\right)^{p-2}\,\mathrm{d}x,$$ where~$C(p)$ is as in Assumption \ref{ass3}. Similarly, if~$v\in W^{1,p}_{{\rm loc}}(\Omega)$ is a positive supersolution of~$Q'_{p,\mathcal{A},V}[f]=0$ and~$\mathcal{A}$ satisfies assumptions \ref{ass8} and \ref{ass4} with all the other above conditions on $u$ and $v$, then $$Q_{p,\mathcal{A},V}[vw]\geq C(p)\int_{\Omega}v^{2}|\nabla w|^{2}_{\mathcal{A}}\left(w|\nabla v|_{\mathcal{A}}+v|\nabla w|_{\mathcal{A}}\right)^{p-2}\,\mathrm{d}x,$$ where~$C(p)$ is as in Assumption \ref{ass4}. \end{lem} \begin{proof} The two inequalities follow from Assumption \ref{ass3} or \ref{ass4}, respectively, and Lemma \ref{lem_alter} with~$\xi=w\nabla v$ and~$\eta=v\nabla w$. \end{proof} The Liouville comparison theorem has a similar proof to \cite[Theorem 8.1]{Regev}. 
\begin{Thm}[Liouville comparison theorem]\label{thm_Liouville} Let~$\mathcal{A}_{0}$ satisfy assumptions \ref{ass8}, \ref{ass2}, and \ref{ass3},~$\mathcal{A}_{1}$ satisfy assumptions \ref{ass8} and \ref{ass4} (if~$p\geq 2$) or assumptions \ref{ass8}, \ref{ass2}, and \ref{ass4} (if~$p<2$), and $V_{i}\in M^{q}_{{\rm loc}}(p;\Omega)$, where $i=0,1$. If the following conditions hold true: \begin{enumerate} \item[$(1)$] the functional~$Q_{p,\mathcal{A}_{1},V_{1}}$ is critical in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$ with a ground state~$\phi$ in~$\Omega$; \item[$(2)$] the functional~$Q_{p,\mathcal{A}_{0},V_{0}}$ is nonnegative in~$\Omega$ and the equation~$Q'_{p,\mathcal{A}_{0},V_{0}}[u]=0$ in~$\Omega$ has a positive subsolution~$\psi\in W^{1,p}_{{\rm loc}}(\Omega)$; \item[$(3)$] there is~$M>0$ such that a.e. in~$\Omega$, for all~$\xi\in\mathbb{R}^{n}$,~$\psi|\xi|_{\mathcal{A}_{0}}\leq M\phi|\xi|_{\mathcal{A}_{1}};$ \item[$(4)$] there is~$N>0$ such that a.e. in~$\Omega$,~$|\nabla \psi|_{\mathcal{A}_{0}}^{p-2}\leq N^{p-2}|\nabla\phi|_{\mathcal{A}_{1}}^{p-2},$ \end{enumerate} then the functional~$Q_{p,\mathcal{A}_{0},V_{0}}$ is critical in~$\Omega$, and~$\psi$ is its ground state. In particular,~$\psi$ is the unique positive supersolution of the equation~$Q'_{p,\mathcal{A}_{0},V_{0}}[u]=0$ in~$\Omega$. \end{Thm} \begin{Rem} \emph{ In contrast to the counterparts in \cite{Pinchover} and \cite{Regev} of the above theorem, we assume here the $\psi$ is a positive subsolution, since we were unable to extend to our setting \cite[Lemma 2.4]{Regev} saying that $v^+$ is a subsolution if $v$ is a subsolution. Note that by~$(3)$, $\psi\in L^{\infty}_{{\rm loc}}(\Omega)$.} \end{Rem} % \end{comment} \section{Positive solution of minimal growth}\label{minimal} This section concerns the removability of an isolated singularity, the existence of positive solutions of minimal growth in a neighborhood of infinity in $\Omega$, and their relationships with the criticality or subcriticality. We also study the minimal decay of a Hardy-weights. \subsection{Removability of isolated singularity} In this subsection, we consider the removability of an isolated singularity (see also \cite{Fraas, Serrin1964, Regev25} and references therein). \begin{lemma}\label{newev} Fix $x_0\in \Omega$. Denote by~$B_r\triangleq B_{r}(x_0)$ the open ball of the radius $r>0$ centered at $x_0$. Suppose that $\mathcal{A}$ satisfies Assumption~\ref{ass8}, and let $V\in M^{q}(p;B_R)$ for some $R>0$ with~$B_{R}\Subset\Omega$. Then there exists~$R_1\in(0,R)$ such that $\lambda_{1}(Q_{p,\mathcal{A},V};B_r)>0$ for all $0<r<R_1$. \end{lemma} \begin{proof} By \cite[Theorem 13.19]{Leoni}, for all~$0<r<R$ and~$u\in W^{1,p}_{0}(B_{r})\setminus\{0\}$, we have the lower bound~$\Vert \nabla u\Vert^{p}_{L^{p}(B_{r})}\geq C(n,p)r^{-p}\Vert u\Vert^{p}_{L^{p}(B_{r})}.$ Let $\gd= \alpha_{B_{R}}/2$. Then by the Morrey-Adams theorem, for all~$0\!<\!r\!<\!R$ and $u\!\in\! W^{1,p}_{0}(B_{r})\setminus\{0\}$ with $\Vert u\Vert^{p}_{L^{p}(B_{r})}=1$, we get \begin{eqnarray*} \lambda_{1}(Q_{p,\mathcal{A},V};B_{r})&\geq & \int_{B_{r}}|\nabla u|_{\mathcal{A}}^{p}\,\mathrm{d}x+\int_{B_{r}}V|u|^{p}\,\mathrm{d}x \geq \alpha_{B_{R}}\int_{B_{r}}|\nabla u|^{p}\,\mathrm{d}x+\int_{B_{r}}V|u|^{p}\,\mathrm{d}x\\ &\geq& \delta C(n,p)r^{-p}-\frac{C(n,p,q)}{\delta^{n/(pq-n)}}\Vert V\Vert^{n/(pq-n)}_{M^{q}(p;B_{R})} \,. \end{eqnarray*} Thus, for all sufficiently small radii~$r>0$, the principal eigenvalue $\lambda_{1}(Q_{p,\mathcal{A},V};B_{r})>0$. 
\end{proof} The following theorem can be proved by essentially the same arguments as those of \cite[Theorem 5.4]{Pinchover}, and therefore it is omitted. \begin{Thm}\label{singularity} Assume that~$p\leq n$,~$x_{0}\in\Omega$, $\mathcal{A}$ satisfies Assumption~\ref{ass8}, and $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Consider a positive solution~$u$ of~$Q'_{p,\mathcal{A},V}[w]=0$ in a punctured neighborhood $U_{x_0}$ of $x_0$. If $u$ is bounded in some punctured neighborhood of~$x_{0}$, then~$u$ can be extended to a nonnegative solution in $U_{x_0}$. Otherwise,~$\displaystyle{\lim_{x\rightarrow x_{0}}}u(x)=\infty$. \end{Thm} \subsection{Positive solutions of minimal growth} In this subsection, we study positive solutions of minimal growth at infinity in $\Omega$, a notion that was introduced by Agmon in \cite{Agmon} for second-order linear elliptic operators, and was later extended to the quasilinear case \cite{Tintarev} and graphs \cite{Keller}. In particular, we give a further characterization of criticality in terms of global minimal positive solutions. \subsubsection{Positive solutions of minimal growth} \begin{Def} \emph{Let $\mathcal{A}$ satisfy Assumption~\ref{ass8} and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Let $K_{0}$ be a compact subset of~$\Omega$. A positive solution~$u$ of~$Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega\setminus K_{0}$ is called a \emph{positive solution of minimal growth in a neighborhood of infinity} in $\Omega$ if for any smooth compact subset~$K$ of~$\Omega$ with~$K_{0}\Subset \mathring{K}$, any positive supersolution~$v\in C\big(\Omega\setminus \mathring{K}\big)$ of~$Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega\setminus K$ such that~$u\leq v$ on~$\partial K$, satisfies $u\leq v$ in~$\Omega\setminus K$. For such a positive solution~$u$, we write~$u\in \mathcal{M}_{\Omega;K_{0}}=\mathcal{M}_{\Omega;K_{0}}^{\mathcal{A},V}$. If~$K_{0}=\emptyset$, then $u \in \mathcal{M}_{\Omega;\emptyset}$ is said to be a \emph{global minimal positive solution} of~$Q'_{p,\mathcal{A},V}[w]=0$ in~$\Omega$.} \end{Def} \begin{comment} \begin{Rem} \emph{A compact set is \emph{smooth} if its interior is nonempty and has a smooth boundary.} \end{Rem} \end{comment} \begin{Thm} Let $Q_{p,\mathcal{A},V}\geq 0$ in~$C_c^{\infty}(\Omega)$ with~$\mathcal{A}$ satisfying Assumption~\ref{ass8}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Then for every~$x_{0}\in\Omega$, the equation~$Q'_{p,\mathcal{A},V}[w]=0$ has a solution~$u\in \mathcal{M}_{\Omega;\{x_{0}\}}$. \end{Thm} \begin{proof} Let~$\{\omega_{i}\}_{i\in\mathbb{N}}$ be a Lipschitz exhaustion of~$\Omega$ with~$x_{0}\in\omega_{1}$. We define the inradius of~$\omega_{1}$ as $r_{1}\triangleq\sup_{x\in\omega_{1}}\mathrm{d}(x,\partial\omega_{1})$, and consider the open sets $U_{i}\triangleq\omega_{i}\setminus \overline B_i=\omega_{i}\setminus \overline{B_{r_{1}/(i+1)}(x_{0})},$ for $i\in\mathbb{N}$. Fix a point~$x_{1}\in U_{1}$. Note that~$\{U_{i}\}_{i\in\mathbb{N}}$ is an exhaustion of~$\Omega\setminus\{x_{0}\}.$ Pick a sequence of nonnegative functions $f_{i}\in C^{\infty}_{c}\left(B_{i}(x_{0})\setminus \overline{B_{i+1}(x_{0})}\right)\setminus\{0\}$, for all~$i\in\mathbb{N}$. The principal eigenvalue $$\lambda_{1}\left(Q_{p,\mathcal{A},V+1/i};U_{i}\right)>0,$$ because~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$. Then, by virtue of Theorem \ref{maximum}, for every~$i\in\mathbb{N}$, there exists a positive solution~$v_{i}\in W^{1,p}_{0}(U_{i})$ of~$Q'_{p,\mathcal{A},V+1/i}[u]=f_{i}$ in~$U_{i}$. 
The Harnack convergence principle yields a subsequence of ~$\big\{u_{i}\triangleq v_{i}(x)/v_{i}(x_{1})\big\}_{i\in\mathbb{N}}$ converging locally uniformly in $\Omega\setminus\{x_{0}\}$ to a positive solution~$u$ of $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega\setminus\{x_{0}\}$. We claim that~$u\in\mathcal{M}_{\Omega;\{x_{0}\}}$. Consider any smooth compact subset~$K$ of~$\Omega$ with~$x_{0}\in\mathring{K}$ and any positive supersolution~$v\in C\big(\Omega\setminus\mathring{K}\big)$ of~$Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega\setminus K$ with~$u\leq v$ on~$\partial K$. For an arbitrary~$\delta>0$, there exists~$i_{K}\in\mathbb{N}$ such that~$\supp{f_{i}}\Subset K$ for all~$i\geq i_{K}$ and~$u_{i}\leq (1+\delta)v$ on~$\partial\left(\omega_{i}\setminus K\right)$. The weak comparison principle (Theorem~\ref{thm_wcp}) gives~$u_{i}\leq (1+\delta)v$ in~$\omega_{i}\setminus K$. Then by letting~$i\rightarrow\infty$ and then $\delta\rightarrow 0$, we obtain~$u\leq v$ in~$\Omega\setminus K$. \end{proof} \begin{Def} \emph{A function~$u\in \mathcal{M}_{\Omega;\{x_{0}\}}$ is called a \emph{minimal positive Green function of~$Q'_{p,\mathcal{A},V}$ in~$\Omega$ with singularity} at~$x_{0}$, if~$u$ admits a nonremovable singularity at~$x_{0}$. We denote such a Green function by~$G^{\Omega}_{\mathcal{A},V}(x,x_{0})$.} \end{Def} \begin{Rem} \emph{See \cite{PinchoverGreen, PinchoverGreen2,Pinchoverlinear} for more on minimal positive Green functions of linear elliptic operators of the second order.} \end{Rem} \subsubsection{Further characterization of criticality} We characterize the criticality and subcriticality of $Q_{p,\mathcal{A},V}$ in terms of the existence of a global minimal positive solution and the existence of a Green function. \begin{Thm} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. Consider the nonnegative functional~$Q_{p,\mathcal{A},V}$. Then~$Q_{p,\mathcal{A},V}$ is subcritical in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$ if and only if the equation $Q'_{p,\mathcal{A},V}[u]=0$ does not admit a global minimal positive solution in~$\Omega$. Moreover, a ground state of~$Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$ is a global minimal positive solution of~$Q'_{p,\mathcal{A},V}[u]=0$ in~$\Omega$. \end{Thm} \begin{proof} The proof is similar to that of \cite[Theorem 5.9]{Pinchover} and hence omitted. \end{proof} \begin{Thm} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2}, and let $V\in M^{q}_{{\rm loc}}(p;\Omega)$. Assume that the functional~$Q_{p,\mathcal{A},V}$ is nonnegative in $\Omega} \def\Gx{\Xi} \def\Gy{\Psi$, and fix $u\in \mathcal{M}_{\Omega;\{x_{0}\}}$ for some $x_{0}\in\Omega$. \begin{enumerate} \item[$(1)$] If~$u$ has a removable singularity at $x_0$, then~$Q_{p,\mathcal{A},V}$ is critical in~$\Omega$. \item[$(2)$] If~$p\leq n$ and~$u$ has a nonremovable singularity at~$x_{0}$, then~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$. \item[$(3)$] If~$p> n$,~$u$ has a nonremovable singularity at~$x_{0},$ and~$\lim_{x\rightarrow x_{0}}u(x)=c$ for some positive constant~$c,$ then~$Q_{p,\mathcal{A},V}$ is subcritical in~$\Omega$. \end{enumerate} \end{Thm} \begin{proof} The proof is similar to that of \cite[Theorem 5.10]{Pinchover} and hence omitted. \end{proof} \subsection{How large can Hardy-weights be?} The following theorem is a generalization of \cite[theorems~3.1 and 3.2]{Kovarik}. \begin{Thm} Let~$\mathcal{A}$ satisfy Assumptions~\ref{ass8} and \ref{ass2} and let~$V\in M^{q}_{{\rm loc}}(p;\Omega)$. 
Assume that~$Q_{p,\mathcal{A},V}$ is nonnegative in~$\Omega$. For $K\Subset\Omega$, let $\phi\in W^{1,p}_{{\rm loc}}(\Omega\setminus K)$ be a positive solution of the equation $Q'_{p,\mathcal{A},V}[u]=0$ in $\Omega\setminus K$ of minimal growth in a neighborhood of infinity in $\Omega$. Then for every~$K\Subset\mathring{\mathcal{K}}\Subset\Omega$ and every Hardy-weight $W$ of $Q_{p,\mathcal{A},V}$ in $\Omega\setminus K$, we have $$\int_{\mathcal{K}^{c}}W|\phi|^{p}\,\mathrm{d}x<\infty.$$ \end{Thm} \begin{proof} Let $K\Subset\mathring{\mathcal{K}}\Subset\Omega$, and let $\tilde V\in C_0^\infty(\mathring{\mathcal{K}})$ be a nonnegative function such that $Q_{p,\mathcal{A},V-\tilde V}$ is critical in $\Omega$. There exists a null-sequence $\{\varphi_{k}\}_{k\in\mathbb{N}}\subseteq C_c^{\infty}(\Omega)$ for $Q_{p,\mathcal{A},V-\tilde V}$ in $\Omega$ converging locally uniformly to its ground state $\vgf$. So, $\vgf_{k}\geq 0$,~$\Vert\vgf_{k}\Vert_{L^{p}(K)}=1$, and~$\lim_{k\rightarrow\infty}Q_{p,\mathcal{A},V-\tilde V}[\vgf_{k}]\!=\!0$. Let $f\in C^{1}(\Omega)$ satisfy $0\leq f\leq 1$,~$f|_{K} = 0$, $f|_{\mathcal{K}^{c}}=1$, and~$|\nabla f(x)|_{\mathcal{A}}\leq C_{0}$ for some constant~$C_{0}$ and all~$x\in\Omega$. Then~$Q_{p,\mathcal{A},V}[f\vgf_{k}]\geq \int_{K^{c}}W|f\vgf_{k}|^{p}\,\mathrm{d}x\geq \int_{\mathcal{K}^{c}}W|\vgf_{k}|^{p}\,\mathrm{d}x$. Moreover, \begin{eqnarray*} \int_{\mathcal{K}^{c}}W|\vgf_{k}|^{p}\,\mathrm{d}x \!&\leq&\! Q_{p,\mathcal{A},V}[f\vgf_{k}]\!=\!\int_{\mathcal{K}^{c}} \!\!\! (|\nabla\vgf_{k}|_{\mathcal{A}}^{p} \!+\! V|\vgf_{k}|^{p})\,\mathrm{d}x+\int_{\mathcal{K}\setminus K} \!\!\!( |\nabla(f\vgf_{k})|_{\mathcal{A}}^{p}+V|f\vgf_{k}|^{p})\!\,\mathrm{d}x\\ \!&\leq&\!\!Q_{p,\mathcal{A},V-\tilde V}[\vgf_{k}]+2\!\int_{\mathcal{K}}\!(|V|+\tilde V) |\vgf_{k}|^{p}\!\,\mathrm{d}x +C\|\vgf_k\|^p_{W^{1,p}(\mathcal{K})}. \end{eqnarray*} Since the null-sequence $\{\varphi_{k}\}$ is locally bounded in $L^\infty(\Omega)\cap W^{1,p}(\Omega)$, it follows that $\int_{\mathcal{K}^{c}}W|\vgf_{k}|^{p}\,\mathrm{d}x < C_1$ for some constant~$C_{1}$ independent of~$k$. Consequently, the Fatou Lemma implies that $\int_{\mathcal{K}^{c}}W|\vgf|^{p}\,\mathrm{d}x\leq C_1$. Note that the ground state $\vgf$ is a positive solution of $Q'_{p,\mathcal{A},V}[u]=0$ of minimal growth at infinity of $\Omega$, hence, $\phi\asymp \vgf$ in $\mathcal{K}^c$. Thus, $\int_{\mathcal{K}^{c}}W|\phi|^{p}\,\mathrm{d}x< \infty$. \end{proof} \subsection*{Data Availability Statement} Data sharing not applicable to this article as no datasets were generated or analysed during the current study. \subsection*{Acknowledgements} This paper is based on the thesis of the first author for the degree of Master of Science in Mathematics at the Technion-Israel Institute of Technology under the supervision of Professors Yehuda Pinchover and Antti Rasila. Y.H. and A.R. gratefully acknowledge the generous financial help of NNSF of China (No. 11971124) and NSF of Guangdong Province (No. 2021A1515010326). Y.P. acknowledges the support of the Israel Science Foundation (grant 637/19) funded by the Israel Academy of Sciences and Humanities. {\small
\section{Toy example details} \label{sec:example_details} Material herein is adapted from \citep[task 2]{lee2018structural}. \subsection{Structural equation model} Generating model for exogenous variables: \begin{align*} \{P(U_{Z_t} = 1) = 0.6 &\mid t = 0,\ldots,T \}\\ \{P(U_{X_t} = 1) = 0.11 &\mid t = 0,\ldots,T \}\\ \{P(U_{XY_t} = 1) = 0.51 &\mid t = 0,\ldots,T \}\\ \{P(U_{Y_t} = 1) = 0.15 &\mid t = 0,\ldots,T \} \end{align*} In the \acro{ccb} setting, these probabilities remain the same for all time-slices indexed by $\{0,\ldots,t,\ldots,T\}$, as shown by the functions in $\mathcal{M}_i$, when $t>0$: \begin{align} f_Z(u_{Z_t},z_{t-1}) &= u_{Z_t} \wedge z_{t-1} \label{eq:sem_t1a}\\ f_X(z_t,u_{X_t},u_{XY_t},x_{t-1}) &= u_{X_t} \oplus u_{XY_t} \oplus z_t \oplus x_{t-1}\\ f_Y(x_t,u_{Y_t},u_{XY_t}, y_{t-1}) &= 1 \oplus u_{Y_t} \oplus u_{XY_t} \oplus x \wedge y_{t-1}. \label{eq:sem_t1b} \end{align} When $t=0$ we use the original set of structural equation models \citep[appendix D]{lee2018structural}: \begin{align} f_Z(u_{Z_0}) &= u_{Z_0} \label{eq:sem_t0a}\\ f_X(z_0,u_{X_0},u_{XY_0}) &= u_{X_0} \oplus u_{XY_0} \oplus z_0 \\ f_Y(x_0,u_{Y_0},u_{XY_0}) &= 1 \oplus u_{Y_0} \oplus u_{XY_0} \oplus x_0 \label{eq:sem_t0b} \end{align} where $\oplus$ is the exclusive disjunction operator and $\wedge$ is the logical conjunction operator (i.e. `and'). The biggest difference between \crefrange{eq:sem_t1a}{eq:sem_t1b} and \crefrange{eq:sem_t0a}{eq:sem_t0b} is that the former has an explicit dependence on the past. Depending on the implemented intervention at $t-1$ one or both of $\{z_{t-1}, x_{t-1}\}$ will be fixed to the value(s) in $I_{t-1}$. \subsection{Intervention sets} \label{sec:arms} ``The task of finding the best arm among all possible arms can be reduced to a search within the \textsc{mis}s'' \citep{lee2018structural}. The \acro{pomis} is given by $\mathbb{A} = \{\DO{X}{0}, \DO{X}{1}, \DO{Z}{0}, \DO{Z}{1}\}$. \begin{table}[ht!] \centering \caption{Assume binary domains i.e. $D(X) = \{0,1\}$ and $D(Z) = \{0,1\}$. The first arm (ID 0) does nothing i.e. corresponds to the intervention on the empty set $\textrm{do}(\varnothing)$. Arms which belong to the \acro{pomis} arms are highlighted. \label{table:arms}} \begin{tabular}{ccccc} \toprule \multirow{2}{*}{Arm ID} & \multirow{2}{*}{Domain} & \multicolumn{2}{r}{Interventions} \\ \cmidrule(lr){3-5} & & $\textrm{do}(X = x)$& $\textrm{do}(Z = z)$ & $\textrm{do}(X = x, Z = z)$ \\ \midrule 0 & $\nexists$ & & & \\ \rowcolor{ForestGreen!40} 1 & $D(X)$ & $x=0$& &\\ \rowcolor{ForestGreen!40} 2 & $D(X)$ & $x=1$ & & \\ \rowcolor{ForestGreen!40} 3 & $D(Z)$ & &$z=0$ & \\ \rowcolor{ForestGreen!40} 4 & $D(Z)$ & &$z=1$ & \\ 5 & $D(X) \times D(Z)$ & & & $x=0,z=0$\\ 6 & $D(X) \times D(Z)$ & & & $x=0,z=1$\\ 7 & $D(X) \times D(Z)$ & & & $x=1,z=0$\\ 8 & $D(X) \times D(Z)$ & & & $x=1,z=1$\\ \bottomrule \end{tabular} \end{table} \subsection{Additional results} The optimal intervention sequence is given by $\{Z_0 = z_0, X_1= x_1, Z_1=z_1,X_1=x_1,Z_1=z_1 \}$. Using the example domain, this translates to $\{0,1,0,1,0\}$. \begin{figure}[ht!] 
\centering \includegraphics[width=0.8\linewidth]{figures/probability_i_0_5_UCB_10000_rounds_v1.pdf} \caption{Results using a Kullback-Leibler Upper-confidence bound (KL-UCB) policy $\pi$ \citep{cappe2013kullback}.\label{fig:UCB_results}} \end{figure} \subsection{Restricting the search space} \label{sec:space} The search space for the problem in \cref{eq:dcgo} grows exponentially with $|\mat{X}_t|$ thus slowing down the identification of the optimal intervention when $\mathcal{G}$ includes more than a few nodes. Indeed, a naive approach to find $\mat{X}_{s,t}^\star$ at $t=0, \dots, T$ would be to explore the $2^{|\mat{X}_t|}$ sets in $\mathcal{P}(\mat{X}_t)$ at every $t$ and keep $2^{|\mat{X}_t|}$ models for the objective functions. In the static setting, \acro{cbo} reduces the search space by exploiting invariances in the interventional space \cite{lee2018structural} to identify a subset of intervention sets $\mathbb{M} \subseteq \mathcal{P}(\vec{X})$ worth exploring. In our dynamic setting, the objective functions change at every time step depending on the previously implemented interventions and one would need to recompute $\mathbb{M}$ at every $t$. However, it is possible to show that, given Assumptions (\ref{assumptions}), the search space remains constant over time. Denote by $\mathbb{M}_t$ the set $\mathbb{M}$ at time $t$ and let $\mathbb{M}_0$ represent the set at $t=0$ which corresponds to $\mathbb{M}$ computed in \acro{cbo}. For $t>0$ it is possible to prove that: \begin{proposition}{\textbf{\acro{mis} in time.}} If Assumptions (\ref{assumptions}) are satisfied, $\mathbb{M}_t = \mathbb{M}_0$ for $t>0$. \end{proposition} \subsection{Related Work} \label{sec:related_work} \nd{Have not touched this section yet. \textbf{Dynamic Optimization} Optimization in dynamic environments has studied in the context of evolutionary algorithms \cite{fogel1966artificial, goldberg1987nonstationary}. More recently, other optimization techniques \cite{pelta2009simple, trojanowski2009immune, de2006stochastic} have been adapted to dynamic settings, see e.g. \cite{cruz2011optimization} for a review. Focusing on \acro{bo}, the literature on dynamic settings \cite{azimi2011dynamic, bogunovic2016time, abo} is limited. The dynamic \acro{bo} framework closest to this work is given by \citet{abo} and focuses on functions defined on continuous spaces that follow a more complex behaviour than a simple Markov model. \acro{abo} treats the inputs as fixed and not as random variables, thereby disregarding their temporal evolution and, more importantly, breaking their causal dependencies. \textbf{Causal Optimization} Causal \acro{bo} \citep[\acro{cbo},][]{cbo} focuses instead on the causal aspect of optimization and solves the problem of finding an optimal intervention in a \acro{dag} by modelling the intervention functions with single \acro{gp}{s} or a multi-task \acro{gp} model \cite{daggp}. \acro{cbo} disregards the existence of a temporal evolution in both the inputs and the output variable, treating them as i.i.d.\xspace overtime. While disregarding time significantly simplifies the problem, it prevents the identification of an optimal intervention at every $t$. 
\textbf{Bandits and \acro{rl}} In the broader decision-making literature, causal relationships have been considered in the context of bandits \citep{bareinboim2015bandits, lattimore2016causal, lee2018structural, lee2019structural} and reinforcement learning \citep{lu2018deconfounding, buesing2018woulda, foerster2018counterfactual, zhang2019near, madumal2020explainable}. In these cases, actions, or arms, correspond to interventions on a causal graph where there exist complex links between the agent’s decisions and the received rewards. While dynamic settings have been considered in acausal bandit algorithms \citep{besbes2014stochastic, villar2015multi, wu2018learning}, causal \acro{mab} algorithms have focused on static settings. Dynamic settings are instead considered by \acro{rl} algorithms and formalized through Markov decision processes (\acro{mdp}). In the current formulation, \acro{dcbo} does not consider a \acro{mdp} as we do not have a notion of \emph{state} and therefore do not require an explicit model of its dynamics. The system is fully specified by the causal model. As in \acro{bo}, we focus on identifying a set of time-indexed optimal actions rather than an optimal policy. We allow the agent to perform explorative interventions that do not lead to state transitions. More importantly, differently from both \acro{mab} and \acro{rl}, \emph{we allow for the integration of both observational and interventional data}. An expanded discussion on the reason why \acro{dcbo} should be used and the links between \acro{dcbo}, \acro{cbo}, \acro{abo} and \acro{rl} is included in the appendix (\cref{sec:connections}). } \subsection{Transferring information between causal bandits} \label{sec:transfer} Herein we describe how to express the reward distribution for trial $i$ as a function of the intervention(s) implemented at the previous trial $i-1$. The key to this section is the relaxation of assumption (3). We seek an estimate $\widehat{\mat{F}}_{\tau}$ for $\mat{F}_{\tau}$ (the true SEM), and $\widehat{P}(\mat{U}_{\tau})$ for $P(\mat{U}_{\tau})$. Thus, if we have $\widehat{\mat{F}}_{\tau}$, we can relay information about past interventions to \acro{scm-mab} $i$ and thus enable the present reward distribution to take into account those interventions. Recall that $\mat{F}_{\tau} \triangleq \{ V_j = f_j(\text{Pa}(j), U_j) \mid j \in \mat{V}_{\tau} \}$. Hence, for $i=0$ e.g. we seek function estimates $\{\widehat{f}_{Z_0},\widehat{f}_{X_0},\widehat{f}_{Y_0} \}$. Because $\{ D(V) \subseteq \mathbb{Z} \mid V \in \mat{V}\}$ we model all functions in $\mathcal{M}$ as independent probability mass functions (\acro{pmf}). \paragraph{Simulation.} Recall that $P(Y \mid \mat{V} = \mat{v})$ is an observational distribution, whose samples are contained in $\dataset^O$. In contrast, $P(Y \mid \DO{\mat{V}}{\mat{v}} )$ is an interventional distribution, found by fixing variables $\mat{V}$ to $\mat{v}$ in $\mathcal{G}(\mathcal{M})$ -- its samples are contained in $\dataset^I$. As we are operating in a discrete world, we do not need to approximate any intractable integrals (as is required in e.g. \citep{DCBO, cbo}) to compute the effect of interventions. Rather, by assuming the availability of $\dataset^O$ and using the do-calculus, we are at liberty to estimate $\mat{F}$ by approximating the individual \acro{pmf}{}s that arise from applying the do-calculus (see \citep[Theorem 3.4.1]{pearl2000causality}). 
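As a minimal illustration (not a full implementation), such frequency-based \acro{pmf} estimates from $\dataset^O$ could be sketched as follows for the toy \acro{scm} of \cref{sec:example_details} at $t=0$; the variable names and the simple empirical-frequency estimator below are illustrative assumptions, and the adjustments required by the unobserved confounder $U_{XY}$ are omitted here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_observational(n):
    # Toy SCM at t = 0: exogenous Bernoulli noise and XOR mechanisms,
    # with the probabilities and structural equations of the toy example.
    u_z = rng.binomial(1, 0.60, n)
    u_x = rng.binomial(1, 0.11, n)
    u_xy = rng.binomial(1, 0.51, n)
    u_y = rng.binomial(1, 0.15, n)
    z = u_z
    x = u_x ^ u_xy ^ z
    y = 1 ^ u_y ^ u_xy ^ x
    return z, x, y

def empirical_pmfs(z, x, y):
    # Frequency estimates of P(Z = 1), P(X = 1 | Z) and P(Y = 1 | X, Z).
    p_z = z.mean()
    p_x_given_z = {zv: x[z == zv].mean() for zv in (0, 1)}
    p_y_given_xz = {(xv, zv): y[(x == xv) & (z == zv)].mean()
                    for xv in (0, 1) for zv in (0, 1)}
    return p_z, p_x_given_z, p_y_given_xz

z, x, y = sample_observational(10_000)
print(empirical_pmfs(z, x, y))
\end{verbatim}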
Consequently we can build a `simulator' $\widehat{\mat{F}}_{\tau}$ using $\dataset^O$ (which concerns the whole of $\mathcal{M}$, not just the window $\tau$). $\mathcal{D}^{I}$ is very scarce because playing an arm does not yield an observed target value but only a reward (hence we cannot exploit the actions taken during horizon $N_i$). The only interventional data that is available, at each $i$, to each \acro{scm-mab} $\left\langle \mathcal{M}_i, Y_i \right\rangle$, is the implemented intervention $I_{i-1}$. The \acro{ccb} method is graphically depicted in \cref{fig:mab_method}. \section{Introduction} \label{sec:introduction} A dynamical system evolves in time. Examples include weather, financial markets, as well as unmanned aerial vehicles \citep{gauthier2021next}. One practical goal is to take decisions within those systems, such as deciding which financial instrument to buy, one day after another. The bandit (\acro{mab}) paradigm \citep{robbins1952some,chernoff1959sequential} can help with that. In that paradigm, an agent and an environment interact sequentially \citep{lattimore2020bandit}; the agent picks an action and receives a reward from the environment. The agent continues like so, usually for a finite number of plays, with the goal of maximising the total reward. The challenge in this problem stems from the trade-off between exploiting actions with known rewards and exploring actions with unknown rewards. Recently, many studies have addressed the case in which there is a non-trivial dependency structure between the arms. One such direction presumes that the dependency structure is modelled explicitly through causal graphs \citep{lee2018structural, lee2019structural, lattimore2016causal, lu2020regret}. We extend that paradigm to also account for the causal temporal dynamics of the system. \textsc{mab}s already constitute sequential decision-making paradigms, but here we expand that idea to cover \emph{chronological} MABs: where the reward distribution is conditional upon the actions taken by earlier MABs (see \cref{fig:sequential_mabs}). We are not considering a Markov Decision Process (\acro{mdp}) -- we have no explicit notion of state and consequently do not maintain a model of the state dynamics. This type of transfer learning in causal \textsc{mab}s was also studied in \citep{zhang2017transfer,azar2013sequential} but there the authors transfer information between unrelated tasks whereas we are interested in transfer when all agents operate in the same (possibly non-stationary) dynamical system. Ours, instead, is more similar to the \emph{restless bandit} \citep{whittle1988restless,guha2010approximation} problem where rewards vary with time (unlike the standard bandit setting where they are fixed but unknown). \paragraph{Example.} Consider a dynamical environment $\mathcal{F}$\footnote{We will somewhat abuse standard notation for dynamical systems theory.}, such as a country subject to the COVID-19 pandemic. The highly transient nature of the virus \citep{mandal2020model} necessitates multiple chronological clinical trials, the \emph{start} of each indexed by $t_i$ (see \cref{fig:sequential_mabs}), to find a treatment or vaccine. Suppose we conduct clinical trial $i$ which has $K$ different treatments $\mathcal{A} \triangleq \{a_1,\ldots,a_K\}$ of unknown efficacy for COVID-19 and we have $N_i$ patients in our study group. 
Patients $\{0,\ldots,n,\ldots, N_i\}$ arrive sequentially, and we must decide on a treatment $A_n \in \mathcal{A}$ to administer to each new patient. \input{figures/fig_sections/fig_intro} To make this decision, we could learn from how the previous choices of treatments fared for the previous patients. After a sufficient number of trials, we may have a reasonable idea of which treatment is most effective, and from thereon, we could administer that treatment to all patients. However, the exploration phase may take a long time and many patients may receive a sub-optimal treatment during that period. But we know that an earlier, similar trial $i-1$ has just concluded, and because we are aware of the causal nature of our treatments and their evolution over time, we can condition our trial $i$ on the lessons learned, and the actions taken, in trial $i-1$, before we start ours. There are two purposes to this: (1) the additional information may aid the discovery of the most effective treatment in our trial and (2) it may also show that the optimal intervention changes over time owing to the highly non-stationary environment of real systems, whereas a typical assumption for standard \textsc{MAB}s is a stationary reward distribution \citep{lattimore2020bandit}. The time between two consecutive trials $i$ and $i-1$ is $\mathrm{d}t_i \triangleq t_i - (t_{i-1}+N_{i-1}), \forall i>0$. \paragraph{Contributions.} The chronological causal bandit (\acro{ccb}) extends the \acro{scm-mab} by \citet{lee2018structural} by conditioning a \acro{scm-mab} on prior causal bandits played in the same environment $\mathcal{F}$. The result of this is a piece-wise stationary model which offers a novel approach for causal decision making under uncertainty within dynamical systems. The reward process of the arms is non-stationary on the whole, but stationary on intervals \citep{yu2009piecewise}. Specifically, past optimal interventions are transferred across time, allowing the present \acro{mab} to weigh the utility of those actions in the present game. \subsection{Preliminaries} \label{sec:prelims} \paragraph{Notation.} Random variables are denoted by upper-case letters and their values by lower-case letters. Sets of variables and their values are denoted by bold upper-case and lower-case letters respectively. We make extensive use of the do-calculus (for details see \citep[\S 3.4]{pearl2000causality}). Samples (observational) drawn from a system or process unperturbed over time are contained in $\mathcal{D}^O$. Samples (interventional) drawn from a system or process subject to one or more interventions are denoted by $\mathcal{D}^I$. The domain of a variable is denoted by $D(\cdot)$ where e.g. $x \in D(X)$ and $\mat{x} \in D(\mat{X}) \equiv x_1 \times x_2 \times \ldots \times x_{|\mat{x}|} \in D(X_1) \times D(X_2) \times \ldots \times D(X_{|\mat{x}|})$. \paragraph{Structural causal model.} Structural causal models (SCMs) \citep[ch. 7]{pearl2000causality} are used as the semantic framework to represent an underlying environment. For the exact definition as used by Pearl see \citep[def. 7.1.1]{pearl2000causality}. Let $\mathcal{M}$ be a \acro{scm} parametrised by the quadruple $\left\langle \mat{U},\mat{V}, P(\mat{U}),\mat{F} \right\rangle$. Here, $\mat{U}$ is a set of exogenous variables which follow a joint distribution $P(\mat{U})$ and $\mat{V}$ is a set of endogenous (observed) variables. 
Within $\mat{V}$ we distinguish between two types of variables: manipulative (treatment) and target (output) variables (always denoted by $Y$ in this paper). Further, endogenous variables are determined by a set of functions $\mat{F} \subset \mathcal{F}$. Let $\mat{F} \triangleq \{f_i\}_{V_i \in \mat{V}}$ \citep[\S 1]{lee2020characterizing} s.t. each $f_i$ is a mapping from (the respective domains of) $U_i \cup P\!A_i$ to $V_i$ -- where $U_i \subseteq \mat{U}$ and $P\!A_i \subseteq \mat{V} \setminus V_i$. Graphically, each SCM is associated with a causal diagram (a directed acyclic graph, \acro{dag} for short) $\mathcal{G} = \left\langle \mat{V}, \mat{E} \right\rangle$ where the edges are given by $\mat{E}$. Each vertex in the graph corresponds to a variable and the directed edges point from members of $P\!A_i$ and $U_i$ toward $V_i$ \citep[ch. 7]{pearl2000causality}. A directed edge $V_i \rightarrow V_j \in \mat{E}$ exists if $V_i \in P\!A_j$ (i.e. $V_j$ is a child of $V_i$). A bidirected edge between $V_i$ and $V_j$ occurs if they share an unobserved confounder, which is to say $\mat{U}_i \cap \mat{U}_j \neq \varnothing$ \citep{lee2018structural}. Unless otherwise stated, from hereon, when referring to $\mat{V}$ we are implicitly considering $\mat{V} \setminus \{Y\}$ -- i.e. the manipulative variables not including the outcome variable. Finally, the fundamental do-operator $\DO{\mat{V}}{\mat{v}}$ represents the operation of fixing a set of endogenous variable(s) $\mat{V}$ to constant value(s) $\mat{v}$ irrespective of their original mechanisms. Throughout we do not consider graphs with non-manipulative variables. For a more incisive discussion on the properties of SCMs we refer the reader to \citep{pearl2000causality, bareinboim2016causal}. \paragraph{Multi-armed bandit.} The MAB setting \citep{robbins1952some} entertains a discrete sequential decision-making scenario in which an agent selects an action or `arm' $a \in \mathcal{A}$ according to a policy $\pi$, and receives a stochastic reward $Y(a)$, emanating from an unknown distribution particular to each arm. The expectation of the reward is given by $\mu_a$. The goal of the agent is to optimise the arm selection sequence and thereby maximise the expected reward $\sum ^N _{n=0} \expectation{\pi}{Y(A_n)}$ after $N$ rounds, where $\expectation{\pi}{\cdot}$ is the expectation under the given policy and $A_n$ is the arm played on the $n^{th}$ round. We will use a similar performance measure, the cumulative regret \citep{lee2018structural} $R_N = N \mu^* - \sum^N_{n=1} \expectation{\pi}{Y(A_n)}$ where the max reward is $\mu^* = \max_{a \in \mathcal{A}} \mu_a$. Using the regret decomposition lemma \citep[Lemma 4.5]{lattimore2020bandit}, we can write this in the form \begin{equation} \label{eq:standard_regret} R_N = \sum^K_{a=1} \Delta_a \expectation{\pi}{\#_a(N)} \end{equation} where each arm's gap from the best arm (``suboptimality gap'' \citep[\S 4.5]{lattimore2020bandit}) is $\Delta_a = \mu^* - \mu_a$ and $\#_a(N)$ is the total number of times arm $a$ was played after $N$ rounds. \begin{wrapfigure}{r}{0.35\textwidth} \vspace{-0.5cm} \centering \includegraphics[width=0.35\textwidth]{figures/experimental_graph-eps-converted-to.pdf} \caption{Toy \acro{scm} used throughout this paper, based on \citep[Fig. 
3(c)]{lee2018structural} but with the important difference that this \acro{scm} is propagated in time.} \vspace{-0.5cm} \label{fig:example_scm} \end{wrapfigure} \paragraph{Connecting SCMs to MABs.} Echoing the approach taken by \citet[\S2]{lee2018structural}, using the notation and concepts introduced in \cref{sec:prelims}, let $\mathcal{M}$ be an SCM parametrised by $\left\langle \mat{U},\mat{V}, P(\mat{U}),\mat{F} \right\rangle$ where $Y \in \mat{V}$ is, as noted, the reward variable. The \emph{arms} of the bandit are given by the set $\mathcal{A} = \{ \mat{a} \in D(\mat{A}) \mid \mat{A} \subseteq \mathcal{P}(\mat{V} \setminus \{Y\}) \}$\footnote{For example if $\mat{A} = \{Z,X\}$ then $D(\mat{A}) = D(Z)\times D(X)$. If $D(Z) = D(X) =[0,1]$ then $D(\mat{A}) = \{(0,0), (0,1),(1,0),(1,1) \}.$}. This is the set of all possible interventions on endogenous variables except the reward variable \citep[\S2]{lee2018structural}. Each arm is associated with a reward distribution $p(Y \mid \mathrm{do}(\mat{a}))$ where the mean reward $\mu_{\mat{a}}$ is $\expectation{\pi}{Y \mid \mathrm{do}(\mat{a})}$. This is the \acro{scm-mab} setting \citep{lee2018structural}, fully represented by the tuple $\left\langle \mathcal{M}, Y \right\rangle$. As noted by the authors, an agent facing a \acro{scm-mab} $\left\langle \mathcal{M}, Y \right\rangle$ intervenes (plays arms) with knowledge of $\mathcal{G}(\mathcal{M})$ and $Y$ but does not have access to the structural equations model $\mat{F}$ and the joint distribution over the exogenous variables $P(\mat{U})$. \paragraph{Causality across time.} Taking inspiration from \citet[\S 2]{DCBO}, we consider causality in time, manifested by propagating a \acro{dag} in time, and connecting each time-slice \acro{dag} with directed edges as shown in \cref{fig:example_scm}. By doing this we are invariably considering a dynamic Bayesian network (\acro{dbn}) \citep{koller2009probabilistic}. As we are making interventions on time-slices of the DBN, we introduce notation to aid the exposition of the method. \begin{definition} Let $\mathcal{M}_i$ be the \acro{scm} at time step $t_i$ defined as $\mathcal{M}_i = \left\langle \mat{U}_{\tau},\mat{V}_{\tau}, P(\mat{U}_{\tau}),\mat{F}_{\tau} \right\rangle$ for $i>0$. The temporal window covered by the \acro{scm} spans $\tau \triangleq t_{i-1}:t_i$ i.e. taking into account only the most recent time-slice, and the actions taken therein. It is also possible to increase the size of $\tau$ to include the entire history i.e. $0:t_i$\footnote{The choice of window size is a difficult one. More information is typically better but we may also subject the model to `stale' information i.e. interventions which are no longer of any relevance or, worse, misleading in the present scenario.} as is done in \cref{fig:mab_method}. \end{definition} \begin{definition} Let $\mathcal{G}_i(\scmt)$ \citep[p. 203]{pearl2000causality} be the induced subgraph associated with $\mathcal{M}_i$. In $\mathcal{G}_i(\scmt)$, following the rules of do-calculus \citep[Theorem 3.4.1]{pearl2000causality}, the intervened variable(s) at $t_{i-1}$ have no incoming edges, i.e. the $t_{i-1}$ time-slice part of $\mathcal{G}_i(\scmt)$ has been mutilated in accordance with the implemented intervention at $t_{i-1}$. \end{definition} \section{Conclusion} We propose the chronological causal bandit (\acro{ccb}) algorithm, which transfers information between causal bandits played in the dynamical system at an earlier point in time. 
Some initial findings are demonstrated on a simple toy example where we show that taking the system dynamics into account has a profound effect on the final action selection. Further, whilst in this example we have assumed that $\text{d}t_i$ is the same for each trial, it remains a large assumption that will be studied further in future work. There are many other interesting avenues for further work such as the optimal length of horizon $N_i$ as well as determining the best time $t^*_i$ to play a bandit (i.e. start a trial). Moreover, the current framework allows for confounders to change across time-steps -- something we have yet to explore. \section{Chronological Causal Bandits} The \acro{scm-mab} is particular to one graph, in which we seek to minimise $R_N$. We instead seek a sequence of interventions which minimise $R_{N_i}$ at each time-step $t_i$ (i.e. the start of the trial)\footnote{For clarity: each trial contains multiple rounds, summarised in $N_i$.} of a piece-wise stationary \acro{scm-mab} as set out in the next paragraph. \input{sections/problem_setup} \input{sections/methodology_recursion} \input{figures/fig_sections/dcbo_visual} \section{Experiments} \label{sec:experiments} We demonstrate some empirical observations on the toy example used throughout, shown in \cref{fig:example_scm}. We consider the first five time-slices (trials) as shown in \cref{fig:mab_method}. The reward distribution, under the \acro{pomis}, for each slice is shown in \cref{fig:reward_distros} (these distributions are found conditional upon the optimal intervention being \emph{implemented} in the preceding trial as shown in \cref{fig:mab_method}). For the true \textsc{sem} see \crefrange{eq:sem_t1a}{eq:sem_t1b} and \crefrange{eq:sem_t0a}{eq:sem_t0b}. \Cref{fig:reward_distros} shows that the system is in effect oscillating in the reward distribution. This is because the optimal intervention changes in-between trials. \input{figures/fig_sections/experimental_figure.tex} The horizon for each trial was set to 10,000. We used two common \acro{mab} solvers: Thompson sampling (TS, \citep{thompson1933likelihood}) and KL-UCB \citep{cappe2013kullback}. Displayed results are averaged over 100 replicates of each experiment shown in \cref{fig:results}. We investigate the cumulative regret (CR) and the optimal arm selection probability at various instances in time. For trial $i=0$, \acro{ccb} and \acro{scm-mab} are targeting the same reward distribution. Consequently they both identify the same optimal intervention $Z_0=z_0$. Come $i=1$, things start to change; having implemented the optimal intervention from $i=0$, \acro{ccb} is now targeting a different set of rewards (see \cref{fig:reward_distros}). The \acro{scm-mab}, being agnostic about past interventions, targets the same reward as previously (blue bars in \cref{fig:reward_distros}). As discussed, this ignores the dynamics of the system; a vaccinated population will greatly alter the healthcare environment, hence to administer the same vaccine (arm 3 at $i=0$) at the next clinical trial ($i=1$), without taking this into account, makes for poor public policy. 
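For reference, the cumulative regret reported below follows directly from the regret decomposition in \cref{eq:standard_regret}; a minimal sketch (our illustration, assuming the true mean rewards are available for evaluation) is:
\begin{verbatim}
# Cumulative regret R_N = sum_a Delta_a * #_a(N), computed from the arms played.
import numpy as np

def cumulative_regret(mu, plays):
    """mu: true mean reward per arm; plays: arm indices chosen over N rounds."""
    mu = np.asarray(mu, dtype=float)
    gaps = mu.max() - mu                            # suboptimality gaps Delta_a
    counts = np.bincount(plays, minlength=len(mu))  # #_a(N)
    return float(np.dot(gaps, counts))

# e.g. cumulative_regret([0.2, 0.5, 0.4], [0, 1, 1, 2, 1])  ->  0.4
\end{verbatim}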
\input{figures/fig_sections/experimental_figure_CR.tex} Consider the CR from the \acro{ccb} at trial $i=1$ \cref{fig:cr_ccb}; it is lower than that of the \acro{scm-mab} in \cref{fig:cr_scmmab}, as \acro{ccb} transfers the preceding intervention to the current causal model, \cref{fig:mab_method}($i=1$), and now finds that $X_1=x_1$ is the optimal intervention (see \cref{sec:example_details} for the full optimal intervention sequence). Let us now turn to \cref{fig:TS_results}; to minimise the regret the agent should be choosing the optimal action almost all of the time. But it is only possible to reduce regret if the algorithm has discovered the arm with the largest mean. In trials $i=1$ and $i=3$ the reward per arm, across the \acro{pomis}, is almost identical. As it stands the agent does not have reasonable statistical certainty that it has found the optimal arm (orange and red curves in \cref{fig:TS_results}). But all arms have almost the same causal effect, which is why the CR in \cref{fig:cr_ccb} is low. \subsection{Dynamic Causal \acro{gp} model} Here we introduce the Dynamic Causal \acro{gp} model that is used as a surrogate model for the objective functions in \cref{eq:dcgo}. The prior parameters are constructed by exploiting the recursion in \eq \eqref{eq: theorem_eq}. At each time step $t$, the agent explores the sets in $\mathbb{M}_t \subseteq \mathcal{P}(\vec{X}_t)$ by selecting the next intervention to be the one maximizing a given acquisition function. The \acro{dcbo} algorithm is shown in \cref{alg:dcbo_alg}. \textbf{Prior Surrogate Model} \label{sec:dc_model} At each time step $t$ and for each $\X_{s,t} \in \mathbb{M}_t$, we place a \acro{gp} prior on the objective function $f_{s,t}(\vec{x}) = \expectation{}{Y_t|\DO{\X_{s,t}}{\vec{x}}, \mathds{1}_{t>0} \cdot I_{t-1}}$. We construct the prior parameters exploiting the recursive expression in \cref{eq: theorem_eq}: \begin{align*} &f_{s,t}(\vec{x}) \sim \mathcal{GP}(m_{s,t}(\vec{x}), k_{s,t}(\vec{x}, \vec{x}')) \text{ where } \\ &m_{s,t}(\vec{x}) = \expectation{}{f_Y^{Y}(\mat{f}^\star) + \widehat{\mathbb{E}}_{}[f_Y^{\text{NY}}(\x^{\acro{PY}}, \mat{i}^{\acro{PY}}, \mat{w})]} \wedge k_{s,t}(\vec{x}, \vec{x}') = k_{\acro{rbf}}(\vec{x}, \vec{x}') + \sigma_{s,t}(\vec{x})\sigma_{s,t}(\vec{x}') \end{align*} with $\sigma_{s,t}(\vec{x}) = \sqrt{\mathbb{V}[f_Y^{Y}(\mat{f}^\star) + \hat{\mathbb{E}}_{}[f_Y^{\text{NY}}(\x^{\acro{PY}}, \mat{i}^{\acro{PY}}, \mat{w})]]}$ and $k_{\acro{rbf}}(\vec{x}, \vec{x}') \coloneqq \exp(-\frac{||\vec{x} -\vec{x}'||^2}{2l^2})$ represents the radial basis function kernel \cite{rasmussen2003gaussian}. $\hat{\mathbb{E}}_{}[f_Y^{\text{NY}}(\x^{\acro{PY}}, \mat{i}^{\acro{PY}}, \mat{w})] = \hat{\mathbb{E}}_{p(\mat{w}|\DO{\X_{s,t}}{\vec{x}}, \mat{i})}[f_Y^{\text{NY}}(\x^{\acro{PY}}, \mat{i}^{\acro{PY}}, \mat{w})]$ represents the expected value of $f_Y^{\text{NY}}(\x^{\acro{PY}}, \mat{i}^{\acro{PY}}, \mat{w})$ with respect to $p(\mat{w} \mid \DO{\X_{s,t}}{\vec{x}}, \mat{i})$, which is estimated via the do-calculus using observational data. The outer expectation in $m_{s,t}(\vec{x})$ and the variance in $\sigma_{s,t}(\vec{x})$ are computed with respect to $p(f_Y^{Y}, f_Y^{\text{NY}})$, which is also estimated using observational data. In this work we model $f_Y^{Y}$, $f_Y^{\text{NY}}$ and all functions in the \acro{scm} by independent \acro{gp}{s}. Both $m_{s,t}(\vec{x})$ and $\sigma_{s,t}(\vec{x})$ can be equivalently written by exploiting the equivalence in \eq \eqref{eq:alter_expression}. 
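To illustrate the structure of this prior covariance, a small numerical sketch (ours, not the authors' implementation; \texttt{sigma\_st} stands in for the Monte Carlo estimate of $\sigma_{s,t}$ described above) is:
\begin{verbatim}
# Prior covariance k_{s,t}(x, x') = k_rbf(x, x') + sigma_{s,t}(x) sigma_{s,t}(x').
import numpy as np

def k_rbf(X, X2, lengthscale=1.0):
    """RBF kernel between the rows of X and X2."""
    d2 = np.sum((X[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def prior_cov(X, X2, sigma_st, lengthscale=1.0):
    """sigma_st: callable returning the standard-deviation adjustment at a point x."""
    s1 = np.array([sigma_st(x) for x in X])
    s2 = np.array([sigma_st(x) for x in X2])
    return k_rbf(X, X2, lengthscale) + np.outer(s1, s2)
\end{verbatim}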
In both cases, this prior construction allows the integration of three different types of data: observational data, interventional data collected at time $t$ and the optimal interventional data points collected in the past. The first is used to estimate the \acro{scm} model and $p(\mat{w}|\DO{\X_{s,t}}{\vec{x}}, \mat{i})$ via the rules of do-calculus. The optimal interventional data points at $0:t-1$ determine the shift $f_Y^{Y}(\mat{f}^\star)$ while the interventional data collected at time $t$ are used to update the prior distribution on $f_{s,t}(\vec{x})$. Similar prior constructions were previously considered in static settings \cite{cbo, daggp} where only observational and interventional data at the current time step were used. The additional shift term appears here as there exist causal dynamics in the target variables and the objective function is affected by previous decisions. \cref{fig:toy_example} in the appendix shows a synthetic example in which accounting for the dynamic aspect in the prior formulation leads to a more accurate \acro{gp} posterior compared to the baselines, especially when the optimum location changes across time steps. \textbf{Likelihood} Let $\dataset^I_{s,t} = (\mat{X}^I, \mat{Y}^I_{s,t})$ be the set of interventional datapoints collected for $\X_{s,t}$ with $\mat{X}^I$ being a vector of intervention values and $\mat{Y}^I_{s,t}$ representing the corresponding vector of observed target values. As in standard \acro{bo} we assume each $y_{s,t}$ in $\mat{Y}^I_{s,t}$ to be a noisy observation of the function $f_{s,t}(\vec{x})$, that is, $y_{s,t}(\vec{x}) = f_{s,t}(\vec{x}) + \epsilon_{s,t}$ with $\epsilon_{s,t} \sim \mathcal{N}(0, \sigma^2)$ for $s \in \{1, \dots, |\mathbb{M}_t|\}$ and $t \in \{0,\dots, T\}$. In compact form, the joint likelihood function for $\dataset^I_{s,t}$ is $p(\mat{Y}^I_{s,t} \mid f_{s,t}, \sigma^2) = \mathcal{N}(f_{s,t}(\mat{X}^I),\sigma^2 \mat{I})$. \textbf{Acquisition Function} Given our surrogate models at time $t$, the agent selects the interventions to implement by solving a Causal Bayesian Optimization problem \cite{cbo}. The agent explores the sets in $\mathbb{M}_t$ and decides where to intervene by maximizing the Causal Expected Improvement (\acro{ei}). Denote by $y^\star_t$ the optimal observed target value in $\{\mat{Y}^I_{s,t}\}_{s=1}^{|\mathbb{M}_t|}$, that is, the optimal observed target across all intervention sets at time $t$. The Causal \acro{ei} is given by $\textsc{ei}_{s,t}(\vec{x}) = \expectation{p(y_{s,t})}{\text{max}(y_{s,t}-y^\star_t, 0)}/\texttt{cost}(\X_{s,t}, \vec{x}_{s,t})$. Let $\alpha_1, \dots, \alpha_{|\mathbb{M}_t|}$ be solutions of the optimization of $\textsc{ei}_{s,t}(\vec{x})$ for each set in $\mathbb{M}_t$ and $\alpha^\star := \max \{\alpha_1, \dots, \alpha_{|\mathbb{M}_t|}\}$. The next best intervention to explore at time $t$ is given by $s^\star = \argmax_{s \in \{1, \cdots, |\mathbb{M}_t|\}} \alpha_s.$ Therefore, the set-value pair to intervene on is $(s^\star, \alpha^\star)$. At every $t$, the agent implements $H$ \emph{explorative} interventions in the system which are selected by maximizing the Causal \acro{ei}. Once the budget $H$ is exhausted, the agent implements what we call the \emph{decision} intervention $I_t$, that is the optimal intervention found at the current time step, and moves forward to a new optimization at $t+1$ carrying the information in $y^\star_{0:t-1}$. The parameter $H$ determines the level of exploration of the system and acts as a budget for the \acro{cbo} algorithm. 
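To make the acquisition step concrete, the following is a minimal sketch (ours, with hypothetical helpers for the posterior mean, posterior standard deviation and cost) of maximizing the Causal \acro{ei} over a grid of candidate interventions for each exploration set:
\begin{verbatim}
# Causal Expected Improvement divided by the intervention cost, maximised over
# a grid of candidate values for every exploration set s in M_t.
import numpy as np
from scipy.stats import norm

def causal_ei(mu, std, y_best, cost):
    """EI of N(mu, std^2) over y_best, normalised by the intervention cost."""
    std = np.maximum(std, 1e-12)
    z = (mu - y_best) / std
    return ((mu - y_best) * norm.cdf(z) + std * norm.pdf(z)) / cost

def next_intervention(candidates, post_mean, post_std, cost, y_best):
    """candidates: dict mapping each exploration set s to an array of x-values."""
    best = {}
    for s, xs in candidates.items():
        scores = causal_ei(post_mean(s, xs), post_std(s, xs), y_best, cost(s, xs))
        best[s] = (xs[np.argmax(scores)], float(scores.max()))
    s_star = max(best, key=lambda s: best[s][1])
    return s_star, best[s_star][0]
\end{verbatim}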
The value of $H$ is determined by the agent and is generally problem specific. \textbf{Posterior Surrogate Model} For any set $\X_{s,t} \in \mathbb{M}_t$, the posterior distribution $p(f_{s,t} \mid \dataset^I_{s,t})$ can be derived analytically via standard \acro{gp} updates. $p(f_{s,t}|\dataset^I_{s,t})$ will also be a \acro{gp} with parameters $m_{s,t}(\vec{x} \mid \dataset^I_{s,t}) = m_{s,t}(\vec{x}) + k_{s,t}(\vec{x}, \mat{X}^I)[k_{s,t}(\mat{X}^I, \mat{X}^I) + \sigma^2\mat{I}]^{-1}(\mat{Y}^I_{s,t} - m_{s,t}(\mat{X}^I))$ and $k_{s,t}(\vec{x}, \vec{x}' \mid \dataset^I_{s,t})= k_{s,t}(\vec{x}, \vec{x}') - k_{s,t}(\vec{x}, \mat{X}^I)[k_{s,t}(\mat{X}^I, \mat{X}^I) + \sigma^2\mat{I}]^{-1}k_{s,t}(\mat{X}^I, \vec{x}')$. \begin{wrapfigure}{R}{0.45\textwidth} \vspace{-5em} \begin{minipage}{.5\textwidth} \scalebox{0.90}{ \begin{algorithm}[H] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \KwData{$\mathcal{D}^O$, $\{\dataset^I_{s,t=0}\}_{s \in \{0,\dots, |\mathbb{M}_0|\}}$, $\mathcal{G}_{0:T}$, $H$. } \KwResult{Optimal intervention path $\{\X_{s,t}^\star, \x_{s,t}^\star, y_t^\star\}_{t = 1}^T$} \textbf{Initialise}: $\mathbb{M}$, $\mathcal{D}^I_0$ and initial optimal $\dataset^{I}_{\star} = \emptyset$.\\ \For{$t = 0, \dots, T$}{ 1. Initialise dynamic causal \acro{gp} models for all $\X_{s,t} \in \mathbb{M}_t$ using $\mathcal{D}^I_{\star,t-1}$ if $t>0$. \\ 2. Initialise interventional dataset $\{\dataset^I_{s,t}\}_{s \in \{0,\dots, |\mathbb{M}_t|\}}$ \\ \For{$h = 1, \dots, H$}{ 1. Compute $\textsc{ei}_{s,t}(\vec{x})$ for each $\X_{s,t} \in \mathbb{M}_t$.\\ 2. Obtain $(s^\star, \alpha^\star)$ \\ 3. Intervene and augment $\dataset^{I}_{s=s^\star,t}$ \\ 4. Update posterior for $f_{s=s^\star, t}$ } 3. Return the optimal intervention $(\X_{s,t}^\star, \x_{s,t}^\star)$\\ 4. Append optimal interventional data $\mathcal{D}^I_{\star,t} = \mathcal{D}^I_{\star,t-1} \cup ((\X_{s,t}^\star, \x_{s,t}^\star), y^\star_t)$ } \caption{\acro{dcbo}} \label{alg:dcbo_alg} \end{algorithm} } \end{minipage} \vspace{-1em} \end{wrapfigure} \subsection{Connections} The problem setup we study differs from those considered by both \acro{cbo} \cite{cbo} and \acro{abo} \cite{abo}. In this section we draw the links between these methods, highlighting the reasons why \acro{dcbo} is needed to solve the problem in \eq \eqref{eq:dcgo}. See \fig \ref{fig:map_methods} for a graphical representation of the relationship between \acro{dcbo} and alternative optimization methods. A thorough discussion is included in the appendix (\S X). \textbf{\acro{cbo} algorithm} \acro{cbo} tackles the causal dimension of the \acro{dcgo} problem but not the temporal dimension. While it can be used to find optimal interventions to perform in a \acro{dag}, this algorithm addresses static settings where variables in $\mathcal{G}$ are i.i.d. across time steps, i.e.\xspace $p(\mat{V}_t) = p(\mat{V})\text{,}\forall t$, and only one static target variable exists. For instance, \acro{cbo} can be used to find the optimal intervention for $Y$ in \fig \ref{fig:map_methods}, plate (b). Finding an optimal sequence of interventions for the \acro{dag} in \fig \ref{fig:map_methods} plate (d) requires running \acro{cbo} $T$ times, optimizing $Y_t$ at each time step. In doing that \acro{cbo} re-initializes all surrogate models at every $t$, thus losing all the information collected from previous interventions and not accounting for how the previously taken interventions have changed the system. 
\textbf{\acro{abo} algorithm} \acro{abo} addresses dynamic settings but does not account for the causal relationships among variables. As done by \acro{bo}, \acro{abo} finds the optimal intervention values by breaking the causal dependencies among the inputs and intervening simultaneously on all of them, thus setting $\X_{s,t} = \X_t$ for all $t$. Additionally, considering the inputs as fixed and not as random variables, \acro{abo} does not account for their temporal evolution (see \acro{dag} (c) of \fig \ref{fig:map_methods}). Furthermore, \acro{abo} considers a continuous time space and places a surrogate model on $Y_t = f(\vec{x}, t)$. Considering a spatio-temporal \acro{gp} allows \acro{abo} to predict the objective function ahead in time and track the evolution of the optimum. This is useful when the objective function changes slowly enough over time that sufficiently many samples can be gathered to learn the relationships in space and time. In our discrete time setting this condition is equivalent to asking that the objective functions for different interventions do not change over time. While this can be true in some stationary settings, it might not be the case in non-stationary settings where both the \acro{dag} and the \acro{scm} evolve over time. In these settings \acro{abo} convergence is significantly slower. \acro{dcbo} addresses the shortcomings of both \acro{cbo} and \acro{abo}. \acro{dcbo} considers a different objective function at every time step, incorporates prior interventional information in the objective function and limits the search space at every time step based on the topology of $\mathcal{G}$. In addition, through the causal graph, \acro{dcbo} imposes additional structure on the surrogate models. This allows \acro{dcbo} to better track the dynamics of the objective function and deal with sharp changes in the objectives. \acro{dcbo} is thus a framework that can be practically used for sequential decision making in a variety of applications.
\section{Introduction} \label{sec:intro} The goal of this paper is to compute the conditional expectation $\mathbb{E}[Y \mid X]$ of a square-integrable random variable $Y$ given a $d$-dimensional random vector $X$, both defined on a common probability space $(\Omega, \cF, \p)$. The accurate estimation of conditional expectations is an important problem arising in different branches of science and engineering as well as various business applications. In particular, it plays a central role in regression analysis, which tries to model the relationship between a response variable $Y$ and a number of explanatory variables $X_1, \dots, X_d$ \citep[see, e.g.,][]{norman98, ryan2008modern, hastie2009elements, chatterjee2015regression}. But it also appears in different computational problems, such as the numerical approximation of partial differential equations and backward stochastic differential equations \citep[see, e.g.,][]{bally1997approximation, chevance1997numerical, BT, gobet2005regression, GobetT, FTW, beck2019deep}, stochastic partial differential equations \citep[see, e.g.,][]{beck2020deep}, stochastic control problems \citep[see, e.g.,][]{astrom70,bain2008fundamentals}, stochastic filtering \citep[see, e.g.,][]{jazwinski2007stochastic}, the approximation of posterior distributions in Bayesian statistics \citep[see, e.g.,][]{gelman2013bayesian}, complex valuation problems \citep[see, e.g.,][]{carriere96,longstaff2001valuing, tsitsiklis2001regression,broadie2004stochastic, broadie2008,becker2020pricing} and risk management \citep[see, e.g.,][]{lee2003computing, gordy2010nested, broadie2011efficient, BauerReussSinger, Cher}. In addition, conditional expectations are closely related to squared loss minimization problems arising in various machine learning applications \citep[see, e.g.,][]{hastie2009elements, goodfellow2016deep}. If it is possible to simulate from the conditional distribution of $Y$ given $X$, the conditional expectation $\mathbb{E}[Y \mid X]$ can be approximated with nested Monte Carlo simulation \citep[see, e.g.,][]{BauerReussSinger, broadie2011efficient, broadie2015risk}. While the approach can be shown to converge for increasing sample sizes, it is often too time-consuming to be useful in practical applications. On the other hand, it is well known that $\mathbb{E}[Y \mid X]$ is of the form $\bar{f}(X)$ for a regression function $\bar{f} \colon \R^d \to \R$ which can be characterized as a minimizer\footnote{The conditional expectation $\mathbb{E}[Y \mid X]$ is unique up to $\p$-almost sure equality, and accordingly, the regression function $\bar{f}$ is unique up to almost sure equality with respect to the distribution of $X$.} of the mean squared distance \begin{equation} \label{msd} \mathbb{E} \edg{(Y - f(X))^2} \end{equation} over all Borel functions $f \colon \R^d \to \R$ \citep[see, e.g.,][]{bru1985meilleures}. However, in many applications, the minimization problem \eqref{msd} cannot be solved exactly. For instance, the joint distribution of $X$ and $Y$ might not be known precisely, or the problem might be too complicated to admit a closed-form solution. In such cases, it can be approximated with a least squares regression, which consists in minimizing an empirical mean squared distance \begin{equation} \label{emsd} \frac{1}{M} \sum_{m=1}^M \brak{\tilde{Y}^m - f(\tilde{X}^m)}^2 \end{equation} based on samples $(\tilde{X}^m ,\tilde{Y}^m)$ of $(X,Y)$ over a suitable subfamily ${\cal S}$ of all Borel functions $f \colon \R^d \to \R$. 
This typically entails the following three types of approximation errors: \begin{itemize} \item[{\rm (i)}] a function approximation error if the true regression function $\bar{f}$ does not belong to the function family ${\cal S}$; \item[{\rm (ii)}] a statistical error stemming from estimating the expected value \eqref{msd} with \eqref{emsd}; \item[{\rm (iii)}] a numerical error if the minimization of \eqref{emsd} over $\cS$ has to be carried out numerically. \end{itemize} Instead of analyzing the errors (i)--(iii), we here derive an alternative representation of the minimal mean squared distance $\mathbb{E} [(Y - \bar{f}(X))^2]$, which does not involve a minimization problem or require knowledge of the true regression function $\bar{f}$. This enables us to provide quantitative estimates on the quality of any numerical approximation $\hat{f}$ of $\bar{f}$. In particular, if a machine learning method is used to find $\hat{f}$, our approach contributes to trustworthy AI. While the empirical mean squared distance \eqref{emsd} can directly be minimized using samples $(\tilde{X}^m, \tilde{Y}^m)$ of $(X,Y)$, our approach to derive error bounds for the approximation of $\bar{f}$ requires that $Y$ be of the form $Y = h(X,V)$ for a known function $h \colon \R^{d+k} \to \R$ and a $k$-dimensional random vector $V$ independent of $X$. In typical statistical applications, only realizations of $(X,Y)$ can be observed and a structure of the form $Y = h(X,V)$ would have to be inferred from the data. But in many of the computational problems mentioned above, $Y$ is directly given in the form $Y = h(X,V)$. The rest of the paper is organized as follows: In Section \ref{sec:appr}, we first introduce the notation and some preliminary results before we formulate the precise mean squared distance minimization problem we are considering along with its empirical counterpart. Then we discuss upper bounds of the minimal mean squared distance and their approximation with Monte Carlo averages. In Section \ref{sec:est} we derive an expected value representation of the minimal mean squared distance which makes it possible to derive $L^2$-bounds on the error of any numerical approximation $\hat{f}$ of the true regression function $\bar{f}$. In Section \ref{sec:ex} we compute conditional expectations in different examples using linear regression, polynomial regression and feedforward neural networks with varying activation functions. We benchmark the numerical results against values obtained from our expected value representation of the minimal mean squared distance and derive $L^2$-error estimates. Section \ref{sec:conclusion} concludes, and in the Appendix we report auxiliary numerical results used to compute the figures shown in Section \ref{sec:ex}. \section{Numerical approximation of conditional expectations} \label{sec:appr} \subsection{Notation and preliminaries} \label{sec:notation} Let us first note that the mean squared distance \eqref{msd} does not necessarily have to be minimized with respect to the original probability measure $\p$. Indeed, the regression function $\bar{f} \colon \R^d \to \R$ only depends on the conditional distribution of $Y$ given $X$ and not on the distribution $\nu_X$ of $X$. More precisely, the measure $\p$ can be disintegrated as \[ \p[A] = \int_{\R^d} \p[A \mid X = x] d\nu_X(x), \quad A \in {\cal F}, \] where $\p[. \mid X = x]$ is a regular conditional version of $\p$ given $X$. 
For any Borel probability measure $\nu$ on $\R^d$ that is absolutely continuous with respect to $\nu_X$, \[ \p^{\nu}[A] := \int_{\R^d} \p[A \mid X = x] d\nu(x), \quad A \in {\cal F}, \] defines a probability measure on $\Omega$ under which $X$ has the modified distribution $\nu$ while the conditional distribution of $Y$ given $X$ is the same as under $\p$. Let us denote by $\mathbb{E}^{\nu}$ the expectation with respect to $\p^{\nu}$ and by ${\cal B}(\R^d ; \R)$ the set of all Borel functions $f \colon \R^d \to \R$. With this notation, one has the following. \begin{lemma} \label{lemma:choice} Assume $\mathbb{E}^{\nu} Y^2 < \infty$. Then a minimizer $\tilde{f} \colon \R^d \to \R$ of the distorted minimal mean squared distance \begin{equation} \label{D} D^{\nu} := \min_{f \in {\cal B}(\R^d ; \, \R)} \mathbb{E}^{\nu} \edg{ \brak{Y - f(X)}^2 } \end{equation} agrees with $\bar{f} \colon \R^d \to \R$ $\nu$-almost surely. In particular, if $\nu$ has the same null sets as $\nu_X$, then $\tilde{f} = \bar{f}$ $\nu_X$-almost surely. \end{lemma} \begin{proof} A Borel function $\tilde{f} \colon \R^d \to \R$ minimizes \eqref{D} if and only if \[ \tilde{f}(x) = \argmin_{z \in \R} \int_{\R} (y - z)^2 \, \p[Y \in dy \mid X = x] \quad \mbox{for $\nu$-almost all } x \in \R^d. \] Since $\bar{f}$ has an analogous representation holding for $\nu_X$-almost all $x \in \R^d$, it follows that $\tilde{f}$ agrees with $\bar{f}$ $\nu$-almost surely. In particular, if $\nu$ has the same null sets as $\nu_X$, then $\tilde{f} = \bar{f}$ $\nu_X$-almost surely. \end{proof} Lemma \ref{lemma:choice} gives us the flexibility to choose a distribution $\nu$ on $\R^d$ which assigns more weight than $\nu_X$ to regions of $\R^d$ that are important in a given application; see, e.g., Lemma \ref{lemma:point} below. \subsection{Upper bound of the minimal mean squared distance} \label{sec:upper} In many situations, the minimization problem \eqref{D} cannot be solved exactly. But if one has access to $\p^{\nu}$-samples $(\tilde{X}^m, \tilde{Y}^m)$ of $(X,Y)$, the true regression function $\bar{f}$ can be approximated by minimizing the empirical mean squared distance \begin{equation} \label{empD} \frac{1}{M} \sum_{m =1}^M \brak{\tilde{Y}^m - f(\tilde{X}^m)}^2 \end{equation} over $f$ in a subset ${\cal S}$ of ${\cal B}(\R^d ; \R)$. In the examples of Section \ref{sec:ex} below, we compare results obtained by using linear combinations of $1, X_1, \dots, X_d$, second order polynomials in $X_1, \dots, X_d$ as well as feedforward neural networks with different activation functions. But irrespective of the method used to obtain an approximation of $\bar{f}$, any Borel measurable candidate regression function $\hat{f} : \R^d \to \R$ yields an upper bound \begin{equation} \label{Unu} U^{\nu} := \mathbb{E}^{\nu} \edg{\brak{Y - \hat{f}(X)}^2} \end{equation} of the minimal mean square distance $D^{\nu}$. However, since in typical applications, $U^{\nu}$ cannot be computed exactly, we approximate it with a Monte Carlo estimate \[ U^{\nu}_N := \frac{1}{N} \sum_{n =1}^{N} \brak{Y^n - \hat{f}(X^n)}^2 \] based on $N$ independent $\p^{\nu}$-samples $(X^n, Y^n)_{n =1}^{N}$ of $(X,Y)$ drawn independently of any data $(\tilde{X}^m, \tilde{Y}^m)_{m = 1}^M$ used to determine $\hat{f}$. 
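For concreteness, a small sketch (our illustration, not code from the paper) of this Monte Carlo estimate and of the normal-approximation confidence band derived below could look as follows:
\begin{verbatim}
# Monte Carlo estimate U_N of the upper bound together with a 95% half-width.
import numpy as np
from scipy.stats import norm

def upper_bound_estimate(f_hat, X, Y, alpha=0.025):
    """X: (N, d) array of samples, Y: (N,) array, f_hat: vectorised candidate."""
    sq = (Y - f_hat(X)) ** 2
    U_N = sq.mean()
    v_N = sq.var(ddof=1)                            # sample variance v^{U,nu}_N
    half_width = norm.ppf(1.0 - alpha) * np.sqrt(v_N / len(sq))
    return U_N, half_width
\end{verbatim}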
Provided that $\mathbb{E}^{\nu} [ (Y -\hat{f}(X))^2] < \infty$, one obtains from the strong law of large numbers that \[ \lim_{N \to \infty} U^{\nu}_N = U^{\nu} \quad \p^{\nu}\mbox{-almost surely.} \] To derive confidence intervals, we consider the sample variance \[ v^{U,\nu}_{N} := \frac{1}{N-1} \sum_{n=1}^{N} \brak{\brak{Y^n - \hat{f}(X^n)}^2 - U^{\nu}_N }^2 \] and denote, for $\alpha \in (0,1)$, by $q_{\alpha}$ the $\alpha$-quantile of the standard normal distribution. Then the following holds. \begin{lemma} \label{lemma:ciU} Assume $\mathbb{E}^{\nu} \, Y^4 < \infty$ and $\mathbb{E}^{\nu} \, |\hat{f}(X)|^4 < \infty$. Then, for every $\alpha \in (0,1)$, \begin{equation} \label{conub} \liminf_{N \to \infty} \p^{\nu} \edg{\abs{U^{\nu} - U^{\nu}_N} \le q_{1 - \alpha}\sqrt{\frac{v^{U,\nu}_{N}}{N}} \; } \ge 1 - 2 \alpha. \end{equation} \end{lemma} \begin{proof} In the special case where $Y = \hat{f}(X)$ $\p^{\nu}$-almost surely, we have $U^{\nu} = U^{\nu}_N = v^{U,\nu}_N = 0$ \, $\p^{\nu}$-almost surely for all $N \ge 1$. So $\eqref{conub}$ holds trivially. On the other hand, if $\p^{\nu}[Y \neq \hat{f}(X)] > 0$, it follows from the assumptions and the strong law of large numbers that $v^{U,\nu}_N$ converges $\p^{\nu}$-almost surely to $\operatorname{Var}^{\p^{\nu}} \brak{ (Y - \hat{f}(X))^2} > 0$ for $N \to \infty.$ Therefore, one obtains from the central limit theorem and Slutsky's theorem that \[ \lim_{N \to \infty} \p^{\nu} \edg{\sqrt{\frac{N}{v^{U,\nu}_N}} \, \abs{U^{\nu} - U^{\nu}_{N}} \le q_{1-\alpha}} = 1- 2\alpha, \] which shows \eqref{conub}. \end{proof} \section{Error estimates} \label{sec:est} For a given candidate regression function $\hat{f} \colon \R^d \to \R$, we try to obtain bounds on the approximation error $\bar{f} - \hat{f}$. To do that we assume in this section that $Y$ has a representation of the form: \[ {\bf (R)} \qquad \begin{aligned} & Y = h(X,V) \; \mbox{\sl for a Borel function $h \colon \R^{d+k} \to \R$ and a $k$-dimensional}\\[-0.8mm] & \mbox{\sl random vector $V$ that is independent of $X$ under $\p^{\nu}$.} \end{aligned} \] \begin{Remark} In principle, $Y$ can always be assumed to be of the form (R). Indeed, if, e.g., $V$ is a random variable which, under $\p^{\nu}$, is uniformly distributed on the unit interval $(0,1)$ and independent of $X$, the function $h \colon \R^d \times (0,1) \to \R$ can be chosen as a conditional $\p^{\nu}$-quantile function of $Y$ given $X$ and extended to the rest of $\R^{d+ 1}$ arbitrarily. Then $(X,h(X,V))$ has the same $\p^{\nu}$-distribution as $(X,Y)$, and, as a consequence, $\mathbb{E}^{\nu} [h(X,V) \mid X] = \bar{f}(X)$ $\nu$-almost surely. However, for our method to be applicable, the function $h$ needs to be known explicitly. \end{Remark} \subsection{Alternative representation of the minimal mean squared distance} \label{sec:altrep} The key ingredient of our approach is an alternative representation of the minimal mean squared distance \begin{equation} \label{Dnu} D^{\nu} = \min_{f \in {\cal B}(\R^d ; \, \R)} \mathbb{E}^{\nu} \edg{\brak{Y - f(X)}^2} = \mathbb{E}^{\nu} \edg{\brak{Y - \bar{f}(X)}^2} \end{equation} which does not involve a minimization problem or require knowledge of the true regression function $\bar{f}$ and, at the same time, can be approximated efficiently. 
An analogous representation exists for the squared $L^2$-norm of the conditional expectation \begin{equation} \label{Cnu} C^{\nu} := \mathbb{E}^{\nu} \, \bar{f}^2(X) = \no{\mathbb{E}^{\nu}[Y \mid X]}^2_{L^2(\p^{\nu})}, \end{equation} which will be helpful in the computation of relative approximation errors in Section \ref{sec:ex} below. Let us define \[ Z := h(X,\tilde{V}), \] where $\tilde{V}$ is a $k$-dimensional random vector with the same $\p^{\nu}$-distribution as $V$ that is independent of $(X,V)$ with respect to $\p^{\nu}$. Then, we have the following. \begin{proposition} \label{prop:rep} If $\mathbb{E}^{\nu} \, Y^2 < \infty$, then \[ C^{\nu} = \mathbb{E}^{\nu} \edg{Y Z} \quad \mbox{and} \quad D^{\nu} = \mathbb{E}^{\nu} \edg{Y (Y- Z)}. \] \end{proposition} \begin{proof} It follows from independence that \[ \mathbb{E}^{\nu} \edg{Y Z} = \mathbb{E}^{\nu} \edg{\mathbb{E}^{\nu} \edg{h(X,V) h(X,\tilde{V}) \mid X}} = \mathbb{E}^{\nu} \edg{\bar{f}^2(X)} = C^{\nu}. \] Similarly, one has \[ \mathbb{E}^{\nu} \edg{Y \bar{f}(X)} = \mathbb{E}^{\nu} \edg{\mathbb{E}^{\nu}[Y \mid X] \bar{f}(X) } = \mathbb{E}^{\nu} \edg{\bar{f}^2(X)}, \] from which one obtains \[ \mathbb{E}^{\nu} \edg{Y (Y-Z)} = \mathbb{E}^{\nu} \edg{Y^2 - \bar{f}^2(X)} = \mathbb{E}^{\nu} \edg{Y^2 - 2 Y \bar{f}(X) + \bar{f}^2(X)} = \mathbb{E}^{\nu} \edg{\brak{Y - \bar{f}(X)}^2} = D^{\nu}. \] \end{proof} \subsection{Approximation of $C^{\nu}$ and $D^{\nu}$} \label{sec:apprCD} To approximate $C^{\nu}$ and $D^{\nu}$, we use $\p^{\nu}$-samples $Z^n := h(X^n, \tilde{V}^n)$, $n = 1, \dots, N$, of $Z$ based on independent copies $\tilde{V}^n$ of $V$ drawn independently of $(\tilde{X}^m, \tilde{Y}^m)$, $m = 1, \dots, M$, and $(X^n, Y^n, V^n)$, $n = 1, \dots, N$. The corresponding Monte Carlo approximations of $C^{\nu}$ and $D^{\nu}$ are \begin{equation} \label{CDnuN} C^{\nu}_N := \frac{1}{N} \sum_{n=1}^{N} Y^n Z^n \quad \mbox{and} \quad D^{\nu}_N := \frac{1}{N} \sum_{n=1}^{N} Y^n (Y^n- Z^n), \end{equation} respectively. If $\mathbb{E}^{\nu} \, Y^2 < \infty$, one obtains from the strong law of large numbers that \[ \lim_{N \to \infty} C^{\nu}_N = C^{\nu} \quad \mbox{and} \quad \lim_{N \to \infty} D^{\nu}_N = D^{\nu} \quad \p^{\nu}\mbox{-almost surely.} \] Moreover, for the sample variances \[ v^{C,\nu}_N := \frac{1}{N-1} \sum_{n=1}^{N} \brak{Y^n Z^n - C^{\nu}_N}^2 \quad \mbox{and} \quad v^{D,\nu}_N := \frac{1}{N-1} \sum_{n=1}^{N} \brak{Y^n (Y^n-Z^n) - D^{\nu}_N}^2, \] the following analog of Lemma \ref{lemma:ciU} holds. \begin{lemma} \label{lemma:ciD} If $\mathbb{E}^{\nu} \, Y^4 < \infty$, then, for every $\alpha \in (0,1)$, \begin{equation} \label{conc} \liminf_{N \to \infty} \p^{\nu} \edg{\abs{C^{\nu} - C^{\nu}_N} \le q_{1 - \alpha}\sqrt{\frac{v^{C,\nu}_{N}}{N}} \; } \ge 1 - 2 \alpha \end{equation} and \begin{equation} \label{cond} \liminf_{N \to \infty} \p^{\nu} \edg{\abs{D^{\nu} - D^{\nu}_N} \le q_{1 - \alpha}\sqrt{\frac{v^{D,\nu}_{N}}{N}} \; } \ge 1 - 2 \alpha. \end{equation} \end{lemma} \begin{proof} If $C^{\nu} = Y Z$ $\p^{\nu}$-almost surely, then $C^{\nu} - C^{\nu}_N = v^{C,\nu}_N = 0$ $\p^{\nu}$-almost surely for all $N \ge 1$, and $\eqref{conc}$ is immediate. 
On the other hand, if $\p^{\nu}[C^{\nu} \neq YZ] > 0$, one obtains from the strong law of large numbers that $v^{C,\nu}_N \to \operatorname{Var}^{\p^{\nu}} \brak{YZ} > 0$ $\p^{\nu}$-almost surely for $N \to \infty$, and it follows from the central limit theorem together with Slutsky's theorem that \[ \lim_{N \to \infty} \p^{\nu} \edg{\sqrt{\frac{N}{v^{C,\nu}_N}} \, \abs{C^{\nu} - C^{\nu}_{N}} \le q_{1-\alpha}} = 1- 2\alpha. \] This shows \eqref{conc}. \eqref{cond} follows analogously. \end{proof} \subsection{$L^2$-estimates on the approximation error} \label{sec:L2est} We now derive $L^2$-estimates on the error resulting from approximating the true regression function $\bar{f}$ with a candidate regression function $\hat{f}$. Let us denote by $L^2(\nu)$ the space of all Borel functions $f \colon \R^d \to \R$ satisfying \[ \n{f}^2_{L^2(\nu)} := \mathbb{E}^{\nu} f^2(X) = \int_{\R^d} f^2(x) d \nu(x) < \infty \] and define \begin{equation} \label{Fnu} F^{\nu} := \n{\bar{f} - \hat{f}}^2_{L^2(\nu)} = \mathbb{E}^{\nu} \edg{\brak{\bar{f}(X) - \hat{f}(X)}^2}. \end{equation} With this notation, the following holds. \begin{theorem} \label{thm:error} If $\mathbb{E}^{\nu} \, Y^2 < \infty$ and $\mathbb{E}^{\nu} \hat{f}^2(X) < \infty$, then \begin{equation} \label{F} F^{\nu} = \mathbb{E}^{\nu} \edg{YZ + \hat{f}(X) \brak{\hat{f}(X) - Y - Z}}. \end{equation} Moreover, if $\mathbb{E}^{\nu} \, Y^4 < \infty$ and $\mathbb{E}^{\nu} \hat{f}^4(X) < \infty$, one has for all $\alpha \in (0,1)$, \begin{equation} \label{ciF} \liminf_{N \to \infty} \p^{\nu} \edg{F^{\nu} \le F^{\nu}_N + q_{\alpha} \, \sqrt{\frac{v^{F,\nu}_N}{N}} \;} \ge \alpha, \end{equation} where \begin{equation} \label{FnuN} F^{\nu}_N := \frac{1}{N} \sum_{n = 1}^{N} \crl{Y^n Z^n + \hat{f}(X^n) \brak{\hat{f}(X^n) - Y^n - Z^n}} \end{equation} and \[ v^{F, \nu}_N := \frac{1}{N-1} \sum_{n=1}^{N} \crl{Y^n Z^n + \hat{f}(X^n) \brak{\hat{f}(X^n) - Y^n - Z^n} - F^{\nu}_N}^2. \] \end{theorem} \begin{proof} Let us first assume that $\mathbb{E}^{\nu} Y^2 < \infty$ and $\mathbb{E}^{\nu} \hat{f}^2(X) < \infty$. Then, since $\bar{f}(X) = \mathbb{E}^{\nu}[Y \mid X]$, one also has $\mathbb{E}^{\nu} \bar{f}^2(X) < \infty$, and $Y - \bar{f}(X)$ is orthogonal to $\bar{f}(X) - \hat{f}(X)$ in $L^2(\p^{\nu})$. So it follows from Pythagoras' theorem that \begin{equation} \label{Pyth} F^{\nu} = \no{\bar{f}(X) - \hat{f}(X)}^2_{L^2(\p^{\nu})} = \no{Y - \hat{f}(X)}^2_{L^2(\p^{\nu})} - \no{Y - \bar{f}(X)}^2_{L^2(\p^{\nu})}. \end{equation} Moreover, we know from Proposition \ref{prop:rep} that \begin{equation} \label{prop} \no{Y - \bar{f}(X)}^2_{L^2(\p^{\nu})} = \mathbb{E}^{\nu} \edg{\brak{Y - \bar{f}(X)}^2} = \mathbb{E}^{\nu}[Y(Y-Z)]. \end{equation} So, since \[ \mathbb{E}^{\nu} \edg{Y \hat{f}(X)} = \mathbb{E}^{\nu} \edg{\mathbb{E}^{\nu}[h(X,V) \mid X] \, \hat{f}(X)} = \mathbb{E}^{\nu} \edg{\mathbb{E}^{\nu}[h(X,\tilde{V}) \mid X] \, \hat{f}(X)} = \mathbb{E}^{\nu} \edg{Z \hat{f}(X)}, \] one obtains from \eqref{Pyth} and \eqref{prop} that \[ F^{\nu} = \mathbb{E}^{\nu} \edg{Y^2 - (Y+Z) \hat{f}(X) + \hat{f}^2(X)- Y(Y-Z)} = \mathbb{E}^{\nu} \edg{YZ + \hat{f}(X) \brak{\hat{f}(X) - Y - Z}}, \] which shows \eqref{F}. Now suppose $\mathbb{E}^{\nu} \, Y^4 < \infty$ and $\mathbb{E}^{\nu} \hat{f}^4(X) <\infty$. In the special case where \[ YZ + \hat{f}(X)(\hat{f}(X) - Y - Z) = F^{\nu} \quad \mbox{$\p^{\nu}$-almost surely,} \] one has \[ F^{\nu} - F^{\nu}_N = v^{F,\nu}_N = 0 \quad \mbox{$\p^{\nu}$-almost surely for all } N \ge 1, \] and \eqref{ciF} is clear. 
On the other hand, if \[ \p^{\nu} \edg{YZ + \hat{f}(X)(\hat{f}(X) - Y - Z) \neq F^{\nu}} > 0, \] it follows from the strong law of large numbers that \[ \lim_{N \to \infty} v^{F,\nu}_N = {\rm Var}^{\p^{\nu}} \brak{YZ + \hat{f}(X)(\hat{f}(X) - Y - Z)} > 0 \quad \mbox{$\p^{\nu}$-almost surely.} \] So, one obtains from the central limit theorem and Slutsky's theorem that \begin{equation} \label{conS} \liminf_{N \to \infty} \p^{\nu} \edg{\sqrt{\frac{N}{v^{F,\nu}_N}} \brak{F^{\nu} - F^{\nu}_N} \le q_{\alpha}} = \alpha, \end{equation} which implies \eqref{ciF}. \end{proof} The next result shows how the original distribution $\nu_X$ of $X$ can be distorted so as to obtain a probability measure $\nu \ll \nu_X$ on $\R^d$ that concentrates more weight around a given point $x_0$ in the support of $\nu_X$. Then, provided that $\bar{f} - \hat{f}$ is continuous at $x_0$, $\n{\bar{f} - \hat{f}}_{L^2(\nu)}$ approximates the point-wise difference $|\bar{f}(x_0) - \hat{f}(x_0)|$. More precisely, if $\n{.}_2$ denotes the standard Euclidean norm on $\R^d$, we have the following. \begin{lemma} \label{lemma:point} Assume $\mathbb{E} \, Y^2 < \infty$, $\mathbb{E} \, \hat{f}^2(X) < \infty$ and $\bar{f} - \hat{f}$ is continuous at some point $x_0 \in \R^d$. Let $(\nu_n)_{n \ge 1}$ be a sequence of Borel probability measures on $\R^d$ given by $d \nu_n/d\nu_X = p_n$ for a sequence of Borel functions $p_n \colon \R^d \to \R_+$ satisfying \[ \int_{\R^d} p_n(x) d\nu_X(x) = 1 \; \mbox{ for all } n \ge 1 \quad \mbox{and} \quad \sup_{x \in \R^d \, : \, \n{x - x_0}_2 > 1/n} p_n(x) \to 0 \; \mbox{ for $n \to \infty$}. \] Then \[ \lim_{n \to \infty} \n{\bar{f} - \hat{f}}_{L^2(\nu_n)} = |\bar{f}(x_0) - \hat{f}(x_0)|. \] \end{lemma} \begin{proof} It follows from $\mathbb{E} \, Y^2 < \infty$ that $\mathbb{E} \, \bar{f}^2(X) < \infty$, which together with the condition $\mathbb{E} \, \hat{f}^2(X) < \infty$, implies that $f := \bar{f} - \hat{f} \in L^2(\nu_X)$. Moreover, it follows from the assumptions that for every $\varepsilon > 0$, there exists an $n \ge 1$ such that \[ \abs{f(x) - f(x_0)} \le \varepsilon \quad \mbox{for all } x \in \R^d \mbox{ satisfying } \n{x - x_0}_2 \le 1/n, \] and \[ \int_{\crl{x \in \R^d \, : \, \n{x - x_0}_2 > 1/n}} \brak{f(x) - f(x_0)}^2 d\nu_n(x) = \int_{\crl{x \in \R^d \, : \, \n{x - x_0}_2 > 1/n}} \brak{f(x) - f(x_0)}^2 p_n(x) d\nu_X(x) \le \varepsilon^2. \] Hence, \begin{eqnarray*} \n{f - f(x_0)}^2_{L^2(\nu_n)} &=& \int_{\crl{x \in \R^d \, : \, \n{x - x_0}_2 \le 1/n}} \brak{f(x) - f(x_0)}^2 d\nu_n(x)\\ &&+ \int_{\crl{x \in \R^d \, : \, \n{x - x_0}_2 > 1/n}} \brak{f(x) - f(x_0)}^2 d\nu_n(x) \le 2 \varepsilon^2, \end{eqnarray*} and therefore, \[ \abs{\n{f}_{L^2(\nu_n)} - |f(x_0)|} \le \n{f - f(x_0)}_{L^2(\nu_n)} \le \sqrt{2}\, \varepsilon. \] Since $\varepsilon$ was arbitrary, this proves the lemma. \end{proof} \section{Examples} \label{sec:ex} In all our examples we compute a candidate regression function $\hat{f} \colon \R^d \to \R$ by minimizing an empirical mean squared distance of the form \eqref{empD}. For comparison reasons we minimize \eqref{empD} over different subsets ${\cal S} \subseteq {\cal B}(\R^d; \R)$, in each case using a numerical method suited to the specific form of ${\cal S}$. \begin{enumerate} \item First, we use linear regression on $1, X_1, \dots, X_d$. 
The corresponding function family ${\cal S}$ consists of all linear combinations of $1, x_1, \dots, x_d$, and the minimization of the empirical mean squared distance \eqref{empD} becomes the ordinary least squares problem \[ \min_{\beta \in \R^{d+1}} \sum_{m=1}^M \brak{\tilde{Y}^m - \beta_0 - \sum_{i=1}^d \beta_i \tilde{X}^m_i}^2. \] This yields a candidate regression function of the form $\hat{f}(x) = \hat{\beta}_0 + \sum_{i=1}^d \hat{\beta}_i x_i$, where $\hat{\beta} \in \R^{d+1}$ is a solution of the normal equation \begin{equation} \label{neq} A^T A \, \hat{\beta} = A^T y \end{equation} with parameters \[ A = \left( \begin{array}{cccc} 1 & \tilde{X}^1_1 & ... & \tilde{X}^1_d\\ 1 & \tilde{X}^2_1 & ... & \tilde{X}^2_d\\ ... & ... & ... & ...\\ 1 & \tilde{X}^M_1 & ... & \tilde{X}^M_d \end{array} \right) \quad \mbox{and} \quad y = \left( \begin{array}{cccc} \tilde{Y}^1\\ \dots \\ \dots \\ \tilde{Y}^M \end{array} \right). \] In Sections \ref{ex:poly} and \ref{ex:non-poly} we use $M = 2 \times 10^6$ independent Monte Carlo samples $(\tilde{X}^m, \tilde{Y}^m)$ for the linear regression, while in Sections \ref{ex:max-call} and \ref{ex:binary} we use $M =$ 500,000 of them. If the matrix $A^T A$ is positive definite and well-conditioned, equation \eqref{neq} has a unique solution $\hat{\beta}$, which can be computed using the Cholesky decomposition $R^TR$ of $A^T A$ to solve $R^T z = A^T y$ and $R \hat{\beta} = z$ in two steps. On the other hand, if $A^T A$ is not invertible or ill-conditioned, the Cholesky method is numerically unstable. In this case, we compute a singular value decomposition $U \Sigma V^T$ of $A$ for orthogonal matrices $U \in \R^{M \times M}$, $V \in \R^{(d+1) \times (d+1)}$ and a diagonal matrix $\Sigma \in \R^{M \times (d+1)}$ with diagonal entries $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_{d+1} \ge 0$. The solution of \eqref{neq} with the smallest Euclidean norm is then given by \[ \hat{\beta} = V \left( \begin{array}{ccccc} \lambda_1^{-1} 1_{\crl{\lambda_1 > 0}} & 0 & \dots & \dots & 0\\ 0 & \lambda^{-1}_2 1_{\crl{\lambda_2 > 0}} & \dots & \dots & 0\\ 0 & \dots & \dots & \dots & 0\\ 0 & \dots & \dots & \dots & \lambda_{d+1}^{-1} 1_{\crl{\lambda_{d+1} > 0}}\\ 0 & \dots & \dots & \dots & 0\\ \dots & \dots & \dots & \dots & \dots\\ 0 & \dots & \dots & \dots & 0 \end{array} \right) U^T y, \] which, for numerical stability reasons, we approximate with the truncated SVD solution \[ \hat{\beta}_c = V \left( \begin{array}{ccccc} \lambda_1^{-1} 1_{\crl{\lambda_1 > c}} & 0 & \dots & \dots & 0\\ 0 & \lambda^{-1}_2 1_{\crl{\lambda_2 > c}} & \dots & \dots & 0\\ 0 & \dots & \dots & \dots & 0\\ 0 & \dots & \dots & \dots & \lambda_{d+1}^{-1} 1_{\crl{\lambda_{d+1} > c}}\\ 0 & \dots & \dots & \dots & 0\\ \dots & \dots & \dots & \dots & \dots\\ 0 & \dots & \dots & \dots & 0 \end{array} \right) U^T y \] for a small cutoff value $c > 0$; see, e.g., \citet{bjorck96}. \item As a second method, we use second order polynomial regression; that is, we regress on $1, X_1, \dots, X_d$ and all second order terms $X_i X_j$, $1 \le i \le j \le d$. ${\cal S}$ is then the linear span of $1, x_1, \dots, x_d$ and $x_ix_j$, $1 \le i \le j \le d$, and a candidate regression function can be computed as in Method 1 above, except that now the feature matrix $A$ has $1 + 3d/2 + d^2/2$ columns. 
As before, we use $M = 2 \times 10^6$ independent Monte Carlo samples $(\tilde{X}^m, \tilde{Y}^m)$ in Sections \ref{ex:poly} -- \ref{ex:non-poly} and 500,000 of them in Sections \ref{ex:max-call} -- \ref{ex:binary}, and again, we use the Cholesky decomposition of $A^T A$ to solve \eqref{neq} if $A^T A$ is well-conditioned and a truncated SVD solution otherwise. \item In our third method, ${\cal S}$ consists of all neural networks of a given architecture. In this paper we focus on feedforward neural networks of the form \begin{equation} \label{nn} f^{\theta} = \psi \circ a_{D}^{\theta}\circ \varphi \circ a_{D-1}^{\theta} \circ \dots \circ \varphi \circ a_{1}^{\theta}, \end{equation} where \begin{itemize} \item $D \ge 1$ is the depth of the network. \item $a^{\theta}_{i} : \mathbb{R}^{q_{i-1}} \to \mathbb{R}^{q_{i}}$, $i = 1, \dots, D$, are affine transformations of the form $a^{\theta}_{i}(x) = A_ix + b_i$ for matrices $A_i \in \mathbb{R}^{q_i \times q_{i-1}}$ and vectors $b_i \in \mathbb{R}^{q_i}$, where $q_0$ is the input dimension, $q_D = 1$ and $q_i$ is the number of neurons in the $i$-th hidden layer, $i = 1, \dots, D - 1$. \item $\varphi \colon \R \to \R$ is a non-linear activation function applied component-wise in each hidden layer. \item $\psi \colon \R \to \R$ is a readout function, which often is just the identity, but depending on the form of the target function, could e.g., also be an exponential or logistic function. \end{itemize} In the examples below, we use networks of depth $D = 4$ and 128 neurons in each of the three hidden layers. We compare the commonly used activation functions $\varphi = \tanh$ and ${\rm ReLU}$ to the following smooth version of ${\rm Leaky ReLU}$: \[ {\rm LSE}(x) = \log \brak{e^{\alpha x} + e^x} \quad \mbox{for } \alpha = 0.01, \] which is efficient to evaluate numerically and, by the LogSumExp inequality, satisfies \[ {\rm LeakyReLU}(x) = \max \{\alpha x, x\} \leq {\rm LSE}(x) \leq {\rm LeakyReLU}(x) + \log{2}. \] In addition, ${\rm LSE}$ is everywhere differentiable with non-vanishing derivative, which alleviates the problem of vanishing gradients that can arise in the training of $\tanh$ and ${\rm ReLU}$ networks. We initialize the parameter vector $\theta$ according to Xavier initialization \cite{glorot2010understanding} and then optimize it by iteratively decreasing the empirical mean squared distance \eqref{empD} with Adam stochastic gradient descent \cite{kingma2014adam} using mini-batches of size $2^{20}$ and batch-normalization\footnote{Note that while the trained network is of the form \eqref{nn}, training with batch-normalization decomposes each affine transformation into a concatenation $a^{\theta}_i = a^{\theta}_{i,2} \circ a^{\theta}_{i,1}$ for a general affine transformation $a^{\theta}_{i,1} \colon \R^{q_{i-1}} \to \R^{q_i}$ and a batch-normalization transformation $a^{\theta}_{i,2} \colon \R^{q_i} \to \R^{q_i}$, both of which are learned from the data. This usually stabilizes the training process but increases the number of parameters that need to be learned.} \cite{IoffeSzegedy} before each activation $\varphi$. We perform 50,000 gradient steps with standard Adam parameters, except that we start with a learning rate of 0.001, which we reduce manually by a factor of 10 after 30,000 iterations. To avoid slow cross device communications between the CPU and the GPU, we simulate all samples on the fly during the training procedure. 
Since we simulate from a model, we can generate a large amount of training samples and therefore, do not need to worry about overfitting to the training data. \end{enumerate} \begin{Remark} In many applications, the performance of the numerical regression can be improved with little additional effort by adding a redundant feature of the form $a(X)$ for a Borel measurable function $a \colon \R^d \to \R$ capturing important aspects of the relation between $X$ and $Y$. For instance, if $Y$ is given by $Y = h(X,V)$ for a Borel function $h \colon \R^{d+k} \to \R$ and a $k$-dimensional random vector $V$, adding the additional feature $a(X) = h(X,0)$, or something similar, often yields good results. Instead of minimizing the mean squared distance \eqref{D}, one then tries to find a Borel function $\hat{g} \colon \R^{d+1} \to \R$ that minimizes $\mathbb{E}^{\nu} \edg{\brak{Y - \hat{g}(X, a(X))}^2}$ and approximates the regression function $\bar{f} \colon \R^d \to \R$ with $\hat{f}(x) = \hat{g}(x, a(x))$, $x \in \R^d$. \end{Remark} In all examples, we report for all different methods used to determine a candidate regression function $\hat{f}$, \begin{itemize} \item an approximate 95\% confidence interval for $U^{\nu} = \mathbb{E}^{\nu} \edg{\brak{Y - \hat{f}(X)}^2}$ using \eqref{conub}. \item an approximate 95\% confidence interval for $D^{\nu} = \mathbb{E}^{\nu} \edg{\brak{Y - \bar{f}(X)}^2}$ using \eqref{cond}. \item an estimate of the relative error $\n{\bar{f} - \hat{f}}_{L^2(\nu)}/ \n{\bar{f}}_{L^2(\nu)}$ of the form $\sqrt{F^{\nu}_N/C^{\nu}_N}$ for $C^{\nu}_N$ and $F^{\nu}_N$ given in \eqref{CDnuN} and \eqref{FnuN}, respectively. Note that while the theoretical values $C^{\nu} = \n{\bar{f}}_{L^2(\nu)}$ and $F^{\nu} = \n{\bar{f} - \hat{f}}^2_{L^2(\nu)}$ are both non-negative, in some of our examples, $F^{\nu}$ is close to zero. So due to Monte Carlo noise, the estimate $F^{\nu}_N$ can become negative. In these cases, we report $- \sqrt{- F^{\nu}_N/C^{\nu}_N}$ instead of $\sqrt{F^{\nu}_N/C^{\nu}_N}$. \item an approximate 95\% confidence upper bound for the error $\n{\bar{f} - \hat{f}}_{L^2(\nu)}$ based on \eqref{ciF} expressed as a fraction of $\n{\bar{f}}_{L^2(\nu)}$ as estimated by $C^{\nu}_N$ given in \eqref{CDnuN}. \end{itemize} In Sections \ref{ex:poly} and \ref{ex:non-poly} below we used $N = 6 \times 10^8$ independent Monte Carlo samples $(X^n, Y^n, Z^n)$ to compute the estimates $U^{\nu}_N$, $D^{\nu}_N$, $F^{\nu}_N$, $C^{\nu}_N$ together with the corresponding confidence intervals, whereas in Sections \ref{ex:max-call} and \ref{ex:binary}, due to the higher dimensionality of the examples, we only worked with $N = 6 \times 10^7$ independent Monte Carlo samples. To fit such large test data sets into the computer memory, we split them into 6,000 independent batches of 100,000 or 10,000 data points, respectively. In most examples, we chose $\nu$ to be equal to the original distribution $\nu_X$ of $X$, in which case $\mathbb{E}^{\nu}$ equals $\mathbb{E}$. All computations were performed on a Nvidia GeForce RTX 2080 Ti GPU together with Intel Core Xeon CPUs using \classname{Python 3.9.6}, \classname{TensorFlow 2.5.0} with eager mode disabled and \classname{TensorFlow Probability 0.13.0} on \classname{Fedora 32}. \subsection{A four-dimensional polynomial example} \label{ex:poly} In our first example, we consider a simple non-linear model for $(X,Y)$ in which the conditional expectation $\mathbb{E}[Y \mid X]$ can be computed explicitly. 
This enables us to benchmark our numerical results against the theoretical values. Let $X = (X_1,X_2,X_3,X_4)$ be a four-dimensional random vector and $V$, $Y$ random variables such that $X_1, \dots, X_4, V$ are i.i.d.\ standard normal and $Y$ is of the form \begin{equation} \label{Ypoly} Y = X_1 + X_2^2 + X_3 X_4 + V. \end{equation} Then the conditional expectation is \[ \mathbb{E}[Y \mid X]=X_1+ X_2^2 + X_3 X_4, \] from which the minimal mean squared distance under $\p$ can be seen to be \[ D^{\nu_X} = \mathbb{E} [\brak{Y - \mathbb{E}[Y \mid X]}^2] = \mathbb{E} [V^2] = 1.\] Replacing $V$ by $0$ in the expression \eqref{Ypoly} would suggest to use the additional feature $a(X) = X_1 + X^2_2 + X_3 X_4$. However, since this would directly solve the problem, we are not using it in this example. Our numerical results are listed in Table \ref{tab:1}. More details are provided in Table \ref{tab:7} in the Appendix. As could be expected, in this example the second order polynomial regression works very well, while the accuracy of the linear regression is poor. All three neural networks, albeit with more computational effort, provide results comparable to the one of the second order polynomial regression. \begin{table}[h!] \centering {\small \begin{tabular}{l|c|c|c|c} \toprule \thead{} & 95\% CI $U^{\nu_X}$ & 95\% CI $D^{\nu_X}$ & \(\displaystyle \frac{\n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) & \( \displaystyle \frac{\mbox{95\% CB } \n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) \cr \midrule \makecell{lin. regr.} & [3.99878, 4.00026] & [0.99967, 1.00025] & 77.45859\% & 77.46357\% \cr \rowcolor[gray]{0.925} \makecell{poly. regr.} & [0.99984, 1.00008] & [0.99967, 1.00025] & -0.16811\% & 0.61185\% \cr \makecell{NN tanh} & [1.00009, 1.00033] & [0.99967, 1.00025]& 0.69571\% & 0.94163\% \cr \rowcolor[gray]{0.925} \makecell{NN ReLU} & [0.99995, 1.00019] & [0.99967, 1.00025] & 0.45371\% & 0.78003\% \cr \makecell{NN LSE} & [0.99986, 1.00010] & [0.99967, 1.00025] & 0.13869\% & 0.64944\% \cr \bottomrule \end{tabular} } \caption{Numerical results for the polynomial example \eqref{Ypoly}} \label{tab:1} \end{table} \subsection{A five-dimensional non-polynomial example} \label{ex:non-poly} In our second example, we consider a non-polynomial relationship between $Y$ and $X$. More precisely, we let $X_1, V_1, \dots, X_5, V_5$ be i.i.d.\ standard normal and assume that $Y$ is of the form \begin{equation} \label{non-poly} Y = 5 \log \brak{5 + (X_1+V_1)^2 X_2^2 + V^2_2} \tanh \brak{(X_3 + V_3) (X_4 + V_4) (X_5 + V_5)^2}. \end{equation} Then the conditional expectation $\mathbb{E}[Y \mid X]$ is not known in closed form. Setting $V = 0$ in \eqref{non-poly} suggests to use the additional feature \begin{equation} \label{adnon-poly} a(X) = 5 \log \brak{5 + X_1^2 X_2^2} \tanh \brak{X_3 X_4 X_5^2}. \end{equation} We first search for the function $f$ minimizing the mean squared distance $\mathbb{E} [\brak{Y - f(X)}^2]$ under the original measure $\p$. The numerical results are reported in Table \ref{tab:2}, and more details can be found in Table \ref{tab:8} in the Appendix. It can be seen that the second order polynomial regression yields better results than the linear regression, but both are outperformed by the three neural network approaches. Moreover, the inclusion of the additional feature \eqref{adnon-poly} improves the performance of the linear and second order polynomial regressions, while it does not make the neural networks more accurate. \begin{table}[h!] 
\centering {\small \begin{tabular}{l|c|c|c|c} \toprule \thead{} & 95\% CI $U^{\nu_X}$ & 95\% CI $D^{\nu_X}$ & \(\displaystyle \frac{\n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) & \( \displaystyle \frac{\mbox{95\% CB } \n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) \cr \midrule \makecell{lin. regr.} & [41.57529, 41.58353] & [36.16531, 36.17559] & 100.00155 \% & 100.02758 \% \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & [37.88883, 37.89721] & [36.16167, 36.17195] & 56.49035 \% & 56.53659 \% \cr \makecell{poly. regr.} & [36.66792, 36.67584] & [36.16531, 36.17559] & 30.47391 \% & 30.55790 \% \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & [36.39318, 36.40122] & [36.16167, 36.17195] & 20.59798 \% & 20.72211 \% \cr \makecell{NN tanh} & [36.1642 , 36.17224] & [36.16531, 36.17559] & -1.48922 \% & 1.70868 \% \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & [36.16359, 36.17163] & [36.16167, 36.17195] & -0.56091 \% & 2.19429 \% \cr \makecell{NN ReLU} & [36.16404, 36.17208] & [36.16531, 36.17559] & -1.58532 \% & 1.61970 \% \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & [36.16289, 36.17093] & [36.16167, 36.17195] & -1.27391 \% & 1.87296 \% \cr \makecell{NN LSE} & [36.16399, 36.17203] & [36.16531, 36.17559] & -1.61221\% & 1.59302\% \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. feature} & [36.16286, 36.17090] & [36.16167, 36.17195] & -1.27824 \% & 1.86977 \% \cr \bottomrule \end{tabular} } \caption{Numerical results for the non-polynomial example \eqref{non-poly} minimized under $\p$} \label{tab:2} \end{table} As a variant, we numerically minimize the mean squared distance $\mathbb{E}^{\nu} [\brak{Y - f(X)}^2]$ with respect to a distorted measure $\p^{\nu}$ under which $X_1, V_1 \dots, X_5, V_5$ are independent with $X_1, \dots X_5 \sim N(1,0.1)$ and $V_1, \dots, V_5 \sim N(0,1)$. The measure $\nu$ concentrates more mass around the point $(1,\dots, 1) \in \R^5$ than $\nu_X$. But since $(V_1, \dots, V_5)$ has the same distribution as under $\p$, the minimizing function $f$ coincides with the same theoretical regression function $\bar{f}$ as before. On the other hand, $L^2$-norms are now measured with respect to $\p^{\nu}$ instead of $\p$ and the resulting approximations $\hat{f}$ are different. As a consequence, the numerical results in Tables \ref{tab:3} and \ref{tab:9} differ from those in Tables \ref{tab:2} and \ref{tab:8}. It can be seen that as before in the $\p$-minimization, the three neural networks provide better results than the second order polynomial regression, which works better than the linear regression. But now, including the additional feature \eqref{adnon-poly} only improves the accuracy of the linear regression, while it does not help the other methods. \begin{table}[h!] \centering {\small \begin{tabular}{l|c|c|c|c} \toprule \thead{} & 95\% CI $U^{\nu}$ & 95\% CI $D^{\nu}$ & \(\displaystyle \frac{\n{\bar{f} - \hat{f}}_{L^2(\nu)}}{\n{\bar{f}}_{L^2(\nu)}} \) & \( \displaystyle \frac{\mbox{95\% CB } \n{\bar{f} - \hat{f}}_{L^2(\nu)}}{\n{\bar{f}}_{L^2(\nu)}} \) \cr \midrule \makecell{lin. regr.} & [39.93562, 39.94394] & [39.85265, 39.86327] & 8.61092\% & 8.75526\% \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & [39.92227, 39.93059] & [39.84812, 39.85874] & 8.22434 \% & 8.37536 \% \cr \makecell{poly. regr.} & [39.85543, 39.86375] & [39.85265, 39.86327] & 1.26400\% & 2.02624\% \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. 
feature} & [39.85484, 39.86316] & [39.84982, 39.86044] & 1.53131 \% & 2.20202 \% \cr \makecell{NN tanh} & [39.85109, 39.85941] & [39.85265, 39.86327] & -1.52880\% & 0.41362\% \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & [39.84717, 39.85549] & [39.84812, 39.85874] & -0.26907 \% & 1.56080 \% \cr \makecell{NN ReLU} & [39.85106, 39.85938] & [39.85265, 39.86327] & -1.53967\% & 0.37113\% \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & [39.84718, 39.85550] & [39.84812, 39.85874] & -0.26869 \% & 1.56092 \% \cr \makecell{NN LSE} & [39.85108, 39.85940] & [39.85265, 39.86327] & -1.53208\% & 0.40141\% \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. feature} & [39.84714, 39.85546] & [39.84812, 39.85874] & -0.32727 \% & 1.54958 \% \cr \bottomrule \end{tabular} } \caption{Numerical results for the non-polynomial example \eqref{non-poly} minimized under $\p^{\nu}$} \label{tab:3} \end{table} \subsection{Max-call options} \label{ex:max-call} Different pricing and risk management problems require the valuation of a financial product conditional on the state of the world at a later time \citep[see, e.g.,][]{carriere96,longstaff2001valuing, tsitsiklis2001regression,broadie2004stochastic, broadie2008,becker2020pricing, lee2003computing, gordy2010nested, broadie2011efficient, BauerReussSinger, Cher}. In this section we consider the conditional valuation of a max-call option. \begin{Framework} \label{fw} We assume there exists a financial market consisting of a money market account offering zero interest rate and $d$ risky securities with risk-neutral dynamics\footnote{We are considering a standard multi-dimensional Black--Scholes model with zero interest rate for ease of presentation. One could also use a more complicated financial market model as long as one can efficiently simulate from it.} \[ S^i_t = S^i_0 \exp \brak{\sigma_i W^i_t - \frac{1}{2} \sigma^2_i t}, \quad t \ge 0, \] for initial prices $S^i_0 = 10$, volatilities $\sigma_i = (10+i/2)\%$ and Brownian motions $W^i$, $i = 1, \dots, d$, with instantaneous correlation $\rho = 30\%$ between them. We denote the current time by $0$ and consider a financial derivative on $S^1, \dots, S^d$ with payoff $\phi(S_T)$ at maturity $T = 1/3$ (four months) for a payoff function $\phi \colon \R^d \to \R$. Suppose we are interested in the value of the derivative at time $t = 1/52$ (one week from now) conditional on the prices $S^1_t, \dots, S^d_t$. According to standard no-arbitrage arguments \citep[see, e.g.,][]{KS}, it is given by $\mathbb{E} \edg{\phi(S_T) \mid S_t}$, which can be written as $\mathbb{E}[Y \mid X]$ for $Y = \phi(S_T)$ and $X_i = S^i_t, \; i = 1, \dots, d$. Note that $Y$ has an explicit representation of the form (R) (see the beginning of Section \ref{sec:est}) since $Y= h(X,V)$ for \[ h(x,v) = \phi \brak{x_1 \exp \crl{v_1 - \frac{\sigma^2_1}{2} (T-t)}, \dots, x_d \exp \crl{v_d - \frac{\sigma^2_d}{2} (T-t)}}\] and the random variables \[ V_i = \sigma_i (W^i_T - W^i_t), \quad i = 1, \dots, d, \] which are independent of $X_1, \dots, X_d$.
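For illustration, samples of $(X, V, S_T)$ in this framework can be simulated as in the following minimal sketch (a NumPy sketch of ours with illustrative variable names and $d = 100$ as in the examples below; it is not the code used for the reported results, and the payoff $\phi$ still has to be applied to $S_T$ to obtain $Y$):
\begin{verbatim}
import numpy as np

d, S0, rho, t, T = 100, 10.0, 0.3, 1 / 52, 1 / 3
sigma = (10 + np.arange(1, d + 1) / 2) / 100      # sigma_i = (10 + i/2)%
corr = np.full((d, d), rho)
np.fill_diagonal(corr, 1.0)
chol = np.linalg.cholesky(corr)                   # correlates the Brownian motions

def simulate(n, seed=0):
    # returns X = S_t, V = sigma * (W_T - W_t) and S_T = h(X, V)
    rng = np.random.default_rng(seed)
    w_t = np.sqrt(t) * rng.standard_normal((n, d)) @ chol.T
    dw = np.sqrt(T - t) * rng.standard_normal((n, d)) @ chol.T
    x = S0 * np.exp(sigma * w_t - 0.5 * sigma**2 * t)
    v = sigma * dw
    s_T = x * np.exp(v - 0.5 * sigma**2 * (T - t))
    return x, v, s_T
\end{verbatim}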
\end{Framework} We consider a $d=100$-dimensional max-call option with a time-$T$ payoff of the form \[ \phi(S_T) = \brak{\max_{1 \le i \le d} S^i_T - K}^+ \] with strike price $K=16.4$.\footnote{The strike price has been chosen so that approximately half of the simulated paths end up in the money at time $T$.} Since the time-$t$ price \begin{equation} \label{max-call} \mathbb{E} \edg{\brak{\max_{1 \le i \le d} S^i_T - K}^+ \, \bigg| \, S_t} \end{equation} does not admit a closed form solution, it has to be computed numerically. In this example, \[ h(X, 0) = \brak{\max_{1 \le i \le d} S^i_t e^{- \sigma^2_i (T-t)/2} - K}^+ \] equals zero with high probability. Therefore, it is not useful as an additional feature. Instead, we use the additional feature \begin{equation} \label{admax-call} a(X) = \max_{1 \le i \le d} X_i = \max_{1 \le i \le d} S^i_{t}. \end{equation} The numerical results are reported in Table \ref{tab:4}. Additional results are given in Table \ref{tab:10} in the Appendix. As can be seen, the three neural networks outperform the second order polynomial regression, which works better than the linear regression, and all methods benefit from the inclusion of the additional feature \eqref{admax-call}. \begin{table}[h!] \centering {\small \begin{tabular}{l|c|c|c|c} \toprule \thead{} & 95\% CI $U^{\nu_X}$ & 95\% CI $D^{\nu_X}$ & \(\displaystyle \frac{\n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) & \( \displaystyle \frac{\mbox{95\% CB } \n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) \cr \midrule \makecell{lin. regr.} & [6.40069, 6.41061] & [6.39198, 6.40428] & 5.34996\% & 5.81190\% \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & [6.40470, 6.41466] & [6.39803, 6.41037] & 4.40053 \% & 4.96856 \% \cr \makecell{poly. regr.} & [6.40380, 6.41376] & [6.39667, 6.40901] & 4.48173 \% & 5.02730 \% \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & [6.40482, 6.41474] & [6.40286, 6.41520] & 3.22723 \% & 3.93530 \% \cr \makecell{NN tanh} & [6.39444, 6.40440] & [6.39198, 6.40428] & 2.31392\% & 2.31393\% \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & [6.39930, 6.40926] & [6.39803, 6.41037] & -0.87808 \% & 2.13372 \% \cr \makecell{NN ReLU} & [6.39436, 6.40428] & [6.39198, 6.40428] & 2.23445\% & 2.23445\% \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & [6.39911, 6.40907] & [6.39803, 6.41037] & -1.21985 \% & 1.95863 \% \cr \makecell{NN LSE} & [6.39430, 6.40422] & [6.39198, 6.40428] & 2.18492\% & 3.15181 \% \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. feature} & [6.39903, 6.40895] & [6.39803, 6.41037] & -1.36571 \% & 1.86024 \% \cr \bottomrule \end{tabular} } \caption{Numerical results for the max-call option \eqref{max-call}} \label{tab:4} \end{table} \subsection{Binary options} \label{ex:binary} In our next example we consider a $d=100$-dimensional binary option in Framework \ref{fw} with time-$T$ payoff \begin{equation} \label{bin1} \phi(S_T) = 10 \times 1_{\crl{\min_{1 \le i \le d} S^i_T > K}}, \end{equation} where the level above which the option finishes in the money is set to $K=5.6$\footnote{The value of $K$ has been chosen so that approximately half of the simulated paths end up in the money at time $T$.}. As before, the time-$t$ price \begin{equation} \label{bin2} 10 \, \mathbb{E} \edg{1_{\crl{\min_{1 \le i \le d} S^i_T > K}} \, \Big| \, S_t} = 10 \, \p \edg{\min_{1 \le i \le d} S^i_T > K \, \Big| \, S_t} \end{equation} cannot computed exactly and therefore, has to be evaluated numerically. 
As additional feature, we use \begin{equation} \label{adbin1} a(X) = \min_{1 \le i \le d} X_i = \min_{1 \le i \le d} S^i_t. \end{equation} Our main numerical results are listed in Table \ref{tab:5}, and additional results are given in Table \ref{tab:11} in the Appendix. Again, the three neural networks work better than the second order polynomial regression, which is more accurate than the linear regression, and only the two regressions benefit significantly from the inclusion of the additional feature \eqref{adbin1}. \begin{table}[h!] \centering {\small \begin{tabular}{l|c|c|c|c} \toprule \thead{} & 95\% CI $U^{\nu_X}$ & 95\% CI $D^{\nu_X}$ & \(\displaystyle \frac{\n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) & \( \displaystyle \frac{\mbox{95\% CB } \n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) \cr \midrule \makecell{lin. regr.} & [24.36607, 24.36995] & [24.33501, 24.35673] & 2.88079\% & 3.22343\% \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & [24.36485, 24.36873] & [24.33538, 24.35710] & 2.67631 \% & 3.04196 \% \cr \makecell{poly. regr.} & [24.36097, 24.36485] & [24.33416, 24.35588] & 2.55474 \% & 2.93553 \% \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & [24.36085, 24.36477] & [24.34659, 24.36831] & 2.11798 \% & 2.56456 \% \cr \makecell{NN tanh} & [24.34679, 24.35075] & [24.33501, 24.35673] & 0.82959\% & 1.66742\% \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & [24.34656, 24.35052] & [24.33538, 24.35710] & -0.28751 \% & 1.41715 \% \cr \makecell{NN ReLU} & [24.34675, 24.35071] & [24.33501, 24.35673] & -1.09534\% & 0.94431\% \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & [24.34664, 24.35056] & [24.33538, 24.35710] & -0.23672 \% & 1.42644 \% \cr \makecell{NN LSE} & [24.34711, 24.35107] & [24.33501, 24.35673] & 0.90989 \% & 1.70870 \% \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. feature} & [24.35064, 24.35464] & [24.33538, 24.35710] & 1.24260 \% & 1.90673 \% \cr \bottomrule \end{tabular} } \caption{Numerical results for the binary option \eqref{bin2}} \label{tab:5} \end{table} As a final example we consider, as a variant of \eqref{bin1}, a $d=100$-dimensional binary option in Framework \ref{fw} with time-$T$ payoff \[ \phi(S_T) = 10 \times 1_{\crl{\max_{1 \le i \le d} S^i_T \geq K}} \] for a strike price $K=16.3$\footnote{Again, $K$ has been chosen so that approximately half of the simulated paths end up in the money at time $T$.}. Again, the time-$t$ price \begin{equation} \label{bin3} 10 \, \mathbb{E} \edg{1_{\max_{1 \le i \le d} S^i_T \geq K} \, \Big| \, S_t} = 10 \, \p \edg{\max_{1 \le i \le d} S^i_T \geq K \, \Big| \, S_t} \end{equation} does not admit a closed-form solution and so, has to be evaluated numerically. As additional feature, we use \begin{equation} \label{adbin2} a(X) = \max_{1 \le i \le d} X_i = \max_{1 \le i \le d} S^i_t. \end{equation} Table \ref{tab:6} reports our numerical results. Additional results can be found in Table \ref{tab:12} in the Appendix. Once more, the three networks yield better results than the second order polynomial regression, which works better than the linear regression. But this time, the use of the additional feature \eqref{adbin2} does not help any of the approaches. \begin{table}[h!] 
\centering {\small \begin{tabular}{l|c|c|c|c} \toprule \thead{} & 95\% CI $U^{\nu_X}$ & 95\% CI $D^{\nu_X}$ & \(\displaystyle \frac{\n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) & \( \displaystyle \frac{\mbox{95\% CB } \n{\bar{f} - \hat{f}}_{L^2(\nu_X)}}{\n{\bar{f}}_{L^2(\nu_X)}} \) \cr \midrule \makecell{lin. regr.} & [24.35271, 24.35655] & [24.32253, 24.34425] & 2.78149 \% & 3.11875 \% \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & [24.35260, 24.35644] & [24.32253, 24.34425] & 2.77345 \% & 3.11161 \% \cr \makecell{poly. regr.} & [24.35347, 24.35747] & [24.32775, 24.34947] & 2.22950 \% & 2.63887 \% \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & [24.35439, 24.35835] & [24.32447, 24.34619] & 2.43212 \% & 2.81159 \% \cr \makecell{NN tanh} & [24.33540, 24.33936] & [24.33178, 24.35350] & 0.71028 \% & 1.57954 \% \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & [24.33476, 24.33872] & [24.32253, 24.34425] & 0.97186 \% & 1.71328 \% \cr \makecell{NN ReLU} & [24.33552, 24.33952] & [24.33178, 24.35350] & 0.74986\% & 1.59774 \% \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & [24.33451, 24.33851] & [24.32253, 24.34425] & 0.92458 \% & 1.68701 \% \cr \makecell{NN LSE} & [24.33585, 24.33985] & [24.33178, 24.35350] & 0.83183\% & 1.63783\% \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. feature} & [24.33418, 24.33818] & [24.32253, 24.34425] & 0.85883 \% & 1.65185 \% \cr \bottomrule \end{tabular} } \caption{Numerical results for the binary option \eqref{bin3}} \label{tab:6} \end{table} \section{Conclusion} \label{sec:conclusion} In this paper, we have studied the numerical approximation of the conditional expectation of a square-integrable random variable $Y$ given a number of explanatory random variables $X_1, \dots, X_d$ by minimizing the mean squared distance between $Y$ and $f(X_1, \dots, X_d)$ over a family ${\cal S}$ of Borel functions $f \colon \R^d \to \R$. The accuracy of the approximation depends on the suitability of the function family ${\cal S}$ and the performance of the numerical method used to solve the minimization problem. Using an expected value representation of the minimal mean squared distance which does not involve a minimization problem or require knowledge of the true regression function, we have derived $L^2$-bounds for the approximation error of any numerical solution to a given least squares regression problem. We have illustrated the method by computing approximations of conditional expectations in a range of examples using linear regression, polynomial regression as well as different neural network regressions and estimating their $L^2$-approximation errors. Our results contribute to trustworthy AI by providing numerical guarantees for a computational problem arising in various applications. 
\begin{appendix} \section{Additional numerical results} In this appendix we report for all numerical experiments of Section \ref{sec:ex} our estimates of \begin{itemize} \item the upper bound $U^{\nu} = \mathbb{E}^{\nu}[(Y - \hat{f}(X))^2]$; see \eqref{Unu}, \item the minimal mean squared distance $D^{\nu} = \mathbb{E}^{\nu}[(Y - \bar{f}(X))^2]$; see \eqref{Dnu}, \item the squared $L^2$-approximation error $F^{\nu} = \n{\bar{f} - \hat{f}}^2_{L^2(\nu)}$; see \eqref{Fnu}, \item the squared $L^2$-norm $C^{\nu} = \n{\bar{f}}^2_{L^2(\nu)}$; see \eqref{Cnu}, \end{itemize} together with the corresponding sample standard errors $\sqrt{v^{U,\nu}/N}$, $\sqrt{v^{D,\nu}/N}$, $\sqrt{v^{F,\nu}/N}$ and $\sqrt{v^{C,\nu}/N}$, which were used to compute the quantities in the tables of Section \ref{sec:ex}. \begin{table}[H] \centering {\small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \toprule \thead{} & $U^{\nu_X}$ & \( \displaystyle \sqrt{\frac{v^{U,\nu_X}_{N}}{N}}\) & \( \displaystyle D^{\nu_X}\) & \( \displaystyle\sqrt{\frac{v^{D,\nu_X}_{N}}{N}}\) & $F^{\nu_X}$ & \( \displaystyle \sqrt{\frac{v^{F,\nu}_{N}}{N}}\) & $C^{\nu_X}$ & \( \displaystyle \sqrt{\frac{v^{C,\nu_X}_{N}}{N}}\) \cr \midrule \makecell{lin. regr.} & 3.99952 & 0.00038 & 0.99996 & 0.00015 & 2.99955 & 0.00023 & 4.99939 & 0.00049 \cr \rowcolor[gray]{0.925} \makecell{poly. regr.} & 0.99996 & 0.00006 & 0.99996 & 0.00015 & -0.00001 & 0.00012 & 4.99939 & 0.00049 \cr \makecell{NN tanh} & 1.00021 & 0.00006 & 0.99996 & 0.00015 & 0.00025 & 0.00012 & 4.99939 & 0.00049 \cr \rowcolor[gray]{0.925} \makecell{NN ReLU} & 1.00007 & 0.00006 & 0.99996 & 0.00015 & 0.00010 & 0.00012 & 4.99939 & 0.00049 \cr \makecell{NN LSE} & 0.99998 & 0.00006 & 0.99996 & 0.00015 & 0.00001 & 0.00012 & 4.99939 & 0.00049 \cr \bottomrule \end{tabular} } \caption{Additional numerical results for the polynomial example \eqref{Ypoly}} \label{tab:7} \end{table} \begin{table}[H] \centering {\small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \toprule {} & $U^{\nu_X}$ & \( \displaystyle \sqrt{\frac{v^{U,\nu_X}_{N}}{N}}\) & $D^{\nu_X}$ & \( \displaystyle \sqrt{\frac{v^{D,\nu_X}_{N}}{N}}\) & $F^{\nu_X}$ & \( \displaystyle \sqrt{\frac{v^{F,\nu_X}_{N}}{N}}\) & $C^{\nu_X}$ & \( \displaystyle \sqrt{\frac{v^{C,\nu_X}_{N}}{N}} \) \cr \midrule \makecell{lin. regr.} & 41.57941 & 0.00210 & 36.17045 & 0.00262 & 5.40894 & 0.00171 & 5.40877 & 0.00187 \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & 37.89302 & 0.00214 & 36.16681 & 0.00262 & 1.72617 & 0.00172 & 5.40921 & 0.00187 \cr \makecell{poly. regr.} & 36.67188 & 0.00202 & 36.17045 & 0.00262 & 0.50229 & 0.00169 & 5.40877 & 0.00187 \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & 36.39720 & 0.00205 & 36.16681 & 0.00262 & 0.22950 & 0.00169 & 5.40921 & 0.00187 \cr \makecell{NN tanh} & 36.16822 & 0.00205 & 36.17045 & 0.00262 & -0.00120 & 0.00169 & 5.40877 & 0.00187 \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & 36.16761 & 0.00205 & 36.16681 & 0.00262 & -0.00017 & 0.00169 & 5.40921 & 0.00187 \cr \makecell{NN ReLU} & 36.16806 & 0.00205 & 36.17045 & 0.00262 & -0.00136 & 0.00169 & 5.40877 & 0.00187 \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & 36.16691 & 0.00205 & 36.16681 & 0.00262 & -0.00088 & 0.00169 & 5.40921 & 0.00187 \cr \makecell{NN LSE} & 36.16801 & 0.00205 & 36.17045 & 0.00262 & -0.00141 & 0.00169 & 5.40877 & 0.00187 \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. 
feature} & 36.16688 & 0.00205 & 36.16681 & 0.00262 & -0.00088 & 0.00169 & 5.40921 & 0.00187 \cr \bottomrule \end{tabular} } \caption{Additional numerical results for the non-polynomial example \eqref{non-poly} minimized under $\p$} \label{tab:8} \end{table} \begin{table}[H] \centering {\small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \toprule {} & $U^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{U,\nu}_{N}}{N}}\) & $D^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{D,\nu}_{N}}{N}}\) & $F^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{F,\nu}_{N}}{N}} \) & $C^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{C,\nu}_{N}}{N}}\) \cr \midrule \makecell{lin. regr.} & 39.93978 & 0.00212 & 39.85796 & 0.00271 & 0.08197 & 0.00168 & 11.05521 & 0.00209 \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & 39.92643 & 0.00212 & 39.85343 & 0.00271 & 0.07480 & 0.00169 & 11.05788 & 0.00209 \cr \makecell{poly. regr.} & 39.85959 & 0.00212 & 39.85796 & 0.00271 & 0.00177 & 0.00169 & 11.05521 & 0.00209 \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & 39.85900 & 0.00212 & 39.85513 & 0.00271 & 0.00259 & 0.00168 & 11.05700 & 0.00209 \cr \makecell{NN tanh} & 39.85525 & 0.00212 & 39.85796 & 0.00271 & -0.00258 & 0.00169 & 11.05521 & 0.00209 \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & 39.85133 & 0.00212 & 39.85343 & 0.00271 & -0.00008 & 0.00169 & 11.05788 & 0.00209 \cr \makecell{NN ReLU} & 39.85522 & 0.00212 & 39.85796 & 0.00271 & -0.00262 & 0.00169 & 11.05521 & 0.00209 \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & 39.85134 & 0.00212 & 39.85343 & 0.00271 & -0.00008 & 0.00169 & 11.05788 & 0.00209 \cr \makecell{NN LSE} & 39.85524 & 0.00212 & 39.85796 & 0.00271 & -0.00259 & 0.00169 & 11.05521 & 0.00209 \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. feature} & 39.85130 & 0.00212 & 39.85343 & 0.00271 & -0.00012 & 0.00169 & 11.05788 & 0.00209 \cr \bottomrule \end{tabular} } \caption{Additional numerical results for the non-polynomial example minimized under the distorted measure $\p^{\nu}$} \label{tab:9} \end{table} \begin{table}[H] \centering {\small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \toprule {} & $U^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{U,\nu}_{N}}{N}}\) & $D^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{D,\nu}_{N}}{N}} \) & $F^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{F,\nu}_{N}}{N}} \) & $C^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{C,\nu}_{N}}{N}} \) \cr \midrule \makecell{lin. regr.} & 6.40565 & 0.00253 & 6.39813 & 0.00314 & 0.00764 & 0.00084 & 2.67061 & 0.00119 \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & 6.40968 & 0.00254 & 6.40420 & 0.00315 & 0.00517 & 0.00086 & 2.67039 & 0.00119 \cr \makecell{poly. regr.} & 6.40878 & 0.00254 & 6.40284 & 0.00315 & 0.00537 & 0.00084 & 2.67290 & 0.00119 \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & 6.40978 & 0.00253 & 6.40903 & 0.00315 & 0.00278 & 0.00082 & 2.66955 & 0.00119 \cr \makecell{NN tanh} & 6.39942 & 0.00254 & 6.39813 & 0.00314 & 0.00143 & 0.00084 & 2.67061 & 0.00119 \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & 6.40428 & 0.00254 & 6.40420 & 0.00315 & -0.00021 & 0.00086 & 2.67039 & 0.00119 \cr \makecell{NN ReLU} & 6.39932 & 0.00253 & 6.39813 & 0.00314 & 0.00133 & 0.00084 & 2.67061 & 0.00119 \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & 6.40409 & 0.00254 & 6.40420 & 0.00315 & -0.00040 & 0.00086 & 2.67039 & 0.00119 \cr \makecell{NN LSE} & 6.39926 & 0.00253 & 6.39813 & 0.00314 & 0.00127 & 0.00084 & 2.67061 & 0.00119 \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. 
feature} & 6.40399 & 0.00253 & 6.40420 & 0.00315 & -0.00050 & 0.00086 & 2.67039 & 0.00119 \cr \bottomrule \end{tabular} } \caption{Additional numerical results for the max-call option \eqref{max-call}} \label{tab:10} \end{table} \begin{table}[H] \centering {\small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \toprule {} & $U^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{U,\nu}_{N}}{N}} \) & $D^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{D,\nu}_{N}}{N}}\) & $F^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{S,\nu}_{N}}{N}} \) & $C^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{C,\nu}_{N}}{N}} \) \cr \midrule \makecell{lin. regr.} & 24.36801 & 0.00099 & 24.34587 & 0.00554 & 0.02104 & 0.00322 & 25.35285 & 0.00562 \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & 24.36679 & 0.00099 & 24.34624 & 0.00554 & 0.01816 & 0.00322 & 25.36054 & 0.00562 \cr \makecell{poly. regr.} & 24.36291 & 0.00099 & 24.34502 & 0.00554 & 0.01656 & 0.00322 & 25.36815 & 0.00562 \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & 24.36281 & 0.00100 & 24.35745 & 0.00554 & 0.01138 & 0.00322 & 25.36212 & 0.00562 \cr \makecell{NN tanh} & 24.34877 & 0.00101 & 24.34587 & 0.00554 & 0.00181 & 0.00322 & 25.35285 & 0.00562 \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & 24.34854 & 0.00101 & 24.34624 & 0.00554 & -0.00021 & 0.00322 & 25.36054 & 0.00562 \cr \makecell{NN ReLU} & 24.34873 & 0.00101 & 24.34587 & 0.00554& 0.00174 & 0.00322 & 25.35285 & 0.00562 \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & 24.34860 & 0.00100 & 24.34624 & 0.00554 & -0.00014 & 0.00322 & 25.36054 & 0.00562 \cr \makecell{NN LSE} & 24.34909 & 0.00101 & 24.34587 & 0.00554 & 0.00210 & 0.00322 & 25.35285 & 0.00562 \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. feature} & 24.35264 & 0.00102 & 24.34624 & 0.00554 & 0.00392 & 0.00322 & 25.36054 & 0.00562 \cr \bottomrule \end{tabular} } \caption{Additional numerical results for the binary option \eqref{bin2}} \label{tab:11} \end{table} \begin{table}[H] \centering {\small \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \toprule {} & $U^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{U,\nu}_{N}}{N}}\) & $D^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{D,\nu}_{N}}{N}}\) & $F^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{F,\nu}_{N}}{N}} \) & $C^{\nu}$ & \( \displaystyle \sqrt{\frac{v^{C,\nu}_{N}}{N}} \) \cr \midrule \makecell{lin. regr.} & 24.35463 & 0.00098 & 24.33339 & 0.00554 & 0.02060 & 0.00322 & 26.63183 & 0.00571 \cr \rowcolor[gray]{0.925} \makecell{lin. regr., add. feature} & 24.35452 & 0.00098 & 24.33339 & 0.00554 & 0.02049 & 0.00322 & 26.63183 & 0.00571 \cr \makecell{poly. regr.} & 24.35547 & 0.00102 & 24.33861 & 0.00554 & 0.01323 & 0.00323 & 26.62428 & 0.00571 \cr \rowcolor[gray]{0.925} \makecell{poly. regr., add. feature} & 24.35637 & 0.00101 & 24.33533 & 0.00554 & 0.01576 & 0.00322 & 26.63636 & 0.00571 \cr \makecell{NN tanh} & 24.33738 & 0.00101 & 24.34264 & 0.00554 & 0.00134 & 0.00322 & 26.62898 & 0.00571 \cr \rowcolor[gray]{0.925} \makecell{NN tanh, add. feature} & 24.33674 & 0.00101 & 24.33339 & 0.00554 & 0.00252 & 0.00322 & 26.63183 & 0.00571 \cr \makecell{NN ReLU} & 24.33752 & 0.00102 & 24.34264 & 0.00554 & 0.00150 & 0.00322 & 26.62898 & 0.00571 \cr \rowcolor[gray]{0.925} \makecell{NN ReLU, add. feature} & 24.33651 & 0.00102 & 24.33339 & 0.00554 & 0.00228 & 0.00322 & 26.63183 & 0.00571 \cr \makecell{NN LSE} & 24.33785 & 0.00102 & 24.34264 & 0.00554 & 0.00184 & 0.00322 & 26.62898 & 0.00571 \cr \rowcolor[gray]{0.925} \makecell{NN LSE, add. 
feature} & 24.33618 & 0.00102 & 24.33339 & 0.00554 & 0.00196 & 0.00322 & 26.63183 & 0.00571 \cr \bottomrule \end{tabular} } \caption{Additional numerical results for the binary option \eqref{bin3}} \label{tab:12} \end{table} \end{appendix}
\chapter{GelTip Tactile Sensor for Dexterous Manipulation in Clutter}\label{chap1} \begin{frontmatter} \begin{aug} \author[addressrefs={ad1}]% {% \fnm{Daniel} \snm{Fernandes Gomes}% }% \author[addressrefs={ad1}]% {% \fnm{Shan} \snm{Luo}% }% \address[id=ad1]% {% smARTLab, Department of Computer Science, University of Liverpool, United Kingdom. \\ Emails: \{danfergo, shan.luo\}@liverpool.ac.uk. }% \end{aug} \begin{abstract} Tactile sensing is an essential capability for robots that carry out dexterous manipulation tasks. While cameras, Lidars and other remote sensors can assess a scene globally and instantly, tactile sensors can reduce their measurement uncertainties and gain information about the local physical interactions between the in-contact objects and the robot, which are often not accessible via remote sensing. Tactile sensors can be grouped into two main categories: electronic tactile skins and camera-based optical tactile sensors. The former are slim and can be fitted to different body parts, whereas the latter assume a more prismatic shape and have much higher sensing resolutions, offering a good advantage for being used as robotic fingers or fingertips. One such optical tactile sensor is our \textit{GelTip} sensor, which is shaped as a finger and can sense contacts at any location of its surface. As such, the \textit{GelTip} sensor is able to detect contacts from all directions, like a human finger. To capture these contacts, it uses a camera installed at its base to track the deformations of the opaque elastomer that covers its hollow, rigid and transparent body. Thanks to this design, a gripper equipped with \textit{GelTip} sensors is capable of simultaneously monitoring contacts happening inside and outside its grasp closure. Experiments carried out using this sensor demonstrate how contacts can be localised, and more importantly, the advantages, and even possibly a necessity, of leveraging all-around touch sensing in dexterous manipulation tasks in clutter where contacts may happen at any location of the finger. In particular, experiments carried out in a Blocks World environment show that the detected contacts on the fingers can be used to adapt planned actions during the different moments of the reach-to-grasp motion. All the materials for the fabrication of the \textit{GelTip} sensor can be found at https://danfergo.github.io/geltip/. \end{abstract} \begin{keywords} \kwd{Tactile sensors} \kwd{Dexterous manipulation} \kwd{GelTip sensor} \end{keywords} \end{frontmatter} \section{Introduction} \label{sec:sample1} Like humans, robots need to make use of tactile sensing when performing dexterous manipulation tasks in cluttered environments such as homes and warehouses. In such cases, the positions and shapes of objects are uncertain, and it is of critical importance to sense and adapt to the cluttered scene. With cameras, Lidars and other remote sensors, large areas can be assessed instantly~\cite{peel2018localisation}.
However, measurements obtained using such sensors often suffer from large uncertainties, occlusions and variations in factors like lighting conditions and shadows. Thanks to the direct interaction with the object, tactile sensing can reduce the measurement uncertainties of remote sensors and is not affected by changes in the aforementioned surrounding conditions. Furthermore, tactile sensing gains information about the physical interactions between the objects and the robot end-effector that is often not accessible via remote sensors, e.g., incipient slip, collisions and detailed geometry of the object. As dexterous manipulation requires precise information about the interactions with the object, especially in moments of contact or near-contact, it is of crucial importance to obtain the accurate measurements provided by tactile sensing. For instance, misestimating the size of an object by \SI{1}{\mm}, or its surface friction coefficient, during (or right before) a grasp might result in severely damaging the tactile sensor or dropping the object. In contrast, misestimating the object shape by a few centimetres when the object is still farther away will not have a big impact on the manipulation. To this end, camera vision and other remote sensors can be used to produce initial estimations of the object and plan manipulation actions, whereas tactile sensing can be used to refine such estimates and facilitate the in-hand manipulation~\cite{luo2017robotic,luo2021vitac}. The usage of tactile sensors for manipulation tasks has been studied since \cite{opticalSensorsDexterousManipulation}, and in the past years a wide range of tactile sensors and working principles have been studied in the literature~\cite{dahiya2013directions,luo2017robotic,luo2021vitac}, ranging from flexible electronic skins~\cite{kaltenbrunner2013ultra}, fiber optic based sensors~\cite{xie2013fiber} and capacitive tactile sensors~\cite{maiolino2013flexible} to camera-based optical tactile sensors~\cite{TacTipFamily,GelSight2017}, many of which have been employed to aid robotic grasping~\cite{kappassov2015tactile}. Electronic tactile skins and flexible capacitive tactile sensors can be adapted to different body parts of the robot that have various curvatures and geometries. However, due to the need for a dielectric in each sensing element, they produce tactile readings of considerably lower resolution. For example, a WTS tactile sensor from Weiss Robotics used in~\cite{luo2015novel,luo2016iterative,luo2019iclap} has 14 $\times$ 6 taxels (tactile sensing elements). In contrast, camera-based optical tactile sensors provide higher-resolution tactile images. On the other hand, they usually have a bulkier shape due to the need to host the camera and the gap between the camera and the tactile membrane. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{contact_moments.png} \\ \caption{There are two distinct areas of contact highlighted in the robot gripper during a manipulation task: \textbf{(A)} Outside contacts when the robot is probing or steering the object to be grasped; \textbf{(B)} Inside contacts when the object is within the grasp closure, which can guide the grasping. } \label{fig:contact_areas} \end{figure} Optical tactile sensors can be grouped into two main categories: marker-based and image-based, with the former pioneered by the \textit{TacTip} sensors~\cite{TacTip2009} and the latter by the \textit{GelSight} sensors~\cite{RetrographicSensing}.
As the name suggests, marker-based sensors exploit the tracking of markers printed on a soft domed membrane to perceive the membrane displacement and the resulting contact forces. By contrast, image-based sensors directly process the raw image of the membrane with a variety of image recognition methods to recognise textures, localise contacts, reconstruct the membrane deformations, etc. Because of the different working mechanisms, marker-based sensors measure the surface on a lower-resolution grid of points, whereas image-based sensors make use of the full resolution provided by the camera. Some \textit{GelSight} sensors have also been produced with markers printed on the sensing membrane \cite{GelSight2017GeometrySlip}, enabling marker-based and image-based methods to be used with the same sensor. Both families of sensors have been produced with either flat sensing surfaces or domed/finger-shaped surfaces. In this chapter, we will first review existing optical tactile sensors in Section~\ref{sec:overview}, and then look in detail at one example of such image-based tactile sensors, i.e., the \textit{GelTip}~\cite{geltip,gomes2020geltip}, in Section~\ref{sec:sensormodel}. The \textit{GelTip} is shaped as a finger, and thus it can be installed on traditional, off-the-shelf grippers to replace their fingers, enabling contacts to be sensed both inside and outside the grasp closure, as shown in Figure~\ref{fig:contact_areas}. In Section~\ref{sec:experimentresults}, we will look into experiments carried out using the \textit{GelTip} sensor that demonstrate how contacts can be localised, and more importantly, the advantages, and possibly a necessity, of leveraging all-around touch sensing in dexterous manipulation tasks in clutter. In particular, experiments carried out in a Blocks World environment show that the detected contacts on the fingers can be used to adapt planned actions during the different moments of the reach-to-grasp motion. \section{An overview of tactile sensors} \label{sec:overview} \review{ Compared to remote sensors like cameras, tactile sensors are designed to assess the properties of objects via physical interactions, e.g., geometry, texture, humidity and temperature. A large range of working principles have been actively proposed in the literature in the past decades~\cite{dahiya2013directions,luo2017robotic,dahiya2009tactile}. An optical tactile sensor uses a camera enclosed within its shell, pointing at its tactile membrane (an opaque window membrane made of a soft material), to capture the properties of objects from the deformations caused to the membrane by the object in contact. Such characteristics ensure that the captured tactile images are not affected by external illumination variations. To perceive the elastomer deformations from the captured tactile images, multiple working principles have been proposed. We group such approaches into two categories: marker tracking and raw image analysis. Optical tactile sensors contrast with electronic tactile skins, which are usually thinner and less bulky. They are flexible and can adapt to different body parts of the robot that have various curvatures and geometries. However, each sensing element of most of the tactile skins, e.g., a capacitive transducer, has the size of a few square millimetres or even centimetres, which results in a limited spatial resolution of the tactile skins.
Here we do not cover such skins, as they are an extensive topic on their own; instead, we point the reader to two surveys that cover these sensors extensively \cite{DexterousTactileSensorsSurvey, electronicSkins}. } \subsection{Marker-based optical tactile sensors} The first marker-based sensor proposal can be found in \cite{gelforce}; more recently, an important family of marker-based tactile sensors is the TacTip family of sensors described in \cite{TacTipFamily}. Since its initial dome-shaped version \cite{TacTip2009}, different morphologies have been proposed, including the TacTip-GR2 \cite{tactipGR2} with a smaller fingertip design, the TacTip-M2 \cite{tactipM2} that mimics a large thumb for in-hand linear manipulation experiments, and the TacCylinder to be used in capsule endoscopy applications. Thanks to their miniaturised and adapted design, TacTip-M2~\cite{tactipM2} and TacTip-GR2~\cite{tactipGR2} have been used as fingers (or fingertips) in robotic grippers. Although each TacTip sensor introduces some manufacturing improvements or novel surface geometries, the same working principle is shared: white pins are imprinted onto a black membrane and can then be tracked using computer vision methods. As shown in Table~\ref{table:rl_summary_1}, there are also other optical tactile sensors that track the movements of markers. In~\cite{FingerVision}, an optical tactile sensor named FingerVision is proposed that makes use of a transparent membrane, with the advantage of gaining proximity sensing. However, the usage of the transparent membrane makes the sensor lack the robustness to external illumination variance associated with touch sensing. In~\cite{ColorMixingTactileSensor}, semi-opaque grids of magenta and yellow markers, painted on the top and bottom surfaces of a transparent membrane, are proposed, in which the mixture of the two colours is used to detect horizontal displacements of the elastomer. In \cite{greendots}, green fluorescent particles are randomly distributed within the soft elastomer with a black opaque coating so that, according to the authors, a higher number of markers can be tracked and used to predict the interaction with the object. In \cite{greenDotsMulti}, a sensor with the same membrane construction method, four Raspberry Pi cameras and fisheye lenses has been proposed for optical tactile skins. \begin{table} \centering \caption{A summary of influential marker-based optical tactile sensors} \def\arraystretch{1.5} \begin{tabular}{R{0.15\linewidth} | p{0.23\linewidth} p{0.46\linewidth} } & \textbf{Sensor Structure} & \textbf{Illumination and Tactile Membrane} \\ \hline \textbf{TacTip} \cite{TacTip2009} & The \textit{TacTip} has a domed (finger) shape, $40 \times 40 \times 85$ mm, and tracks 127 pins. It uses the Microsoft LifeCam HD webcam. & \multirow{2}{*}{\parbox{\linewidth}{The membrane is black on the outside, with white pins and filled with transparent elastomer inside. Initially, the membrane was cast from VytaFlex 60 silicone rubber, the pins were painted by hand and the tip was filled with optically clear silicone gel (Techsil, RTV27905); however, currently the entire sensor can be 3D-printed using a multi-material printer (Stratasys Objet 260 Connex), with the rigid parts printed in Vero White material and the compliant skin in the rubber-like TangoBlack+. }} \\ \textbf{TacTip-M2} \cite{tactipM2} & It has a thumb-like or semi-cylindrical shape, measuring $32 \times 102 \times 95$~mm, and it tracks 80 pins.
\\ \textbf{TacTip-GR2} \cite{tactipGR2} & It has a cone shape with a flat sensing membrane, and is smaller than the TacTip, $40 \times 40 \times 44$ mm; it tracks 127 pins and uses the Adafruit SPY PI camera. \\ \textbf{TacCylinder} \cite{tacCylinder} & A catadioptric mirror is used to track the 180 markers around the sensor's cylindrical body. \\ \hline \textbf{FingerVision} \cite{FingerVision} & It uses an ELP Co. USBFHD01M-L180 camera with a 180-degree fisheye lens. It measures approximately $40 \times 47 \times 30$ mm. & The membrane is transparent, made with Silicones Inc. XP-565, with \SI{4}{\mm} of thickness and markers spaced by \SI{5}{\mm}. No internal illumination is used, as the membrane is transparent. \\ \hline \textbf{Subtractive Color Mixing} \cite{ColorMixingTactileSensor} & N/A & Two layers of semi-opaque colored markers are proposed. SortaClear 12 from Smooth-On, clear and with Ignite pigment, is used to make the inner and outer sides. \\ \hline \textbf{Green Markers} \cite{greendots} & The sensor has a flat sensing surface, measures $50 \times 50 \times 37 $~mm and is equipped with an ELP USBFHD06H RGB camera with a fisheye lens. & \multirow{2}{*}{\parbox{\linewidth}{It is composed of three layers: stiff elastomer, soft elastomer with randomly distributed green fluorescent particles in it, and black opaque coating. The stiff layer is made of ELASTOSIL® RT 601 RTV-2 and is poured directly on top of the electronics, the soft layer is made of Ecoflex™ GEL (shore hardness 000-35) with the markers mixed in, and the final coat layer is made of ELASTOSIL® RT 601 RTV-2 (shore hardness 10A) black silicone. A custom board with an array of SMD white LEDs is mounted on the sensor base, around the camera. }} \\ \textbf{Multi-camera Skin} \cite{greenDotsMulti} & It has a flat prismatic shape of $49\times51\times17.45$ mm. Four Pi cameras are assembled in a $2\times2$ array and fish-eye lenses are used to enable its thin shape. & \end{tabular} \label{table:rl_summary_1} \end{table} \subsection{Image-based optical tactile sensors} On the other side of the spectrum, the GelSight sensors, initially proposed in~\cite{RetrographicSensing}, exploit the entire resolution of the tactile images captured by the sensor camera, instead of just tracking markers. Due to the soft opaque tactile membrane, the captured images are robust to external light variations, and capture information about the touched surface's geometry, unlike most conventional tactile sensors that measure the touching force. Leveraging the high resolution of the captured tactile images, high-accuracy geometry reconstructions are produced in \cite{GelSightSmallParts, luo2018vitac,lee2019touching,cao2020spatio,lu2019surface,jiang2021vision}. In~\cite{GelSightSmallParts}, this sensor was used as the fingers of a robotic gripper to insert a USB cable into the corresponding port effectively. However, the sensor only measures a small flat area oriented towards the grasp closure. In~\cite{gomes2019gelsight,gomes2021generation}, simulation models of the GelSight sensors are also created. Markers were also added to the membrane of the GelSight sensors, enabling the application of the same set of methods that were explored in the TacTip sensors. There are some other sensor designs and adaptations for robotic fingers in~\cite{GelSight2017, GelSlim, digit}.
In \cite{GelSight2017}, matte aluminium powder was used for improved surface reconstruction, together with the LEDs being placed next to the elastomer, and the elastomer being slightly curved on the top/external side. In \cite{GelSlim}, the GelSlim is proposed, wherein a mirror is placed at a shallow, oblique angle to obtain a slimmer design. The camera was placed on the side of the tactile membrane, such that it captures the tactile image reflected onto the mirror. A stretchy textured fabric was also placed on top of the tactile membrane to prevent damage to the elastomer and to improve tactile signal strength. Recently, an even slimmer design (\SI{2}{\mm}) has been proposed \cite{gelslim3}, wherein a hexagonal prismatic shaping lens is used to ensure radially symmetric illumination. In \cite{digit}, DIGIT is also proposed with a USB ``plug-and-play'' port and an easily replaceable elastomer secured with a single screw mount. In these previous works on camera-based optical tactile sensors, multiple designs and two distinct working principles have been exploited. However, none of these sensors has the capability of sensing the entire surface of a robotic finger, i.e., both the sides and the tip of the finger. As a result, they are highly constrained in object manipulation tasks, due to the fact that the contacts can only be sensed when the manipulated object is within the grasp closure~\cite{GelSightSmallParts, IncipientSlip, RegraspVisionTouch}. To address this gap, we propose the finger-shaped sensor named GelTip that captures tactile images with a camera placed at the center of a finger-shaped tactile membrane. It has a large sensing area of approximately \SI{75}{\cm\squared} (\textit{vs.} \SI{4}{\cm\squared} of the GelSight sensor) and a high resolution of 2.1 megapixels over both the sides and the tip of the finger, with a small diameter of \SI{3}{\cm} (\textit{vs.} \SI{4}{\cm} of the TacTip sensor). More details of the main differences between the GelSight sensors, TacTip sensors and our GelTip sensor are given in Table~\ref{table:rl_summary}. \begin{table} \centering \caption{A summary of influential flat and finger-shaped GelSight sensors} \def\arraystretch{1.5} \begin{tabular}{R{1.5cm} | p{3cm} p{3cm} p{3cm} } & \textbf{Sensor Structure} & \textbf{Illumination} & \textbf{Tactile Membrane} \\ \hline \textbf{GelSight}\cite{GelSightSmallParts} & It has a cubic design with a flat square surface. A Logitech C310 (1280 $\times$ 720) camera is placed at its base pointing at the top membrane. & Four LEDs (RGB and white) are placed at the base. The emitted light is guided by the transparent hard surfaces on the sides, so that it enters the membrane tangentially. & A soft elastomer layer is placed on top of a rigid, flat and transparent acrylic sheet. It is painted using semi-specular aluminum flake powder.\\ \textbf{GelSight} \cite{GelSight2017} & It has a close-to-hexagonal prism shape. The webcam used is also the Logitech C310. & Three sets of RGB LEDs are positioned (close to) tangent to the elastomer, with a \SI{120}{\degree} angle from each other. & A matte aluminium powder is proposed for improved surface reconstruction. Its elastomer has a flat bottom and a curved top. \\ \textbf{GelSlim} \cite{GelSlim} & A mirror placed at a shallow oblique angle and a Raspberry Pi Spy (640 $\times$ 480) camera is used to capture the tactile image reflected by the mirror. & A single set of white LEDs is used.
These are pointed at the mirror, so that the light is reflected directly onto the tactile membrane. & A stretchy and textured fabric on the tactile membrane prevents damage to the elastomer and results in improved tactile signal strength.\\ \textbf{GelSlim v3}\cite{gelslim3} & It is shaped similarly to \cite{GelSightSmallParts, GelSight2017}, however slimmer (\SI{20}{\mm} of thickness), and has a round sensing surface. & A custom hexagonal prism is constructed to ensure radially symmetric illumination. & An elastomer with Lambertian reflectance is used, as proposed in \cite{GelSight2017}. \\ \textbf{DIGIT} \cite{digit} & A prismatic design, with curved sides. An OmniVision OVM7692 (640 $\times$ 480) camera is embedded in the custom circuit board. & Three RGB LEDs are soldered directly onto the circuit board, directly illuminating the tactile membrane. & The elastomer can be quickly replaced using a single screw mount. \\ \textbf{Round Fingertip} \cite{br2020soft} & It has a round membrane, close to a quarter of a sphere. A single \SI{160}{\degree} FoV Raspberry Pi camera (640 $\times$ 480) is installed on its base. & Two rings of LEDs are placed on the base of the sensor, with the light being guided through the elastomer. & Both rigid and soft parts of the membrane are cast, using SLA 3D-printed molds. \\ \textbf{OmniTact} \cite{padmanabha2020omnitact} & It has a domed shape. Five endoscope cameras (400 $\times$ 400) are installed on a core mount, and placed orthogonally to each other: pointing at the tip and sides. & RGB LEDs are soldered both onto the top and sides of the sensor. & The elastomer gel is directly poured onto the core mount (and cameras) without any rigid surface or empty space in between. \\ \textbf{GelTip} \cite{geltip} & It has a domed (finger) shape, similar to a human finger. A Microsoft Lifecam Studio webcam (1920 $\times$ 1080) is used. & Three sets of LEDs, with a \SI{120}{\degree} angle from each other, are placed at the sensor base, and the light is guided through the elastomer. & An acrylic test tube is used as the rigid part of the membrane. The deformable elastomer is cast using a three-part SLA/FFF 3D-printed mold. \end{tabular} \label{table:rl_summary} \end{table} With their compact design, the GelTip \cite{geltip} and other GelSight \cite{GelSightSmallParts, GelSlim, gelslim3, digit, cao2021touchroller} sensors are candidate sensors to be mounted on robotic grippers. Recently, custom grippers built using the GelSight working principle have also been proposed \cite{wilson2020design, she2019cable}. Two recent works \cite{br2020soft, padmanabha2020omnitact} also address the issue of the flat surface of previous GelSight sensors. However, their designs have large differences from ours. In \cite{br2020soft}, the proposed design has a tactile membrane with a surface geometry close to a quarter of a sphere. As a consequence, a great portion of contacts happening on the regions outside the grasp closure is undetectable. In \cite{padmanabha2020omnitact}, this issue is mitigated by the use of five endoscope micro cameras looking at different regions of the finger. However, this results in a significant increase in the cost of the sensor: according to the authors, approximately US\$3200 (\textit{vs.} only around US\$100 for ours). \section{The GelTip sensor} \label{sec:sensormodel} \subsection{Overview} \begin{figure} \centering \includegraphics[width=\linewidth]{inner_workings_and_model.png} \caption{\textbf{(A)} The working principle of the proposed \textit{GelTip} sensor.
The three-layer tactile membrane (rigid body, elastomer and paint coating) is shown in gray. The light rays emitted by the LEDs travel through the elastomer. As an object, shown in green, presses the soft elastomer against the rigid body, an imprint is generated. The resulting tactile image is captured by the camera sensor, placed in the core of the tactile sensor. An opaque shell, enclosing all the optical components, ensures constant internal lighting of the elastomer surface. \textbf{(B)} Two-dimensional representation of the geometrical model of the \textit{GelTip} sensor. The tactile membrane is modeled as a cylindrical surface and a semi-sphere. An optical sensor with focal length $f$ is placed at the origin of the sensor's frame of reference and projects a point $P$ on the surface of the sensor into a point $P'$ in the image plane. The sensor has a radius $r$ and its cylindrical body has a length $d$.} \label{fig:sensor_innerworkings_and_model} \end{figure} As illustrated in Figure~\ref{fig:sensor_innerworkings_and_model}~(A), the GelTip optical tactile sensor is shaped as a finger, and its body consists of three layers, from the inside to the outer surface: a rigid transparent body, a soft transparent membrane and a layer of opaque elastic paint. At its base, a camera is installed, looking at the inner surface of the cylinder. When an object is pressed against the tactile membrane, the elastomer distorts and indents according to the object shape. The camera can then capture the resulting imprint as a digital image for further processing. As the membrane is coated with opaque paint, the captured tactile images are immune to external illumination variations, which is characteristic of tactile sensing. To ensure that the imprint is perceptible from the camera view, LED light sources are placed adjacent to the base of the sensor, so that light rays are guided through the membrane. \subsection{The sensor projective model} \label{subsec:sensor_projective_model} For flat sensors, the relationship between the surface and the captured image can often be easily obtained, or simply substituted by a scaling factor from pixels to meters \cite{GelSightSmallParts, GelSight2017, IncipientSlip}. However, when considering highly curved sensors, it is important to study a more general projective function. In this subsection, we will look into how to derive such a projective function $m$. In the case of the \textit{GelTip} sensor, $m$ maps pixels in the image space $(x',y')$ into points $(x,y,z)$ on the surface of the sensor. Obtaining the projective function for other curved \textit{GelSight} sensors should be similar, requiring only sensor-specific adaptations. The camera is assumed to be placed at the origin of the reference frame, looking in the direction of the $z$-axis. The sensor space takes the center of its base, which is also the center point of the camera, as the coordinate origin $(0,0,0)$; the image space takes the center of the image as the origin $(0,0)$. Such a projection model is necessary for, among other applications, detecting the position of contacts on the 3D sensor surface. As illustrated in Figure~\ref{fig:sensor_innerworkings_and_model}~(B), the sensor surface can be modeled as a semi-sphere joined to an open cylinder, both sharing the same radius $r$.
The central axis of the cylindrical surface and the $z$-axis are collinear; therefore, the center point of the semi-sphere can be set to $(0,0,d)$, where $d$ is the distance from the center point of the base of the semi-sphere to the center point of the base of the sensor. The location of any point on the sensor surface $(x,y,z)$ can be represented as follows: \begin{numcases}{} x^2 + y^2 + (z - d)^2 = r^2 & \text{for $z > d$} \\ x^2 + y^2 = r^2 & \text{for $z \leq d$} \end{numcases} By making the usual thin lens assumptions, the optical sensor is modeled as an ideal pinhole camera. The projective transformation that maps a point in the world space $ P $ into a point in the tactile image $P'$ can be defined using the general camera model~\cite{szeliski2011computer} as: \begin{align} \label{eq:camera_model} P' &= K [\vect{R} | \vect{t}] P \\ K &= \begin{bmatrix} \begin{array}{llll} fk & 0 & c_x & 0 \\ 0 & fl & c_y & 0 \\ 0 & 0 & 1 & 0 \\ \end{array} \end{bmatrix} \end{align} where $P'=[x'z , y'z, z]^T$ is an image pixel and $P=[x,y,z,1]^T$ is a point in space, both represented in homogeneous coordinates here, $[R|t]$ is the camera's extrinsic matrix that encodes the rotation $ R $ and translation $ t $ of the camera, $K$ is the camera intrinsic matrix ($f$ is the focal length; $k$ and $l$ are the pixel-to-meters ratios; $c_x$ and $c_y$ are the offsets in the image frame). Assuming that the used camera produces square pixels, i.e., $k = l$, $fk$ and $fl$ can be replaced by $\alpha$, for mathematical convenience. The orthogonal projections onto the $XZ$ and $YZ$ planes of a generic projection ray can be obtained by expanding the matrix multiplication given by Equation~\ref{eq:camera_model} and solving it w.r.t. $x$ and $y$: \begin{align} \systeme*{ x'z = \alpha x + c_xz, y'z = \alpha y + c_yz, z = z } \Leftrightarrow \systeme*{ \alpha x = x'z - c_xz, \alpha y = y'z - c_yz } \Leftrightarrow \systeme*{ x = (\frac{x' - c_x}{\alpha})z, y = (\frac{y' - c_y}{\alpha})z } \label{eq:proj_ray} \end{align} The desired mapping function $m: (x',y') \rightarrow (x,y,z) $ can then be obtained by constraining the $z$ coordinate through the intersection of the generic projection ray with the sensor surface, described in Equation~\ref{eq:function_f}, where $\chi=x'-c_x$ and $\gamma=y'-c_y$ and $\omega=\chi^2+\gamma^2$. The discontinuity region, i.e., a circumference, is found by setting $z=d$ in Equation \ref{eq:proj_ray}; rays with $\omega < (\frac{r\alpha}{d})^2$ intersect the semi-spherical tip, while the remaining rays intersect the cylindrical body: \begin{equation} m(x',y') = \left\{ \begin{array}{ll} x &= \frac{\chi}{\alpha}z \\ y &= \frac{\gamma}{\alpha}z \\ z &= \left\{ \begin{array}{ll} \frac{2d{\alpha^2} + \sqrt{{(2d{\alpha^2})}^2 - 4{(\omega + \alpha^2)}{(d^2 - r^2)}{\alpha^2}}}{2{(\omega + \alpha^2)}} & \mbox{if } \omega < (\frac{r\alpha}{d})^2 \\ \quad \\ \sqrt{\frac{(r\alpha)^2}{\omega}} & \mbox{otherwise} \end{array} \right. \\ \quad \end{array} \right. \label{eq:function_f} \end{equation} The introduced sensor model is validated and visualised in Figure~\ref{fig:sim_model}. Two projection rays, corresponding to the spherical and cylindrical regions, are depicted. Each ray intersects three relevant points: the frame of reference origin, the point in the 3D sensor surface, and the corresponding projected point in the image plane. \begin{figure} \centering \includegraphics[width=.5\linewidth]{sim_model.png} \\ \caption{Two projection rays that correspond to the spherical (in red) and cylindrical (in navy blue) regions are depicted in the figure.
Each ray intersects three relevant points: the frame of reference origin, a point in the sensor surface and the corresponding projected point in the image plane.} \label{fig:sim_model} \end{figure} \subsection{Fabrication process} \label{subsec:fabrication_process} As for any other optical tactile sensor, the fabrication of the GelTip sensor is essentially the fabrication of its sensing membrane. It requires the fabrication of three parts: the bottom rigid layer, the deformable elastomer, and the coat of paint. For constructing the rigid layer, a flat sheet of transparent acrylic can be used, in the case of flat sensors \cite{GelSight2017GeometrySlip}. However, for finger-shaped sensors, a curved rigid surface is necessary. In the case of the \textit{GelTip}, a simple off-the-shelf transparent test tube is used. These commercially available tubes are made of plastic/acrylic or glass, and one disadvantage of using such test tubes, particularly the plastic ones, is that they contain small imperfections that result from the manufacturing process. An alternative approach is to print the rigid tube using a stereolithography 3D-printer and clear resin; however, proper polishing is necessary to ensure its optical transparency \cite{br2020soft}. To fabricate the elastomer in the desired shape, a mold can be created, for instance by 3D printing. Fused Filament Fabrication (FFF) and Stereolithography (SLA) printers yield differently textured surfaces and, consequently, differently textured elastomers. Example 3D-printed parts are shown in Figure~\ref{fig:sensor_fabrication_proc}-(C). The soft elastomer is then created by mixing a two-component silicone, such as XP-565. Mixing these two parts in different ratios yields elastomers with different elastic properties. Additional additives can also be considered, such as Slacker, for increasing the silicone tackiness. A mixture commonly used for the \textit{GelTip} is 1 gram of XP-565 part-A, 22 grams of XP-565 part-B and 22 grams of the Slacker. For painting the transparent elastomer, off-the-shelf spray paints tend to form a rigid coat, and cracks will develop in the coat when the elastomer deforms or stretches. To avoid these issues, custom paint can be fabricated and applied using a paint gun or an airbrush \cite{GelSight2017}. Pigment powder is mixed with a small portion of \textit{part-A} and \textit{part-B} of XP-565, with the same ratio as used in the elastomer. The paint pigment commonly consists of aluminium powder (\SI{1}{\micro\metre}); however, other options can also be considered. After mixing them properly, the mixture is dissolved using a silicone solvent until a watery liquid is achieved, which can then be sprayed onto the elastomer. Finally, the three sets of LEDs can be soldered: either of different colors (red, green and blue) or all white. Since different LEDs emit different light intensities, each cluster is preceded by an independent resistor. Power can be drawn either from the camera or from an external source, e.g., by adding a secondary USB cable. At the core of any optical tactile sensor, a camera is installed. In the case of the \textit{GelTip}, a wide-angle lens is used, enabling the recording of the internal surface of the entire finger. \begin{figure} \centering \includegraphics[width=\linewidth]{gelsight_fabrication_complete.png} \\ \caption{\textbf{(A)} Exploded view of the GelTip tactile sensor design. % \textbf{(B)} A GelTip sensor, next to a British one pound coin, for relative size comparison.
The sensor has a length of approximately \SI{10}{\cm}; its shell has a diameter of \SI{2.8}{\cm}; and the tactile membrane has a length of \SI{4}{\cm} and a diameter of \SI{2}{\cm}. % \textbf{(C)} The three-part \textit{mold} next to the remaining parts used in the GelTip construction. \textbf{(D)} The plastic tube is inserted into the sleeve and then mounted onto the mold; afterwards, the tube is measured and trimmed, and then the elastomer is poured. \textbf{(E)} The tactile membrane after being de-molded and before being painted. } \label{fig:sensor_fabrication_proc} \end{figure} \section{Evaluation} \label{sec:experimentresults} In this section, we look into two sets of experiments carried out using a \textit{GelTip} sensor. \review{The first set of experiments demonstrates how an image-based tactile sensor can be used to localise contacts; the second set of experiments illustrates the advantages of leveraging all-around touch sensing for dexterous manipulation. Video recordings of these experiments and the CAD models for 3D printing the \textit{GelTip} sensor can be found at https://danfergo.github.io/geltip/}. \subsection{Contact localisation} \label{subsec:contact_detection} For the contact localisation experiment, a set of seven small objects is 3D printed, each with a maximum size of $1\times1\times2$ cm$^3$. The objects are shown in Table~\ref{table:contact_errors_per_object}. A 3D printed mount is also built, and placed on top of a raised surface, ensuring that all the objects are kept in the same position throughout the experiment. \textit{GelTip} sensors are installed on a robotic actuator, i.e., the 6-DoF Universal Robots UR5 arm with a Robotiq 2F-85 gripper. The actuator rotates and translates to tap the objects at multiple known positions on the surface of one of its fingers, as illustrated in Figure~\ref{fig:experiment_contact_detection}. The actuator starts with its fingers pointing downwards, i.e., orientation 0, and is visually aligned with the cone object. Contacts are then registered, firstly on the sensor tip, by rotating the sensor, and then on the side, by translating the sensor. In Figure~\ref{fig:experiment_contact_detection}~(B) markings show the location of such contacts and in Figure~\ref{fig:experiment_contact_detection}~(C) the necessary ($\Delta x$, $\Delta z$) translation to obtain contacts on the finger skin is also shown. To use the projection model described in Section~\ref{sec:sensormodel}, five parameters need to be known: $r$, $d$, $c_x$, $c_y$ and $\alpha$. The first two are extracted from the dimensions of the sensor design; however, the latter three are the intrinsic parameters of the camera, which need to be calibrated. To this end, we obtain such parameters from a known pair of corresponding $(x',y')$ and $(x,y,z)$ points. We set the actuator to tap the object in the \SI{15}{\mm} translation position. The center of the sensor tip ($c_x, c_y$) and the contacted point are manually annotated in the image space. The $\alpha$ parameter can then be derived by fitting the known information into Equation \ref{eq:proj_ray}. After detecting the contact in the image space and projecting it into $(x,y,z)$ coordinates, the Euclidean distance between the predicted and the true contact positions is computed. For each of the seven objects, a total of eight contacts are recorded, i.e., four rotations ($\theta$): $0$, $\pi/6$, $\pi/4$, $\pi/3$; and four translations ($\tau$): \SI{0}{\mm}, \SI{5}{\mm}, \SI{10}{\mm}, \SI{15}{\mm}.
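To make this procedure concrete, the following minimal Python sketch (our illustration rather than the code used in the experiments; the function names and the one-point calibration of $\alpha$ are assumptions made only for the example) implements the pixel-to-surface mapping $m$ of Equation~\ref{eq:function_f} together with the derivation of $\alpha$ from a single annotated correspondence. In practice, $\alpha$ would preferably be estimated from several correspondences, e.g., by least squares.

\begin{verbatim}
import math

def project_pixel_to_surface(xp, yp, r, d, cx, cy, alpha):
    # Map an image pixel (xp, yp) to a 3-D point on the sensor surface,
    # following the projective model of Equation (eq:function_f).
    chi, gamma = xp - cx, yp - cy
    omega = chi ** 2 + gamma ** 2
    if omega < (r * alpha / d) ** 2:
        # The ray hits the semi-spherical tip: solve the quadratic in z.
        disc = (2 * d * alpha ** 2) ** 2 \
               - 4 * (omega + alpha ** 2) * (d ** 2 - r ** 2) * alpha ** 2
        z = (2 * d * alpha ** 2 + math.sqrt(disc)) / (2 * (omega + alpha ** 2))
    else:
        # The ray hits the cylindrical body, where x^2 + y^2 = r^2.
        z = r * alpha / math.sqrt(omega)
    return (chi / alpha * z, gamma / alpha * z, z)

def calibrate_alpha(xp, yp, x, y, z, cx, cy):
    # One-point estimate of alpha from a known pixel/surface correspondence,
    # using chi = alpha*x/z and gamma = alpha*y/z (Equation eq:proj_ray).
    # Assumes the annotated contact is not exactly at the tip (x = y = 0).
    chi, gamma = xp - cx, yp - cy
    return z * math.hypot(chi, gamma) / math.hypot(x, y)
\end{verbatim}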
The resulting localisation errors between the observed and the true contact positions, expressed in millimeters, are summarised in Table~\ref{table:contact_errors_per_position}. Overall, the variance of the localisation errors is large: in some contacts the obtained errors are lower than \SI{1}{\mm}, while in others they are over \SI{1}{\cm}. On the other hand, the localisation error, for each object or position, is correlated with its variance. The largest localisation errors happen on objects with large or rounded tops, i.e., sphere, edge and slab; contrariwise, the lowest errors are observed for objects with sharp tops, i.e., cone, tube and cylinder. In terms of the localisation errors at different positions, contacts happening near the sensor tip, i.e., the rotations, present lower errors than contacts happening on the sensor side, i.e., translations. In particular, contacts happening at $\pi/4$ and $\pi/6$ have the lowest errors. \review{From these experiments, we find three main challenges in using a finger-shaped sensor, such as the \textit{GelTip}, for contact localisation: 1) weak imprints, created by light contacts, may not be captured by the localisation algorithm; 2) forces perpendicular to the main axis of the sensor may flex the sensor tip, resulting in localisation errors; and 3) imperfections in the sensor modeling and calibration further contribute to these localisation inaccuracies. Examples of captured tactile images and corresponding predictions for the smallest (i.e., $<Cone, \theta=0>$) and largest localisation errors (i.e., $<Slab, \tau=\SI{15}{\mm}>$) are shown in Figure~\ref{fig:contact_localisation_samples}. In the first case, due to the bright imprint provided by the sharp cone top, the algorithm successfully locates the contact. In the second case, due to the imperceptible contact imprint, the algorithm incorrectly predicts the contact to be at the sensor tip.} \begin{figure} \centering \includegraphics[width=\linewidth]{exp_contact_detection.png} \caption{ \textbf{(A)} Two GelTip sensors are installed on a robotic actuator and a 3D printed mount that holds a small 3D printed shape (a cylinder here) placed on top of a wooden block. The actuator moves in small increments and collects tactile images annotated with the known contact positions. \textbf{(B, C)} Illustration of the motion of the sensor during the data collection. The sensor starts pointing downwards, as shown in \textbf{(B)}. To obtain contacts on the sensor surface, while moving, the sensor is also translated by ($\Delta x$,~$\Delta z$), as shown in \textbf{(C)}.
A total of eight contacts are collected per object: four rotations ($\theta$) on the sensor tip and four translations ($\tau$) on the sensor side, as highlighted in \textbf{(B)}.} \label{fig:experiment_contact_detection} \end{figure} \begin{table} \setlength{\tabcolsep}{3.9pt} \renewcommand{\arraystretch}{2.2} \centering \caption{Contact errors per position, expressed in millimeters} \label{table:contact_errors_per_position} \begin{tabular}{ccccccccc} \Xhline{3\arrayrulewidth} \multicolumn{4}{c}{\textbf{ROTATIONS}} & \multicolumn{4}{c}{\textbf{TRANSLATIONS}} \\[-1.5ex] $0$ & $\pi/6$ & $\pi/4$ & $\pi/3$ & 0 & 5 & 10 & 15 \\ \Xhline{1.5\arrayrulewidth} \makecell{$4.71$\\$\pm0.75$} & \makecell{$2.01$\\$\pm0.90$} & \makecell{$1.04$\\$\pm0.46$} & \makecell{$6.96$\\$\pm4.82$} & \makecell{$7.87$\\$\pm5.08$} & \makecell{$8.03$\\$\pm1.92$} & \makecell{$7.55$\\$\pm5.00$} & \makecell{$4.86$\\$\pm8.41$} \\ \Xhline{3\arrayrulewidth} \end{tabular} \end{table} \begin{table} \setlength{\tabcolsep}{5.2pt} \renewcommand{\arraystretch}{2.2} \centering \caption{Contact errors per object, expressed in millimeters} \label{table:contact_errors_per_object} \begin{tabular}{cccccccccc} \Xhline{3\arrayrulewidth} \pbox{0.093\linewidth}{\includegraphics[width=0.95\linewidth]{solids/cone.png} \\ \centering\textbf{Cone} }&% \pbox{0.096\linewidth}{\includegraphics[width=\linewidth]{solids/sphere.png} \\ \centering\textbf{Sphere} }&% \pbox{0.12\linewidth}{\includegraphics[width=0.8\linewidth]{solids/irregular.png} \\ \centering\textbf{Irregular} }&% \pbox{0.094\linewidth}{\includegraphics[width=\linewidth]{solids/cylinder.png} \\ \centering\textbf{Cylinder} }&% \pbox{0.093\linewidth}{\includegraphics[width=\linewidth]{solids/edge.png} \\ \centering\textbf{Edge} }&% \pbox{0.093\linewidth}{\includegraphics[width=\linewidth]{solids/tube.png} \\ \centering\textbf{Tube} }&% \pbox{0.093\linewidth}{\includegraphics[width=0.9\linewidth]{solids/slab.png} \\ \centering\textbf{Slab} } \\ \Xhline{1.5\arrayrulewidth} \makecell{$3.63$\\$\pm3.26$} & \makecell{$6.79$\\$\pm5.38$} & \makecell{$5.61$\\$\pm4.08$} & \makecell{$4.57$\\$\pm4.30$} & \makecell{$7.47$\\$\pm6.29$} & \makecell{$3.33$\\$\pm1.90$} & \makecell{$6.27$\\$\pm8.17$} \\ \Xhline{3\arrayrulewidth} \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{contact_samples.png} \\ \caption{\review{Reference and in-contact frames for two evaluated contacts. The expected contact region is highlighted with a dashed circumference, the predicted contact position with a yellow circle and the axis where the contacts occur with dotted lines. The top row shows the contact for the $<Cone, \theta=0>$ that results in the smallest error, and the bottom row shows the $<Slab, \tau=\SI{15}{\mm}>$ that results in the largest error.}} \label{fig:contact_localisation_samples} \end{figure} \subsection{Touch-guided grasping in a Blocks World environment} \label{sec:exp_blocks_world} In the task of grasping objects, the initial motion of the gripper is often planned using remote sensing, e.g., camera vision or Lidar. However, remote sensing suffers from occlusions and inaccurate predictions about geometry and distances. In such cases, the final grasp and re-grasp control policies have to rely on inaccurate information about where and when contacts occur. In contrast, touch sensing offers accurate feedback about such contacts.
We can clearly verify the importance of considering touch sensing by studying two different policies, i.e., \textbf{\textit{Random grasp}} (Rg) and \textbf{\textit{Random grasp + Touch informed re-grasp}} (RgTr), in a simple grasp experiment. In this experiment, the robot is presented with a $4 \times 4$ board with one wooden block placed in each row, in a column unknown to the robot. The robot attempts to grasp each block, and if it fails to grasp one block after 5 attempts, it is considered a failure, and it skips to the next one, as shown in Figure~\ref{fig:exp_blocks_world}. Here, the random grasp in Rg and RgTr mimics the inaccuracies of remote-sensing-based grasping, by sampling the block position randomly. The touch-informed re-grasp in RgTr mimics an adaptation carried out by touch sensing, by sensing possible collisions and moving the gripper towards the column in which the contact is detected. A \textbf{\textit{Control policy}} (C) can also be implemented for reference. In this case, the agent always knows the position of each block and consequently always moves directly towards it. The results of this experiment are summarised in Table~\ref{table:grasping_results}, after executing these policies 5 times. As can be seen, in all the measured metrics, RgTr is a more successful policy than Rg. For instance, Rg fails to grasp 20\% of the blocks, i.e., on average one block is left on the board at the end of each run. In contrast, with the RgTr policy all the blocks are grasped, resulting in a failure rate of 0\%. Similarly, both the average number of attempts and the average number of collisions per block with the RgTr policy are also lower than with the Rg policy, i.e., $1.85$ and $0.55$ \textit{versus} $3.30$ and $1.45$. This difference in performance is justified by the fact that in the case of RgTr, once a collision occurs, the re-grasp policy ensures that the grasp attempt is successful. If the grasp position is sampled randomly, there is a success chance of $1/4$ for each grasping attempt. In contrast, with the touch feedback enabled, this chance jumps to $2.5/4$ on average. As a consequence, the RgTr policy finds a successful grasp more quickly and thus grasps more blocks within the maximum limit of 5 attempts. This experiment shows that sensing contacts outside the grasp closure offers an important capability for improving the success chance of a given grasp attempt. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{exp_blocks_world.jpg} \\ \caption{The Blocks World experimental setup. In each experiment, the robot actuator moves row by row, attempting to grasp each block. The experiment shows that, even with the initial uncertainty, the robot grasps all the blocks successfully, using the all-around touch feedback.} \label{fig:exp_blocks_world} \end{figure} \begin{table} \centering \caption{The table summarises the percentage of blocks that failed to be grasped (failure rate), and the average number of attempts and collisions per block over all the grasping attempts ($4 \times 5$).
It can be noted that the \textit{Random Grasp + Touch Informed Regrasp} policy outperforms \textit{Random Grasp} in all three metrics, i.e., it obtains a lower failure rate, fewer attempts and fewer collisions.} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|L{3.5cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Policies & Failure rate & \# of attempts per block & \# of collisions per block \\ \hline\hline \textbf{Control} & 0\% & 1 & 0 \\ \hline \textbf{Random Grasp} & 20\% & 3.30 & 1.45 \\ \hline \textbf{Random Grasp + Touch Informed Regrasp} & 0\% & 1.85 & 0.55 \\ \hline \end{tabular} \label{table:grasping_results} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{discussion.png} \\ \caption{\textbf{(A)} Tactile images captured using our proposed GelTip sensor. From left to right, top to bottom: a fingerprint pressed against the tip of the sensor, two fingerprints on the sides, an open-cylinder shape being pressed against a side of the sensor and the same object being pressed against the corner of the tip, all highlighted in red circles. \textbf{(B)} A plastic strawberry being grasped by a parallel gripper equipped with two GelTip sensors, with the corresponding imprint highlighted in the obtained tactile image (in gray-scale).} \label{fig:sensor_demo} \end{figure} \section{Conclusions and Discussions} \label{sec:conclusion} In this chapter, we have reviewed tactile sensors for robot grasping and manipulation, and highlighted our proposed GelTip sensor, which can detect contacts all around the robot finger. As illustrated in Figure~\ref{fig:sensor_demo}, it can capture fine ridges of human fingerprints and the fine texture of a plastic strawberry. The grasping experiments in the \textit{Blocks World} environment show the potential of the all-around finger sensing in facilitating dynamic manipulation tasks. In our future research, we will introduce imprinted markers to the GelTip sensor to track the force fields. The use of the GelTip sensor in manipulation tasks, such as grasping in cluttered environments, will also be of interest to us. Compared to the GelSight sensors~\cite{GelSightSmallParts, GelSight2017}, due to the finger-shaped design of the sensor, the light distribution throughout the sensor's internal surface is no longer homogeneous. Specifically, a brightly illuminated ring can be observed near the discontinuity region (see Figure~\ref{fig:sensor_innerworkings_and_model}~(B)). Shadows can also be observed in the bottom-left sample of Figure~\ref{fig:sensor_demo}~(A) when contacts of large pressure are applied, due to the placement of the camera and light sources. This may pose a challenge to geometry reconstruction using the Poisson reconstruction method \cite{GelSightSmallParts, GelSight2017, br2020soft}, which builds a fixed mapping from pixel intensities to surface orientations and requires carefully placed RGB LEDs. In future research, Convolutional Neural Networks could be used for geometry reconstruction of the GelTip sensor. \section*{ACKNOWLEDGMENT} This work was supported by the EPSRC project ``ViTac: Visual-Tactile Synergy for Handling Flexible Materials'' (EP/T033517/1).
\section{Introduction}\label{sec:intro} In 1935, Erd\H os and Szekeres \cite{es35} proved that any sequence of $n$ distinct real numbers contains a monotone subsequence of length at least $\sqrt{n}$. This is a classical result in combinatorics and its generalizations and extensions have many important consequences in geometry, probability, and computer science. See Steele \cite{St} for 7 different proofs along with several applications. In this paper, we prove a positive fraction version of the Erd\H os-Szekeres theorem. We state this theorem using the following notion: A sequence $(a_1, a_2,\dots,a_{ks})$ of $ks$ distinct real numbers is said to be \emph{block-increasing (block-decreasing)} with \emph{depth} $k$ and \emph{block-size} $s$ if every subsequence $(a_{i_1},a_{i_2},\dots, a_{i_k})$, for $(j-1)s<i_j\leq js$, is increasing (decreasing). We call a sequence \emph{block-monotone} if it's either block-increasing or block-decreasing. \begin{theorem}\label{main} Let $k$ and $n > (k-1)^2$ be positive integers. Then every sequence of $n$ distinct real numbers contains a block-monotone subsequence of depth $k$ and block-size $s = \Omega(n/k^2)$. Furthermore, such a subsequence can be computed within $O(n^2\log n)$ time. \end{theorem} \noindent We prove Theorem \ref{main} by establishing a more general Ramsey-type result for monotone paths, which we describe in detail in the next section. The theorem is also asymptotically best possible; see Remark~\ref{extreme_construction_remark}. By a repeated application of Theorem \ref{main}, we can decompose any $n$-element sequence of real numbers into $O(k\log n)$ block-monotone subsequences of depth $k$ upon deleting at most $(k-1)^2$ entries. Our next result shows that we can obtain such a partition, where the number of parts doesn't depend on $n$. \begin{theorem}\label{partition} For any positive integer $k$, every finite sequence of distinct real numbers can be partitioned into at most $O(k^2\log k)$ block-monotone subsequences of depth at least $k$ upon deleting at most $(k-1)^2$ entries. \end{theorem} \noindent Our proof of Theorem~\ref{partition} is constructive and implies an algorithm for the claimed partition whose time complexity is polynomial in $k$ and $n$, where $n$ is the length of the given sequence. Our Theorem~\ref{partition} is inspired by a similar problem of partitioning planar point sets into convex-positioned clusters, which is studied in \cite{por2002partitioned}. A positive fraction Erd\H os-Szekeres-type result for convex polygons was given previously by B{\'a}r{\'a}ny and Valtr \cite{barany1998positive}. We give two applications of Theorems~\ref{main}~and~\ref{partition}. \medskip \noindent \emph{Mutually avoiding sets.} Let $A$ and $B$ be finite point sets of $\mathbb{R}^2$ in \emph{general position}, that is, no three points are collinear. We say that $A$ and $B$ are \emph{mutually avoiding} if no line generated by a pair of points in $A$ intersects the convex hull of $B$, and vice versa. Aronov et al.~\cite{aronov1991crossing} used the Erd\H os-Szekeres Theorem to show that every $n$-element planar point set $P$ in general position contains subsets $A,B\subset P$, each of size $\Omega(\sqrt{n})$, s.t. $A$ and $B$ are mutually avoiding. Valtr \cite{valtr1997mutually} showed that this bound is asymptotically best possible by slightly perturbing the points in a $\sqrt{n}\times \sqrt{n}$ grid. Following the same ideas as Aronov et al., we can use Theorem~\ref{main} to obtain the following.
\begin{theorem} \label{avoiding}For every positive integer $k$ there is a constant $\epsilon_k=\Omega(\frac{1}{k^2})$ s.t. every sufficiently large point set $P$ in the plane in general position contains $2k$ disjoint subsets $A_1,\dots,A_k,B_1,\dots,B_k$, each of size at least $\epsilon_k|P|$, s.t. every pair of sets $A=\{a_1,\dots,a_k\}$ and $B=\{b_1,\dots,b_k\}$, with $a_i\in A_i$ and $b_i\in B_i$, is mutually avoiding. \end{theorem} \noindent This improves an earlier result of Mirzaei and the first author \cite{mirzaei2020positive}, who proved the theorem above with $\epsilon_k=\Omega(\frac{1}{k^4})$. The result above is asymptotically best possible for both $k$ and $|P|$: Consider a $k\times k$ grid $G$ and replace each point with a cluster of $|P|/k^2$ points placed very close to each other so that the resulting point set $P$ is in general position. If we can find subsets $A_i$'s and $B_i$'s as in Theorem~\ref{avoiding}, but each of size $\epsilon'_k |P|$ with $\epsilon'_k=\omega(\frac{1}{k^2})$, then we can find mutually avoiding subsets in $G$ of size $\omega(k)$, contradicting Valtr's result \cite{valtr1997mutually}. Finally, let us remark that a recent result due to Pach, Rubin, and Tardos \cite{pach2021planar} shows that every $n$-element planar point set in general position determines at least $n/e^{O(\sqrt{\log n})}$ pairwise crossing segments. By using Theorem~\ref{avoiding} instead of Lemma~3.3 from their paper, one can improve the constant hidden in the $O$-notation. \medskip \noindent\emph{Monotone biarc diagrams.} A \emph{proper arc diagram} is a drawing of a graph in the plane, whose vertices are points placed on the $x$-axis, called the \emph{spine}, and each edge is drawn as a half-circle. A classic result of Bernhart and Kainen \cite{BH} shows that a planar graph admits a \emph{planar} proper arc diagram if and only if it's a subgraph of a planar Hamiltonian graph. A \emph{monotone biarc diagram} is a drawing of a graph in the plane, whose vertices are placed on a spine, and each edge is drawn either as a half-circle or two half-circles centered on the spine, forming a continuous $x$-monotone biarc. See Figure~\ref{fig:biarc_example} for an illustration. In \cite{di2005curve}, Di Giacomo et al. showed that every planar graph can be drawn as a \emph{planar} monotone biarc diagram. Using the Erd\H os-Szekeres Theorem, Bar-Yehuda and Fogel \cite{yehuda1998partitioning} showed that every graph $G=(V,E)$, with a given order on $V$, has a \emph{double-paged book embedding} with at most $O(\sqrt{|E|})$ pages. That is, $E$ can be partitioned into $O(\sqrt{|E|})$ parts, s.t. for each part $E_i$, $(V,E_i)$ can be drawn as a planar monotone biarc diagram, and $V$ appears on the spine with the given order. Our next result shows that we can significantly reduce the number of pages (parts), if we allow a small fraction of the pairs of edges to cross on each page. \begin{theorem}\label{biarc} For any $\epsilon>0$ and a graph $G=(V,E)$, where $V$ is an ordered set, $E$ can be partitioned into $O(\epsilon^{-2}\log(\epsilon^{-1})\log(|E|))$ subsets $E_i$ s.t. each $(V,E_i)$ can be drawn as a monotone biarc diagram having no more than $\epsilon|E_i|^2$ crossing edge-pairs, and $V$ appears on the spine with the given order. \end{theorem} This paper is organized as follows: In Section~\ref{sec:main_proof}, we prove Theorem~\ref{main} in the setting of monotone paths in multicolored ordered graphs. Section~\ref{sec:partition} is devoted to the proof of Theorem~\ref{partition}.
In Section~\ref{sec:applications}, we present proofs for the applications claimed above. Section~\ref{sec:remarks} lists some remarks. \section{A positive fraction result for monotone paths}\label{sec:main_proof} Several authors \cite{FPSS,MS,MSW} observed that the Erd\H os-Szekeres theorem generalizes to the following graph-theoretic setting. Let $G$ be a graph with vertex set $[n] = \{1,\ldots ,n\}$. A \emph{monotone path of length} $k$ in $G$ is a $k$-tuple $(v_1,\dots, v_k)$ of vertices s.t. $v_i<v_j$ for all $i<j$ and all edges $v_iv_{i + 1}$, for $i \in [k-1]$, are in $G$. \begin{theorem}\label{espath} Let $\chi$ be a $q$-coloring of the pairs of $[n]$. Then there must be a monochromatic monotone path of length at least $n^{1/q}$. \end{theorem} Given subsets $A,B\subset [n]$, we write $A < B$ if every element in $A$ is less than every element in $B$. \begin{definition} Let $G$ be a graph with vertex set $[n]$ and let $V_1,\ldots, V_k \subset [n]$ and $p_1,\ldots, p_{k + 1} \in [n]$. Then we say that $(p_1,V_1,p_2,V_2,p_3,\ldots,p_{k}, V_k,p_{k + 1})$ is a block-monotone path of depth $k$ and block-size $s$ if \begin{enumerate} \item $|V_i| = s$ for all $i$, \item we have $p_1 < V_1 < p_2 < V_2 < p_3 < \ldots < p_k < V_k < p_{k + 1},$ \item and every $(2k + 1)$-tuple of the form\begin{align*} (p_1,v_1,p_2,v_2,\ldots, p_k,v_k,p_{k + 1}), \end{align*} where $v_i \in V_i$, is a monotone path in $G$. \end{enumerate} \end{definition} \noindent Our main result in this section is the following Ramsey-type theorem. \begin{theorem}\label{path} There is an absolute constant $c>0$ s.t. the following holds. Given integers $q\geq 2$, $k \geq 1$, and $n \geq (ck)^q$, let $\chi$ be a $q$-coloring of the pairs of $[n]$. Then $\chi$ produces a monochromatic block-monotone path of depth $k$ and block-size $s \geq \frac{n}{(ck)^q}$. \end{theorem} \noindent A careful calculation shows that we can take $c = 40$ in the theorem above. We will need the following lemma. \begin{lemma}\label{2path} Let $q\geq 2$ and $N > 3^{q}$. Then for any $q$-coloring of the pairs of $[N]$, there is a monochromatic block-monotone path of depth $1$ and block-size $s\geq \frac{N}{q3^{3q}}$. \end{lemma} \begin{proof} Let $\chi$ be a $q$-coloring of the pairs of $[N]$, and set $r = 3^q$. By Theorem \ref{espath}, every subset of size $r$ of $[N]$ gives rise to a monochromatic monotone path of length 3. Hence, $\chi$ produces at least\begin{align*} \frac{\binom{N}{r}}{\binom{N-3}{r-3}} \geq \frac{6}{r^3}\binom{N}{3} \end{align*} monochromatic monotone paths of length $3$ in $[N]$. Hence, by the pigeonhole principle, there are at least $\frac{6}{qr^3}\binom{N}{3}$ monochromatic monotone paths of length 3, all of which have the same color. By averaging, there are two vertices $p_1,p_2 \in [N]$, s.t. at least $\frac{N}{qr^3}$ of these monochromatic monotone paths of length 3 start at vertex $p_1$ and end at vertex $p_2$. By setting $V_1$ to be the ``middle'' vertices of these paths, $(p_1,V_1,p_2)$ is a monochromatic block-monotone path of depth $1$ and block-size $s\geq \frac{N}{qr^3} = \frac{N}{q3^{3q}}$.\end{proof} \begin{proof}[Proof of Theorem \ref{path}] Let $\chi$ be a $q$-coloring of the pairs of $[n]$ and let $c$ be a sufficiently large constant that will be determined later. Set $s = \lceil\frac{n}{(ck)^q}\rceil$. For the sake of contradiction, suppose $\chi$ does not produce a monochromatic block-monotone path of depth $k$ and block-size $s$.
For each element $v \in [n]$, we label $v$ with $f(v) = (b_1,\ldots, b_q)$, where $b_i$ denotes the depth of the longest block-monotone path with block-size $s$ in color $i$, ending at $v$. By our assumption, we have $0\leq b_i \leq k-1$, which implies that there are at most $k^q$ distinct labels. By the pigeonhole principle, there is a subset $V\subset [n]$ of size at least $n/k^q$, s.t. the elements of $V$ all have the same label. By Lemma \ref{2path}, there are vertices $p_1,p_2 \in V$, a subset $V'\subset V$, and a color $\alpha$ s.t. $(p_1,V',p_2)$ is a monochromatic block-monotone path in color $\alpha$, with block-size $t \geq \frac{|V|}{q3^{3q}}.$ By setting $c$ to be sufficiently large, we have\begin{align*} t \geq \frac{|V|}{q3^{3q}} \geq \frac{n}{k^qq3^{3q}} \geq \left\lceil\frac{n}{(ck)^q}\right\rceil = s. \end{align*} However, this contradicts the fact that $f(p_1) = f(p_2)$, since the longest block-monotone path with block-size $s$ in color $\alpha$ ending at vertex $p_1$ can be extended to a longer one ending at $p_2$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] Let $A=(a_1,\dots,a_n)$ be a sequence of distinct real numbers. Let $\chi$ be a red/blue coloring of the pairs of $A$ s.t. for $i<j$, we have $\chi(a_i,a_j) = $ red if $a_i < a_j$ and $\chi(a_i,a_j) = $ blue if $a_i > a_j$. In other words, we color the increasing pairs by red and the decreasing pairs by blue. If $n< (ck)^2$, notice that $n/(ck)^2<1$. By our assumption $n>(k-1)^2$, the classical Erd{\H o}s-Szekeres theorem gives us a monotone subsequence in $A$ of length at least $k$, which can be regarded as a block-monotone subsequence of depth at least $k$ and block-size $s=1>n/(ck)^2$. If $n\geq (ck)^2$, by Theorem~\ref{path}, there is a monochromatic block-monotone path of depth $k$ and block-size $s \geq n/(ck)^2$ in the complete graph on $A$, which can be regarded as a block-monotone subsequence of $A$ with the claimed depth and block-size. Now we focus on computing such a block-monotone subsequence. If $n< (ck)^2$, it suffices to compute the longest monotone subsequence of $A$. It's well-known that the longest increasing subsequence can be computed within $O(n\log n)$ time, see \cite{fredman1975computing}, so we are done with this case. If $n\geq (ck)^2$, we set $s=\lceil n/(ck)^2 \rceil$. We call a pair $(a_i,a_j)$ \emph{$s$-gapped} if there exist $s$ other entries $a_x$ with $i<x<j$ satisfying $a_i<a_x<a_j$ or $a_i>a_x>a_j$. We describe an $O(n^2\log n)$-time algorithm that computes the longest increasing subsequence with consecutive entries $s$-gapped. Firstly, we preprocess $A$ into a data structure s.t. we can answer within $O(\log n)$ time whether any given pair $(a_i,a_j)$ is $s$-gapped or not. The classical data structure for 2-D orthogonal range counting works for our purpose and its preprocessing time is $O(n\log n)$, see Exercise~5.10 in \cite{deberg2008computational}. Next, let $l(i)$ be the length of the longest increasing subsequence of $A$ with consecutive entries $s$-gapped ending at $a_i$. We compute each $l(i)$ as $i$ proceeds from $1$ to $n$ as follows: After $l(1),\dots, l(i-1)$ are all determined, we have\begin{align*} l(i)=\max_{j<i}\{l(j);a_j<a_i\text{ and }(a_j,a_i)\text{ is $s$-gapped}\}+1. \end{align*}Here, we consider $\max(\emptyset):=0$. Hence we can compute $l(i)$ by checking which pairs in $\{(a_j,a_i);j<i\}$ are $s$-gapped using our preprocessed data structure. Clearly, this computation of all $l(i)$ takes $O(n^2\log n)$ time.
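For illustration, the following Python sketch implements this dynamic program (the sketch is ours and not part of the original algorithm description; it replaces the 2-D orthogonal range counting structure with a naive $O(n)$ gap test purely for brevity, so it runs in $O(n^3)$ rather than $O(n^2\log n)$ time). It also records the predecessor indices used below to recover an optimal subsequence.

\begin{verbatim}
def longest_gapped_increasing(a, s):
    # l[i]: length of the longest increasing subsequence of a with
    # consecutive entries s-gapped that ends at a[i]; p[i]: predecessor.
    n = len(a)
    l, p = [1] * n, [-1] * n

    def s_gapped(j, i):
        # Naive check of the s-gapped condition for a[j] < a[i]; the paper
        # answers this query in O(log n) time via 2-D orthogonal range
        # counting after O(n log n) preprocessing.
        return sum(a[j] < a[x] < a[i] for x in range(j + 1, i)) >= s

    for i in range(n):
        for j in range(i):
            if a[j] < a[i] and s_gapped(j, i) and l[j] + 1 > l[i]:
                l[i], p[i] = l[j] + 1, j

    # Follow the predecessor pointers from an index with the largest l[i]
    # to recover the indices of an optimal subsequence, in order.
    i, seq = max(range(n), key=l.__getitem__), []
    while i != -1:
        seq.append(i)
        i = p[i]
    return seq[::-1]
\end{verbatim}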
While computing $l(i)$, let the algorithm record $p(i)$, which is the index $j<i$ with the largest $l(j)$ s.t. $a_j<a_i$ and $(a_j,a_i)$ is $s$-gapped. This recording process won't increase the magnitude of time complexity. After all $l(i)$ and $p(i)$ are determined, we find the index $i_1$ with the largest $l(i)=:L$, and inductively set $i_{j+1}=p(i_j)$ for $j\in [L-1]$. Then $a_{i_L},a_{i_{L-1}},\ldots,a_{i_1}$ is the longest increasing subsequence of $A$ with consecutive entries $s$-gapped. Let's return to computing the block-monotone subsequence. By the previous argument on block-monotone paths, there exists a monotone subsequence $S\subset A$ with consecutive entries $s$-gapped whose length is at least $k+1$. We can use the algorithm above to compute $S$ within $O(n^2\log n)$ time. Clearly, the entries of $A$ ``gapped'' by consecutive entries of $S$ form a block-monotone subsequence as claimed, and they can be found within $O(n)$ time. Hence we conclude the theorem. \end{proof} \begin{remark}\label{extreme_construction_remark} For each $k,q,s>0$, the simple construction below shows Theorem~\ref{path} is tight up to the constant factor $c^q$. We first construct $K(k,q)$, for each $k$ and $q$, a $q$-colored complete graph on $[k^q]$, whose longest monochromatic monotone path has length $k$: $K(k,1)$ is just a monochromatic copy of the complete graph on $[k]$. To construct $K(k,q)$ from $K(k,q-1)$, take $k$ copies of $K(k,q-1)$ with the same set of $q-1$ colors, place them in order and color the remaining edges by a new color. Now replace each point in $K(k,q)$ by a cluster of $s$ points, where within each cluster one can arbitrarily color the edges. The resulting $q$-colored complete graph has no $k$ subsets $V_1,V_2,\dots,V_k\subset [n]$, each of size $s+1$, with all edges between them monochromatic, otherwise $K(k,q)$ would have a monochromatic monotone path with length larger than $k$. It's well-known that the sharpness of the classical Erd{\H o}s-Szekeres theorem comes from sequences such as\begin{align*} S(k)=(k,k-1,\dots,1,2k,2k-1,\dots,k+1,\ldots,k^2,k^2-1,\dots,k(k-1)+1). \end{align*} We note that if we color the increasing pairs of $S(k)$ by red and the decreasing pairs of $S(k)$ by blue, we obtain the graph $K(k,2)$. If we replace each entry $s_i\in S(k)$ by a cluster of $s$ distinct real numbers very close to $s_i$, we obtain an example showing that Theorem~\ref{main} is asymptotically best possible. \end{remark} \section{Block-monotone sequence partition}\label{sec:partition} This section is devoted to the proof of Theorem \ref{partition}. We shall consider this problem geometrically by identifying each entry $a_i$ of a given sequence $A=(a_i)_{i=1}^n$ with the planar point $(i,a_i)\in \mathbb{R}^2$. As we consider sequences of distinct real numbers, throughout this section, we assume that all point sets have the property that no two members share the same $x$-coordinate or the same $y$-coordinate. Thus, we analogously define block-monotone point sets as follows: A set of $ks$ planar points is said to be \emph{block-increasing (block-decreasing)} with \emph{depth} $k$ and \emph{block-size} $s$ if it can be written as $\{(x_i,y_i)\}_{i=1}^{ks}$ s.t. $x_i<x_{i+1}$ for all $i$ and every sequence $(y_{i_1},y_{i_2},\dots, y_{i_k})$, for $(j-1)s<i_j\leq js$, is increasing (decreasing). We say that a point set is \emph{block-monotone} if it's either block-increasing or block-decreasing.
For each $j\in [k]$ we call the subset $\{(x_i,y_i)\}_{i=(j-1)s+1}^{js}$ the \emph{$j$-th block} of this block-monotone point set. Hence, Theorem~\ref{partition} immediately follows from the following. \begin{theorem}\label{partition_pointset} For any positive integer $k$, every finite planar point set can be partitioned into at most $O(k^2\log k)$ block-monotone point subsets of depth at least $k$ and a remaining set of size at most $(k-1)^2$. \end{theorem} Given a point set $P\subset \mathbb{R}^2$, let \begin{align} U(P):=\{(x,y)\in \mathbb{R}^2; y> y',\ \forall (x',y')\in P\}, \tag{up}\\ D(P):=\{(x,y)\in \mathbb{R}^2; y< y',\ \forall (x',y')\in P\}, \tag{down}\\ L(P):=\{(x,y)\in \mathbb{R}^2; x< x',\ \forall (x',y')\in P\}, \tag{left}\\ R(P):=\{(x,y)\in \mathbb{R}^2; x> x',\ \forall (x',y')\in P\}. \tag{right} \end{align} Our proof of Theorem~\ref{partition_pointset} relies on the following definitions. The constant $c$ below (and throughout this section) is from Theorem~\ref{path}. See Figure~\ref{fig_confpatt} for an illustration. \begin{definition}\label{def_conf} A point set $P$ is said to be a \emph{$(k,t)$-configuration} if $P$ can be written as a disjoint union of subsets $P = Y_1\cup Y_2\cup \cdots\cup Y_{2t+1}$ s.t. \begin{itemize} \item $\forall i\in [t]$, $Y_{2i}$ is a block-monotone point set of depth $k$ and block-size at least $|Y_{2j+1}|/(3ck)^2$ for all $j\in \{0\}\cup[t]$; \item either $\cup_{j=i+1}^{2t+1} Y_j$ is located entirely in $R(Y_i)\cap U(Y_i)$ for all $i\in [2t]$, or $\cup_{j=i+1}^{2t+1} Y_j$ is located entirely in $R(Y_i)\cap D(Y_i)$ for all $i\in [2t]$. \end{itemize} \end{definition} \begin{definition}\label{def_patt} A point set $P$ is said to be a \emph{$(k,l,t)$-pattern} if $P$ can be written as a disjoint union of subsets $P = S_1\cup S_2\cup\dots\cup S_l\cup Y$ s.t. \begin{itemize} \item $Y$ is a $(k,t)$-configuration; \item $\forall i\in [l]$, $S_i$ is a block-monotone point set of depth $k$ and block-size at least $|Y|/(3ck)^2$; \item $\forall i\in [l]$, the set $(\cup_{j=i+1}^l S_j) \cup Y$ is located entirely in one of the following regions: $U(S_i)\cap L(S_i)$, $U(S_i)\cap R(S_i)$, $D(S_i)\cap L(S_i)$ and $D(S_i)\cap R(S_i)$. \end{itemize} \end{definition} \begin{figure}[ht] \centering \includegraphics{fig_confpatt.pdf} \caption{(i) a $(3,2)$-configuration. (ii) a $(3,2,2)$-pattern.} \label{fig_confpatt} \end{figure} If a planar point set $P$ is a $(k,4k,t)$-pattern or a $(k,l,k)$-pattern, the next two lemmas state that we can efficiently partition $P$ into few block-monotone point sets of depth at least $k$ and a small remaining set. \begin{lemma}\label{partition_lemma_1} If $P$ is a $(k,4k,t)$-pattern, then $P$ can be partitioned into $O(k\log k)$ block-monotone point sets of depth at least $k$ and a remaining set of size $O(k^2)$. \end{lemma} \begin{lemma}\label{partition_lemma_2} If $P$ is a $(k,l,k)$-pattern, then $P$ can be partitioned into $O(k^2\log k+l)$ block-monotone point sets of depth at least $k$ and a remaining set of size $O(k^3)$. \end{lemma} Starting with an arbitrary point set $P$, which can be regarded as a $(k,0,0)$-pattern, we will repeatedly apply the following lemma until $P$ is partitioned into few block-monotone point sets, a set $P'$ that is either a $(k,4k,t)$-pattern or a $(k,l,k)$-pattern, and a small remaining set. 
\begin{lemma}\label{partition_lemma_3} For $l < 4k$ and $t < k$, a $(k,l,t)$-pattern $P$ can be partitioned into $r$ block-monotone point sets with depth at least $k$, a point set $P'$, and a remaining set $E$ s.t.\begin{enumerate} \item $r = O(k)$, $|P'|\leq k(3k-1)^2$ and $E=\emptyset$; or \item $r = O(k\log k)$, $P'$ is a $(k,l,t+1)$-pattern and $|E|=O(k^2)$; or \item $r = O(k\log k)$, $P'$ is a $(k,l+t,0)$-pattern and $|E|=O(k^2)$. \end{enumerate} Moreover, when $t=0$, we can always have this partition of $P$ as in either case 1 or case 2. \end{lemma} Before we prove the lemmas above, let us use them to prove Theorem \ref{partition_pointset}. \begin{proof}[Proof of Theorem~\ref{partition_pointset}] Let $P$ be the given point set. For $i\geq 0$, we inductively construct a partition $\mathcal{F}_i\cup \{P_i,E_i\}$ of $P$ s.t. \begin{itemize} \item $P_i$ is a $(k,l_i,t_i)$-pattern, \item $|E_i|=O(ik^2)$, \item $\mathcal{F}_i$ is a disjoint family of block-monotone point sets of depth at least $k$, and $|\mathcal{F}_i| = O(ik\log k)$. \end{itemize} We start with $P_0 = P$, which is a $(k,0,0)$-pattern, and $\mathcal{F}_0= E_0 = \emptyset$. Suppose we have constructed the $i$-th partition $\mathcal{F}_i\cup \{P_i,E_i\}$ of $P$. If $|P_i| \leq k(3k-1)^2$, or $l_i \geq 4k$, or $t_i \geq k$, we end this inductive construction process, otherwise, we construct the next partition $\mathcal{F}_{i+1}\cup \{P_{i+1},E_{i+1}\}$ as follows. According to Lemma~\ref{partition_lemma_3}, $P_i$ can be partitioned into $r$ block-monotone point sets with depth at least $k$, denoted as $\{P_{i,1},\dots, P_{i,r}\}$, a point set $P'$, and a remaining set $E$, s.t. either one of the following cases happens. \medskip \noindent \emph{Case 1.} We have $r = O(k)$, $|P'|\leq k(3k-1)^2$, and $E=\emptyset$. In this case, we define $\mathcal{F}_{i + 1}=\mathcal{F}_i\cup\{P_{i,1},\dots, P_{i,r}\}$, $P_{i + 1}=P'$, and $E_{i+1}=E_i\cup E$. Notice that we have $|\mathcal{F}_{i + 1}| = |\mathcal{F}_{i}| + O(k) = O((i + 1)k\log k)$ and $|E_{i+1}|=|E_i|+0=O((i+1)k^2)$. \medskip \noindent \emph{Case 2.} We have $r = O(k\log k)$, $P'$ is a $(k,l_i,t_i+1)$-pattern, and $|E|=O(k^2)$. In this case, we define $\mathcal{F}_{i + 1}=\mathcal{F}_i\cup\{P_{i,1},\dots, P_{i,r}\}$, $P_{i + 1}=P'$, and $E_{i+1}=E_i\cup E$. This means $l_{i+1}=l_i$ and $t_{i+1}=t_{i}+1$. Notice that we have $|\mathcal{F}_{i + 1}| = |\mathcal{F}_{i}| + O(k\log k) = O((i + 1)k\log k)$ and $|E_{i+1}|=|E_i|+O(k^2)=O((i+1)k^2)$. \medskip \noindent \emph{Case 3.} We have $r = O(k\log k)$, $P'$ is a $(k,l_i+t_i,0)$-pattern, and $|E|=O(k^2)$. In this case, we define $\mathcal{F}_{i + 1}=\mathcal{F}_i\cup\{P_{i,1},\dots, P_{i,r}\}$, $P_{i + 1}=P'$, and $E_{i+1}=E_i\cup E$. This means $l_{i+1}=l_i+t_i$ and $t_{i+1}=0$. Again, we have $|\mathcal{F}_{i + 1}| = O((i + 1)k\log k)$ and $|E_{i+1}|=O((i+1)k^2)$. \medskip When $t_i=0$, by Lemma~\ref{partition_lemma_3}, we can always partition $P_i$ as in Case 1 or Case 2. So we always construct $\mathcal{F}_{i+1}\cup \{P_{i+1},E_{i+1}\}$ according to Case 1 or Case 2 when $t_i=0$. Let $\mathcal{F}_w\cup \{P_w,E_w\}$ be the last partition of $P$ constructed in this process. Here, $P_w$ is a $(k,l_w,t_w)$-pattern. We must have either $|P_w| \leq k(3k-1)^2$, or $l_w \geq 4k$, or $t_w \geq k$. Since $t_{i+1}\leq t_i+1$ and $l_{i+1}\leq l_{i}+t_i$ for all $i$, we have $t_w\leq k$ and $l_w\leq 5k$. 
Since we always construct the $(i+1)$-th partition according to Case 1 or Case 2 when $t_i = 0$, the sum $l_i + t_i$ always increases by at least $1$ after every $2$ inductive steps. So we have $w/2\leq t_w+l_w\leq 6k$ and hence $w\leq 12k$. Now we handle $\mathcal{F}_w\cup \{P_w,E_w\}$ based on how the construction process ends. If the construction process ended with $|P_w| \leq k(3k-1)^2$, we define $E_{w+1}=E_w\cup P_w$ and $\mathcal{F}_{w + 1}=\mathcal{F}_{w}$. Since $w\leq 12k$, we have $|\mathcal{F}_{w + 1}|=O(k^2\log(k))$ and $|E_{w+1}|=O(k^3)$. If the construction process ended with $l_w\geq 4k$, by Definition~\ref{def_patt}, we can partition $P_w$ into $l_w-4k$ many block-monotone point sets of depth $k$, denoted as $\{P_{w,1},\dots, P_{w,l_w-4k}\}$, and a $(k,4k,t_w)$-pattern $P'_w$. Then, by Lemma~\ref{partition_lemma_1}, $P'_w$ can be partitioned into $r=O(k\log k)$ block-monotone point sets of depth at least $k$, denoted as $\{P'_{w,1},\dots,P'_{w,r}\}$, and a remaining set $E$ of size $O(k^2)$. We define $E_{w+1}=E_w\cup E$ and \begin{align*} \mathcal{F}_{w + 1}=\mathcal{F}_{w}\cup\{P_{w,1},\dots, P_{w,l_w-4k},P'_{w,1},\dots,P'_{w,r}\}. \end{align*} Using $w\leq 12k$ and other bounds we mentioned above, we can check $|\mathcal{F}_{w + 1}|=O(k^2\log(k))$ and $|E_{w+1}|=O(k^3)$. If the construction process ended with $t_w\geq k$, we actually have $t_w=k$ and $l_w<4k$. By Lemma~\ref{partition_lemma_2}, we can partition $P_w$ into $r=O(k^2\log(k)+l_{w})$ block-monotone point sets of depth at least $k$, denoted as $\{P_{w,1},\dots, P_{w,r}\}$, and a remaining set $E$ of size $O(k^3)$. We define $E_{w+1}=E_w\cup E$ and $ \mathcal{F}_{w + 1}=\mathcal{F}_{w}\cup\{P_{w,1},\dots, P_{w,r}\}$. Again, we can check $|\mathcal{F}_{w + 1}|=O(k^2\log(k))$ and $|E_{w+1}|=O(k^3)$. \medskip Overall, we can always obtain a partition $\mathcal{F}_{w + 1}\cup \{E_{w + 1}\}$ of $P$ with $|\mathcal{F}_{w + 1}|=O(k^2\log(k))$ and $|E_{w+1}|=O(k^3)$. Using the classical Erd{\H o}s-Szekeres theorem, we can always find a monotone sequence of length at least $k$ in $E_{w+1}$ when $|E_{w+1}|>(k-1)^2$. By a repeated application of this fact, we can partition $E_{w+1}$ into $O(k^2)$ block-monotone point sets of depth $k$ and block-size~$1$, and a remaining set $E$ of size at most $(k-1)^2$. We define $\mathcal{F}$ to be the union of $\mathcal{F}_{w+1}$ and these block-monotone sequences. The partition $\mathcal{F}\cup \{E\}$ of $P$ has the desired properties and concludes the proof. \end{proof} We now give proofs for Lemmas \ref{partition_lemma_1}, \ref{partition_lemma_2}, and \ref{partition_lemma_3}. We need the following facts. \begin{fact}\label{partition_fact_1} For any positive integer $k$, every point set $P$ can be partitioned into $O(k\log(k))$ block-monotone point sets of depth $k$ and a remaining set $P'$ with $|P'|\leq \max\{|P|/k,(k-1)^2\}$. \end{fact} This fact can be established by repeatedly using Theorem~\ref{main} to pull out block-monotone point sets and applying the elementary inequality $(1-x^{-1})^{x\log(x)}\leq x^{-1}$ for any $x>1$. \begin{fact}\label{partition_fact_2} For any positive integer $k$ and $m$, every block-monotone point set $P$ with depth $k$ and $|P|\geq m$ can be partitioned into a block-monotone point set of depth $k$, a subset of size exactly $m$, and a remaining set of size less than $k$. \end{fact} This fact can be established by taking out $\lceil m/k\rceil$ points from each block of $P$. Then we have taken out $k\cdot\lceil m/k\rceil=m+r$ points, where $0\leq r<k$.
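Returning to Fact~\ref{partition_fact_1}, one way to spell out the calculation, using only the two ingredients named above, is the following. As long as the current remaining set $R$ satisfies $|R|>(k-1)^2$, Theorem~\ref{main} extracts from $R$ a block-monotone point set of depth $k$ whose total size is at least $k\cdot|R|/(ck)^2=|R|/(c^2k)$ (when $|R|<(ck)^2$ the extracted set has block-size $1$ and size $k$, which is still at least $|R|/(c^2k)$). Hence, if $R_t$ denotes the remaining set after $t$ extractions, then\begin{align*} |R_t|\leq \Big(1-\frac{1}{c^2k}\Big)^{t}|P|, \qquad t=\lceil c^2k\log(c^2k)\rceil=O(k\log k), \end{align*} and the elementary inequality $(1-x^{-1})^{x\log(x)}\leq x^{-1}$ with $x=c^2k$ gives $|R_t|\leq |P|/(c^2k)\leq |P|/k$, unless the process has already stopped because the remaining set has at most $(k-1)^2$ points.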
\begin{proof}[Proof of Lemma~\ref{partition_lemma_1}] Write the given $(k,4k,t)$-pattern $P=S_1\cup \dots \cup S_{4k}\cup Y$ as in Definition~\ref{def_patt}. By definition, each block-monotone point set $S_i$ is contained in one of the $4$ regions: $U(Y)\cap L(Y)$, $U(Y)\cap R(Y)$, $D(Y)\cap L(Y)$ and $D(Y)\cap R(Y)$. By the pigeonhole principle, there are $k$ indices $i_1,\dots, i_k$ s.t. all $S_{i_j}$, for $j\in [k]$, are contained in one of the regions above. Without loss of generality, we assume $S_1,\dots,S_{k}$ are all located entirely in $U(Y)\cap L(Y)$. We have $S_{i_2}\subset D(S_{i_1})\cap R(S_{i_1})$ for all $1\leq i_1<i_2\leq k$. Indeed, since $Y\subset D(S_{i_1})\cap R(S_{i_1})$, Definition~\ref{def_patt} guarantees that $(\cup_{j=i_1+1}^k S_j) \cup Y$ is contained in $D(S_{i_1})\cap R(S_{i_1})$ and, in particular, $S_{i_2}$ is contained in this region. See Figure~\ref{fig_partition_lemma} for an illustration. \begin{figure} \centering \includegraphics{fig_partition_lemma.pdf} \caption{In the proof of Lemma~\ref{partition_lemma_1}, $S_{i_2}\subset D(S_{i_1})\cap R(S_{i_1})$ for $i_1<i_2$.} \label{fig_partition_lemma} \end{figure} Now, applying Fact~\ref{partition_fact_1} to $Y$, we can partition $Y$ into $\{A_1,\dots, A_w, Y'\}$, where $w=O(k\log(k))$, s.t. each $A_j$ is block-monotone of depth $9c^2k$, and either $|Y'|\leq |Y|/(9c^2k)$ or $|Y'|\leq (9c^2k-1)^2$. If $|Y'|\leq (9c^2k-1)^2$, we have partitioned $P$ into $O(k\log(k))$ block-monotone point sets of depth at least $k$, which are $\{A_1,\dots, A_w, S_1,\dots, S_{4k}\}$, and a remaining set $Y'$ of size $O(k^2)$, as wanted. If $|Y'|\leq |Y|/(9c^2k)$, by Definition~\ref{def_patt} we have $|Y'| \leq |S_i|$ for $i\in [k]$. We can apply Fact~\ref{partition_fact_2} with $m:=|Y'|$ to $S_i$ to obtain a partition $S_i=S'_i\cup B_i\cup E_i$ where $S'_i$ is block-monotone of depth $k$, $|B_i|=|Y'|$ and $|E_i|\leq k$. We observe that $C=B_1\cup B_2\cup \dots \cup B_k \cup Y'$ is block-monotone of depth $k+1$ by its construction. Then we have partitioned $P$ into $O(k\log(k))$ many block-monotone point sets, which are $\{A_1,\dots, A_w, S'_1,\dots, S'_k, S_{k+1},\dots, S_{4k}, C\}$, and a remaining set $E:=\cup_{i=1}^k E_i$ of size $O(k^2)$, as wanted. \end{proof} \begin{proof}[Proof of Lemma~\ref{partition_lemma_2}] Write the given $(k,l,k)$-pattern $P=S_1\cup \dots\cup S_l \cup Y$ as in Definition~\ref{def_patt} and the $(k,k)$-configuration $Y=Y_1\cup \dots \cup Y_{2k+1}$ as in Definition~\ref{def_conf}. Since each $S_i$ is block-monotone of depth $k$, it suffices to partition $Y$ into $O(k^2\log(k))$ many block-monotone point sets of depth at least $k$ and a remaining set of size $O(k^3)$. For each $j\in \{0\}\cup [k]$, we apply Fact~\ref{partition_fact_1} to obtain a partition of $Y_{2j+1}$ into $O(k\log(k))$ many block-monotone point sets of depth $9c^2k>k$ and a remaining set $Y'_{2j+1}$ of size at most $|Y_{2j+1}|/(9c^2k)$ or at most $(9c^2k-1)^2$. We can apply Fact~\ref{partition_fact_1} again to partition $Y'_{2j+1}$ into $O(k\log(k))$ many block-monotone point sets of depth $k+1$ and a remaining set $Y''_{2j+1}$ with\begin{align}\label{eq_partition_lemma_2} |Y''_{2j+1}|\leq \max\{|Y_{2j+1}|/(9c^2k(k+1)),(9c^2k-1)^2\}. \end{align} Denote the block-monotone point sets produced in this process as $\{A_{j,x};x\in [w_j]\}$, where $w_j=O(k\log(k))$. Next we denote $J_1:=\{j\in \{0\}\cup [k]; |Y_{2j+1}''|>(9c^2k-1)^2\}$ and $J_2:=(\{0\}\cup [k])\setminus J_1$.
For each $j\in J_1$ and $i\in [k]$, we must have\begin{align*} |Y_{2j+1}''|\leq |Y_{2j+1}|/(9c^2k(k+1))\leq |Y_{2i}|/(k+1), \end{align*} where the second inequality is by Definition~\ref{def_conf}. Hence $|Y_{2i}|\geq |\cup_{j\in J_1} Y''_{2j+1}|$. We can apply Fact~\ref{partition_fact_2} with $m:=|\cup_{j\in J_1} Y''_{2j+1}|$ to $Y_{2i}$ to obtain a partition $Y_{2i}=Y'_{2i}\cup B_{i}\cup E_{i}$ where $Y'_{2i}$ is block-monotone of depth $k$, $|B_{i}|=m$, and $|E_{i}|\leq k$. Since $|B_{i}|=|\cup_{j\in J_1} Y''_{2j+1}|$, we can take a further partition $B_{i}=\cup_{j\in J_1} B_{j,i}$ with $|B_{j,i}|=|Y''_{2j+1}|$ for each $j\in J_1$. Then we observe that $C_{j}=B_{j,1}\cup \dots \cup B_{j,j}\cup Y''_{2j+1}\cup B_{j,j+1}\cup \dots \cup B_{j,k}$ is block-monotone of depth $k+1$ for each $j\in J_1$ by its construction. Finally, letting $E:=(\cup_{i=1}^k E_i)\cup(\cup_{j\in J_2}Y''_{2j+1})$, it is easy to check that $|E|=O(k^3)$. So we have partitioned $Y$ into $O(k^2\log(k))$ many block-monotone point sets, which are\begin{align*} \{A_{j,x}\}_{j\in\{0\}\cup [k], x\in [w_j]}\cup \{C_j\}_{j\in J_1} \cup \{Y'_{2i}\}_{i\in [k]}, \end{align*} and a remaining set $E$ of size $O(k^3)$, as wanted. \end{proof} \begin{proof}[Proof of Lemma~\ref{partition_lemma_3}] Write the given $(k,l,t)$-pattern $P=S_1\cup \dots\cup S_l \cup Y$ as in Definition~\ref{def_patt} and the $(k,t)$-configuration $Y=Y_1\cup \dots \cup Y_{2t+1}$ as in Definition~\ref{def_conf}. Without loss of generality, we assume $\cup_{j=i+1}^{2t+1} Y_j$ is located entirely in $R(Y_i)\cap U(Y_i)$ for all $i\in [2t]$. We also assume that $Y_1$ has the largest size among $\{Y_{2j+1}; j\in \{0\}\cup [t]\}$ because other scenarios can be proved similarly. If $|Y_1|\leq (3k-1)^2$, we can partition $P$ into $r=l+t=O(k)$ many block-monotone point sets of depth $k$, which are $\{S_1,\dots, S_l,Y_2,Y_4,\dots, Y_{2t}\}$, and a remaining set $P':=\cup_{j=0}^t Y_{2j+1}$ of size at most $k(3k-1)^2$, since $t<k$. So we conclude the lemma in case (1). Now we assume $|Y_1|>(3k-1)^2$. Apply Theorem~\ref{main} to extract a block-monotone point set $S\subset Y_1$ of depth $3k$ and block-size at least $|Y_1|/(3ck)^2$ and name the $i$-th block of $S$ as $B_i$ for $i\in [3k]$. Our proof splits into two cases: $S$ being block-increasing or $S$ being block-decreasing. \medskip \noindent \emph{Case 1.} Suppose $S$ is block-increasing. Write $S_{l+i}:=Y_{2(t+1-i)}$ for each $i\in [t]$ and set $P'=S_1\cup \dots\cup S_{l+t}\cup (Y_1\setminus S)$. We can check that $P'$ is a $(k,k+l,0)$-pattern by Definition~\ref{def_patt}. Let $Z:=\cup_{j=1}^t Y_{2j+1}$. By an argument similar to \eqref{eq_partition_lemma_2}, we can apply Fact~\ref{partition_fact_1} three times to partition $Z$ into $\{A_{1},\dots,A_{w},Z'\}$, where $w=O(k\log(k))$, s.t. each $A_{i}$ is block-monotone of depth at least $k$ and $|Z'|\leq\max\{|Z|/(9c^2k^3), (9c^2k-1)^2\}$. If $|Z'|\leq (9c^2k-1)^2$, let $E=Z'$. We have partitioned $P$ into $O(k\log(k))$ block-monotone point sets of depth at least $k$, which are $\{A_1,\dots, A_w, S\}$, a $(k,k+l,0)$-pattern $P'$, and a remaining set $E$ of size $O(k^2)$. So we conclude the lemma in case (3). If $|Z'|\leq |Z|/(9c^2k^3)$, then, noticing that $|Z|\leq k|Y_1|$ since $t<k$, we have $|Z'|\leq |Y_1|/(3ck)^2\leq |B_i|$, for each $i\in [3k]$. We can take a partition $B_i=B_{i}'\cup B_i''$ with $|B_i'|=|Z'|$.
We observe that $C:=B_1'\cup \dots\cup B_{3k}'\cup Z'$ is block-increasing of depth $3k+1$ and $S':=B''_{1}\cup \dots\cup B''_{3k}$ is block-increasing of depth $3k$ by their construction. We have partitioned $P$ into $O(k\log(k))$ block-monotone point sets of depth at least $k$, which are $\{A_1,\dots, A_w, C, S'\}$, and a $(k,k+l,0)$-pattern $P'$. So we conclude the lemma in case (3). \medskip \noindent \emph{Case 2.} Suppose $S$ is block-decreasing, we choose two points in the following regions:\begin{align*} &(x_1, y_1)\in R(B_k)\cap D(B_k)\cap L(B_{k+1})\cap U(B_{k+1}),\\ &(x_2, y_2)\in R(B_{2k})\cap D(B_{2k})\cap L(B_{2k+1})\cap U(B_{2k+1}). \end{align*} Also we require $x_1$ or $x_2$ isn't the $x$-coordinate of any element in $P$, and $y_1$ or $y_2$ isn't the $y$-coordinate of any element in $P$. We use the lines $x=x_i$ and $y=y_i$ for $i=1,2$ to divide the plane into a $3\times 3$ grid and label the regions $R_i,i=1,\dots,9$ as in Figure~\ref{fig:partition_9region}. \begin{figure}[ht] \centering \includegraphics{fig_9region.pdf} \caption{Division of the plane into $9$ regions according to $(x_i,y_i),i=1,2$. Each ellipse represents a cluster of points as defined in the proof.} \label{fig:partition_9region} \end{figure} Let $C:=B_{k+1}\cup \dots \cup B_{2k}$ and notice that $C$ is block-monotone of depth $k$ and block-size at least $|Y_1|/(3ck)^2$. Define\begin{align*} Y':=(R_7\cap Y_1)\cup C\cup (R_3\cap Y_1)\cup Y_2\cup Y_3\cup \dots \cup Y_{2t+1}. \end{align*} We can check that $Y'$ is a $(k,t+1)$-configuration and $P':=S_1\cup \dots\cup S_l\cup Y'$ is a $(k,l,t+1)$-pattern according to Definitions~\ref{def_conf} and~\ref{def_patt}. Next, we set $Z_1:=(Y_1\setminus S)\cap (R_5\cup R_6\cup R_8\cup R_9)$ and $Z_2:=(Y_1\setminus S)\cap (R_1\cup R_2\cup R_4)$. By an argument similar to \eqref{eq_partition_lemma_2}, we can apply Fact~\ref{partition_fact_1} twice to partition $Z_j$ into $\{A_{j,1},\dots,A_{j,w_j},Z'_j\}$, where $w_j=O(k\log(k))$, s.t. each $A_{j,x}$ is block-monotone of depth at least $k$ and $|Z'_j|\leq \max\{ |Z_j|/(3ck)^2, (9c^2k-1)^2\}$. Writing $C_1:=B_1\cup \dots B_k$ and $C_2=B_{2k+1}\cup \dots B_{3k}$, then, for $j=1,2$, either $|Z'_j|=O(k^2)$ or $C_j\cup Z'_j$ can be partitioned into two block-decreasing point sets of depth at least $k$. Indeed, if $|Z'_1|> (9c^2k-1)^2$, we must have\begin{align*} |Z'_1|\leq |Z_1|/(3ck)^2\leq |Y_1|/(3ck)^2\leq |B_i|, \end{align*} for each $i\in [k]$. Take a partition $B_i=B_{i}'\cup B_i''$ with $|B_i'|=|Z'_1|$, then we can observe $C_1:=B'_1\cup \dots \cup B'_k\cup Z'_1$ is block-decreasing of depth $k+1$ and $C_1'=B''_1\cup \dots \cup B''_k$ is block-decreasing of depth $k$ by their construction, as wanted. A similar argument applies to $C_2\cup Z'_2$. We have partitioned $P\setminus (C_1\cup Z'_1\cup C_2\cup Z'_2)$ into $O(k\log(k))$ block-monotone sequence of depth at least $k$, which are $\{A_{j,x}; j=1,2, x\in [w_j]\}$, and a $(k,l,t+1)$-pattern $P'$. Combined with the claim in previous paragraph, we conclude the lemma in case (2). Finally, when we are in the special case $t=0$ and $S$ is block-increasing, we can still use the arguments for the case when $S$ is block-decreasing and conclude the lemma in case (2). The condition $t=0$ can be used to verify $Y'$ is a $(k,t+1)$-configuration, which is generally not true when $t>0$ and $S$ is block-increasing. \end{proof} \section{Applications}\label{sec:applications} \subsection{Mutually avoiding sets} We devote this subsection to the proof of Theorem~\ref{avoiding}. 
The proof is essentially the same as in \cite{aronov1991crossing}, but we include it here for completeness. Given a non-vertical line $L$ in the plane, we denote $L^+$ to be the closed upper-half plane defined by $L$, and $L^-$ to be the closed lower-half plane defined by $L$. We need the following result, which is Lemma~1 in \cite{aronov1991crossing}. \begin{lemma}\label{avoiding_lemma} Let $P,Q\subset \mathbb{R}^2$ be two $n$-element point sets with $P$ and $Q$ separated by a non-vertical line $L$ and $P\cup Q$ in general position. Then for any positive integer $m\leq n$, there is another non-vertical line $H$ s.t. $|H^+\cap P|=|H^+\cap Q|=m$ or $|H^-\cap P|=|H^-\cap Q|=m$. \end{lemma} \begin{proof}[Proof of Theorem \ref{avoiding}] Let $k$ be as given and $n > 24k^2$. Let $P$ be an $n$-element point set in the plane in general position. We start by taking a non-vertical line $L$ to partition the plane s.t. each half-plane contains $\lfloor \frac{n}{2} \rfloor$ points from $P$. Then by Lemma~\ref{avoiding_lemma}, we obtain a non-vertical line $H$ with, say, $H^+\cap(L^{+}\cap P)=H^+\cap (L^-\cap P)=\lfloor \frac{n}{6}\rfloor$. Next, we find a third line $N$, by first setting $N = H$, and then sweeping $N$ towards the direction of $H^-$, keeping it parallel with $H$, until $H^-\cap N^+\cap L^+$ or $H^-\cap N^+\cap L^-$ contains $\lfloor\frac{n}{6}\rfloor$ points from $P$. Without loss of generality, let us assume $Q:=P\cap (H^-\cap N^+\cap L^+)$ first reaches $\lfloor\frac{n}{6}\rfloor$ points, and the region $H^-\cap N^+\cap L^-$ has less than $\lfloor\frac{n}{6}\rfloor$ points from $P$. Hence, both $Q_l:=P\cap (H^+\cap L^-)$ and $Q_r:=P\cap (N^-\cap L^-)$ have at least $\lfloor \frac{n}{6}\rfloor$ points. See Figure~\ref{fig:avoiding1} for an illustration. \begin{figure}[ht] \centering \includegraphics{fig_avoiding1.pdf} \caption{The division of plane into regions according to $L,H,N$.} \label{fig:avoiding1} \end{figure} We can apply an affine transformation so that $L$ and $H$ are perpendicular, and $N$ is on the right side of $H$. Think of $L$ as the $x$-axis, $H$ as the $y$-axis, and $N$ as a vertical line with a positive $x$-coordinate. After ordering the elements in $Q$ according to their $x$-coordinates, we apply Theorem \ref{main} to $Q$ to obtain disjoint subsets $Q_1,\dots,Q_{2k+1} \subset Q$ s.t. $(Q_1,\dots, Q_{2k+1})$ is block-monotone of depth $2k + 1$ and block-size $\Omega(n/k^2)$, where each entry represents its $y$-coordinate. Without loss of generality, we can assume it is block-decreasing, otherwise we can work with $Q_r$ rather than $Q_l$ in the following arguments. \begin{figure}[ht] \centering \includegraphics{fig_avoiding2.pdf} \caption{An example when $A_i$'s are increasing. Each ellipse represents a cluster of points as defined in the proof.} \label{fig:avoiding2} \end{figure} Now fix a point $q \in Q_{k+1}$. We express the points in $Q_l$ in polar coordinates $(\rho,\theta)$ with $q$ being the origin. We can assume no two points in $Q_l$ are at the same distance to $q$, otherwise a slight perturbation may be applied. By ordering the points in $Q_l$ with respect to $\theta$, in counter-clockwise order, we apply Theorem~\ref{main} to $Q_l$ to obtain disjoint subsets $A_1,\dots,A_k \subset Q_l$ s.t. $(A_1,\dots,A_k)$ is block-monotone of depth $k$ and block-size $\Omega(n/k^2)$, where each entry represents its distance to $q$. If it's block-decreasing, take $B_i=Q_{i}$ for $i\in[k]$, and if it's block-increasing, take $B_i=Q_{k +1+i}$. 
It is easy to check that the sets $\{A_1,\ldots, A_k\}$ and $\{B_1,\ldots, B_k\}$ have the claimed properties. See Figure~\ref{fig:avoiding2} for an illustration.\end{proof} \subsection{Monotone biarc diagrams} We devote this subsection to the proof of Theorem~\ref{biarc}. Our proof is constructive, hence implying a recursive algorithm for the claimed outcome. We start by making the simple observation that our main results hold for sequences of (not necessarily distinct) real numbers, if the term \emph{block-monotone} now refers to being block-nondecreasing or block-nonincreasing. More precisely, a sequence $(a_1, a_2,\dots,a_{ks})$ of real numbers is said to be \emph{block-nondecreasing} (\emph{block-nonincreasing}) with \emph{depth} $k$ and \emph{block-size} $s$ if every subsequence $(a_{i_1},a_{i_2},\dots, a_{i_k})$, for $(j-1)s<i_j\leq js$, is nondecreasing (nonincreasing). \begin{theorem}\label{partition_notdistinct} For any positive integer $k$, every finite sequence of real numbers can be partitioned into at most $C_k=O(k^2\log k)$ block-monotone subsequences of depth at least $k$ upon deleting at most $(k-1)^2$ entries. \end{theorem} To see our main results imply the above variation, it suffices to slightly perturb the possibly equal entries of a given sequence until all entries are distinct. Algorithms for our main results can also be applied after such a perturbation. We need the following lemma in \cite{yehuda1998partitioning} for Theorem~\ref{biarc}. \begin{lemma}\label{biarc_lemma} For any graph $G=(V,E)$ with $V = [n]$, there exists $b\in [n]$ s.t. both the induced subgraphs of $G$ on $\{1,2,\dots,b\}$ and $\{b+1,b+2,\dots,n\}$ have no more than $|E|/2$ edges. \end{lemma} \begin{proof} For $U \subset [n]$, let $G_U$ denote the induced subgraph of $G$ on $U$. Let $b$ be the largest element of $[n]$ s.t. $|E(G_{[b]})|\leq \frac{|E|}{2}$, so $|E(G_{[b+1]})|>\frac{|E|}{2}$. Notice that $E(G_{[b+1]})$ and $E(G_{[n]\setminus[b]})$ are two disjoint subsets of $E$, so $|E(G_{[n]\setminus[b]})|\leq |E|- |E(G_{[b+1]})|<\frac{|E|}{2}$, as wanted. \end{proof} \begin{proof}[Proof of Theorem \ref{biarc}] We proceed by induction on $|E|$. The base case when $|E| = 1$ is trivial. For the inductive step, by the given order on $V$, we can identify $V$ with $[n]$. We choose $b$ according to Lemma~\ref{biarc_lemma}. Consider the set $E'$ of edges between $[b]$ and $[n]\setminus [b]$. By writing each edge $e \in E'$ as $(x,y)$, where $x\in [b]$ and $y\in [n]\setminus[b]$, we order the elements in $E'$ lexicographically: for $(x,y),(x',y') \in E'$, we have $(x,y) < (x',y')$ when $x < x'$ or when $x = x'$ and $y < y'$. Given the order on $E'$ described above, consider the sequence of right-endpoints in $E'$. We apply Theorem~\ref{partition_notdistinct} with parameter $k = \lceil\epsilon^{-1}\rceil$ to this sequence, to decompose it into $C_k$ many block-monotone sequences of depth $k$, upon deleting at most $(k-1)^2$ entries. For each block-monotone subsequence of depth $k$, we draw the corresponding edges on a single page as follows. If the subsequence is block-nonincreasing of depth $k$ and block-size $s$, we draw the corresponding edges as semicircles above the spine. Then, two edges cross only if they come from the same block. Since there are a total of $\binom{ks}{2}$ pairs of edges, and only $k\binom{s}{2}$ such pairs from the same block, the fraction of pairs of edges that cross in such a drawing is at most $1/k$. See Figure~\ref{fig:biarc_example}(i).
Similarly, if the subsequence is block-nondecreasing of depth $k$ and block-size $s$, we draw the corresponding edges as monotone biarcs, consisting of two semicircles with the first (left) one above the spine, and the second (right) one below the spine. Furthermore, we draw the monotone biarc s.t. it crosses the spine at $b + 1 - \ell/n - r/(2n^2)$, where $\ell$ and $r$ are the left and right endpoints of the edge respectively. See Figure~\ref{fig:biarc_example}(ii). By the same argument above, the fraction of pairs of edges that cross in such a drawing is at most $1/k$. Hence, $E'$ can be decomposed into $C_k+(k-1)^2$ many monotone biarc diagrams, s.t. each monotone biarc diagram has at most $1/k$-fraction of pairs of edges that are crossing. \begin{figure}[ht] \centering \includegraphics{fig_biarc.pdf} \caption{(i) a proper arc diagram. (ii) a monotone biarc diagram.} \label{fig:biarc_example} \end{figure} For edges within $[b]$, Lemma \ref{biarc_lemma} and the inductive hypothesis tell us that they can be decomposed into $(C_k+(k-1)^2)(\log|E|-1)$ monotone biarc diagrams, s.t. the fraction of pairs of edges that are crossing in each diagram is at most $1/k$. The same argument applies to the edges within $[n]\setminus[b]$. However, notice that two such monotone biarc diagrams, one in $[b]$ and another in $[n]\setminus[b]$, can be drawn on the same page without introducing more crossings. Hence, we can decompose $E\backslash E'$ into at most $(C_k+(k-1)^2)(\log|E|-1)$ such monotone biarc diagrams, giving us a total of $(C_k+(k-1)^2)\log|E|$ monotone biarc diagrams.\end{proof} \section{Final remarks}\label{sec:remarks} 1. We call a sequence $(a_1,a_2,\dots,a_n)$ of $n$ distinct real numbers \emph{$\epsilon$-increasing} (\emph{$\epsilon$-decreasing}) if the number of decreasing (increasing) pairs $(a_i,a_j)$, where $i<j$, is less than $\epsilon n^2$. And we call a sequence \emph{$\epsilon$-monotone} if it's either $\epsilon$-increasing or $\epsilon$-decreasing. Clearly, a block-monotone sequence of depth $k$ is an $\epsilon$-monotone sequence with $\epsilon=k^{-1}$. Hence, Theorem~\ref{main} implies the following. \begin{corollary}\label{main_eps} For all $n > 0$ and $\epsilon>0$, every sequence of $n$ distinct real numbers contains an $\epsilon$-monotone subsequence of length at least $\Omega(\epsilon n)$. \end{corollary} \noindent This corollary is also asymptotically best possible. To see this, for $n>(k-1)^2$ and a sequence $A=(a_i)_{i=1}^n$ of distinct real numbers, we can apply Corollary~\ref{main_eps} with $\epsilon=(64k)^{-1}$ to $A$ and obtain an $\epsilon$-monotone subsequence $S\subset A$ and then apply Lemma~2.1 in \cite{pach2021planar} to $S$ to obtain a block-monotone subsequence of depth $k$ and block-size $\Omega(n/k^2)$. So Corollary~\ref{main_eps} implies Theorem~\ref{main}. \medskip \noindent 2. Let $f(k)$ be the smallest number $N$ s.t. every finite sequence of distinct real numbers can be partitioned into at most $N$ block-monotone subsequences of depth at least $k$ upon deleting $(k-1)^2$ entries. Our Theorem~\ref{partition} is equivalent to saying $f(k)=O(k^2\log(k))$. The $K(k,2)$-type construction in Remark~\ref{extreme_construction_remark} implies $f(k)\geq k$. What is the asymptotic order of $f(k)$? \medskip \noindent 3. We suspect our algorithm presented in Theorem~\ref{main} can be improved. How fast can we compute a block-monotone subsequence as large as claimed in Theorem~\ref{main}? Can we do it within time almost linear in $n$ for all $k$? 
\medskip \noindent {\bf Acknowledgement.} We wish to thank the anonymous SoCG referees for their valuable suggestions. \bibliographystyle{plain}
\section{Introduction} Plenty of applications, such as power grids and traffic networks, can be abstracted as undirected graphs. PageRank \cite{page1999pagerank,brin2012reprint} is an essential research topic on graphs, covering both directed and undirected graphs. However, there is little work on computing the PageRank of undirected graphs. Usually, an undirected graph is treated as a complicated directed graph when computing PageRank, and thus some disadvantages, such as the increase in computation, are inevitable. Moreover, some important properties of undirected graphs, such as symmetry, are ignored by existing methods. For an undirected graph, each eigenvalue of the probability transition matrix is real, and the corresponding eigenvectors form a basis of the $n$-dimensional linear space. Then both the personalized vector and the PageRank vector are linear combinations of this basis and are completely determined by the coefficients. Thus, computing the PageRank vector from the personalized vector can be reduced to a transformation of the coefficients, which is a function of the eigenvalues. However, this is infeasible on large-scale graphs because of the huge cost of computing eigenvalues and eigenvectors, so another method is needed to avoid this cost. The Chebyshev polynomial\cite{2011Chebyshev} is an effective tool for approximation. The recursion relation of Chebyshev polynomials can be extended to the matrix domain, and thus the multiplication between a Chebyshev matrix polynomial and the personalized vector can be computed iteratively. Hence, it is possible to obtain the approximate PageRank vector without computing eigenvectors if we approximate the function of the eigenvalues by Chebyshev polynomials. Based on this observation, we propose a parallel PageRank algorithm for undirected graphs. The contributions of this paper are as follows: \begin{enumerate} \item[$\blacksquare$] We show that the PageRank vector of an undirected graph is a linear combination of the eigenvectors of the probability transition matrix, and the corresponding coefficients are functions of the eigenvalues; \item[$\blacksquare$] We propose a parallel PageRank algorithm (CPAA) for undirected graphs that does not require computing eigenvectors; \item[$\blacksquare$] We compare CPAA with some other PageRank algorithms by experiments, and the results demonstrate that CPAA converges faster. \end{enumerate} The remainder of this paper is organized as follows: Section 2 reviews related work; in Section 3, we propose CPAA; experiments are given in Section 4; and we conclude this paper in Section 5. \section{Related works} There have been plenty of PageRank algorithms\cite{Berkhin2005A}, and the power method is the most fundamental one. The power method iteratively computes the PageRank vector, and its convergence rate depends on the damping factor. Although it has achieved success in many applications, the power method is inefficient when dealing with large-scale graphs. Some improvements, such as parallelization\cite{migallon2018parallel}, utilizing the block structure\cite{kamvar2003exploiting}, and utilizing the DAG structure\cite{zhu2018fast}, were proposed during the past decades. The Monte Carlo method\cite{avrachenkov2007monte} is another excellent PageRank algorithm. From the perspective of stochastic processes, PageRank can be viewed as the probability of each vertex being visited by a random walk that starts at a random vertex and walks according to the probability transition matrix. The Monte Carlo method naturally supports parallelization.
Some improvements based on the Monte Carlo method can be found in \cite{sarma2013fast,2019Distributed,luo2020improved}. However, a major drawback is that the Monte Carlo method requires a large amount of memory and bandwidth. Forward Push\cite{2006Local} is the state-of-the-art personalized PageRank algorithm, and one of the improvements based on it is ITA. ITA is parallel and has low requirements on memory and bandwidth. However, ITA is inefficient on undirected graphs, even compared with the power method, since there are no dangling or unreferenced vertices. \section{CPAA} In this section, we propose a parallel PageRank algorithm specially designed for undirected graphs. Firstly, we demonstrate that the PageRank vector is a linear combination of the eigenvectors of the probability transition matrix and that the corresponding coefficients are functions of the eigenvalues. Then, we introduce the Chebyshev polynomials and a method by which the PageRank vector can be obtained without computing eigenvectors. Finally, we propose CPAA, a parallel PageRank algorithm for undirected graphs. Before proceeding, we introduce some basic notations. Let $G=(V,E)$ denote an undirected graph, where $V=\{v_{1},v_{2},\cdots,v_{n}\}$ is the set of vertices and $E=\{(v_{i},v_{j}) | v_{i},v_{j}\in V\}$ is the set of edges. $A=(a_{ij})_{n \times n}$ is the adjacency matrix of $G$ such that $a_{ij}$ is 1 if $(v_{j},v_{i}) \in E$ and 0 otherwise. Let $D=diag(d_{1},d_{2},\cdots,d_{n})$ denote the degree matrix with $d_{i}=\sum \limits_{j=1}^{n}{a_{ij}}$, and let $P=AD^{-1}$ be the probability transition matrix. Then the PageRank vector $\pi$ is given as \begin{equation} \pi=(1-c)(I-cP)^{-1}p, \end{equation} where $p$ is the personalized vector, $I$ is the identity matrix, and $c$ is the damping factor. \subsection{PageRank of Undirected graph} \newtheorem{lemma}{Lemma} \begin{lemma} \label{lemma1} $D-\frac{c}{2}A$ is a positive definite matrix. \end{lemma} \begin{proof}[Proof] Given an $n$-dimensional vector $y$, we have that \begin{equation} \begin{aligned} & y^{*}(D-\frac{c}{2}A)y \\ =&\sum \limits_{i=1}^{n} {d_{i}\overline{y}_{i}y_{i}}-\frac{c}{2}\sum \limits_{i=1}^{n} \sum \limits_{j=1}^{n}{a_{ij}\overline{y}_{i}y_{j}} \\ =&\frac{1}{2}\sum \limits_{i=1}^{n} \sum \limits_{j=1}^{n}{a_{ij}(\overline{y_{i}}y_{i}+\overline{y}_{j}y_{j}- \frac{c}{2}(\overline{y}_{i}y_{j}+y_{i}\overline{y}_{j}))}. \end{aligned} \end{equation} Since $\overline{y_{i}}y_{i}+\overline{y}_{j}y_{j}-\frac{c}{2}(\overline{y}_{i}y_{j}+y_{i}\overline{y}_{j})\geq (1-\frac{c}{2})(\vert y_{i}\vert^{2}+\vert y_{j}\vert^{2})$, each term in the sum is nonnegative and is strictly positive whenever $y_{i}$ or $y_{j}$ is nonzero, so the sum is strictly positive for any $y\neq 0$. Hence $D-\frac{c}{2}A$ is a positive definite matrix. \end{proof} \newtheorem{thm}{Theorem} \begin{thm} All the eigenvalues of $P^{T}$ are real. \end{thm} \begin{proof}[Proof] Let $B=D-\frac{c}{2}A$; then we have that \begin{equation} \begin{aligned} (BP^{T})^{T} & =(A-\frac{c}{2}AD^{-1}A)^{T} \\ & = A-\frac{c}{2}AD^{-1}A \\ & = BP^{T}. \end{aligned} \end{equation} Let $\lambda$ be an eigenvalue of $P^{T}$ and let $\chi$ be a corresponding eigenvector. Then we have \begin{equation} \begin{aligned} \lambda \chi^{*} B \chi & =\chi^{*}BP^{T}\chi \\ & =\chi^{*}(BP^{T})^{T}\chi \\ & =(P^{T}\overline{\chi})^{T}B\chi \\ & =\overline{\lambda}\overline{\chi}^{T} B \chi. \end{aligned} \end{equation} Since $B$ is a positive definite matrix, $\chi^{*}B\chi\neq 0$, and hence $\overline{\lambda}=\lambda$, namely, $\lambda$ is real. \end{proof} Since $P$ has the same eigenvalues as $P^{T}$, all the eigenvalues of $P$ are also real; moreover, $B^{\frac{1}{2}}P^{T}B^{-\frac{1}{2}}=B^{-\frac{1}{2}}(BP^{T})B^{-\frac{1}{2}}$ is symmetric, so $P^{T}$, and hence $P$, is diagonalizable, and there exist $n$ linearly independent eigenvectors of $P$.
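As a quick numerical illustration of this property, and of the direct computation in (1), consider the following Python sketch; the graph size, edge density, and uniform personalized vector are illustrative assumptions rather than settings used in the later experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Small undirected graph: a cycle (so no vertex is isolated) plus random extra edges.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
extra = np.triu(rng.random((n, n)) < 0.05, 1).astype(float)
A = np.maximum(A, extra + extra.T)

d = A.sum(axis=0)                 # vertex degrees
P = A / d                         # P = A D^{-1}: column j is divided by d_j
eigenvalues = np.linalg.eigvals(P)
print(np.max(np.abs(eigenvalues.imag)))   # ~1e-15: the eigenvalues are (numerically) real

c, p = 0.85, np.full(n, 1.0 / n)
pi = (1 - c) * np.linalg.solve(np.eye(n) - c * P, p)   # PageRank vector from (1)
\end{verbatim}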
Let $\lambda_{1},\lambda_{2},\cdots,\lambda_{n}$ denote the eigenvalues of $P$ and let $\chi_{1},\chi_{2},\cdots,\chi_{n}$ be the corresponding eigenvectors. Then $\chi_{1},\chi_{2},\cdots,\chi_{n}$ form a basis of the $n$-dimensional linear space, and the personalized vector $p$ satisfies \begin{equation} p=\sum \limits_{l=1}^{n} \alpha_{l}\chi_{l}. \end{equation} Combining this with (1), we have that \begin{equation} \begin{aligned} \pi & = (1-c)(I-cP)^{-1}p\\ &= (1-c)\sum \limits_{l=1}^{n}\alpha_{l}(I-cP)^{-1}\chi_{l}\\ & = (1-c)\sum \limits_{l=1}^{n} \alpha_{l} \sum \limits_{k=0}^{\infty}(cP)^{k}\chi_{l} \\ & =(1-c)\sum \limits_{l=1}^{n} \alpha_{l} \sum \limits_{k=0}^{\infty}(c\lambda_{l})^{k}\chi_{l} \\ & = (1-c)\sum \limits_{l=1}^{n} \alpha_{l} (1-c\lambda_{l})^{-1}\chi_{l}. \end{aligned} \end{equation} Equation (6) shows that the PageRank vector $\pi$ is a linear combination of the eigenvectors of $P$, and the corresponding coefficients are functions of the eigenvalues. However, computing eigenvectors is inefficient when $G(V,E)$ is a large-scale graph. In the next subsection, we propose a new method by which $\pi$ can be obtained without computing eigenvalues and eigenvectors. \subsection{Chebyshev polynomial approximation} The Chebyshev polynomials are a set of orthogonal polynomials with good properties. Let $\{T_{n}(x)\}$ denote the Chebyshev polynomials; then $T_{n}(x)$ satisfies \begin{enumerate} \item[$(a)$] $\vert T_{n}(x) \vert \le 1$ for $x\in[-1,1]$; \item[$(b)$] $\frac{2}{\pi}\int_{-1}^{1}\frac{T_{m}(x)T_{n}(x)}{\sqrt{1-x^{2}}}dx=\left\{ \begin{array}{ll} 2, & if \quad m=n=0\\ \delta_{mn}, & else \end{array} \right.$; \item[$(c)$] $T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x)$. \end{enumerate} Given a function $f(x) \in C(a,b)$ with $\int_{a}^{b}f^{2}(x)dx<\infty$, since the Chebyshev polynomials are the optimal uniform approximating polynomials, we have that \begin{center} $f(x)=\frac{c_{0}}{2}+\sum \limits_{k=1}^{\infty}c_{k}T_{k}(x)$, \end{center} where \begin{center} $c_{k}=\frac{4}{\pi(b-a)}\int_{a}^{b} \frac{1}{\sqrt{1-(\frac{2}{b-a}x-\frac{a+b}{b-a})^2}}f(x)T_{k}(\frac{2}{b-a}x-\frac{a+b}{b-a})dx$. \end{center} Let $f(x)=(1-cx)^{-1}$, $x \in (-1,1)$; then $\forall \epsilon >0$, $\exists M \in N^{+}$ s.t., for each $M_{1} \ge M$, we have that \begin{center} $\max \limits_{x \in (-1,1)} \vert f(x)-(\frac{c_{0}}{2}+\sum \limits_{k=1}^{M_{1}}c_{k}T_{k}(x))\vert < \epsilon$, \end{center} where $c_{k}=\frac{2}{\pi}\int_{0}^{\pi} \frac{\cos{(kt)}}{1-c\cos{t}}dt$. From (6), we have that \begin{equation} \begin{aligned} (I-cP)^{-1}p & = \sum \limits_{l=1}^{n}(1-c\lambda_{l})^{-1}\alpha_{l}\chi_{l} \\ & \approx \sum \limits_{l=1}^{n}(\frac{c_{0}}{2}+\sum \limits_{k=1}^{M}c_{k}T_{k}(\lambda_{l}))\alpha_{l}\chi_{l} \\ & = \frac{c_{0}}{2}\sum \limits_{l=1}^{n}\alpha_{l}\chi_{l} + \sum \limits_{k=1}^{M}c_{k}\sum \limits_{l=1}^{n}T_{k}(\lambda_{l})\alpha_{l}\chi_{l}. \end{aligned} \end{equation} Since $T_{k}(x)$ is a polynomial of degree $k$, writing $T_{k}(x)=\sum \limits_{i=0}^{k}a_{i}x^{i}$, we have that \begin{equation} \begin{aligned} T_{k}(P)p & = \sum \limits_{i=0}^{k}a_{i}P^{i}p= \sum \limits_{i=0}^{k}a_{i}P^{i} \sum \limits_{l=1}^{n}\alpha_{l}\chi_{l} \\ & = \sum \limits_{l=1}^{n}\alpha_{l} \sum \limits_{i=0}^{k}a_{i}P^{i}\chi_{l} = \sum \limits_{l=1}^{n}\alpha_{l} \sum \limits_{i=0}^{k}a_{i}\lambda_{l}^{i}\chi_{l} \\ & = \sum \limits_{l=1}^{n}T_{k}(\lambda_{l})\alpha_{l}\chi_{l}.
\end{aligned} \end{equation} From (7) and (8), we can have that \begin{equation} \pi=(1-c)(I-cP)^{-1}p \approx (1-c)(\frac{c_{0}}{2}p+\sum \limits_{k=1}^{M}c_{k}T_{k}(P)p). \end{equation} \subsection{CPAA} Since $T_{k+1}(P)p= 2PT_{k}(P)p-T_{k-1}(P)p$, from (9), the PageRank vector can be approximated iteratively without computing eigenvectors. The calculations on each vertex can be done in parallel within the same iteration round. However, invoking a thread for each vertex is unrealistic, so we generate $K$ calculating threads and then assign all the vertices to them. The parallel PageRank algorithm for undirected graphs, which we abbreviate as CPAA, is given as Algorithm 1. \begin{algorithm} \scriptsize \caption{ \textbf{Chebyshev Polynomial Approximating Algorithm} } \begin{algorithmic}[1] \Require{The thread number $K$, the upper bound of iteration rounds $M$ and the Chebyshev polynomial coefficients $\{c_{0},c_{1},\cdots,c_{M}\}$.} \Ensure{PageRank $\pi$.} \State{Each calculating thread maintains bool variable $RDY$, each vertex $v_{i}$ maintains $T_{i}, T_{i}^{'}, T_{i}^{''}$ and $\overline{\pi}_{i}$.} \State{Assign all vertices to the $K$ calculation threads, denote by $S_{j}$ the set of vertices belonging to thread $j$, denote by $ATO$ and $TYP$ the global bool variables, denote by $CNT$ the global integer variable.} \State{Initially, let $ATO=true$, each thread sets $RDY=false$, each vertex $v_{i}$ sets $T_{i}=0, T_{i}^{'}=1$ and $\overline{\pi}_{i}=\frac{c_{0}}{2}T_{i}^{'}$.} \For{$j=1..K$} \State{Invoke Calculation(j);} \EndFor \For {$k=1;k \le M;k++$} \State{$TYP=true$, $CNT=K$, each thread sets $RDY=true$;} \While{$CNT>0$} \EndWhile \State{$TYP=false$, $CNT=K$, each thread sets $RDY=true$;} \While{$CNT>0$} \EndWhile \EndFor \State{$ATO=false$;} \State{Calculate $\pi$ following $\pi_{i}=\frac{\overline{\pi}_{i}}{\sum\limits_{i=1}^{n}\overline{\pi}_{i}}$ while all threads terminated.} \State \Function{Calculation}{$j$} \While{$ATO$} \If{$j.RDY$} \If{$TYP$} \ForAll{$u \in S_{j}$} \State{$T_{u}^{''}=0$;} \For{$v_{i} \in N(u)$} \State{$T_{u}^{''}=T_{u}^{''}+\frac{2*T_{i}^{'}}{deg(v_{i})}$;} \EndFor \State{$T_{u}^{''}=T_{u}^{''}-T_{u}$;} \State{$\overline{\pi}_{u}=\overline{\pi}_{u}+c_{k}*T_{u}^{''}$;} \EndFor \State{$--CNT$;} \Else \ForAll{$u \in S_{j}$} \State{$T_{u}=T_{u}^{'}$;} \State{$T_{u}^{'}=T_{u}^{''}$;} \EndFor \State{$--CNT$;} \EndIf \State{$j.RDY=false$;} \EndIf \EndWhile \EndFunction \end{algorithmic} \end{algorithm} \section{Experiments} \subsection{Experimental setting} In this section, we evaluate the performance of CPAA experimentally. We first show the convergence of CPAA and then compare CPAA with the single-threaded power method (SPI) \cite{brin2012reprint,page1999pagerank} as well as the multi-threaded power method (MPI)\cite{duong2012parallel}. Let $RES=\vert\vert \overline{\pi}(k)-\overline{\pi}(k-1) \vert\vert_{2}$, where $\overline{\pi}(k)$ is the result of the $k$-th iteration, and let $ERR=\max \limits_{v_{i} \in V}{\{\frac{|\overline{\pi}_{i}-\pi_{i}|}{\pi_{i}}\}}$, where $\pi_{i}$ is the true PageRank value of $v_{i}$. The damping factor is $c=0.85$, and the result of the $210$-th iteration of the power method is taken as the true value. $R$ denotes the number of iteration rounds, and $T$ denotes the CPU time consumption. The thread numbers of CPAA and the multi-threaded power method are both 32. All three algorithms are implemented with C++ multi-threading technology. Table \ref{table1} shows the hardware and software utilized by these numerical experiments.
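Before turning to the data sets, we give a minimal serial Python sketch of the Chebyshev iteration underlying Algorithm 1, to make the recursion $T_{k+1}(P)p=2PT_{k}(P)p-T_{k-1}(P)p$ and the coefficient computation concrete. The multi-threaded scheduling of Algorithm 1 is omitted, and the quadrature resolution and the SciPy sparse format are illustrative choices rather than the settings of our C++ implementation.
\begin{verbatim}
import numpy as np
from scipy import sparse

def chebyshev_coefficients(c, M, num=100000):
    # c_k = (2/pi) * int_0^pi cos(k t) / (1 - c cos t) dt, by the midpoint rule.
    t = (np.arange(num) + 0.5) * np.pi / num
    w = 1.0 / (1.0 - c * np.cos(t))
    return [(2.0 / num) * np.sum(np.cos(k * t) * w) for k in range(M + 1)]

def cpaa_serial(A, c=0.85, M=15):
    # A: sparse symmetric adjacency matrix of an undirected graph (no isolated vertices).
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=0)).ravel()
    P = A @ sparse.diags(1.0 / deg)            # P = A D^{-1}
    p = np.full(n, 1.0 / n)                    # personalized vector
    coef = chebyshev_coefficients(c, M)
    T_prev, T_curr = p, P @ p                  # T_0(P)p and T_1(P)p
    pi_bar = 0.5 * coef[0] * T_prev + coef[1] * T_curr
    for k in range(2, M + 1):
        T_next = 2.0 * (P @ T_curr) - T_prev   # Chebyshev recursion
        pi_bar += coef[k] * T_next
        T_prev, T_curr = T_curr, T_next
    pi_bar *= (1.0 - c)                        # as in (9)
    return pi_bar / pi_bar.sum()               # final normalization of Algorithm 1
\end{verbatim}
Given any sparse symmetric adjacency matrix, the function above approximates the solution of (1) up to the truncation error of (9).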
Table \ref{table2} shows all four data sets, where $n$ is the number of vertices, $m$ is the number of edges, $nd$ is the number of dangling vertices, and $deg=\frac{m}{n}$. \begin{table}[h] \scriptsize \centering \begin{tabular}{|l|l|} \hline \textbf{CPU} & Intel(R) Core(TM) i7-10510U CPU 1.80GHz 2.30GHz \\ \hline \textbf{Memory} & 16G \\ \hline \textbf{OS} & Windows 10 64bits \\ \hline \textbf{Database} & Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 \\ \hline \textbf{C++compiler} & Visual Studio 2019 \\ \hline \end{tabular} \caption{\textbf{\footnotesize Hardware and software}} \label{table1} \end{table} \begin{table}[h] \scriptsize \centering \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Data sets} &\textbf{n} &\textbf{m} &\textbf{nd} &\textbf{deg} \\ \hline \textbf{NACA0015} &1039183 &6229636 &0 &5.99 \\ \hline \textbf{delaunay-n21} &2097152 &12582816 &0 &6.0 \\ \hline \textbf{M6} &3501776 &21003872 &0 &6.0 \\ \hline \textbf{NLR} &4163763 &24975952 &0 &6.0 \\ \hline \end{tabular} \caption{\textbf{\footnotesize Data sets}} \label{table2} \end{table} \subsection{The convergence of CPAA} Figure \ref{fig1} shows the relations between $k$, $RES$ and $T$. \begin{enumerate} \item[(1)] The blue lines show that $RES$ monotonically decreases with $k$ and exhibits an exponential relationship; \item[(2)] The red lines show that $T$ monotonically increases with $k$ and exhibits a linear relationship; \item[(3)] $RES$ decreases exponentially with $T$, namely, CPAA converges. \end{enumerate} \begin{figure}[h] \centering \includegraphics[width=8cm,height=6cm]{converge.jpg} \caption{$k$ versus $RES$ and $T$} \label{fig1} \end{figure} \subsection{The comparison between CPAA and the power method} Figure \ref{fig2} shows the relation between $T$ and $ERR$, and Table \ref{table3} shows $T$ and $R$ when $ERR < 0.001$. \begin{enumerate} \item[(1)] The blue and green lines in Figure \ref{fig2} show that MPI is faster than SPI. The multiplication between the matrix and the personalized vector is parallel in MPI, while it is serial in SPI; \item[(2)] The red and blue lines in Figure \ref{fig2} show that CPAA is faster than MPI. CPAA and MPI are both parallel, but the convergence rate of CPAA is higher; \item[(3)] Table \ref{table3} shows that CPAA takes only 60\% of the iteration rounds of the power method and is at least 4 times faster than SPI. \end{enumerate} \begin{figure}[h] \centering \includegraphics[width=8cm,height=6cm]{err.jpg} \caption{$T$ versus $ERR$} \label{fig2} \end{figure} \begin{table}[h] \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}*{\textbf{Data sets}} &\multicolumn{2}{c|}{\textbf{SPI}} &\multicolumn{2}{c|}{\textbf{MPI}} &\multicolumn{2}{c|}{\textbf{ITA}} &\multicolumn{2}{c|}{\textbf{CPAA}} \\ \cline{2-9} &$R$ &$T$ &$R$ &$T$ &$R$ &$T$ &$R$ &$T$ \\ \hline \textbf{NACA0015} &40 &7.629 &40 &5.03 &- &4.649 &15 &1.916 \\ \hline \textbf{delaunay-n21} &20 &7.376 &20 &4.015 &- &12.242 &12 &2.095 \\ \hline \textbf{M6} &20 &17.663 &20 &7.4 &- &23.022 &12 &3.667 \\ \hline \textbf{NLR} &20 &21.689 &20 &8.834 &- &27.414 &12 &4.218 \\ \hline \end{tabular} \caption{\textbf{\footnotesize The iteration rounds and time consumption when \bm{$ERR<0.001$}}} \label{table3} \end{table} \section{Conclusion and future work} In this paper, we propose CPAA, a parallel PageRank algorithm specially designed for undirected graphs. The experimental results show that CPAA takes fewer iteration rounds and less CPU time.
CPAA is suitable for implementation in a distributed environment, especially in the MapReduce framework. Our ongoing work includes approximating the function of the eigenvalues by other orthogonal polynomials and extending CPAA to directed graphs. \bibliographystyle{IEEEtranIES}
\section{Introduction} Hyperspectral remote sensing technology is a method that organically combines the spectrum of ground objects, determined by their unique material composition, with the spatial image reflecting the shape, texture and layout of ground objects, to realize the accurate detection, recognition and attribute analysis of ground objects. The resultant hyperspectral images (HSIs) not only contain abundant spectral information reflecting the unique physical properties of the ground features but also provide rich spatial information about them. Therefore, HSIs can be utilized to solve problems that cannot be solved well with multispectral or natural images, such as the precise identification of each pixel. Since different materials exhibit specific spectral characteristics, HSI classification can be more accurate. Due to these advantages, hyperspectral remote sensing has been widely used in many applications, such as precision agriculture~\cite{teke2013agriculture}, crop monitoring~\cite{strachan2002environmental}, and land resources~\cite{bannari2006agricultural, chabrillat2014soilerosion}. In environmental protection, HSI has been employed to detect gas~\cite{gas_dectection} and oil spills~\cite{salem2001hyperspectral}, and to monitor water quality~\cite{awad_2014, jay_guillaume_2014} and vegetation coverage~\cite{0Brightness, 2020Tree}, to better protect our living environment. In the medical field, HSI has been utilized for skin testing to examine the health of human skin~\cite{skin_detection}. As a general pattern recognition problem, HSI classification has received a substantial amount of attention, and a large number of research results have been achieved in the past several decades. According to the previous work~\cite{2019Deep_overview_domestic}, all existing research can be divided into spectral-feature methods, spatial-feature methods, and spectral-spatial-feature methods. The spectral feature, also called the spectral vector or spectral curve, is the primitive characteristic of the hyperspectral image. The spatial feature~\cite{2009Incorporation} refers to the relationship between the central pixel and its context, which can greatly increase the robustness of the model. In the early period of research on HSI classification, researchers mainly focused on pure spectral feature-based methods, which simply apply classifiers, such as support vector machines (SVMs)~\cite{2004A_Melgani}, neural networks~\cite{zhong2011immune_network}, and logistic regression~\cite{li2012logistic}, to pixel vectors to obtain classification results without any feature extraction. However, raw spectra contain much redundant information, and the relation between spectra and ground objects is non-linear, which increases the difficulty of classification. Therefore, most later methods pay more attention to dimension reduction and feature extraction to learn more discriminative features. Among the approaches based on dimension reduction, principal component analysis~\cite{licciardi2011linear}, independent component analysis~\cite{2009C_Villa}, linear discriminant analysis~\cite{2014C_Zhang}, and low-rank methods~\cite{2016A_He_Gabor} are widely used. Nevertheless, the performance of those models is still unsatisfactory, because of a common phenomenon in hyperspectral images: different surface objects may have the same spectral characteristics, while the same surface objects may have different spectral characteristics.
The variability of the spectra of ground objects is caused by illumination, environmental, atmospheric, and temporal conditions, which enlarges the probability of misclassification. Thus, methods that are based only on spectral information and ignore spatial information yield unsatisfactory classification performance. The spatial characteristics of ground objects supply abundant information about their shape, context, and layout, and neighboring pixels belong to the same class with high probability, which is useful for improving the classification accuracy and robustness of methods. Consequently, a large number of feature extraction methods that integrate the spatial structural and texture information with the spectral features have been developed, including morphological~\cite{2010A_DallaMura,2015A_Falco_TGRS,2011A_Mura}, filtering~\cite{2015_Jia_p1118_1129,2013A_Qian}, and coding~\cite{li2015local} approaches, etc. Since deep learning-based methods are the main concern of this paper, the readers are referred to~\cite{2015_Ghamisi_p2335_2353} for more details on these conventional techniques. In the past decade, deep learning technology has developed rapidly and received widespread attention. Compared with traditional machine learning models, deep learning does not need artificially designed feature patterns and can automatically learn patterns from data. Therefore, it has been successfully applied in the fields of natural language processing, speech recognition, semantic segmentation, autonomous driving, and object detection, and has achieved excellent performance. Recently, it has also been introduced into the field of HSI classification, and researchers have proposed a number of new deep learning-based HSI classification approaches, as shown in the left part of Figure \ref{frame}. Currently, all methods based on the joint spectral-spatial feature can be divided into two categories, two-stream and single-stream, according to whether they simultaneously extract the joint spectral-spatial feature. The two-stream architecture usually includes two branches: a spectral branch and a spatial branch. The former extracts the spectral feature of the pixel, and the latter captures the spatial relation of the central pixel with its neighboring pixels. The existing methods have covered all common deep learning modules, such as fully connected layers, convolutional layers, and recurrent units. In the general deep learning framework, a large number of training samples should be provided to well train the model and tune the numerous parameters. However, in practice, manual labeling is often very time-consuming and expensive due to the need for expert knowledge, and thus, a sufficient training set is often unavailable. As shown in Figure \ref{sample-distribution} (here the widely used Kennedy Space Center (KSC) hyperspectral image is utilized for illustration), the left figure randomly selects 10 samples per class and contains 130 labeled samples in total, which are very scattered and can hardly be seen. Alternatively, the right figure in Figure \ref{sample-distribution} displays 50\% of the labeled samples, which is more suitable for deep learning-based methods. Hence, there is a vast gap between the training samples required by deep learning models and the labeled samples that can be collected in practice. Many learning paradigms have been proposed for solving the problem of few labeled samples, as shown in the right part of Figure \ref{frame}; we will discuss them in detail in Section \ref{learning-paradigm}.
And they can be integrated with any model architecture. Some pioneering works such as~\cite{yu2017convolutional} started the topic by training a deep model with good generalization only using few labeled samples. However, there are still many challenges for this topic. \begin{figure}[hbpt] \centering \includegraphics[width=\textwidth]{figure/sample_distribution.pdf} \caption{Illustration of the massive gap between practical situations (i.e., few labeled samples) and a large number of labeled samples of deep learning-based methods. Here, the widely used Kennedy Space Center (KSC) hyperspectral image is employed, which contains 13 land covers and 5211 labeled samples (detailed information can be found in the experimental section). Generally, sufficient samples are required to well train a deep learning model (as illustrated in the right figure), which is hard to be achieved in practice due to the difficulty of manually labeling (as shown in the left figure).} \label{sample-distribution} \end{figure} In this paper, we hope to provide a comprehensive review of the state-of-the-art deep learning-based methods for HSI classification with few labeled samples. First, instead of separating the various methods according to feature fusion manner, such as spectral-based, spatial-based, and joint spectral-spatial-based methods, the research progress of methods related to few training samples is categorized according to the learning paradigm, including transfer learning, active learning, and few-shot learning. Second, a number of experiments with various state-of-the-art approaches have been carried out, and the results are summarized to reveal the potential research directions. Further, it should be noted that different from the previous review papers~\cite{2019Deep_overview_domestic, 2019Deep_overview_foreign}, this paper mainly focuses on the few labeled sample issue, which is considered as the most challenging problem in the HSI classification scenario. For reproducibility, the source codes of the methods conducted in the paper can be found at the web site for the paper\footnote{\url{https://github.com/ShuGuoJ/HSI-Classification.git}}. The remainder of this paper is organized as follows. Section \ref{deep-learning-model} introduces the deep models that are popular in recent years. In Section \ref{learning-paradigm}, we divide the previous works into four mainstream learning paradigms, including transfer learning, active learning, and few-shot learning. In Section \ref{experiments}, we performed many experiments, and a number of representative deep learning-based classification methods are compared on several real hyperspectral image data sets. Finally, conclusions and suggestions are provided in Section \ref{conclutions}. \begin{figure}[hbpt] \centering \includegraphics[scale=0.2]{figure/frame/model_frame_solid.pdf} \caption{The category of deep learning-based methods for hyperspectral image classification. The left is from the model architecture point of view, while the right is from the learning paradigm point of view. It is worth noting that the both kinds of methods can be combined arbitrarily.} \label{frame} \end{figure} \section{Deep learning models for HSI classification} \label{deep-learning-model} In this section, three classical deep learning models, including the autoencoder, convolutional neural network (CNN), and recurrent neural network (RNN), for HSI classification are respectively described, and the relevant references are reviewed. 
\subsection{Autoencoder for HSI classification} An autoencoder~\cite{hinton_2006_reducing} is a classic neural network, which consists of two parts: an encoder and a decoder. The encoder $p_{encoder}(\bm{h} \vert \bm{x})$ maps the input $\bm{x}$ to a hidden representation $\bm{h}$, and then, the decoder $p_{decoder}(\hat{\bm{x}} \vert \bm{h})$ reconstructs $\hat{\bm{x}}$ from $\bm{h}$. It aims to make the input and output as similar as possible, i.e., to minimize the reconstruction loss \begin{equation} \mathcal{L}(\bm{x},\hat{\bm{x}})=\vert \bm{x}-\hat{\bm{x}} \vert, \end{equation} where $\mathcal{L}$ is the similarity measure. If the dimension of $\bm{h}$ is smaller than that of $\bm{x}$, the autoencoder is undercomplete and can be used to reduce the data dimension. Evidently, if there is no constraint on $\bm{h}$, the autoencoder may simply learn the identity function. In other words, the network does not learn anything. To avoid such a situation, the usual way is to add a regularization term $\Omega(\bm{h})$ to the loss. In~\cite{2011An_sparse_autoencoder, zeng2018facial}, the regularization term, which yields the so-called sparse autoencoder, is $\Omega(\bm{h})=\lambda \sum_ih_i$, which makes most of the hidden activations very close to zero. Therefore, it is equipped with a certain degree of noise immunity and can produce a sparse representation of the input. Another way to avoid the identity mapping is to add some noise to $\bm{x}$ to obtain the corrupted input $\bm{x}_{noise}$ and then force the decoder to reconstruct $\bm{x}$. In this situation, it becomes the denoising autoencoder~\cite{2008Extracting_denoise_autoencoder}, which can remove the additional noise from $\bm{x}_{noise}$ and produce a powerful hidden representation of the input. In general, the autoencoder plays the role of a feature extractor~\cite{windrim2019unsupervised} that learns the internal patterns of data without labeled samples. Figure \ref{auto-encoder} illustrates the basic architecture of the autoencoder model. \begin{figure}[hbpt] \centering \includegraphics[width=0.75\textwidth]{figure/architecture/auto-encoder.pdf} \caption{The architecture of the autoencoder. The solid line represents training, while the dashed line represents inference.} \label{auto-encoder} \end{figure} Chen \emph{et al.}~\cite{chen2014deep} used an autoencoder for the first time for feature extraction and classification of HSIs. First, in the pretraining stage, the spectral vector of each pixel is directly input into the encoder module, and then, the decoder is used to reconstruct it so that the encoder gains the ability to extract spectral features. Alternatively, to obtain the spatial features, principal component analysis (PCA) is utilized to reduce the dimensionality of the hyperspectral image, and then, the image patch is flattened into a vector. Another autoencoder is employed to learn the spatial features. Finally, the spatial-spectral joint information obtained above is fused and classified. Subsequently, a large number of hyperspectral image classification methods~\cite{abdi2017deep_sparse_autoencoder, xing2016stacked} based on autoencoders appeared. Most of these methods adopt the same training strategy as~\cite{chen2014deep}, which is divided into two modules: fully training the encoder in an unsupervised manner and fine-tuning the classifier in a supervised manner.
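As a concrete illustration of this two-module strategy (unsupervised pretraining of the encoder on pixel spectra, followed by supervised fine-tuning of a small classifier with the encoder frozen), a minimal PyTorch-style sketch is given below. The layer sizes, band number, and class number are hypothetical and are not taken from any of the cited works.
\begin{verbatim}
import torch
import torch.nn as nn

bands, hidden, classes = 200, 64, 13                 # hypothetical sizes

encoder = nn.Sequential(nn.Linear(bands, hidden), nn.ReLU())
decoder = nn.Linear(hidden, bands)
classifier = nn.Linear(hidden, classes)

def pretrain(spectra, epochs=50):
    # Stage 1: unsupervised reconstruction on (possibly unlabeled) pixel spectra.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(decoder(encoder(spectra)), spectra)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune(spectra, labels, epochs=50):
    # Stage 2: supervised fine-tuning of the classifier with the encoder frozen.
    for param in encoder.parameters():
        param.requires_grad_(False)
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(classifier(encoder(spectra)), labels)
        opt.zero_grad(); loss.backward(); opt.step()

# pretrain(torch.randn(5000, bands))                                    # many unlabeled spectra
# finetune(torch.randn(130, bands), torch.randint(0, classes, (130,)))  # few labeled spectra
\end{verbatim}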
Each of these methods attempts different types of encoders or preprocessing methods to adapt to HSI classification under the condition of small samples. For example, Xing \emph{et al.}~\cite{xing2016stacked} stack multiple denoising autoencoders to form a feature extractor, which has a stronger anti-noise ability to extract more robust representations. Given that the same ground objects may have different spectra while different ground objects may exhibit similar spectra, spectral-based classification methods often fail to achieve satisfactory performance, and spatial structural information of objects provides an effective supplement. To gain a better spatial description of an object, some autoencoder models combined with convolutional neural networks (CNNs) have been developed~\cite{yue2016spatial_pyramid_pooling, hao2017two_stream}. Concretely, the autoencoder module is able to extract spectral features on large unlabeled samples, while the CNN is proven to be able to extract spatial features well. After fusion, the spatial-spectral features can be achieved. Further, to reduce the number of trainable parameters, some researchers use the lightweight models, such as SVMs~\cite{sun2017encoding, mei2019_3d_convolutional_autoencoder}, random forests~\cite{zhao2017autoencoder_random_forest, wan2017multifractal_spectrum_features} or logistic regression~\cite{chen2014deep,wang2016multi_label}, to serve as the classifier. Due to the three-dimensional (3D) pattern of hyperspectral images, it is desirable to simultaneously investigate the spectral and spatial information such that the joint spatial-spectral correlation can be better examined. Some three-dimensional operators and methods have been proposed. In the preprocessing stage, Li \emph{et al.}~\cite{li2015deep_gabor} utilized the 3D Gabor operator to fuse spatial information and spectral information to obtain spatial-spectral joint features, which were then fed into the autoencoder to obtain more abstract features. Mei \emph{et al.}~\cite{mei2019_3d_convolutional_autoencoder} used a 3D convolutional operator to construct an autoencoder to extract spatial-spectral features directly. In addition, image segmentation has been introduced to characterize the region structure of objects to avoid misclassification of pixels at the boundary~\cite{mughees2016efficient}. Therefore, Liu \emph{et al.}~\cite{liu2015learnt_features} utilized superpixel segmentation technology as a postprocessing method to perform boundary regularization on the classification map. \subsection{Convolutional Neural Networks (CNNs) for HSI classification} In theory, the CNN uses a group of parameters that refer to a kernel function or kernel to scan the image and produce a specified feature. It has three main characteristics that make it very powerful for feature representation, and thus, the CNN has been successfully applied in many research fields. The first one is the local connection that greatly decreases the number of trainable parameters and makes itself suitable for processing large images. This is the most obvious difference from the fully connected network, which has a full connection between two neighboring neural layers and is unfriendly for large spatial images. To further reduce the number of parameters, the same convolutional kernel shares the same parameters, which is the second characteristic of CNNs. In contrast, in the traditional neural network, the parameters of the output are independent from each other. 
However, the CNN applies the same parameters for all of the output to cut back the number of parameters, leading to the third characteristic: shift invariance. It means that even if the feature of an object has shifted from one position to another, the CNN model still has the capacity to capture it regardless of where it appears. Specifically, a common convolutional layer consists of three traditional components: linear mapping, the activation function and the pooling function. Similar to other modern neural network architectures, activation functions are used to bring a nonlinear mapping feature into the network. Generally, the rectified linear unit (ReLU) is the prior choice. Pooling makes use of the statistical characteristic of the local region to represent the output of a specified position. Taking the max pooling step as an example, it employs the max value to replace the region of input. Clearly, the pooling operation is robust to small changes and noise interfere, which could be smoothed out by the pooling operation in the output, and thus, more abstract features can be reserved. In the early works of applying CNNs for HSI classification, two-dimensional convolution was the most widely used method, which is mainly employed to extract spatial texture information~\cite{lee2017contextual_cnn, yu2017convolutional, leng2016cube_cnn_svm}, but the redundant bands greatly enlarge the size of the convolutional kernel, especially the channel dimensionality. Later, a combination of one-dimensional convolution and two-dimensional convolution appeared~\cite{zhang2017_dual_channel_convolutional} to solve the above problem. Concretely, one-dimensional and two-dimensional convolutions are responsible for extracting spectral and spatial features, respectively. The two types of features are then fused before being input to the classifier. For the small training sample problem, due to insufficient labeled samples, it is difficult for CNNs to learn effective features. For this reason, some researchers usually introduced traditional machine learning methods, such as attribute profiles~\cite{aptoula2016_cnn_attribute_profiles}, GLCM~\cite{zhao2019_cnn_textural_feature}, hash learning~\cite{yu2019cnn_embedding_semantic}, and Markov Random fields~\cite{qing2018cnn_markov}, to introduce prior information to the convolutional network and improve the performance of the network. Similar to the trend of autoencoder-based classification methods, three-dimensional CNN models have also been applied to HSI classification in recent years and have shown better feature fusion capabilities~\cite{zhong20173d_residual, liu2018_3d_convolution}. However, due to the large number of parameters, three-dimensional convolution is not suitable for solving small-sample classification problems under supervised learning. To reduce the number of parameters of 3D convolution, Fang \emph{et al.}~\cite{fang2020lightweight_deep_clustering} designed a 3D separable convolution. In contrast, Mou \emph{et al.}~\cite{mou2017residual_conv_deconv, sellami2019_3D_network_band_selection} introduced an autoencoder scheme into the three-dimensional convolution module to solve this problem. By a combination with the classic autoencoder training method, the three-dimensional convolution autoencoder can be trained in an unsupervised learning manner, and then, the decoder is replaced with a classifier, while the parameters of the encoder are frozen. Finally, a small classifier is trained by supervised learning. 
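To illustrate how a 3D convolution jointly covers the spectral and spatial dimensions of an HSI patch, a minimal PyTorch-style sketch is given below; the patch size, channel widths, and class number are hypothetical choices for illustration only.
\begin{verbatim}
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    # Input: patches of shape (batch, 1, bands, height, width), e.g. (N, 1, 200, 9, 9).
    def __init__(self, bands=200, classes=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(2, 1, 1)),     # pool along the spectral axis
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                 # global pooling to a 16-d vector
        )
        self.classifier = nn.Linear(16, classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.classifier(z)

model = Small3DCNN()
logits = model(torch.randn(4, 1, 200, 9, 9))         # 4 random patches -> (4, 13) logits
\end{verbatim}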
Moreover, due to the success of ResNet~\cite{he2016deep_residual}, scholars have studied the HSI classification problem based on convolutional residuals~\cite{mou2017residual_conv_deconv, sellami2019_3D_network_band_selection, paoletti2018_pyramidal_residual,ma2018_deconvolution_skip_architecture}. These methods try to use skip connections to enable the network to learn complex features with a small number of labeled samples. Similarly, CNNs with dense connections have also been introduced into this field~\cite{paoletti2018dense_convolutional, wang2018dense_convolution}. In addition, the attention mechanism is another hot topic for fully mining sample features. Concretely, Haut and Xiong \emph{et al.}~\cite{haut2019visual_attention_driven, xiong2018attention_inception} incorporated the attention mechanism into CNNs for HSI classification. Although the above models can work well on HSI, they cannot overcome the disadvantage of the low spatial resolution of HSIs, which may cause mixed pixels. To make up for this shortcoming, multimodal CNN models have been proposed. These methods~\cite{feng2019multisource_convolutional, xu2017multisource_convolutional, li2018_three_stream_convolutional} combine HSIs and LiDAR data together to increase the discriminability of sample features. Moreover, to achieve good performance under the small-sample scenario, Yu \emph{et al.}~\cite{yu2017convolutional} enlarged the training set through data augmentation by implementing rotation and flipping. On the one hand, this method increases the number of samples and improves their diversity. On the other hand, it enhances the model's rotation invariance, which is important in some fields such as remote sensing. Subsequently, Li \emph{et al.}~\cite{li2018data_augmentation, wei2018_cube_pair_network} designed a data augmentation scheme for HSI classification. They combined the samples in pairs so that the model no longer learns the characteristics of the samples themselves but learns the differences between the samples. Different combinations make the scale of the training set larger, which is more conducive to model training. \subsection{Recurrent neural network (RNN) for HSI classification} Compared with other forms of neural networks, recurrent neural networks (RNNs)~\cite{hochreiter1997long} have memory capabilities and can record the context information of sequential data. Because of this memory characteristic, recurrent neural networks are widely used in tasks such as speech recognition and machine translation. More precisely, the input of a recurrent neural network is usually a sequence of vectors. At each time step $t$, the network receives an element $\bm{x}_t$ in a sequence and the state $\bm{h}_{t-1}$ of the previous time step, and produces an output $\bm{y}_t$ and a state $\bm{h}_t$ representing the context information at the current moment. This process can be formulated as: \begin{equation} \bm{h}_t= f(\mathbf{W}_{hh}\bm{h}_{t-1}+\mathbf{W}_{xh}\bm{x}_t+\mathbf{b}) \end{equation} where $\mathbf{W}_{xh}$ represents the weight matrix from the input layer to the hidden layer, $\mathbf{W}_{hh}$ denotes the state transition weight in the hidden layer, and $\mathbf{b}$ is the bias. It can be seen that the current state of the recurrent neural network is controlled by both the state of the previous time step and the current input. This mechanism allows the recurrent neural network to capture the contextual semantic information implicitly between the input vectors.
In the machine translation task, for example, this mechanism enables the network to understand the semantic relationships between words in a sentence. However, the classic RNN is prone to gradient explosion or gradient vanishing during training: when the input sequence is too long, the chain of derivatives becomes too long, driving the gradient toward infinity or zero. Therefore, the classic RNN model is replaced by a long short-term memory (LSTM) network~\cite{hochreiter1997long} or a gated recurrent unit (GRU)~\cite{cho2014GRU} in the HSI classification task. Both LSTM and GRU use gating to filter the input and the previous state so that the network can forget unnecessary information and retain the most valuable context. The LSTM maintains an internal memory state and has three gates, the input gate $\bm{i}_t$, the forget gate $\bm{f}_t$ and the output gate $\bm{o}_t$, which are formulated as: \begin{equation} \bm{i}_t = \sigma(\mathbf{W_{i}} \cdot [\bm{x}_t, \bm{h}_{t-1}]) \end{equation} \begin{equation} \bm{f}_t = \sigma(\mathbf{W_{f}} \cdot [\bm{x}_t, \bm{h}_{t-1}]) \end{equation} \begin{equation} \bm{o}_t = \sigma(\mathbf{W_{o}} \cdot [\bm{x}_t, \bm{h}_{t-1}]) \end{equation} It can be seen that the three gates are generated from the current input and the previous state. First, the current input and the previous state are concatenated and mapped to a new input $\bm{g}_t$ according to the following formula: \begin{equation} \bm{g}_t = \tanh(\mathbf{W_{g}} \cdot [\bm{x}_t, \bm{h}_{t-1}]) \end{equation} Subsequently, the input gate, the forget gate, the new input $\bm{g}_t$ and the internal memory state $\hat{\bm{h}}_{t-1}$ update the internal memory state together. In this process, the LSTM discards invalid information and adds new semantic information. \begin{equation} \hat{\bm{h}}_t = \bm{f}_t \odot \hat{\bm{h}}_{t-1} + \bm{i}_t \odot \bm{g}_t \end{equation} Finally, the new internal memory state is filtered by the output gate to form the output of the current time step: \begin{equation} \bm{h}_t = \bm{o}_t \odot \tanh(\hat{\bm{h}}_t) \end{equation} Concerning HSI processing, the spectrum of each pixel is a high-dimensional vector and can be regarded as sequential data, and many works have used LSTMs for HSI classification. For instance, Mou \emph{et al.}~\cite{mou2017deep_recurrent_hyperspectral} proposed an LSTM-based HSI classification method for the first time; their work focused only on spectral information, inputting each band of a pixel vector into the LSTM step by step. To improve the performance of the model, spatial information was considered in subsequent research. For example, Liu \emph{et al.} fully considered the spatial neighborhood of the sample and used a multilayer LSTM to extract spatial-spectral features~\cite{liu2018spectral_spatia_recurrent}. Specifically, at each time step, the sampling points of the neighborhood are sequentially input into the network to deeply mine the context information of the spatial neighborhood. In~\cite{zhou2019hyperspectral_ss_LSTMs}, Zhou \emph{et al.} used two LSTMs to extract spectral features and spatial features. In particular, for the extraction of spatial features, PCA is first used to extract principal components from the rectangular spatial neighborhood of the sample; the first principal component is then divided into several lines to form a set of sequence data, which is gradually input into the network.
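For clarity, the gate equations above can be collected into a single LSTM step. The following NumPy sketch mirrors them one-to-one (with $c$ playing the role of the internal memory state $\hat{\bm{h}}$); the dimensions and random weights are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of one LSTM step, following the gate equations in the text.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_i, W_f, W_o, W_g):
    """One time step; c_prev plays the role of the internal memory state."""
    z = np.concatenate([x_t, h_prev])        # [x_t, h_{t-1}]
    i_t = sigmoid(W_i @ z)                   # input gate
    f_t = sigmoid(W_f @ z)                   # forget gate
    o_t = sigmoid(W_o @ z)                   # output gate
    g_t = np.tanh(W_g @ z)                   # candidate input
    c_t = f_t * c_prev + i_t * g_t           # update the internal memory state
    h_t = o_t * np.tanh(c_t)                 # filtered output
    return h_t, c_t

rng = np.random.default_rng(0)
input_dim, hidden_dim, T = 1, 32, 103        # e.g., one band per step (assumed sizes)
W_i, W_f, W_o, W_g = (rng.standard_normal((hidden_dim, input_dim + hidden_dim)) * 0.1
                      for _ in range(4))
h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
spectrum = rng.standard_normal((T, input_dim))   # a toy pixel spectrum as a sequence
for x_t in spectrum:
    h, c = lstm_step(x_t, h, c, W_i, W_f, W_o, W_g)
# h can now be passed to a softmax classifier to predict the pixel label.
\end{verbatim}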
In contrast to these scanning orders, Ma \emph{et al.} and Zhang \emph{et al.}~\cite{ma2019hyperspectral_measurements_recurrent, zhang2018spatial_sequential_recurrent} measure the similarity between each sample point in the spatial neighborhood and the center point. The sample points in the neighborhood are reordered according to this similarity and then input into the network step by step. This approach allows the network to focus on learning sample points that are highly similar to the center point, and the memory of the internal hidden state can thus be enhanced. Pan \emph{et al.}~\cite{pan2020spectral_spatial_GRU} proposed an effective tiny model for spectral-spatial classification of HSIs based on a single gated recurrent unit (GRU). In this work, the rectangular spatial neighborhood is flattened into a vector, which is used to initialize the hidden vector $\bm{h}_0$ of the GRU, and the center pixel vector is input into the network to learn features. In addition, Wu and Prasad argue that it is difficult to dig out the internal features of a sample by directly inputting a single raw spectral vector into the RNN~\cite{wu2017pseudo_labels_deep_learning, wu2017convolutional_recurrent}. The authors use a one-dimensional convolution operator to extract multiple feature vectors from the spectral vector, which form a feature sequence that is then input to the RNN. Finally, a fully connected layer and the softmax function are adopted to obtain the classification result. It can be seen that using only recurrent neural networks or one-dimensional convolution to extract the joint spatial-spectral features is actually not efficient, because it causes the loss of spatial structure information. Therefore, some researchers combine two-dimensional/three-dimensional CNNs with an RNN and use convolution operators to extract joint spatial-spectral features. For example, Hao \emph{et al.}~\cite{hao2020geometry_aware_recurrent} utilized U-Net to extract features and input them into an LSTM or GRU so that the contextual information between features could be explored. Moreover, Shi \emph{et al.}~\cite{shi2018hierarchical_recurrent} introduced the concept of the directional sequence to fully extract the spatial structure information of HSIs. First, the rectangular area around the sampling point is divided into nine overlapping patches. Second, each patch is mapped to a set of feature vectors through a three-dimensional convolutional network, and the relative positions of the patches generate 8 direction combinations (for example, top, middle, bottom, left, center, and right) to form directional sequences. Finally, the sequences are input into the LSTM or GRU to obtain the classification result. In this way, the spatial distribution and structural characteristics of the features can be explored. \section{Deep learning paradigms for HSI classification with few labeled samples} \label{learning-paradigm} Although different HSI classification methods have different specific designs, they all follow some learning paradigms. In this section, we mainly introduce several learning paradigms that are applied to HSI classification with few labeled training samples. These learning paradigms are based on specific learning theories, and we hope to provide a general guide for researchers to design algorithms. \subsection{Deep Transfer Learning for HSI classification} Transfer learning~\cite{pan_yang_2010} is an effective method to deal with the small-sample problem.
Transfer learning tries to transfer knowledge learned from one domain to another. There are two data sets (domains): the source domain, which contains abundant labeled samples, and the target domain, which contains only a few labeled samples. To facilitate the subsequent description, we define the source domain as $\mathbf{D}_s$, the target domain as $\mathbf{D}_t$, and their label spaces as $\mathbf{Y}_s$ and $\mathbf{Y}_t$, respectively. Usually, the data distributions of the source domain and the target domain are inconsistent: $P(\bm{X}_s) \neq P(\bm{X}_t)$. The purpose of transfer learning is therefore to use the knowledge learned from $\mathbf{D}_s$ to identify the labels of samples in $\mathbf{D}_t$. Fine-tuning is a general method in transfer learning that trains the model on $\mathbf{D}_s$ and then adjusts it on $\mathbf{D}_t$. Its original motivation is to reduce the number of target-domain samples needed during training: deep learning models generally contain a vast number of parameters, so training them only on the target domain $\mathbf{D}_t$ easily leads to overfitting and poor performance in practice. Fine-tuning, in contrast, first brings the model parameters to a good (if still suboptimal) state, after which a small number of target-domain training samples is enough to tune the model to the optimal state. It involves two steps. First, the model is fully trained on the source domain $\mathbf{D}_s$ with abundant labeled samples so that its parameters arrive at a good state. Then, the model, except for some task-related modules, is transferred to the target domain $\mathbf{D}_t$ and slightly tuned on $\mathbf{D}_t$ so that it fits the data distribution of the target domain. \begin{figure}[hbpt] \centering \includegraphics[width=0.5\textwidth]{figure/architecture/transfer-learning.pdf} \caption{Flowchart of the fine-tuning method. The solid line represents pretraining, and the dashed line represents fine-tuning. $f_\omega$ is a learning function.} \label{transfer-learning} \end{figure} Because the fine-tuning method is relatively simple, it is widely used in transfer learning for hyperspectral image classification. To our knowledge, Yang \emph{et al.}~\cite{yang2016two_channel_transfer} were the first to combine deep learning with transfer learning to classify hyperspectral images. Their model consists of two convolutional neural networks, which are used to extract spectral features and spatial features, respectively. The joint spectral-spatial feature is then input into a fully connected layer to obtain the final result. Following the fine-tuning procedure, the model is first fully trained on the hyperspectral image of the source domain; next, the fully connected layer is replaced while the parameters of the convolutional networks are retained; finally, the transferred model is trained on the target hyperspectral image to adapt to the new data distribution. Later transfer learning models based on fine-tuning basically follow this architecture~\cite{yang2017_deep_joint_transferring,lin2019deep_transfer_information_measure,zhang2019transfer_lightweight_3DCNN,jiang2019transfer_3Dseparable_ResNet}. It is also worth noting that Deng \emph{et al.}~\cite{deng2018active_transfer} combined transfer learning with active learning to classify HSIs.
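A minimal sketch of this two-step procedure is given below in PyTorch. The backbone, the numbers of source/target classes, the toy data loaders, and the choice to freeze the backbone are all illustrative assumptions rather than components of any cited model.
\begin{verbatim}
# Minimal sketch of fine-tuning: pretrain on the source domain, replace the task head,
# then slightly tune on the few labeled target-domain samples.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def toy_loader(n, num_classes):
    x = torch.randn(n, 103)                       # 103-band spectra (toy data, assumed size)
    y = torch.randint(0, num_classes, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            loss = loss_fn(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()

source_loader = toy_loader(2000, 9)    # abundant labeled source samples (assumed)
target_loader = toy_loader(90, 16)     # only a few labeled target samples (assumed)

# Step 1: fully train the feature extractor and source head on the source domain.
backbone = nn.Sequential(nn.Linear(103, 128), nn.ReLU())   # placeholder extractor
source_model = nn.Sequential(backbone, nn.Linear(128, 9))
train(source_model, source_loader, epochs=20, lr=1e-3)

# Step 2: transfer the backbone, replace the task-related head, and tune on the target.
for p in backbone.parameters():
    p.requires_grad = False            # freeze (alternatively, use a small learning rate)
target_model = nn.Sequential(backbone, nn.Linear(128, 16))
train(target_model, target_loader, epochs=20, lr=1e-4)
\end{verbatim}
Only the new head (and, if left unfrozen, the backbone at a small learning rate) is updated in the second step, which matches the motivation described above.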
Data distribution adaptation is another commonly used transfer learning method. Its basic idea is that, in the original feature space, the data probability distributions of the source domain and the target domain are usually different, but they can be mapped into a common feature space in which their distributions become similar. In 2014, Ghifary \emph{et al.}~\cite{ghifary2014deep_domain_adaptive} first proposed a shallow neural network-based domain adaptation model called DaNN. The innovation of this work is that a maximum mean discrepancy (MMD) adaptation layer is added to calculate the distance between the source domain and the target domain, and this distance is merged into the loss function to reduce the difference between the two data distributions. Subsequently, Tzeng \emph{et al.}~\cite{tzeng2014deep_domain_confusion} extended this work with a deeper network and proposed deep domain confusion to solve the adaptation problem of deep networks. \begin{figure}[hbpt] \centering \includegraphics[width=0.8\textwidth]{figure/architecture/DANN.pdf} \caption{Flowchart of DANN.} \label{DANN} \end{figure} Wang \emph{et al.}~\cite{Wang2019deep_domain_hyperspectral} introduced the deep domain adaptation model to the field of hyperspectral image classification for the first time. In~\cite{Wang2019deep_domain_hyperspectral}, two hyperspectral images from different scenes are mapped to two low-dimensional subspaces by a deep neural network, in which the samples are represented as manifolds. MMD is used to measure the distance between the two low-dimensional subspaces and is added to the loss function to make the two subspaces highly similar. In addition, the authors also add the sum of the distances between samples and their neighbors to the loss function to ensure that the low-dimensional manifold is discriminative. Motivated by the excellent performance of the generative adversarial network (GAN), Ganin \emph{et al.}~\cite{ganin2016domain_adversarial_training} first introduced it into transfer learning. Their network is named DANN (domain-adversarial neural network), which is different from the DaNN proposed by Ghifary \emph{et al.}~\cite{ghifary2014deep_domain_adaptive}. The generator $\mathbf{G}_f$ and the discriminator $\mathbf{G}_d$ compete with each other until they converge. In transfer learning, the data in one of the domains (usually the target domain) are regarded as the generated samples. The generator aims to learn the characteristics of the target-domain samples so that the discriminator cannot distinguish which domain a sample comes from, thereby achieving domain adaptation; $\mathbf{G}_f$ therefore acts as the feature extractor here. Elshamli \emph{et al.}~\cite{elshamli2017domain_DANN} first introduced the concept of DANN to the task of hyperspectral image classification. Compared with a general GAN, their model has two discriminators: one is the class discriminator predicting the class labels of samples, and the other is the domain discriminator predicting the source of the samples. Different from two-stage methods, DANN is an end-to-end model that can perform representation learning and classification simultaneously; moreover, it is easy to train. Further, it outperforms two-stage frameworks such as the denoising autoencoder and traditional approaches such as PCA in hyperspectral image classification.
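As an illustration of how the MMD penalty mentioned above can enter the training objective, the following is a minimal PyTorch sketch of a (biased) RBF-kernel MMD between a batch of source features and a batch of target features; the kernel bandwidth and the weight of the penalty in the total loss are assumptions.
\begin{verbatim}
# Minimal sketch: RBF-kernel maximum mean discrepancy (MMD) between source and target
# feature batches, to be added to the classification loss during training.
import torch

def rbf_mmd(source_feats, target_feats, bandwidth=1.0):
    """Biased MMD^2 estimate with a single Gaussian kernel (assumed bandwidth)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2              # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    k_ss = kernel(source_feats, source_feats).mean()
    k_tt = kernel(target_feats, target_feats).mean()
    k_st = kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2 * k_st

# Toy usage: features produced by a shared extractor for a source and a target batch.
src = torch.randn(64, 128)
tgt = torch.randn(64, 128) + 0.5                 # a shifted target distribution
penalty = rbf_mmd(src, tgt)
# total_loss = classification_loss + 0.1 * penalty   # the weight 0.1 is an arbitrary example
\end{verbatim}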
\subsection{Deep Active Learning for HSI classification} Active learning~\cite{settles2009active} is a supervised learning approach that can efficiently deal with small-sample problems. It can effectively learn discriminative features by autonomously selecting representative or highly informative samples from the training set, which is especially useful when labeled samples are scarce. Generally speaking, active learning consists of five components, $A=(C, L, U, Q, S)$. Among them, $C$ represents one or a group of classifiers; $L$ and $U$ represent the labeled samples and unlabeled samples, respectively; $Q$ is the query function, which is used to query the most informative samples among the unlabeled ones; and $S$ is an expert who can label unlabeled samples. In general, active learning has two stages. The first stage is the initialization stage, in which a small number of samples are randomly selected to form the training set $L$ and are labeled by experts to train the classifier. The second stage is the iterative query: based on the results of the previous iteration, $Q$ selects new samples from the unlabeled set $U$ for $S$ to label, and they are added to the training set $L$. The active learning methods applied to hyperspectral image classification are mainly committee-based algorithms and posterior-probability-based algorithms. Among the committee-based algorithms, the EQB method uses entropy to measure the amount of information in unlabeled samples. Specifically, the training set $L$ is divided into $k$ subsets to train $k$ classifiers, and these $k$ classifiers are then used to classify all unlabeled samples, so that each unlabeled sample corresponds to $k$ predicted labels. The entropy value is calculated from this: \begin{equation} \bm{x}^{EQB}=\mathop{\arg\max}_{x_i \in U}\frac{H^{EQB}(x_i)}{\log(N_i)} \end{equation} where $H$ represents the entropy value, and $N_i$ represents the number of classes predicted for the sample $x_i$. Samples with large entropy are selected and manually labeled~\cite{haut2018active_deep}. In~\cite{liu2016active_deep}, a deep belief network is used to generate the mapping feature $h$ of the input $x$ in an unsupervised way, and $h$ is then used to calculate the information entropy. At the same time, sparse representation is used to estimate the representativeness of the samples; in the process of selecting samples for active learning, the information entropy and the representativeness of the samples are considered jointly. In contrast, active learning methods based on the posterior probability~\cite{li2015active_autoencoders, sun2016active_autoencoder, cao2020convolutional_active} are more widely used. Breaking ties is a posterior-probability-based active learning method that is widely used in hyperspectral classification tasks. This method first uses specific models, such as convolutional networks, maximum likelihood classifiers, or support vector machines, to estimate the posterior probabilities of all samples in the candidate pool. Then, the posterior probabilities are plugged into the following formula to produce a measure of sample uncertainty: \begin{equation} \label{BvSB} \bm{x}^{BT}=\mathop{\arg\min}_{x_i \in U} \left\{ \mathop{\max}_{w \in N}p \left ( y_i^*=w|x_i\right ) - \mathop{\max}_{w \in N\setminus w^+}p(y_i^*=w|x_i)\right\} \end{equation} In the above formula, we first calculate, for every candidate sample, the difference between the largest and the second-largest posterior probability, and then select the sample with the minimum difference to join the valuable data set; the smaller this margin is, the more uncertain the classifier is about the sample.
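A small NumPy sketch of this breaking-ties (best-versus-second-best) criterion is given below; the candidate posterior probabilities are toy values, and the number of samples queried per iteration is an arbitrary example.
\begin{verbatim}
# Minimal sketch of the breaking-ties (best-versus-second-best) query criterion.
import numpy as np

def breaking_ties_query(probs, n_query):
    """probs: (n_candidates, n_classes) posterior probabilities from any classifier.
    Returns the indices of the n_query most ambiguous candidates (smallest margin)."""
    sorted_probs = np.sort(probs, axis=1)                 # sort class probabilities per sample
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]    # best minus second-best probability
    return np.argsort(margin)[:n_query]                   # smallest margins = most uncertain

rng = np.random.default_rng(0)
toy = rng.random((100, 9))
toy /= toy.sum(axis=1, keepdims=True)           # normalize rows into probabilities
selected = breaking_ties_query(toy, n_query=4)  # e.g., query 4 samples per iteration
\end{verbatim}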
In~\cite{li2015active_autoencoders}, Li \emph{et al.} first used an autoencoder to construct an active learning model for hyperspectral image classification tasks; at the same time, Sun \emph{et al.}~\cite{sun2016active_autoencoder} proposed a similar method. However, these methods only use spectral features. Because spatial information is effective, the joint spatial-spectral features are also considered when generating the posterior probability in~\cite{deng2018active_deep_spatial_spectral}. In contrast, Cao \emph{et al.}~\cite{cao2020convolutional_active} use convolutional neural networks to generate the posterior probability. In general, the active learning method can automatically select effective samples according to certain criteria and reduce inefficient redundant samples, thus alleviating the shortage of training samples in the small-sample problem. \begin{figure}[hbpt] \centering \includegraphics[width=0.8\textwidth]{figure/architecture/active-learning.pdf} \caption{Architecture of active learning.} \label{active-learning} \end{figure} \subsection{Deep Few-shot Learning for HSI classification} Few-shot learning belongs to the family of meta-learning approaches. Unlike most other deep learning methods, it aims to study the differences between samples instead of directly learning what each sample is; in other words, it makes the model learn to learn. In few-shot classification, given a small support set with $N$ labeled samples $S_N^k=\lbrace(\bm{x}_1, y_1), \cdots, (\bm{x}_N, y_N)\rbrace$ covering $k$ categories, the classifier labels the query sample with the label of the most similar sample in $S_N^k$. To achieve this, many learning frameworks have been proposed, and they can be divided into two categories: meta-based models and metric-based models. The prototype network~\cite{snell2017prototypical} is one of the metric-based models of few-shot learning. Its basic idea is that every class can be depicted by a prototype representation, and the samples belonging to the same category should cluster around the class prototype. First, all samples are transformed into a metric space through an embedding function $f_\phi: \mathbb{R}^D \rightarrow \mathbb{R}^M$, and each class is represented by a prototype vector $\mathbf{c}_k \in \mathbb{R}^M$. Owing to its powerful representation ability, a convolutional network is used as the embedding function. The prototype vector of each class $c_i$ is usually the mean of the embedding vectors of the support-set samples of that class: \begin{equation} \bm{c}_i = \frac{1}{|S^i|}\sum_{(\bm{x}_j, y_j)\in S^i}f_\phi(\bm{x}_j) \end{equation} In~\cite{liu2020deep}, Liu \emph{et al.} directly introduce the prototype network into the hyperspectral image classification task and use ResNet~\cite{he2016deep_residual} as a feature extractor that maps the samples into a metric space. The prototype network is then significantly improved for the hyperspectral image classification task in~\cite{tang2019SSPrototypical}. In that paper, the spatial-spectral feature is first integrated by local pattern coding, and a 1D-CNN converts it into an embedding vector. The prototype is the weighted mean of these embedding vectors, in contrast to the plain mean used in the general prototype network.
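To make the prototype construction and nearest-prototype classification concrete, here is a minimal NumPy sketch of the mean-prototype formulation above; the embedding function is a stand-in for a trained CNN, and all sizes are toy assumptions.
\begin{verbatim}
# Minimal sketch of prototype-based classification: class prototypes are the means of
# the embedded support samples, and a query is assigned to the nearest prototype.
import numpy as np

def embed(x):
    # Placeholder for a learned embedding f_phi (e.g., a CNN); here simply the identity.
    return x

def build_prototypes(support_x, support_y):
    classes = np.unique(support_y)
    protos = np.stack([embed(support_x[support_y == c]).mean(axis=0) for c in classes])
    return classes, protos

def classify(query_x, classes, protos):
    z = embed(query_x)                                           # (n_query, dim)
    d = np.linalg.norm(z[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]                         # nearest prototype wins

rng = np.random.default_rng(0)
support_x = rng.standard_normal((3 * 5, 16))   # 3 classes x 5 shots in a 16-d space (toy)
support_y = np.repeat(np.arange(3), 5)
classes, protos = build_prototypes(support_x, support_y)
pred = classify(rng.standard_normal((10, 16)), classes, protos)
\end{verbatim}
In a trained prototype network the embedding is learned so that these Euclidean distances become discriminative; the sketch only illustrates the inference rule.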
In~\cite{xi2020ResidualPrototypical}, Xi \emph{et al.} replace the mapping function with hybrid residual attention~\cite{muqeet2019hran} and introduce a new loss function that forces the network to increase the interclass distance and decrease the intraclass distance. \begin{figure}[hbpt] \centering \includegraphics[width=0.75\textwidth]{figure/architecture/prototype-network.pdf} \caption{Architecture of a prototype network.} \label{prototype-network} \end{figure} The relation network~\cite{Sung2018RelationNetwork} is another metric-based model of few-shot learning. In general, it has two modules: the embedding function $f_\phi: \mathbb{R}^D \rightarrow \mathbb{R}^M$ and the relation function $f_\psi: \mathbb{R}^{2M} \rightarrow \mathbb{R}$. The embedding module plays the same role as in the prototype network; the key idea is the relation module, which calculates the similarity between samples. Unlike the Euclidean distance or the cosine distance, the relation module is learnable; in other words, the relation network introduces a learnable metric function on top of the prototype network, and through learning it can describe the differences between samples more precisely. During inference, the query embedding $f_\phi(\bm{x}_i)$ is combined with the support embedding $f_\phi(\bm{x}_j)$ as $\mathcal{C}(f_\phi(\bm{x}_i), f_\phi(\bm{x}_j))$, where $\mathcal{C}(\cdot, \cdot)$ is usually a concatenation operation. Then, the relation function transforms the concatenated vector into a relation score $r_{i,j}$, which indicates the similarity between $\bm{x}_i$ and $\bm{x}_j$. \begin{equation} r_{i,j} = f_\psi(\mathcal{C}(f_\phi(\bm{x}_i), f_\phi(\bm{x}_j))) \end{equation} Several works have introduced the relation network into hyperspectral image classification to solve the small-sample problem. Deng \emph{et al.}~\cite{deng2019relation} first introduced the relation network into HSI classification; they use a 2-dimensional convolutional neural network to construct both the embedding function and the relation function. Gao \emph{et al.}~\cite{gao2020relation} and Ma \emph{et al.}~\cite{ma2019Two_Phase_Relation} have also proposed similar architectures. In~\cite{rao2019SSRelation}, to extract the joint spatial-spectral feature, Rao \emph{et al.} implemented the embedding function with a 3-dimensional convolutional neural network. \begin{figure}[hbpt] \centering \includegraphics[width=0.9\textwidth]{figure/architecture/relation-network.pdf} \caption{Architecture of relation network.} \label{relation-network} \end{figure} The Siamese network~\cite{bromley1994signature,chopra2005similarity_learning,norouzi2012hamming} is another typical network in few-shot learning. Compared with the above networks, its input is a sample pair. Thus, it is composed of two parallel subnetworks $f_{\phi 1}: \mathbb{R}^D \rightarrow \mathbb{R}^M$ with the same structure and shared parameters. Each subnetwork accepts one input sample and maps it to a low-dimensional metric space to generate its embedding, $f_{\phi 1}(\bm{x}_i)$ or $f_{\phi 1}(\bm{x}_j)$. The Euclidean distance $D(\bm{x}_i, \bm{x}_j)$ is used to measure their similarity: \begin{equation} D(\bm{x}_i, \bm{x}_j) = \Vert f_{\phi 1}(\bm{x}_i)- f_{\phi 1}(\bm{x}_j) \Vert_2 \end{equation} The higher the similarity between the two samples is, the more likely they are to belong to the same class. Recently, the Siamese network has been introduced into HSI classification.
Usually, a 2-dimensional convolutional neural network~\cite{liu2017siamese, liu2018transfer} serves as the embedding function, as in the above two networks. In the same way, several methods combine a 1-dimensional convolutional neural network with a 2-dimensional one~\cite{li2020adaptation, huang2020dual_siamese} or use a 3-dimensional network~\cite{rao2020Siamese3D} for the joint spectral-spatial feature. Moreover, Miao \emph{et al.}~\cite{miao2019Siamese_Encoder} have tried to use a stacked autoencoder to construct the embedding function $f_{\phi 1}$. After training, the model has the ability to identify the differences between samples. To obtain the final classification result, a classifier is still needed to classify the embedding feature of the sample, which is different from the prototype network and the relation network. To avoid overfitting under limited labeled samples, an SVM is usually used as the classifier because it is lightweight. \begin{figure}[hbpt] \centering \includegraphics[width=0.8\textwidth]{figure/architecture/siamese-network.pdf} \caption{Architecture of the Siamese network.} \label{siamese-network} \end{figure} \section{Experiments} \label{experiments} In most papers, comprehensive experiments and analyses are presented to describe the advantages and disadvantages of the proposed methods. The problem, however, is that different papers may choose different experimental settings; for example, even when the same number of training or test samples is used, the specific samples normally differ because they are chosen randomly. To evaluate different methods fairly, the exact same experimental setting should be used, which is why we design experiments to evaluate the different methods ourselves. As described above, the main approaches to small-sample learning currently include the autoencoder, few-shot learning, transfer learning, active learning, and data augmentation. Therefore, some representative networks of these approaches--S-DMM~\cite{2020deep_metric_embedding}, SSDL~\cite{yue2016spatial_pyramid_pooling}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, TwoCnn~\cite{Yang2017deep_transferring_SS}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}, 3DVSCNN~\cite{2020valuable_selection_cnn}, CNN\_HSI~\cite{yu2017convolutional} and SAE\_LR~\cite{chen2014deep}, covering both convolutional and recurrent network models--are selected to conduct experiments on three benchmark data sets--PaviaU, Salinas and KSC. All models are based on deep learning. Here, we focus only on the robustness of the models on small-sample data sets, so all of them classify hyperspectral images based on joint spectral-spatial features. According to the sample size per category in the training data set, the experiments are divided into three groups: the first has 10 samples per category, the second has 50 samples per category and the third has 100 samples per category. At the same time, to ensure the stability of the results, each group of experiments is performed ten times, and the training data set is different each time. Finally, the models are evaluated by the average accuracy (AA) and the overall accuracy (OA). \subsection{Introduction of data sets} \begin{itemize} \item \textbf{Pavia University (PaviaU)}: The Pavia University data set is a hyperspectral image with 610*340 pixels and a spatial resolution of 1.3 meters, taken by the ROSIS sensor over Pavia University in Italy. The spectral imagery continuously images 115 wavelengths in the range of 0.43$\sim$0.86 um.
Since 12 of the wavelengths are polluted by noise, each pixel in the final data set contains 103 bands. It contains 42,776 labeled samples in total, covering 9 objects. In addition, the sample size of each object is shown in Table \ref{PaviaU}. \item \textbf{Salinas}: The Salinas data set is a hyperspectral image with 512*217 pixels and a spatial resolution of 3.7 meters, taken over the Salinas Valley in California by the AVIRIS sensor. The spectral imagery continuously images 224 wavelengths in the range of 0.4$\sim$2.5 um. Since 20 of the bands are affected by water absorption, each pixel in the final data set contains 204 bands. It contains 54,129 labeled samples in total, covering 16 objects. In addition, the sample size of each object is shown in Table \ref{Salinas}. \item \textbf{Kennedy Space Center (KSC)}: The KSC data set was taken over the Kennedy Space Center (KSC) in Florida with the AVIRIS sensor. Its hyperspectral image contains 512*641 pixels, and the spatial resolution is 18 meters. The spectral imagery continuously images 224 wavelengths in the range of 400$\sim$2500 nm. Similarly, after removing 48 bands that are absorbed by water or have a low signal-to-noise ratio, each pixel in the final data set contains 176 bands. It contains 5211 labeled samples, covering 13 objects. Moreover, the sample size of each object is shown in Table \ref{KSC}. \end{itemize} \begin{table}[htbp] \centering \small \caption{Pavia University. It contains 9 objects. The second and last columns give the object name and the sample number, respectively.} \begin{tabular*}{0.6\textwidth}{c@{\extracolsep{\fill}}cr} \toprule NO.&Class&Total \\ \midrule C1&Asphalt&6631 \\ C2&Meadows&18649 \\ C3&Gravel&2099 \\ C4&Trees&3064 \\ C5&Painted metal sheets&1345 \\ C6&Bare Soil&5029 \\ C7&Bitumen&1330 \\ C8&Self-Blocking Bricks&3682 \\ C9&Shadows&947 \\ \bottomrule \end{tabular*} \label{PaviaU} \end{table} \begin{table}[htbp] \centering \caption{Salinas. It contains 16 objects. The second and last columns give the object name and the sample number, respectively.} \begin{tabular*}{0.6\textwidth}{c@{\extracolsep{\fill}}cr} \toprule NO.&Class&Total \\ \midrule C1&Broccoli green weeds 1&2009 \\ C2&Broccoli green weeds 2&3726 \\ C3&Fallow&1976 \\ C4&Fallow rough plow&1394 \\ C5&Fallow smooth&2678 \\ C6&Stubble&3959 \\ C7&Celery&3579 \\ C8&Grapes untrained&11271 \\ C9&Soil vineyard develop&6203 \\ C10&Corn senesced green weeds&3278 \\ C11&Lettuce romaine 4wk&1068 \\ C12&Lettuce romaine 5wk&1927 \\ C13&Lettuce romaine 6wk&916 \\ C14&Lettuce romaine 7wk&1070 \\ C15&Vineyard untrained&7268 \\ C16&Vineyard vertical trellis&1807 \\ \bottomrule \end{tabular*} \label{Salinas} \end{table} \begin{table}[htbp] \centering \caption{KSC. It contains 13 objects. The second and last columns give the object name and the sample number, respectively.} \begin{tabular*}{0.6\textwidth}{c@{\extracolsep{\fill}}cr} \toprule NO.&Class&Total \\ \midrule C1&Scrub&761 \\ C2&Willow swamp&243 \\ C3&Cabbage palm hammock&256 \\ C4&Cabbage palm/oak hammock&252 \\ C5&Slash pine&161 \\ C6&Oak/broadleaf hammock&229 \\ C7&Hardwood swamp&105 \\ C8&Graminoid marsh&431 \\ C9&Spartina marsh&520 \\ C10&Cattail marsh&404 \\ C11&Salt marsh&419 \\ C12&Mud flats&503 \\ C13&Water&927 \\ \bottomrule \end{tabular*} \label{KSC} \end{table} \subsection{Selected models} Some state-of-the-art methods are chosen to evaluate their performance. They were originally trained using different platforms, including Caffe, PyTorch, etc.
Some platforms, such as Caffe, are not well supported by newer development environments. Therefore, most models are our re-implementations and are trained under the exact same setting. Most of the model settings follow the original papers, and some are modified slightly based on our experiments. All models are trained and tested on the same randomly picked (pixel-wise) training and test data sets, and their settings have been tuned for the best performance. The origin of each implementation is shown in Table \ref{code-of-model}, and descriptions of the chosen models are provided below. \begin{table}[htbp] \centering \caption{Originators of model implementations. F denotes that the code of the model comes from the original paper. T denotes our implemented model.} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline S-DMM&SSDL&3DCAE&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR \\ \hline F&T&F&T&T&T&T&T \\ \hline \end{tabular}} \label{code-of-model} \end{table} \begin{itemize} \item \textbf{SAE\_LR~\cite{chen2014deep}}. This is the first work to introduce the autoencoder into hyperspectral image classification, opening a new era of hyperspectral image processing. It adopts a plain autoencoder composed of linear layers to extract the features. The size of the neighbor region is $5\times 5$, and the first 4 components of PCA are chosen, from which a spatial feature vector is obtained. Before being input into the model, the raw spectral feature and the spatial feature are stacked to form a joint feature. To reduce the difficulty of training, a greedy layerwise pretraining method is used to train each layer, and the parameters of the encoder and decoder are symmetric. Then, a linear classifier is appended to the encoder for fine-tuning. According to~\cite{chen2014deep}, the hidden size is set to 60, 20, and 20 for PaviaU, Salinas, and KSC, respectively. \item \textbf{S-DMM~\cite{2020deep_metric_embedding}}. This is a relation network that contains an embedding module and a relation module implemented by 2D convolutional networks. The model aims to make samples in the feature space have a small intraclass distance and a large interclass distance through a learnable feature embedding function and a metric function. After training, all samples are assigned to their corresponding clusters, and a simple KNN is used to classify the query sample. In the experiment, the neighbor region of the pixel is fixed as $5\times 5$ and the feature dimension is set to 64. \item \textbf{3DCAE~\cite{mei2019_3d_convolutional_autoencoder}}. This is a 3D convolutional autoencoder adopting 3D convolution layers to extract the joint spectral-spatial feature. First, 3DCAE is trained by the traditional reconstruction method, and then an SVM classifier is adopted to classify the hidden features on top of 3DCAE. In the experiment, the neighbor region of the pixel is set to $5\times 5$ and 90\% of the samples are used to train the 3D autoencoder. There are two different hyperparameter settings corresponding to Salinas and PaviaU, and the model was not tested on KSC in the original paper. Therefore, on KSC, the model uses the same hyperparameter configuration as on Salinas, because both data sets are collected by the same sensor. \item \textbf{SSDL~\cite{yue2016spatial_pyramid_pooling}}. This is a typical two-stream structure extracting the spectral and spatial features separately through two different branches and merging them at the end.
Inspired by~\cite{chen2014deep}, the authors adopt a 1D autoencoder to extract the spectral feature. In the spatial feature extraction branch, the model uses a spatial pyramid pooling layer to replace the traditional pooling layer on top of the last convolutional layer. The spatial pyramid pooling layer enables the deep convolutional neural network to generate a fixed-length feature. On the one hand, it allows the model to convert inputs of different sizes into a fixed-length representation, which is good for modules that are sensitive to the input size; on the other hand, it helps the model adapt to objects of different scales, and the output includes features from coarse to fine, achieving multiscale feature fusion. Then, a simple logistic classifier is used to classify the spectral-spatial feature. In the experiment, 80\% of the data are used to train the autoencoder through greedy layer-wise pretraining. Moreover, in the spatial branch, the size of the neighbor region is set to 42*42 and PCA is used to extract the first component. Then, the overall model is trained together. \item \textbf{TwoCnn~\cite{Yang2017deep_transferring_SS}}. This is a two-stream structure based on fine-tuning. In the spectral branch, it adopts a 1D convolutional layer to capture local information of the spectral features, which is entirely different from SSDL. In particular, transfer learning is used to pretrain the parameters of the model and endow it with good robustness on limited samples. The pairs of source and target data sets are Pavia Center--PaviaU, Indian Pines--Salinas, and Indian Pines--KSC. The model was also not tested on KSC in~\cite{Yang2017deep_transferring_SS}; thus, we regard Indian Pines as the source domain for KSC, given that both data sets come from the same type of sensor. The neighbor region of the pixel is set to 21*21. Additionally, the model averages along the spectral channel to reduce the input dimension instead of using PCA. In the pretraining process, 15\% of the samples of each category of Pavia Center and 90\% of the samples of each category of Indian Pines are treated as the training data set, and the rest serve as the test data set. To make the number of bands in the source and target data sets the same, the bands with smaller variance are filtered out. According to~\cite{Yang2017deep_transferring_SS}, all layers except the softmax layer are transferred. Finally, the model is fine-tuned on the target data set with the same configuration. \item \textbf{3DVSCNN~\cite{2020valuable_selection_cnn}}. This is a general CNN-based image classification model, but it uses a 3D convolutional network to extract spectral-spatial features simultaneously, followed by a fully connected network for classification. The main idea of~\cite{2020valuable_selection_cnn} is the use of active learning, and the process can be divided into two steps: the selection of valuable samples and the training of the model. In~\cite{2020valuable_selection_cnn}, an SVM serves as a selector to iteratively select some of the most valuable samples according to Eq.\eqref{BvSB}. Then, the 3DVSCNN is trained on the valuable data set. The size of its neighbor region is set to 13*13. During data preprocessing, PCA is used to extract the top 10 components for PaviaU and Salinas, and the top 30 components for KSC, which contain more than 99\% of the original spectral information and still keep a clear spatial geometry.
In the experiment, the SVM picks 4 samples in each iteration until 80\% of the training samples have been selected to form the valuable data set, on which the model is then trained. \item \textbf{CNN\_HSI~\cite{yu2017convolutional}}. The model stacks multiple $1\times 1$ 2D convolutional layers, each followed by local response normalization, to capture the features of hyperspectral images. To avoid the loss of information caused by PCA, it uses 2D convolution to extract the joint spectral-spatial features directly instead of 3D convolution. At the same time, it also adopts a dropout layer and data augmentation, including rotation and flipping, to improve the generalization of the model and reduce overfitting; after augmentation, each image generates eight differently oriented images. Moreover, the model removes the linear classifier to decrease the number of trainable parameters. According to~\cite{yu2017convolutional}, the dropout rate is set to 0.6, the size of the neighbor region is $5\times 5$, and the batch size is 16 in the experiment. \item \textbf{SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}}. Unlike the above methods, SSLstm adopts recurrent networks to process spectral and spatial features simultaneously. In the spectral branch, called SeLstm, the spectral vector is treated as a sequence. In the spatial branch, called SaLstm, each row of the image patch is treated as a sequence element, so that the image patch is converted into a sequence along the column direction. In particular, it fuses the predictions of the two branches in the label space to obtain the final prediction result, which is defined as \begin{equation} \begin{split} P(y=j|x_i) = w_{spe}P_{spe}(y=j|x_i)+w_{spa}P_{spa}(y=j|x_i) \end{split} \end{equation} where $P(y=j|x_i)$ denotes the final posterior probability, $P_{spe}(y=j|x_i)$ and $P_{spa}(y=j|x_i)$ denote the posterior probabilities from the spectral and spatial modules, respectively, and $w_{spe}$ and $w_{spa}$ are fusion weights that sum to 1. In the experiment, the size of the neighbor region is set to 32*32 for PaviaU and Salinas and to 64*64 for KSC. The first PCA component is retained for all data sets. The numbers of hidden nodes of the spectral branch and the spatial branch are 128 and 256, respectively, and $w_{spe}$ and $w_{spa}$ are both set to 0.5. \end{itemize} \subsection{Experimental results and analysis} The accuracies on the test data set are shown in Table \ref{ACCURACY-PAVIAU-TABLE}, Table \ref{ACCURACY-SALINAS-TABLE}, and Table \ref{ACCURACY-KSC-TABLE}. The corresponding classification maps are shown in Figures~\ref{PaviaU-10}$\sim$\ref{KSC-100}. The final classification result of each pixel is decided by voting over the 10 experiments. Taking Table \ref{ACCURACY-PAVIAU-TABLE} as an example, the experiment is divided into three groups, with sample sizes of 10, 50, and 100, respectively, and each model is run 10 times in every group. We then report the average per-class accuracies, AA, and OA to compare their performance (a small sketch of how OA and AA are computed from a confusion matrix is given after this paragraph). When the sample size is 10, S-DMM has the highest AA and OA, which are 91.08\% and 84.55\% respectively, in comparison with the AA and OA of 71.58\% and 60.00\%, 75.34\% and 74.79\%, 74.60\% and 78.61\%, 75.64\% and 75.17\%, 72.77\% and 69.59\%, 85.12\% and 82.13\%, 72.40\% and 66.05\% for 3DCAE, SSDL, TwoCnn, 3DVSCNN, SSLstm, CNN\_HSI and SAE\_LR.
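For reference, the OA and AA values reported in these tables can be computed from a confusion matrix as in the following small sketch (toy labels; the averaging over the ten runs is omitted):
\begin{verbatim}
# Minimal sketch: overall accuracy (OA) and average (per-class) accuracy (AA)
# computed from predicted and true labels via a confusion matrix.
import numpy as np

def oa_aa(y_true, y_pred, num_classes):
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    oa = np.trace(cm) / cm.sum()                              # fraction of correctly labeled pixels
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)   # accuracy of each class
    aa = per_class.mean()                                     # mean of the per-class accuracies
    return oa, aa, per_class

rng = np.random.default_rng(0)
y_true = rng.integers(0, 9, size=1000)
y_pred = np.where(rng.random(1000) < 0.8, y_true, rng.integers(0, 9, size=1000))
oa, aa, per_class = oa_aa(y_true, y_pred, num_classes=9)
\end{verbatim}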
Besides, S-DMM achieves the best accuracy on the largest number of classes. When the sample size is 50, S-DMM and CNN\_HSI have the highest AA and OA, respectively, which are 96.47\% and 95.21\%. In the last group, 3DVSCNN and CNN\_HSI have the highest AA and OA, respectively, which are 97.13\% and 97.35\%. Similar conclusions can be drawn from the other two tables. As shown in Table \ref{ACCURACY-PAVIAU-TABLE}, Table \ref{ACCURACY-SALINAS-TABLE} and Table \ref{ACCURACY-KSC-TABLE}, the performance of most models (except 3DCAE) on KSC is better than on the other two data sets. In particular, when the training set contains few samples, the accuracy of S-DMM on KSC is up to 94\%, higher than on the other data sets. This is because the surface objects in KSC are inherently well separated from each other, despite its lower spatial resolution compared with the other data sets, as shown in Figures \ref{KSC-10}$\sim$\ref{KSC-100}. On the other data sets, the models easily misclassify objects that have a similar spatial structure, such as Meadows (class 2) and Bare Soil (class 6) in PaviaU and Grapes untrained (class 8) and Vineyard untrained (class 15) in Salinas, as shown in Figures~\ref{PaviaU-10}$\sim$\ref{Salinas-100}. The accuracy of all models on Grapes untrained is lower than on the other classes in Salinas. As shown in Figure~\ref{accuracy-curve}, on all data sets, the accuracy of all models improves as the number of samples increases. Even when the sample size of each category is 10, S-DMM and CNN\_HSI achieve stable and excellent performance on all data sets; they are not sensitive to the size of the training set. In Figure~\ref{accuracy-curve-Salinas} and Figure~\ref{accuracy-curve-KSC}, with increasing sample size, the accuracies of S-DMM and CNN\_HSI improve only slightly, and their gains are smaller than those of the other models. In Figure~\ref{accuracy-curve-PaviaU}, the same conclusion can be drawn when the sample size increases from 50 to 100. This result shows that both of them can be applied to solve the small-sample problem in hyperspectral images. In particular, S-DMM attains the best AA and OA on PaviaU and KSC in the experiment with a sample size of 10, and it still ranks among the top three on Salinas, which shows that it works well with only a few samples. Although TwoCnn, 3DVSCNN, and SSLstm achieve good performance on all data sets, they do not work as well when the training set contains fewer samples. It is worth mentioning that 3DVSCNN uses fewer samples for training than the other models because it selects only the valuable samples, and this approach may not be beneficial for classes with few samples. As shown in Table~\ref{ACCURACY-KSC-TABLE}, 3DVSCNN performs well on OA but poorly on AA. For class 7, the accuracy even drops when the sample size increases from 10 to 50 and 100. This is because its total sample size is the smallest on KSC, so it contains few valuable samples; moreover, the valuable-sample selection step causes an imbalance between the classes, which leads to the decrease in the accuracy of class 7. On almost all data sets, autoencoder-based models achieve poor performance compared with the other models. Although unsupervised learning does not need labeled samples, without suitable constraints the autoencoder might actually learn little of use. Moreover, its symmetric architecture results in a vast number of parameters and increases the difficulty of training.
Therefore, SSDL and SAE\_LR use a greedy layerwise pretraining method to solve this problem. However, 3DCAE does not adopt this approach and achieves the worst performance on all data sets; as shown in Figure~\ref{accuracy-curve}, it still has considerable room for improvement. Overall, classification results based on few-shot learning, active learning, transfer learning, and data augmentation are better than those of autoencoder-based unsupervised learning methods with limited samples in all experiments. Few-shot learning benefits from exploring the relationships between samples to find a discriminative decision boundary. Active learning benefits from the selection of valuable samples, which enables the model to focus more attention on indistinguishable samples. Transfer learning makes good use of the similarity between different data sets, which reduces the quantity of training data and the number of trainable parameters required, improving the model's robustness. Data augmentation generates more samples from the raw data, expanding the diversity of the training set. Although the autoencoder can learn the internal structure of the unlabeled data set, the resulting feature representation might not have task-related characteristics, which is why its performance on small-sample data sets is inferior to that of supervised learning. \begin{table}[htbp] \centering \caption{PaviaU. Classification accuracy obtained by S-DMM~\cite{2020deep_metric_embedding}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, SSDL~\cite{yue2016spatial_pyramid_pooling}, TwoCnn~\cite{Yang2017deep_transferring_SS}, 3DVSCNN~\cite{2020valuable_selection_cnn}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}, CNN\_HSI~\cite{yu2017convolutional} and SAE\_LR~\cite{chen2014deep} on PaviaU. The best accuracies are marked in bold.
The "size" in the first line denotes the sample size per category.} \large \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccc} \hline size&classes&S-DMM&3DCAE&SSDL&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR\\\hline \multirow{11}{*}{10}&1&\textbf{94.34}&49.41&68.33&71.80&63.03&72.59&84.60&66.67\\ &2&73.13&51.60&72.94&\textbf{88.27}&69.22&68.86&67.57&56.68\\ &3&\textbf{86.85}&54.06&53.71&47.58&71.77&48.08&72.80&46.37\\ &4&95.04&94.81&88.58&\textbf{96.29}&85.10&79.06&93.65&80.10\\ &5&\textbf{99.98}&99.86&97.21&94.99&98.61&93.80&99.84&98.81\\ &6&\textbf{85.58}&57.40&66.21&49.75&75.17&62.53&78.35&55.87\\ &7&\textbf{98.55}&80.34&68.17&58.65&65.61&65.39&92.14&81.42\\ &8&\textbf{86.47}&57.97&64.07&66.95&55.77&67.60&78.17&66.83\\ &9&\textbf{99.81}&98.76&98.83&97.15&96.48&97.02&98.92&98.90\\\cline{2-10} &AA&\textbf{91.08}&71.58&75.34&74.60&75.64&72.77&85.12&72.40\\ &OA&\textbf{84.55}&60.00&74.79&78.61&75.17&69.59&82.13&66.05\\\hline \hline \multirow{11}{*}{50}&1&\textbf{97.08}&80.76&76.11&88.50&90.60&82.96&93.66&78.83\\ &2&90.09&63.14&87.39&86.43&93.68&82.42&\textbf{94.82}&65.36\\ &3&\textbf{95.15}&62.57&70.28&69.21&90.64&81.59&94.87&65.50\\ &4&97.35&97.33&89.27&\textbf{98.80}&93.47&91.31&94.49&92.43\\ &5&\textbf{100.00}&\textbf{100.00}&98.14&99.81&99.92&99.67&\textbf{100.00}&99.47\\ &6&\textbf{96.32}&80.15&75.12&84.93&94.15&82.58&88.14&72.30\\ &7&\textbf{99.31}&88.45&75.80&83.12&94.98&92.34&97.21&86.04\\ &8&\textbf{92.97}&75.11&70.57&83.57&91.55&84.75&87.52&79.74\\ &9&\textbf{99.98}&99.69&99.61&99.91&98.72&99.39&99.78&99.29\\\cline{2-10} &AA&\textbf{96.47}&83.02&82.48&88.25&94.19&88.56&94.50&82.10\\ &OA&94.04&64.17&84.92&90.69&94.23&84.50&\textbf{95.21}&77.42\\\hline \hline \multirow{11}{*}{100}&1&\textbf{97.11}&83.05&85.59&92.21&94.38&90.84&94.44&78.64\\ &2&91.64&73.45&86.17&76.86&\textbf{95.90}&83.26&97.75&74.28\\ &3&94.23&73.02&80.29&72.24&\textbf{95.96}&80.66&95.37&79.87\\ &4&98.70&97.87&97.14&\textbf{99.28}&97.65&92.54&95.88&93.54\\ &5&\textbf{100.00}&\textbf{100.00}&99.06&99.89&99.95&99.57&99.99&99.24\\ &6&93.51&86.82&83.16&95.90&\textbf{97.92}&87.61&91.01&69.83\\ &7&\textbf{99.21}&90.17&94.08&89.88&98.39&93.45&98.37&89.42\\ &8&92.73&88.31&88.43&90.03&\textbf{94.21}&90.08&92.41&85.05\\ &9&\textbf{99.99}&99.82&99.65&99.98&99.85&99.80&99.70&99.55\\\cline{2-10} &AA&96.35&88.06&90.40&90.70&\textbf{97.13}&90.87&96.10&85.49\\ &OA&94.65&70.15&89.33&94.76&97.05&87.19&\textbf{97.35}&81.44 \\\hline \end{tabular}} \label{ACCURACY-PAVIAU-TABLE} \end{table} \begin{table}[htbp] \centering \caption{Salinas. Classification accuracy obtained by S-DMM~\cite{2020deep_metric_embedding}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, SSDL~\cite{yue2016spatial_pyramid_pooling}, TwoCnn~\cite{Yang2017deep_transferring_SS}, 3DVSCNN \cite{2020valuable_selection_cnn}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}, CNN\_HSI~\cite{yu2017convolutional} and SAE\_LR~\cite{chen2014deep} on Salinas. The best accuracies are marked in bold. 
The "size" in the first line denotes the sample size per category.} \large \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccc} \hline size&classes&S-DMM&3DCAE&SSDL&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR\\\hline \multirow{18}{*}{10}&1&\textbf{99.45}&99.28&76.01&88.22&97.92&79.38&98.80&86.01\\ &2&99.21&59.04&69.24&78.09&\textbf{99.71}&72.49&98.77&44.21\\ &3&\textbf{96.70}&66.54&69.89&74.80&95.09&86.83&95.48&44.72\\ &4&\textbf{99.56}&98.65&94.96&98.19&99.28&99.45&98.36&97.40\\ &5&\textbf{97.12}&81.94&89.43&96.54&93.35&94.95&92.55&83.93\\ &6&89.64&98.52&96.19&98.96&99.81&93.65&\textbf{99.96}&87.28\\ &7&\textbf{99.82}&97.31&76.83&92.52&96.73&87.82&99.61&96.94\\ &8&\textbf{70.53}&68.11&42.58&54.35&67.89&61.64&77.51&41.58\\ &9&99.02&95.06&89.58&81.22&\textbf{99.42}&90.47&97.19&78.45\\ &10&91.13&9.43&76.40&75.18&\textbf{91.75}&86.66&89.23&30.75\\ &11&\textbf{97.56}&72.26&93.04&92.26&95.26&91.37&95.45&23.52\\ &12&99.87&72.16&86.60&86.40&96.65&95.38&\textbf{99.96}&82.63\\ &13&99.25&\textbf{99.78}&95.46&98.18&96.64&96.90&99.22&92.88\\ &14&96.30&89.93&90.50&96.10&\textbf{99.68}&91.68&96.80&62.40\\ &15&72.28&56.98&65.40&55.60&\textbf{83.86}&75.55&72.03&57.10\\ &16&\textbf{95.29}&44.35&75.89&92.39&92.03&88.43&94.07&76.75\\\cline{2-10} &AA&93.92&75.58&80.50&84.94&\textbf{94.07}&87.04&94.06&67.91\\ &OA&89.69&71.50&74.29&77.54&90.18&81.20&\textbf{91.31}&67.43\\\hline \hline \multirow{18}{*}{50}&1&99.97&98.81&92.70&97.99&\textbf{99.99}&94.18&99.20&85.37\\ &2&99.84&86.97&88.30&91.35&\textbf{99.94}&92.34&99.57&92.51\\ &3&\textbf{99.84}&54.83&87.50&94.87&99.74&97.02&99.62&81.25\\ &4&99.93&98.87&99.41&99.96&99.89&\textbf{99.95}&99.63&98.40\\ &5&\textbf{99.40}&95.62&95.83&98.96&99.38&98.34&98.79&95.12\\ &6&99.92&99.62&98.95&99.87&\textbf{100.00}&98.78&99.98&98.86\\ &7&\textbf{99.92}&98.17&96.47&96.60&99.85&97.80&99.78&98.55\\ &8&68.92&81.74&62.99&68.05&\textbf{85.35}&77.17&77.93&46.04\\ &9&99.76&94.87&95.34&86.01&\textbf{99.99}&96.15&99.71&94.84\\ &10&97.18&12.87&95.31&93.94&\textbf{98.23}&97.23&97.33&77.69\\ &11&\textbf{99.57}&75.82&97.73&97.10&98.59&97.71&99.54&77.14\\ &12&\textbf{99.90}&58.18&97.51&97.16&99.89&98.88&99.84&96.87\\ &13&99.84&99.98&98.55&98.60&\textbf{100.00}&99.12&99.87&97.33\\ &14&98.15&93.80&97.54&99.37&\textbf{99.91}&99.24&99.53&91.49\\ &15&76.12&41.84&69.04&67.21&\textbf{88.77}&86.24&83.39&65.15\\ &16&\textbf{98.87}&69.00&94.34&97.78&98.55&97.64&98.15&91.94\\\cline{2-10} &AA&96.07&78.81&91.72&92.80&\textbf{98.00}&95.49&96.99&86.78\\ &OA&90.92&74.73&85.79&87.01&\textbf{95.30}&91.37&95.08&79.49\\\hline \hline \multirow{18}{*}{100}&1&99.86&98.81&98.22&98.74&\textbf{99.99}&97.86&99.77&92.44\\ &2&99.74&91.88&96.54&96.70&\textbf{99.99}&97.74&99.86&89.46\\ &3&\textbf{99.99}&63.20&95.40&97.47&99.16&98.91&99.79&92.05\\ &4&99.84&99.12&99.29&\textbf{99.95}&99.85&99.78&99.44&99.03\\ &5&99.58&98.24&98.09&99.61&\textbf{99.70}&98.89&99.54&96.32\\ &6&99.99&99.95&99.12&99.79&\textbf{100.00}&99.62&\textbf{100.00}&98.96\\ &7&\textbf{99.93}&98.71&97.14&97.94&99.88&98.97&99.86&98.42\\ &8&67.88&71.43&59.51&66.83&\textbf{90.54}&86.00&79.90&39.73\\ &9&99.81&95.51&94.87&90.65&\textbf{99.98}&98.15&99.75&96.34\\ &10&96.54&22.92&96.97&96.21&97.77&\textbf{98.55}&97.29&84.35\\ &11&99.75&76.67&99.28&99.25&\textbf{99.82}&99.39&99.70&92.76\\ &12&\textbf{100.00}&64.12&99.39&98.01&99.99&99.84&99.99&96.97\\ &13&99.87&\textbf{99.98}&98.74&99.34&\textbf{99.98}&99.38&99.75&97.48\\ &14&98.66&94.73&98.62&99.72&\textbf{99.91}&99.44&99.67&93.52\\ &15&78.73&63.65&83.03&70.16&91.31&86.77&\textbf{91.86}&69.09\\ 
&16&\textbf{99.27}&79.70&96.65&99.26&99.26&98.69&99.10&93.21\\\cline{2-10} &AA&96.21&82.41&94.43&94.35&\textbf{98.57}&97.37&97.83&89.38\\ &OA&91.56&76.61&88.67&90.25&\textbf{96.89}&94.41&96.28&81.95\\\hline \end{tabular}} \label{ACCURACY-SALINAS-TABLE} \end{table} \begin{table}[htbp] \centering \caption{KSC. Classification accuracy obtained by S-DMM~\cite{2020deep_metric_embedding}, 3DCAE~\cite{mei2019_3d_convolutional_autoencoder}, SSDL~\cite{yue2016spatial_pyramid_pooling}, TwoCnn~\cite{Yang2017deep_transferring_SS}, 3DVSCNN~\cite{2020valuable_selection_cnn}, SSLstm~\cite{zhou2019hyperspectral_ss_LSTMs}, CNN\_HSI~\cite{yu2017convolutional} and SAE\_LR~\cite{chen2014deep} on KSC. The best accuracies are marked in bold. The "size" in the first line denotes the sample size per category.} \large \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccc} \hline size&classes&S-DMM&3DCAE&SSDL&TwoCnn&3DVSCNN&SSLstm&CNN\_HSI&SAE\_LR\\\hline \multirow{15}{*}{10}&1&93.49&35.46&79.21&67.11&\textbf{95.33}&73.58&92.17&83.95\\ &2&\textbf{89.74}&49.40&67.68&58.37&40.39&68.45&81.67&69.01\\ &3&\textbf{95.16}&40.41&76.87&77.20&75.41&81.59&86.91&50.61\\ &4&58.72&5.54&70.33&75.12&35.87&\textbf{76.16}&60.83&20.21\\ &5&87.95&33.38&81.26&\textbf{88.08}&47.42&87.22&64.37&23.11\\ &6&\textbf{93.42}&51.05&79.18&66.44&64.29&76.71&66.16&45.39\\ &7&\textbf{98.63}&16.32&95.26&92.74&57.79&96.42&96.00&63.58\\ &8&\textbf{97.93}&46.44&72.42&61.92&71.88&52.95&85.77&58.05\\ &9&\textbf{94.88}&86.25&87.00&92.31&79.00&90.65&91.06&76.24\\ &10&\textbf{98.12}&8.76&72.59&86.27&56.57&89.04&85.13&63.12\\ &11&\textbf{97.51}&76.21&88.68&78.17&86.99&89.32&95.60&89.98\\ &12&\textbf{93.69}&8.54&83.65&78.09&60.79&83.96&89.66&69.59\\ &13&\textbf{100.00}&46.95&99.98&\textbf{100.00}&84.92&\textbf{100.00}&99.95&97.90\\\cline{2-10} &AA&\textbf{92.25}&38.82&81.09&78.60&65.90&82.00&84.25&62.36\\ &OA&\textbf{94.48}&49.73&83.71&82.29&77.40&83.07&91.13&72.68\\\hline \hline \multirow{15}{*}{50}&1&97.99&22.53&96.12&72.95&\textbf{98.45}&96.77&94.40&88.21\\ &2&\textbf{98.24}&30.98&94.56&94.04&39.90&98.19&91.50&78.50\\ &3&98.69&45.10&96.55&90.10&99.13&\textbf{99.47}&94.47&83.06\\ &4&78.22&3.86&93.51&92.33&74.01&\textbf{98.32}&76.49&43.07\\ &5&92.16&40.54&96.94&97.12&64.32&\textbf{99.55}&87.03&53.33\\ &6&98.49&62.07&96.70&93.80&77.21&\textbf{99.72}&70.89&51.90\\ &7&98.36&18.00&99.64&97.82&20.36&\textbf{100.00}&98.00&84.73\\ &8&\textbf{99.21}&43.04&91.92&90.60&96.25&97.40&93.86&77.77\\ &9&\textbf{99.96}&89.77&98.57&89.55&63.91&98.83&98.77&86.47\\ &10&\textbf{99.92}&12.12&93.70&95.56&54.72&99.52&91.67&85.28\\ &11&98.62&80.38&97.86&98.40&90.95&\textbf{99.11}&87.75&96.56\\ &12&\textbf{99.07}&19.85&94.99&95.01&87.37&99.67&89.54&82.19\\ &13&\textbf{100.00}&91.24&\textbf{100.00}&90.00&96.77&99.46&98.95&99.44\\\cline{2-10} &AA&96.84&43.04&96.24&92.10&74.10&\textbf{98.92}&90.25&77.73\\ &OA&98.68&54.01&96.88&96.61&96.03&\textbf{98.72}&97.39&84.93\\\hline \hline \multirow{15}{*}{100}&1&98.17&19.03&97.41&96.51&98.94&\textbf{99.74}&93.93&89.77\\ &2&98.74&34.13&98.60&99.58&56.50&\textbf{99.79}&89.93&80.77\\ &3&99.55&57.18&96.67&99.42&\textbf{99.81}&99.23&98.33&82.88\\ &4&88.29&1.38&97.96&98.68&88.29&\textbf{99.14}&85.86&53.95\\ &5&93.11&52.46&99.51&\textbf{100.00}&76.23&\textbf{100.00}&93.77&58.52\\ &6&\textbf{99.61}&59.77&98.68&97.36&80.62&99.53&74.96&58.22\\ &7&\textbf{100.00}&8.00&\textbf{100.00}&\textbf{100.00}&32.00&\textbf{100.00}&98.00&86.00\\ &8&\textbf{99.79}&51.81&95.53&98.07&98.91&99.40&97.37&83.96\\ &9&99.74&87.40&98.74&98.74&63.93&99.12&\textbf{99.76}&91.95\\ 
&10&\textbf{100.00}&13.16&98.22&99.61&72.47&\textbf{100.00}&97.70&91.28\\ &11&99.91&83.76&99.06&\textbf{99.97}&94.42&99.81&99.84&97.81\\ &12&99.33&24.94&97.99&99.03&94.32&\textbf{99.80}&95.31&85.73\\ &13&\textbf{100.00}&90.07&99.96&99.94&97.62&99.94&99.85&99.58\\\cline{2-10} &AA&98.17&44.85&98.33&98.99&81.08&\textbf{99.65}&94.20&81.57\\ &OA&98.96&59.63&98.75&99.15&98.55&\textbf{99.68}&98.05&89.15\\\hline \end{tabular}} \label{ACCURACY-KSC-TABLE} \end{table} \begin{figure}[hbpt] \centering \subfigure[]{ \label{accuracy-curve-PaviaU} \includegraphics[width=0.47\textwidth]{figure/accuracy-curve/PaviaU.pdf} } \subfigure[]{ \label{accuracy-curve-Salinas} \includegraphics[width=0.47\textwidth]{figure/accuracy-curve/Salinas.pdf} } \subfigure[]{ \label{accuracy-curve-KSC} \includegraphics[width=0.47\textwidth]{figure/accuracy-curve/KSC.pdf} } \caption{Change in accuracy over the number of samples for each category. \subref{accuracy-curve-PaviaU} PaviaU. \subref{accuracy-curve-Salinas} Salinas. \subref{accuracy-curve-KSC} KSC.} \label{accuracy-curve} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{PaviaU-10-Original} \includegraphics[scale=0.25]{figure/map/Original/PaviaU.pdf} } \subfigure[]{ \label{PaviaU-10-S-DMM} \includegraphics[scale=0.25]{figure/map/PaviaU/10/S-DMM.pdf} } \subfigure[]{ \label{PaviaU-10-3DCAE} \includegraphics[scale=0.25]{figure/map/PaviaU/10/3DCAE.pdf} } \subfigure[]{ \label{PaviaU-10-SSDL} \includegraphics[scale=0.25]{figure/map/PaviaU/10/SSDL.pdf} } \subfigure[]{ \label{PaviaU-10-TwoCnn} \includegraphics[scale=0.25]{figure/map/PaviaU/10/TwoCnn.pdf} } \subfigure[]{ \label{PaviaU-10-3DVSCNN} \includegraphics[scale=0.25]{figure/map/PaviaU/10/3DVSCNN.pdf} } \subfigure[]{ \label{PaviaU-10-SSLstm} \includegraphics[scale=0.25]{figure/map/PaviaU/10/SSLstm.pdf} } \subfigure[]{ \label{PaviaU-10-CNN_HSI} \includegraphics[scale=0.25]{figure/map/PaviaU/10/CNN_HSI.pdf} } \subfigure[]{ \label{PaviaU-10-SAE_LR} \includegraphics[scale=0.25]{figure/map/PaviaU/10/SAE_LR} } \caption{Classification maps on the PaviaU data set (10 samples per class). \subref{PaviaU-10-Original} Original. \subref{PaviaU-10-S-DMM} S-DMM. \subref{PaviaU-10-3DCAE} 3DCAE. \subref{PaviaU-10-SSDL} SSDL. \subref{PaviaU-10-TwoCnn} TwoCnn. \subref{PaviaU-10-3DVSCNN} 3DVSCNN. \subref{PaviaU-10-SSLstm} SSLstm. \subref{PaviaU-10-CNN_HSI} CNN\_HSI. \subref{PaviaU-10-SAE_LR} SAE\_LR.} \label{PaviaU-10} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{PaviaU-50-Original} \includegraphics[scale=0.25]{figure/map/Original/PaviaU.pdf} } \subfigure[]{ \label{PaviaU-50-S-DMM} \includegraphics[scale=0.25]{figure/map/PaviaU/50/S-DMM.pdf} } \subfigure[]{ \label{PaviaU-50-3DCAE} \includegraphics[scale=0.25]{figure/map/PaviaU/50/3DCAE.pdf} } \subfigure[]{ \label{PaviaU-50-SSDL} \includegraphics[scale=0.25]{figure/map/PaviaU/50/SSDL.pdf} } \subfigure[]{ \label{PaviaU-50-TwoCnn} \includegraphics[scale=0.25]{figure/map/PaviaU/50/TwoCnn.pdf} } \subfigure[]{ \label{PaviaU-50-3DVSCNN} \includegraphics[scale=0.25]{figure/map/PaviaU/50/3DVSCNN.pdf} } \subfigure[]{ \label{PaviaU-50-SSLstm} \includegraphics[scale=0.25]{figure/map/PaviaU/50/SSLstm.pdf} } \subfigure[]{ \label{PaviaU-50-CNN_HSI} \includegraphics[scale=0.25]{figure/map/PaviaU/50/CNN_HSI.pdf} } \subfigure[]{ \label{PaviaU-50-SAE_LR} \includegraphics[scale=0.25]{figure/map/PaviaU/50/SAE_LR.pdf} } \caption{Classification maps on the PaviaU data set (50 samples per class). \subref{PaviaU-50-Original} Original. \subref{PaviaU-50-S-DMM} S-DMM. 
\subref{PaviaU-50-3DCAE} 3DCAE. \subref{PaviaU-50-SSDL} SSDL. \subref{PaviaU-50-TwoCnn} TwoCnn. \subref{PaviaU-50-3DVSCNN} 3DVSCNN. \subref{PaviaU-50-SSLstm} SSLstm. \subref{PaviaU-50-CNN_HSI} CNN\_HSI. \subref{PaviaU-50-SAE_LR} SAE\_LR.} \label{PaviaU-50} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{PaviaU-100-Original} \includegraphics[scale=0.25]{figure/map/Original/PaviaU.pdf} } \subfigure[]{ \label{PaviaU-100-S-DMM} \includegraphics[scale=0.25]{figure/map/PaviaU/100/S-DMM.pdf} } \subfigure[]{ \label{PaviaU-100-3DCAE} \includegraphics[scale=0.25]{figure/map/PaviaU/100/3DCAE.pdf} } \subfigure[]{ \label{PaviaU-100-SSDL} \includegraphics[scale=0.25]{figure/map/PaviaU/100/SSDL.pdf} } \subfigure[]{ \label{PaviaU-100-TwoCnn} \includegraphics[scale=0.25]{figure/map/PaviaU/100/TwoCnn.pdf} } \subfigure[]{ \label{PaviaU-100-3DVSCNN} \includegraphics[scale=0.25]{figure/map/PaviaU/100/3DVSCNN.pdf} } \subfigure[]{ \label{PaviaU-100-SSLstm} \includegraphics[scale=0.25]{figure/map/PaviaU/100/SSLstm.pdf} } \subfigure[]{ \label{PaviaU-100-CNN_HSI} \includegraphics[scale=0.25]{figure/map/PaviaU/100/CNN_HSI.pdf} } \subfigure[]{ \label{PaviaU-100-SAE_LR} \includegraphics[scale=0.25]{figure/map/PaviaU/100/SAE_LR.pdf} } \caption{Classification maps on the PaviaU data set (100 samples per class). \subref{PaviaU-100-Original} Original. \subref{PaviaU-100-S-DMM} S-DMM. \subref{PaviaU-100-3DCAE} 3DCAE. \subref{PaviaU-100-SSDL} SSDL. \subref{PaviaU-100-TwoCnn} TwoCnn. \subref{PaviaU-100-3DVSCNN} 3DVSCNN. \subref{PaviaU-100-SSLstm} SSLstm. \subref{PaviaU-100-CNN_HSI} CNN\_HSI. \subref{PaviaU-100-SAE_LR} SAE\_LR.} \label{PaviaU-100} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{Salinas-10-Original} \includegraphics[scale=0.4]{figure/map/Original/Salinas.pdf} } \subfigure[]{ \label{Salinas-10-S-DMM} \includegraphics[scale=0.4]{figure/map/Salinas/10/S-DMM.pdf} } \subfigure[]{ \label{Salinas-10-3DCAE} \includegraphics[scale=0.4]{figure/map/Salinas/10/3DCAE.pdf} } \subfigure[]{ \label{Salinas-10-SSDL} \includegraphics[scale=0.4]{figure/map/Salinas/10/SSDL.pdf} } \subfigure[]{ \label{Salinas-10-TwoCnn} \includegraphics[scale=0.4]{figure/map/Salinas/10/TwoCnn.pdf} } \subfigure[]{ \label{Salinas-10-3DVSCNN} \includegraphics[scale=0.4]{figure/map/Salinas/10/3DVSCNN.pdf} } \subfigure[]{ \label{Salinas-10-SSLstm} \includegraphics[scale=0.4]{figure/map/Salinas/10/SSLstm.pdf} } \subfigure[]{ \label{Salinas-10-CNN_HSI} \includegraphics[scale=0.4]{figure/map/Salinas/10/CNN_HSI.pdf} } \subfigure[]{ \label{Salinas-10-SAE_LR} \includegraphics[scale=0.4]{figure/map/Salinas/10/SAE_LR.pdf} } \caption{Classification maps on the Salinas data set (10 samples per class). \subref{Salinas-10-Original} Original. \subref{Salinas-10-S-DMM} S-DMM. \subref{Salinas-10-3DCAE} 3DCAE. \subref{Salinas-10-SSDL} SSDL. \subref{Salinas-10-TwoCnn} TwoCnn. \subref{Salinas-10-3DVSCNN} 3DVSCNN. \subref{Salinas-10-SSLstm} SSLstm. \subref{Salinas-10-CNN_HSI} CNN\_HSI. 
\subref{Salinas-10-SAE_LR} SAE\_LR.} \label{Salinas-10} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{Salinas-50-Original} \includegraphics[scale=0.4]{figure/map/Original/Salinas.pdf} } \subfigure[]{ \label{Salinas-50-S-DMM} \includegraphics[scale=0.4]{figure/map/Salinas/50/S-DMM.pdf} } \subfigure[]{ \label{Salinas-50-3DCAE} \includegraphics[scale=0.4]{figure/map/Salinas/50/3DCAE.pdf} } \subfigure[]{ \label{Salinas-50-SSDL} \includegraphics[scale=0.4]{figure/map/Salinas/50/SSDL.pdf} } \subfigure[]{ \label{Salinas-50-TwoCnn} \includegraphics[scale=0.4]{figure/map/Salinas/50/TwoCnn.pdf} } \subfigure[]{ \label{Salinas-50-3DVSCNN} \includegraphics[scale=0.4]{figure/map/Salinas/50/3DVSCNN.pdf} } \subfigure[]{ \label{Salinas-50-SSLstm} \includegraphics[scale=0.4]{figure/map/Salinas/50/SSLstm.pdf} } \subfigure[]{ \label{Salinas-50-CNN_HSI} \includegraphics[scale=0.4]{figure/map/Salinas/50/CNN_HSI.pdf} } \subfigure[]{ \label{Salinas-50-SAE_LR} \includegraphics[scale=0.4]{figure/map/Salinas/50/SAE_LR.pdf} } \caption{Classification maps on the Salinas (50 samples per class). \subref{Salinas-50-Original} Original. \subref{Salinas-50-S-DMM} S-DMM. \subref{Salinas-50-3DCAE} 3DCAE. \subref{Salinas-50-SSDL} SSDL. \subref{Salinas-50-TwoCnn} TwoCnn. \subref{Salinas-50-3DVSCNN} 3DVSCNN. \subref{Salinas-50-SSLstm} SSLstm. \subref{Salinas-50-CNN_HSI} CNN\_HSI. \subref{Salinas-50-SAE_LR} SAE\_LR.} \label{Salinas-50} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{Salinas-100-Original} \includegraphics[scale=0.4]{figure/map/Original/Salinas.pdf} } \subfigure[]{ \label{Salinas-100-S-DMM} \includegraphics[scale=0.4]{figure/map/Salinas/100/S-DMM.pdf} } \subfigure[]{ \label{Salinas-100-3DCAE} \includegraphics[scale=0.4]{figure/map/Salinas/100/3DCAE.pdf} } \subfigure[]{ \label{Salinas-100-SSDL} \includegraphics[scale=0.4]{figure/map/Salinas/100/SSDL.pdf} } \subfigure[]{ \label{Salinas-100-TwoCnn} \includegraphics[scale=0.4]{figure/map/Salinas/100/TwoCnn.pdf} } \subfigure[]{ \label{Salinas-100-3DVSCNN} \includegraphics[scale=0.4]{figure/map/Salinas/100/3DVSCNN.pdf} } \subfigure[]{ \label{Salinas-100-SSLstm} \includegraphics[scale=0.4]{figure/map/Salinas/100/SSLstm.pdf} } \subfigure[]{ \label{Salinas-100-CNN_HSI} \includegraphics[scale=0.4]{figure/map/Salinas/100/CNN_HSI.pdf} } \subfigure[]{ \label{Salinas-100-SAE_LR} \includegraphics[scale=0.4]{figure/map/Salinas/100/SAE_LR.pdf} } \caption{Classification maps on the Salinas data set (100 samples per class). \subref{Salinas-100-Original} Original. \subref{Salinas-100-S-DMM} S-DMM. \subref{Salinas-100-3DCAE} 3DCAE. \subref{Salinas-100-SSDL} SSDL. \subref{Salinas-100-TwoCnn} TwoCnn. \subref{Salinas-100-3DVSCNN} 3DVSCNN. \subref{Salinas-100-SSLstm} SSLstm. \subref{Salinas-100-CNN_HSI} CNN\_HSI. 
\subref{Salinas-100-SAE_LR} SAE\_LR.} \label{Salinas-100} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{KSC-10-Original} \includegraphics[scale=0.17]{figure/map/Original/KSC.pdf} } \subfigure[]{ \label{KSC-10-S-DMM} \includegraphics[scale=0.17]{figure/map/KSC/10/S-DMM.pdf} } \subfigure[]{ \label{KSC-10-3DCAE} \includegraphics[scale=0.17]{figure/map/KSC/10/3DCAE.pdf} } \subfigure[]{ \label{KSC-10-SSDL} \includegraphics[scale=0.17]{figure/map/KSC/10/SSDL.pdf} } \subfigure[]{ \label{KSC-10-TwoCnn} \includegraphics[scale=0.17]{figure/map/KSC/10/TwoCNN.pdf} } \subfigure[]{ \label{KSC-10-3DVSCNN} \includegraphics[scale=0.17]{figure/map/KSC/10/3DVSCNN.pdf} } \subfigure[]{ \label{KSC-10-SSLstm} \includegraphics[scale=0.17]{figure/map/KSC/10/SSLstm.pdf} } \subfigure[]{ \label{KSC-10-CNN_HSI} \includegraphics[scale=0.17]{figure/map/KSC/10/CNN_HSI.pdf} } \subfigure[]{ \label{KSC-10-SAE_LR} \includegraphics[scale=0.17]{figure/map/KSC/10/SAE_LR.pdf} } \caption{Classification maps on the KSC data set (10 samples per class). \subref{KSC-10-Original} Original. \subref{KSC-10-S-DMM} S-DMM. \subref{KSC-10-3DCAE} 3DCAE. \subref{KSC-10-SSDL} SSDL. \subref{KSC-10-TwoCnn} TwoCnn. \subref{KSC-10-3DVSCNN} 3DVSCNN. \subref{KSC-10-SSLstm} SSLstm. \subref{KSC-10-CNN_HSI} CNN\_HSI. \subref{KSC-10-SAE_LR} SAE\_LR.} \label{KSC-10} \end{figure} \begin{figure}[hbpt] \centering \subfigure[]{ \label{KSC-50-Original} \includegraphics[scale=0.17]{figure/map/Original/KSC.pdf} } \subfigure[]{ \label{KSC-50-S-DMM} \includegraphics[scale=0.17]{figure/map/KSC/50/S-DMM.pdf} } \subfigure[]{ \label{KSC-50-3DCAE} \includegraphics[scale=0.17]{figure/map/KSC/50/3DCAE.pdf} } \subfigure[]{ \label{KSC-50-SSDL} \includegraphics[scale=0.17]{figure/map/KSC/50/SSDL.pdf} } \subfigure[]{ \label{KSC-50-TwoCnn} \includegraphics[scale=0.17]{figure/map/KSC/50/TwoCNN.pdf} } \subfigure[]{ \label{KSC-50-3DVSCNN} \includegraphics[scale=0.17]{figure/map/KSC/50/3DVSCNN.pdf} } \subfigure[]{ \label{KSC-50-SSLstm} \includegraphics[scale=0.17]{figure/map/KSC/50/SSLstm.pdf} } \subfigure[]{ \label{KSC-50-CNN_HSI} \includegraphics[scale=0.17]{figure/map/KSC/50/CNN_HSI.pdf} } \subfigure[]{ \label{KSC-50-SAE_LR} \includegraphics[scale=0.17]{figure/map/KSC/50/SAE_LR.pdf} } \caption{Classification maps on the KSC data set (50 samples per class). \subref{KSC-50-Original} Original. \subref{KSC-50-S-DMM} S-DMM. \subref{KSC-50-3DCAE} 3DCAE. \subref{KSC-50-SSDL} SSDL. \subref{KSC-50-TwoCnn} TwoCnn. \subref{KSC-50-3DVSCNN} 3DVSCNN. \subref{KSC-50-SSLstm} SSLstm. \subref{KSC-50-CNN_HSI} CNN\_HSI. 
\subref{KSC-50-SAE_LR} SAE\_LR.} \label{KSC-50} \end{figure}
\begin{figure}[hbpt] \centering
\subfigure[]{ \label{KSC-100-Original} \includegraphics[scale=0.17]{figure/map/Original/KSC.pdf} }
\subfigure[]{ \label{KSC-100-S-DMM} \includegraphics[scale=0.17]{figure/map/KSC/100/S-DMM.pdf} }
\subfigure[]{ \label{KSC-100-3DCAE} \includegraphics[scale=0.17]{figure/map/KSC/100/3DCAE.pdf} }
\subfigure[]{ \label{KSC-100-SSDL} \includegraphics[scale=0.17]{figure/map/KSC/100/SSDL.pdf} }
\subfigure[]{ \label{KSC-100-TwoCnn} \includegraphics[scale=0.17]{figure/map/KSC/100/TwoCNN.pdf} }
\subfigure[]{ \label{KSC-100-3DVSCNN} \includegraphics[scale=0.17]{figure/map/KSC/100/3DVSCNN.pdf} }
\subfigure[]{ \label{KSC-100-SSLstm} \includegraphics[scale=0.17]{figure/map/KSC/100/SSLstm.pdf} }
\subfigure[]{ \label{KSC-100-CNN_HSI} \includegraphics[scale=0.17]{figure/map/KSC/100/CNN_HSI.pdf} }
\subfigure[]{ \label{KSC-100-SAE_LR} \includegraphics[scale=0.17]{figure/map/KSC/100/SAE_LR.pdf} }
\caption{Classification maps on the KSC data set (100 samples per class). \subref{KSC-100-Original} Original. \subref{KSC-100-S-DMM} S-DMM. \subref{KSC-100-3DCAE} 3DCAE. \subref{KSC-100-SSDL} SSDL. \subref{KSC-100-TwoCnn} TwoCnn. \subref{KSC-100-3DVSCNN} 3DVSCNN. \subref{KSC-100-SSLstm} SSLstm. \subref{KSC-100-CNN_HSI} CNN\_HSI. \subref{KSC-100-SAE_LR} SAE\_LR.} \label{KSC-100} \end{figure}
\subsection{Model parameters}
To further explore the reasons why the models achieve different results on the benchmark data sets, we also counted the number of trainable parameters of each framework (including the decoder module) on the different data sets, as shown in Table \ref{AMOUNT}. On all data sets, the model with the fewest trainable parameters is SAE\_LR, followed by CNN\_HSI, while TwoCnn has the most. SAE\_LR is the most lightweight architecture of all the models because it consists only of simple linear layers, but its performance is poor. Different from other 2D convolution approaches in HSI, CNN\_HSI solely uses $1\times 1$ kernels to process an image. Moreover, it uses a $1\times 1$ convolution layer as the classifier instead of a linear layer, which greatly reduces the number of trainable parameters. The next smallest is S-DMM. This also explains why S-DMM and CNN\_HSI are less affected by the increase in sample size yet remain very effective on few samples. Additionally, overfitting is of little concern in these approaches. Stacking the spectral and spatial features to generate the final fused feature is the main reason for the large number of parameters of TwoCnn. However, despite having millions of trainable parameters, TwoCnn works well on limited samples because transfer learning reduces the number of parameters that actually need to be trained and achieves good performance on all target data sets. After TwoCnn, the models with the most parameters are 3DCAE and SSLstm. The trainable parameters of 3DCAE are up to eight times those of SSDL, even though SSDL contains not only a 1D autoencoder in the spectral branch but also a spatial branch based on a 2D convolutional network; nevertheless, 3DCAE still performs worse than SSDL. Although 3D convolution and pooling modules can largely avoid the loss of structural information caused by the flattening operation, the complexity of the 3D structure and the symmetric structure of the autoencoder increase the number of model parameters, which makes the model easy to overfit.
3DVSCNN also uses a 3D convolutional module but performs better than 3DCAE; it first reduces the number of redundant bands by PCA. A similar step could also be applied to 3DCAE to decrease the number of model parameters and make good use of the characteristics of 3D convolution, which extracts spectral and spatial information simultaneously. The main contribution to the parameter count of SSLstm comes from the spatial branch. Although the gate structure of the LSTM improves the model's long- and short-term memory capabilities, it increases the complexity of the model, and when the number of hidden units increases, the number of parameters also grows rapidly. Perhaps it is the coupling between the spectral features and the recurrent network that keeps the performance of SSLstm from being as poor as that of 3DCAE, which has a similar number of parameters; SSLstm even achieved superior results on KSC, although it adopts no dedicated technique for the few-sample problem. This finding also shows that supervised learning is better than unsupervised learning in some tasks.
\begin{table}[htbp] \centering \caption{The number of trainable parameters} \begin{tabular}{crrr} \toprule &PaviaU&Salinas&KSC \\ \midrule S-DMM&33921&40385&38593\\ 3DCAE&256563&447315&425139 \\ SSDL&35650&48718&44967 \\ TwoCnn&1379399&1542206&1501003\\ 3DVSCNN&42209&42776&227613\\ SSLstm&367506&370208&401818\\ CNN\_HSI&22153&33536&31753\\ SAE\_LR&\textbf{21426}&\textbf{5969}&\textbf{5496}\\ \bottomrule \end{tabular} \label{AMOUNT} \end{table}
\subsection{The speed of model convergence}
In addition, we compare the convergence speed of the models according to the changes in training loss during the first 200 epochs of each group of experiments (see Figure~\ref{PAVIAU-CURVE}$\sim$\ref{KSC-CURVE}). Because the autoencoder and classifier of 3DCAE are trained separately, and all data are used when training the autoencoder, it is not comparable to the other models. Therefore, it is not listed here. On all data sets, S-DMM has the fastest convergence speed: after approximately 3 epochs, the training loss tends to become stable, owing to its small number of parameters. Although CNN\_HSI has a performance similar to S-DMM and fewer parameters, it converges more slowly than S-DMM and its learning curve is sometimes accompanied by oscillations. The second fastest to converge is TwoCnn, mainly because transfer learning provides better initial parameters and it actually has fewer parameters that require training; thus, it needs only a few epochs of fine-tuning on the target data set. Moreover, the training curves of most models stabilize after 100 epochs. The training loss of SSLstm shows severe oscillations on all data sets. This is especially noticeable in SeLstm, the spectral branch, where the loss sometimes has difficulty decreasing. When the sequence is very long, the recurrent neural network is more susceptible to vanishing or exploding gradients, and the pixels of a hyperspectral image usually contain hundreds of bands, which is why the training loss decreases with difficulty or oscillates in SeLstm. The spatial branch does not suffer from this problem as severely because the length of the spatial sequence, which depends on the patch size, is shorter than the spectral sequence. During training, the LSTM-based model also took a considerable amount of time because it cannot be trained in parallel along the sequence.
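The trainable-parameter counts in Table~\ref{AMOUNT} can be obtained directly from the framework implementations. The following is a minimal PyTorch sketch; the \texttt{build\_model} factory is a hypothetical placeholder of ours and not part of any of the compared codebases:
\begin{verbatim}
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    # Sum the element counts of all tensors updated by the optimizer.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical usage: PaviaU has 103 spectral bands and 9 classes.
# model = build_model('CNN_HSI', in_bands=103, n_classes=9)
# print(count_trainable_parameters(model))
\end{verbatim}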
\begin{figure}[hbpt] \centering
\subfigure[]{ \label{PAVIAU-CURVE-10} \includegraphics[width=0.46\textwidth]{figure/curve/PaviaU/sample10.pdf} }
\subfigure[]{ \label{PAVIAU-CURVE-50} \includegraphics[width=0.46\textwidth]{figure/curve/PaviaU/sample50.pdf} }
\subfigure[]{ \label{PAVIAU-CURVE-100} \includegraphics[width=0.46\textwidth]{figure/curve/PaviaU/sample100.pdf} }
\caption{Training Loss on the PaviaU data set. \subref{PAVIAU-CURVE-10} 10 samples per class. \subref{PAVIAU-CURVE-50} 50 samples per class. \subref{PAVIAU-CURVE-100} 100 samples per class.} \label{PAVIAU-CURVE} \end{figure}
\begin{figure}[hbpt] \centering
\subfigure[]{ \label{SALINAS-CURVE-10} \includegraphics[width=0.46\textwidth]{figure/curve/Salinas/sample10.pdf} }
\subfigure[]{ \label{SALINAS-CURVE-50} \includegraphics[width=0.46\textwidth]{figure/curve/Salinas/sample50.pdf} }
\subfigure[]{ \label{SALINAS-CURVE-100} \includegraphics[width=0.46\textwidth]{figure/curve/Salinas/sample100.pdf} }
\caption{Training Loss on the Salinas data set. \subref{SALINAS-CURVE-10} 10 samples per class. \subref{SALINAS-CURVE-50} 50 samples per class. \subref{SALINAS-CURVE-100} 100 samples per class.} \label{SALINAS-CURVE} \end{figure}
\begin{figure}[hbpt] \centering
\subfigure[]{ \label{KSC-CURVE-10} \includegraphics[width=0.46\textwidth]{figure/curve/KSC/sample10.pdf} }
\subfigure[]{ \label{KSC-CURVE-50} \includegraphics[width=0.46\textwidth]{figure/curve/KSC/sample50.pdf} }
\subfigure[]{ \label{KSC-CURVE-100} \includegraphics[width=0.46\textwidth]{figure/curve/KSC/sample100.pdf} }
\caption{Training Loss on the KSC data set. \subref{KSC-CURVE-10} 10 samples per class. \subref{KSC-CURVE-50} 50 samples per class. \subref{KSC-CURVE-100} 100 samples per class.} \label{KSC-CURVE} \end{figure}
\section{Conclusions} \label{conclutions}
In this paper, we introduce the current research difficulty in the field of hyperspectral image classification, namely few samples, and discuss popular learning frameworks. Furthermore, we introduce several popular learning algorithms for the small-sample problem, such as autoencoders, few-shot learning, transfer learning, active learning, and data augmentation. Based on these methods, we select some representative models and conduct experiments on hyperspectral benchmark data sets. We designed three different experiments to explore the performance of the models on small-sample data sets, documented how their results change with increasing sample size, and evaluated their effectiveness and robustness through AA and OA. We also compared the number of parameters and the convergence speed of the various models to further analyze their differences. Finally, we highlight several possible future directions for hyperspectral image classification with small samples: \begin{itemize} \item Autoencoders, including linear autoencoders and 3D convolutional autoencoders, have been widely explored and applied to the small-sample problem in HSI. Nevertheless, their performance is still far from satisfactory. Future development should therefore focus on few-shot learning, transfer learning, and active learning. \item Several learning paradigms can be fused to exploit the advantages of each approach. For example, a fusion of transfer learning and active learning can select the most valuable samples on the source data set and then transfer the model to the target data set, avoiding imbalance in the class sample sizes.
\item According to the experimental results, the RNN is also suitable for hyperspectral image classification. However, little work has focused on combining the learning paradigms with RNNs. Recently, the transformer, an alternative to the RNN that is capable of processing sequences in parallel, has been introduced into the computer vision domain and has achieved good performance on tasks such as object detection. Therefore, this method could also be employed in hyperspectral image classification and combined with the learning paradigms discussed above. \item Graph convolutional networks have been attracting growing interest in hyperspectral image classification. Fully connected, convolutional, and recurrent networks are only suitable for processing Euclidean data and cannot handle non-Euclidean data directly, and an image can be regarded as a special case of Euclidean data. Thus, many studies~\cite{wan2019multiscale, liu2020semisupervised, wan2020hyperspectral} utilize graph convolutional networks to classify HSI. \item The reason deep learning models require a large number of labeled samples is their tremendous number of trainable parameters. Many methods, such as group convolution~\cite{howard2017mobilenets}, have been proposed to make deep neural networks lightweight. Hence, how to construct even lighter-weight models is also a future direction. \end{itemize} Although classification with few labels can save much of the time and labor needed to collect and annotate diverse samples, the resulting models easily suffer from overfitting and weak generalization. Thus, how to avoid overfitting and improve a model's generalization is the major challenge for the practical application of HSI classification with few labels. \section*{Acknowledgments} The work is partly supported by the National Natural Science Foundation of China (Grant No. 61976144). \bibliographystyle{elsarticle-num}
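For completeness, the two summary metrics used throughout the result tables above, overall accuracy (OA) and average accuracy (AA), can be computed from a per-class confusion matrix. This is a minimal sketch with variable names of our own choosing:
\begin{verbatim}
import numpy as np

def oa_aa(confusion: np.ndarray):
    """confusion[i, j] = number of samples of true class i predicted as class j."""
    per_class = np.diag(confusion) / confusion.sum(axis=1)  # class-wise accuracy
    oa = np.trace(confusion) / confusion.sum()              # overall accuracy
    aa = per_class.mean()                                    # average accuracy
    return oa, aa
\end{verbatim}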
\section{Introduction} Computed Tomography Angiography (CTA) is a commonly used modality in many clinical scenarios including the diagnosis of ischemic strokes. An ischemic stroke is caused by an occluded blood vessel resulting in a lack of oxygen in the affected brain parenchyma. An occlusion in the internal carotid artery (ICA), proximal middle cerebral artery (MCA) or basilar artery is often referred to as large vessel occlusion (LVO). These LVOs are visible in CTA scans as a discontinuity of contrast agent in the vascular tree, which is a complex system of arteries and veins and varies from patient to patient. Consequently, the diagnosis takes time and requires expertise. Clinics and patients would therefore benefit from an automated classification of LVOs on CTA scans. Prior research in this field is described in the literature. Amukotuwa et al.~detected LVOs in CTA scans with a pipeline consisting of 14 steps and tested their commercially available algorithm on two different data cohorts, reporting a performance of 0.86 ROC AUC in the first trial with 477 patients \cite{amukotuwa2019automated} and 0.94 in the second trial \cite{amukotuwa2019fast} with 926 patients. Stib et al.~\cite{stib2020detecting} computed maximum-intensity projections of segmentations of the vessel tree based on multi-phase CTA scans (three CTA scans covering the arterial, peak venous and late venous phases), and trained a 2D-DenseNet \cite{huang2017densely} on 424 patients to classify the presence of an LVO. They report ROC AUC values between 0.74 and 0.85 depending on the phase. Luijten et al.'s work \cite{luijten2021diagnostic} investigated the performance of another commercially available LVO detection algorithm based on a Convolutional Neural Network (CNN) and determined a ROC AUC of 0.75 on 646 test patients. In all studies, very large data cohorts were available. This appears mandatory to train (and test) AI-based detection algorithms, since in case-wise classification the number of training samples equals the number of available patients. In this work we present a data-efficient method that achieves performance comparable to what is seen in related work while relying on only 100 data sets for training. \section{Materials and methods} \subsection{Data} Altogether, 168 thin-sliced ($0.5$ to $1$\,mm) head CTA data sets were available. Of these, 109 patients were LVO positive due to an occlusion either in the middle cerebral artery or the internal carotid artery. Regarding the affected hemisphere, 54 (52) LVOs were located on the left (right) side. The data was acquired from a single site with a Somatom Definition AS+ (Siemens Healthineers, Forchheim, Germany). \subsection{Methodology} The method we propose is based on the idea of aggressively augmenting the vessel tree segmentations in order to artificially extend the amount of trainable data. The classification pipeline itself (Fig.~\ref{fig:pipe}) consists of three consecutive steps. In the first step, the cerebrovascular tree is segmented using the segmentation approach published by Thamm et al.~\cite{thamm2020virtualdsa}. Additionally, the algorithm prunes the vessel tree to the relevant arteries by masking out all vascular structures which lie more than 150\,mm away from the Circle of Willis in terms of walking distance (geodesic distance w.r.t.~vessel center lines). Thereby, veins that are not relevant for the diagnosis of LVOs, such as the sinus sagittalis, are mostly excluded from further processing.
In the second step, the original CTA scan is non-rigidly registered to the probabilistic brain atlas by Kemmling et al. \cite{kemmling2012decomposing}. The registration is based on Chefd'hotel et al.'s method \cite{chefd2002flows} but may be done using other, publicly available registration methods as well. The resulting deformation field is used to transform the segmentation mask into the atlas coordinate system. An accurate registration between an atlas and the head scan is not crucial in our work as variations in the vessel tree are present in all patients anyway. Once the volumes are in the atlas coordinate space, they are equally sized with 182 $\times$ 205 $\times$ 205 voxels with isotropic spacing of 1\,mm in all dimensions. The primary purpose of the registration is to consistently orient and somewhat anatomically ``normalize'' the segmentation for the next step, in which a convolutional neural network classifies the presence of an LVO. The network receives the binary segmentation masks volume-wise and predicts a softmax-activated vector of length 3, representing the three classes: no LVO, left LVO and right LVO. In our work, we tested a 3D version of DenseNet \cite{hara3dcnns} ($\approx$ 4.6m parameters) and EfficientNetB1 \cite{tan2019efficientnet} ($\approx$ 6.5m parameters) where the channel dimension has been repurposed as the z-axis. Cross entropy serves as the loss, optimized with Adam on PyTorch 1.6 \cite{pytorch} and Python 3.8.5. \begin{figure}[b] \centering \caption{Proposed pipeline. First, the CTA volume is registered to a probabilistic brain atlas. The cerebrovascular vessel tree is then segmented and transformed into the atlas coordinate system by applying the resulting deformation vector field. For augmentation purposes, the vessel tree segmentation masks are elastically deformed for training only. A network predicts the three mutually exclusive classes: No LVO, left or right LVO.} \label{fig:pipe} \includegraphics[width = \textwidth]{3214_Picture2.eps} \end{figure} \subsection{Augmentation} \label{sec:augmentation} From patient to patient, the cerebrovascular anatomy roughly follows the same structure. However, anatomical variations (e.g.,~an absent ICA or an accessory MCA) combined with the individual course of the vessels lead to a wide variety of configurations in intracranial vascular systems such that no vessel tree is quite like another. Considering this, augmentation can be used to artificially generate more vessel trees and, from the network's perspective, visually new patients. Therefore, we propose to elastically and randomly deform the segmentation masks for training. While the use of elastic deformation for augmentation per se is not a novel technique \cite{nalepa2019data}, in our setup we are uniquely able to apply it much more aggressively than otherwise possible, enabling us to dramatically increase its benefit compared to typical use cases. This is possible due to the fact that only vessel segmentations are used as input for our CNN-based classifier model. Whereas strong deformations on a conventional image volume will quickly introduce resampling artifacts that render the image unrealistic, masks remain visually comparable to the original samples even when heavily deformed. As the segmentation is performed on full volumes, an online augmentation, i.e.~deforming while the network is trained, is computationally too expensive and would increase the training time to an impractical level.
Instead, we suggest elastically deforming the segmentation masks prior to training, using a fixed number of random deformation fields for each original volume. As masks, unlike regular image volumes, are highly compressible, this does not notably increase the data storage requirements typically associated with such an approach. In this work we aim to demonstrate the impact of this data augmentation on the performance of LVO classification. Using the RandomElasticDeformation of TorchIO \cite{torchio}, which interpolates displacements with cubic B-splines, we randomly deformed each segmentation mask 10 times with 4 and 10 times with 5 random anchors, all 20 augmentations with a maximal displacement of 90 voxels (examples in Fig.~\ref{fig:deform}). Additionally, we mirror the original data sets sagittally and again apply the above procedure to create 20 variants, which flips the right/left labels but has no effect if no LVO is present. We thus create 40 samples out of one volume, resulting in a total of 6720 vessel tree samples generated from 168 patients. \begin{figure}[b] \centering \caption{Two examples with four augmentations of the original tree on the left, all viewed axially caudal with the same camera parameters. The upper row shows a case with an occlusion in the left middle cerebral artery, indicated by the arrow. The lower row shows an LVO-negative vessel tree. Instead of binary masks, surface meshes of the deformed masks were rendered for a sharper and clearer visualization.} \label{fig:deform} \includegraphics[width = \textwidth]{3214_deformations_withLVO3.png} \end{figure} \section{Results} We investigated the impact of the elastic augmentations in an ablation study considering two architectures, where we systematically disable features (deformation and mirroring). As a fully 3D variant we evaluated the 3D-DenseNet architecture \cite{hara3dcnns}, and as a 2D variant, in which the channel axis of the input is used for the axial ($z$) dimension, the EfficientNetB1 architecture \cite{tan2019efficientnet}. A 5-fold cross validation setup with a 3-1-1 split ratio for training, validation and testing was conducted, where, on average, 100 original data sets were used for training per cycle. The baseline (no augmentation) was trained for 200 epochs, a variant using the original and the deformed, but not mirrored data sets was trained for 100 epochs, and finally, as the proposed setting, models were trained for 50 epochs using the original, the deformed and mirrored data. Epoch numbers differ as there are more samples available when augmentation is used. All models overfitted by the end of their allotted epochs. The validation loss was used to pick the best performing network out of all epochs. The test data was not augmented to provide a fair comparison between all setups. Both architectures significantly benefit from the deformation-based augmentation (Tab.~\ref{tab:results}); in particular, EfficientNet failed to grasp the problem at all without it. The 3D-DenseNet trained with deformed and mirrored data sets outperformed the other setups by a significant margin, especially in detecting LVOs and left LVOs. Depending on the chosen threshold, this variant achieved a sensitivity of 80\% (or 90\%) and a specificity of 82\% (or 60\% respectively) for the detection of LVOs. \begin{table}[t] \caption{ROC AUCs with 95\% confidence intervals (by bootstrapping) for the 3D-DenseNet and EfficientNetB1 architecture. ``D'' stands for ``deformation'' and ``M'' for ``mirroring''.
To compute ``AUC Left'', the right and no-LVO class were combined to one class enabling a binary classification. ``AUC Right'' was calculated analogously.} \label{tab:results} \begin{tabular*}{\textwidth}{l@{\extracolsep\fill}lll} \hline Setup & AUC LVO & AUC Left & AUC Right \\ \hline 3D-DenseNet + D + M & \textbf{0.87} {[}0.81, 0.92{]} & \textbf{0.93} {[}0.87, 0.97{]} & 0.93 {[}0.88, 0.97{]} \\ 3D-DenseNet + D & 0.84 {[}0.77, 0.90{]} & 0.89 {[}0.84, 0.94{]} & \textbf{0.94} {[}0.90, 0.97{]} \\ 3D-DenseNet & 0.77 {[}0.69, 0.84{]} & 0.85 {[}0.78, 0.91{]} & 0.85 {[}0.78, 0.92{]} \\ EfficientNetB1 + D + M & 0.85 {[}0.79, 0.90{]} & 0.86 {[}0.79, 0.91{]} & 0.90 {[}0.84, 0.96{]} \\ EfficientNetB1 + D & 0.83 {[}0.77, 0.89{]} & 0.85 {[}0.79, 0.91{]} & 0.91 {[}0.85, 0.96{]} \\ EfficientNetB1 & 0.56 {[}0.47, 0.65{]} & 0.59 {[}0.49, 0.68{]} & 0.68 {[}0.58, 0.78{]} \\ \hline \end{tabular*} \end{table} \section{Discussion} We presented a method for automated classification of LVOs based on CTA data which makes heavy use of deformation fields for augmentation. With an AUC of 0.87 for LVO detection, we achieved a performance comparable to that of other DL-based approaches while using as few as 100 patient data sets for training. While not novel in itself, elastic deformation for the purpose of augmentation could be applied much more aggressively in our setup compared to regular use cases as our model relies exclusively on segmented vessel tree masks as input; for these, even strong deformations---that would cause severe resampling artifacts when applied to regular image volumes---still lead to anatomically meaningful representations that are virtually indistinguishable from real samples. In an ablation study we showed that the performed augmentation was crucial to properly learn the task at hand from a small number of data sets. This leads us to the conclusion that a learning-based detection of LVOs stands and falls with the number of training data sets. The cerebrovascular system is highly patient-specific, which is why the use of sophisticated augmentation techniques offers great potential. We postulate that also larger data pools could benefit from more extensive data augmentation if applied meaningfully. \bibliographystyle{bvm}
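For illustration, the offline augmentation described in Sec.~\ref{sec:augmentation} can be sketched with a few lines of TorchIO. This is a minimal sketch rather than the authors' implementation: the file names are placeholders, we assume that the ``anchors'' correspond to TorchIO's \texttt{num\_control\_points}, and the maximal displacement of 90 voxels is passed as 90\,mm, which is equivalent at the 1\,mm isotropic atlas spacing.
\begin{verbatim}
import torchio as tio

# Placeholder file name; the vessel-tree mask is assumed to be in atlas space.
mask = tio.LabelMap('vessel_mask_atlas.nii.gz')
mirror = tio.RandomFlip(axes=('LR',), flip_probability=1.0)  # sagittal mirroring

variants = []
for base in (mask, mirror(mask)):             # original and mirrored tree
    for anchors in (4, 5):                    # "4 and 5 random anchors"
        deform = tio.RandomElasticDeformation(
            num_control_points=anchors,       # B-spline control points per axis
            max_displacement=90,              # ~90 voxels at 1 mm isotropic spacing
        )
        for _ in range(10):                   # 10 random fields per setting
            variants.append(deform(base))     # label maps use nearest-neighbour resampling

# 40 precomputed augmentations per patient, written to disk for offline training
for i, aug in enumerate(variants):
    aug.save(f'augmented_{i:02d}.nii.gz')
\end{verbatim}
Each patient thus contributes 40 precomputed masks, consistent with the 6720 samples obtained from the 168 patients above.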
\section{Introduction} Assembly of galaxies in the early universe is a matter of intense debate in current astrophysics. Among others, the formation of bulges is a key ingredient which brought about the morphological diversity of the present-day disc galaxies. Despite much effort in clarifying the bulge formation process from both observational and theoretical perspectives, we are still far from a satisfactory understanding of this important piece of galaxy formation. Complex structures and kinematics of galactic bulges, especially the dichotomization into classical and pseudo bulges, suggest the contribution of several mechanisms in their formation process \citep{ko04}. Classical bulges are usually linked to early formation by direct collapse \citep{la76,zo15} and/or minor galaxy mergers \citep{ho09}. Pseudo bulges are often alleged to be the product of the secular formation processes from the disc material, such as gas infall induced by galactic bars \citep[e.g.][]{at92}, the bending instability of the bars themselves \citep[e.g.][]{ra91}, and inward migration of massive clumps formed in gas-rich young galactic discs \citep[e.g.][]{no98,no99,in12,bo08}. Any consistent theory of bulge formation must explain the observed properties of other galactic components in the same framework. In seeking such a picture, we are working on the galaxy evolution model based on the cold accretion picture for gas accretion onto forming galaxies \citep[e.g.][]{fa01,ke05,de06}. \citet{no20} suggested a new possibility that the bulge formation is fueled by the cold gas streams characteristic of the halo gas in massive galaxies at high redshifts. It was found that this picture can reproduce the observed trend that the mass fraction of the bulge relative to the total stellar mass of the galaxy increases with the galaxy mass. We here report that the same model can also explain the observed age structures of galactic bulges, namely the mass dependence of the mean stellar age and the age difference within the bulge region. \begin{figure*} \includegraphics[width=0.6\linewidth]{Fig1} \caption{ Evolution of the virial mass is indicated by solid lines for eight models analyzed in the present study overlaid on the three domains for the different gas states. Black circles on each evolution path indicate the two epochs between which the cold gas in Domain H arrives at the disc plane. Red circles indicate, in increasing size, the times $t_{\rm arr}+t_{\rm dyn}$, $t_{\rm arr}+10t_{\rm dyn}$, and $t_{\rm arr}+20t_{\rm dyn}$. Here, $t_{\rm dyn} \equiv (R_{\rm gal}^3/G M_{\rm gal})^{1/2}$, where the galaxy radius is set to be $R_{\rm gal} = 0.1 R_{\rm vir}$ considering the high spin parameter $\sim 0.1$ at high redshift \citep{da15}, and the galaxy mass $M_{\rm gal}$ includes all stars and the portion of dark matter within the galaxy radius. The peak masses for the blue nuggets observed by \citet{hu18} for three different redshifts are shown by green crosses, whereas the blue diamonds are the maximum compaction in three galaxies in the VELA simulation by \citet{zo15}. The virial masses for the observed BNs are derived by using the stellar-to-halo mass ratio (SHMR) by \citet{ro17} for the corresponding redshifts. The virial masses for the simulated galaxies are extrapolated along the expected evolutionary tracks (black lines) from the cited values at $z=2$.
\label{Fig.1}} \end{figure} \section{Models} The cold accretion theory has been proposed on the basis of realistic simulations for thermal and hydrodynamical evolution of the primordial gas in the cold dark matter universe \citep[e.g.][]{fa01,bi03,ke05,de06,oc08,va12,ne13}. It states that the intergalactic gas flows into the hierarchically growing dark matter halos in an unheated state and fuels the forming galaxies except in the most massive halos in recent cosmological epochs, where the cooling flow of the shock heated halo gas prevails. This picture represents a major modification to the long-standing paradigm which argues that the heating by shock waves is the universal behaviour of the gas that enters growing dark matter halos \citep[e.g.][]{re77}. This new scenario provides possible solutions to several observations unexplained in the shock-heating theory, including the existence of abundant luminous galaxies at high redshifts and very red colors (and therefore complete quenching of star formation) of present massive elliptical galaxies \citep[e.g.][]{ca06,de09}. Recently, the application of this scenario was extended to subgalactic scales. \citet{no20} examined the morphological buildup of disc galaxies under the cold accretion while \citet{no18} tried to explain the chemical bimodality observed in the Milky Way disc stars \citep[e.g.][]{ad12,ha16,qu20}. Especially, \citet{no20} succeeded in reproducing the structural variation of disc galaxies as a function of the galaxy mass that is revealed by the photometric decomposition of stellar contents into thin and thick discs and bulges \citep[e.g.][]{yo06,co14}. Examinations in the present work are based on the same evolution model as employed in \citet{no20}, but we here concentrate on the age structures of the bulges and compare the model with the currently available observational data. The cold accretion theory which underlies the present study predicts three different regimes for the properties of the gas distributed in the dark matter halos depending on the virial mass and the redshift \citep[e.g.][]{de06,oc08}. It introduces two characteristic mass scales: $M_{\rm shock}$ above which the halo gas develops a stable shock that heats the gas nearly to the virial temperature and $M_{\rm stream}$ below which part of the halo gas remains cold and is confined into narrow filaments that thread the smoothly distributed shock-heated gas. The latter mass scale is valid only for high redshifts. These mass scales demarcate three different regions as depicted in Fig.1. The halo gas in Domain F is unheated and expected to accrete in free-fall to the inner region (the disc plane). In Domain G, the gas heated to the virial temperature attains near hydrostatic equilibrium in the halo gravitational field and the radiative cooling induces a cooling flow to the center with the cooling timescale. The gas behaviour is not so clear in Domain H, where cold gas streams coexist with the surrounding shock-heated hot gas. They may behave independently and accrete with their own timescales or they may interact with each other leading to a modification of accretion timescales. Because no detailed information is available, we assume that the cold and hot gases in Domain H accrete with the free-fall time and the radiative cooling time, respectively. In \citet{no20}, the one-to-one correspondence was assumed between the gas components in this diagram and the three galactic mass components of disc galaxies.
Namely, the cold gas in Domain F produces thick discs, and thin discs are formed from the hot gas in Domain G. This part of the correspondence is supported from the chemical point of view because it gives a satisfactory reproduction of the stellar abundance distribution for the Milky Way thin and thick discs \citep{no18}. The hot gas in Domain H is assumed to result in additional thin discs, while the cold gas produces bulges. This whole hypothesis can reproduce the observed variation in the mass ratios of thin discs, thick discs, and bulges with the galaxy mass as shown in \citet{no20}. The cosmological simulation of \citet{br09} also suggests the correspondence between the gas properties and the resultant galactic components similar to the one assumed in \citet{no20}. The existence of surrounding hot gas, characteristic of Domain H, may indeed provide a favourable condition for bulge formation from the embedded cold gas. \citet{bi16} have shown that the external pressure resulting from AGN feedback triggers active star formation in galactic discs by promoting the formation of massive clumps in the destabilized discs. The simulation by \citet{du19} may provide another relevant result. It shows that the ram pressure of the hot gas around massive galaxies exerted on plunging dwarf galaxies confines their metal-rich gas produced by supernovae and stellar winds, leading to subsequent star formation. The hot halo gas in Domain H thus may help clumps formed in the cold gas streams survive until they create centrally concentrated stellar systems, for example by radial migration. Based on these considerations, we adopt the same correspondence hypothesis as in \citet{no20} in this study. The locations of the borders of the three domains shown in Fig.1 actually depend on the detailed physical state (e.g., metallicity, temperature) of the gas infalling into dark matter halos, which is only poorly constrained by observations \citep[e.g.][]{de06,oc08}. We use the same configuration as adopted in \citet{no20} because it leads to good reproduction of the observed mass fractions of the thin discs, thick discs, and bulges as a function of the galaxy mass observed by \citet{yo06} and \citet{co14}. To be specific, we assume that $M_{\rm shock} = 1.5 \times 10^{11} M_\odot$ and $\log M_{\rm stream} = 9.2+1.067z$. Both $M_{\rm shock}$ and $M_{\rm stream}$ are smaller than those indicated in Fig. 7 of \citet{de06} but closer to the \citet{ke05} shock mass and the \citet{oc08} stream mass. The model used here treats a disc galaxy as a three-component stellar system comprising a thin disc, a thick disc and a bulge embedded in a dark matter halo that grows in mass as specified by the hierarchical mergers of dark matter halos. Following \citet{we02}, the growth of the virial mass is given by \begin{equation} M_{\rm vir} = M_{\rm vir,0} e^{-2z/(1+z_{\rm c})} \end{equation} where $M_{\rm vir,0}$ is the present halo mass and $z_{\rm c}$ is the collapse redshift explained below. The NFW density profile is assumed with the evolving concentration parameter \begin{equation} c(z) = \max \left[ K\frac{1+z_{\rm c}}{1+z} , K \right] \end{equation} with $K=3.7$ \citep{bu01}. $z_{\rm c}$ is calculated once the present concentration $c(0)$ is specified.
We assume following \citet{ma08} that \begin{equation} \log c(0) = 0.971 - 0.094 \log(M_{\rm vir,0}/[10^{12}h^{-1}{\rm M}_{\odot}]) \end{equation} Growth of each component is driven by the accretion of gas from the halo, the timescale of which is determined by the cold accretion theory. Namely, the gas newly added to the halo in Domain F is assumed to accrete with the free-fall time (dynamical time) defined by $(R_{\rm vir}^3/G M_{\rm vir})^{1/2}$ at that moment, where $R_{\rm vir}$ is the virial radius. The accretion timescale in Domain G is the radiative cooling time of the collisionally excited gas with the halo virial temperature and metallicity $Z=0.01 Z_\odot$, calculated from \citet{su93}. The gas density is taken to be the halo density at the virial radius multiplied by the cosmic baryon fraction of 0.17, assuming the NFW density profile. The cold gas which occupies half the newly added gas in mass in Domain H is assumed to accrete with the free-fall time whereas the residual hot gas accretes with the radiative cooling time. The gas mass added is assumed to be the increase of the halo total mass multiplied by the cosmic baryon fraction. These specifications determine the mass accretion rate for each gas component completely. We do not consider the internal structure (i.e., the density distribution) of each stellar component. Each component is characterized only by its mass and we calculate its time variation under the gas accretion from the halo. Actually, a significant part of the accreted gas is expected to escape from the galaxy due to feedback from star formation events such as supernova explosions, especially for low-mass galaxies. The fraction of this expelled gas is assumed to be proportional to the inverse of the halo virial velocity at the accretion time and the mass of the expelled gas is adjusted so that the stellar-to-virial mass ratio at present agrees with the observed one (Fig.5 of \citet{ro15} for blue galaxies). In this study, we assume that the cold gas contained in the halo in Domain H is turned into bulge stars immediately when it accretes onto the disc plane (the disc arrival time, $t_{\rm arr}$). This is likely to be an oversimplification and the possible effect of delay is discussed in Section 4. Another caveat is that it is not clear if galaxies at high redshifts have a disc. \citet{de20} argue that thin discs cannot develop below the critical stellar mass of $\sim 10^{10} M_\odot$ due to frequent mergers. The observation by \citet{zh19} suggests that galaxies in this mass range tend to be not discy but prolate at $z \sim 2$. The present model cannot discuss the shape evolution of galaxies by construction and the disc envisaged in the present study should not be taken literally but should be more appropriately regarded as the inner part where most stars are distributed. Bulge formation in the present scenario is restricted to relatively massive galaxies. We run a series of models with the present halo virial masses in the range $4.33 \times 10^{11}{\rm M}_\odot \leq M_{\rm vir,0} \leq 4.98 \times 10^{12}{\rm M}_\odot$. The evolution of more massive galaxies is likely to be dominated by mergers that could turn those galaxies into elliptical galaxies. The tracks of calculated models are shown in Fig.1. The least massive model (and models less massive than this) does not enter Domain H so that no bulge component is formed. \section{Results} Fig.2 illustrates the star formation history for each model.
It is seen that more massive models form bulges earlier than less massive ones and the bulge formation in those models spans longer periods in time. These trends are illustrated in a different form in Fig.1, where two black circles on each evolutionary track indicate the redshifts at which the cold accretion originating in Domain H reaches the disc plane first and last. This mass dependence of the bulge formation history is quantified and compared with observations later. \begin{figure} \includegraphics[width=1.0\linewidth]{Fig2} \caption{ Star formation rate for the bulge component as a function of time, with thicker black lines indicating models with larger virial masses at present. Plotted values are running means with the width of 0.28 Gyr. Tiny spikes are caused by the numerical method used in the evolution model and do not affect our conclusions. Red lines indicate the star formation history for which a delay of twenty dynamical times is taken into account. } \label{Fig.2} \end{figure} \begin{figure*} \includegraphics[scale=0.3]{Fig3} \caption{ Model bulge fractions compared with three sets of observations. Black dots connected by solid lines are model results, whereas observational data are represented by small dots. Green circles and pluses indicate, respectively, the running mean and median in the mass bin having a width of 0.25 dex and shifted by 0.125 dex in the galaxy total stellar mass. In the left and central panels, orange symbols denote means and medians only for classical bulges (orange small dots) defined to have the Sersic index larger than 2 in i-band and H-band, respectively. The orange lines indicate the number fraction of classical bulges in each mass bin. \citet{br18} do not derive the Sersic index and no classification of bulges is possible. } \label{Fig.3} \end{figure*} \begin{figure} \includegraphics[scale=0.4]{Fig4} \caption{ The mass-weighted mean stellar age of the bulge (upper panel) and the age difference over the bulge radius (bottom panel) are compared with the observational data of \citet{br18} and \citet{br20}, respectively. Each observed value is plotted with gray. Orange circles and pluses indicate the mean and median in each mass bin with the width of 0.25 dex. No star formation delay is taken into account for black circles. Red circles indicate, in increasing size, the results with the delay time, $t_{\rm dyn}, 10t_{\rm dyn}$, and $20t_{\rm dyn}$. } \label{Fig.4} \end{figure} Fig.3 shows the mass fraction of the bulge as a function of the total stellar mass of the galaxy at the present epoch. We compare the model with three sets of observations. The sample of \citet{ga09} comprises nearly face-on galaxies extracted from the Sloan Digital Sky Survey. \citet{we09} decomposed H-band images of S0/a-Sm galaxies in the Ohio State University Bright Spiral Galaxy Survey \citep{es02}. Finally, \citet{br18} analyzed 135 late-type galaxies from the CALIFA survey \citep{sa12,sa16}. The observed bulge masses likely suffer from large uncertainties, as can be guessed from this figure. Nevertheless, the three different analyses consistently indicate the increase in bulge mass fraction with the total stellar mass. The model reproduces this qualitative trend. Model values agree well with the result of \citet{we09} but are about half the values reported by \citet{ga09} and \citet{br18}. We discuss possible reasons for this discrepancy later. Fig.4 summarizes the age structures of the model bulges.
The mean stellar age plotted in the upper panel increases with the total stellar mass in qualitative agreement with the observation by \citet{br18}. However, the model dependence (black circles) is shallower than the observed one and the discrepancy increases toward lower galaxy masses. We discuss later the effect of including a possible delay in the bulge star formation. The present model predicts the lower mass limit for bulge formation originating in Domain H around $M_{\rm star} \sim 10^{10}{\rm M}_\odot$. We discuss later possible different origins for bulges in lower mass galaxies plotted in Fig.4. The age difference plotted in the bottom panel is simply the time at which the bulge star formation starts minus the time at which it ends. Because the gas that accretes to the disc plane later ends up at a larger distance from the galactic center, the age difference thus defined essentially corresponds to the 'age gradient within the bulge radius' shown in Fig.2 of \citet{br20}. It should be noted that the 'gradient' given in \citet{br20} is not the age difference per unit length but the difference between the outer and inner edges of the bulge. Reflecting the inside-out nature of gas accretion, all the models produce negative gradients. Furthermore, the absolute value of the gradient increases with the galaxy mass. Over the mass range for which the model produces bulges, the model values are in good agreement with the observed ones. \section{Discussion and conclusions} We have shown that the present model which is based on the cold accretion scenario for galactic gas accretion reproduces the observed bulge properties despite its idealized nature, although some discrepancy remains. In alliance with its success in explaining the chemical bimodality in the Milky Way disc \citep{no18} and the morphological variation with the galaxy mass observed for external galaxies \citep{no20}, this result may be regarded as reinforcing the cold accretion scenario from the viewpoint of internal structures of individual disc galaxies. Nevertheless, there are missing ingredients in the simplified approach taken here. We touch upon these unresolved issues in the following. In addition to the bulge mass fraction, the bulge size is also known to increase with the galaxy total stellar mass \citep[e.g.][]{ga09}. Although the present model cannot determine the bulge size because of its one-zone nature, it may be instructive to make a rough estimate of the expected size from the virial radius $R_{\rm vir}$ and the spin parameter $\lambda$ of the dark matter halo. The size calculated as $r_{\rm bulge} = \lambda R_{\rm vir}$ at the bulge formation epoch is $2 \sim 3 $ kpc assuming $\lambda=0.03$, which is similar to the observed sizes for the most massive galaxies but depends little on the galaxy mass for the calculated mass range. This is because the lower-mass galaxies experience bulge formation later than the higher-mass galaxies so that the virial radius at the bulge formation epoch as defined in this study is nearly constant with the galaxy mass. We assumed that the cold gas in Domain H is turned into stars upon its arrival at the disc plane. This assumption may be oversimplified. It is conceivable that the cold gas streams contain gas clumps and after disc arrival individual clumps are transported inward due to violent disc instability (VDI) before star formation occurs in them (or while making stars en route to the galactic center).
Clump formation within the cold gas filaments due to gravitational instability is suggested by \citet{ma18} in relation to globular cluster formation. Many cosmological simulations also reveal gas clumps in those filaments \citep[e.g.][]{ke05,de06,oc08,va12,ne13}, part of which is brought into the forming galactic discs \citep{ds09}. The radial migration timescale due to VDI is estimated to be of the order of ten times the dynamical time \citep{de14}. Red circles in Fig.1 and Fig.4 illustrate how this delay affects the star formation epochs and age structures of the bulges. We see that the inclusion of the star formation delay improves the agreement with the observation (especially the bulge ages), with a delay time of ten times the dynamical time giving the best fit. The influence of the delay is larger for smaller galaxies because of their longer migration times, resulting in significantly younger bulge ages than in the fiducial case (black circles in Fig.4). Bulge formation in the present study may be related to the compaction and blue nuggets (BNs) reported in the cosmological simulation by \citet{zo15}. Fig.1 plots the simulated compaction events on the $z-M_{\rm vir}$ plane. They are located in the bulge formation region of the present model, bordered by the black and red circles. The peak masses for the BNs observed by \citet{hu18} in different redshift ranges also fall in the domain expected for bulge formation once a certain delay from the disc arrival is taken into account. The star forming galaxies at $z\sim2$ observed by \citet{ta16} exhibit different star formation profiles depending on the stellar mass, with galaxies of intermediate masses ($10^{10.1} {\rm M}_\odot < M_{\rm star} < 10^{10.6} {\rm M}_\odot$) showing more centrally-concentrated profiles than either less massive or more massive galaxies. This result also seems to be in line with the present study, which proposes bulge formation in the restricted mass range $M_{\rm shock} < M_{\rm vir} < M_{\rm stream}$. The present model predicts bulge formation only above a certain threshold for the present galaxy mass around $M_{\rm star} \sim 10^{10}{\rm M}_\odot$. It is possible that bulge formation involves several mechanisms and that bulges in less massive galaxies are formed by different processes. One possibility is the secondary bulge formation from disc material in later cosmological epochs, as mentioned in the introduction. Indeed, the upper panel of Fig.4 shows a steep decrease in bulge ages below $M_{\rm star} \sim 10^{10}{\rm M}_\odot$ in the observation by \citet{br20}. The age gradient (the bottom panel) also turns positive below this critical mass, suggesting that a mechanism other than the inside-out gas accretion from the halo is operating. There seems to be a tendency that classical bulges inhabit massive galaxies whereas pseudo bulges are observed in less massive galaxies \citep{ga09,we09,fi11}. This habitat segregation may make the cold-accretion driven bulge formation proposed in this study a likely candidate specific to classical bulge formation. Indeed, Fig.3 shows that the threshold mass for bulge formation in the model nearly coincides with the mass above which the classical bulges start to emerge in actual disc galaxies. If this inference is correct, part of the discrepancy between the model and observations appearing in Fig.3 may also be resolved. The observed excess of the bulge mass in \citet{ga09} and \citet{br18} could be the contribution of secular processes.
On the other hand, the bulge fraction in \citet{we09}, which is actually the luminosity fraction in the H-band, could be underestimated if the stellar population in the bulge is systematically older (and therefore redder) than the disc in those galaxies, which is quite likely. In either case, we need not consider that the classical and pseudo bulge formation processes are mutually exclusive. Regarding bulge formation, the galaxy mass sequence may be a continuous one along which the relative importance of two (or more) bulge formation processes changes gradually. We have applied, for the first time, the cold-accretion driven galaxy evolution model to the currently available observational data for bulge properties in galaxies with various masses. The model, despite its highly idealized nature, can reproduce the observed behaviours at least qualitatively, although the observational data are still meager and future observations are required to construct a more concrete picture of bulge formation. In particular, galaxies at $z \sim 2-3$ will provide a wealth of information on bulge formation because galactic bulges are thought to grow vigorously in this epoch (see Fig.1). It is interesting that \citet{ta16} found a sign of increasing bulge dominance for more massive galaxies in this redshift range, in agreement with the theoretical result of \citet{no20}. The scrutiny of the internal properties of nearby bulges, such as performed by \citet{br18} and \citet{br20}, will put constraints at the present cosmological epoch, playing a role complementary to high-redshift surveys. On the theoretical side, recent cosmological simulations are starting to produce disc galaxies with realistic bulge-to-disc mass ratios, unlike early simulations that produced overly massive bulge components \citep[e.g.][]{ma14,ga19}. \citet{ga19} report that their bulges in the Auriga simulation comprise mostly in-situ stars and that the merger contribution is negligible. The work of \citet{br09} is pioneering in that it related different structural components of disc galaxies formed in cosmological simulations to different modes of gas accretion, namely accretion of clumpy, shocked, and unshocked gas. Although high-resolution cosmological simulations are very expensive, such close inspection of even a small number of simulated galaxies will provide valuable insight into the build-up of disc galaxies, free from the idealizations made in the present work. \section*{Acknowledgements} We thank Iris Breda and Polychronis Papaderos for providing the observational data for galactic bulges and stimulating discussion on the bulge formation mechanisms. We also thank the anonymous referees for invaluable comments which helped improve the manuscript. \vspace{16pt} \noindent{Data availability} \vspace{6pt} \noindent{The data underlying this article will be shared on reasonable request to the corresponding author.}
\section{Introduction} This paper aims at broadening the understanding of the link between uncertainty principles and localized controllability of evolution equations. An uncertainty principle is a property which gives some limitations on the simultaneous concentration of a function and its Fourier transform. There exist different forms of uncertainty principles and one of them consists in studying the support of functions whose Fourier transforms are localized. The Logvinenko-Sereda Theorem \cite{logvinenko-sereda} ensures the equivalence of the norms $\| \cdot \|_{L^2(\mathbb{R}^d)}$ and $\| \cdot \|_{L^2(\omega)}$, where $\omega \subset \mathbb{R}^d$ is a measurable subset, on the subspace $$\big\{f \in L^2(\mathbb{R}^d); \ \operatorname{supp} \hat{f} \subset \overline{B(0,R)} \big\} \quad \text{with} \quad R>0,$$ where $\hat{f}$ denotes the Fourier transform of $f$, as soon as $\omega$ is thick. The thickness property is defined as follows: \begin{definition}\label{thick_def} Let $d \in \mathbb{N}^*$ and $\omega$ be a measurable subset of $\mathbb{R}^d$. For $0<\gamma \leq 1$ and $L>0$, the set $\omega$ is said to be $\gamma$-thick at scale $L>0$ if and only if \begin{equation*} \forall x \in \mathbb{R}^d, \quad |\omega \cap (x+[0,L]^d)| \geq \gamma L^d, \end{equation*} where $|A|$ denotes the Lebesgue measure of the measurable set $A$. The set $\omega$ is said to be thick if and only if $\omega$ is $\gamma$-thick at scale $L>0$ for some $0<\gamma \leq 1$ and $L>0$. \end{definition} We define more generally the thickness with respect to a density: \begin{definition}\label{thick_density} Let $d \in \mathbb{N}^*$, $0<\gamma\leq 1$, $\omega$ be a measurable subset of $\mathbb{R}^d$ and $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ a positive function. The set $\omega$ is said to be $\gamma$-thick with respect to $\rho$ if and only if \begin{equation*} \forall x \in \mathbb{R}^d, \quad |\omega \cap B(x,\rho(x))| \geq \gamma |B(x,\rho(x))|, \end{equation*} where $B(x,L)$ denotes the Euclidean ball of $\mathbb{R}^d$ centered at $x$ with radius $L$. \end{definition} Of course, a measurable subset of $\mathbb{R}^d$ is thick if and only if it is thick with respect to a positive constant density. Kovrijkine provided a quantitative version of the Logvinenko-Sereda Theorem in \cite[Theorem~3]{Kovrijkine}: \begin{theorem}[Kovrijkine {\cite[Theorem~3]{Kovrijkine}}]\label{Kovrijkine1} Let $\omega \subset \mathbb{R}^d$ be a measurable subset $\gamma$-thick at scale $L>0$. There exists a universal positive constant $C>0$ independent of the dimension $d \geq 1$ such that for all $f \in L^2(\mathbb{R}^d)$ satisfying $\operatorname{supp} \hat{f} \subset J$, with $J$ a cube with sides of length $b$ parallel to the coordinate axes, \begin{equation}\label{kovrijkine1.1} \|f\|_{L^2(\mathbb{R}^d)} \leq c(\gamma,d, L, b) \|f\|_{L^2(\omega)}, \end{equation} with $$c(\gamma, d, L, b)= \Big( \frac{C^d}{\gamma} \Big)^{Cd(Lb+1)}.$$ \end{theorem} The thickness property was recently shown to play a key role in spectral inequalities for finite combinations of Hermite functions.
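Let us give an elementary example, not taken from the above references, illustrating Definition~\ref{thick_def}: for $0<\gamma \leq 1$ and $L>0$, the $L\mathbb{Z}^d$-periodic set $$\omega_0= \bigcup_{k \in \mathbb{Z}^d} \big(Lk+[0,\gamma^{\frac{1}{d}}L]^d\big)$$ is $\gamma$-thick at scale $L$, since the indicator function of $\omega_0$ is $L$-periodic in each variable and any cube $x+[0,L]^d$ is a fundamental cell, so that $|\omega_0 \cap (x+[0,L]^d)|=\gamma L^d$ for every $x \in \mathbb{R}^d$. Definition~\ref{thick_density} relaxes this condition at infinity: when $\rho(x)=R\langle x \rangle^{\delta}$ with $0<\delta \leq 1$ and $R>0$, the set $\omega$ is only required to occupy a fixed proportion of balls whose radii may grow like $|x|^{\delta}$, so that the gaps between the components of $\omega$ are allowed to grow at infinity. This flexibility is the one exploited in the spectral inequalities recalled below.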
In \cite[Theorem~2.1]{MP}, the authors establish quantitative estimates with an explicit dependence on the energy level $N$ and on the growth of the density appearing in Definition~\ref{thick_density}: \medskip \begin{theorem}[Pravda-Starov \& Martin]\label{Spectral} Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a $\frac{1}{2}$-Lipschitz positive function with $\mathbb{R}^d$ being equipped with the Euclidean norm, such that there exist some positive constants $0< \varepsilon \leq 1$, $m>0$, $R>0$ such that \begin{equation*} \forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R{\left\langle x\right\rangle}^{1-\varepsilon}. \end{equation*} Let $\omega$ be a measurable subset of $\mathbb{R}^d$ which is $\gamma$-thick with respect to the density $\rho$. Then, there exist some positive constants $\kappa_d(m, R, \gamma, \varepsilon)>0$, $\tilde{C}_d(\varepsilon, R) >0$ and a positive universal constant $\tilde{\kappa}_d >0$ such that \begin{equation}\label{spec_ineq} \forall N \geq 1, \ \forall f \in \mathcal{E}_N, \quad \|f\|_{L^2(\mathbb{R}^d)} \leq \kappa_d(m, R, \gamma, \varepsilon) \Big( \frac{\tilde{\kappa}_d}{\gamma} \Big)^{\tilde{C}_d(\varepsilon, R) N^{1-\frac{\varepsilon}{2}}} \|f\|_{L^2(\omega)}, \end{equation} with $\mathcal E_{N}$ being the finite dimensional vector space spanned by the Hermite functions $(\Phi_{\alpha})_{\val \alpha \leq N}$. \end{theorem} \medskip We refer the reader to Section~\ref{Hermite_functions} for the definition and some notations related to Hermite functions $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$. We emphasize that Theorem~\ref{Spectral} ensures, in particular, the equivalence of the norms $\| \cdot \|_{L^2(\mathbb{R}^d)}$ and $\| \cdot \|_{L^2(\omega)}$ on the subspace $\mathcal{E}_N$ as soon as the measurable subset $\omega$ is thick with respect to a suitable density. Actually, contrary to the case when the functional subspace is the space of functions whose Fourier transforms are compactly supported, this fact holds true as soon as $\omega$ is a measurable subset of positive measure. As explained by the authors of \cite[Section~2]{kkj}, the analyticity property of finite combinations of Hermite functions together with a finite dimension argument imply that for all $N \in \mathbb{N}$, there exists a positive constant $C_N(\omega)>0$ such that $$\forall f \in \mathcal{E}_N, \quad \|f\|_{L^2(\mathbb{R}^d)} \leq C_N(\omega) \|f \|_{L^2(\omega)},$$ as soon as $|\omega| >0$. The main interest of Theorem~\ref{Spectral} is the quantitative estimate from above on the growth of the positive constant $C_N(\omega)$ with respect to the energy level $N$, which is explicitly related to the growth of the density $\rho$ thanks to $\varepsilon$. As the norms $\| \cdot \|_{L^2(\mathbb{R}^d)}$ and $\| \cdot \|_{L^2(\omega)}$ are not equivalent on $L^2(\mathbb{R}^d)$ when $|\mathbb{R}^d \setminus \omega|>0$, the constant $C_N(\omega)$ does have to blow up when $N$ tends to infinity. However, the asymptotic of this blow-up is very much related to the geometric properties of the control set $\omega$, and understanding this asymptotic can be regarded as an uncertainty principle. One of the purposes of this work is to establish new uncertainty principles holding in a general class of Gelfand-Shilov spaces and to provide sufficient conditions on the growth of the density allowing these uncertainty principles to hold. Furthermore, this paper aims at providing new null-controllability results as a byproduct of these uncertainty principles.
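In order to illustrate the role of the parameter $\varepsilon$ in Theorem~\ref{Spectral}, let us record the following direct reading of the estimate \eqref{spec_ineq}, given here only for orientation. When the density $\rho$ is constant, one may take $\varepsilon=1$ and the constant in \eqref{spec_ineq} is of order $$\kappa_d(m, R, \gamma, 1) \Big( \frac{\tilde{\kappa}_d}{\gamma} \Big)^{\tilde{C}_d(1, R) \sqrt{N}}=e^{\mathcal{O}(\sqrt{N})},$$ whereas for densities growing almost linearly, that is for $\varepsilon$ close to $0$, the upper bound deteriorates to $e^{\mathcal{O}(N^{1-\frac{\varepsilon}{2}})}$ and approaches an exponential growth in $N$. It is this explicit and quantified control of the blow-up with respect to $N$ which makes such estimates exploitable for observability purposes.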
Indeed, some recent works have highlighted the key link between uncertainty principles and the localized control of evolution equations. Thanks to the explicit dependence of the constant with respect to the length of the sides of the cube in \eqref{kovrijkine1.1}, Egidi and Veseli\'c \cite{veselic}, and Wang, Wang, Zhang and Zhang \cite{Wang} have independently established that the heat equation \begin{equation*}\label{heat} \left\lbrace \begin{array}{ll} (\partial_t -\Delta_x)f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x)\,, \quad & x \in \mathbb{R}^d,\ t>0, \\ f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), & \end{array}\right. \end{equation*} is null-controllable in any positive time $T>0$ from a measurable control subset $\omega \subset \mathbb{R}^d$ if and only if the control subset $\omega$ is thick in $\mathbb{R}^d$. By using the same uncertainty principle, Alphonse and Bernier established in \cite{AlphonseBernier} that the thickness condition is necessary and sufficient for the null-controllability of fractional heat equations \begin{equation}\label{fractional_heat} \left\lbrace \begin{array}{ll} (\partial_t + (-\Delta_x)^s)f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x)\,, \quad & x \in \mathbb{R}^d,\ t>0, \\ f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), & \end{array}\right. \end{equation} when $s >\frac{1}{2}$. On the other hand, Koenig showed in \cite[Theorem~3]{Koenig} and \cite[Theorem~2.3]{Koenig_thesis} that the null-controllability of \eqref{fractional_heat} fails from any non-dense measurable subset of $\mathbb{R}$ when $0< s \leq \frac{1}{2}$. In \cite{AlphonseMartin}, Alphonse and the author point out that the half heat equation, which is given by \eqref{fractional_heat} with $s= \frac{1}{2}$, turns out to be approximately null-controllable with uniform cost if and only if the control subset is thick. Regarding the spectral inequalities in Theorem~\ref{Spectral}, thanks to the quantitative estimates \eqref{spec_ineq}, Pravda-Starov and the author established in \cite[Corollary~2.6]{MP} that the fractional harmonic heat equation \begin{equation*} \left\lbrace \begin{array}{ll} \partial_tf(t,x) + (-\Delta_x+ |x|^2)^s f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d,\ t>0, \\ f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), & \end{array}\right. \end{equation*} with $\frac{1}{2} < s \leq 1$, is null-controllable at any positive time from any measurable set $\omega$ which is thick with respect to the density \begin{equation*} \forall x \in \mathbb{R}^d, \quad \rho(x)= R \langle x \rangle^{\delta}, \end{equation*} with $0 \leq \delta < 2s-1$ and $R>0$. More generally, the result of \cite[Theorem~2.5]{MP} shows that this thickness condition is a sufficient condition for the null-controllability of a large class of evolution equations associated to a closed operator whose $L^2(\mathbb{R}^d)$-adjoint generates a semigroup enjoying regularizing effects in specific symmetric Gelfand-Shilov spaces $S_{\frac{1}{2s}}^{\frac{1}{2s}}$. The sufficiency of the thickness conditions for control subsets to ensure null-controllability results for these evolution equations is derived from an abstract observability result based on an adapted Lebeau-Robbiano method established by Beauchard and Pravda-Starov with some contributions of Miller in \cite[Theorem~2.1]{BeauchardPravdaStarov}.
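For the reader's convenience, let us briefly indicate the shape of this strategy; the following is only a schematic account and we refer to \cite{BeauchardPravdaStarov} for the precise assumptions and statement. Given a family of spectral projections $(\pi_k)_{k \geq 1}$ adapted to the operator under study, the method combines a spectral estimate of the form $$\|\pi_k g\|_{L^2(\mathbb{R}^d)} \leq C e^{c k^{a}} \|\pi_k g\|_{L^2(\omega)},$$ with a dissipation estimate of the form $$\|(1-\pi_k)e^{-t\mathcal{A}}g\|_{L^2(\mathbb{R}^d)} \leq C e^{-c' t^{\theta} k^{b}} \|g\|_{L^2(\mathbb{R}^d)},$$ where $(e^{-t\mathcal{A}})_{t \geq 0}$ denotes the semigroup generated by the adjoint operator, for some exponents $0<a<b$ and some $\theta>0$; an iterative argument over a suitable sequence of time intervals and spectral thresholds then yields an observability estimate through the control subset $\omega$ in any positive time. In \cite{MP}, the spectral inequalities of Theorem~\ref{Spectral} play the role of the first estimate.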
This abstract observability result was extended in~\cite[Theorem~3.2]{BEP} to the non-autonomous case with moving control supports under weaker dissipation estimates allowing a controlled blow-up for small times. The main limitation in the work \cite{MP} is that Hermite expansions can only characterize symmetric Gelfand-Shilov spaces (see Section~\ref{gelfand}) and, therefore, the null-controllability results in \cite{MP} are limited to evolution equations enjoying only symmetric Gelfand-Shilov smoothing effects. This work partially addresses this matter by investigating the null-controllability of evolution equations associated to anharmonic oscillators, which are known to regularize in non-symmetric Gelfand-Shilov spaces. More generally, we establish null-controllability results for abstract evolution equations whose adjoint systems enjoy smoothing effects in non-symmetric Gelfand-Shilov spaces. This work precisely describes how the geometric properties of the control subset are related to the two indexes $\mu, \nu$ defining the Gelfand-Shilov space $S^{\mu}_{\nu}$. This paper is organized as follows: In Section~\ref{uncertainty_principle_general}, new uncertainty principles and quantitative estimates are presented. We first establish uncertainty principles for a general class of Gelfand-Shilov spaces in Section~\ref{general_GS}. We then deal with the particular case of spaces of functions with weighted Hermite expansions in Section~\ref{up_symmetric_GS}. These results are derived from sharp estimates for quasi-analytic functions established by Nazarov, Sodin and Volberg in \cite{NSV}. Some facts and results related to quasi-analytic functions are recalled in Sections~\ref{main_results} and \ref{qa_section}. Thanks to these new uncertainty principles, we establish sufficient geometric conditions for the null-controllability of evolution equations with adjoint systems enjoying quantitative Gelfand-Shilov smoothing effects in Section~\ref{null_controllability_results}. \section{Statement of the main results}\label{main_results} The main results contained in this work are the quantitative uncertainty principles holding for general Gelfand-Shilov spaces given in Theorem~\ref{general_uncertaintyprinciple}. The first part of this section is devoted to presenting these new uncertainty principles and to discussing the particular case of spaces of functions with weighted Hermite expansions. In a second part, we deduce from these new uncertainty principles some null-controllability results for abstract evolution equations with adjoint systems enjoying Gelfand-Shilov smoothing effects. Before stating these results, miscellaneous facts and notations need to be presented. A sequence $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ of positive real numbers is said to be \textit{logarithmically convex} if \begin{equation*}\label{log_conv} \forall p \geq 1, \quad M_p^2 \leq M_{p+1} M_{p-1}, \end{equation*} where $\mathbb{N}$ denotes the set of non-negative integers. Let $U$ be an open subset of $\mathbb{R}^d$, with $d \geq 1$. We consider the following class of smooth functions defined on $U$ associated to the sequence $\mathcal{M}$, \begin{equation*}\label{function_class} \mathcal{C}_{\mathcal{M}}(U)= \left\{ f \in \mathcal{C}^{\infty}(U, \mathbb{C}): \quad \forall \beta \in \mathbb{N}^d, \; \|\partial_x^{\beta} f \|_{L^{\infty}(U)} \leq M_{|\beta|} \right\}.
\end{equation*} A logarithmically convex sequence $\mathcal{M}$ is said to be quasi-analytic if the class of smooth functions $\mathcal{C}_{\mathcal{M}}((0,1))$ associated to $\mathcal{M}$ is quasi-analytic, that is, when the only function in $\mathcal{C}_{\mathcal{M}}((0,1))$ vanishing to infinite order at a point in $(0,1)$ is the zero function. A necessary and sufficient condition on the logarithmically convex sequence $\mathcal{M}$ to generate a quasi-analytic class is given by the Denjoy-Carleman theorem (see e.g. \cite{Koosis}): \medskip \begin{theorem}[Denjoy-Carleman] \label{Den_Carl_thm} Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex sequence of positive real numbers. The sequence $\mathcal{M}$ defines a quasi-analytic sequence if and only if \begin{equation*} \sum_{p= 1}^{+\infty} \frac{M_{p-1}}{M_p} = + \infty. \end{equation*} \end{theorem} \medskip Let us now introduce the notion of Bang degree defined in \cite{Bang} and \cite{NSV}, and used by Jaye and Mitkovski in \cite{JayeMitkovski}, \begin{equation}\label{Bang} \forall 0<t \leq 1, \forall r>0, \quad 0 \leq n_{t, \mathcal{M},r}= \sup\Big\{N \in \mathbb{N}: \, \sum_{-\log t < n \leq N} \frac{M_{n-1}}{M_n} < r \Big\}\leq +\infty, \end{equation} where the sum is taken equal to $0$ when $N=0$. Notice that if $\mathcal{M}$ is quasi-analytic, then the Bang degree $n_{t, \mathcal{M},r}$ is finite for any $0<t \leq 1$ and $r>0$. This Bang degree allows the authors of \cite{JayeMitkovski} to obtain uniform estimates for $L^2$-functions with fast decaying Fourier transforms and to establish uncertainty principles for a general class of Gevrey spaces. These authors also define \begin{equation}\label{def_gamma} \forall p \geq 1, \quad \gamma_{\mathcal{M}}(p) = \sup \limits_{1 \leq j \leq p} j \Big(\frac{M_{j+1} M_{j-1}}{M_j^2} -1\Big) \quad \text{and} \quad \Gamma_{\mathcal{M}} (p)= 4 e^{4+4\gamma_{\mathcal{M}}(p)}. \end{equation} We refer the reader to the Section~\ref{qa_section} for some examples and useful results about quasi-analytic sequences. \newpage \subsection{Some uncertainty principles}\label{uncertainty_principle_general} \subsubsection{Uncertainty principles in general Gelfand-Shilov spaces}\label{general_GS} In this section, we study uncertainty principles holding in general Gelfand-Shilov spaces. We consider the following subspaces of smooth functions \begin{equation*}\label{gelfandshilov} GS_{\mathcal{N},\rho} := \Big\{ f \in \mathcal{C}^{\infty}(\mathbb{R}^d), \quad \sup_{k \in \mathbb{N},\ \beta \in \mathbb{N}^d} \frac{\| \rho(x)^k \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{N_{k,|\beta|}} < +\infty \Big\}, \end{equation*} where $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ is a positive measurable function and $\mathcal{N}=(N_{p,q})_{(p,q) \in \mathbb{N}^2}$ is a sequence of positive real numbers. Associated to these spaces, are the following semi-norms \begin{equation*} \forall f \in GS_{\mathcal{N},\rho}, \quad \|f\|_{GS_{\mathcal{N},\rho}} = \sup_{k \in \mathbb{N}, \ \beta \in \mathbb{N}^d} \frac{\| \rho(x)^k \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{N_{k,|\beta|}}. 
\end{equation*} When $$\forall x \in \mathbb{R}^d, \quad \rho(x)= \langle x \rangle= (1+\|x\|^2)^{\frac{1}{2}}$$ and $\mathcal{N}=\big( C^{p+q}(p!)^{\nu} (q!)^{\mu} \big)_{(p,q) \in \mathbb{N}^2}$ for some $C \geq 1$ and $\mu, \nu >0$ with $\mu+\nu \geq1$, $GS_{\mathcal{N}, \rho}$ is a subspace of the classical Gelfand-Shilov space $\mathcal{S}^{\mu}_{\nu}$, whereas when $\rho \equiv 1$, the space $GS_{\mathcal{N}, \rho}$ characterizes some Gevrey type regularity. We choose here to not discuss this particular case since it is studied in the recent works \cite{AlphonseMartin, JayeMitkovski}. In the following, a positive function $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ is said to be a contraction mapping when there exists $0\leq L <1$ such that $$\forall x,y \in \mathbb{R}^d, \quad |\rho(x)-\rho(y)| \leq L \|x-y\|,$$ where $\| \cdot \|$ denotes the Euclidean norm. A double sequence of real numbers $\mathcal{N}=(N_{p,q})_{(p,q) \in \mathbb{N}^2}$ is said to be non-decreasing with respect to the two indexes when $$\forall p \leq p', \forall q \leq q', \quad N_{p,q} \leq N_{p',q'}.$$ The following result provides some uncertainty principles holding for the spaces $GS_{\mathcal{N}, \rho}$: \medskip \begin{theorem}\label{general_uncertaintyprinciple} Let $0<\gamma \leq 1$, $\mathcal{N}=(N_{p, q})_{(p,q) \in \mathbb{N}^2} \in (0,+\infty)^{\mathbb{N}^2}$ be a non-decreasing sequence with respect to the two indexes such that the diagonal sequence $\mathcal{M}=(N_{p,p})_{p \in \mathbb{N}} \in (0,+\infty)^{\mathbb{N}}$ defines a logarithmically-convex quasi-analytic sequence and $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ a positive contraction mapping such that there exist some constants $m>0$, $R>0$ so that \begin{equation*} \forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R \langle x \rangle. \end{equation*} Let $\omega$ be a measurable subset of $\mathbb{R}^d$. If $\omega$ is $\gamma$-thick with respect to $\rho$, then there exist some positive constants $ K=K(d,\rho) \geq 1$, $K'=K'(d,\rho, \gamma)\geq 1$, $r=r(d, \rho) \geq 1$ depending on the dimension $d \geq 1$, on $\gamma$ for the second and on the density $\rho$ such that for all $0<\varepsilon \leq N^2_{0,0}$, \begin{equation*} \forall f \in GS_{\mathcal{N},\rho}, \quad \|f\|^2_{L^2(\mathbb{R}^d)} \leq C_{\varepsilon} \|f\|^2_{L^2(\omega)} + \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}}, \end{equation*} where \begin{equation*} C_{\varepsilon}= K' \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t_0, \mathcal{M}, r}) \bigg)^{4n_{t_0, \mathcal{M}, r} } \end{equation*} with $n_{t_0, \mathcal{M}, r}$ being defined in \eqref{Bang} and \begin{equation*} t_0=\frac{\varepsilon^{\frac{1}{2}}}{K N_{d,d}}. \end{equation*} \end{theorem} \medskip It is particularly interesting to notice that Theorem~\ref{general_uncertaintyprinciple} provides a quantitative estimate of the constant $C_{\varepsilon}$ with respect to the different parameters. In specific cases, the Bang degree is easily computable (see Lemma~\ref{ex_qa_sequence}) and an explicit upper bound on the constant $C_{\varepsilon}$ can be obtained. The above uncertainty principles apply in particular to the case of the classical Gelfand-Shilov spaces $S_{\nu}^{\mu}(\mathbb{R}^d)$ as follows: \medskip \begin{theorem}\label{specific_GS_uncertaintyprinciple} Let $A \geq 1$, $0<\mu \leq 1$, $\nu >0$ with $\mu+\nu \geq 1$ and $0\leq \delta \leq \frac{1-\mu}{\nu}\leq 1$. 
Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a positive contraction mapping such that there exist some constants $m>0$, $R>0$ so that \begin{equation*} \forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R{\left\langle x\right\rangle}^{\delta}. \end{equation*} Let $\omega$ be a measurable subset of $\mathbb{R}^d$. If $\omega$ is thick with respect to $\rho$, then for all $0<\varepsilon \leq 1$, there exists a positive constant $C_{\varepsilon,A}>0$ such that for all $f \in \mathscr{S}(\mathbb{R}^d)$, \begin{equation}\label{up_schwartz} \| f \|^2_{L^2(\mathbb{R}^d)} \leq C_{\varepsilon,A} \|f\|^2_{L^2(\omega)} + \varepsilon \sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \bigg(\frac{\|\langle x\rangle^{p} \partial^{\beta}_{x} f\|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu}(|\beta|!)^{\mu}}\bigg)^2, \end{equation} where, when $\delta < \frac{1-\mu}{\nu}$, there exists a positive constant $K=K(d, \gamma, \rho,\mu, \nu) \geq 1$ depending on the dimension $d$, on $\gamma$, on the density $\rho$ and on the parameters $\mu$, $\nu$ such that $$0<C_{\varepsilon,A} \leq e^{K(1-\log \varepsilon +A^{\frac{2}{1-\mu-\delta \nu}})}, $$ whereas, when $\delta= \frac{1-\mu}{\nu}$, there exists a positive constant $K=K(d, \gamma, \rho, \mu, \nu) \geq 1$ depending on the dimension $d$, on $\gamma$, on the density $\rho$ and on the parameters $\mu$, $\nu$ such that $$0<C_{\varepsilon,A} \leq e^{K(1-\log \varepsilon+\log A)e^{KA^2}}.$$ \end{theorem} \medskip Let us notice that the estimate \eqref{up_schwartz} is only relevant when \begin{equation*} \sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x\rangle^{p} \partial^{\beta}_x f\|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu}(|\beta|!)^{\mu}} <+\infty, \end{equation*} that is, when $f \in GS_{\mathcal{N}, \tilde{\rho}}$, with $\mathcal{N}=(A^{p+q}(p!)^{\nu} (q!)^{\mu})_{(p,q) \in \mathbb{N}^2}$ and $\tilde{\rho}= \langle \cdot \rangle$. The quantitative estimates given in Theorem~\ref{specific_GS_uncertaintyprinciple} play a key role in establishing the null-controllability results below. The proof of Theorem~\ref{general_uncertaintyprinciple} is given in Section~\ref{proof_mainprop}. It follows the strategy developed by Kovrijkine in \cite{Kovrijkine} and its generalization given in Theorem~\ref{Spectral}, together with a quantitative result on quasi-analytic functions which is a multidimensional version of \cite[Theorem~B]{NSV} by Nazarov, Sodin and Volberg. Regarding Theorem~\ref{specific_GS_uncertaintyprinciple}, its proof is given in Section~\ref{proof2}. It is a direct application of Theorem~\ref{general_uncertaintyprinciple} together with Lemma~\ref{ex_qa_sequence}. The next section shows that Theorem~\ref{general_uncertaintyprinciple} also applies to more general sequences. \subsubsection{Uncertainty principles in symmetric weighted Gelfand-Shilov spaces}\label{up_symmetric_GS} Let $$\Theta : [0,+\infty) \longrightarrow [0,+\infty)$$ be a non-negative continuous function. We consider the following symmetric weighted Gelfand-Shilov spaces \begin{equation*} GS_{\Theta}= \Big\{ f \in L^2(\mathbb{R}^d): \quad \|f\|_{GS_{\Theta}} := \Big\|\big(e^{\Theta(|\alpha|)} \langle f, \Phi_{\alpha} \rangle_{L^2(\mathbb{R}^d)}\big)_{\alpha \in \mathbb{N}^d}\Big\|_{l^2(\mathbb{N}^d)} < +\infty \Big\}, \end{equation*} where $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$ denotes the Hermite basis of $L^2(\mathbb{R}^d)$. The definition and basic facts about Hermite functions are recalled in Section~\ref{Hermite_functions}.
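To fix ideas, let us record an elementary model computation, which is not taken from the works cited above: for the power weight $\Theta(t)=t^{s}$ with $0<s \leq 1$, one computes, for every integer $p \geq 1$, $$\sup_{t \geq 0} t^{p} e^{-t^{s}} = \Big(\frac{p}{es}\Big)^{\frac{p}{s}},$$ the supremum being attained at $t=(p/s)^{\frac{1}{s}}$. By Stirling's formula, this quantity is comparable to $(p!)^{\frac{1}{s}}$ up to a factor of the form $C^{p}$, and the successive ratios satisfy, for any $\sigma>0$, $$\Big(\frac{\sup_{t \geq 0} t^{p-1}e^{-t^{s}}}{\sup_{t \geq 0} t^{p}e^{-t^{s}}}\Big)^{\sigma} \underset{p \to +\infty}{\sim} \Big(\frac{s}{p}\Big)^{\frac{\sigma}{s}},$$ so that the corresponding series diverges if and only if $\sigma \leq s$. This simple example indicates how the growth of the weight $\Theta$ governs the quasi-analyticity properties of the sequences introduced just below.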
Before explaining how the spaces $GS_{\Theta}$ relate to Gelfand-Shilov spaces defined in Section~\ref{gelfand}, the assumptions on the weight function $\Theta$ need to be specified further. Let us consider the following logarithmically-convex sequence \begin{equation}\label{lc_sequence} \forall p \in \mathbb{N}, \quad M_p= \sup_{t \geq 0} t^pe^{-\Theta(t)}. \end{equation} Let $s >0$. We assume that the sequence $(M_p)_{p \in \mathbb{N}}$ satisfies the following conditions: \medskip \text{(H1)} $\forall p \in \mathbb{N}, \quad 0<M_p < +\infty$, \medskip \text{(H2)} There exist some positive constants $C_{\Theta}>0$, $L_{\Theta}\geq 1$ such that \begin{equation*}\label{H2} \forall p \in \mathbb{N}, \quad p^{p} \leq C_{\Theta} L_{\Theta}^p M_p, \end{equation*} with the convention $0^0=1$, \medskip $\text{(H3)}_s$ The sequence $(M^s_p)_{p \in \mathbb{N}}$ is quasi-analytic, that is, \begin{equation*} \sum_{p=1}^{+\infty} \Big(\frac{M_{p-1}}{M_p}\Big)^s = +\infty, \end{equation*} according to the Denjoy-Carleman Theorem. Under these assumptions, the following Bernstein type estimates hold for the spaces $GS_{\Theta}$: \medskip \begin{proposition}\label{bernstein_estim1} Let $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function. If the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $(H1)$ and $(H2)$, then the space $GS_{\Theta}$ is included in the Schwartz space $\mathscr{S}(\mathbb{R}^d)$, and for all $0< s \leq 1$, there exists a positive constant $D_{\Theta, d,s}\geq 1$ such that \begin{multline*} \forall f \in GS_{\Theta}, \forall r \in [0,+\infty), \forall \beta \in \mathbb{N}^d,\\ \|\langle x \rangle^{r} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq (D_{\Theta,d,s})^{1+r+|\beta|} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}, \end{multline*} where $\lfloor \cdot \rfloor$ denotes the floor function. \end{proposition} \medskip \begin{remark} Let us notice that Proposition~\ref{bernstein_estim1} implies in particular the inclusion $GS_{\Theta} \subset GS_{\mathcal{N}, \rho_s}$, when $\frac{1}{2} \leq s \leq 1$, with $$\forall x \in \mathbb{R}^d, \quad \rho_s(x)= \langle x \rangle^{2s-1}$$ and $$\mathcal{N}= \Big(D_{\Theta,d, s}^{(2s-1)p+q+1} M_{\left\lfloor \frac{(2s-1)p+1 +q+(2-s)(d+1)}{2s} \right\rfloor +1}^s \Big)_{(p,q) \in \mathbb{N}^2},$$ and the following estimates \begin{equation*} \forall f \in GS_{\Theta}, \quad \|f\|_{GS_{\mathcal{N},\rho_s}} \leq \|f \|_{GS_{\Theta}}. \end{equation*} \end{remark} The proof of Proposition~\ref{bernstein_estim1} is given in the Appendix (Section~\ref{appendix}). In order to derive uncertainty principles for functions with weighted Hermite expansions, the sequence $\mathcal{M}$ has, in addition, to satisfy the assumption $(H3)_s$ for some $\frac{1}{2} \leq s \leq 1$. The quantitative estimates in Proposition~\ref{bernstein_estim1} together with the uncertainty principles given by Theorem~\ref{general_uncertaintyprinciple} allow us to establish the following estimates: \medskip \begin{theorem}\label{uncertainty_principle} Let $0 \leq \delta \leq 1$ and $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function. Let us assume that the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{\frac{1+\delta}{2}}$.
Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a positive contraction mapping satisfying \begin{equation*} \exists m>0, \exists R>0, \forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R \left\langle x \right\rangle^{\delta}. \end{equation*} If $\omega$ is a measurable subset of $\mathbb{R}^d$ thick with respect to $\rho$, then there exists a positive constant $\varepsilon_0=\varepsilon_0(\Theta, d, \delta)>0$ such that for all $0<\varepsilon \leq \varepsilon_0$, there exists a positive constant $D_{\varepsilon}=D(d, \Theta, \varepsilon, \delta, \rho)>0$ so that \begin{equation}\label{uncertainty_principle_sym} \forall f \in GS_{\Theta}, \quad \|f\|^2_{L^2(\mathbb{R}^d)} \leq D_{\varepsilon} \|f\|^2_{L^2(\omega)}+ \varepsilon \|f\|_{GS_{\Theta}}^2 . \end{equation} \end{theorem} \medskip The above result provides some uncertainty principles for functions with weighted Hermite expansions. Its proof is given in Section~\ref{thm_hermite_proof}. Let us point out that it is possible to obtain quantitative estimates on the constant $D_{\varepsilon}$ thanks to the ones in Theorem~\ref{general_uncertaintyprinciple}, and to recover the spectral inequalities for finite combinations of Hermite functions established in \cite{MP} with a constant growing at the same rate with respect to $N$. Indeed, by taking $\Theta(t)=t$ on $[0,+\infty)$, we readily compute that \begin{equation*} \forall p \in \mathbb{N}, \quad M_p= \sup_{t \geq 0} t^p e^{-t} = \Big( \frac{p}{e} \Big)^p \end{equation*} and the Stirling's formula provides \begin{equation*} M_p \underset{p \to +\infty}{\sim} \frac{p!}{\sqrt{2\pi p}}. \end{equation*} It follows that the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_1$ are satisfied. By noticing that \begin{equation*} \|f\|^2_{GS_{\Theta}} = \sum_{|\alpha| \leq N} e^{2|\alpha|} |\langle f, \Phi_{\alpha} \rangle_{L^2(\mathbb{R}^d)} |^2 \leq e^{2N} \|f\|^2_{L^2(\mathbb{R}^d)}, \end{equation*} when $N \in \mathbb{N}$ and $f=\sum_{|\alpha| \leq N} \langle f, \Phi_{\alpha} \rangle \Phi_{\alpha}$, we deduce from \eqref{uncertainty_principle_sym} while taking $\varepsilon = \frac{1}{2} e^{-2N}$ that \begin{equation*} \forall N \in \mathbb{N}, \forall f \in \mathcal{E}_N, \quad \|f\|^2_{L^2(\mathbb{R}^d)} \leq 2 D_{\frac{1}{2} e^{-2N}} \|f\|^2_{L^2(\omega)}, \end{equation*} where $\mathcal{E}_N= \textrm{Span}_{\mathbb{C}}\big\{\Phi_{\alpha}\big\}_{\alpha \in \mathbb{N}^d, \, |\alpha|\leq N}$. We end this section by providing some examples of functions $\Theta$, which define a sequence $\mathcal{M}=(M_p)_{p\in \mathbb{N}}$ satisfying hypotheses $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{s}$ for some $\frac{1}{2}\leq s \leq 1$. In \cite[Proposition~4.7]{AlphonseMartin}, Alphonse and the author devise the following examples in the case $s=1$: \medskip \begin{proposition}[\cite{AlphonseMartin}, Alphonse \& Martin]\label{ex_theta1} Let $k\geq1$ be a positive integer and $\Theta_{k,1} : [0,+\infty)\rightarrow[0,+\infty)$ be the non-negative function defined for all $t\geq0$ by $$\Theta_{k,1}(t) = \frac t{g(t)(g\circ g)(t)... g^{\circ k}(t)},\quad\text{where}\quad g(t) = \log(e+t),$$ with $g^{\circ k} = g\circ\ldots\circ g$ ($k$ compositions). The associated sequence $\mathcal M^{\Theta_{k,1}}=(M^{\Theta_{k,1}}_p)_{p \in \mathbb{N}}$ defined in \eqref{lc_sequence} is a quasi-analytic sequence of positive real numbers. 
\end{proposition} \medskip Let us notice that the assumption $\text{(H2)}$ is satisfied as \begin{equation*} \forall k \geq 1, \forall p \in \mathbb{N}, \quad M^{\Theta_{k,1}}_p= \sup_{t \geq 0} t^p e^{-\Theta_{k,1}(t)} \geq \sup_{t \geq 0} t^p e^{-t} = \Big(\frac{p}{e}\Big)^p. \end{equation*} Proposition~\ref{ex_theta1} makes it possible to provide some examples for the cases $\frac{1}{2} \leq s \leq 1$: \medskip \begin{proposition}\label{ex_qa_bertrand} Let $k\geq1$ be a positive integer, $\frac{1}{2} \leq s \leq 1$ and $\Theta_{k,s} : [0,+\infty)\rightarrow[0,+\infty)$ be the non-negative function defined for all $t\geq0$ by $$\Theta_{k,s}(t) = \frac {t^s}{g(t)(g\circ g)(t)... g^{\circ k}(t)},\quad\text{where}\quad g(t) = \log(e+t),$$ with $g^{\circ k} = g\circ\ldots\circ g$ ($k$ compositions). The associated sequence $\mathcal M^{\Theta_{k,s}}=(M^{\Theta_{k,s}}_p)_{p \in \mathbb{N}}$ defined in \eqref{lc_sequence} satisfies the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{s}$. \end{proposition} \medskip The proof of Proposition~\ref{ex_qa_bertrand} is given in Section~\ref{qa_section}. \subsection{Applications to the null-controllability of evolution equations}\label{null_controllability_results} This section is devoted to stating some null-controllability results for evolution equations whose adjoint systems enjoy Gelfand-Shilov smoothing effects. Before presenting these results, let us recall the definitions and classical facts about controllability. The notion of null-controllability is defined as follows: \medskip \begin{definition} [Null-controllability] Let $P$ be a closed operator on $L^2(\mathbb{R}^d)$, which is the infinitesimal generator of a strongly continuous semigroup $(e^{-tP})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$, $T>0$ and $\omega$ be a measurable subset of $\mathbb{R}^d$. The evolution equation \begin{equation}\label{syst_general} \left\lbrace \begin{array}{ll} (\partial_t + P)f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d,\ t>0, \\ f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), & \end{array}\right. \end{equation} is said to be {\em null-controllable from the set $\omega$ in time} $T>0$ if, for any initial datum $f_0 \in L^{2}(\mathbb{R}^d)$, there exists a control function $u \in L^2((0,T)\times\mathbb{R}^d)$ supported in $(0,T)\times\omega$, such that the mild \emph{(}or semigroup\emph{)} solution of \eqref{syst_general} satisfies $f(T,\cdot)=0$. \end{definition} \medskip By the Hilbert Uniqueness Method, see \cite{coron_book} (Theorem~2.44) or \cite{JLL_book}, the null-controllability of the evolution equation \eqref{syst_general} is equivalent to the observability of the adjoint system \begin{equation} \label{adj_general} \left\lbrace \begin{array}{ll} (\partial_t + P^*)g(t,x)=0, \quad & x \in \mathbb{R}^d, \ t>0, \\ g|_{t=0}=g_0 \in L^2(\mathbb{R}^d), \end{array}\right. \end{equation} where $P^*$ denotes the $L^2(\mathbb{R}^d)$-adjoint of $P$. The notion of observability is defined as follows: \medskip \begin{definition} [Observability] Let $T>0$ and $\omega$ be a measurable subset of $\mathbb{R}^d$.
The evolution equation \eqref{adj_general} is said to be {\em observable from the set $\omega$ in time} $T>0$ if there exists a positive constant $C_T>0$ such that, for any initial datum $g_0 \in L^{2}(\mathbb{R}^d)$, the mild \emph{(}or semigroup\emph{)} solution of \eqref{adj_general} satisfies \begin{equation*}\label{eq:observability} \int\limits_{\mathbb{R}^d} |g(T,x)|^{2} dx \leq C_T \int\limits_{0}^{T} \Big(\int\limits_{\omega} |g(t,x)|^{2} dx\Big) dt\,. \end{equation*} \end{definition} \medskip In the following, we shall always derive null-controllability results from observability estimates on adjoint systems. \subsubsection{Null-controllability of evolution equations whose adjoint systems enjoy non-symmetric Gelfand-Shilov smoothing effects}\label{null_controllability_non_symmetric_GS} In this section, we aim at establishing null-controllability results for evolution equations whose adjoint systems enjoy Gelfand-Shilov smoothing effects. We consider a closed operator $A$ on $L^2(\mathbb{R}^d)$ which is the infinitesimal generator of a strongly continuous contraction semigroup $(e^{-tA})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$, that is, satisfying $$\forall t \geq 0, \forall f \in L^2(\mathbb{R}^d), \quad \|e^{-tA}f\|_{L^2(\mathbb{R}^d)} \leq \|f\|_{L^2(\mathbb{R}^d)},$$ and study the evolution equation \begin{equation}\label{PDEgeneral} \left\lbrace \begin{array}{ll} \partial_tf(t,x) + Af(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d, \ t>0, \\ f|_{t=0}=f_0 \in L^2(\mathbb{R}^d). & \end{array}\right. \end{equation} We assume that the semigroup $(e^{-tA^*})_{t \geq 0}$ generated by the $L^2(\mathbb{R}^d)$-adjoint operator $A^*$ enjoys some Gelfand-Shilov smoothing effects for any positive time, that is, \begin{equation*}\label{GS0} \forall t >0, \forall f \in L^2(\mathbb{R}^d), \quad e^{-tA^*}f \in S^{\mu}_{\nu} (\mathbb{R}^d), \end{equation*} for some $\mu, \nu >0$ satisfying $\mu +\nu \geq 1$. More precisely, we assume that the following quantitative regularizing estimates hold: there exist some constants $C \geq 1$, $r_1>0$, $r_2\geq0$, $0<t_0 \leq 1$ such that \begin{multline}\label{GS_estimate} \forall 0< t \leq t_0, \forall \alpha, \beta \in \mathbb{N}^d, \forall f \in L^2(\mathbb{R}^d), \\ \|x^{\alpha} \partial^{\beta}_x (e^{-tA^*} f) \|_{L^2(\mathbb{R}^d)} \leq \frac{C^{1+|\alpha|+|\beta|}}{t^{r_1(|\alpha|+|\beta|)+r_2}} (\alpha!)^{\nu} (\beta !)^{\mu} \|f\|_{L^2(\mathbb{R}^d)}, \end{multline} where $\alpha!=\alpha_1!...\alpha_d!$ if $\alpha=(\alpha_1,...,\alpha_d) \in \mathbb{N}^d$. In a recent work \cite{Alphonse}, Alphonse studies the smoothing effects of semigroups generated by anisotropic Shubin operators \begin{equation*} \mathcal{H}_{m,k} = (-\Delta_x)^m+|x|^{2k}, \end{equation*} equipped with domains \begin{equation*} D(\mathcal{H}_{m,k})= \left\{f \in L^2(\mathbb{R}^d) : \mathcal{H}_{m,k} f \in L^2(\mathbb{R}^d) \right\}, \end{equation*} when $m,k \geq 1$ are positive integers.
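For orientation, let us mention the simplest instance (an elementary observation, not taken from \cite{Alphonse}): when $m=k=1$, $\mathcal{H}_{1,1}=-\Delta_x+|x|^2$ is the harmonic oscillator already encountered in the introduction, whose spectrum is given by $\{2|\alpha|+d:\ \alpha \in \mathbb{N}^d\}$ with the Hermite functions $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$ as eigenfunctions, so that, by functional calculus, $$\forall t \geq 0, \forall f \in L^2(\mathbb{R}^d), \quad e^{-t\mathcal{H}_{1,1}^s}f=\sum_{\alpha \in \mathbb{N}^d} e^{-t(2|\alpha|+d)^s} \langle f, \Phi_{\alpha} \rangle_{L^2(\mathbb{R}^d)} \Phi_{\alpha}.$$ This elementary case may help to keep track of the indexes appearing in the quantitative smoothing estimates obtained by Alphonse for general $m,k \geq 1$, which we now recall.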
This author establishes in \cite[Corollary~2.2]{Alphonse} the following quantitative estimates for fractional anisotropic Shubin operators: for all $m, k \geq 1$, $s >0$, there exist some positive constants $C\geq 1$, $r_1, r_2>0$, $t_0>0$ such that \begin{multline*} \forall 0< t \leq t_0, \forall \alpha, \beta \in \mathbb{N}^d, \forall f \in L^2(\mathbb{R}^d), \\ \|x^{\alpha} \partial_x^{\beta} \big(e^{-t\mathcal{H}_{m,k}^s} f\big) \|_{L^2(\mathbb{R}^d)} \leq \frac{C^{1+|\alpha|+|\beta|}}{t^{r_1(|\alpha|+|\beta|)+r_2}} (\alpha!)^{\nu_{m,k,s}} (\beta !)^{\mu_{m,k,s}} \|f\|_{L^2(\mathbb{R}^d)}, \end{multline*} with \begin{equation*} \nu_{m,k,s}= \max\Big(\frac{1}{2sk}, \frac{m}{k+m} \Big) \quad \text{and} \quad \mu_{m,k,s}= \max\Big(\frac{1}{2sm}, \frac{k}{k+m} \Big). \end{equation*} Thanks to these quantitative estimates and the general result of null-controllability for evolution equations whose adjoint systems enjoy symmetric Gelfand-Shilov smoothing effects in \cite[Theorem~2.5]{MP}, Alphonse derives in \cite[Theorem~2.3]{Alphonse} a sufficient growth condition on the density $\rho$ to ensure the null-controllability of evolution equations associated to the Shubin operators $\mathcal{H}_{l,l}$, with $l \geq 1$, in any positive time from any measurable thick control subset with respect to $\rho$. In this work, we extend this result to general Shubin operators $\mathcal{H}_{m,k}$, with $m, k \geq 1$, and more generally establish the null-controllability of evolution equations whose adjoint systems enjoy quantitative smoothing effects in specific Gelfand-Shilov spaces $S_{\nu}^{\mu}$. The following result shows that null-controllability holds for the evolution equations \eqref{PDEgeneral} when the parameter $\delta$ ruling the growth of the density is strictly less than the critical parameter $\delta^*=\frac{1-\mu}{\nu}$. \begin{theorem}\label{observability_result} Let $(A,D(A))$ be a closed operator on $L^2(\mathbb{R}^d)$ which is the infinitesimal generator of a strongly continuous contraction semigroup $(e^{-tA})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$ whose $L^2(\mathbb{R}^d)$-adjoint generates a semigroup satisfying the quantitative smoothing estimates \eqref{GS_estimate} for some $0<\mu <1 $, $\nu >0$ such that $\mu+\nu \geq 1$. Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a contraction mapping such that there exist some constants $0 \leq \delta < \frac{1-\mu}{\nu}$, $m>0$, $R>0$ so that \begin{equation*} \forall x \in \mathbb{R}^d, \quad 0< m \leq \rho(x) \leq R \langle x \rangle^{\delta}. \end{equation*} If $\omega \subset \mathbb{R}^d$ is a measurable subset thick with respect to the density $\rho$, the evolution equation \begin{equation*}\label{PDEadjoint} \left\lbrace \begin{array}{ll} \partial_tf(t,x) + Af(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d, \ t>0, \\ f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), & \end{array}\right. \end{equation*} is null-controllable from the control subset $\omega$ in any positive time $T>0$; and equivalently, the adjoint system \begin{equation*}\label{PDEadjoint} \left\lbrace \begin{array}{ll} \partial_t g(t,x) + A^*g(t,x)=0, \quad & x \in \mathbb{R}^d, \ t>0, \\ g|_{t=0}=g_0 \in L^2(\mathbb{R}^d), & \end{array}\right. \end{equation*} is observable from the control subset $\omega$ in any positive time $T>0$. 
More precisely, there exists a positive constant $K=K(d, \rho, \delta, \mu, \nu) \geq 1$ such that \begin{equation*} \forall g \in L^2(\mathbb{R}^d), \forall T>0, \quad \|e^{-TA^*}g\|^2_{L^2(\mathbb{R}^d)} \leq K \exp\Big(\frac{K}{T^{\frac{2r_1}{1-\mu-\delta \nu}}}\Big) \int_0^T \|e^{-tA^*}g\|^2_{L^2(\omega)} dt. \end{equation*} \end{theorem} \medskip The proof of Theorem~\ref{observability_result} is given in Section~\ref{proof_obs}. It is derived from the uncertainty principles established in Theorem~\ref{specific_GS_uncertaintyprinciple} while revisiting the adapted Lebeau-Robbiano method used in \cite[Section~8.3]{BeauchardPravdaStarov} with some inspiration taken from the work of Miller \cite{Miller}. Contrary to \cite[Theorem~2.5]{MP}, where the authors take advantage of the characterization of symmetric Gelfand-Shilov spaces through the decomposition in the Hermite basis, let us stress that the above proof does not rely on a similar characterization of general Gelfand-Shilov spaces through the decomposition in a Hilbert basis composed of the eigenfunctions of a suitable operator. In the critical case $\delta = \delta^*$, the null-controllability of the evolution equation \eqref{PDEgeneral} whose adjoint system enjoys quantitative smoothing estimates in the Gelfand-Shilov space $S_{\nu}^{\mu}$ is still an open problem. As mentioned above, the general Shubin operators $\mathcal{H}_{m,k}$ are self-adjoint and generate strongly continuous semigroups on $L^2(\mathbb{R}^d)$, which enjoy quantitative smoothing effects. Consequently, Theorem~\ref{observability_result} can be directly applied to obtain the following null-controllability results: \medskip \begin{corollary} Let $m,k \geq 1$ be positive integers, $s> \frac{1}{2m}$ and \begin{equation*} \delta^*_{m,k,s} := \left\lbrace \begin{array}{ll} 1 & \text{if } s \geq \frac{m+k}{2mk}, \\ \frac{k}{m} (2sm-1) & \text{if } \frac{1}{2m} < s \leq \frac{m+k}{2mk}. \end{array}\right. \end{equation*} Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a contraction mapping such that there exist some constants $0 \leq \delta < \delta^*_{m,k,s}$, $m_0>0$, $R>0$ so that \begin{equation*} \forall x \in \mathbb{R}^d, \quad 0< m_0 \leq \rho(x) \leq R \langle x \rangle^{\delta}. \end{equation*} If $\omega \subset \mathbb{R}^d$ is a measurable subset thick with respect to the density $\rho$, the evolution equation associated to the fractional Shubin operator \begin{equation*}\label{PDE_shubin} \left\lbrace \begin{array}{ll} \partial_tf(t,x) + \mathcal{H}_{m,k}^s f(t,x)=u(t,x){\mathrm{1~\hspace{-1.4ex}l}}_{\omega}(x), \quad & x \in \mathbb{R}^d, \ t>0, \\ f|_{t=0}=f_0 \in L^2(\mathbb{R}^d), & \end{array}\right. \end{equation*} is null-controllable from the control subset $\omega$ in any positive time $T>0$. \end{corollary} \medskip \section{Proof of the uncertainty principles} \subsection{Proof of Theorem~\ref{general_uncertaintyprinciple}}\label{proof_mainprop} This section is devoted to the proof of Theorem~\ref{general_uncertaintyprinciple}. Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a positive contraction mapping such that there exist some positive constants $m>0$, $R>0$ so that \begin{equation}\label{rho_condi} \forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R \left\langle x \right\rangle.
\end{equation} Let $\omega \subset \mathbb{R}^d$ be a measurable subset $\gamma$-thick with respect to the density $\rho$, that is, \begin{equation*}\label{thick_rho} \exists 0 < \gamma \leq 1, \forall x \in \mathbb{R}^d, \quad |\omega \cap B(x,\rho(x))| \geq \gamma |B(x,\rho(x))|=\gamma \rho(x)^d |B(0,1)|, \end{equation*} where $B(x,r)$ denotes the Euclidean ball centered at $x \in \mathbb{R}^d$ with radius $r>0$, and where $|\cdot|$ denotes the Lebesgue measure. Since $\rho$ is a positive contraction mapping, Lemma~\ref{slowmet} in Appendix and the remark made after the statement of this result show that the family of norms $(\|\cdot\|_x)_{x \in \mathbb{R}^d}$ given by \begin{equation*} \forall x \in \mathbb{R}^d, \forall y \in \mathbb{R}^d, \quad \|y\|_x=\frac{\|y\|}{\rho(x)}, \end{equation*} where $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^d$, defines a slowly varying metric on $\mathbb{R}^d$. \subsection{Step 1. Bad and good balls} By using Theorem~\ref{slowmetric} in appendix, we can find a sequence $(x_k)_{k \geq 0}$ in $\mathbb{R}^d$ such that \begin{multline}\label{recov} \exists K_0 \in \mathbb{N}, \forall (i_1, ..., i_{K_0+1}) \in \mathbb{N}^{K_0+1} \textrm{ with } i_k \neq i_l \textrm{ if }1 \leq k \neq l \leq K_0+1, \\ \bigcap \limits_{k=1}^{K_0+1} {B_{i_k}}=\emptyset \end{multline} and \begin{equation}\label{recov1} \mathbb{R}^d=\bigcup_{k=0}^{+\infty} {B_k}, \end{equation} where \begin{equation}\label{asdf1} B_k=\{y \in \mathbb{R}^d:\ \|y-x_k\|_{x_k} <1\}=\{y \in \mathbb{R}^d:\ \|y-x_k\| <\rho(x_k)\}=B(x_k,\rho(x_k)). \end{equation} Let us notice from Theorem~\ref{slowmetric} that the non-negative integer $K_0 =K_0(d, L)$ only depends on the dimension $d$ and $L$ the Lipschitz constant of $\rho$, since the constant $C \geq 1$ appearing in slowness condition (\ref{equiv}) can be taken equal to $C=\frac{1}{1-L}$. It follows from (\ref{recov}) and (\ref{recov1}) that \begin{equation}\label{asdf2} \forall x \in \mathbb{R}^d, \quad 1 \leq \sum \limits_{k=0}^{+\infty} \mathbbm{1}_{B_k} (x) \leq K_0, \end{equation} where $\mathbbm{1}_{B_k}$ denotes the characteristic function of $B_k$. We deduce from (\ref{asdf2}) and the Fubini-Tonelli theorem that for all $g \in L^2(\mathbb{R}^d)$, \begin{equation*} \|g\|_{L^2(\mathbb{R}^d)}^2 = \int_{\mathbb{R}^d}|g(x)|^2dx \leq \sum_{k=0}^{+\infty}\int_{B_k}|g(x)|^2dx \leq K_0 \|g\|_{L^2(\mathbb{R}^d)}^2. \end{equation*} Let $f \in GS_{\mathcal{N}, \rho} \setminus \{0\}$ and $\varepsilon>0$. We divide the family of balls $(B_k)_{k \geq 0}$ into families of good and bad balls. A ball $B_k$, with $k \in \mathbb{N}$, is said to be good if it satisfies \begin{equation}\label{good_ball} \forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad \int_{B_k} |\rho(x)^{p} \partial_{x}^{\beta} f(x)|^2 dx \leq \varepsilon^{-1}2^{2(p+|\beta|)+d+1} K_0 N^{2}_{p, |\beta|} \int_{B_k} |f(x)|^2 dx, \end{equation} On the other hand, a ball $B_k$, with $k \in \mathbb{N}$, which is not good, is said to be bad, that is, when \begin{multline}\label{bad_ball} \exists (p_0, \beta_0) \in \mathbb{N} \times \mathbb{N}^d, \\ \int_{B_k} |\rho(x)^{p_0} \partial_{x}^{\beta_0} f(x)|^2 dx > \varepsilon^{-1}2^{2(p_0+|\beta_0|)+d+1} K_0 N^{2}_{p_0, |\beta_0|} \int_{B_k} |f(x)|^2 dx. 
\end{multline} If $B_k$ is a bad ball, it follows from \eqref{bad_ball} that there exists $(p_0, \beta_0) \in \mathbb{N} \times \mathbb{N}^d$ such that \begin{multline}\label{gh05} \int_{B_k}|f(x)|^2dx \leq \frac{\varepsilon}{2^{2(p_0+|\beta_0|)+d+1} K_0 N^{2}_{p_0, |\beta_0|}}\int_{B_k} \rho(x)^{2p_0}|\partial_x^{\beta_0}f(x)|^2dx \\ \leq \sum_{(p,\beta) \in \mathbb{N}\times \mathbb{N}^d} \frac{\varepsilon}{2^{2(p+|\beta|)+d+1} K_0 N^{2}_{p, |\beta|} }\int_{B_k}\rho(x)^{2p}|\partial_x^{\beta}f(x)|^2dx. \end{multline} By summing over all the bad balls and by using from (\ref{recov}) that \begin{equation*} \mathbbm{1}_{\bigcup_{\textrm{bad balls}} B_k} \leq \sum_{\textrm{bad balls}} \mathbbm{1}_{B_k} \leq K_0 \mathbbm{1}_{\bigcup_{\textrm{bad balls}} B_k}, \end{equation*} we deduce from (\ref{gh05}) and the Fubini-Tonelli theorem that \begin{multline}\label{gh6} \int_{\bigcup_{\textrm{bad balls}} B_k}|f(x)|^2dx \leq \sum_{\textrm{bad balls}}\int_{B_k}|f(x)|^2dx \\ \leq \sum_{(p,\beta) \in \mathbb{N}\times \mathbb{N}^d}\frac{\varepsilon}{2^{2(p+|\beta|)+d+1} N^{2}_{p, |\beta|} } \int_{\bigcup_{\textrm{bad balls}} B_k} \hspace{-8mm} |\rho(x)^{p} \partial_x^{\beta} f(x)|^2dx. \end{multline} By using that the number of solutions to the equation $p+\beta_1+...+\beta_{d}=m$, with $m \geq 0$, $d \geq 1$ and unknowns $p \in \mathbb{N}$ and $\beta=(\beta_1,...,\beta_d) \in \mathbb{N}^{d}$, is given by $\binom{m+d}{m}$, we obtain from (\ref{gh6}) that \begin{multline}\label{gh6y} \int_{\bigcup_{\textrm{bad balls}} B_k}|f(x)|^2dx \leq \varepsilon \sum_{m \geq 0} \frac{\binom{m+d}{m}}{2^{d+1} 4^m} \|f\|_{GS_{\mathcal{N},\rho}}^2 \\ \leq \varepsilon \sum_{m \geq 0} \frac{2^{m+d}}{2^{d+1} 4^m} \|f\|^2_{GS_{\mathcal{N},\rho}}= \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}}, \end{multline} since \begin{equation*}\label{gh45} \binom{m+d}{m} \leq \sum_{j=0}^{m+d}\binom{m+d}{j}=2^{m+d}. \end{equation*} Recalling from (\ref{recov1}) that $$ 1 \leq \mathbbm{1}_{\bigcup_{\textrm{bad balls}}B_k}+ \mathbbm{1}_{\bigcup_{\textrm{good balls}}B_k},$$ we notice that \begin{equation}\label{asdf5} \|f\|_{L^2(\mathbb{R}^d)}^2 \leq \int_{\bigcup_{\textrm{good balls}} B_k}|f(x)|^2dx+ \int_{\bigcup_{\textrm{bad balls}} B_k}|f(x)|^2dx. \end{equation} It follows from (\ref{gh6y}) and (\ref{asdf5}) that \begin{equation}\label{gh7} \|f\|_{L^2(\mathbb{R}^d)}^2 \leq \int_{\bigcup_{\textrm{good balls}} B_k}|f(x)|^2dx+ \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}}. \end{equation} \subsection{Step 2. Properties on good balls} As the ball $B(0,1)$ is an Euclidean ball, the Sobolev embedding $$W^{d,2}(B(0,1)) \xhookrightarrow{} L^{\infty}(B(0,1)),$$ see e.g.~\cite{adams} (Theorem~4.12), implies that there exists a positive constant $C_{d}\geq 1$ depending only the dimension $d \geq 1$ such that \begin{equation}\label{sobolev} \forall u \in W^{d,2}(B(0,1)), \quad \|u\|_{L^{\infty}(B(0,1))} \leq C_{d} \|u\|_{W^{d,2}(B(0,1))}. 
\end{equation} By translation invariance and homogeneity of the Lebesgue measure, it follows from (\ref{rho_condi}), (\ref{asdf1}) and (\ref{sobolev}) that for all $u \in {W^{d,2}(B_k)}$, \begin{multline*} \|u\|^2_{L^{\infty}(B_k)}=\|x \mapsto u(x_k+x \rho(x_k))\|^2_{L^{\infty}(B(0,1))} \leq C_{d}^2 \|x \mapsto u(x_k+x \rho(x_k))\|^2_{W^{d,2}(B(0,1))} \\ =C_{d}^2 \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k} \rho(x_k)^{2|\alpha|-d} |\partial^{\alpha}_x u(x)|^2 dx =C_{d}^2 \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k}m^{2|\alpha|-d} \Big( \frac{\rho(x_k)}{m} \Big)^{2|\alpha|-d} |\partial^{\alpha}_x u(x)|^2 dx\end{multline*} and \begin{multline}{\label{se1}} \|u\|^2_{L^{\infty}(B_k)} \leq C_{d}^2 \max(m,m^{-1})^d \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k} \Big( \frac{\rho(x_k)}{m} \Big)^{d} |\partial^{\alpha}_x u(x)|^2 dx\\ = C_{d}^2 \max(1,m^{-1})^{2d} \rho(x_k)^{d} \sum_{\substack{\alpha \in \mathbb{N}^d, \\ |\alpha| \leq d}} \int_{B_k} |\partial^{\alpha}_x u(x)|^2 dx. \end{multline} We deduce from (\ref{se1}) that for all $u \in {W^{d,2}(B_k)}$, \begin{equation}{\label{se2}} \|u\|_{L^{\infty}(B_k)} \leq C_{d} \max(1,m^{-1})^{d} \rho(x_k)^{\frac{d}{2}} \|u\|_{W^{d,2}(B_k)}. \end{equation} Let $B_k$ be a good ball. By using the fact that $\rho$ is a $L$-Lipschitz function, we notice that \begin{equation}{\label{equi}} \forall x \in B_k=B(x_k,\rho(x_k)), \quad 0 < \rho(x_k) \leq \frac{1}{1-L} \rho(x). \end{equation} We deduce from (\ref{se2}) and (\ref{equi}) that for all $\beta \in \mathbb{N}^d$ and $k \in \mathbb{N}$ such that $B_k$ is a good ball \begin{align}\label{gh30} & \ \rho(x_k)^{|\beta|+ \frac{d}{2}}\|\partial_x^{\beta}f\|_{L^{\infty}(B_k)} \\ \notag \leq & \ C_d \max(1,m^{-1})^{d} \rho(x_k)^{|\beta|+ d}\Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \ |\tilde{\beta}| \leq d}}\|\partial_x^{\beta+\tilde{\beta}}f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}}\\ \notag = & \ C_d \max(1,m^{-1})^{d} \Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \ |\tilde{\beta}| \leq d}}\| \rho(x_k)^{|\beta|+ d}\partial_x^{\beta+\tilde{\beta}}f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}} \\ \notag \leq & \ C_d \max(1,m^{-1})^{d} \frac{1}{(1-L)^{|\beta|+d}} \Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \ |\tilde{\beta}| \leq d}}\| \rho(x)^{|\beta|+ d}\partial_x^{\beta+\tilde{\beta}}f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}}. \end{align} By using (\ref{rho_condi}) and the definition of good balls (\ref{good_ball}), it follows from (\ref{gh30}) and the fact that $\mathcal{N}$ is non-decreasing with respect to the two indexes that for all $\beta \in \mathbb{N}^d$ and $k \in \mathbb{N}$ such that $B_k$ is a good ball \begin{align}\label{asdf7} & \ \rho(x_k)^{|\beta|+ \frac{d}{2}}\|\partial_x^{\beta}f\|_{L^{\infty}(B_k)} \\ \notag \leq & \ C_d \max(1,m^{-1})^{d} \frac{1}{(1-L)^{|\beta|+d}} \Big(\sum_{\substack{\tilde{\beta} \in \mathbb{N}^d, \\ |\tilde{\beta}| \leq d}} \varepsilon^{-1} 2^{2(2|\beta|+ |\tilde{\beta}|)+3d+1} K_0 N^{2}_{|\beta|+d, |\beta|+ |\tilde{\beta}|} \|f\|^2_{L^{2}(B_k)}\Big)^{\frac{1}{2}} \\ \notag \leq & \ \varepsilon^{-\frac{1}{2}} K_{d,m, L} \Big(\frac{4}{1-L} \Big)^{|\beta|} N_{|\beta|+d, |\beta|+d} \|f\|_{L^{2}(B_k)}, \end{align} with $$K_{d,m, L}= C_d \max(1,m^{-1})^d \sqrt{2K_0}\Big(\frac{4\sqrt{2}}{1-L} \Big)^d (d+1)^{\frac{d}{2}} \geq 1,$$ since $C_d \geq 1$. \subsection{Step 3 : Recovery of the $L^2(\mathbb{R}^d)$-norm.} Let $B_k$ be a good ball. Let us assume that $\|f\|_{L^2(B_k)} \neq0$. 
We can therefore define the following function \begin{equation}\label{gh13b} \forall y \in B(0,1), \quad \phi(y)=\varepsilon^{\frac{1}{2}}\rho(x_k)^{\frac{d}{2}}\frac{f(x_k+\rho(x_k) y)}{ K_{d,m,L} N_{d,d} \|f\|_{L^2(B_k)}}. \end{equation} We observe that \begin{equation*} \|\phi\|_{L^{\infty}(B(0,1))} = \varepsilon^{\frac{1}{2}}\rho(x_k)^{\frac{d}{2}}\frac{\|f\|_{L^{\infty}(B_k)}}{K_{d,m,L} N_{d,d} \|f\|_{L^2(B_k)}} \geq \frac{\varepsilon^{\frac{1}{2}}}{|B(0,1)|^{\frac{1}{2}}K_{d,m,L} N_{d,d}}, \end{equation*} and \begin{equation}\label{qa1} \forall \beta \in \mathbb{N}^d, \quad \| \partial_x^{\beta} \phi \|_{L^{\infty}(B(0,1))} = \frac{\varepsilon^{\frac{1}{2}} \rho(x_k)^{|\beta|+\frac{d}{2}} \|\partial_x^{\beta} f\|_{L^{\infty}(B_k)}}{K_{d,m,L} N_{d,d}\|f\|_{L^2(B_k)}}. \end{equation} It follows from \eqref{asdf7} and \eqref{qa1} that \begin{equation}\label{qa2} \forall \beta \in \mathbb{N}^d, \quad \| \partial_x^{\beta} \phi \|_{L^{\infty}(B(0,1))} \leq \Big( \frac{4}{1-L} \Big)^{|\beta|} \frac{N_{|\beta|+d,|\beta|+d}}{N_{d,d}}. \end{equation} We deduce from \eqref{qa2} that $\phi \in \mathcal{C}_{\mathcal{M}'}(B(0,1))$ with $$\mathcal{M}'=(M'_p)_{p \in \mathbb{N}}= \Big(\Big( \frac{4}{1-L} \Big)^{p} \frac{N_{p+d,p+d}}{N_{d,d}}\Big)_{p \in \mathbb{N}}.$$ The assumption that the diagonal sequence $\mathcal{M}=(N_{p,p})_{p \in \mathbb{N}}$ is logarithmically-convex and quasi-analytic implies that these two properties hold true as well for the sequence $\mathcal{M}'$. Indeed, the logarithmic convexity of $\mathcal{M}'$ is straightforward and since \begin{equation*} \sum_{p=0}^{+\infty} \frac{M'_p}{M'_{p+1}} = \frac{1-L}{4} \sum_{p=d}^{+\infty} \frac{N_{p,p}}{N_{p+1,p+1}}, \end{equation*} we deduce from the quasi-analyticity of $\mathcal{M}$ and from the Denjoy-Carleman's Theorem (Theorem~\ref{Den_Carl_thm}) that the sequence $\mathcal{M}'$ is also quasi-analytic. Furthermore, we observe from the definition of the Bang degree \eqref{Bang} and the equality \begin{equation*} \forall 0<t \leq 1, \forall n \in \mathbb{N}^*, \quad \sum_{-\log t < p \leq n} \frac{M'_{p-1}}{M'_{p}} = \frac{1-L}{4} \sum_{-\log(t e^{-d})< p \leq n+d} \frac{N_{p-1,p-1}}{N_{p,p}} \end{equation*} that \begin{equation}\label{bang1309} \forall 0< t \leq 1, \quad n_{t, \mathcal{M}',2e d} = n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e} -d \leq n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}. \end{equation} Setting \begin{equation*}\label{m_10} E_k=\Big\{\frac{x-x_k}{\rho(x_k)} \in B(0,1): \ x \in B_k \cap \omega \Big\} \subset B(0,1), \end{equation*} we notice from \eqref{asdf1} that \begin{equation}\label{m_11} |E_k| = \frac{|\omega \cap B_k|}{\rho(x_k)^d} \geq \frac{\gamma |B_k|}{\rho(x_k)^d} \geq \gamma |B(0,1)| >0, \end{equation} since $\omega$ is $\gamma$-thick with respect to $\rho$ and $B_k=B(x_k,\rho(x_k))$. From now on, we shall assume that \begin{equation}\label{small_eps} 0< \varepsilon \leq N^2_{0,0}. 
\end{equation} We deduce from \eqref{bang1309} and Proposition~\ref{NSV_multid_L2} applied with the function $\phi$ and the measurable subset $E_k$ of the bounded convex open ball $B(0,1)$ that there exists a positive constant $D_{\varepsilon}=D\big(\varepsilon, \mathcal{N}, d, \gamma, L,m\big)>1$ independent on $\phi$ and $k$ such that \begin{equation}\label{qa3} \int_{B(0,1)} |\phi(x)|^2 dx \leq D_{\varepsilon} \int_{E_k} |\phi(x)|^2 dx, \end{equation} with \begin{equation*} D_{\varepsilon}= \frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}'}\big(2n_{t, \mathcal{M}', 2ed}\big) \bigg)^{4n_{t, \mathcal{M}', 2ed}}, \end{equation*} and \begin{equation*} t= \frac{\varepsilon^{\frac{1}{2}}}{\max\big(1,|B(0,1)|^{\frac{1}{2}}\big)K_{d,m,L} N_{d,d}}. \end{equation*} Notice that from \eqref{small_eps}, we have $$0< t \leq 1,$$ since $K_{d,m,L} \geq 1$ and $\mathcal{N}$ is non-decreasing with respect to the two indexes. Let us also notice from the definitions in \eqref{def_gamma} that $$ \forall n \geq 1, \quad 1 \leq \Gamma_{\mathcal{M}'}(n) \leq \Gamma_{\mathcal{M}}(n+d).$$ We deduce from \eqref{bang1309} and the non-decreasing property of $\Gamma_{\mathcal{M}'}$ that \begin{align}\label{C_eps} D_{\varepsilon} & \leq \frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{t, \mathcal{M}', 2ed}+d\big) \bigg)^{4n_{t, \mathcal{M}', 2ed}} \\\nonumber & \leq \frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}\big) \bigg)^{4n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}}. \end{align} Let us denote $$C_{\varepsilon}=\frac{2}{\gamma} \bigg(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}\big) \bigg)^{4n_{te^{-d}, \mathcal{M}, \frac{8d}{1-L}e}}.$$ We deduce from \eqref{gh13b}, \eqref{m_11}, \eqref{qa3} and \eqref{C_eps} that \begin{equation}\label{qa4} \int_{B_k} |f(x)|^2 dx \leq C_{\varepsilon} \int_{\omega \cap B_k} |f(x)|^2 dx. \end{equation} Let us notice that the above estimate holds as well when $\| f \|_{L^2(B_k)}=0$. By using anew from (\ref{recov}) that \begin{equation*} \mathbbm{1}_{\bigcup_{\textrm{good balls}} B_k} \leq \sum \limits_{\textrm{good balls}} \mathbbm{1}_{B_k} \leq K_0 \mathbbm{1}_{\bigcup_{\textrm{good balls}} B_k}, \end{equation*} it follows from (\ref{gh7}) and (\ref{qa4}) that \begin{align*}\label{gh56y} \|f\|_{L^2(\mathbb{R}^d)}^2 & \leq \int_{\bigcup_{\textrm{good balls}} B_k}|f(x)|^2 dx + \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}} \\ \notag & \leq \sum_{\textrm{good balls}}\|f\|_{L^{2}(B_k)}^2 + \varepsilon \|f\|^2_{GS_{\mathcal{N}, \rho}} \\ \notag & \leq C_{\varepsilon} \sum_{\textrm{good balls}} \int_{\omega \cap B_k} |f(x)|^2 dx + \varepsilon \|f\|^2_{GS_{\mathcal{N}, \rho}} \\ \notag & \leq K_0 C_{\varepsilon} \int_{\omega \cap \big(\bigcup_{\textrm{good balls}} B_k\big)}|f(x)|^2 dx +\varepsilon \|f\|^2_{GS_{\mathcal{N}, \rho}}. \end{align*} The last inequality readily implies that \begin{equation*} \|f \|^2_{L^2(\mathbb{R}^d)} \leq K_0 C_{\varepsilon} \| f \|^2_{L^2(\omega)} +\varepsilon \| f \|^2_{GS_{\mathcal{N}, \rho}}. \end{equation*} This ends the proof of Theorem~\ref{general_uncertaintyprinciple}. \subsection{Proof of Theorem~\ref{specific_GS_uncertaintyprinciple}}\label{proof2} Let $A \geq 1$, $0< \mu \leq 1$, $\nu >0$ with $\mu+\nu \geq 1$ and $0 \leq \delta \leq \frac{1-\mu}{\nu} \leq 1$. Let $f \in \mathscr{S}(\mathbb{R}^d)$. 
We first notice that if \begin{equation*} \sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}} =+\infty, \end{equation*} then the result of Theorem~\ref{specific_GS_uncertaintyprinciple} clearly holds. We can therefore assume that \begin{equation}\label{bernst_estim_gs} \sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}} <+\infty. \end{equation} By assumption, we have \begin{equation}\label{rho_assum} \exists m,R >0, \forall x \in \mathbb{R}^d, \quad 0< m \leq \rho(x) \leq R \langle x \rangle^{\delta}. \end{equation} We deduce from \eqref{bernst_estim_gs}, \eqref{rho_assum} and Lemma~\ref{interpolation} that \begin{align*} \forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad & \|\rho(x)^p \partial_x^{\beta} f\|_{L^2(\mathbb{R}^d)} \leq R^p \| \langle x \rangle^{\delta p} \partial^{\beta}_x f\|_{L^2(\mathbb{R}^d)}\\ & \leq R^p (8^{\nu}e^{\nu}A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu} \sup_{q \in \mathbb{N}, \gamma \in \mathbb{N}^d} \frac{\|\langle x \rangle^q \partial_x^{\gamma} f \|_{L^2(\mathbb{R}^d)}}{A^{q+|\gamma|} (q!)^{\nu} (|\gamma|!)^{\mu}} \\ & \leq (8^{\nu}e^{\nu}\max(1,R) A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu} \sup_{q \in \mathbb{N}, \gamma \in \mathbb{N}^d} \frac{\|\langle x \rangle^q \partial_x^{\gamma} f \|_{L^2(\mathbb{R}^d)}}{A^{q+|\gamma|} (q!)^{\nu} (|\gamma|!)^{\mu}}, \end{align*} which implies that \begin{equation*} \|f\|_{GS_{\mathcal{N}, \rho}} \leq \sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}} < +\infty, \end{equation*} with the non-decreasing sequence $$\mathcal{N}=\Big(\big(8^{\nu} e^{\nu}\max(1,R)A\big)^{p+q}(p!)^{\delta \nu} (q!)^{\mu}\Big)_{(p,q) \in \mathbb{N}^2}.$$ The assumption $0\leq \delta \leq \frac{1-\mu}{\nu}$ implies that the diagonal sequence $$\mathcal{M}=(M_p)_{p \in \mathbb{N}}=\Big(\big(8^{\nu} e^{\nu}\max(1,R)A\big)^{2p} (p!)^{\delta \nu + \mu}\Big)_{p \in \mathbb{N}}$$ is a logarithmically convex quasi-analytic sequence thanks to the Denjoy-Carleman theorem (Theorem~\ref{Den_Carl_thm}) and since $\delta \nu +\mu \leq 1$. Since $\omega$ is assumed to be $\gamma$-thick with respect to $\rho$ for some $0<\gamma \leq 1$, we deduce from Theorem~\ref{general_uncertaintyprinciple} that there exist some constants $K=K(d, \rho, \delta, \mu, \nu)\geq 1, K'=K'(d, \rho, \gamma)\geq1, r=r(d, \rho) \geq 1$ so that for all $0< \varepsilon \leq M^2_0=1$, \begin{multline}\label{appl_thm_up} \|f\|^2_{L^2(\mathbb{R}^d)} \leq C_{\varepsilon} \| f\|^2_{L^2(\omega)} + \varepsilon \|f\|^2_{GS_{\mathcal{N},\rho}} \\ \leq C_{\varepsilon} \|f\|^2_{L^2(\omega)} + \varepsilon \bigg(\sup_{p \in \mathbb{N}, \beta \in \mathbb{N}^d} \frac{\|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)}}{A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}}\bigg)^2, \end{multline} where \begin{equation}\label{c_eps1609} C_{\varepsilon} =K' \bigg(\frac{2d}{\gamma}\Gamma_{\mathcal{M}}(2n_{t_0,\mathcal{M},r}) \bigg)^{4n_{t_0,\mathcal{M}, r}} \end{equation} with \begin{equation*} 0< t_0=\frac{\varepsilon^{\frac{1}{2}}}{K A^{2d}} \leq 1.
\end{equation*} Direct computations \begin{equation*} \forall N \geq 1, \quad \sum_{p > -\log t_0}^N \frac{M_{p-1}}{M_{p}} = (8^{\nu} e^{\nu}\max(1,R) A)^{-2} \sum_{p > -\log t_0}^{N} \frac{(p-1)!^{\delta \nu+\mu}}{p!^{\delta \nu+\mu}}, \end{equation*} show that \begin{equation*} n_{t_0, \mathcal{M}, r} = n_{t_0, \mathcal{M}_{\delta \nu+\mu}, r'A^2}, \end{equation*} with \begin{equation*} \mathcal{M}_{\delta \nu+\mu} = \big((p!)^{\delta \nu + \mu} \big)_{p \in \mathbb{N}} \quad \text{and} \quad r'=r\, 64^{\nu} e^{2\nu} \max(1,R^2). \end{equation*} By using from Lemma~\ref{ex_qa_sequence} that $\Gamma_{\mathcal{M}_{\delta \nu +\mu}}$ is bounded, it follows that there exists a positive constant $D \geq 1$ such that \begin{equation*} \forall n \in \mathbb{N}^*, \quad \Gamma_{\mathcal{M}}(n) = \Gamma_{\mathcal{M}_{\delta \nu +\mu}}(n) \leq D. \end{equation*} By using anew Lemma~\ref{ex_qa_sequence} and \eqref{c_eps1609}, we deduce that when $0 \leq \delta \nu+\mu <1$, \begin{align}\label{ceps} 0< C_{\varepsilon} & \leq K'\bigg(\frac{2d}{\gamma} D \bigg)^{4n_{t_0, \mathcal{M}_{\delta \nu +\mu}, r' A^2}}\\ \nonumber & \leq K'\bigg(\frac{2d}{\gamma} D \bigg)^{ 2^{\frac{1}{1-\delta \nu-\mu}+2} \big( 1-\log t_0+(A^2 r')^{\frac{1}{1-\delta \nu-\mu}}\big)} \\ \nonumber & =K'\bigg(\frac{2d}{\gamma} D \bigg)^{2^{\frac{1}{1-\delta \nu-\mu}+2} \big(1+\log K+ 2d\log A-\frac{1}{2}\log \varepsilon+(A^2 r')^{\frac{1}{1-\delta \nu-\mu}} \big)}. \end{align} Since $0 \leq \log A \leq A \leq A^{\frac{2}{1-\delta \nu-\mu}}$ and $\log \varepsilon \leq 0$, it follows from \eqref{ceps} that \begin{equation*}\label{ceps2} 0< C_{\varepsilon} \leq K'\bigg(\frac{2d}{\gamma} D \bigg)^{D'\big(1-\log \varepsilon +A^{\frac{2}{1-\delta \nu-\mu}} \big)}, \end{equation*} with $D'= 2^{\frac{1}{1-\delta \nu-\mu}+2}\max \Big(1+\log K, 2d+ {r'}^{\frac{1}{1-\delta \nu-\mu}},\frac{1}{2}\Big)$. On the other hand, when $\delta \nu +\mu =1$, Lemma~\ref{ex_qa_sequence} and the estimates \eqref{c_eps1609} imply that \begin{align}\label{ceps3} 0< C_{\varepsilon} & \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{4n_{t_0, \mathcal{M}_{1}, r'A^2}}\\ \notag & \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{4(1-\log t_0)e^{r' A^2}}\\ \notag & \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{4(1+\log K+2d \log A-\frac{1}{2}\log \varepsilon)e^{r' A^2}}. \end{align} Setting $D'=4 \max\big(1+\log K, 2d, r', \frac{1}{2}\big)$, we obtain from \eqref{ceps3} that \begin{equation*} 0< C_{\varepsilon} \leq K'\bigg( \frac{2d}{\gamma} D \bigg)^{D'(1+ \log A-\log \varepsilon)e^{D'A^2}}. \end{equation*} This ends the proof of Theorem~\ref{specific_GS_uncertaintyprinciple}. \subsection{Proof of Theorem~\ref{uncertainty_principle}}\label{thm_hermite_proof} Let $0 \leq \delta \leq 1$ and $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function such that the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $\text{(H1)}$, $\text{(H2)}$ and $\text{(H3)}_{\frac{1+\delta}{2}}$. Beforehand, let us notice that the logarithmic convexity property of the sequence $(M_p)_{p \in \mathbb{N}}$, that is, \begin{equation*} \forall p \in \mathbb{N}^*, \quad M_{p}^2 \leq M_{p+1} M_{p-1}, \end{equation*} implies that \begin{equation*} \forall p \in \mathbb{N}^*, \quad \frac{M_p}{M_{p+1}} \leq \frac{M_{p-1}}{M_p} \leq \frac{M_0}{M_1}, \end{equation*} since $M_p >0$ for all $p \in \mathbb{N}$.
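In other words, the consecutive ratios of the sequence $(M_p)_{p \in \mathbb{N}}$ are bounded from below, \begin{equation*} \forall p \in \mathbb{N}, \quad \frac{M_{p+1}}{M_p} \geq \frac{M_1}{M_0}, \quad \text{that is,} \quad \Big(\frac{M_0}{M_1}\Big)^{p+1} M_{p+1} \geq \Big(\frac{M_0}{M_1}\Big)^{p} M_{p}, \end{equation*} which is the form in which this observation is used just below.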
It follows that the modified sequence $(M'_p)_{p \in \mathbb{N}}= \big(\big(\frac{M_0}{M_1}\big)^p M_p \big)_{p \in \mathbb{N}}$ is a non-decreasing logarithmically convex sequence. Let $f \in GS_{\Theta}$ and $s=\frac{1+\delta}{2}$. According to Proposition~\ref{bernstein_estim1}, there exists a positive constant $D_{\Theta, d, \delta}\geq 1$ independent of $f$, such that for all $r \geq 0$, $\beta \in \mathbb{N}^d$, \begin{align*} \|\langle x \rangle^{r} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} & \leq (D_{\Theta,d,\delta})^{1+r+|\beta|} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}} \\ & =(D_{\Theta,d,\delta})^{1+r+|\beta|} \Big(\frac{M_1}{M_0}\Big)^{s\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor+s} \Big(M'_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}} \\ & \leq D'^{1+r+|\beta|} \Big(M'_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}, \end{align*} where $D'=D'(\Theta, d, \delta) \geq 1$ is a new positive constant. Since the sequence $(M'_p)_{p \in \mathbb{N}}$ is non-decreasing and $\frac{1}{2} \leq s \leq 1$, we deduce that \begin{equation*} \forall r \geq0, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^{r} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq D'^{1+r+|\beta|} \Big(M'_{\left\lfloor \frac{r+|\beta|}{2s} \right\rfloor +2d+4}\Big)^s \|f\|_{GS_{\Theta}}. \end{equation*} This implies that $f \in GS_{\mathcal{N}, \rho}$ and $$\| f \|_{GS_{\mathcal{N}, \rho}} \leq \|f\|_{GS_{\Theta}}$$ with $$\mathcal{N}=(N_{p,q})_{(p,q) \in \mathbb{N}^2}=\Big(\max(R,1)^pD'^{1+p+q} \Big(M'_{\left\lfloor \frac{\delta p+q}{1+\delta} \right\rfloor +2d+4}\Big)^{\frac{1+\delta}{2}}\Big)_{(p,q) \in \mathbb{N}^2}.$$ We conclude by applying Theorem~\ref{general_uncertaintyprinciple}. To that end, we notice that the non-decreasing property of the sequence $(M'_p)_{p \in \mathbb{N}}$ ensures that the sequence $\mathcal{N}$ is non-decreasing with respect to the two indexes. We deduce from the Denjoy-Carleman Theorem (Theorem~\ref{Den_Carl_thm}) and assumption $\text{(H3)}_{\frac{1+\delta}{2}}$ that the diagonal sequence $$(N_{p,p})_{p \in \mathbb{N}} = \big(\max(R,1)^pD'^{1+2p} (M'_{p +2d+4})^{\frac{1+\delta}{2}}\big)_{p \in \mathbb{N}}$$ is quasi-analytic since \begin{multline*} \sum_{p=0}^{+\infty} \frac{N_{p,p}}{N_{p+1,p+1}}= \frac{1}{ \max(R,1)D'^2} \sum_{p=2d+4}^{+\infty} \Big(\frac{{M'_p}}{{M'_{p+1}}}\Big)^{\frac{1+\delta}{2}} \\= \frac{1}{\max(R,1) D'^2} \Big(\frac{M_1}{M_0}\Big)^{\frac{1+\delta}{2}} \sum_{p=2d+4}^{+\infty} \Big(\frac{{M_p}}{{M_{p+1}}}\Big)^{\frac{1+\delta}{2}}=+\infty. \end{multline*} The result of Theorem~\ref{uncertainty_principle} then follows from Theorem~\ref{general_uncertaintyprinciple}, which ends the proof. \section{Proof of Theorem~\ref{observability_result}}\label{proof_obs} This section is devoted to the proof of the null-controllability result given by Theorem~\ref{observability_result}.
Let $(A,D(A))$ be a closed operator on $L^2(\mathbb{R}^d)$ which is the infinitesimal generator of a strongly continuous contraction semigroup $(e^{-tA})_{t \geq 0}$ on $L^2(\mathbb{R}^d)$ that satisfies the following quantitative smoothing estimates: there exist some constants $0 < \mu <1$, $\nu > 0$ with $\mu+ \nu \geq 1$ and $C \geq 1$, $r_1>0$, $r_2 \geq 0$, $0< t_0 \leq 1$ such that \begin{multline}\label{GS_estimate2} \forall 0< t \leq t_0, \forall \alpha, \beta \in \mathbb{N}^d, \forall g \in L^2(\mathbb{R}^d),\\ \| x^{\alpha}\partial_x^{\beta}( e^{-t A^*}g)\|_{L^2(\mathbb{R}^d)} \leq \frac{C ^{1+|\alpha|+|\beta|}}{t^{r_1 (|\alpha|+|\beta|)+r_2}}(\alpha !)^{\nu} (\beta!)^{\mu} \|g\|_{L^2(\mathbb{R}^d)}, \end{multline} where $A^*$ denotes the $L^2(\mathbb{R}^d)$-adjoint of $A$. Let $\rho : \mathbb{R}^d \longrightarrow (0,+\infty)$ be a $L$-Lipschitz positive function with~$\mathbb{R}^d$ being equipped with the Euclidean norm and $0 < L <1$ such that there exist some constants $0 \leq \delta < \frac{1-\mu}{\nu}$, $m>0$, $R>0$ so that \begin{equation*} \forall x \in \mathbb{R}^d, \quad 0<m \leq \rho(x) \leq R{\left\langle x\right\rangle}^{\delta}. \end{equation*} Let $\omega \subset \mathbb{R}^d$ be a measurable subset that is thick with respect to the density $\rho$. Let us show that Theorem~\ref{observability_result} can be deduced from the uncertainty principles given in Theorem~\ref{specific_GS_uncertaintyprinciple}. To that end, we deduce from the estimates \eqref{GS_estimate2} and Lemma~\ref{croch} that there exists a positive constant $C'=C'(C, d)\geq 1$ such that \begin{multline}\label{bernstein_estimate_GS} \forall 0< t \leq t_0, \forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \forall g \in L^2(\mathbb{R}^d),\\ \| \langle x \rangle^{p} \partial_x^{\beta}( e^{-t A^*}g)\|_{L^2(\mathbb{R}^d)} \leq \frac{C'^{1+p+|\beta|}}{t^{r_1 (p+|\beta|)+r_2}}(p !)^{\nu} (|\beta|!)^{\mu} \|g\|_{L^2(\mathbb{R}^d)}. \end{multline} It follows from \eqref{bernstein_estimate_GS} and Theorem~\ref{specific_GS_uncertaintyprinciple} applied with $f=e^{-tA^*}g \in \mathscr{S}(\mathbb{R}^d)$ that there exists a positive constant $K=K(\gamma, d, \rho, \mu, \nu) \geq 1$ such that $\forall 0< t \leq t_0, \forall g \in L^2(\mathbb{R}^d), \forall 0< \varepsilon \leq 1,$ \begin{equation} \|e^{-tA^*}g\|^2_{L^2(\mathbb{R}^d)} \leq e^{K ( 1-\log \varepsilon+(C' t^{-r_1})^{\frac{2}{1-s}})} \|e^{-tA^*}g\|^2_{L^2(\omega)} + \frac{C'^2}{t^{2r_2}} \varepsilon \|g\|^2_{L^2(\mathbb{R}^d)}, \end{equation} with $0<s=\delta \nu +\mu<1$. Thanks to the contraction property of the semigroup $(e^{-tA^*})_{t \geq 0}$, we deduce that for all $0< \tau \leq t_0$, $\frac{1}{2} \leq q <1$, $0<\varepsilon \leq 1$, $g \in L^2(\mathbb{R}^d)$, \begin{align*} \|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} & \leq \frac{1}{(1-q)\tau} \int_{q\tau}^{\tau} \|e^{-t A^*}g \|^2_{L^2(\mathbb{R}^d)} dt \\ \nonumber & \leq \frac{ e^{K (1-\log\varepsilon+(C'(q\tau)^{-r_1})^{\frac{2}{1-s}})}}{(1-q)\tau} \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + \varepsilon \frac{C'^2}{(q \tau)^{2r_2}} \|g\|^2_{L^2(\mathbb{R}^d)} \\ \nonumber & \leq \frac{ e^{K (1-\log\varepsilon+(C'2^{r_1}\tau^{-r_1})^{\frac{2}{1-s}})}}{(1-q)\tau} \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + \varepsilon \frac{4^{r_2}C'^2}{\tau^{2 r_2}} \|g\|^2_{L^2(\mathbb{R}^d)}. 
\end{align*} For $0< \tau \leq t_0$ and $\frac{1}{2} \leq q <1$, we choose $$0<\varepsilon = \exp\big(-\tau^{-\frac{2r_1}{1-s}}\big)\leq 1.$$ Since $1 \leq \frac{1}{\tau^{2r_1}}$, it follows that there exists a new constant $K'=K'(\gamma, d, \rho, \delta, \mu, \nu,C', r_1, s)\geq 1$ such that for all $0< \tau \leq t_0$, $\frac{1}{2} \leq q <1$, $g \in L^2(\mathbb{R}^d)$, \begin{equation*} \|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \frac{e^{K'\tau^{-\frac{2r_1}{1-s}}}}{(1-q)\tau} \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + \exp\big(-\tau^{-\frac{2r_1}{1-s}}\big) \frac{4^{r_2}C'^2}{\tau^{2 r_2}} \|g\|^2_{L^2(\mathbb{R}^d)}. \end{equation*} We follow the strategy developed by Miller in \cite{Miller}. Let $0<t_1\leq t_0$ such that for all $0< \tau \leq t_1$, \begin{equation*} \frac{\exp \Big(K' \tau^{-\frac{2r_1}{1-s}}\Big)}{\tau} \leq \exp\Big(2K' \tau^{-\frac{2r_1}{1-s}}\Big) \end{equation*} and \begin{equation*} \exp\big(-\tau^{-\frac{2r_1}{1-s}}\big) \frac{4^{r_2}C'^2}{\tau^{2 r_2}} \leq \exp\Big(-\frac{\tau^{-\frac{2r_1}{1-s}}}{2}\Big). \end{equation*} It follows that for all $0 < \tau \leq t_1$, $\frac{1}{2} \leq q <1$, $g \in L^2(\mathbb{R}^d)$, \begin{multline*} (1-q)\exp\Big(-\frac{2K'}{\tau^{\frac{2r_1}{1-s}}}\Big) \|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} \\ \leq \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + (1-q)\exp\Big(-\frac{2K'+\frac{1}{2}}{\tau^{\frac{2r_1}{1-s}}}\Big)\|g\|^2_{L^2(\mathbb{R}^d)}. \end{multline*} Setting $f(\tau)=(1-q)\exp\Big(-\frac{2K'}{\tau^{\frac{2r_1}{1-s}}}\Big)$ and choosing $q$ so that $$ \max\Big(\Big(\frac{2K'}{2K'+\frac{1}{2}}\Big)^{\frac{1-s}{2r_1}}, \frac{1}{2} \Big) \leq q<1,$$ we obtain that for all $0< \tau \leq t_1$ and $g \in L^2(\mathbb{R}^d)$, \begin{equation}\label{approx_obs1609} f(\tau) \|e^{-\tau A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \int_{q \tau}^{\tau} \|e^{-tA^*} g \|^2_{L^2(\omega)} dt + f(q \tau) \|g\|^2_{L^2(\mathbb{R}^d)}. \end{equation} Thanks to this estimate, the observability estimate is established as follows: let $0< T \leq t_1$ and define the two sequences $(\tau_k)_{k \geq 0}$ and $(T_k)_{k \geq 0}$ as \begin{equation*} \forall k \geq 0, \quad \tau_k= q^k (1-q) T \quad \text{ and } \quad \forall k \geq 0, \quad T_{k+1}= T_k-\tau_k, \quad T_0=T. \end{equation*} By applying \eqref{approx_obs1609} with $e^{-T_{k+1}A^*}g$, it follows that for all $g \in L^2(\mathbb{R}^d)$ and $k \in \mathbb{N}$, \begin{multline*} f(\tau_k) \|e^{-T_k A^*}g\|^2_{L^2(\mathbb{R}^d)} -f(\tau_{k+1}) \|e^{-T_{k+1} A^*}g\|^2_{L^2(\mathbb{R}^d)} \\ \leq \int_{\tau_{k+1}}^{\tau_k} \|e^{-(t+T_{k+1})A^*}g \|^2_{L^2(\omega)} dt = \int_{\tau_{k+1}+T_{k+1}}^{T_k} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt \leq \int_{T_{k+1}}^{T_k} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt. \end{multline*} By summing over all the integers $k \in \mathbb{N}$ and by noticing that $$\lim \limits_{k \to +\infty} f(\tau_k)=0, \quad \lim \limits_{k \to +\infty} T_k= T- \sum_{k \in \mathbb{N}} \tau_k =0,$$ and $$ \forall k \geq 0, \quad \|e^{-T_k A^*}g\|_{L^2(\mathbb{R}^d)} \leq \|g\|_{L^2(\mathbb{R}^d)},$$ by the contraction property of the semigroup $(e^{-tA^*})_{t \geq 0}$, it follows that \begin{equation*} \|e^{-T A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \frac{1}{1-q}\exp\Big(\frac{2K'}{((1-q)T)^{\frac{2r_1}{1-\mu -\delta\nu}}} \Big) \int_{0}^{T} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt. 
\end{equation*} By using anew the contraction property of the semigroup $(e^{-tA^*})_{t \geq 0}$, we deduce that for all $g \in L^2(\mathbb{R}^d)$, $T \geq t_1$, \begin{multline*} \|e^{-T A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \|e^{-t_1 A^*}g\|^2_{L^2(\mathbb{R}^d)} \leq \frac{1}{1-q} \exp\Big(\frac{2K'}{((1-q)t_1)^{\frac{2 r_1}{1-\mu -\delta\nu}}} \Big) \int_{0}^{t_1} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt \\ \leq \frac{1}{1-q} \exp\Big(\frac{2K'}{((1-q)t_1)^{\frac{2 r_1}{1-\mu -\delta\nu}}} \Big) \int_{0}^{T} \|e^{-tA^*}g \|^2_{L^2(\omega)} dt. \end{multline*} This ends the proof of Theorem~\ref{observability_result}. \section{Appendix}\label{appendix} \subsection{Bernstein type estimates}\label{Hermite_functions} This section is devoted to the proof of the Bernstein type estimates given in Proposition~\ref{bernstein_estim1}. To that end, we begin by recalling basic facts about Hermite functions. The standard Hermite functions $(\phi_{k})_{k\geq 0}$ are defined for $x \in \mathbb{R}$, \begin{equation*}\label{defi} \phi_{k}(x)=\frac{(-1)^k}{\sqrt{2^k k!\sqrt{\pi}}} e^{\frac{x^2}{2}}\frac{d^k}{dx^k}(e^{-x^2}) =\frac{1}{\sqrt{2^k k!\sqrt{\pi}}} \Bigl(x-\frac{d}{dx}\Bigr)^k(e^{-\frac{x^2}{2}})=\frac{ a_{+}^k \phi_{0}}{\sqrt{k!}}, \end{equation*} where $a_{+}$ is the creation operator $$a_{+}=\frac{1}{\sqrt{2}}\Big(x-\frac{d}{dx}\Big).$$ The Hermite functions satisfy the identity \begin{equation*}\label{eq2ui1} \forall k \in \mathbb{N}, \quad \Big(-\frac{d^2}{dx^2}+x^2\Big)\phi_{k}=(2k+1)\phi_{k}. \end{equation*} The family $(\phi_{k})_{k\in \mathbb{N}}$ is a Hilbert basis of $L^2(\mathbb R)$. We set for $\alpha=(\alpha_{j})_{1\le j\le d}\in\mathbb N^d$, $x=(x_{j})_{1\le j\le d}\in \mathbb R^d,$ \begin{equation*}\label{jk1} \Phi_{\alpha}(x)=\prod_{j=1}^d\phi_{\alpha_j}(x_j). \end{equation*} The family $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$ is a Hilbert basis of $L^2(\mathbb R^d)$ composed of the eigenfunctions of the $d$-dimensional harmonic oscillator \begin{equation*}\label{6.harmo} \mathcal{H}=-\Delta_x+|x|^2=\sum_{k\ge 0}(2k+d)\mathbb P_{k},\quad \text{Id}=\sum_{k \ge 0}\mathbb P_{k}, \end{equation*} where $\mathbb P_{k}$ is the orthogonal projection onto $\text{Span}_{\mathbb{C}} \{\Phi_{\alpha}\}_{\alpha\in \mathbb N^d,\val \alpha =k}$, with $\val \alpha=\alpha_{1}+\dots+\alpha_{d}$. Instrumental in the sequel are the following basic estimates proved by Beauchard, Jaming and Pravda-Starov in the proof of \cite[Proposition~3.3]{kkj} (formula (3.38)). \begin{lemma}\label{lem1} With $\mathcal{E}_N= \emph{\textrm{Span}}_{\mathbb{C}}\{\Phi_{\alpha}\}_{\alpha \in \mathbb{N}^d, \ |\alpha| \leq N}$, we have for all $N \in \mathbb{N}$, $f \in \mathcal{E}_N$, \begin{equation*} \forall (\alpha, \beta) \in \mathbb{N}^d \times \mathbb{N}^d, \quad \|x^{\alpha}\partial_x^{\beta}f\|_{L^2(\mathbb{R}^d)}\leq 2^{\frac{|\alpha|+|\beta|}{2}}\sqrt{\frac{(N+|\alpha|+|\beta|)!}{N!}}\|f\|_{L^2(\mathbb{R}^d)}. \end{equation*} \end{lemma} We can now prove Proposition~\ref{bernstein_estim1}. Let $\Theta : [0,+\infty) \longrightarrow [0,+\infty)$ be a non-negative continuous function such that the associated sequence $(M_p)_{p \in \mathbb{N}}$ in \eqref{lc_sequence} satisfies the assumptions $(H1)$ and $(H2)$. Let $f \in GS_{\Theta}$, $0< s \leq 1$ and $(\alpha, \beta) \in \mathbb{N}^d \times \mathbb{N}^d$. 
We begin by proving that there exist some positive constants $C'_{\Theta}>0$, $\tilde{C}_{\Theta}>0$, independent of $f$, $\alpha$ and $\beta$, such that \begin{equation*} \| x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C'_{\Theta} \tilde{C}^{|\alpha|+|\beta|}_{\Theta} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}. \end{equation*} It is sufficient to prove that there exist some positive constants $C'_{\Theta}>0$, $\tilde{C}_{\Theta}>0$, independent of $f$, $\alpha$ and $\beta$, such that for all $N \geq |\alpha|+|\beta|+1$, \begin{equation}\label{bernst_goal} \|x^{\alpha} \partial_x^{\beta} \pi_N f \|_{L^2(\mathbb{R}^d)} \leq C'_{\Theta} \tilde{C}^{|\alpha|+|\beta|}_{\Theta} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}, \end{equation} with $\pi_N f$ the orthogonal projection of the function $f$ onto the space $\textrm{Span}_{\mathbb{C}}\{\Phi_{\gamma}\}_{\gamma \in \mathbb{N}^d, \ |\gamma| \leq N}$ given by \begin{equation}\label{orth_proj} \pi_N f = \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \leq N}} \left\langle f, \Phi_{\gamma} \right\rangle_{L^2(\mathbb{R}^d)} \Phi_{\gamma}. \end{equation} Indeed, by using that $(\pi_N f)_{N \in \mathbb{N}}$ converges to $f$ in $L^2(\mathbb{R}^d)$ and therefore in $\mathcal{D}'(\mathbb{R}^d)$, we obtain that the sequence $(x^{\alpha} \partial^{\beta}_x \pi_{N} f)_{N\in \mathbb{N}}$ converges to $x^{\alpha}\partial^{\beta}_x f$ in $\mathcal{D}'(\mathbb{R}^d)$. If the estimates \eqref{bernst_goal} hold, the sequence $(x^{\alpha} \partial^{\beta}_x \pi_{N} f)_{N\in \mathbb{N}}$ is bounded in $L^2(\mathbb{R}^d)$ and therefore weakly converges (up to a subsequence) to a limit $g \in L^2(\mathbb{R}^d)$. Thanks to the uniqueness of the limit in $\mathcal{D}'(\mathbb{R}^d)$, it follows that $g=x^{\alpha} \partial^{\beta}_x f \in L^2(\mathbb{R}^d)$. Moreover, we have \begin{equation*} \|x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq \liminf_{N \to +\infty} \|x^{\alpha} \partial_x^{\beta} \pi_{\phi(N)} f \|_{L^2(\mathbb{R}^d)} \leq C'_{\Theta} \tilde{C}^{|\alpha|+|\beta|}_{\Theta} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s\|f\|_{GS_{\Theta}}. \end{equation*} Let us prove that the estimates \eqref{bernst_goal} hold. Since $\pi_{|\alpha|+|\beta|}$ is an orthogonal projection on $L^2(\mathbb{R}^d)$ and therefore satisfies $$\| \pi_{|\alpha|+|\beta|} f \|_{L^2(\mathbb{R}^d)} \leq \| f \|_{L^2(\mathbb{R}^d)},$$ we deduce from Lemma~\ref{lem1} and \eqref{orth_proj} that for all $N \geq |\alpha|+ |\beta|+1$, \begin{align*} & \quad \|x^{\alpha}\partial_x^{\beta} \pi_N f \|_{L^2} \leq \|x^{\alpha}\partial_x^{\beta} \pi_{|\alpha|+|\beta|} f \|_{L^2} + \|x^{\alpha}\partial_x^{\beta} (\pi_{N}-\pi_{|\alpha|+|\beta|}) f \|_{L^2} \\ \notag & \leq 2^{\frac{|\alpha|+|\beta|}{2}}\sqrt{\frac{(2(|\alpha|+|\beta|))!}{(|\alpha|+|\beta|)!}} \|f\|_{L^2(\mathbb{R}^d)} + \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| \|x^{\alpha} \partial_x^{\beta} \Phi_{\gamma} \|_{L^2(\mathbb{R}^d)}.
\end{align*} By using anew Lemma~\ref{lem1}, it follows that for all $N \geq |\alpha|+ |\beta|+1$, \begin{align}\label{gse_1709} & \|x^{\alpha}\partial_x^{\beta} \pi_N f \|_{L^2} \\ \notag & \leq 2^{|\alpha|+|\beta|}(|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}} \|f\|_{L^2(\mathbb{R}^d)} + \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| 2^{\frac{|\alpha|+|\beta|}{2}}\sqrt{\frac{(|\gamma|+|\alpha|+|\beta|)!}{|\gamma|!}} \\ \notag & \leq 2^{|\alpha|+|\beta|}(|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}} \|f\|_{L^2(\mathbb{R}^d)} + \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| 2^{|\alpha|+|\beta|} |\gamma|^{\frac{|\alpha|+|\beta|}{2}}. \end{align} On the first hand, it follows from $0< s \leq 1$ that for all $N \geq |\alpha|+ |\beta|+1$, \begin{align}\label{gse_2} & \quad \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| |\gamma|^{\frac{|\alpha|+|\beta|}{2}} \\ \notag &\leq \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{s\Theta(|\gamma|)} |\gamma|^{-\frac{(2-s)(d+1)}{2}} |\gamma|^{\frac{|\alpha|+|\beta|+(2-s)(d+1)}{2}}e^{-s\Theta(|\gamma|)} \\ \notag &\leq \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{s\Theta(|\gamma|)} |\gamma|^{-\frac{(2-s)(d+1)}{2}} \\ \notag &\leq \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \vert|\big(\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}\big)_{\gamma \in \mathbb{N}^d} \vert|_{l^{\infty}(\mathbb{N}^d)}^{1-s} \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} \Big(|\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{\Theta(|\gamma|)}\Big)^s |\gamma|^{-\frac{(2-s)(d+1)}{2}}. \end{align} H\"older's inequality implies that for all $0 < s \leq 1$, \begin{equation}\label{holder} \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} \Big(|\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| e^{\Theta(|\gamma|)}\Big)^s |\gamma|^{-\frac{(2-s)(d+1)}{2}} \leq D_{d,s}\left\|\Big(e^{\Theta(|\gamma|)}\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}\Big)_{\gamma \in \mathbb{N}^d} \right\|_{l^{2}(\mathbb{N}^d)}^{s}, \end{equation} with \begin{equation*} D_{d,s} =\Big(\sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\gamma| \geq 1}} |\gamma|^{-(d+1)}\Big)^{1-\frac{s}{2}} < +\infty. \end{equation*} Since $\Theta(|\gamma|) \geq 0$ for all $\gamma \in \mathbb{N}^d$, it follows that \begin{equation}\label{infnorm1709} \vert|\big(\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}\big)_{\gamma \in \mathbb{N}^d} \vert|_{l^{\infty}(\mathbb{N}^d)} \leq \vert|\big(\left\langle f, \Phi_{\gamma} \right\rangle_{L^2} e^{\Theta(|\gamma|)}\big)_{\gamma \in \mathbb{N}^d} \vert|_{l^{\infty}(\mathbb{N}^d)} \leq \| f \|_{GS_{\Theta}}. \end{equation} We deduce from \eqref{gse_2}, \eqref{holder} and \eqref{infnorm1709} that \begin{equation}\label{gse_3} \sum_{\substack{\gamma \in \mathbb{N}^d, \\ |\alpha|+|\beta|+1 \leq |\gamma| \leq N}} |\left\langle f, \Phi_{\gamma} \right\rangle_{L^2}| |\gamma|^{\frac{|\alpha|+|\beta|}{2}} \leq D_{d,s} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}. 
\end{equation} On the other hand, the assumption $\text{(H2)}$ implies that there exist $C_{\Theta}\geq 1$ and $L_{\Theta}\geq 1$ such that if $|\alpha|+|\beta| \geq 1$ then \begin{align}\label{gse_4} (|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}} & = (2s)^{\frac{|\alpha|+|\beta|}{2}} \Big(\frac{|\alpha|+|\beta|}{2s} \Big)^{s \frac{|\alpha|+|\beta|}{2s}} \\ \notag & \leq (2s)^{\frac{|\alpha|+|\beta|}{2}} \Big(\left\lfloor \frac{|\alpha|+|\beta|}{2s} \right\rfloor +1\Big)^{s \Big(\left\lfloor \frac{|\alpha|+|\beta|}{2s}\right\rfloor +1\Big)} \\ \notag & \leq (2s)^{\frac{|\alpha|+|\beta|}{2}} C^s_{\Theta}L^s_{\Theta} L_{\Theta}^{\frac{|\alpha|+|\beta|}{2}} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|}{2s} \right\rfloor +1}\Big)^s. \end{align} The last inequality holds true as well when $|\alpha| +|\beta|=0$ with the convention $0^0=1$ since $C_{\Theta} M_1 \geq 1$ and $L_{\Theta} \geq 1$. The logarithmical convexity of the sequence $(M_p)_{p \in \mathbb{N}}$ gives \begin{equation*}\label{log_conv0} \forall p \in \mathbb{N}, \quad M_p \leq \frac{M_{0}}{M_1} M_{p+1} \end{equation*} and therefore, \begin{equation*}\label{log_conv} \forall 0 \leq p \leq q, \quad M_p \leq \Big(\frac{M_{0}}{M_1}\Big)^{q-p} M_{q}. \end{equation*} By using this estimate together with the following elementary inequality \begin{equation*} \forall x,y \geq 0, \quad \lfloor x +y \rfloor \leq \lfloor x \rfloor + \lfloor y \rfloor +1, \end{equation*} we obtain \begin{equation}\label{log_conv2} \forall 0 \leq r \leq r', \quad M_{\lfloor r \rfloor} \leq \max\Big(1,\frac{M_{0}}{M_1}\Big)^{\lfloor r'-r \rfloor+1} M_{\lfloor r' \rfloor}. \end{equation} It follows from \eqref{gse_4} and \eqref{log_conv2} that \begin{align}\label{gse_5} (|\alpha|+|\beta|)^{\frac{|\alpha|+|\beta|}{2}} & \leq C_{\Theta}^s L_{\Theta}^s \Big(\sqrt{2s L_{\Theta}} \Big)^{|\alpha|+|\beta|} \max\Big(1,\frac{M_0}{M_1}\Big)^{s (\left\lfloor \frac{(2-s)(d+1)}{2s} \right\rfloor +1)}\Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \\ \notag & \leq C_{\Theta}^s L_{\Theta}^s \Big(\sqrt{2s L_{\Theta}} \Big)^{|\alpha|+|\beta|} \max\Big(1,\frac{M_0}{M_1}\Big)^{d+2} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s, \end{align} since $0< s \leq 1$. We deduce from \eqref{gse_1709}, \eqref{gse_3} and \eqref{gse_5} that for all $N \geq |\alpha|+|\beta|+1$, \begin{equation*} \| x^{\alpha} \partial_x^{\beta} \pi_N f \|_{L^2(\mathbb{R}^d)} \leq K_{\Theta, s} K'^{|\alpha|+|\beta|}_{\Theta,s} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}, \end{equation*} with $K_{\Theta,s} = D_{d,s} +C^s_{\Theta} L^s_{\Theta} \max\Big(1,\frac{M_0}{M_1}\Big)^{d+2}\geq 1$ and $K'_{\Theta,s} = 2 \max(1, \sqrt{2sL_{\Theta}}) \geq 1$. This implies that $f \in \mathscr{S}(\mathbb{R}^d)$ and for all $\alpha, \beta \in \mathbb{N}^d$, \begin{equation}\label{gse_6} \| x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq K_{\Theta, s} K'^{|\alpha|+|\beta|}_{\Theta,s} \Big(M_{\left\lfloor \frac{|\alpha| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}. 
\end{equation} By using Newton formula, we obtain that for all $k \in \mathbb{N}$, \begin{multline*} \|\left\langle x\right\rangle^k \partial_x^\beta f \|_{L^2(\mathbb{R}^d)}^2 = \int_{\mathbb{R}^d} \Big( 1 + \sum \limits_{i=1}^d {x_i^2} \Big)^k |\partial_x^\beta f(x) |^2 dx \\ = \int_{\mathbb{R}^d} \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} x^{2 \tilde{\gamma}} |\partial_x^\beta f(x) |^2 dx =\sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} \|x^{\tilde{\gamma}} \partial_x^\beta f \|_{L^2(\mathbb{R}^d)}^2 , \end{multline*} where we denote $\tilde{\gamma}=(\gamma_1,...,\gamma_d) \in \mathbb{N}^d$ if $\gamma=(\gamma_1,...\gamma_{d+1}) \in \mathbb{N}^{d+1}$. It follows from \eqref{log_conv2} and \eqref{gse_6} that for all $k \in \mathbb{N}$ and $\beta \in \mathbb{N}^d$, \begin{align}\label{gse7} \|\left\langle x\right\rangle^k \partial_x^\beta f \|_{L^2(\mathbb{R}^d)}^2 \leq & \ \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} K^2_{\Theta,s} K'^{2(|\tilde{\gamma}|+|\beta|)}_{\Theta, s} \Big(M_{\left\lfloor \frac{|\tilde{\gamma}| +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{2s} \|f\|^2_{GS_{\Theta}} \\ \nonumber \leq & \ \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !} K^2_{\Theta, s} K'^{2(k+|\beta|)}_{\Theta, s} \max\Big(1,\frac{M_0}{M_1}\Big)^{k-|\tilde{\gamma}|+2} \Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{2s} \|f\|^2_{GS_{\Theta}} \\ \nonumber \leq & K^2_{\Theta, s} (d+1)^k \max\Big(1,\frac{M_0}{M_1}\Big)^{k+2}K'^{2(k+|\beta|)}_{\Theta, s} \Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{2s} \|f\|^2_{GS_{\Theta}}, \end{align} since \begin{equation*} \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=k}} \frac{k!}{\gamma !}=(d+1)^k, \end{equation*} thanks to Newton formula. Let $r \in \mathbb{R}_+^* \setminus \mathbb{N}$. There exist $0 < \theta < 1$ and $k \in \mathbb{N}$ such that \begin{equation*}\label{floor} r= \theta k + (1- \theta)(k+1). \end{equation*} By using Hölder inequality, it follows from \eqref{gse7} that \begin{multline}\label{holder1709} \|\left\langle x\right\rangle^r \partial_x^\beta f \|_{L^2(\mathbb{R}^d)} \leq \|\langle x\rangle^k \partial_x^\beta f\|_{L^2(\mathbb{R}^d)}^{\theta}\|\langle x\rangle^{k+1} \partial_x^\beta f\|_{L^2(\mathbb{R}^d)}^{1-\theta} \\ \leq K_{\Theta, s} (d+1)^{\frac{r}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r}{2}+1}K'^{r+|\beta|}_{\Theta, s}\Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s \theta} \Big(M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s(1-\theta)} \|f\|_{GS_{\Theta}}. \end{multline} By using anew \eqref{log_conv2}, we have \begin{equation*} M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1} \leq \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r+1-k}{2s}+1} M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1} \end{equation*} and \begin{equation*} M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1} \leq \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r-k}{2s}+1} M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}, \end{equation*} since $k \leq r$. 
We deduce from \eqref{holder1709} that \begin{align*} \|\left\langle x\right\rangle^r \partial_x^\beta f \|_{L^2(\mathbb{R}^d)} & \leq K_{\Theta, s} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{2r+\theta-k}{2}+1+s} (d+1)^{\frac{r}{2}} K'^{r+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}} \\ & \leq K_{\Theta, s} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{r}{2}+3} (d+1)^{\frac{r}{2}} K'^{r+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{r+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^s \|f\|_{GS_{\Theta}}, \end{align*} since $0<s\leq 1$, $k \leq r < k+1$ and $0< \theta <1$. Let us notice that the above inequality also holds for $r \in \mathbb{N}$. Indeed, it follows from \eqref{log_conv2} and \eqref{gse7} that \begin{align*} \|\left\langle x\right\rangle^k \partial_x^\beta f \|_{L^2(\mathbb{R}^d)} \leq & K_{\Theta, s} (d+1)^{\frac{k}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{k}{2}+1}K'^{k+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{k +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s} \|f\|_{GS_{\Theta}} \\ \leq &K_{\Theta, s} (d+1)^{\frac{k}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{k}{2}+1+\frac{1}{2}+1}K'^{k+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s} \|f\|_{GS_{\Theta}}\\ \leq &K_{\Theta, s} (d+1)^{\frac{k}{2}} \max\Big(1,\frac{M_0}{M_1}\Big)^{\frac{k}{2}+3}K'^{k+|\beta|}_{\Theta, s} \Big(M_{\left\lfloor \frac{k+1 +|\beta|+(2-s)(d+1)}{2s} \right\rfloor +1}\Big)^{s} \|f\|_{GS_{\Theta}}. \end{align*} This ends the proof of Proposition~\ref{bernstein_estim1}. \subsection{Gelfand-Shilov regularity}\label{gelfand} We refer the reader to the works~\cite{gelfand_shilov,rodino1,rodino,toft} and the references herein for extensive expositions of the Gelfand-Shilov regularity theory. The Gelfand-Shilov spaces $S_{\nu}^{\mu}(\mathbb{R}^d)$, with $\mu,\nu>0$, $\mu+\nu\geq 1$, are defined as the spaces of smooth functions $f \in C^{\infty}(\mathbb{R}^d)$ satisfying the estimates $$\exists A,C>0, \quad |\partial_x^{\alpha}f(x)| \leq C A^{|\alpha|}(\alpha !)^{\mu}e^{-\frac{1}{A}|x|^{1/\nu}}, \quad x \in \mathbb{R}^d, \ \alpha \in \mathbb{N}^d,$$ or, equivalently $$\exists A,C>0, \quad \sup_{x \in \mathbb{R}^d}|x^{\beta}\partial_x^{\alpha}f(x)| \leq C A^{|\alpha|+|\beta|}(\alpha !)^{\mu}(\beta !)^{\nu}, \quad \alpha, \beta \in \mathbb{N}^d,$$ with $\alpha!=(\alpha_1!)...(\alpha_d!)$ if $\alpha=(\alpha_1,...,\alpha_d) \in \mathbb{N}^d$. These Gelfand-Shilov spaces $S_{\nu}^{\mu}(\mathbb{R}^d)$ may also be characterized as the spaces of Schwartz functions $f \in \mathscr{S}(\mathbb{R}^d)$ satisfying the estimates $$\exists C>0, \varepsilon>0, \quad |f(x)| \leq C e^{-\varepsilon|x|^{1/\nu}}, \quad x \in \mathbb{R}^d; \qquad |\widehat{f}(\xi)| \leq C e^{-\varepsilon|\xi|^{1/\mu}}, \quad \xi \in \mathbb{R}^d.$$ In particular, we notice that Hermite functions belong to the symmetric Gelfand-Shilov space $S_{1/2}^{1/2}(\mathbb{R}^d)$. More generally, the symmetric Gelfand-Shilov spaces $S_{\mu}^{\mu}(\mathbb{R}^d)$, with $\mu \geq 1/2$, can be nicely characterized through the decomposition into the Hermite basis $(\Phi_{\alpha})_{\alpha \in \mathbb{N}^d}$, see e.g. 
\cite[Proposition~1.2]{toft}, \begin{multline*} f \in S_{\mu}^{\mu}(\mathbb{R}^d) \Leftrightarrow f \in L^2(\mathbb{R}^d), \ \exists t_0>0, \ \big\|\big(\langle f,\Phi_{\alpha}\rangle_{L^2}\exp({t_0|\alpha|^{\frac{1}{2\mu}})}\big)_{\alpha \in \mathbb{N}^d}\big\|_{l^2(\mathbb{N}^d)}<+\infty\\ \Leftrightarrow f \in L^2(\mathbb{R}^d), \ \exists t_0>0, \ \|e^{t_0\mathcal{H}^{\frac{1}{2\mu}}}f\|_{L^2(\mathbb{R}^d)}<+\infty, \end{multline*} where $\mathcal{H}=-\Delta_x+|x|^2$ stands for the harmonic oscillator. We end this section by proving two technical lemmas: \begin{lemma}\label{croch} Let $\mu, \nu >0$ such that $\mu+\nu \geq 1$, $C>0$ and $A \geq 1$. If $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfies \begin{equation}\label{gs_estim} \forall \alpha \in \mathbb{N}^d, \forall \beta \in \mathbb{N}^d, \quad \| x^{\alpha} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C A^{|\alpha|+|\beta|} (\alpha!)^{\nu} (\beta!)^{\mu}, \end{equation} then, it satisfies \begin{equation*} \forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^{p} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C (d+1)^{\frac{p}{2}}A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}. \end{equation*} \end{lemma} \begin{proof} Let $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfying the estimates \eqref{gs_estim}. By using Newton formula, we obtain that for all $p \in \mathbb{N}$, $\beta \in \mathbb{N}^d$, \begin{multline}\label{croch_estim} \|\langle x \rangle^p \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)} = \int_{\mathbb{R}^d} \Big(1+\sum_{i=1}^d x_i^2 \Big)^p |\partial_x^{\beta}f(x)|^2 dx \\ = \int_{\mathbb{R}^d} \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} x^{2\tilde{\gamma}} |\partial_x^{\beta}f(x)|^2 dx = \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} \|x^{\tilde{\gamma}} \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)}, \end{multline} where we denote $\tilde{\gamma}=(\gamma_1,...,\gamma_d) \in \mathbb{N}^d$ if $\gamma=(\gamma_1,...,\gamma_{d+1}) \in \mathbb{N}^{d+1}$. Since for all $\alpha \in \mathbb{N}^d$, $\alpha! \leq (|\alpha|)!$, it follows from \eqref{gs_estim} and \eqref{croch_estim} that \begin{align*} \|\langle x \rangle^p \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)} & \leq C^2\sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} A^{2(|\tilde{\gamma}|+|\beta|)} (|\tilde{\gamma}|!)^{2\nu} (|\beta|!)^{2\mu} \\ & \leq C^2 (d+1)^p A^{2(p+|\beta|)} (p!)^{2\nu} (|\beta|!)^{2\mu}, \end{align*} since $$ \sum_{\substack{\gamma \in \mathbb{N}^{d+1}, \\ |\gamma|=p}} \frac{p!}{\gamma!} = (d+1)^p.$$ \end{proof} \begin{lemma}\label{interpolation} Let $\mu, \nu >0$ such that $\mu+\nu \geq 1$, $0 \leq \delta \leq 1$, $C>0$ and $A \geq 1$. If $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfies \begin{equation}\label{int} \forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^p \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C A^{p+|\beta|} (p!)^{\nu} (|\beta|!)^{\mu}, \end{equation} then, it satisfies \begin{equation*} \forall p \in \mathbb{N}, \forall \beta \in \mathbb{N}^d, \quad \|\langle x \rangle^{\delta p} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} \leq C(8^{\nu} e^{\nu}A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu}. \end{equation*} \end{lemma} \begin{proof} Let $f \in S_{\nu}^{\mu}(\mathbb{R}^d)$ satisfying the estimates \eqref{int}. 
It follows from H\"older inequality that for all $r \in (0,+\infty) \setminus \mathbb{N}$ and $\beta \in \mathbb{N}^d$, \begin{multline}\label{holder0} \|\langle x \rangle^r \partial_x^{\beta} f \|^2_{L^2(\mathbb{R}^d)} = \int_{\mathbb{R}^d} \big(\langle x \rangle^{2 \lfloor r \rfloor} |\partial^{\beta}_x f(x)|^2\big)^{\lfloor r \rfloor +1-r} \big(\langle x \rangle^{2(\lfloor r \rfloor+1)} |\partial^{\beta}_x f(x)|^2\big)^{r-\lfloor r \rfloor} dx \\ \leq \|\langle x \rangle^{\lfloor r \rfloor} \partial_x^{\beta} f \|^{2(\lfloor r \rfloor +1- r)}_{L^2(\mathbb{R}^d)} \|\langle x \rangle^{\lfloor r \rfloor+1} \partial_x^{\beta} f \|^{2(r-\lfloor r \rfloor)}_{L^2(\mathbb{R}^d)}, \end{multline} where $\lfloor \cdot \rfloor$ denotes the floor function. Since the above inequality clearly holds for $r \in \mathbb{N}$, we deduce from \eqref{int} and \eqref{holder0} that for all $r \geq 0$ and $\beta \in \mathbb{N}^d$, \begin{align}\label{GS_1} \|\langle x \rangle^r \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} & \leq C A^{r+|\beta|} (\lfloor r \rfloor!)^{(\lfloor r \rfloor +1-r) \nu} \big((\lfloor r \rfloor+1)!\big)^{(r-\lfloor r \rfloor) \nu} (|\beta|!)^{\mu} \\ \nonumber & \leq C A^{r+|\beta|} \big((\lfloor r \rfloor+1)!\big)^{\nu} (|\beta|!)^{\mu} \\ \nonumber & \leq C A^{r+|\beta|} (\lfloor r \rfloor+1)^{(\lfloor r \rfloor+1)\nu} (|\beta|!)^{\mu} \\ \nonumber & \leq C A^{r+|\beta|} (r+1)^{(r+1)\nu} (|\beta|!)^{\mu}. \end{align} It follows from \eqref{GS_1} that for all $p \in \mathbb{N}^*$, $\beta \in \mathbb{N}^d$, \begin{align}\label{puiss_frac} \|\langle x \rangle^{\delta p} \partial_x^{\beta} f \|_{L^2(\mathbb{R}^d)} & \leq C A^{p+|\beta|} (p+1)^{(\delta p+1)\nu} (|\beta|!)^{\mu} \leq C A^{p+|\beta|} (2p)^{(\delta p+1)\nu} (|\beta|!)^{\mu} \\ \notag & \leq C (2^{\nu}A)^{p+|\beta|} p^{\nu} (2p)^{\delta \nu p} (|\beta|!)^{\mu} \leq C (8^{\nu}e^{\nu}A)^{p+|\beta|} (p!)^{\delta \nu} (|\beta|!)^{\mu}, \end{align} since for all positive integer $p \geq 1$, \begin{equation*} p+1 \leq 2p \leq 2^p \quad \text{and} \quad p^p \leq e^p p!. \end{equation*} Notice that from \eqref{int}, since $8^{\nu} e^{\nu} \geq 1$, estimates \eqref{puiss_frac} also hold for $p=0$. This ends the proof of Lemma~\ref{interpolation}. \end{proof} \subsection{Quasi-analytic sequences}\label{qa_section} This section is devoted to recall some properties of quasi-analytic sequences and to state a multidimensional version of the Nazarov-Sodin-Volberg theorem (Corollary~\ref{NSV}). This theorem plays a key role in the proof of Theorem~\ref{general_uncertaintyprinciple}. We begin by a lemma which provides some quasi-analytic sequences and quantitative estimates on the Bang degree $n_{t, \mathcal{M}, r}$ defined in \eqref{Bang}: \begin{lemma}\label{ex_qa_sequence} Let $0<s \leq 1$, $A \geq 1$ and $\mathcal{M}_s= (A^p(p!)^s)_{p \in \mathbb{N}}$. If $0<s<1$, then for all $0<t \leq 1$, $r>0$, \begin{equation} n_{t, \mathcal{M}_s, r} \leq 2^{\frac{1}{1-s}}\big(1-\log t+(Ar)^{\frac{1}{1-s}}\big). \end{equation} If $s=1$, then for all $0< t \leq 1$, $r>0$, \begin{equation}\label{cass1} n_{t, \mathcal{M}_1, r} \leq (1-\log t) e^{Ar} . \end{equation} Moreover, $$\forall 0 < s \leq 1, \forall p \in \mathbb{N}^*, \quad 0 \leq \gamma_{\mathcal{M}_s}(p) \leq s.$$ \end{lemma} \medskip \begin{proof} Let $0<s\leq1$ and $0< t \leq 1$. The sequence $\mathcal{M}_s$ is logarithmically convex. 
By using that the Riemann series $$A^{-1}\sum \frac{1}{p^s} = \sum \frac{A^{p-1}((p-1)!)^s}{A^p(p!)^s}$$ is divergent, we notice that for all $r>0$, $n_{t, \mathcal{M}_s,r} < +\infty$. When $0<s<1$, we have that for all integers $p \geq 1$, \begin{equation*} \frac{1}{1-s}\big((p+1)^{1-s}-p^{1-s}\big)=\int_{p}^{p+1} \frac{1}{x^s} dx \leq \frac{1}{p^s}. \end{equation*} It follows that for all $N \in \mathbb{N}^*$, \begin{equation*} \frac{1}{1-s} \big((N+1)^{1-s}-(-\log t+1)^{1-s}\big) \leq \sum_{-\log t <p \leq N} \frac{1}{p^s}. \end{equation*} By taking $N= n_{t, \mathcal{M}_s, r}$ and since $0< 1-s < 1$, it follows that \begin{equation*} n_{t, \mathcal{M}_s, r} \leq \Big((1-\log t)^{1-s}+Ar\Big)^{\frac{1}{1-s}}. \end{equation*} The result then follows by using the basic estimate \begin{equation*} \forall x, y \geq0, \quad (x+y)^{\frac{1}{1-s}} \leq 2^{\frac{1}{1-s}} \max\Big(x^{\frac{1}{1-s}}, y^{\frac{1}{1-s}}\Big) \leq 2^{\frac{1}{1-s}} \Big(x^{\frac{1}{1-s}}+y^{\frac{1}{1-s}}\Big). \end{equation*} By proceeding in the same manner in the case when $s=1$, we deduce the upper bound \eqref{cass1} thanks to the formula \begin{equation*} \forall p \in \mathbb{N}^*, \quad \log(p+1)-\log p = \int_p^{p+1} \frac{dx}{x} \leq \frac{1}{p}. \end{equation*} By noticing that \begin{equation*} \forall 0<s \leq 1, \forall j \in \mathbb{N}^*, \quad (j+1)^s-j^s= \int_j^{j+1} \frac{s}{x^{1-s}} dx \leq s \frac{1}{j^{1-s}}, \end{equation*} we finally obtain that for all $0<s \leq 1$, $p \in \mathbb{N}^*$, \begin{equation*} \gamma_{\mathcal{M}_s}(p)= \sup_{1 \leq j \leq p} j \Big(\frac{M_{j+1} M_{j-1}}{M_j^2} -1\Big) = \sup_{1\leq j \leq p} j^{1-s} \big((j+1)^s-j^s\big) \leq s< +\infty. \end{equation*} \end{proof} Let us now prove Proposition~\ref{ex_qa_bertrand}. This proof uses the following lemmas established in~\cite{AlphonseMartin}: \begin{lemma}[{\cite[Lemma~4.4]{AlphonseMartin}}]\label{relation} Let $\mathcal M=(M_p)_{p \in \mathbb{N}}$ and $\mathcal M'=(M'_p)_{p\in\mathbb{N}}$ be two sequences of positive real numbers satisfying $$\forall p \in \mathbb{N}, \quad M_p \le M'_p.$$ If $\mathcal M'$ is a quasi-analytic sequence, so is the sequence $\mathcal M$. \end{lemma} \medskip \begin{lemma}[{\cite[Lemma~4.5]{AlphonseMartin}}]\label{linearcomb} Let $\Theta : [0,+\infty)\rightarrow [0,+\infty)$ be a continuous function. If the associated sequence $\mathcal{M}^{\Theta}$ in \eqref{lc_sequence} is quasi-analytic, so is $\mathcal M^{T\Theta+c}$ for all $c \geq 0$ and $T>0$. \end{lemma} \medskip Let $k \geq 1$ be a positive integer, $\frac{1}{2} \leq s \leq 1$ and $\Theta_{k,s} : [0,+\infty) \longrightarrow [0,+\infty)$ be the non-negative function defined in Proposition~\ref{ex_qa_bertrand}. We first notice that the assumption $\text{(H1)}$ clearly holds for $\mathcal M^{\Theta_{k,s}}$. Let us check that the assumption $\text{(H2)}$ holds as well. To that end, we notice that \begin{equation*} \forall t \geq 0, \quad \Theta_{k,s}(t) \leq t+1 \end{equation*} and we deduce that \begin{equation*} \forall p \in \mathbb{N}, \quad M^{\Theta_{k,s}}_p \geq \sup_{t \geq 0} t^p e^{-(t+1)}= e^{-1} \Big( \frac{p}{e}\Big)^p. \end{equation*} It remains to check that $\text{(H3)}_s$ holds. 
Thanks to the morphism property of the logarithm, it is clear that \begin{equation*} \Theta_{k,s}(t) \underset{t \to +\infty}{\sim} s\Theta_{k,1}(t^s) \end{equation*} and this readily implies that there exists a positive constant $C_{k,s}>0$ such that \begin{equation*} \forall t \geq 0, \quad \Theta_{k,s}(t) + C_{k,s} \geq s\Theta_{k,1}\big(t^s\big). \end{equation*} It follows that \begin{align*} \forall p \in \mathbb{N}, \quad \Big(M^{\Theta_{k,s}}_p\Big)^s=e^{sC_{k,s}}\Big(M^{\Theta_{k,s}+C_{k,s}}_p\Big)^s & \leq e^{sC_{k,s}} \sup_{t \geq 0} t^{sp}e^{-s^2 \Theta_{k,1}(t^s)} \\ &= e^{sC_{k,s}} \sup_{t \geq 0} t^{p}e^{-s^2 \Theta_{k,1}(t)} \\ & = e^{sC_{k,s}} M_p^{s^2\Theta_{k,1}}. \end{align*} By using Proposition~\ref{ex_theta1} together with Lemmas \ref{relation} and \ref{linearcomb}, the quasi-analyticity of the sequence $\big((M^{\Theta_{k,s}}_p)^s\big)_{p \in \mathbb{N}}$ follows from the quasi-analyticity of $\mathcal{M}^{\Theta_{k,1}}$. The following result by Nazarov, Sodin and Volberg \cite{NSV} provides a uniform control of the sup-norm of quasi-analytic functions in terms of their values on a measurable subset of positive measure. Originally stated in \cite[Theorem~B]{NSV}, it has been used by Jaye and Mitkovski (\cite{JayeMitkovski}) in the following form: \begin{theorem}[{\cite[Theorem~2.5]{JayeMitkovski}}]\label{JayeNSV} Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$ and $f \in \mathcal{C}_{\mathcal{M}}([0,1]) \setminus \{0\}$. For any interval $I \subset [0,1]$ and measurable subset $\mathcal{J} \subset I$ with $|\mathcal{J}| >0$, \begin{equation*} \sup_{I} |f| \leq \Big(\frac{\Gamma_{\mathcal{M}}(2n_{\|f\|_{L^{\infty}([0,1])}, \mathcal{M},e}) |I|}{|\mathcal{J}|} \Big)^{2n_{\|f\|_{L^{\infty}([0,1])}, \mathcal{M},e}} \sup_{\mathcal{J}} |f|. \end{equation*} \end{theorem} The following corollary is instrumental in this work: \medskip \begin{corollary}\label{NSV} Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$ and $0< s, t \leq 1$. For any interval $I \subset [0,1]$ and measurable subset $\mathcal{J} \subset I$ with $|\mathcal{J}| \geq s >0$, \begin{equation*} \forall f \in \mathcal{C}_{\mathcal{M}}([0,1]) \text{ with } \|f\|_{L^{\infty}([0,1])} \geq t, \quad \sup_{I} |f| \leq \Big(\frac{\Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},e}) |I|}{s} \Big)^{2n_{t, \mathcal{M},e}} \sup_{\mathcal{J}} |f|. \end{equation*} \end{corollary} \medskip Corollary~\ref{NSV} is directly deduced from Theorem~\ref{JayeNSV} by noticing that for all $f \in \mathcal{C}_{\mathcal{M}}([0,1])$ satisfying $\| f \|_{L^{\infty}([0,1])} \geq t$, $$n_{\|f\|_{L^{\infty}([0,1])}, \mathcal{M},e} \leq n_{t, \mathcal{M},e}.$$ In order to use this result in control theory, we need a multidimensional version of Corollary~\ref{NSV}: \medskip \begin{proposition}\label{NSV_multid} Let $d \geq 1$ and $U$ be a non-empty bounded open convex subset of $\mathbb{R}^d$ satisfying $|\partial U|=0$. Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$, $0< \gamma \leq 1$ and $0<t\leq1$.
For any measurable subset $E \subset U$ satisfying $|E| \geq \gamma |U|>0$, we have \begin{multline}\label{NSV_estimate} \forall f \in \mathcal{C}_{\mathcal{M}}(U) \text{ with } \|f\|_{L^{\infty}(U)} \geq t, \\ \sup_{U} |f| \leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{t, \mathcal{M},d \operatorname{diam}(U) e}\big)\Big)^{2n_{t, \mathcal{M},d \operatorname{diam}(U) e}} \sup_{E} |f|. \end{multline} \end{proposition} \medskip \begin{proof} Let $0< \gamma \leq 1$ and $0<t \leq 1$. Let $E$ be a measurable subset of $U$ satisfying $|E| \geq \gamma |U|>0$ and $f \in \mathcal{C}_{\mathcal{M}}(U)$ with $\|f\|_{L^{\infty}(U)} \geq t$. Since $\overline{U}$ is compact and that $f$ can be extended as a continuous map on $\overline{U}$, there exists $x_0 \in \overline{U}$ such that \begin{equation}\label{max} \sup_U |f|= |f(x_0)|. \end{equation} By using spherical coordinates, we have \begin{equation*} |E|= \int_{\mathbb{R}^d} {\mathrm{1~\hspace{-1.4ex}l}}_{E}(x) dx = \int_{\mathbb{R}^d} {\mathrm{1~\hspace{-1.4ex}l}}_{E}(x_0+x) dx = \int_0^{+\infty} \int_{\mathbb{S}^{d-1}} {\mathrm{1~\hspace{-1.4ex}l}}_{E} (x_0 + t \sigma) d\sigma t^{d-1}dt. \end{equation*} Since $\overline{U}$ is convex, we deduce that \begin{align*}\label{m1} 0<|E| & =\int_{\mathbb{S}^{d-1}} \int_{0}^{J_{\overline{U}}(\sigma)} {\mathrm{1~\hspace{-1.4ex}l}}_{E} (x_0 + t \sigma) t^{d-1}dt d\sigma \\ & = \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d\int_{0}^{1} {\mathrm{1~\hspace{-1.4ex}l}}_{E} \big(x_0 + J_{\overline{U}}(\sigma) t \sigma\big) t^{d-1}dt d\sigma \\ & \leq \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d\int_{0}^{1} {\mathrm{1~\hspace{-1.4ex}l}}_{E} \big(x_0 + J_{\overline{U}}(\sigma) t \sigma\big)dt d\sigma \leq \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d |I_{\sigma}| d\sigma, \end{align*} with \begin{equation}\label{jauge} J_{\overline{U}}(\sigma) =\sup\{t\geq0: \, x_0+ t\sigma \in \overline{U} \} \quad \text{and} \quad I_{\sigma} = \Big\{t \in [0,1]: \, x_0 + J_{\overline{U}}(\sigma) t \sigma \in E \Big\}, \end{equation} when $\sigma \in \mathbb{S}^{d-1}$. Notice that $$\forall \sigma \in \mathbb{S}^{d-1}, \quad J_{\overline{U}}(\sigma) <+\infty,$$ since $\overline{U}$ is bounded. It follows that there exists $\sigma_0 \in \mathbb{S}^{d-1}$ such that \begin{equation}\label{m3} |E| \leq |I_{\sigma_0}| \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d d\sigma . \end{equation} By using the assumption that $|\partial U|=0$ and $U$ is an open set, we observe that \begin{equation*} |U|=|\overline{U}|= \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d \int_0^1 t^{d-1}dt d\sigma = \frac{1}{d} \int_{\mathbb{S}^{d-1}} J_{\overline{U}}(\sigma)^d d\sigma. \end{equation*} By using that $|E| \geq \gamma |U|$, the estimate \eqref{m3} and the above formula provide the lower bound \begin{equation} \label{m4} |I_{\sigma_0}| \geq \frac{\gamma}{d}>0. \end{equation} Setting \begin{equation}\label{fonct_aux} \forall t \in [0,1], \quad g(t)=f\big(x_0 + J_{\overline{U}}(\sigma_0) t \sigma_0 \big), \end{equation} we notice that this function is well-defined as $x_0 + J_{\overline{U}}(\sigma_0) t \sigma_0 \in \overline{U}$ for all $t \in [0,1]$. 
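This inclusion is a direct consequence of the convexity of $\overline{U}$: since $\overline{U}$ is closed and bounded, the supremum defining $J_{\overline{U}}(\sigma_0)$ in \eqref{jauge} is attained, so that $x_0+J_{\overline{U}}(\sigma_0)\sigma_0 \in \overline{U}$, and therefore \begin{equation*} \forall t \in [0,1], \quad x_0 + J_{\overline{U}}(\sigma_0)\, t\, \sigma_0 = (1-t)\, x_0 + t\, \big(x_0 + J_{\overline{U}}(\sigma_0)\, \sigma_0\big) \in \overline{U}. \end{equation*}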
We deduce from the fact that $f \in \mathcal{C}_{\mathcal{M}}(U)$, the estimate $$ J_{\overline{U}}(\sigma_0) \leq \operatorname{diam}(\overline{U})=\operatorname{diam}(U),$$ where $\operatorname{diam}(U)$ denotes the Euclidean diameter of $U$, and the multinomial formula that for all $p \in \mathbb{N}$, \begin{multline*} \|g^{(p)}\|_{L^{\infty}([0,1])} \leq \sum_{\substack{\beta \in \mathbb{N}^d, \\ |\beta|=p}} \frac{p!}{\beta !} \|\partial^{\beta}_x f \|_{L^{\infty}(\overline{U})} \big(J_{\overline{U}}(\sigma_0)\big)^p \\ \leq \bigg( \sum_{\substack{\beta \in \mathbb{N}^d, \\ |\beta|=p}} \frac{p!}{\beta !}\bigg) \operatorname{diam}(U)^p M_p = \big(d\operatorname{diam}(U)\big)^p M_p. \end{multline*} We observe that the new sequence $$\mathcal{M}':= \Big(\big(d\operatorname{diam}(U)\big)^p M_p \Big)_{p \in \mathbb{N}},$$ inherits from $\mathcal{M}$ its logarithmical convexity, its quasi-analytic property with the following identity for the associated Bang degrees \begin{equation*} n_{t,\mathcal{M}',e}= n_{t,\mathcal{M},d\operatorname{diam}(U)e}. \end{equation*} The function $g$ belongs to $\mathcal{C}_{\mathcal{M}'}([0,1])$. By using from \eqref{m4} that $|I_{\sigma_0}| >0$ and $\|g\|_{L^{\infty}([0,1])} \geq |g(0)|=\|f\|_{L^{\infty}(U)} \geq t$, we can apply Corollary~\ref{NSV} to obtain that \begin{equation}\label{NSV_1} \sup_{[0,1]} |g| \leq \Big(\frac{\Gamma_{\mathcal{M}'}(2n_{t, \mathcal{M}',e})}{|I_{\sigma_0}|} \Big)^{2n_{t, \mathcal{M}',e}} \sup_{I_{\sigma_0}} |g|. \end{equation} By noticing that $$\Gamma_{\mathcal{M}}=\Gamma_{\mathcal{M}'},$$ we deduce from \eqref{max}, \eqref{jauge}, \eqref{m4}, \eqref{fonct_aux} and \eqref{NSV_1} that \begin{multline*} \sup_U |f| = |f(x_0)| =|g(0)| \leq \sup \limits_{[0,1]} |g| \leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \sup_{I_{\sigma_0}} |g| \\ \leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \sup_E |f|. \end{multline*} This ends the proof of Proposition~\ref{NSV_multid}. \end{proof} In order to use estimates as (\ref{NSV_estimate}) to derive the null-controllability of evolution equations posed in $L^2(\mathbb{R}^d)$, we need the following $L^2$-version of the Nazarov-Sodin-Volberg Theorem: \medskip \begin{proposition}\label{NSV_multid_L2} Let $d \geq 1$ and $U$ be a non-empty bounded open convex subset of $\mathbb{R}^d$. Let $\mathcal{M}=(M_p)_{p \in \mathbb{N}}$ be a logarithmically convex quasi-analytic sequence with $M_0=1$, $0< \gamma \leq 1$ and $0<t \leq 1$. If $E \subset U$ is a measurable subset satisfying $|E| \geq \gamma |U|$, then for all $f \in \mathcal{C}_{\mathcal{M}}(U)$ with $\|f\|_{L^{\infty}(U)} \geq t$, \begin{equation*} \int_U |f(x)|^2 dx \leq \frac{2}{\gamma}\Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}\big(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}\big)\Big)^{4n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \int_E |f(x)|^2 dx. \end{equation*} \end{proposition} \medskip \begin{proof} Let $0<t\leq 1$, $f \in \mathcal{C}_{\mathcal{M}}(U)$ so that $\|f\|_{L^{\infty}(U)} \geq t$ and $E$ be a subset of $U$ satisfying $|E| \geq \gamma |U| >0$. 
Setting \begin{equation*} \tilde{E}= \Big\{x \in E: \ |f(x)|^2 \leq \frac{2}{|E|} \int_E |f(y)|^2 dy\Big\}, \end{equation*} we observe that \begin{equation}\label{m_20} \int_E |f(x)|^2dx \geq \int_{ E \setminus \tilde{E}} |f(x)|^2 dx \geq \frac{2|E \setminus \tilde{E}|}{|E|} \int_E |f(x)|^2dx. \end{equation} Let us prove by contradiction that the integral \begin{equation*} \int_E |f(x)|^2 dx >0, \end{equation*} is positive. If $$\int_E |f(x)|^2 dx =0,$$ then, $$E_{\mathcal{Z}}=\Big\{ x \in E: \quad f(x)=0 \Big\},$$ satisfies $|E_{\mathcal{Z}}|=|E|>0$. We therefore deduce from Proposition~\ref{NSV_multid}, since $\|f\|_{L^{\infty}(U)} \geq t$ and $|E_{\mathcal{Z}}|>0$, that $f=0$ on $U$. This contradicts the assumption $\| f \|_{L^{\infty}(U)} \geq t>0$ and therefore \begin{equation*} \int_E |f(x)|^2 dx >0. \end{equation*} We deduce from \eqref{m_20} that \begin{equation*}\label{m_21} |\tilde{E}| = |E|-|E\setminus \tilde{E}| \geq \frac{|E|}{2} \geq \frac{\gamma}{2} |U| >0. \end{equation*} Applying Proposition~\ref{NSV_multid} provides that \begin{multline*} \sup_U |f| \leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \sup_{\tilde{E}} |f| \\ \leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e})\Big)^{2n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \frac{\sqrt{2}}{\sqrt{|E|}} \Big(\int_E|f(x)|^2dx\Big)^{\frac{1}{2}}. \end{multline*} It follows that \begin{align*} \int_U |f(x)|^2dx \leq |U|\big(\sup_U |f|\big)^2 & \leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{4n_{t, \mathcal{M},d\operatorname{diam}(U) e}}\frac{2|U|}{|E|} \int_E |f(x)|^2dx \\ & \leq \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},d\operatorname{diam}(U) e}) \Big)^{4n_{t, \mathcal{M},d\operatorname{diam}(U) e}} \frac{2}{\gamma}\int_E |f(x)|^2dx. \end{align*} This concludes the proof of Proposition~\ref{NSV_multid_L2}. \end{proof} In \cite{JayeMitkovski}, the authors also establish a multi-dimensional version and a $L^2$-version of the Nazarov-Sodin-Volberg Theorem (Theorem~\ref{JayeNSV}) but the constants obtained there are less explicit than the ones given in Propositions~\ref{NSV_multid} and \ref{NSV_multid_L2}. Quantitative constants will be essential in Section~\ref{null_controllability_results} to set up an adapted Lebeau-Robbiano method in order to derive null-controllability results. We end this section by illustrating the above result with an example: \begin{example}\label{NSV_example} Let $0< s \leq 1$, $A\geq 1$, $R>0$, $d \geq 1$, $0<t \leq 1$, $0< \gamma \leq 1$ and $\mathcal{M}=(A^p (p!)^s)_{p \in \mathbb{N}}$. Let $E \subset B(0,R)$ be a measurable subset of the Euclidean ball centered at $0$ with radius $R$ such that $|E| \geq \gamma |B(0,R)|$. 
There exists a constant $K=K(s, d) \geq 1$ such that for all $f \in \mathcal{C}_{\mathcal{M}}(B(0,R))$ with $\|f\|_{L^{\infty}(B(0,R))} \geq t$, \begin{equation*} \| f \|_{L^{\infty}(B(0,R))} \leq C_{t, A, s, R, \gamma, d} \|f\|_{L^{\infty}(E)} \quad \text{and} \quad \| f \|_{L^2(B(0,R))} \leq C_{t, A, s, R, \gamma, d} \|f\|_{L^2(E)} , \end{equation*} where when $0<s<1$, $$0<C_{t, A, s, R, \gamma, d} \leq \Big(\frac{K}{\gamma}\Big)^{K(1-\log t+ (AR)^{\frac{1}{1-s}})}$$ and when $s=1$, $$0<C_{t, A, 1, R, \gamma, d} \leq \Big(\frac{K}{\gamma}\Big)^{K(1-\log t)e^{KAR}}.$$ \end{example} Let us check that Example~\ref{NSV_example} is a consequence of Propositions~\ref{NSV_multid} and \ref{NSV_multid_L2}, together with Lemma~\ref{ex_qa_sequence}. We deduce from Propositions~\ref{NSV_multid} and \ref{NSV_multid_L2} that for all $f \in \mathcal{C}_{\mathcal{M}}(B(0,R))$ with $\|f\|_{L^{\infty}(B(0,R))} \geq t$, \begin{equation*} \| f \|_{L^{\infty}(B(0,R))} \leq \Big(\frac{d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},2Rd e}) \Big)^{2n_{t, \mathcal{M},2Rd e}} \|f\|_{L^{\infty}(E)} \end{equation*} and \begin{equation*} \| f \|_{L^2(B(0,R))} \leq \sqrt{\frac{2}{\gamma}} \Big(\frac{2d}{\gamma} \Gamma_{\mathcal{M}}(2n_{t, \mathcal{M},2Rd e}) \Big)^{2n_{t, \mathcal{M},2Rd e}} \|f\|_{L^2(E)}. \end{equation*} Furthermore, Lemma~\ref{ex_qa_sequence} provides that $$\forall n \in \mathbb{N}^*, \quad \Gamma_{\mathcal{M}}(n) \leq e^{4+4s}$$ and if $0<s<1$ then, \begin{equation*} n_{t, \mathcal{M}, 2Rde} \leq 2^{\frac{1}{1-s}}\big(1-\log t+(2ARde)^{\frac{1}{1-s}}\big), \end{equation*} whereas if $s=1$, then \begin{equation*} n_{t, \mathcal{M}, 2Rde} \leq (1-\log t) e^{2ARde}. \end{equation*} The result of Example~\ref{NSV_example} therefore follows from the above estimates. \subsection{Slowly varying metrics}{\label{vsm}} This section is devoted to recalling basic facts about slowly varying metrics. We refer the reader to~\cite{Hormander} (Section 1.4) for the proofs of the following results. Let $X$ be an open subset of a finite-dimensional $\mathbb{R}$-vector space $V$ and $\|\cdot\|_x$ a norm on $V$ depending on $x \in X$. The family of norms $(\|\cdot\|_x)_{x \in X}$ is said to define a slowly varying metric in $X$ if there exists a positive constant $C \geq 1$ such that for all $x \in X$ and all $y \in V$ satisfying $\|y-x\|_x <1$, we have $y \in X$ and \begin{equation}{\label{equiv}} \forall v \in V, \quad \frac{1}{C} \|v \|_x \leq \|v\|_y \leq C \|v \|_x. \end{equation} \medskip \begin{lemma}\label{slowmet}\cite[Example~1.4.8]{Hormander}. Let $X$ be an open subset of a finite-dimensional $\mathbb{R}$-vector space $V$ and $d(x)$ a $\frac{1}{2}$-Lipschitz continuous function, positive in $X$ and zero in $V \setminus X$, that is, satisfying \begin{equation*} \forall x,y \in X, \quad |d(x) - d(y) | \leq \frac{1}{2}\|x-y \|, \end{equation*} where $\|\cdot\|$ is a fixed norm on $V$. Then, the family of norms $(\|\cdot\|_x)_{x \in X}$ given by \begin{equation*}\label{family_norms} \|v\|_x= \frac{ \|v\|}{d(x)}, \quad x \in X, v \in V, \end{equation*} defines a slowly varying metric in $X$. \end{lemma} \medskip The proof given in \cite[Example~1.4.8]{Hormander} shows more generally that the result of Lemma~\ref{slowmet} holds true as well when $d$ is a contraction, that is, when there exists $0 \leq k <1$ such that \begin{equation*} \forall x,y \in X, \quad |d(x) - d(y) | \leq k \|x-y \|. \end{equation*} Let us consider the case when $X=V=\mathbb{R}^d$ and $\|\cdot\|$ is the Euclidean norm.
If $0 < \delta \leq 1$ and $0< R <\frac{1}{\delta}$, then the gradient of the function $\rho_\delta(x)=R\left\langle x\right\rangle^{\delta}$ given by $$\forall x \in \mathbb{R}^d, \quad \nabla \rho_\delta(x)=R \delta \frac{x}{\left\langle x\right\rangle^{2-\delta}},$$ satisfies $\| \nabla \rho_\delta\|_{L^{\infty}(\mathbb{R}^d)} \leq R \delta <1$. The mapping $\rho_{\delta}$ is then a positive contraction and Lemma~\ref{slowmet} shows that the family of norms $\|\cdot\|_x= \frac{\|\cdot\|}{R \left\langle x\right\rangle^{\delta}}$ defines a slowly varying metric on $\mathbb{R}^d$. \medskip \begin{theorem}{\label{slowmetric}} \cite[Theorem~1.4.10]{Hormander}. Let $X$ be an open subset of an $\mathbb{R}$-vector space $V$ of finite dimension $d \geq 1$, and let $(\|\cdot\|_x)_{x \in X}$ be a family of norms on $V$ defining a slowly varying metric. Then, there exists a sequence $(x_k)_{k \geq 0} \in X^{\mathbb{N}}$ such that the balls \begin{equation*} B_k=\left\{x \in V:\ \|x-x_k \|_{x_k} <1 \right\} \subset X, \end{equation*} form a covering of $X$, $$X = \bigcup \limits_{k=0}^{+\infty} B_k,$$ and such that the intersection of more than $N=\big(4 C^3+1 \big)^d$ pairwise distinct balls $B_k$ is always empty, where $C \geq 1$ denotes the positive constant appearing in the slowness condition \emph{(\ref{equiv})}. \end{theorem}
\section{Conclusions} We present the first comprehensive~study~to~characterize PPs~in~DL systems written~in~\textsc{TensorFlow}~and \textsc{Keras}, and build the first~benchmark of PPs in DL systems to assess existing~approaches in tackling them. Further,~we develop a static checker \textsc{DeepPerf}\xspace to detect~three types of PPs,~and detect many new PPs in GitHub projects. \section{Data-Availability Statement} All the study data and source code of \textsc{DeepPerf}\xspace are available at \cite{zenodo} to foster future research. \section{Empirical Study Methodology}\label{sec:design} We first introduce the design of our study, and then~present~our~data collection, data labeling, and benchmark construction process. \subsection{Study Design} Our goal is to understand PPs in DL systems. As DL systems~can~be built on top of various DL libraries, we limit our scope to DL systems developed in \textsc{TensorFlow} and \textsc{Keras}. We select \textsc{TensorFlow}~as~it is the most popular DL library on GitHub. We also include \textsc{Keras} because it is built on top of and tightly integrated with \textsc{TensorFlow} 2. We do not distinguish~between \textsc{TensorFlow}~and \textsc{Keras} in our analysis because i) \textsc{Keras} is a frontend and should be used with a backend, and \textsc{TensorFlow} is the most popular backend, and ii) \textsc{TensorFlow}~and \textsc{Keras} are often tightly used together. To achieve this goal, we propose the \todo{four} research questions~as~introduced in Sec.~\ref{sec:intro}. Our \textit{symptom analysis} in \textbf{RQ1} aims~to~understand the observable consequences of PPs. Our findings~from~\textbf{RQ1}~can~characterize the significance of PPs, and provide insights for developing PP detection approaches. Our~\textit{root cause analysis}~in~\textbf{RQ2}~aims~to~characterize the fundamental~reasons for the occurrence of PPs.~Our~findings from \textbf{RQ2} can provide~insights for designing PP localization~approaches. Our~\textit{stage analysis} in \textbf{RQ3} aims to locate~DL~pipeline~stages where~PPs~are~introduced and exposed, and measure the distance~between~the exposing~stage~and the introducing stage. Our findings from \textbf{RQ3} can~locate~the~bug-prone and bug-affecting stages that deserve attention, and reflect~the difficulty of PP localization. Our \textit{approach~assessment} in \textbf{RQ4} aims to quantitatively evaluate existing approaches in tackling PPs. Our findings from \textbf{RQ4}~can~reveal the necessity~of PP detection and localization approaches. Besides, our~findings can also provide hints for developing high-performance DL systems. \subsection{Data Collection} We collected PPs from StackOverflow, a well-known~Q\&A site where developers worldwide discuss software development~problems. Our PP collection process consists of the following three steps. \textbf{Step 1: DL Post Selection.} We first selected posts related to~DL libraries \textsc{TensorFlow} and \textsc{Keras} by checking whether the tags~of~a post contain the keywords ``tensorflow'' and ``keras''. We also filtered out posts that were created before 2018-01-01 to avoid usage~discussions about old versions of DL libraries that are usually~no~longer~used. At the time of selection (i.e., 2021-03-01), we obtained 61,169 DL posts. Then, we excluded posts that did not contain any source~code~in~question descriptions for the ease of our manual analysis.
To focus~on~high-quality posts, we also excluded~posts that did not have an accepted answer or any answer whose votes were greater than two because questioners often commented that the problems had~been~solved,~but forgot to accept the answer. After this step, we had 18,730 DL posts. \textbf{Step 2: PP Post Selection.} Instead of directly using performance-related keywords from the existing studies on PPs in traditional~systems (e.g., \cite{UnderstandingDetectingRealworld2012, zaman2012qualitative, nistor2013discovering, Song2014}), we derived a keyword set in the following way to achieve a wide and comprehensive coverage~of~PP~posts.~We first randomly sampled 100 posts with a tag of ``performance'' from 18,730 posts in Step 1. Then, we manually analyzed these posts~to~extract performance-related keywords, and added them to the set~of~keywords from existing studies. We continued this procedure of random sampling and manual analysis for another two rounds~until~no~new keyword was found; i.e., we sampled 300 posts, which achieved 95\% confidence level and 5.6\% confidence interval. Finally, we used~the derived keyword set to search question descriptions of the 18,730 posts in Step 1, which resulted in 742 candidate PP posts. We provide the full set of derived keywords at our replication site. \textbf{Step 3: PP Identification.} We manually verified the 742 candidate PP posts to reduce noise that was not about PPs~in~DL~systems. For example, some posts might happen to have performance-related keywords, but did not discuss PPs; some posts actually discussed~the accuracy of DL models (because accuracy is often interchangeable with performance in the DL community, and we~align~with~the~SE community where performance is usually referred to as efficiency);~and some posts indeed discussed performance, but did not have a correct answer, which could not be used to understand the characteristics~of PPs. In particular, two of the authors separately inspected~each~candidate PP post to identify PPs. We used Cohen's~Kappa coefficient~to measure the agreement, and it reached 0.813.~A third author was~involved to resolve disagreements. Finally, we identified \todo{224} PPs from \todo{210} PP posts, of which~\todo{14} PP posts contained~two~PPs.~This~scale~is comparable to previous studies on PPs, e.g., 109 PPs in desktop~or~server applications \cite{UnderstandingDetectingRealworld2012} and 70 PPs in mobile applications~\cite{liu2014characterizing}. \subsection{Data Labeling}\label{sec:label} To answer \textbf{RQ1}, \textbf{RQ2} and \textbf{RQ3}, two of the authors labeled each~of~the \todo{224}~PPs with respect to \todo{three} dimensions: symptom, root cause,~and introducing and exposing stages. In particular, they started~with~the classification schema, used for labeling, from the existing general~DL bug~studies \cite{EmpiricalStudyTensorFlow2018, Islam2019, humbatova2019taxonomy, islam2020repairing}~and adapted~it by appending new~ones~and~excluding non-applicable ones. They separately read all post~contents, including the~title, question description, comments, answers,~and~reference~links~mentioned during discussion, to carefully label PPs. Specifically, the symptom of a PP was determined if the questioner explicitly reported the symptom in the post. Otherwise,~it~was conservatively labeled as “\textit{Unknown}”. The root cause of a PP was~inferred from the buggy code version in the question and the fixed~code version (always existed) in the valid answer. 
The introducing stage~of a PP was determined by analyzing where its root cause was located, while the exposing stage of a PP was decided by analyzing~where its symptom~was exhibited. The introducing/exposing~stage~of~a~PP~was labeled as “\textit{Unknown}” if there was no clear indication in the post.~We provide actionable code of the final taxonomies for symptoms, root causes~and stages at our replication site. The Cohen's~Kappa coefficient was 0.906, 0.772, 0.847 and 0.928~for the labeling of symptom, root cause, introducing stage and exposing stage. A third author~was~involved to resolve disagreements.~It~is worth mentioning that the manual effort, involved in our data collection and labeling procedure, required \todo{six} person-months. \subsection{Benchmark Construction}\label{sec:benchmark} To answer \textbf{RQ4}, we constructed a benchmark by reproducing~PPs. We reproduced PPs on a machine with a 16-core~Intel i7-7820X~CPU (3.60GHz), NVIDIA TITAN Xp GPU, 128GB~RAM and 1TB SSD.~Different PPs require different \textsc{TensorFlow} versions which further~require different CUDA Toolkit versions to support GPU. It is tricky~to install~different CUDA versions in the same physical machine.~Thus, we used \textsc{TensorFlow} Docker images. Only~NVIDIA~GPU~Driver was installed in the physical machine, and each docker container had its own CUDA Toolkit version. Finally, \textsc{TensorFlow} Docker~images ranging from version 1.12 to 1.15 and version~2.0~to~2.5~with~GPU support were covered to build our PP benchmark. We decided to sample some PPs from the \todo{224} PPs instead~of~trying to reproduce all \todo{224} PPs~due~to~the~large~effort~in~reproducing PPs from StackOverflow posts.~To~have a good coverage~of~symptoms and root~causes, we sampled~50\%~PPs~from each~set of~PPs~that~were caused by each~inner~category of root causes (see Sec.~\ref{sec:root}) while exhibiting~each~high-level category of symptoms (see Sec.~\ref{sec:symptom}).~For~each sampled PP, we reproduced it with the following three steps. \textbf{Step 1: Decide \textsc{TensorFlow} Version.} If the \textsc{TensorFlow} version was shown in the post, we used it. If not,~we~checked~whether APIs specific to \textsc{TensorFlow} 1.x (e.g., \texttt{tf.Session}) or \textsc{TensorFlow} 2.x (e.g., \texttt{@tf.function}) existed in the post. If yes, we used the latest \textsc{TensorFlow} version~of 1.x (i.e., 1.15) or 2.x (i.e.,~2.5). If not, we used \textsc{TensorFlow} 2.5. \textbf{Step 2: Complete Code Snippets.} As developers tend~to~only~include code fragments that are directly related to questions,~code~snippets in the post are often incomplete. Specifically, if the buggy~(or fixed) version~was executable, we completed the fixed (or buggy)~version based on it. Otherwise, we wrote missing code fragments~for buggy and fixed versions based on question description and answer. \textbf{Step 3: Reproduce Symptoms.} We executed the buggy~and~fixed version to reproduce symptoms reported in the post. We~may~change input data size, model parameters, etc. to reproduce~described~symptoms as our hardware environment might be different from the post. For PPs with out of memory errors, we set the maximum GPU memory limit with \texttt{tf.GPUOptions} such that the out of memory errors could be reproduced even on GPUs with a larger memory. 
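For illustration, a minimal sketch of setting such a memory cap in \textsc{TensorFlow} 1.x is shown below; the concrete fraction value is illustrative and needs to be tuned per PP so that the reported error is triggered.

\begin{lstlisting}
import tensorflow as tf

# Cap the fraction of GPU memory that TensorFlow may allocate, so that
# out of memory errors can be reproduced even on GPUs with more memory.
# The fraction value below is illustrative.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.25)
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)
\end{lstlisting}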
We successfully reproduced \todo{58} PPs from \todo{112} sampled PPs with four person-months effort.~The~main reasons~for~failed reproduction~are:~i) developers provide~very~incomplete code snippets in~the posts, making it difficult for us~to~complete the buggy~or fixed version, and ii) some PPs require specific hardware environments that are different from our machine. To foster future research on PPs in DL systems, we recorded for each PP in our benchmark its ~environment configuration, input data, buggy version,~fixed~version, \todo{performance~change after fixing}, and reproduction steps. \section{Related Work} We discuss the closely related work in understanding and analyzing deep learning bugs and performance problems. \subsection{Deep Learning Bugs}\label{sec:dl-bug} The recent success in applying deep learning techniques to a variety of domains has gained increasing interest in understanding~characteristics of bugs in deep learning systems. Zhang et al.~\cite{EmpiricalStudyTensorFlow2018} collected 175 bugs in deep learning systems developed~in~\textsc{TensorFlow}~from StackOverflow posts and GitHub commits. They~analyzed the symptoms and root causes of these bugs, and explored~the~challenges~and strategies in bug detection and localization. Islam~et~al.~\cite{Islam2019}~and~Humbatova et al.~\cite{humbatova2019taxonomy} expanded the scope of Zhang et al.'s study~to~include more deep learning libraries. Islam et al.~\cite{Islam2019} analyzed~types, root causes, impacts and pipeline stages of 970 bugs in deep learning systems written in \textsc{Caffe}, \textsc{Keras}, \textsc{TensorFlow}, \textsc{Theano}~and~\textsc{Torch}, while Humbatova et al.~\cite{humbatova2019taxonomy} constructed a taxonomy of bugs in deep learning systems that use \textsc{TensorFlow}, \textsc{Keras} and \textsc{PyTorch} based on manual analysis of 375 bugs and interviews with 20 developers. In their follow-up work, Islam et al.~\cite{islam2020repairing} analyzed bug fix patterns. Kim et al.~\cite{kim2021denchmark} built a benchmark of 4,577~bugs from~193~deep learning systems. Differently, Jia et al.~\cite{jia2020empirical} explored the symptoms, root causes and locations of 202 bugs in the \textsc{TensorFlow} library. Apart from the studies that are focused on a general scope~of~bugs in deep learning systems, several recent studies have targeted more specific bugs. Zhang et al.~\cite{zhang2020empirical} studied failures of deep learning~jobs that are running on a remote, shared platform in Microsoft. Chen~et al.~\cite{chen2021empirical} investigated faults related to the deployment of deep learning models to mobile devices. Zhang et al.~\cite{zhang2021autotrainer} summarized five common training problems in deep learning systems, and developed~a tool to automatically detect and repair training problems. Wan et al.~\cite{MLAPI2021} studied API misuses when deep learning systems use cloud AI services, summarized eight misuse patterns, and developed static checkers to automatically detect some of the misuse~patterns. Huang et al.~\cite{dependency_bug} explored dependency bugs across the DL stack. Some of these studies reveal some partial characteristics of performance problems in deep learning systems. For example, Zhang et al.~\cite{EmpiricalStudyTensorFlow2018} and Islam et al.~\cite{Islam2019} respectively recognized low efficiency and hang~as a symptom of deep learning bugs.~Zhang~et~al.~\cite{zhang2020empirical}~identified GPU out of memory as a failure category of deep learning~jobs. 
Chen et al.~\cite{chen2021empirical} recognized memory and speed issues~as~two types~of faults in the inference stage of deployment process.~Wan~et~al.~\cite{MLAPI2021}~derived four performance-related API misuse patterns of cloud~AI~services. \todo{Despite these efforts, there still lacks a comprehensive study~to understand characteristics of performance problems in deep~learning systems, and thus our study aims to bridge this knowledge~gap and raise the awareness of performance problems in DL systems.} Besides, some studies have explored general problems~and~challenges in developing and deploying deep learning systems.~For~example, Guo et al. \cite{guo2019empirical} measured the accuracy and performance differences across four deep learning libraries. Zhang~et~al.~\cite{zhang2019empirical}~identified seven kinds of frequently asked deep learning questions~in~StackOverflow, and analyzed their resolution difficulty and root causes. Han et al.~\cite{han2020programmers} explored the topics that developers discuss when~developing deep learning systems. Chen et al.~\cite{chen2020comprehensive} built a taxonomy~of challenges in deploying deep learning systems to different platforms through manual analysis of StackOverflow posts. Pham~et~al.~\cite{pham2020problems} measured accuracy variance in training deep learning systems.~Cummaudo et al.~\cite{Cummaudo2020} studied pain-points that developers face~when~using cloud services of computer vision by mining StackOverflow~posts. Although these studies are not designed for deep learning~bugs,~they shed light on debugging and bug detection in deep learning~systems. Specifically, Guo et al. \cite{guo2019empirical} reported performance differences~in~terms of time cost and memory consumption when trained~deep~learning models are migrated or quantized to different mobile devices~and~web browsers, and called for performance optimization and testing techniques. Zhang et al.~\cite{zhang2019empirical} summarized performance as a category of frequently asked deep learning questions in StackOverflow,~and recognized that performance questions are the most difficult~to~answer. \todo{Our study is inspired by these studies to systematically characterize performance problems in deep learning systems.} Moreover, some advances have been made to detect deep learning bugs. For example, Zhang et al.~\cite{zhang2020detecting} developed a static analysis~approach to detect numerical bugs in neural architectures based on abstract interpretation. Lagouvardos et al.~\cite{lagouvardos2020static} proposed~a~static~analysis to detect shape incompatibility errors in \textsc{TensorFlow} programs, while Verma and Su~\cite{Verma2020} proposed a dynamic abstract interpreter~to catch such errors. Wardat et al.~\cite{wardat2021deeplocalize} developed~a~dynamic~analysis approach to locate faults in deep neural networks.~In~addition,~great efforts have been devoted to testing deep learning systems (e.g.,~\cite{pei2017deepxplore, tian2018deeptest, ma2018deepgauge, odena2019tensorfuzz, xie2019deephunter, sun2018concolic, kim2019guiding}) and deep learning libraries~(e.g.,~\cite{pham2019cradle, nejadgholi2019study, guo2020audee, wang2020deep, wang2021automatic, zhang2021predoo}) for quality assurance. Zhang et al.~\cite{zhang2020machine} presented~a~comprehensive survey of work in this direction. 
\todo{However, little attention has been received to detecting and testing performance problems in deep learning systems, and our study sheds light on this area.} \subsection{Performance Problems} Many empirical studies have characterized performance~problems~from different perspectives (e.g., root causes, discovery, diagnosis, fixing and reporting) for desktop or server applications~\cite{UnderstandingDetectingRealworld2012, zaman2012qualitative, nistor2013discovering, Song2014, HowArePerformance2020}, highly configurable systems~\cite{EmpiricalStudyPerformance2016, he2020cp}, mobile applications~\cite{liu2014characterizing, linares2015developers}, database-backed web applications~\cite{yang2018not, yang2019view}, and~JavaScript~systems \cite{selakovic2016performance}. They shed light on potential directions on performance analysis (e.g., detection, profiling and testing).~\todo{Our study~is~the first~to understand performance problems in deep learning systems,~which~differs from traditional systems on the programming paradigm.} Advances (e.g., \cite{ammons2004finding, cito2019interactive, han2012performance, curtsinger2015coz}) have been made to identify~general performance problems with dynamic profiles from production~runs. A large body of work has designed pattern-based~methods~to~detect specific performance problems, e.g., reusable/cacheable data (e.g., \cite{bhattacharya2011reuse, Toffola2015, nguyen2013cachetor}), inefficient/redundant loops (e.g., \cite{dhok2016directed, nistor2013toddler, song2017performance, nistor2015caramel}),~and~inefficient collections (e.g., \cite{Jung2011, Shacham2009, Xu2010}). Besides, a lot of techniques~have been proposed for performance testing, i.e., generating test cases~to trigger worst-case performance (e.g., \cite{burnim2009wise, luckow2017symbolic, lemieux2018perffuzz, petsios2017slowfuzz, wei2018singularity}) and find~performance problems (e.g., \cite{grechanik2012automatically, shen2015automating, tizpaz2020detecting}). Another line of work~is performance profiling technique to identify hot paths (e.g., \cite{ball1996efficient, Duesterwald2000, Larus1999}) and fit a performance model to the input size (e.g., \cite{Coppa2012, goldsmith2007measuring, Zaparanuks2012}).~\todo{These performance analysis approaches are designed for traditional systems, and cannot be directly applied to deep learning systems.} Recently, some performance analysis approaches have been proposed for deep learning systems. For example, Qi et al.~\cite{qi2016paleo} modeled and estimated time cost of training deep neural networks,~while~Gao et al.~\cite{gao2020estimating} estimated GPU memory consumption. \todo{Such estimation techniques are useful to find potential performance problems~in~advance.} Liu et al.~\cite{liu2019performance} measured the performance~of~training~deep~learning models on mobile devices, while Ma et al.~\cite{ma2019moving}~compared time cost of JavaScript-based deep learning libraries when running deep learning tasks~in~browsers. \todo{These studies empirically demonstrate the performance differences.} To reduce memory usage of deep~neural networks,~Rhu et al.~\cite{rhu2016vdnn} developed a dynamic memory manager to~virtualize memory~usage, while Wang et al.~\cite{wang2018superneurons} proposed~a~dynamic GPU memory~scheduler. 
To make deep learning models~efficient, Han et al.~\cite{Han2016} used pruning and quantization to compress models, Yan~et~al.~\cite{yan2015performance} used~a~performance model~to estimate the time of distributed model training~and find the optimal distributed configuration, and Menghani~\cite{Menghani2021} presented a survey in this area. \todo{These approaches are system-level performance optimization techniques, while \textsc{DeepPerf}\xspace is at the source code level.}~\todo{Despite these efforts, the characteristics of performance problems in deep learning systems are still unclear, and our study fills this gap.} \section{Empirical Study Results}\label{sec:results} We present the results of the four research questions. \subsection{Symptom Analysis (RQ1)}\label{sec:symptom} \begin{figure}[!t] \centering \includegraphics[scale=0.40]{fig/symptoms.pdf} \vspace{-20pt} \caption{Taxonomy of PP Symptoms}\label{fig:symptoms} \end{figure} \begin{figure*}[!t] \centering \includegraphics[scale=0.51]{fig/causes.pdf} \vspace{-5pt} \caption{Root Causes of PPs in DL Systems}\label{fig:causes} \end{figure*} The taxonomy of PP symptoms is shown in Fig.~\ref{fig:symptoms}. It is organized~into three high-level categories (i.e., \textit{Time}, \textit{Memory} and \textit{Processor}) and~10 inner categories, which are exhibited by \todo{179~(79.9\%)}~of~the \todo{224}~PPs. The remaining \todo{45 (20.1\%)} PPs belong to the \textit{Unknown} category (defined in Sec. \ref{sec:label}). Notice that one PP can exhibit multiple symptoms. \textbf{Time.} This category covers PPs exhibiting high time cost,~which accounts for the largest portion of PPs, i.e., \todo{126 (56.3\%)}. In particular, \todo{99 (44.2\%)} of the PPs manifest \textit{Slow Execution Time} during~the~execution of DL systems, including data~preparation, model building, training, evaluation, hyper parameter tuning, or prediction.~Further, \todo{16~(7.1\%)} of the PPs~exhibit~\textit{Increasing Time Over Time}; e.g.,~the~prediction time became longer and longer as the model ran\footnote{https://stackoverflow.com/questions/60267911/}. Moreover, \todo{6 (2.7\%)}~of~the PPs manifest \textit{Slow~Initialization Time} when DL systems are initialized before execution;~e.g.,~it~spent more than~80~seconds to import \textsc{TensorFlow}\footnote{https://stackoverflow.com/questions/49053434/}. DL systems~can~still work but slowly when exhibiting the above symptoms.~Differently, \todo{8 (3.6\%)} of the~ PPs result in \textit{Program Hang} that makes DL systems~cease~to respond to inputs, which is the most severe symptom. \textbf{Memory.} This category includes PPs consuming RAM/GPU~memory abnormally, accounting for \todo{56 (25.0\%)} of the PPs. Specifically,~\textit{Out of Memory} is the most common as well as the most severe symptom, covering \todo{37 (16.5\%)} of the PPs. \textit{Memory Leak}, manifested~in~\todo{16~(7.1\%)} of the PPs, occurs when~the~memory usage keeps increasing,~and~may finally lead to out of memory errors. Moreover, \textit{Abnormal GPU~Memory Usage}, i.e., either unexpectedly high or low GPU memory usage, is exhibited in \todo{5 (2.2\%)} of the PPs. \textbf{Processor.} This category consists of PPs with abnormal CPU/GPU utilization, which accounts for \todo{16 (7.1\%)} of the PPs. In particular,~\textit{Abnormal GPU Utilization}, i.e., either unexpectedly high~or~low~GPU~utilization, is manifested in \todo{8 (3.6\%)} of the PPs. 
For example, the~GPU~utilization was only around 15\%, while the training time was slow~(each epoch took 40 to 50 seconds)\footnote{https://stackoverflow.com/questions/56795642/}. Moreover, DL systems may \textit{Not Use GPU}, leading to no speedup than when running on CPU, which~occurs in \todo{4 (1.8\%)} of the PPs. In addition, \textit{Abnormal CPU Utilization} is also exhibited~in \todo{4 (1.8\%)} of the PPs. \uline{\textbf{Summary.}} More than half of the PPs slow down DL systems,~and nearly one-third of the PPs consume either extremely low or high~resources like memory and processor. Such severe consequences~of~PPs motivate the significance of PPs. Moreover, only~four of the ten~symptoms, as highlighted in dotted rectangles in Fig.~\ref{fig:symptoms},~are~shared~with~the existing symptom taxonomies for general DL bugs~\cite{EmpiricalStudyTensorFlow2018, Islam2019}.~In other words, symptoms of PPs are quite different from those~of~general~DL bugs, and the existing studies on general DL bugs~only~capture~a~partial set of PPs, and thus PPs deserve a comprehensive investigation. \subsection{Root Cause Analysis (RQ2)}\label{sec:root} The taxonomy of PP root causes is reported in Fig.~\ref{fig:causes}. It is grouped~into five high-level categories (i.e., \textit{API}, \textit{Model}, \textit{Library},~\textit{Data} and \textit{Environment}) and 15 inner categories. \textbf{API.} This category covers PPs caused by library API misuses.~This is the most common category and accounts for \todo{115 (51.3\%)}~of~the~PPs. Specifically, \textsc{TensorFlow}~and~\textsc{Keras} provide efficient APIs~for~achieving high performance, e.g., the \texttt{tf.data}~API for building efficient~input pipelines, and various operation~APIs~for efficient computation. However, developers often write their own~implementation~which~is often less efficient, but do \textit{Not Use} the corresponding \textit{Efficient API}~directly, potentially due to the unfamiliarity with APIs.~This causes~\todo{52 (23.2\%)} of the PPs. For example,~a developer wrote a \texttt{for}~loop~to~perform concatenation on a set of images, which could be efficiently achieved by the \texttt{map} API from \texttt{tf.data.Dataset}\footnote{https://stackoverflow.com/questions/63002205/}. Moreover, \textsc{TensorFlow}~and~\textsc{Keras} provide various batch processing APIs for high performance, e.g., data loading, training, evaluation or prediction~in a batch mode. However, developers might~\textit{Not~Use~a~Batch~API},~and some even implement batch processing by themselves, which causes \todo{18 (8.0\%)} of the PPs. For example, a developer loaded~a large~data~set into GPU memory all at once, causing an out of memory error\footnote{https://stackoverflow.com/questions/59456128/}.~The \texttt{flow\_from\_directory} API in \textsc{Keras} can solve~this PP by dynamically loading a batch of data from the specified directory. Notice that \textit{Not Using Batch API} is a sub-category of \textit{Not Using Efficient API}, and we treat it separately due to its high frequency.~In~the~previous two root causes, developers are mostly unaware of the efficient or batch APIs. However, even when developers are aware of some APIs, they might not fully understand their performance~characteristics, and write \textit{Inefficient API Usage}, which causes \todo{45 (20.1\%)} of the PPs. 
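As an illustration of the \textit{Not Using Batch API} category above, the following minimal sketch loads images in batches with \texttt{flow\_from\_directory} instead of reading the whole data set into memory at once; the directory layout, image size and batch size are illustrative.

\begin{lstlisting}
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stream batches of images from disk instead of loading the
# whole data set into (GPU) memory at once.
datagen = ImageDataGenerator(rescale=1./255)
train_gen = datagen.flow_from_directory(
    'data/train',             # illustrative directory layout
    target_size=(224, 224),   # illustrative image size
    batch_size=32,            # illustrative batch size
    class_mode='categorical')

# model.fit consumes the generator one batch at a time, e.g.,
# model.fit(train_gen, epochs=10)
\end{lstlisting}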
Fig.~\ref{fig:example-1} shows an example of inefficient API usage, where~a developer called the \texttt{map} API before the \texttt{batch} API, and did~not~pass the \texttt{num\_parallel\_calls} argument to \texttt{map}\footnote{https://stackoverflow.com/questions/53424152/}, leading to a long training time. To speed up, \texttt{map} should be called after \texttt{batch} to reduce the number of times the mapped function \texttt{\_batch\_parser} is called, and \texttt{num\_parallel\_calls} should be passed to enable parallelism. \begin{figure}[!t] \flushleft \begin{lstlisting}
- def _parser(record):
+ def _batch_parser(record_batch):
-   parsed = tf.parse_single_example(record, _keys_to_map)
+   parsed = tf.parse_example(record_batch, _keys_to_map)
    return parsed['d'], parsed['s']

  def init_tfrecord_dataset():
    files_train = glob.glob(DIR_TFRECORDS + '*.tfrecord')
    random.shuffle(files_train)
    with tf.name_scope('tfr_iterator'):
      # define data from randomly ordered files
      ds = tf.data.TFRecordDataset(files_train)
      # select elements randomly from the buffer
      ds = ds.shuffle(buffer_size=10000)
-     # map them based on tfrecord format
-     ds = ds.map(_parser)
      # group elements in batch
      ds = ds.batch(BATCH_SIZE, drop_remainder=True)
+     # map batches based on tfrecord format
+     ds = ds.map(_batch_parser, num_parallel_calls=4)
      # iterate infinitely
      ds = ds.repeat()
      # initialize the iterator
      return ds.make_initializable_iterator()
\end{lstlisting} \vspace{-10pt} \caption{Inefficient API Usage Before and After Fix}\label{fig:example-1} \end{figure} \textbf{Model.} This category consists of PPs that are related to DL~models, which is the second most common category, accounting~for~\todo{50 (22.3\%)} of the PPs. In particular, developers may have \textit{Confusion with Computation Graph} because of the unfamiliarity with~the~programming model in \textsc{TensorFlow} and \textsc{Keras}, which~causes~\todo{27~(12.1\%)}~of~the PPs. A typical confusion is with the programming model of \textsc{TensorFlow} 1.x, which is to first build a dataflow computation graph~and~then run it repeatedly with inputs being fed to and outputs being fetched from the graph. Developers often mix the graph construction into~the graph execution. As a result, nodes are repeatedly added to the~graph, and the graph execution becomes slower and slower. An example\footnote{https://stackoverflow.com/questions/53137115/} is shown in Fig.~\ref{fig:example-2}, where Lines 14--16 build the graph and should~be moved out of the execution loop to Lines 6--8. Another~common confusion is with the usage of sessions, which own resources~like~queues and variables. However, developers repeatedly create a session~in~the graph execution loop without reusing it, or forget to close the session. The example in Fig.~\ref{fig:example-2} also forgets to close the session, and the fix~is to use the session as a context manager at Line 11, which~will~automatically close the session. A typical confusion in \textsc{TensorFlow}~2.x~is~with the \texttt{@tf.function} decorator, which accelerates the decorated function by running it in graph mode instead of in eager mode.~However, developers often do not know where to add the decorator~and~how~to design the decorated function to get real speedup. Further, developers design an \textit{Inefficient Model Structure} (e.g., missing convolution and pooling layers before the flatten layer, leading to~too~many~weights) or set \textit{Improper Model Parameter} (e.g., a large kernel size in a convolution layer that causes a long training time).
These two categories~respectively cause \todo{6 (2.7\%)} and \todo{5 (2.2\%)} of the PPs.~Moreover,~developers also set \textit{Improper Hyper Parameter}, e.g., a~large~batch~size~causing an~out of memory error or a small batch size causing a long training time. This category causes \todo{12 (5.4\%)} of the PPs. \begin{figure}[!t] \flushleft \begin{lstlisting}
  inp = tf.constant([[1.,1.]])
  out = tf.constant([[1.,0.]])
  weight = tf.Variable([[1.,1.], [1.,1.]])
  optimizer = tf.train.GradientDescentOptimizer(0.1)

+ y = tf.matmul(inp, weight)
+ loss = (out[0][0] - y[0][0])*2 + (out[0][1] - y[0][1])*2
+ train = optimizer.minimize(loss)

- sess = tf.Session()
+ with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(1000):
-     y = tf.matmul(inp, weight)
-     loss = (out[0][0] - y[0][0])*2 + (out[0][1] - y[0][1])*2
-     sess.run(optimizer.minimize(loss))
+     sess.run(train)
\end{lstlisting} \vspace{-10pt} \caption{Graph Confusion Before and After Fix}\label{fig:example-2} \end{figure} \textbf{Library.} This category refers to PPs caused by problems~of~DL~libraries, accounting for \todo{24 (10.7\%)} of the PPs. Specifically, \todo{15 (6.7\%)}~of the PPs are caused by \textit{Buggy Library Version}; i.e., DL systems~themselves are correctly written, but trigger the PPs in DL libraries.~For example, repeated calls to \texttt{model.predict} (e.g., in a loop) resulted~in a memory leak\footnote{https://stackoverflow.com/questions/60267911/}, due to a memory leak~persisting across multiple versions of \textsc{TensorFlow}\footnote{https://github.com/tensorflow/tensorflow/issues/34579/}. These \todo{15} PPs trigger the PPs~in~\todo{12}~distinct APIs. It is~non-trivial to detect such PPs as we do not~have~a full~list~of APIs with PPs in each DL library version. Moreover,~\textit{Mismatched Library~Version}~causes~\todo{9~(4.0\%)} of the PPs, as version restrictions have to be satisfied~for~full~GPU~usage. For example,~\textsc{TensorFlow} 1.x is not~fully~supported on \textsc{CUDA} 11.1, resulting in a long time to start the training\footnote{https://stackoverflow.com/questions/64462347/}. \textbf{Data.} This category covers PPs related to data processing,~accounting for \todo{21 (9.4\%)} of the PPs. Specifically, developers may~write \textit{Inefficient Data Transmission}, e.g., loading input data over the network during training instead of first copying them to local storage,~or storing weight data on the CPU, which causes the weights to be copied~to~the GPU and the gradients to be copied back to the CPU in each training iteration.~This category accounts for \todo{12 (5.4\%)} of the PPs. Further, developers may implement \textit{Inefficient Data Preprocessing} (e.g., lack of image normalization before converting an image to a tensor), which causes~\todo{3~(1.3\%)} of the PPs. Moreover, \textit{Improper Input~Data} (e.g., improper data~format or size that consumes excessive resources) causes \todo{6 (2.7\%)}~of~the PPs. For example, images with unnecessarily high resolution were loaded, causing an out of memory error\footnote{https://stackoverflow.com/questions/50742757/}.
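To illustrate the last example, a sketch of the kind of fix is shown below for recent \textsc{TensorFlow} 2.x versions: images are decoded and immediately downscaled while building the input pipeline, instead of being kept at full resolution; the file pattern and target resolution are illustrative.

\begin{lstlisting}
import tensorflow as tf

def load_and_downscale(path):
  # Decode each image and immediately resize it to the resolution the
  # model actually needs, rather than keeping full-resolution tensors.
  image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
  return tf.image.resize(image, [224, 224]) / 255.0

files = tf.data.Dataset.list_files('images/*.jpg')  # illustrative file pattern
dataset = (files.map(load_and_downscale, num_parallel_calls=tf.data.AUTOTUNE)
                .batch(32)
                .prefetch(tf.data.AUTOTUNE))
\end{lstlisting}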
\begin{figure*}[!t] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.99\textwidth]{fig/stage-bug.pdf} \vspace{-2pt} \caption{Introduced and Exposed PPs in Each Stage} \label{fig:stage-bug} \end{subfigure} \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=0.99\textwidth]{fig/stage-distance.pdf} \vspace{-2pt} \caption{Distance between Exposing Stage and Introducing Stage} \label{fig:stage-distance} \end{subfigure} \vspace{-10pt} \caption{The Exposing Stage and Introducing Stage of PPs and their Distance} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.99\textwidth]{fig/stage-symptom.pdf} \vspace{-2pt} \caption{Symptoms of the PPs Exposed in Each Stage} \label{fig:stage-symptom} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.99\textwidth]{fig/stage-cause.pdf} \vspace{-2pt} \caption{Root Causes of the PPs Introduced in Each Stage} \label{fig:stage-cause} \end{subfigure} \vspace{-10pt} \caption{Correlation between Symptoms and Exposing Stages and between Root causes and Introducing Stages} \end{figure*} \textbf{Hardware.} This category covers PPs related to hardware issues, accounting for \todo{14 (6.3\%)} of the PPs. Specifically, hardware may only support part of the DL library versions, and hence \textit{Hardware~and~Library Mismatch} causes \todo{4 (1.8\%)} of the PPs. For example, a GPU~with compute capability 6.1 is not supported in \textsc{TensorFlow} 2.3 which requires a GPU with compute capability 7.0\footnote{https://stackoverflow.com/questions/63602858/}. Further,~to~utilize~the full acceleration capability of TPU, DL systems often need specific code design. Thus, \textit{Hardware and Code Mismatch} causes \todo{7 (3.1\%)} of the PPs. For example, to use Colab TPU, a DL model need~to~be~explicitly converted to a TPU compatible version; if not, the training becomes extremely slow\footnote{https://stackoverflow.com/questions/58670563/}. Moreover, hardware~need proper configuration to achieve full utilization, especially for distributed training. Thus, \textit{Improper Configuration} causes \todo{3 (1.3\%)} of the PPs. For example, the \texttt{tf.distribute.Strategy} API should be used to properly configure and allocate multiple GPUs\footnote{https://stackoverflow.com/questions/59074659/}. \uline{\textbf{Summary.}} About half of the PPs are introduced by API~misuses. Model, data and hardware, i.e., the enabling characteristics~of~DL systems, introduce more than one-third of the PPs. DL libraries~also~introduce one-tenth of the PPs. These diverse sources of root causes~increase the complexity of PP localization. Moreover, only seven~of~the 15 root causes, as shown in dotted rectangles in Fig.~\ref{fig:causes},~are~the~same~to the previous root cause taxonomies for general DL bugs~\cite{EmpiricalStudyTensorFlow2018, Islam2019, humbatova2019taxonomy}. These differences owe to the fact that our study is focused~on~the~performance of DL systems, while the previous studies are mainly~concentrated on the functionality of DL systems. 
\subsection{Stage Analysis (RQ3)} Islam et al.~\cite{Islam2019} classify the pipeline of DL systems into six stages,~i.e., \textit{Data Preparation}, \textit{Model Building},~\textit{Training}, \textit{Evaluation}, \textit{Hyper Para-meter Tuning} and \textit{Prediction}, in their study on general DL bugs.~We~consider them as the execution stages of DL systems, and add two new stages, found in our data labeling, before them.~The first~newly~added stage~is~\textit{Environment Setting}, where~DL~environment like libraries and hardware are installed~and~configured. The second one is \textit{Initialization}, where the DL system~is~initialized (e.g., importing libraries and initializing parameters) before starting the execution stages. Fig.~\ref{fig:stage-bug} reports the number of PPs introduced and exposed~in~each stage, where the stage name on the $x$-axis is simplified to the initial letters. Data preparation is the most bug-prone stage, which~is~blamed in \todo{88 (39.3\%)} of the PPs. Environment setting, model building~and~training are the second most bug-prone stages, respectively~causing~about \todo{10\%}~of the PPs. Hence, developers should~pay~more~attention~to~these stages to avoid the introduction of PPs, while~automated PP localization approaches should be specifically developed~for~these~stages. The other stages are less bug-prone, respectively introducing~at~most \todo{5\%} of the PPs. On the other hand, training~and data preparation~are the two most~bug-affecting stages, where \todo{96 (42.9\%)} and \todo{40~(17.9\%)}~of the PPs are respectively exposed. Thus, developers should focus more~efforts on these two stages to optimize their performance, while~automated PP detection approaches should be specifically developed~for these two stages. Around \todo{7\%} of the PPs are respectively exposed~until~the~evaluation and prediction stages. The other stages are less~bug-affecting, respectively exposing~at~most~\todo{3\%}~of~the~PPs. Further, data preparation introduces more PPs than exposed.~This difference is more severe in the other two earlier pipeline~stages, i.e., environment setting and model building. About \todo{62\%} of the~PPs~are introduced in the earlier four pipeline~stages, about \todo{61\%} of which~are exposed in the~later~four pipeline stages. The other way around,~training exposes more PPs than introduced. This difference holds~in~the other two later pipeline stages, i.e., evaluation and prediction.~Nearly \todo{61\%} of the PPs are exposed in the later four pipeline stages.~Thus,~PPs should be proactively detected and localized before severe consequences occur so as to reduce time cost and resource consumption. Besides, for each PP, we measure the distance between~its~exposing stage and introducing stage, which is used as an indicator~of~the~difficulty of PP localization. Intuitively, the~larger~the~distance,~the~more difficult to localize a PP from its symptom~to~root~cause.~As~shown~in Fig.~\ref{fig:stage-distance}, \todo{95 (42.4\%)} of the PPs are exposed~and~introduced~in~the~same stage, while \todo{92 (41.1\%)} of the PPs cannot be exposed in the introducing~stage. Specifically, \todo{68 (30.4\%)} of the PPs are exposed~two~stages later. Extremely, \todo{4 (1.8\%)} of the PPs are exposed seven stages later;~i.e., they are introduced in the first stage but exposed in the last stage. Hence, PP localization is challenging for a considerable amount~of~PPs. 
Moreover, we investigate the symptom distribution~of~the~PPs~exposed in each stage, which is shown in Fig.~\ref{fig:stage-symptom}. This distribution~helps pinpoint the potentially useful performance indicators for detecting PPs exposed in different stages. For example, time-related indicators can be valuable to~detect PPs exposed in initialization,~data~preparation and prediction, because the most common symptom of the~PPs exposed in these stages is under the category of \textit{Time}. Similarly,~we report the root~cause distribution of the PPs introduced in each~stage in Fig.~\ref{fig:stage-cause}. This distribution helps hint the potential technical solutions to localize PPs introduced in different stages.~For~example,~the most frequent root cause of the PPs introduced in most stages is under the category~of~\textit{API}, and hence API~misuse detection could~be developed to localize PPs introduced in these stages. \uline{\textbf{Summary.}} The most bug-prone stages are data preparation,~envi-ronment setting, model building and training, which introduce~nearly \todo{70\%} of the PPs. The most bug-affecting stages are training~and~data preparation, which expose around \todo{60\%} of the PPs. Nearly \todo{40\%}~of~the PPs cannot be exposed in the introducing stage. Moreover, we introduce two new stages that are not covered in the previous~stage~analysis for general DL bugs~\cite{Islam2019}, and investigate the introducing~and~exposing stages that are not distinguished in the previous study~\cite{Islam2019}. \begin{table*} \centering \small \caption{Benchmark PPs across Root Causes and Symptoms and Assessment Results}\label{table:benchmark} \vspace{-10pt} \begin{tabular}{|c|*{4}{C{4.4em}|}c|*{5}{C{2.5em}|}} \hline \multirow{2}{*}{\textbf{Root Cause}} & \multicolumn{4}{c|}{\textbf{Symptom}} & \multirow{2}{*}{\textbf{Total}} & \multicolumn{2}{c|}{\textbf{Profiler}} & \multicolumn{2}{c|}{\textbf{XLA}} & \multicolumn{1}{c|}{\textbf{Doc.}} \\\cline{2-5}\cline{7-11} & \textbf{Time} & \textbf{Memory} & \textbf{Processor} & \textbf{Unknown} & & \textbf{App.} & \textbf{Par.} & \textbf{App.} & \textbf{Par.} & \textbf{App.} \\\hline\hline \rowcolor{Gray} \textbf{API} & \textbf{18 (54)} & \textbf{4 (23)} & \textbf{0 (10)} & \textbf{10 (38)} & \textbf{32 (115)} & \textbf{6} & \textbf{1} & \textbf{17} & \textbf{3} & \textbf{9} \\\hline Not Using Efficient API & 8 (19) & 0 (1) & 0 (5) & 10 (31) & 17 (52) & 1 & 1 & 14 & 3 & 2\\ Not Using Batch API & 1 (6) & 2 (9) & 0 (1) & 0 (2) & 3 (18) & 2 & 0 & 0 & 0 & 0\\ Inefficient API Usage & 9 (29) & 2 (13) & 0 (4) & 0 (5) & 7 (45) & 3 & 0 & 3 & 0 & 7\\\hline\hline \rowcolor{Gray} \textbf{Model} & \textbf{10 (30)} & \textbf{7 (19)} & \textbf{0 (0)} & \textbf{2 (5)} & \textbf{17 (50)} & \textbf{8} & \textbf{1} & \textbf{7} & \textbf{1} & \textbf{2} \\\hline Confusion with Computation Graph & 7 (22) & 2 (4) & 0 (0) & 1 (3) & 9 (27) & 2 & 0 & 6 & 1 & 0\\ Inefficient Model Structure & 0 (2) & 1 (2) & 0 (0) & 1 (2) & 2 (6) & 1 & 0 & 1 & 0 & 0 \\ Improper Model Parameter & 2 (2) & 3 (4) & 0 (0) & 0 (0) & 4 (5) & 4 & 1 & 0 & 0 & 0\\ Improper Hyper Parameter & 1 (4) & 1 (9) & 0 (0) & 0 (0) & 2 (12) & 1 & 0 & 0 & 0 & 2\\\hline\hline \rowcolor{Gray} \textbf{Library} & \textbf{4 (15)} & \textbf{4 (9)} & \textbf{0 (3)} & \textbf{0 (1)} & \textbf{6 (24)} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0}\\\hline Buggy Library Version & 4 (8) & 4 (8) & 0 (0) & 0 (1) & 6 (15) & 0 & 0 & 0 & 0 & 0\\ Mismatched Library Version & 0 (7) & 0 (1) & 0 (3) & 0 (0) & 0 (9) & 
-- & -- & -- & -- & --\\\hline\hline \rowcolor{Gray} \textbf{Data} & \textbf{2 (14)} & \textbf{0 (4)} & \textbf{1 (3)} & \textbf{1 (1)} & \textbf{3 (21)} & \textbf{1} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{1}\\\hline Inefficient Data Transmission & 1 (10) & 0 (1) & 1 (2) & 0 (0) & 1 (12)& 0 & 0 & 0 & 0 & 1\\ Inefficient Data Preprocessing & 0 (1) & 0 (1) & 0 (0) & 1 (1) & 1 (3) & 1 & 0 & 0 & 0 & 0 \\ Improper Data Input & 1 (3) & 0 (2) & 0 (1) & 0 (0) & 1 (6) & 0 & 0 & 0 & 0 & 0 \\\hline\hline \rowcolor{Gray} \textbf{Hardware} & \textbf{0 (13)} & \textbf{0 (1)} & \textbf{0 (0)} & \textbf{0 (0)} & \textbf{0 (14)} & -- & -- & -- & -- & -- \\\hline\hline \rowcolor{Gray} \textbf{Total} & \textbf{34 (126)} & \textbf{15 (56)} & \textbf{1 (16)} & \textbf{13 (45)} & \textbf{58 (224)} & \textbf{15} & \textbf{2} & \textbf{24} & \textbf{4} & \textbf{12} \\\hline \end{tabular} \end{table*} \subsection{Approach Assessment (\textbf{RQ4})}\label{sec:assessment} To~the~best of our knowledge, there is no PP detection and localization approach for DL systems. Notice that performance analysis approaches in \cite{qi2016paleo, gao2020estimating} can estimate performance metrics (i.e., time and GPU memory), but cannot directly pinpoint PPs. Based~on~their estimation, either automated approaches need to be further designed or developer experience need to be relied on to identify PPs. Therefore, we do not use them. Thus, we select and assess~the~following~three~typical performance analysis approaches, which can~be~used by~developers~to~improve the performance of DL systems. \begin{itemize}[leftmargin=*] \item \textsc{TensorFlow} Profiler\footnote{https://tensorflow.org/guide/profiler}: It is built on top of NVIDIA CUDA Profiling Interface to track the performance of \textsc{TensorFlow} models.~It visualizes the time cost and resource consumption of various~\textsc{TensorFlow} operations in the model, finds performance~bottlenecks, and recommends~best~practices to improve performance. Differently, general python profiling tools (e.g., cProfile and memory\_profiler) can only measure performance metrics, but cannot directly pinpoint PPs. Therefore, we do not use them. \item XLA (Accelerated Linear Algebra)\footnote{https://tensorflow.org/xla}: It is a domain-specific compiler that can accelerate \textsc{TensorFlow} models. Each \textsc{TensorFlow} operation is executed~by a precompiled GPU kernel implementation. XLA can compile the~\textsc{TensorFlow} graph into a sequence of computation kernels generated specifically for the given model, and fuse the kernels to avoid memory operations between~the~execution of different kernels to improve the performance~\cite{Li2021DLCompiler}. \item \textsc{TensorFlow} Documentation: It includes~all \textsc{TensorFlow}~API~documentation\footnote{https://www.tensorflow.org/versions/r2.5/api\_docs} and performance guide\footnote{https://www.tensorflow.org/guide} where developers can~find hints about performance problems and optimization solutions. \end{itemize} Generally, we assess each technique in two dimensions: i) whether a technique is applicable to a PP (or whether a PP is in the capability scope of a technique), and ii) whether a technique can~solve~a~PP.~The assessment results on our benchmark (see Sec.~\ref{sec:benchmark}) are shown~in~the last five columns in Table~\ref{table:benchmark}. 
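For reference, a minimal sketch of how the first two techniques are typically enabled in recent \textsc{TensorFlow} 2.x versions is shown below; the log directory, profiled batch range and training step are illustrative.

\begin{lstlisting}
import tensorflow as tf

# Profile a few training batches with the Keras TensorBoard callback;
# the traced batch range is illustrative.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir='logs',
                                              profile_batch=(2, 5))
# model.fit(train_ds, epochs=1, callbacks=[tb_callback])

# Enable XLA JIT compilation globally ...
tf.config.optimizer.set_jit(True)

# ... or for a single function (TensorFlow 2.5+).
@tf.function(jit_compile=True)
def train_step(x, y):
  ...  # illustrative training step
\end{lstlisting}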
The first six columns of Table~\ref{table:benchmark} show the number of reproduced~PPs across root causes~and~symptoms, where the number in parentheses is the total number of PPs. They~cover~all root~causes except for \textit{Mismatched Library Version} and the three~hardware relevant~root~causes. They cover all high-level symptoms,~but achieve a relatively~low coverage of processor relevant symptoms. As shown in the seventh column of Table~\ref{table:benchmark}, \textsc{TensorFlow}~Profiler is only applicable to \todo{15 (25.9\%)} PPs, but is not applicable~to~the~others for two reasons. First,~\textsc{TensorFlow} Profiler requires a \textsc{TensorFlow} version of at least 1.14. However, some PPs are~reproduced with a lower version.~Second,~\textsc{TensorFlow}~Profiler requires~a~full training or evaluation process to track the performance, which is not always available for the PPs in our benchmark. Moreover,~of~these~\todo{15}~PPs, \textsc{TensorFlow}~Profiler fails to finish profiling because of out~of~memory errors for \todo{9} PPs, and does not raise any warning or raises~a~false warning for \todo{4} PPs. Hence, we consider these \todo{13} PPs as not solved~by \textsc{TensorFlow}~Profiler. For the remaining \todo{2} PPs, \textsc{TensorFlow}~Profiler either raises a warning but suggests a fix that achieves a smaller performance improvement than our fixed version in the benchmark, or helps detect the PP by reporting the most time-consuming operation but fails to raise a warning and suggest a fix. Thus,~we~consider these \todo{2} PPs as partially solved by \textsc{TensorFlow}~Profiler,~as~reported in the eighth column of Table~\ref{table:benchmark}. These results demonstrate~that~\textsc{TensorFlow} Profiler has limited capability in tackling PPs. As presented in the ninth column of Table~\ref{table:benchmark}, XLA is applicable~to \todo{24 (41.4\%)} PPs. There are two reasons that XLA is not applicable to the others. First, XLA uses just-in-time (JIT)~compilation.~However, compilation errors might occur for some PPs in our benchmark.~Second, XLA is designed for optimizing the performance~of~\textsc{TensorFlow} models. Thus, it is not applicable to PPs whose root causes~are not related to \textsc{TensorFlow} operations or computation graphs. Furthermore, of these \todo{24} PPs, XLA only improves the performance~for \todo{4} PPs but still achieves a smaller performance improvement than our fixed version in the benchmark.~This~is reasonable because~XLA~is actually not aware of the PPs, but optimizes performance by fusing nodes in computation graphs, while our fixed version reduces the number of nodes in computation graphs. Hence, we consider these~\todo{4} PPs as partially solved by XLA,~as~reported in the tenth column of Table~\ref{table:benchmark}. For the other \todo{20} PPs, XLA does not have~any~performance improvement because of the small number of nodes in computation graphs. Thus, we consider these~\todo{20} PPs as not solved by XLA. These results indicate that PPs in DL systems often cannot be eliminated by the compilation optimization techniques in XLA. As shown in the last column of Table~\ref{table:benchmark}, \textsc{TensorFlow} documentation is only applicable~to \todo{12 (20.7\%)} PPs. We consider \textsc{TensorFlow} documentation as applicable as long as the documentation mentions the optimization solution of a PP. There are two main reasons that \textsc{TensorFlow} documentation is applicable to a small portion of PPs. 
The first is that performance characteristics, especially non-time characteristics, are hardly described~in~API~documentation.~The~second is that many PPs are caused by inefficient usages of multiple APIs, but API documentation is often focused on individual API usages. Although the performance guide covers usages of multiple APIs, it only covers a limited set of APIs such as \texttt{tf.data}. We consider these \todo{12} PPs as solved by \textsc{TensorFlow} documentation. These results show that \textsc{TensorFlow} documentation provides limited support for PPs. \uline{\textbf{Summary.}} Efforts like profiling, compilation optimization~and~documentation have been devoted to optimizing~the~performance of DL systems from different perspectives. However, they provide limited capability in tackling PPs, potentially due to the lack~of~a~comprehensive understanding of PPs in DL systems. \section{Introduction}\label{sec:intro} The advances in deep learning (DL) have attracted~an~increasing~interest in applying DL to various applications~in~both~industry~and academia, e.g., image processing, machine translation,~speech~recognition, medical diagnosis, self-driving cars, and robotics. DL~systems adopt a \textit{data-driven} programming paradigm,~where developers define a desired neural network that learns~the decision logic from~a large amount of training data. In contrast, traditional~systems follow a \textit{logic-based} programming paradigm,~where developers directly~encode the decision logic in the source code. This paradigm shift poses unique challenges to engineering DL systems \cite{amershi2019software, zhang2019empirical, han2020programmers, chen2020comprehensive}. In particular, performance, as an important quality requirement,~is one of the challenges~in~engineering DL systems~\cite{zhang2019empirical}. It has a significant impact on the time and resources (e.g., GPU memory~and~power) required during the process pipeline (e.g., training~and~inference)~of DL systems~\cite{Menghani2021}. For example, the language model GPT-3~costs~millions of dollars for a single training run\footnote{https://lambdalabs.com/blog/demystifying-gpt-3/}. Performance~problems~(PPs) can slow down DL systems, consume excessive resources, hurt~user experience, cause financial loss, or threaten human lives. For example, many users suffered a significant slowdown of their~DL~systems after upgrading from \textsc{TensorFlow} 1.x to \textsc{TensorFlow} 2.x, and hence~decided to switch to \textsc{PyTorch}\footnote{https://stackoverflow.com/questions/58441514/why-is-tensorflow-2-much-slower-than-tensorflow-1}. Moreover, performance questions of DL systems are recognized as the most difficult to answer~among~all questions of DL systems on StackOverflow~\cite{zhang2019empirical}.~Therefore,~it~is~necessary to study the characteristics of PPs in DL systems. A lot of effort has recently been made to extensively investigate the characteristics (e.g., symptoms, root causes, fixes and taxonomy) of general bugs~\cite{EmpiricalStudyTensorFlow2018, Islam2019, humbatova2019taxonomy, islam2020repairing} and specific bugs \cite{zhang2020empirical, chen2021empirical, zhang2021autotrainer, MLAPI2021}~in~DL~systems. However, these studies are not specifically designed for PPs, and thus only capture some partial characteristics~of PPs~in~DL~systems.
In contrast,~PPs~have been widely studied~for traditional systems,~e.g.,~desktop or server applications~\cite{UnderstandingDetectingRealworld2012, zaman2012qualitative, nistor2013discovering, Song2014}, highly~configurable systems~\cite{EmpiricalStudyPerformance2016, he2020cp},~mobile applications~\cite{liu2014characterizing, linares2015developers}, database-backed web applications~\cite{yang2018not, yang2019view}, and JavaScript systems~\cite{selakovic2016performance}. However, PPs in DL systems could be different due to the programming paradigm shift from traditional systems to DL systems. In summary, the characteristics of PPs in DL systems are under-investigated. To bridge this knowledge gap, we present the first comprehensive study to characterize PPs in DL systems developed in \textsc{TensorFlow} and \textsc{Keras} and to assess existing approaches in tackling~PPs.~To~this end, we first collect \todo{224} PPs from \todo{210} StackOverflow~posts,~and~manually investigate the PPs to characterize~their~symptoms (\textbf{RQ1}),~root causes (\textbf{RQ2}), and introducing and exposing~stages (\textbf{RQ3}). Based~on these~\todo{224} PPs, we manually build a benchmark~of~\todo{58} PPs~that cover most symptoms and root causes, and assess the capability~of~a~profiler in detecting PPs, the capability of a compiler in optimizing~PPs, and the capability of documentation in hinting PPs (\textbf{RQ4}). \begin{itemize}[leftmargin=*] \item \textbf{RQ1 Symptom:} what are the symptoms of PPs? \item \textbf{RQ2 Root Cause:} what are the root causes of PPs? \item \textbf{RQ3 Stage:} what are the stages of introducing and exposing~PPs? \item \textbf{RQ4 Assessment:} how capable are existing performance analysis approaches in tackling PPs? \end{itemize} Through the analysis of these research questions, we aim to provide useful findings for developers and researchers. For example, more~than~half of the PPs slow down DL systems,~and nearly one-third of the~PPs~consume either extremely low or high~resources. About half of~the~PPs are introduced by API~misuses, and root causes related to model,~data and hardware introduce more than one-third~of~the PPs. The most~bug-prone stages are data preparation,~environment setting, model building and training. The most bug-affecting stages are training~and~data preparation. \todo{40\%}~of~the PPs are not exposed~in~the~introducing stage. Existing approaches have a very limited capability in tackling PPs. Our findings provide implications for developers and researchers~on developing high-performance DL systems and detecting~and~localizing PPs in DL systems, e.g., performance-aware~techniques~to~recommend DL library APIs and~DL~models, static~techniques to model and estimate time cost~and~resource consumption~of DL systems, and rule-based techniques to detect and localize PPs in DL systems. To demonstrate the usefulness of our findings, we develop~a~static checker, named \textsc{DeepPerf}\xspace, that supports rule-based detection~of~three types of PPs derived from our study. We run \textsc{DeepPerf}\xspace~against~\todo{1,108} GitHub projects with~more~than \todo{100} stars. \textsc{DeepPerf}\xspace has detected~\todo{488} new PPs in \todo{130} of these projects with \todo{15} false positives. \todo{105} of these PPs have already been confirmed by the developers, and \todo{27} of them have already been fixed. The others are still awaiting confirmation. In summary, this paper makes the following contributions.
\begin{itemize}[leftmargin=*] \item We present the first comprehensive study to characterize \todo{224}~PPs in DL systems written in \textsc{TensorFlow} and \textsc{Keras},~and~to~assess existing approaches in tackling a constructed benchmark~of~\todo{58}~PPs. \item We develop a~static checker, named \textsc{DeepPerf}\xspace, to detect three~types of PPs, and detect \todo{488} new PPs in \todo{130} GitHub projects. \end{itemize} \subsection{Application} To demonstrate the usefulness of our findings, we implement~a~rule-based static checker, named \textsc{DeepPerf}\xspace, to detect PPs in DL systems. \textsc{DeepPerf}\xspace is implemented with two static analysis tools,~AST\footnote{https://docs.python.org/3/library/ast.html}~and Jedi\footnote{https://github.com/davidhalter/jedi/}. It currently supports three types of PPs whose detection~rules are manually derived from our empirical study (Sec.~\ref{sec:results}). \textbf{Checker 1: Repeated Node Creation.} Repeatedly adding the same nodes to a computation graph is one of the common types~of~PPs under the root cause category of \textit{Confusion with Computation Graph}. \textsc{DeepPerf}\xspace is designed to detect node creation APIs that are called~in loops with the same argument values; e.g., the two APIs \texttt{tf.matmul} and \texttt{optimizer.minimize} in Fig.~\ref{fig:example-2}. Actually, it is similar to Loop Invariant Computation and Code Motion (LICM) optimization, which has been well studied in classic compilers~\cite{David1994LoopInvariant}. However, Grappler\footnote{https://tensorflow.org/guide/graph\_optimization}, the default graph optimizer in the \textsc{TensorFlow} runtime, cannot eliminate this type of PP although it has a loop optimizer. Notice that this type of PP has been reported in \cite{EmpiricalStudyTensorFlow2018}. However, to the best of our knowledge, its detection has not been investigated in prior studies. To implement the checker, we first extract \textsc{TensorFlow} APIs~that may add computation graph nodes by parsing the \texttt{@tf\_export}~decorators in the source code of \textsc{TensorFlow} Python APIs\footnote{https://github.com/tensorflow/tensorflow/tree/r1.15/tensorflow/python/ops}. Then, we manually review these APIs to exclude APIs that actually~do~not~add nodes (e.g., \texttt{tf.assign}) or APIs that produce different values given~the same inputs (e.g., \texttt{tf.random.uniform}). Finally, we obtain~\todo{356}~APIs. Our checker determines whether these~\todo{356}~APIs are called~with the same argument values among loop~iterations.~To this end, it tracks variables that are changed among loop iterations, including the loop~control variable, variables that are assigned in the loop body but~are~defined outside the loop, and any variables that depend on them.~It~identifies as PPs those APIs that are called without using changed variables as arguments. Our analysis is inter-procedural. If there are functions called~in~the loop, it passes changed variables to callee functions, analyzes changed variables in callee functions, and identifies APIs called without using changed variables as arguments in callee functions. \textbf{Checker 2: Inefficient Order of \texttt{batch} and \texttt{map}.} As shown~in Fig.~\ref{fig:example-1}, calling \texttt{map} before \texttt{batch} is not efficient, and hence \texttt{batch} is suggested to be called before \texttt{map} to reduce the number of times the mapped function is called.
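The following minimal \texttt{tf.data} sketch illustrates the two orderings; the dataset and the parsing function are hypothetical stand-ins for the pattern targeted by this checker, and the mapped function must be vectorized to operate on whole batches.

\begin{verbatim}
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices(tf.range(10000))

def parse(x):
    # Hypothetical preprocessing; it also works on a whole batch
    # because it only uses vectorized TensorFlow ops.
    return tf.cast(x, tf.float32) / 255.0

# Inefficient: parse is invoked once per element.
slow = ds.map(parse).batch(32)

# Suggested: batch first, so parse runs once per batch; setting
# num_parallel_calls also avoids the PP targeted by Checker 3.
fast = ds.batch(32).map(parse, num_parallel_calls=tf.data.AUTOTUNE)
\end{verbatim}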
To detect such API misuse of \texttt{batch}~and \texttt{map}, our checker first identifies \texttt{tf.Dataset} objects, and then analyzes~the call sites to check whether \texttt{batch} is called after \texttt{map}. \textbf{Checker 3: Disabled Parallelism of \texttt{map} and \texttt{interleave}.} As shown in Fig.~\ref{fig:example-1}, calling \texttt{map} without setting its \texttt{num\_parallel\_calls} argument disables parallelism. The same holds for \texttt{interleave}.~To~detect such API misuse of \texttt{map} and \texttt{interleave}, our checker identifies \texttt{tf.Dataset} objects, and analyzes the call sites to check~whether \texttt{map} and \texttt{interleave} are called without setting \texttt{num\_parallel\_calls}. \begin{table} \centering \small \caption{PP Detection Results of \textsc{DeepPerf}\xspace}\label{table:detection} \vspace{-10pt} \begin{tabular}{|c|*{7}{C{2.2em}|}} \hline \multirow{2}{*}{\textbf{Checker}} & \multicolumn{3}{c|}{\textbf{Detected}} & \multicolumn{2}{c|}{\textbf{Confirmed}} & \multicolumn{2}{c|}{\textbf{Fixed}} \\\cline{2-8} & \textbf{PP} & \textbf{Proj.} & \textbf{FP} & \textbf{PP} & \textbf{Proj.} & \textbf{PP} & \textbf{Proj.} \\\hline\hline Checker 1 & 77 & 49 & 15 & 20 & 14 & 7 & 4 \\\hline Checker 2 & 195 & 68 & 0 & 52 & 18 & 0 & 0 \\\hline Checker 3 & 216 & 66 & 0 & 33 & 18 & 20 & 10 \\\hline\hline Total & 488 & 130 & 15 & 105 & 44 & 27 & 13 \\\hline \end{tabular} \end{table} \textbf{Evaluation on Our Benchmark.} In our PP benchmark, four,~two and two PPs belong to the PP types targeted by the three checkers. Three, two and two of them were successfully detected by the three checkers. The only false negative of \textbf{Checker 1} is caused~by~incomplete type inference in Jedi. As reported~in~Sec.~\ref{sec:assessment},~\todo{\textsc{TensorFlow} Profiler is not applicable to these eight PPs. XLA is applicable to four, one and one of them, but fails to solve them. \textsc{TensorFlow} Documentation is applicable to zero, two and~two~of~them by only hinting the solution in API documentation or performance~guide.} \textbf{Evaluation on GitHub Projects.} We used PyGitHub\footnote{https://github.com/PyGithub/PyGithub} to crawl \todo{1,108} GitHub repositories that used~\textsc{TensorFlow} and Python and had at least \todo{100} stars, and ran \textsc{DeepPerf}\xspace~on these repositories.~We~reported detected PPs~as~issues~to~developers, and also manually~reviewed and verified all the detected PPs. As \textsc{TensorFlow} Profiler and XLA are dynamic analysis tools,~it~is difficult for us~to~properly configure and execute \todo{1,108} GitHub~repositories. \textsc{TensorFlow} Documentation only provides guidance but is not a tool.~Thus,~we~did not compare our checkers with~them~in this large-scale evaluation. The results are shown~in Table~\ref{table:detection}, where the statistics~about~detected, confirmed and fixed PPs are reported for each checker. Specifically, \textbf{Checker 1} detected \todo{77} PPs in \todo{49} projects. It detected \todo{15} false positives (i.e., the fourth column in Table~\ref{table:detection}). The reason is that we use lightweight heuristics to decide loop invariants~based on AST and Jedi, but do not use heavyweight data/control flow~analysis, for the scalability of our checker. \todo{20} PPs~in~\todo{14}~projects~have~been confirmed by developers, and \todo{7} of them in \todo{4} projects have been~fixed. \textbf{Checker 2} detected \todo{195} PPs in \todo{68} projects with no false positives.
\todo{52} PPs in \todo{18} projects have been confirmed by developers, but none~of them has been fixed. The reason is that the fix requires extra effort~in vectorizing the mapped function (e.g., the \texttt{\_batch\_parser}~function in Fig.~\ref{fig:example-1}), which is non-trivial. In that sense, automated~vectorization is required in \textsc{TensorFlow}, like auto-vectorization in LLVM\footnote{https://www.llvm.org/docs/Vectorizers.html\#slp-vectorizer}. \textbf{Checker 3} detected \todo{216} PPs in \todo{66} projects with no false positives. \todo{33} PPs in \todo{18} projects have been confirmed by developers, while~\todo{20}~of them in \todo{10} projects have already been fixed. The projects~that~have confirmed/fixed our detected PPs~include popular ones like \textsc{Keras}, \textsc{TensorFlow Agents}, \textsc{TensorFlow Hub} and \textsc{Tensorforce}. Besides, we randomly sampled 5 PPs from the 7 and 20 fixed PPs for \textbf{Checker 1} and \textbf{Checker 3} respectively, and measured the execution time of the buggy and fixed versions. On average, the execution time was improved by 35.6\% and 20.4\% after fixing PPs, respectively. \uline{\textbf{Summary.}} PP is~a~widespread problem~in~DL~systems, and rule-based PP detection is promising. The three checkers in \textsc{DeepPerf}\xspace detected \todo{488} PPs in \todo{130} projects with \todo{15} false positives. \todo{105} PPs in \todo{44} projects have~been~confirmed by developers, while~\todo{27}~of them in \todo{13} projects have~been~fixed by developers. \subsection{Threats} We discuss the threats to our empirical study, PP benchmark,~and~detection approach. Our study investigates PPs in DL systems written with \textsc{TensorFlow} and \textsc{Keras}. Thus, it is not clear whether~our~findings can generalize to DL systems developed with other DL libraries like \textsc{PyTorch}. We believe it deserves a separate study to investigate differences across DL libraries. Further, our study~analyzes~PPs~from StackOverflow posts. However, GitHub is another~valuable~source~of PPs. It is interesting to further explore PPs from GitHub to strengthen our findings, which in fact requires large manual effort, as we spent \todo{six} person-months to analyze \todo{224} PPs. Our PP detection~results~on GitHub projects also indicate the potential applicability~of~our findings. Moreover, our study involves manual analysis of PPs, which may incur biases. To reduce them, two of the authors separately~analyzed PPs and a third author was involved to resolve disagreements. Our benchmark consists of \todo{58} PPs, whose size,~to~be~honest,~is~not very large. However, considering the large human effort involved~in constructing the benchmark, we believe it is acceptable. We are~still continuously enlarging our benchmark via reproducing those non-sampled PPs from the \todo{224} PPs and collecting PPs from GitHub. Our rule-based static checker, \textsc{DeepPerf}\xspace, currently only supports three types of PPs. Here, \textsc{DeepPerf}\xspace is not designed to cover~all~types of PPs, but to demonstrate the potential of rule-based~PP~detection as well as the usefulness of our findings. We plan~to~manually~enrich the detection rules in \textsc{DeepPerf}\xspace to support more PP types. In the long run, we hope to automatically learn~the~detection rules. \section{Implication, Application and Threat} We discuss the implications for developers~and~researchers,~demonstrate one application to PP detection, and discuss the threats.
\subsection{Implications} \textbf{Developers.} Our study reveals~the~common symptoms~of~PPs~that developers could pay attention~to~when testing and running their~DL systems for detecting potential PPs.~ Our study also identifies~the~common root causes of PPs that can~be~useful for developers~to~diagnose, debug or fix PPs. Our study also~captures the most bug-prone~or~bug-affecting stages, where developers could focus more effort to gain the most benefit in avoiding PP introduction or optimizing performance. Furthermore, our findings provide some development suggestions. Developers should carefully read the release notes and API documentation of DL libraries to get familiar with the rich~set~of library APIs and their performance characteristics.~In this way, PPs caused by the most common root cause (i.e., API misuses) might~be reduced. Developers should also be systematically trained~to~have~a comprehensive understanding of the computation graph to build efficient DL models. In this way, PPs caused by~the~second~most~common root cause (i.e., model construction) might be reduced. \textbf{Researchers.} Our findings provide several implications for future research in three directions. First, \textit{intelligent techniques~for~high-performance DL system development} are needed. As developers~are~often unaware of library APIs that are specifically designed for high~performance or unaware of the performance characteristics~of~library~APIs, DL library API recommendation methods should be developed.~To~realize performance-aware API recommendation, a knowledge~graph of DL library APIs should be constructed based on release notes,~API documentation and StackOverflow discussions with a specific focus on modeling performance characteristics of APIs and~performance differences across library versions. To locate and replace~inefficient code snippets written from scratch by developers, semantic~analysis techniques should be developed to determine their semantic~similarity to existing library APIs. Apart from such intelligent~techniques~at the code level, recommendation techniques~should be developed~to automatically suggest DL library versions, efficient~DL~models~and their parameters, and environment configurations. Second, \textit{PP detection techniques} are needed. Half of the symptoms (i.e., \textit{Increasing Time Over Time},~\textit{Program Hang}, \textit{Out of Memory},~\textit{Memory Leak},~and \textit{Not Using GPU}) can be regarded as a credible~oracle~for detecting PPs in DL systems. Therefore, proactive~monitoring and prediction techniques should be developed to detect~PPs~as~early~as possible before these severe symptoms occur. DL systems exhibiting the~other symptoms are not guaranteed~to~contain~PPs~as~it~is~often not clear how much time or resources a DL system should consume to run without~a PP. To solve this performance~oracle~problem, one potential way is to design differential testing techniques~to~compare the performance of DL systems running with different DL~libraries, different DL models, or different hardware configurations.~However, it may incur too much overhead. Hence, another potential way~is~to design static techniques to model and estimate time cost~or~resource consumption of DL systems so that performance bottlenecks~can~be identified before execution.
During our manual analysis, we find that \textsc{TensorFlow} has some built-in mechanisms for detecting PPs and recommending fixes~by~throwing a warning message, e.g., ``\textit{WARNING: tensorflow: multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data~is~recommended}''. However, such warning~messages are only raised in \todo{3} of the PPs, indicating only preliminary support for PP detection, given the diversity of symptoms and root~causes. Hence, built-in mechanisms in DL libraries should be further enhanced to detect PPs and recommend fixes. Third, \textit{PP localization techniques} are needed. Our study reveals~that the exposing stage of a PP is usually not the introducing stage.~For example, the location that throws the error message of an out~of~memory error is usually not the location of the root cause. Therefore,~it~is often challenging to localize PPs. During our manual analysis, we find that developers often use logs as a clue to locate PPs. Hence, automated log analysis techniques should be developed to smartly insert log statements into DL systems and locate potential~PPs~using log traces. Further, as API misuse is the most common root~cause of PPs, mining techniques should be designed to learn frequent API usage sequences and localize potential violations in DL systems.~API usage mining has been widely explored in traditional~systems~\cite{robillard2012automated}, but it is interesting to investigate how such techniques can be applied to PPs in DL systems. From our experience, there are three challenges~in detecting API-related PPs. First, due to the lack of an effective type inference tool for Python, it is hard to precisely extract API usages from Python code. Second, as traditional API usage mining is not aware of the performance characteristics of APIs, it is non-trivial~to~automatically determine the performance difference among mined API sequences. Third, it is difficult to detect PPs caused by \textit{Not Using Efficient APIs}, because the inefficient APIs that developers use are totally different from the efficient APIs that should be used. Last but not least, rule-based techniques should be developed to detect and localize PPs, considering the potentially large number of PPs on StackOverflow and GitHub. The challenge is to automatically derive, rather than manually specify, the rules.
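As a minimal illustration of such rule-based detection, the sketch below uses Python's \texttt{ast} module to flag \texttt{map} and \texttt{interleave} calls that omit \texttt{num\_parallel\_calls}. It is a simplified stand-in rather than the actual implementation of \textsc{DeepPerf}\xspace: it does not resolve whether the receiver is really a \texttt{tf.Dataset}, and may therefore report false positives.

\begin{verbatim}
import ast

class MapParallelismChecker(ast.NodeVisitor):
    """Flag calls such as ds.map(fn) that omit num_parallel_calls."""
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        is_target = (isinstance(node.func, ast.Attribute)
                     and node.func.attr in ("map", "interleave"))
        has_arg = any(kw.arg == "num_parallel_calls"
                      for kw in node.keywords)
        if is_target and not has_arg:
            self.findings.append(node.lineno)
        self.generic_visit(node)

source = "ds = ds.map(parse).batch(32)"  # hypothetical snippet to analyze
checker = MapParallelismChecker()
checker.visit(ast.parse(source))
print(checker.findings)                  # line numbers of suspicious calls
\end{verbatim}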
\section{Introduction} Elliptic and parabolic problems associated to the degenerate operators \begin{equation*} \label{defL} \mathcal L =y^{\alpha_1}\Delta_{x} +y^{\alpha_2}\left(D_{yy}+\frac{c}{y}D_y -\frac{b}{y^2}\right) \quad {\rm and}\quad D_t- \mathcal L \end{equation*} in the half-space $\mathbb{R}^{N+1}_+=\{(x,y): x \in \mathbb{R}^N, y>0\}$ or in $(0, \infty) \times \mathbb{R}^{N+1}_+$ lead quite naturally to the introduction of weighted Sobolev spaces which are anisotropic if $\alpha_1 \neq \alpha_2$. The aim of this paper is to provide the functional analytic properties of these Sobolev spaces needed in \cite{MNS-CompleteDegenerate} and in \cite{MNS-PerturbedBessel} in the $1$-d case, where we prove existence, uniqueness and regularity of elliptic and parabolic problems governed by the operators above. We also refer to \cite{met-calv-negro-spina, MNS-Sharp, MNS-Grad, MNS-Grushin, MNS-Max-Reg, Negro-Spina-Asympt} for the analogous results concerning the $N$-d version of $D_{yy}+\frac{c}{y}D_y -\frac{b}{y^2}$. For $m \in \mathbb{R}$ we consider the measure $y^m dx dy $ in $\mathbb{R}^{N+1}_+$ and write $L^p_m$ for $L^p(\mathbb{R}_+^{N+1}; y^m dx dy)$. Given $p>1$, $\alpha_1 \in \mathbb{R}$, $\alpha_2<2$, we define the Sobolev space \begin{align*} W^{2,p}(\alpha_1,\alpha_2,m)&=\left\{u\in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+):\ u,\ y^{\alpha_1} D_{x_ix_j}u,\ y^\frac{\alpha_1}{2} D_{x_i}u, y^{\alpha_2}D_{yy}u,\ y^{\frac{\alpha_2}{2}}D_{y}u\in L^p_m\right\} \end{align*} which is a Banach space equipped with the norm \begin{align*} \|u\|_{W^{2,p}(\alpha_1,\alpha_2,m)}=&\|u\|_{L^p_m}+\sum_{i,j=1}^n\|y^{\alpha_1} D_{x_ix_j}u\|_{L^p_m}+\sum_{i=1}^n\|y^{\frac{\alpha_1}2} D_{x_i}u\|_{L^p_m}\\ &+\|y^{\alpha_2}D_{yy}u\|_{L^p_m}+\|y^{\frac{\alpha_2}{2}}D_{y}u\|_{L^p_m}. \end{align*} Next we add a Neumann boundary condition for $y=0$ in the form $y^{\alpha_2-1}D_yu\in L^p_m$ and set \begin{align*} W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)=\{u \in W^{2,p}(\alpha_1,\alpha_2,m):\ y^{\alpha_2-1}D_yu\ \in L^p_m\} \end{align*} with the norm $$ \|u\|_{W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)}=\|u\|_{W^{2,p}(\alpha_1,\alpha_2,m)}+\|y^{\alpha_2-1}D_yu\|_{ L^p_m}. $$ We consider also an integral version of the Dirichlet boundary condition, namely a weighted summability requirement for $y^{-2}u$ and introduce $$ W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)=\{u \in W^{2,p}(\alpha_1, \alpha_2, m): y^{\alpha_2-2}u \in L^p_m\} $$ with the norm $$\|u\|_{W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)}=\|u\|_{W^{2,p}(\alpha_1, \alpha_2, m)}+\|y^{\alpha_2-2}u\|_{L^p_m}.$$ Note that $\alpha_1, \alpha_2$ are not assumed to be positive. The restriction $\alpha_2<2$ is not really essential since one can deduce from it the case $\alpha_2>2$, using the change of variables described in the next section. However, we keep it both to simplify the exposition and because $\mathcal L$ is mainly considered for $\alpha_2<2$. No requirement is made for the mixed derivatives $D_{x_iy}u$ to simplify some arguments. However, the weighted integrability of the mixed derivatives is automatic under the condition of Proposition \ref{Sec sob derivata mista}. Sobolev spaces with weights are well-known in the literature, see e.g. \cite{grisvard}, \cite[Chapter 6]{necas}, \cite{Geymonat-Grisvard} and \cite{morel} for the non-anisotropic case. Variants of $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$, usually defined as the closure of compactly supported functions in $W^{2,p}(\alpha_1, \alpha_2, m)$, can be found in the above papers . 
However, we have not been able to find anything about $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$. Let us briefly describe the content of the paper. In Section 2 we show that, by a change of variables, the spaces $W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$ and $W^{2,p}_{\mathcal{N}}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)$, $ \tilde\alpha_1=\frac{\alpha_1}{\beta+1},\quad \tilde\alpha_2=\frac{\alpha_2+2\beta}{\beta+1}, \quad \tilde m=\frac{m-\beta}{\beta+1}$ are isomorphic. This observation simplifies many proofs but requires the full scale of $L^p_m$ spaces, according to the general strategy of \cite{} to study the operator $\mathcal L$. Hardy inequalities and traces for $y=0$ are studied in Section 3. The main properties of the spaces $W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$ are proved in Section 4 together with a density result for smooth functions having zero $y$-derivative in a strip around $y=0$, which is crucial in the study of the operator $\mathcal L$. The space $W^{2,p}_{\mathcal{R}}(\alpha_1,\alpha_2,m)$ is studied in Section 5. \section{A useful change of variables}\label{Section Degenerate} For $k,\beta \in\mathbb{R}$, $\beta\neq -1$ let \begin{align}\label{Gen Kelvin def} T_{k,\beta\,}u(x,y)&:=|\beta+1|^{\frac 1 p}y^ku(x,y^{\beta+1}),\quad (x,y)\in\mathbb{R}^{N+1}_+. \end{align} Observe that $$ T_{k,\beta\,}^{-1}=T_{-\frac{k}{\beta+1},-\frac{\beta}{\beta+1}\,}.$$ \begin{lem}\label{Isometry action der} The following properties hold for $1 \leq p \leq \infty$. \begin{itemize} \item[(i)] $T_{k,\beta\,}$ maps isometrically $L^p_{\tilde m}$ onto $L^p_m$ where $$ \tilde m=\frac{m+kp-\beta}{\beta+1}.$$ \item[(ii)] For every $u\in W^{2,1}_{loc}\left(\mathbb{R}^{N+1}_+\right)$ one has \begin{itemize} \item[1.] $y^\alpha T_{k,\beta\,}u=T_{k,\beta\,}(y^{\frac{\alpha}{\beta+1}}u)$, for any $\alpha\in\mathbb{R}$;\medskip \item [2.] $D_{x_ix_j}(T_{k,\beta\,}u)=T_{k,\beta} \left(D_{x_ix_j} u\right)$, \quad $D_{x_i}(T_{k,\beta\,}u)=T_{k,\beta}\left(D_{x_i} u\right)$;\medskip \item[3.] $D_y T_{k,\beta\,}u=T_{k,\beta\,}\left(ky^{-\frac 1 {\beta+1}}u+(\beta+1)y^{\frac{\beta}{\beta+1}}D_yu\right)$, \\[1ex] $D_{yy} (T_{k,\beta\,} u)=T_{k,\beta\,}\Big((\beta+1)^2y^{\frac{2\beta}{\beta+1}}D_{yy}u+(\beta+1)(2k+\beta)y^{\frac{\beta-1}{\beta+1}}D_y u+k(k-1)y^{-\frac{2}{\beta+1}}u\Big)$.\medskip \item[4.] $D_{xy} T_{k,\beta\,}u=T_{k,\beta\,}\left(ky^{-\frac 1 {\beta+1}}D_xu+(\beta+1)y^{\frac{\beta}{\beta+1}}D_{xy}u\right)$ \end{itemize} \end{itemize} \end{lem}{\sc{Proof.}} The proof of (i) follows after observing the Jacobian of $(x,y)\mapsto (x,y^{\beta+1})$ is $|1+\beta|y^{\beta}$. To prove (ii) one first observes that any $x$-derivatives commutes with $T_{k,\beta}$. Then we compute \begin{align*} D_y T_{k,\beta\,}u(x,y)=&|\beta+1|^{\frac 1 p}y^{k}\left(k\frac {u(x,y^{\beta+1})} y+(\beta+1)y^\beta D_y u(x,y^{\beta+1})\right)\\[1ex] =&T_{k,\beta\,}\left(ky^{-\frac 1 {\beta+1}}u+(\beta+1)y^{\frac{\beta}{\beta+1}}D_yu\right) \end{align*} and similarly \begin{align*} D_{yy} T_{k,\beta\,} u(x,y)=&T_{k,\beta\,}\Big((\beta+1)^2y^{\frac{2\beta}{\beta+1}}D_{yy}u+(\beta+1)(2k+\beta)y^{\frac{\beta-1}{\beta+1}}D_y u+k(k-1)y^{-\frac{2}{\beta+1}}u\Big). \end{align*} \qed Let us specialize the above lemma to \begin{align*} T_{0,\beta}&:L^p_{\tilde m}\to L^p_m,\qquad \tilde m=\frac{m-\beta}{\beta+1} \end{align*} to transform Sobolev spaces with different exponents. \begin{prop} \label{Sobolev eq} Let $p>1$, $m, \alpha_1,\alpha_2\in \mathbb{R}$ with $\alpha_2< 2$. 
Then one has \begin{align*} W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)=T_{0,\beta}\left(W^{2,p}_{\mathcal{N}}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)\right),\qquad \tilde\alpha_1=\frac{\alpha_1}{\beta+1},\quad \tilde\alpha_2=\frac{\alpha_2+2\beta}{\beta+1}. \end{align*} In particular, by choosing $\beta=-\frac{\alpha_2}2$ one has \begin{align*} W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)=T_{0,-\frac {\alpha_2} 2}\left(W^{2,p}_{\mathcal{N}}(\tilde \alpha,0,\tilde m)\right),\qquad \tilde\alpha=\frac{2\alpha_1}{2-\alpha_2},\quad \tilde m=\frac{m+\frac{\alpha_2} 2}{1-\frac{\alpha_2} 2}. \end{align*} \end{prop} {\sc{Proof.}} Given $\tilde u\in W^{2,p}_{\mathcal{N}}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)$ let us set $ u(x,y)=(T_{0,\beta}\tilde u)(x,y)=|\beta+1|^{1/p}\tilde u(x,y^{\beta+1})$. Everything follows from the equalities of Lemma \ref{Isometry action der}, \begin{itemize} \item [(i)] $y^{\alpha_1}D_{x_ix_j}u=T_{0,\beta} \left(y^{\tilde \alpha_1}D_{x_ix_j}\tilde u\right)$, \quad $y^{\frac{\alpha_1}{2}}D_{x_i}u=T_{0,\beta}\left(y^{\frac{\tilde \alpha_1}{2}}D_{x_i}\tilde u\right)$;\smallskip \item [(ii)] $y^{\frac{\alpha_2}2}D_{y}u=(1+\beta)T_{0,\beta}\left(y^{\tilde\alpha_2}D_{y}\tilde u\right)$,\quad $y^{\alpha_2-1}D_{y}u=(1+\beta)T_{0,\beta}\left(y^{\tilde \alpha_2-1}D_{y}\tilde u\right)$;\smallskip \item [(iii)] $y^{\alpha_2}D_{yy}u=(1+\beta)T_{0,\beta}\left[(1+\beta) y^{\tilde \alpha_2}D_{yy}\tilde u+\beta y^{\tilde \alpha_2-1}D_{y}\tilde u\right]$. \end{itemize} \qed \begin{os} Note that in the above proposition it is essential to deal with $W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$. Indeed, in general the isometry $T_{0,\beta}$ does not transform $W^{2,p}(\tilde \alpha_1,\tilde\alpha_2,\tilde m)$ into $W^{2,p}(\alpha_1,\alpha_2,m)$, because of identity (iii) above. \end{os} \section{Hardy inequalities and traces} In this section we prove some weighted Hardy inequalities and investigate trace properties of functions $u$ such that $y^\beta D_yu\in L^p_m$. \medskip The following result is standard but we give a proof to settle ``almost everywhere'' issues. \begin{lem} \label{L1} Let $u\in L^1_{loc}(\mathbb{R}^{N+1}_+)$ be such that $D_yu\in L^1(\mathbb{R}^{N+1}_+)$. Then there exists $v$ such that $v=u$ almost everywhere and $v(\cdot,y)\in L^1_{loc}(\mathbb{R}^N)$ for every $y\geq 0$ and $$v(x,y_2)-v(x,y_1)=\int_{y_1}^{y_2}D_yu(x,s)\ ds$$ for every $0 \leq y_1< y_2 \leq \infty$ and almost every $x\in \mathbb{R}^N$. \end{lem} {\sc Proof.} For a.e. $x \in \mathbb{R}^N$ the function $u(x, \cdot)$ is absolutely continuous and then $$u(x,y_2)-u(x,y_1)=\int_{y_1}^{y_2}D_yu(x,s)\ ds$$ for a.e. $y_1, y_2$. It is therefore sufficient to define $v(x,y)=\int_c^y D_yu(x,s)\, ds+u(x,c)$, if $c$ is chosen in such a way that $u(\cdot,c) \in L^1_{loc}(\mathbb{R}^N)$. \qed Properties of functions $u\in L^p_m$ such that $D_y u \in L^p_m$ have been proved in \cite[Appendix B]{MNS-Caffarelli}. Here we exploit the more general property $y^\beta D_y u \in L^p_m$. \begin{prop} \label{Hardy in core} Let $C:=\left|\frac{m+1}{p}-(1-\beta)\right|^{-1}$. The following properties hold for $ u\in L^1_{loc}(\mathbb{R}^{N+1}_+)$ such that $y^{\beta}D_{y}u\in L^p_m$. \begin{itemize} \item[(i)] If $\frac{m+1}p<1-\beta$ then $D_yu\in L^1\left(Q\times [0,1]\right)$ for any cube $Q$ of $\mathbb{R}^N$; in particular $u$ has a trace $u(\cdot,y)\in L^1_{loc}(\mathbb{R}^N)$ for every $0\leq y\leq 1$. Moreover, setting $u_{0}(x)=\lim_{y\to0}u(x,y)$ one has \begin{align*} \|y^{\beta-1}(u-u_0)\|_{L^p_m}\leq C \|y^{\beta}D_{y}u\|_{L^p_m}.
\end{align*} If moreover $u\in L^p_m$ then $u(\cdot,y)\in L^p(\mathbb{R}^N)$ for every $0\leq y\leq 1$. \item[(ii)] If $\frac{m+1}p>1-\beta$ then $D_yu\in L^1\left(Q\times [1,\infty[\right)$ for any cube $Q$ of $\mathbb{R}^N$; in particular $u$ has a finite trace $u_{\infty}(x)=\lim_{y\to\infty}u(x,y)\in L^1_{loc}\left(\mathbb{R}^N\right)$ and \begin{align} \|y^{\beta-1}(u-u_{\infty})\|_{L^p_m}\leq C \|y^{\beta}D_{y}u\|_{L^p_m}. \end{align} If moreover $u\in L^p_m$ then $u_\infty\in L^p(\mathbb{R}^N)$ and $u_\infty=0$ if $m \geq -1$. \end{itemize} \end{prop} {\sc{Proof.}} To prove (i) let $f(x,y):=y^{\beta}D_{y}u(x,y)$. If $Q$ is a cube of $\mathbb{R}^N$ then since $\frac{m+1}p<1-\beta$ one has \begin{align*} \int_{Q\times [0,1]}|D_yu|dxdy&=\int_{Q\times [0,1]}|D_yu|y^{\beta}y^{-\beta-m}y^mdxdy\\[1ex] &\leq \|y^{\beta}D_yu\|_{L^p_m}\left(\int_{0}^1 y^{-(\beta+m)p'+m}\,dy\right)^{\frac 1{p'}}|Q|^{\frac 1{p'}} =C(Q,\beta,p)\|y^{\beta}D_yu\|_{L^p_m}. \end{align*} In particular by Lemma \ref{L1}, $u$ has a finite trace $u(\cdot,y)\in L^1_{loc}\left(\mathbb{R}^N\right)$ for every $0\leq y\leq 1$. Setting $u_0(x)=u(x,0)=\lim_{y \to 0}u(x,y)$ we can write \begin{align*} y^{\beta-1}\left(u(x,y)-u_{0}(x)\right)=y^{\beta-1}\int_{0}^y f(x,s)s^{-\beta}\,ds:=(H_1f)(y). \end{align*} By \cite[Lemma 10.3, (i)]{MNS-Caffarelli}, the operator $H_1$ is bounded on $L^p_m(\mathbb{R}_+)$ when $\frac{m+1}p<1-\beta$, hence \begin{align*} \|y^{\beta-1}\left (u(x,\cdot)-u_0 (x) \right)\|_{L^p_m(\mathbb{R}_+)}\leq C \|y^{\beta}D_{y}u(x,\cdot)\|_{L^p_m(\mathbb{R}_+)}. \end{align*} Claim (i) then follows by raising to the power $p$ and integrating with respect to $x$. To prove that $u(\cdot,y)\in L^p(\mathbb{R}^N)$ we proceed analogously: since $u\in L^p_m$ then $u(\cdot,y)\in L^p(\mathbb{R}^N)$ for a.e. $y\in [0,1]$. Without any loss of generality we suppose $u(\cdot,1)\in L^p(\mathbb{R}^N)$ and we write for any $y_0\in [0,1]$ \begin{equation*} u(x,y_0)=u(x,1)-\int_{y_0}^ 1D_y u(x,s)\ ds=u(x,1)-\int_{y_0}^1 s^\beta D_y u(x,s)s^{\frac m p}s^{-\beta-\frac m p}\ ds. \end{equation*} Then using H\"older inequality \begin{align*} |u(x,y_0)|&\leq |u(x,1)|+\left(\int_{y_0}^1 \left|s^\beta D_y u(x,s)\right|^p s^{m}\ ds\right)^{\frac 1 p}\left(\int_{y_0}^1 s^{(-\beta-\frac mp)p'}\ ds\right)^{\frac 1 {p'}}\\[1ex] &\leq |u(x,1)|+C\left(\int_{0}^1 \left|s^\beta D_y u(x,s)\right|^p s^{m}\ ds\right)^{\frac 1 p}. \end{align*} Raising to the power $p$ and integrating with respect to $x$ we obtain \begin{align*} \|u(\cdot,y_0)\|_{L^{p}(\mathbb{R}^N)}\leq C\left(\|u(\cdot,1)\|_{L^p(\mathbb{R}^N)}+\left\|y^\beta D_yu\right\|_{L^p_m}\right). \end{align*} The proof of (ii) is similar, writing \begin{align*} y^{\beta-1}\left(u(x,y)-u_{\infty}(x)\right)=-y^{\beta-1}\int_{y}^\infty f(x,s)s^{-\beta}\,ds:=-(H_2f)(y) \end{align*} and applying \cite[Lemma 10.3, (ii)]{MNS-Caffarelli}. If $u \in L^p_m$ and $m \geq -1$, then $|u(x, \cdot)|^p$ is not summable with respect to $y^m\, dy$ for every $x$ where $u_\infty (x) \neq 0$, hence $u_\infty=0$ a.e. \qed \medskip \medskip In the next lemma we show that $u$ has a logarithmic singularity for $y\to 0, \infty$, when $\frac{m+1}{p}=1-\beta $. \begin{lem} \label{int-uMaggiore} If $\frac{m+1}{p}=1-\beta $ and $u, y^{\beta}D_{y}u\in L^p_m$, then \begin{equation} \label{behaviour} \left(\int_{\mathbb{R}^N}|u(x,y)|^p\, dx\right)^{\frac 1 p}\leq \|u(\cdot,1)\|_{L^p(\mathbb{R}^N)}+|\log y|^{\frac{1}{p'}}\|y^{\beta} D_yu\|_{L^p_m}.
\end{equation} \end{lem} {\sc Proof.} Let $\frac{m+1}{p}=1-\beta$ and set $f=y^\beta D_y\in L^p_m$. Then for $y\in (0,1)$ one has \begin{align*} u(x,y)&=u(x,1)-\int_y^1 D_y u(x,s)\ ds=u(x,1)-\int_y^1s^{-\beta}f(x,s)\ ds\\[1ex] &=u(x,1)-\int_y^1s^{-\beta-m} f(x,s)s^m\ ds. \end{align*} Therefore, since $(-\beta-m)p'+m=-1$, H\"older inequality yields \begin{align*} |u(x,y)|&\leq |u(x,1)|+\left(\int_y^1 s^{(-\beta-m)p'}s^m\ ds\right)^\frac{1}{p'}\left(\int_y^1 |f(x,s)|^ps^m\ ds\right)^\frac{1}{p} \\[1ex] &\leq | u(x,1)|+ |\log y|^\frac{1}{p'}\|f(x,\cdot)\|_{L^p\left((0,1),y^mdy\right)}. \end{align*} The inequality for $y>1$ is similar. Since $u\in L^p_m$ then, as in Proposition \ref{Hardy in core}, we can suppose $u(\cdot,1)\in L^p(\mathbb{R}^N)$ and raising to the power $p$ and integrating with respect to $x$ we conclude the proof. \qed We also need some elementary interpolative inequalities; the first generalizes \cite[Lemma 4.3]{met-soba-spi-Rellich} (see also \cite{met-negro-soba-spina}). \begin{lem} \label{inter} For $m, \beta \in \mathbb{R}$, $1<p<\infty$ there exist $C>0, \varepsilon_0>0$ such that for every $u \in W^{2,p}_{loc}((0, \infty))$, $0<\varepsilon <\varepsilon_0$, $$ \|y^{\beta-1} u'\|_{L^p_m (\mathbb{R}_+)} \leq C \left (\varepsilon \|y^\beta u''\|_{L^p_m(\mathbb{R}_+)} +\frac{1}{\varepsilon} \|y^{\beta-2}u\|_{L^p_m(\mathbb{R}_+)} \right ). $$ \end{lem} {\sc Proof. } Changing $\beta$ we may assume that $m=0$. We use the elementary inequality \begin{equation} \label{i1} \int_a^b |u'(y)|^p\, dy \leq C\left (\varepsilon^p (b-a)^p \int_a^b |u''(y)|^p\, dy+\frac{1}{\varepsilon^p (b-a)^p}\int_a^b |u(y)|^p\, dy\right ) \end{equation} for $\varepsilon \leq \varepsilon_0$, where $\varepsilon_0, C$ are the same as for the unit interval (this follows by scaling). We apply this inequality to each interval $I_n=[2^n, 2^{n+1}[$, $n \in \mathbb Z$ and multiply by $2^{n(\beta-1)p}$ thus obtaining since $y \approx 2^n$ in $I_n$ $$ \int_{I_n}y^{(\beta-1)p} |u'(y)|^p\, dy \leq \tilde C\left (\varepsilon^p \int_ {I_n}y^{\beta p}|u''(y)|^p\, dy+\frac{1}{\varepsilon^p}\int_{I_n} y^{(\beta-2)p}|u(y)|^p\, dy\right ). $$ The thesis follows summing over $n$. \qed \begin{lem} \label{inter1} For $m, \beta <2$, $1<p<\infty$ there exist $C>0, \varepsilon_0>0$ such that for every $u \in W^{2,p}_{loc}((1, \infty))$, $0<\varepsilon <\varepsilon_0$, $$ \|y^{\frac{\beta}{2}} u'\|_{L^p_m((1, \infty))} \leq C \left (\varepsilon \|y^\beta u''\|_{L^p_m((1, \infty))} +\frac{1}{\varepsilon} \|u\|_{L^p_m((1, \infty))} \right ). $$ \end{lem} {\sc Proof. }We use \eqref{i1} in $(a_n, a_{n+1})$ where $a_n=n^{1+\frac{\gamma}{2}}$, so that $a_{n+1}-a_n \approx n^{\frac{\gamma}{2}}$. We multiply both sides by $n^{(1+\frac{\gamma}{2})(m+\frac{\beta p}{2})} \approx y^{m+\frac{\beta p}{2}}$ in $(a_n, a_{n+1})$ and sum over $n$. Choosing $\gamma \geq 0$ in such a way that $\beta=\frac{2\gamma}{2+\gamma}$, the thesis follows. \qed \section{The space $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$} \label{section sobolev} Let $p>1$, $m, \alpha_1 \in \mathbb{R}$, $\alpha_2<2$. We recall that \begin{align*} W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)=\{u \in W^{2,p}(\alpha_1,\alpha_2,m):\ y^{\alpha_2-1}D_yu\ \in L^p_m\} \end{align*} with the norm $$ \|u\|_{W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)}=\|u\|_{W^{2,p}(\alpha_1,\alpha_2,m)}+\|y^{\alpha_2-1}D_yu\|_{ L^p_m}. $$ We have made the choice not to include the mixed derivatives in the definition of $W^{2,p}_{\mathcal{N}}\left(\alpha_1,\alpha_2,m\right)$ to simplify some arguments. 
However the following result holds in a range of parameters which is sufficient for the study of the operator $\mathcal L$. \begin{prop}\label{Sec sob derivata mista} If $\alpha_2-\alpha_1<2$ and $\alpha_1^{-} <\frac{m+1}p$ then there exists $C>0$ such that $$\|y^\frac{\alpha_1+\alpha_2}{2} D_{y}\nabla_x u \|_{ L^p_m} \leq C \|u\|_{W^{2,p}_{\cal N}(\alpha_1, \alpha_2, m)}$$ for every $u \in W^{2,p}_{\mathcal{N}}\left(\alpha_1,\alpha_2,m\right)$. \end{prop} {\sc Proof.} This follows from \cite[Theorem 7.1]{MNS-CompleteDegenerate}, choosing $c$ sufficiently large therein, so that $\alpha_1^{-} <\frac{m+1}p<c+1-\alpha_2$. \qed \begin{os}\label{Os Sob 1-d} With obvious changes we may consider also the analogous Sobolev spaces on $\mathbb{R}_+$, $W^{2,p}(\alpha_2,m)$ and $W^{2,p}_{\cal N}(\alpha_2, m)$. For example we have $$W^{2,p}_{\mathcal N}(\alpha,m)=\left\{u\in W^{2,p}_{loc}(\mathbb{R}_+):\ u,\ y^{\alpha}D_{yy}u,\ y^{\frac{\alpha}{2}}D_{y}u,\ y^{\alpha-1}D_{y}u\in L^p_m\right\}.$$ For brevity sake, we consider in what follows, only the Sobolev spaces on $\mathbb{R}^{N+1}_+$ but all the results of this section will be valid also in $\mathbb{R}_+$ changing the condition $\alpha_1^{-} <\frac{m+1}p$ (which appears in Sections \ref{denso}, \ref{Sec sob min domain}) to $0<\frac{m+1}p$. \end{os} We clarify in which sense the condition $y^{\alpha_2-1}D_y u \in L^p_m$ is a Neumann boundary condition. \begin{prop} \label{neumann} The following assertions hold. \begin{itemize} \item[(i)] If $\frac{m+1}{p} >1-\alpha_2$, then $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=W^{2,p}(\alpha_1, \alpha_2, m)$. \item[(ii)] If $\frac{m+1}{p} <1-\alpha_2$, then $$W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=\{u \in W^{2,p}(\alpha_1, \alpha_2, m): \lim_{y \to 0}D_yu(x,y)=0\ {\rm for\ a.e.\ x \in \mathbb{R}^N }\}.$$ \end{itemize} In both cases (i) and (ii), the norm of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ is equivalent to that of $W^{2,p}(\alpha_1, \alpha_2, m)$. \end{prop} {\sc Proof. } If $\frac{m+1}{p} >1-\alpha_2$ and $u \in W^{2,p}(\alpha_1, \alpha_2, m)$, we apply Proposition \ref{Hardy in core} (ii) to $D_y u$ and obtain that $\lim_{y \to \infty}D_yu(x,y)=g(x)$ exists. At the points where $g(x) \neq 0$, $u(x, \cdot)$ has at least a linear growth with respect to $y$ and hence $\int_0^\infty |u(x,y)|^p y^m\, dy=\infty$ (since $(m+1)/p>1-\alpha_2 >-1$). Then $g=0$ a.e. and Proposition \ref{Hardy in core}(ii) again gives $\|y^{\alpha_2 -1}D_yu\|_{L^p_m} \leq C\|y^{\alpha_2}D_{yy}u\|_{L^p_m}$. If $\frac{m+1}{p} <1-\alpha_2$ we apply Proposition \ref{Hardy in core} (i) to $D_y u$ to deduce that $\lim_{y \to 0}D_yu(x,y)=h(x)$ exists. If $h=0$, then Hardy inequality yields $y^{\alpha_2-1}D_y u \in L^p_m$. On the other hand, $y^{\alpha_2-1}D_y u \in L^p_m$ implies $h=0$, since $y^{p(\alpha_2-1)}$ is not integrable with respect to the weight $y^m$. \qed \subsection{An alternative description of $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$} We show an alternative description of $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$, adapted to the operator $D_{yy}+cy^{-1}D_y$. \begin{lem}\label{Lem Trace Dy in W} Let $c\in\mathbb{R}$ and let us suppose that $\frac{m+1}{p}<c+1-\alpha_2$. If $u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+)$ and $ u,\ y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m$, then the following properties hold. 
\begin{itemize} \item[(i)] The function $v=y^cD_y u$ satisfies $v,D_yv\in L^1_{loc} \left(\mathbb{R}^{N}\times[0,\infty)\right)$ and therefore has a trace $v_0(x):=\lim_{y\to 0}y^c D_yu(x,y)\in L^1_{loc}(\mathbb{R}^N)$ at $y=0$. \item[(ii)] $v_0=0$ if and only if $y^{\alpha_2-1}D_y u \in L^p_m(\mathbb{R}^N \times [0,1])$. In this case \begin{align*} \left\|y^{\alpha_2-1}D_yu\right \|_{L^p_m}\leq C \left\|y^{\alpha_2}\left (D_{yy}u+cy^{-1}D_yu\right)\right\|_{L^p_m} \end{align*} with $C=\left(c+1-\alpha_2-\frac{m+1}{p}\right)^{-1}>0$. \item[(iii)] If the stronger assumption $0<\frac{m+1}p\leq c-1$ holds then $v_0=0$ and $y^{\alpha_2-1}D_y u \in L^p_m(\mathbb{R}^N \times [0,1])$. \end{itemize} \end{lem} {\sc{Proof.}} Let $v:=y^{c}D_yu$ and $$f:=y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}{y}\right)=y^{\alpha_2-c}D_yv\in L^p_m.$$ Claim (i) is then a consequence of Proposition \ref{Hardy in core} (i) with $\beta=\alpha_2-c$. To prove (ii) we set $v_0(x)=\left(y^cD_yu\right)(x,0)$. Then one has $g:=y^{\alpha_2-c-1}(v-v_0)\in L^p_m$ by Proposition \ref{Hardy in core} (i) again. Then $$y^{\alpha_2-1}D_yu=g+y^{\alpha_2-1-c}v_0$$ is $L^p_m$-integrable near $y=0$ if and only if $v_0=0$, since $\frac{m+1}{p} <c+1-\alpha_2$. Finally, when $v_0=0$, $y^{\alpha_2-1}D_yu=g=y^{\alpha_2-c-1}v$ and we can use Proposition \ref{Hardy in core} (i). Let us prove (iii). Note that $c-1 <c+1-\alpha_2$, since $\alpha_2<2$. At the points where $v_0(x) \neq 0$, we have for $0<y \leq \delta(x)$, $|D_yu(x, y)|\geq \frac 12|v_0(x)| y^{-c}$ which implies $|u(x,y)|\geq \frac 14|v_0(x)| y^{-c+1}$ for $0<y \leq \delta'(x)$, since $c>1$. This yields $\int_0^\infty |u(x,y)|^p y^m\, dy=\infty$, since $(m+1)/p\leq c-1$, and then $v_0=0$ a.e. \qed \medskip To provide an equivalent description of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ we need the following simple lemma. \begin{lem} \label{elliptic} Assume that $u \in L^p(\mathbb{R}^N) \cap W^{2,1}_{loc}(\mathbb{R}^N)$ for some $1<p<\infty$ and that $\Delta u \in L^p(\mathbb{R}^N)$. Then $u \in W^{2,p}(\mathbb{R}^N)$. \end{lem} {\sc{Proof.}} Let $v \in W^{2,p}(\mathbb{R}^N)$ be such that $v-\Delta v=u-\Delta u$ and consider $w=u-v \in L^p(\mathbb{R}^N) \cap W^{2,1}_{loc}(\mathbb{R}^N)$. If $\phi \in C_c^\infty (\mathbb{R}^N)$, then $$ 0=\int_{\mathbb{R}^N}(w-\Delta w)\phi=\int_{\mathbb{R}^N}w(\phi-\Delta \phi). $$ Since $w \in L^p(\mathbb{R}^N)$ the above identity extends by density to all $\phi \in W^{2,p'}(\mathbb{R}^N)$ and then, since $I-\Delta$ is invertible from $W^{2,p'}(\mathbb{R}^N)$ to $L^{p'}(\mathbb{R}^N)$, we have $\int_{\mathbb{R}^N} w g=0$ for every $g \in L^{p'}(\mathbb{R}^N)$, so that $w=0$ and $u=v \in W^{2,p}(\mathbb{R}^N)$. \qed We can now show an equivalent description of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$, adapted to the degenerate operator $D_{yy}+cy^{-1}D_y$. \begin{prop}\label{Trace D_yu in W} Let $c\in\mathbb{R}$ and $\frac{m+1}{p}<c+1-\alpha_2$. Then \begin{align*} W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu\in L^p_m, \right. \\[1ex] &\left.\hspace{10ex} y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\text{\;\;and\;\;}\lim_{y\to 0}y^c D_yu=0\right\} \end{align*} and the norms $\|u\|_{W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)}$ and $$\|u\|_{L^p_m}+\|y^{\alpha_1}\Delta_x u\|_{L^p_m}+\|y^{\alpha_2}(D_{yy}u+cy^{-1}D_yu)\|_{L^p_m}$$ are equivalent on $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$.
Finally, when $0<\frac{m+1}p\leq c-1$ then \begin{align*} W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu, y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\right\}. \end{align*} \end{prop} {\sc{Proof.}} Let $\mathcal G$ be the space on the right hand side with the canonical norm indicated above. By Lemma \ref{Lem Trace Dy in W} $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m) \subset \mathcal G$ and the embedding is clearly continuous. Conversely, let $u \in \mathcal G$. The estimate for $y^{\alpha_2-1}D_yu$ follows from Lemma \ref{Lem Trace Dy in W}(ii) and yields, by difference, also that for $y^{\alpha_2}D_{yy}u$. Since for $y\leq 1$ one has $y^{\frac{\alpha_2}2}\leq y^{\alpha_2-1}$ it follows that $y^{\frac{\alpha_2}2}D_yu\in L^p_m(\mathbb{R}^N \times [0,1])$ and $y^{\frac{\alpha_2}2}D_yu\in L^p_m(\mathbb{R}^N \times [1,\infty])$ by Lemma \ref{inter1}. Finally, we prove the inequality $$\|y^{\frac{\alpha_1}2}D_xu\|_{L^p_m}+\|y^{\alpha_1}D_{x_ix_j}u\|_{L^p_m}\leq C\left (\|u\|_{L^p_m}+\|y^{\alpha_1}\Delta_x u\|_{L^p_m}\right ). $$ Since $u(\cdot,y) \in L^p(\mathbb{R}^N) \cap W_{loc}^{2,p}(\mathbb{R}^N)$ for a.e. $y>0$, the lemma above and the Calderon-Zygmund inequality yield \begin{align*} \int_{\mathbb{R}^N} |D_{x_i x_j}(x,y)|^p\,dx\leq C\int_{\mathbb{R}^N} |\Delta_x(x,y)|^p\,dx. \end{align*} Multiplying by $y^{p\alpha_1+m }$ and integrating over $\mathbb{R}_+$ we obtain $\sum_{i,j=1}^n\|y^{\alpha_1} D_{x_ix_j}u\|_{L^p_m}\leq C\|y^{\alpha_1} \Delta_x u\|_{L^p_m}$. The estimate $$\|y^{\frac{\alpha_1}2} \nabla_{x}u\|_{L^p_m}\leq C\left(\|y^{\alpha_1} \Delta_x u\|_{L^p_m}+\|u\|_{L^p_m}\right)$$ can be obtained similarly using the interpolative inequality $$\|\nabla_x u(\cdot,y)\|_{L^p(\mathbb{R}^n)}\leq \epsilon \|\Delta_x u(\cdot,y)\|_{L^p(\mathbb{R}^n)}+\frac {C(N,p)} \epsilon \| u(\cdot,y)\|_{L^p(\mathbb{R}^n)}$$ with $\epsilon=y^{\frac{\alpha_1}2}$. The equality for $0<\frac{m+1}p\leq c-1$ follows from Lemma \ref{Lem Trace Dy in W}(iii). \qed \medskip We provide now another equivalent description of $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ which involves a Dirichlet, rather than Neumann, boundary condition, in a certain range of parameters. \begin{prop}\label{trace u in W op} Let $c\geq 1$ and $\frac{m+1}{p}<c+1-\alpha_2$. The following properties hold. \begin{itemize} \item[(i)] If $c>1$ then \begin{align*} W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu\in L^p_m,\right. \\[1ex] &\left.\hspace{11ex}y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\text{\;and\;}\lim_{y\to 0}y^{c-1} u=0\right\}. \end{align*} \item[(ii)] If $c=1$ then \begin{align*} W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=&\left\{u \in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+): u,\ y^{\alpha_1}\Delta_xu\in L^p_m,\right. \\[1ex] &\left.\hspace{11ex}y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}y\right) \in L^p_m\text{\;and\;}\lim_{y\to 0} u(x,y)\in \mathbb{C}\right\}. \end{align*} \end{itemize} \end{prop} {\sc Proof. } Let us prove (i). By Proposition \ref{Trace D_yu in W} it is sufficient to show that the conditions $\lim_{y \to 0}y^cD_yu=0$ and $\lim_{y \to 0}y^{c-1}u=0$ are equivalent. 
We proceed as in Lemma \ref{Lem Trace Dy in W} setting $v:=y^{c}D_yu$ and $$f:=y^{\alpha_2}\left(D_{yy}u+c\frac{D_yu}{y}\right)=y^{\alpha_2-c}D_yv\in L^p_m.$$ If $v_0(x)=\left(y^cD_yu\right)(x,0)$, then $g:=y^{\alpha_2-c-1}(v-v_0)\in L^p_m$ by Proposition \ref{Hardy in core} (ii), and \begin{equation} \label{w1} D_yu=y^{1-\alpha_2}g+y^{-c}v_0. \end{equation} Then, since $c>1$, we can write for $0<y<1$ \begin{align}\label{eq1 trace u in W} u(x,1)- u(x,y)=\frac{1}{c-1}v_0(x)(y^{1-c}-1)+\int_y^1 s^{1-\alpha_2}g(x,s)\, ds \end{align} and \begin{equation} \label{w2} \int_y^1 s^{1-\alpha_2}|g(x,s)|\, ds \leq \|g\|_{L^p_m} \left(\int_y^1 s^{(1-\alpha_2 -\frac mp)p'} \right)^{\frac{1}{p'}} \leq C(1+y^{\gamma}) \end{equation} with $\gamma=2-\alpha_2-(m+1)/p>1-c$ (when $\gamma=0$ the term $y^\gamma$ is substituted by $|\log y|^{\frac{1}{p'}}$). Since $c>1$, it follows that \begin{align*} \lim_{y \to 0} y^{c-1}u(x,y)=\frac{v_0(x)}{1-c} \end{align*} and therefore $\displaystyle\lim_{y \to 0} y^{c-1}u(x,y)=0$ if and only if $v_0(x)=0$ or, by Lemma \ref{Lem Trace Dy in W}(ii), if $\displaystyle\lim_{y \to 0}y^c D_yu(x,y)=0$. To prove (ii) we proceed similarly. From \eqref{w1} with $c=1$ we obtain \begin{align*} u(x,1)- u(x,y)=-v_0(x)|\log y|+\int_y^1 s^{1-\alpha_2}g(x,s)\, ds,\qquad 0<y<1. \end{align*} The parameter $\gamma$ is positive, since $(m+1)/p<2-\alpha_2$ and the integral on the right hand side of \eqref{eq1 trace u in W} converges. Therefore $\displaystyle\lim_{y \to 0} u(x,y)\in \mathbb{C}$ if and only if $v_0(x)=0$. \qed \begin{os} We point out that the function $v=y^{c-1}u$ above satisfies $D_yv\in L^1 \left(Q\times[0,1]\right)$ for every cube $Q$. In particular $D_y u\in L^1\left(Q\times [0,1]\right)$, if $\frac{m+1}{p} <2-\alpha_2$, by choosing $c=1$. Indeed, if $c>1$, using \eqref{eq1 trace u in W}, \eqref{w2} with $v_0=0$ one has $y^{c-2}u\in L^1 \left(Q\times[0,1]\right)$. Then the equality $$D_{y}v=y^{c-1}D_yu+(c-1)y^{c-2}u=y^{c-\alpha_2}g+(c-1)y^{c-2}u$$ and $g \in L^p_m$ and H\"older inequality yield $y^{c-\alpha_2}g \in L^1(Q \times [0,1])$. When $c=1$ then $v=u$ and we use \eqref{w1} with $v_0=0$ and then \eqref{w2}, since $\gamma>0$, as observed in the above proof. \end{os} \subsection{Approximation with smooth functions} \label{denso} The main result of this section is a density property of smooth functions in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$. We introduce the set \begin{equation} \label{defC} \mathcal{C}:=\left \{u \in C_c^\infty \left(\mathbb{R}^N\times[0, \infty)\right), \ D_y u(x,y)=0\ {\rm for} \ y \leq \delta\ {\rm and \ some\ } \delta>0\right \} \end{equation} and its one dimensional version \begin{equation} \label {defD} \mathcal{D}=\left \{u \in C_c^\infty ([0, \infty)), \ D_y u(y)=0\ {\rm for} \ y \leq \delta\ {\rm and \ some\ } \delta>0\right \}. \end{equation} Let $$C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D=\left\{u(x,y)=\sum_i u_i(x)v_i(y), \ u_i \in C_c^\infty (\mathbb{R}^N), \ v_i \in \cal D \right \}$$ (finite sums). Clearly $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D \subset \mathcal C$. \begin{teo} \label{core gen} If $\frac{m+1}{p}>\alpha_1^-$ then $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D$ is dense in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$. \end{teo} Note that the condition $(m+1)/p>\alpha_1^-$ or $m+1>0$ and $(m+1)/p+\alpha_1>0$ is necessary for the inclusion $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D \subset W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$. 
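To make the necessity of this condition explicit, here is a minimal verification sketch. Take $u(x,y)=\phi(x)\psi(y)$ with $0\neq\phi\in C_c^\infty(\mathbb{R}^N)$ (so that $D_{x_ix_j}\phi\not\equiv 0$ for some $i,j$, since a compactly supported affine function vanishes identically) and $\psi\in\mathcal D$ with $\psi=1$ on $[0,\delta]$. Then the requirements $u\in L^p_m$ and $y^{\alpha_1}D_{x_ix_j}u\in L^p_m$ force $$\int_0^\delta y^{m}\,dy<\infty\qquad\text{and}\qquad \int_0^\delta y^{\alpha_1 p+m}\,dy<\infty,$$ that is $m>-1$ and $\alpha_1 p+m>-1$, which together amount to $\frac{m+1}{p}>\alpha_1^-$.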
\medskip For technical reasons we start from the case $\alpha_2=0$ and write $\alpha$ for $\alpha_1$. Then $$W^{2,p}_{\mathcal N}(\alpha,0,m)=\left\{u\in W^{2,p}_{loc}(\mathbb{R}^{N+1}_+):\ u,\, y^\alpha D_{x_ix_j}u,\ y^\frac{\alpha}{2} D_{x_i}u,\ D_{y}u,\ D_{yy}u,\ \frac{D_yu}{y}\in L^p_m\right\}.$$ \medskip We need some preliminary results which show the density of smooth functions with compact support in $W^{2,p}_{\mathcal N}(\alpha,0,m)$. In the first one, no restriction on $\alpha$ is needed. \begin{lem} \label{supp-comp} The functions in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$ having support in $\mathbb{R}^N \times [0,b[$ for some $b>0$ are dense in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$. \end{lem} {\sc Proof.} Let $0\leq\phi\leq 1$ be a smooth function depending only on the $y$ variable which is equal to $1$ in $(0,1)$ and to $0$ for $y \ge 2$. Set $\phi_n(y)=\phi \left(\frac{y}{n}\right)$ and $u_n(x,y)=\phi_n(y)u(x,y)$. Then $u_n\in W^{2,p}_{\mathcal N}(\alpha,0,m)$ and has support in $\mathbb{R}^N \times [0,2n]$. By dominated convergence $u_n \to u$ in $L^p_m$. Since $D_{x_ix_j}u_n=\phi_n D_{x_ix_j}u$, $ D_{x_i}u_n=\phi_nD_{x_i}u$ we have $y^\alpha D_{x_ix_j}u_n\to y^\alpha D_{x_ix_j}u$, $y^\frac{\alpha}{2} D_{x_i}u_n\to y^\frac{\alpha}{2} D_{x_i}u$, by dominated convergence again. For the convergence of the $y$-derivatives, we observe that $|D_y \phi_n|\leq \frac{C}{n}\chi_{[n,2n]}$, $|D_{yy} \phi_n| \leq \frac{C}{n^2}\chi_{[n,2n]}$. Since $D_y u_n =\phi_n D_y u+ D_y \phi_n u$ and $D_{yy} u_n =\phi_n D_{yy} u+2D_y\phi_n D_y u+ uD_{yy}\phi_n$, we have also $D_y u_n \to D_y u$, $D_{yy}u_n\to D_{yy}u$ and $\frac{D_yu_n}{y}\to \frac{D_yu}{y}$ in $L^p_m$. \qed \begin{lem} \label{supp-comp-x} Assume that $\frac{m+1}{p}<2$ and $\frac{m+1}{p}+\alpha>0$. Then the functions in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$ with compact support are dense in $ W^{2,p}_{\mathcal N}(\alpha,0,m)$. \end{lem} {\sc Proof.} Let $u\in W^{2,p}_{\mathcal N}(\alpha,0,m)$. By Lemma \ref{supp-comp}, we may assume that $u$ has support in $\mathbb{R}^N \times [0,b[$ for some $b>0$. Let $0\leq\phi\leq 1$ be a smooth function depending only on the $x$ variable which is equal to $1$ if $|x|\leq 1$ and to $0$ for $|x| \ge 2$. Set $\phi_n(x)=\phi \left(\frac{x}{n}\right)$ and $u_n(x,y)=\phi_n(x)u(x,y)$. Then $u_n\in W^{2,p}_{\mathcal N}(\alpha,0,m)$ and has compact support. By dominated convergence $u_n \to u$ in $L^p_m$. Moreover, since $D_y u_n =\phi_n D_y u$ and $D_{yy} u_n =\phi_n D_{yy} u$, we have immediately $D_{yy}u_n\to D_{yy}u$, $\frac{D_yu_n}{y}\to \frac{D_yu}{y}$ in $L^p_m$. Concerning the derivatives with respect to the $x$ variable, we have $|D_{x_i} \phi_n(x)|\leq \frac{C}{n}\chi_{[n,2n]}(|x|)$, $|D_{x_ix_j } \phi_n(x)|\leq \frac{C}{n^2}\chi_{[n,2n]}(|x|)$ and \begin{align}\label{eq1 lem supp} \nonumber D_{x_i}u_n&=\phi_n D_{x_i}u+u D_{x_i}\phi_n, \\ D_{x_jx_i}u_n&=\phi_n D_{x_ix_j}u+D_{x_j}\phi_n D_{x_i}u+D_{x_j}u D_{x_i}\phi_n +u D_{x_ix_j}\phi_n. \end{align} Let us show that $y^\alpha u,\ y^{\frac{\alpha}2} u\in L^p_m$. Since $u$ has support in $\mathbb{R}^N \times [0,b[$ this is trivial for $\alpha\geq 0$. When $\alpha<0$ let $f(x,y)=\frac{u(x,y)-u(x,0)}{y^2}$ so that \begin{align*} y^\alpha u=y^{\alpha+2}f+y^\alpha u(\cdot,0). \end{align*} By Proposition \ref{Hardy in core}, $f\in L^p_m$ and $u(\cdot,0)\in L^p(\mathbb{R}^N)$.
Since $u$ and $f$ have support in $\mathbb{R}^N \times [0,b[$, the assumption $-\alpha<\frac{m+1}{p}<2$ then implies that $y^\alpha u\in L^p_m$ and also $y^{\frac{\alpha}2} u\in L^p_m$. Using the classic interpolative inequality $$\|\nabla_x u(\cdot,y)\|_{L^p(\mathbb{R}^N)}\leq \epsilon \|\Delta_x u(\cdot,y)\|_{L^p(\mathbb{R}^N)}+\frac {C(N,p)} \epsilon \| u(\cdot,y)\|_{L^p(\mathbb{R}^N)}$$ with $\epsilon=1$ we easily get (after raising to the power $p$, multiplying by $y^{\alpha p+m}$ and integrating in $y$) that $y^\alpha \nabla_x u\in L^p_m$. Using \eqref{eq1 lem supp} and the fact that $y^\alpha u,\ y^{\frac{\alpha}2} u,\ y^\alpha \nabla_x u\in L^p_m$ we deduce using dominated convergence that $y^\alpha D_{x_ix_j}u_n\to y^\alpha D_{x_ix_j}u$, $y^\frac{\alpha}{2} D_{x_i}u_n\to y^\frac{\alpha}{2} D_{x_i}u$ in $L^p_m$. \qed In the next lemma we add regularity with respect to the $x$-variable. \begin{lem} \label{smooth} Let $\frac{m+1}{p}<2$, $\frac{m+1}{p}+\alpha>0$ and $u\in W^{2,p}_{\mathcal N}(\alpha,0,m)$ with compact support. Then there exists a sequence $(u_n)_{n\in\mathbb{N}}\subset W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ of functions with compact support such that $u_n$ converges to $u$ in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ and, for every $y\geq 0$, $u_n(\cdot,y)$ belongs to $ C^\infty(\mathbb{R}^N)$ and has bounded $x$-derivatives of any order. \end{lem} {\sc Proof.} Let $u\in W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ be as above and let us fix a standard sequence of mollifiers $\rho_n(x)=n^N\rho(nx)$ in $\mathbb{R}^N$, where $0\leq \rho\in C_c^\infty(\mathbb{R}^N)$, $\int_{\mathbb{R}^N}\rho(x)\ dx=1$. Let us set $u_n(x,y)=\left(\rho_n\ast u(\cdot,y)\right)(x)$ where $\ast$ means convolution with respect to the $x$ variable. By Lemma \ref{L1} and Proposition \ref{Hardy in core}, $u(\cdot, y)\in L^p(\mathbb{R}^N)$ for every $y\geq 0$ and therefore, by standard properties, $u_n$ has compact support and $u_n(\cdot,y)\in C^\infty_b(\mathbb{R}^N)$ for every $y\geq 0$. By Young's inequality \begin{align*} \|u_n(\cdot,y)\|_{L^p(\mathbb{R}^N)}\leq \|u(\cdot,y)\|_{L^p(\mathbb{R}^N)},\qquad u_n(\cdot,y)\to u(\cdot,y)\quad\text{in}\quad L^p(\mathbb{R}^N),\qquad \forall y\geq 0. \end{align*} Raising to the power $p$, multiplying by $y^m$ and integrating with respect to $y$, we get \begin{align*} \|u_n\|_{L^p_m}\leq \|u\|_{L^p_m} \end{align*} which, using dominated convergence with respect to $y$, implies $u_n\to u$ in $L^p_m$. Using the equalities \begin{align*} y^\alpha D_{x_ix_j}u_n&=\rho_n\ast (y^\alpha D_{x_ix_j}u),\qquad y^\frac{\alpha}{2} D_{x_i}u_n=\rho_n\ast (y^\frac{\alpha}{2} D_{x_i}u),\\[1ex] D_{yy}u_n&=\rho_n\ast D_{yy}u,\hspace{14.5ex} y^\gamma D_{y}u_n=\rho_n\ast (y^\gamma D_{y}u), \end{align*} $\gamma=0,-1$, a similar argument as before yields $u_n\to u$ in $W^{2,p}_{\mathcal N}(\alpha,0,m)$.\\\qed We can now prove a weaker version of Theorem \ref{core gen} when $\alpha_2=0$. \begin{prop} \label{corend} If $\frac{m+1}{p}>\alpha^-$ then $\mathcal C$, defined in \eqref{defC}, is dense in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$. \end{prop} {\sc Proof.} (i) We first consider the case $\frac{m+1}{p}>2$. Let $u\in W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ which, by Lemma \ref{supp-comp}, we may assume to have support in $\mathbb{R}^N\times [0,b]$. Let $\phi$ be a smooth function depending only on $y$, equal to $0$ in $(0,1)$ and to $1$ for $y \ge 2$. Let $\phi_n(y)=\phi (ny)$ and $u_n(x,y)=\phi_n(y)u(x,y)$.
Then \begin{align*} D_{x_ix_j}u_n &=\phi_n D_{x_ix_j}u,\hspace{12ex}D_{x_i}u_n =\phi_n D_{x_i}u,\\[1ex] D_y u_n &=\phi_n D_y u+D_y\phi_n u,\qquad D_{yy} u_n =\phi_n D_{yy} u+2D_y\phi_n D_yu+ uD_{yy}\phi_n. \end{align*} By dominated convergence $u_n \to u$, $y^\alpha D_{x_ix_j}u_n \to y^\alpha D_{x_ix_j}u$, $y^\frac{\alpha}{2} D_{x_i}u_n \to y^\frac{\alpha}{2} D_{x_i}u$ in $L^p_m$. Let us consider now the terms containing the $y$ derivatives and observe that \begin{align}\label{sti cut 2} |D_{y} \phi_n|\leq Cn\chi_{[\frac 1 n,\frac 2{n}]}\leq \frac{2C}{y}\chi_{[\frac 1 n,\frac 2{n}]},\qquad |D_{yy } \phi_n|\leq C n^2\chi_{[\frac 1 n,\frac 2{n}]}\leq \frac{4C}{y^2}\chi_{[\frac 1 n,\frac 2{n}]}. \end{align} Using these estimates and since $y^{-2}u\in L^p_m $ by Proposition \ref{Hardy in core} $$\frac{D_y u_n}{y} =\phi_n \frac{D_y u}{y}+\frac{u}{y} (D_y\phi_n) \to \frac{D_y u}{y}$$ in $L^p_m$, by dominated convergence. In a similar way one shows that $D_y u_n \to D_yu$ and $D_{yy}u_n \to D_{yy}u$ in $L^p_m$ and hence functions with compact support in $\mathbb{R}^N\times ]0,\infty[$ are dense in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$. At this point, a standard smoothing by convolutions shows the density of $C_c^\infty (\mathbb{R}^N \times ]0,\infty[)$ in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$. (ii) Let $\frac{m+1}{p}=2$. We proceed similarly to (i) and fix $u\in W^{2,p}_{\mathcal{N}}(\alpha,0,m)$ with support in $\mathbb{R}^N\times [0,b]$. Let $\phi$ be a smooth function which is equal to $0$ in $\left(0,\frac{1}{4}\right)$ and to $1$ for $y \ge \frac{1}{2}$. Let $\phi_n(y)=\phi\left(y^\frac{1}{n}\right)$ and $u_n=\phi_n u$. By dominated convergence it is immediate to see that $u_n \to u$,\; $y^{\alpha}D_{x_ix_j}u_n\to y^{\alpha}D_{x_ix_j}u$,\; $y^\frac{\alpha}{2}D_{x_i}u_n\to y^\frac{\alpha}{2}D_{x_i}u$ in $L^p_m$. To treat the terms concerning the $y$ derivatives we observe that \begin{align}\label{beh cut} \nonumber |\phi_n'|&=\left|\frac{1}{n}\phi'\left(y^\frac{1}{n}\right)y^{\frac{1}{n}-1} \right|\leq \frac{C}{ny}\chi_{[(\frac 1 4)^n,(\frac 1{2})^n]}\\[1ex] | \phi_n''|&=\left|\frac{1}{n^2}\phi''\left(y^\frac{1}{n}\right)y^{\frac{2}{n}-2}+\frac 1 n(\frac 1 n-1)\phi'\left(y^\frac{1}{n}\right)y^{\frac{1}{n}-2}\right|\leq \frac{C}{ny^2}\chi_{[(\frac 1 4)^n,(\frac 1{2})^n]}. \end{align} Moreover, \begin{align*} D_y u_n =\phi_n D_y u+\phi_n'u,\qquad D_{yy} u_n=\phi_n D_{yy} u+2\phi_n'D_yu+\phi_n''u. \end{align*} Then $\frac 1 y D_y u_n\to \frac 1 y D_y u$ in $L^p_m$ since $\phi_n \frac{D_y u}{y}\to \frac 1 y D_y u$ by dominated convergence and $\phi_n'\frac{u}{y}\to 0$. In fact, using \eqref{beh cut} and \eqref{behaviour} of Lemma \ref{int-uMaggiore} we have \begin{align*} \left\|\phi_n'\frac{u}{y}\right\|^p_{L^p_m}\leq \frac{C}{n^p}\int_{(\frac 1 4)^n}^{(\frac 1 2)^n} |\log y|^{p-1}y^{m-2p}\,dy=\frac{C}{n^{2p}}\int_{\frac 1 4}^{\frac 1 2} |\log s|^{p-1}s^{-1}\,ds \end{align*} which tends to $0$ as $n\to\infty$. Concerning the second order derivative we have $ D_{yy} u_n\to D_{yy} u$ since $\phi_n D_{yy} u\to D_{yy} u$ by dominated convergence and the other terms tend to $0$. Indeed, proceeding as before we have $|\phi_n'D_yu|\leq \frac{C}{n}\chi_{[(\frac 1 4)^n,(\frac 1{2})^n]}\frac{|D_y u|}{y}$ which tends to $0$ by dominated convergence. Finally, \begin{align*} \|\phi_n''u\|^p_{L^p_m}\leq \frac{C}{n^p}\int_{(\frac 1 4)^n}^{(\frac 1 2)^n} |\log y|^{p-1}y^{m-2p}\,dy=\frac{C}{n^{2p}}\int_{\frac 1 4}^{\frac 1 2} |\log s|^{p-1}s^{-1}\,ds \end{align*} which tends to $0$ as $n\to\infty$.
Now the proof is as for (i) and shows that $C_c^\infty (\mathbb{R}^N \times ]0,\infty[)$ is dense in $W^{2,p}_{\mathcal{N}}(\alpha,0,m)$. (iii) Let us finally assume that $\frac{m+1}{p}<2$. By Lemmas \ref{supp-comp-x} and \ref{smooth} we may assume that $u$ has compact support and that for every $y \geq 0$, $u(\cdot,y) \in C^\infty_b(\mathbb{R}^N)$. By Proposition \ref{Hardy in core}, $\frac{u-u(\cdot,0)}{y^2}\in L^p_m$. Let $\phi$ be a smooth function equal to $0$ in $(0,1)$ and to $1$ for $y \ge 2$ and $\phi_n(y)=\phi (ny)$. Setting $$u_n(x,y)=(1-\phi_n(y))u(x,0)+\phi_n(y)u(x,y),$$ then \begin{align*} D_{x_i}u_n &=(1-\phi_n)D_{x_i} u(\cdot,0)+ \phi_n D_{x_i}u,\\[1ex] D_{x_ix_j}u_n &=(1-\phi_n)D_{x_ix_j} u(\cdot,0)+ \phi_nD_{x_ix_j}u,\\[1ex] D_y u_n &=\phi_n' (u-u(\cdot,0))+\phi_nD_{y}u,\\[1ex] D_{yy} u_n &=\phi_n'' (u-u(\cdot,0))+2\phi_n'D_yu+\phi_nD_{yy}u. \end{align*} It follows that $u_n \to u$,\; $y^\alpha D_{x_ix_j}u_n \to y^\alpha D_{x_ix_j}u$, \; $y^\frac{\alpha}{2} D_{x_i}u_n \to y^\frac{\alpha}{2} D_{x_i }u$ in $L^p_m$. Since the argument is always the same, let us explain it for $u_n$. The term $\phi_n u$ converges to $u$ by dominated convergence and $(1-\phi_n)u(\cdot, 0)$ converges to zero since $u(\cdot,0)$ is bounded with compact support. Using \eqref{sti cut 2} one has $$\frac{|\phi_n' (u-u(\cdot,0))|}{y}\leq C \chi_{[\frac{1}{n},\frac{2}{n}]}(y)\frac{|u-u(\cdot,0)|}{y^2}$$ which tends to $0$ in $L^p_m$ by dominated convergence, and hence $\frac{D_y u_n}{y}$ converges to $\frac{D_{y}u}{y}$ in $L^p_m$. Similarly $D_{yy} u_n$ converges to $D_{yy}u$ in $L^p_m$. Each function $u_n$ has compact support, does not depend on $y$ for small $y$ and is smooth with respect to the $x$ variable for any fixed $y$. Smoothness with respect to $y$ is however not yet guaranteed. This last property can be added by taking appropriate convolutions in $y$ with a compact support mollifier. \qed We can now prove the general density result. \medskip {\sc{Proof of Theorem \ref{core gen}} } The density of $\mathcal C$, defined in \eqref{defC}, in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$ follows by Lemma \ref{Sobolev eq} and Proposition \ref{corend} since the isometry $T_{0,-\frac{\alpha_2}2}$ maps dense subsets of $W^{2,p}_{\mathcal{N}}(\tilde \alpha,0,\tilde m)$ into dense subsets of $W^{2,p}_{\mathcal{N}}(\alpha_1,\alpha_2,m)$ and, since $\alpha_2<2$, leaves invariant $\mathcal{C}$. Note also that the conditions $(m+1)/p>\alpha_1^-$ and $(\tilde m+1)/p>\tilde\alpha^-$ are equivalent, again since $\alpha_2<2$. In order to prove the density of $C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D$, we may therefore assume that $u$ is in $\mathcal C$, that is $u \in C_c^\infty (\mathbb{R}^{N} \times [0, \infty))$ and $ D_y u(x,y)=0$ for $y \leq \delta$ for some $\delta>0$. Let $\eta$ be a smooth function depending only on the $y$ variable which is equal to $1$ in $[0,\frac{\delta}{2}]$ and to $0$ for $y \ge \delta$. Then, since $D_y u(x,y)=0$ for $y \leq \delta$, $$u(x,y)=\eta (y)u(x,y)+(1-\eta(y))u(x,y)=\eta (y)w(x)+(1-\eta(y))u(x,y)=u_1(x,y)+u_2(x,y)$$ with $u_1(x,y)=\eta(y)w(x)$ and $w(x)=u(x,0)$ depending only on the $x$ variable.
Observe now that $u_2(x,y)=(1-\eta(y))u(x,y)=0$ in $[0,\frac{\delta}{2}]$ and outside the support of $u$; therefore it belongs to $C^\infty_c(\mathbb{R}^{N+1}_+)$ and the approximation with respect to the $W^{2,p}(\mathbb{R}^{N+1}_+)$ norm by functions in $C_c^\infty (\mathbb{R}^{N})\otimes C_c^\infty (]0, \infty[)$ is standard (just use a sequence of polynomials converging uniformly to $u_2$, together with all first and second order derivatives, on a cube containing the support of $u_2$, and truncate outside the cube by a cut-off of the same type). This proves the result.\qed \begin{os} From the proofs of Proposition \ref{corend} and Theorem \ref{core gen} it follows that if $u\in W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$ has support in $\mathbb{R}^N\times[0,b]$, then there exists a sequence $\left(u_n\right)_{n\in\mathbb{N}}\subset\mathcal C$ such that $ \mbox{supp }u_n\subseteq \mathbb{R}^N\times[0,b]$ and $u_n\to u$ in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$. \end{os} \begin{cor} \label{Core C c infty} Assume $\frac{m+1}{p}\geq 2-\alpha_2$ and $\frac{m+1}{p}>\alpha_1^-$. Then $ C_c^\infty (\mathbb{R}^{N+1}_+) $ and $C_c^\infty (\mathbb{R}^{N})\otimes C_c^\infty \left(]0, \infty[\right)$ are dense in $W^{2,p}_{\mathcal N}(\alpha_1,\alpha_2,m)$. \end{cor} {\sc Proof. } This follows from the proofs of Proposition \ref{corend} and of Theorem \ref{core gen}. \qed \medskip Specializing Proposition \ref{Hardy in core} to $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ we get the following corollary. \begin{cor}\label{Hardy Rellich Sob} Let $\frac{m+1}{p}>\alpha_1^-$. The following properties hold for any $u\in W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$. \begin{itemize} \item[(i)] If $\frac{m+1}p>1-\frac{\alpha_2}2$ then \begin{align*} \|y^{\frac{\alpha_2}2-1}u\|_{L^p_m}\leq C \|y^{\frac{\alpha_2}2}D_{y}u\|_{L^p_m}. \end{align*} \item[(ii)] If $\frac{m+1}p>2-\alpha_2$ then \begin{align*} \|y^{\alpha_2-2}u\|_{L^p_m}\leq C \|y^{\alpha_2-1}D_{y}u\|_{L^p_m}. \end{align*} \item[(iii)] If $\frac{m+1}p<2-\alpha_2$ then \begin{align*} \|y^{\alpha_2-2}(u-u(\cdot,0))\|_{L^p_m}\leq C \|y^{\alpha_2-1}D_{y}u\|_{L^p_m}. \end{align*} \item[(iv)] If $\alpha_2-\alpha_1<2$ and $\frac{m+1}p>1-\frac{\alpha_1+\alpha_2}{2}$, $\frac{m+1}p>\alpha_1^-$ then \begin{align*} \|y^{\frac{\alpha_1+\alpha_2}{2}-1}\nabla_{x}u\|_{L^p_m}\leq C \|y^\frac{\alpha_1+\alpha_2}{2} D_{y}\nabla_x u\|_{L^p_m}. \end{align*} \end{itemize} \end{cor} {\sc{Proof.}} By density we may assume that $u \in C_c^\infty (\mathbb{R}^{N})\otimes\mathcal D$. All points follow by applying Proposition \ref{Hardy in core} to $u$ in the cases (i), (ii) and (iii) and to $\nabla_x u$ in the case (iv), recalling Proposition \ref{Sec sob derivata mista}.\qed \medskip \section{The space $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$} \label{Sec sob min domain} We consider an integral version of Dirichlet boundary conditions, namely a weighted summability of $y^{-2}u$, and introduce for $m \in \mathbb{R}$, $\alpha_2<2$ \begin{equation} \label{dominiodirichlet} W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)=\{u \in W^{2,p}(\alpha_1, \alpha_2, m): y^{\alpha_2-2}u \in L^p_m\} \end{equation} with the norm $$\|u\|_{W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)}=\|u\|_{W^{2,p}(\alpha_1, \alpha_2, m)}+\|y^{\alpha_2-2}u\|_{L^p_m}.$$ We remark that $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$ will be considered for every $m \in \mathbb{R}$ whereas $W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$ only for $m+1>0$.
The symbol $\mathcal R$ stands for ``Rellich'', since Rellich inequalities concern the summability of $y^{-2}u$. \begin{prop} \label{RN} The following properties hold. \begin{itemize} \item[(i)] If $u \in W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$ then $y^{\alpha_2-1}D_y u \in L^p_m.$ \item[(ii)] If $\alpha_2-\alpha_1<2$ and $\frac{m+1}{p}>2-\alpha_2$, then $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m) = W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)=W^{2,p}(\alpha_1, \alpha_2, m)$, with equivalence of the corresponding norms. In particular, $C_c^\infty (\mathbb{R}^{N+1}_+)$ is dense in $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m) $. \end{itemize} \end{prop} {\sc Proof. } The proof of (i) follows by integrating with respect to $x$ the inequality of Lemma \ref{inter}. The proof of (ii) follows from Proposition \ref{neumann}(i) and Corollary \ref{Hardy Rellich Sob}(ii), after noticing that $\alpha_2-\alpha_1 <2$ and $\frac{m+1}{p}>2-\alpha_2$ yield $\frac{m+1}{p}>\alpha_1^-$. The density of $C_c^\infty (\mathbb{R}^{N+1}_+)$ in $W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m)$ now follows from Corollary \ref{Core C c infty}. \qed Finally, we investigate the action of the multiplication operator $T_{k,0}:u\mapsto y^ku$. The following lemma is the companion of Lemma \ref{Sobolev eq} which deals with the transformation $T_{0,\beta}$. \begin{lem}\label{isometryRN} \label{y^k W} Let $\alpha_2-\alpha_1<2$ and $\frac{m+1}{p}>2-\alpha_2$. For every $k\in\mathbb{R}$ \begin{align*} T_{k,0}: W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m) \to W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp) \end{align*} is an isomorphism (we shall write $y^k W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)= W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp)$). \end{lem} {\sc{Proof.}} Let $u=y^{k}v$ with $v\in W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)$. Since all $x$-derivatives commute with $T_{k,0}$ we deal only with the $y$-derivatives. We observe that \begin{align*} D_yu=y^k\left(D_yv+k\frac{v}y\right),\qquad D_{yy}u=y^k\left(D_{yy}v+2k \frac{D_y v}{y}+ k(k-1)\frac v{y^2}\right). \end{align*} Corollary \ref{Hardy Rellich Sob} yields \begin{align*} \|y^{\alpha_2-2}v\|_{L^p_m}+\|y^{\frac{\alpha_2}2-1}v\|_{L^p_m}+\|y^{\alpha_2-1}D_y v\|_{L^p_m}\leq c \|v\|_{W^{2,p}_{\mathcal N}(\alpha_1, \alpha_2, m)} \end{align*} and then $u \in W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp)$. Conversely, if $u \in W^{2,p}_{\mathcal R}(\alpha_1, \alpha_2, m-kp)$, then $y^{\alpha_2-1}D_y u \in L^p_{m-kp}$ by Proposition \ref{RN}(i) and similar formulas as above show that $y^{\alpha_2-1}D_y v, y^{\alpha_2} D_{yy}v \in L^p_m$. Since $y^{\alpha_2/2-1} \leq 1+y^{\alpha_2-2}$, we also get $y^{\alpha_2/2-1} u \in L^p_{m-kp}$ and $y^{\alpha_2/2}D_y v \in L^p_m$. \qed
\section{Introduction} \label{sec:introduction} \IEEEPARstart{D}{ental} cone-beam computed tomography (CBCT) and intraoral scan (IOS) are used for virtual implant positioning, maxillofacial surgery simulation, and orthodontic treatment planning. Dental CBCT has been widely used for the three-dimensional (3D) imaging of the teeth and jaws \cite{sukovic2003cone,miracle2009conebeam}. Recently, IOS has been increasingly used to capture digital impressions that are replicas of teeth, gingiva, palate, and soft tissue in the oral cavity \cite{mangano2017intraoral,zimmermann2015intraoral}, as digital scanning technologies have rapidly advanced \cite{robles2020digital}. The use of IOS addresses many of the shortcomings of conventional impression manufacturing techniques \cite{siqueira2021intraoral, manicone2021patient}. This paper aims to provide a fully automated method of integrating dental CBCT and IOS data into one image such that the integrated image utilizes the strengths and supplements the weaknesses of each image. In dental CBCT, spatial resolution is insufficient for elaborately depicting tooth geometry and interocclusal relationships. Moreover, image degradation associated with metal-induced artifacts is becoming an increasingly frequent problem, as the number of older people with artificial dental prostheses and metallic implants is growing rapidly with aging populations. Metallic objects in the CBCT field of view produce streaking artifacts that highly degrade the reconstructed CBCT images, resulting in a loss of information on the teeth and other anatomical structures \cite{schulze2011artefacts}. IOS can compensate for the aforementioned weaknesses of dental CBCT. IOS provides 3D tooth crown and gingiva surfaces with a high resolution. However, tooth roots are not observed in intraoral digital impressions. Therefore, CBCT and IOS can be complementary to each other. A suitable fusion of CBCT and IOS images can provide detailed 3D tooth geometry along with the gingival surface. Numerous attempts have been made to register dental impression data to maxillofacial models obtained from 3D CBCT images. The registration process finds a rigid transformation by taking advantage of the facts that the upper and lower jaw bones are rigid and that the tooth surfaces are partially overlapping areas (\textit{e.g.}, the crowns of the exposed teeth). Several methods \cite{gateno2003new, uechi2006novel, swennen2007use, xia2009new, swennen2009cone} utilized fiducial markers for registration, which require a complicated process that involves the fabrication of devices with the markers, double CT scanning, and post-processing for marker removal. To simplify these processes, virtual reference point-based methods \cite{kim2010integration, lin2013artifact, hernandez2013new, nilsson2016virtual} were proposed to roughly align two models using reference points, and achieve a precise fit by employing an iterative closest point (ICP) method \cite{besl1992method}. ICP is a widely used iterative registration method consisting of closest-point matching between two data sets and minimization of the distances between the paired points. However, the ICP method relies heavily on initialization because it can easily be trapped in a locally optimal solution. Therefore, these ICP-based methods typically require a user-involved initial alignment, which is a cumbersome and time-consuming procedure due to manual clicking.
Furthermore, it is difficult for ICP-based registration to achieve acceptable results for patients with metallic objects \cite{flugge2017registration}. Teeth in CBCT images that are contaminated by metal artifacts prevent accurate point matching with teeth in impressions. Therefore, there is a high demand for a fully automated and robust registration method. Recently, a deep learning-based method \cite{chung2020automatic} was used to automate the initial alignment by extracting pose cues from the two data sets. This approach has limitations in achieving registration accuracy sufficient for clinical application. Without the use of a very good initial guess, the point matching for multimodal image registration is affected by the non-overlapping areas of the two different modalities (\textit{e.g.}, the soft tissues in IOS and the jaw bones and tooth surfaces contaminated by metal artifacts in CBCT). For an accurate registration, it is necessary to separate the non-overlapping areas as much as possible to prevent incorrect point matching. Therefore, individual tooth segmentation and identification in CBCT and IOS are required as important preprocessing tasks. In recent years, owing to advances in deep learning methods, numerous fully automated 3D tooth segmentation methods have been developed for CBCT images \cite{lee2020automated,rao2020symmetric,chen2020automatic,cui2019toothNet,jang2021fully} and impression models \cite{lian2020deep,zanjani2021mask,cui2021tsegnet}. Although the performance of intraoral scanners is improving, full-arch scans have not yet surpassed the accuracy of conventional impressions \cite{zhang2021accuracy,giachetti2020accuracy}. IOS over short spans can be used to obtain partial digital impressions that can replace traditional dental models, but it may not yet be suitable for clinical use on long complete arches due to the global cumulative error introduced during the local image stitching process \cite{ender2019accuracy}. To achieve sophisticated image fusion, it is therefore necessary to correct the stitching errors of IOS. We propose a fully automated method for registration of CBCT and IOS data as well as correction of IOS stitching errors. The proposed method consists of four parts: (i) individual tooth segmentation and identification module from IOS data (TSIM-IOS); (ii) individual tooth segmentation and identification module from CBCT data (TSIM-CBCT); (iii) global-to-local tooth registration between IOS and CBCT; and (iv) stitching error correction of the full-arch IOS. We developed TSIM-IOS using 2D tooth feature-highlighted images, which are generated by orthographic projection of the IOS data. This approach allows individual teeth to be segmented efficiently from high-dimensional 3D surface models using low-dimensional 2D images. In TSIM-CBCT, we utilize the panoramic image-based deep learning method \cite{jang2021fully}. This method is robust against metal artifacts because it utilizes panoramic images (generated from CBCT images) that are not significantly affected by metal artifacts. TSIM-IOS and -CBCT are used to focus only on the teeth, while removing as many non-overlapping areas as possible. In (iii), we then align the two highly overlapping data sets (\textit{i.e.}, the segmented teeth in the CBCT and IOS data) in a global-to-local fashion, which consists of global initialization by fast point feature histograms (FPFH) \cite{rusu2009fast} and local refinement by ICP based on individual teeth (T-ICP).
T-ICP allows closest-point matching only between the same individual teeth in the CBCT and IOS data. The last part (iv) corrects the stitching errors of IOS using CBCT-derived tooth surfaces. Owing to the reliability of CBCT \cite{baumgaertel2009reliability}, the location information of 3D teeth in the CBCT data can be used as a reference for the correction of IOS teeth. After registration, each IOS tooth is adjusted through a slight rigid transformation determined by the reference CBCT tooth. The main contributions of this paper are summarized as follows. \begin{itemize} \item To the best of our knowledge, this study is the first to provide a sophisticated fusion of IOS and CBCT data at the level of accuracy required for clinical use. \item The proposed method can provide accurate intraoral digital impressions that correct cumulative stitching errors. \item This framework is robust against metal-induced artifacts in low-dose dental CBCT. \item The combined tooth-gingiva models with individually segmented teeth can be used for occlusal analysis and implant surgical guide production in digital dentistry. \end{itemize} The remainder of this paper is organized as follows. Section 2 describes the proposed method in detail. In Section 3, we explain the experimental results. Section 4 presents the discussion and conclusions. \begin{figure*} \centering \includegraphics[width=1\textwidth]{framework.pdf} \caption{Overall flow diagram of the proposed method consisting of four parts: tooth segmentation and identification from IOS and CBCT data, global-to-local tooth registration of IOS and CBCT, and stitching error correction in IOS. Therefore, the proposed method integrates IOS and CBCT images into one coordinate system while improving the accuracy of full-arch IOS.} \label{fig:framework} \end{figure*} \begin{figure} \centering \subfloat[]{\includegraphics[width=.2\textwidth]{segid_ios.pdf}}~~~ \subfloat[]{\includegraphics[width=.225\textwidth]{segid_cbct.pdf}} \caption{Results of TSIM-IOS and -CBCT, respectively. The indicated numbers represent mandibular teeth by the universal notation. (a) Individual IOS teeth and their split gingiva parts, and (b) CBCT teeth containing unexposed wisdom teeth.} \label{fig:segid} \end{figure} \section{Method} The overall framework of the proposed method is illustrated in Fig. \ref{fig:framework}. It is designed to automatically align a patient's IOS model with the same patient's CBCT image. IOS models consist of 3D surfaces (triangular meshes) of the upper and lower teeth, and are acquired in Standard Triangle Language (STL) file format containing the 3D coordinates of the triangle vertices. The 3D vertices of IOS data can be expressed as a set of 3D points, whose coordinates are given in millimeters. Dental CBCT images are isotropic voxel structures consisting of sequences of 2D cross-sectional images, and are saved in Digital Imaging and Communications in Medicine (DICOM) format. Registration between the two different imaging protocols must be obtained separately for the maxilla and the mandible. For convenience, only the method for the mandible is described in this section. The method for the maxilla is the same. \subsection{Individual Tooth Segmentation and Identification in IOS} \label{subsec:IOS} As shown in Fig.
\ref{fig:segid}a, TSIM-IOS decomposes the 3D point set $X$ of the IOS model into \begin{equation} X = \underbrace{X_{t_1} \cup \cdots \cup X_{t_J}}_{X_{\mbox{\scriptsize teeth}}}\cup X_{\mbox{\scriptsize gingiva}}, \end{equation} where each $X_{t_j}$ represents a tooth with the code $t_j$ in $X$, $J$ is the number of teeth in $X$, and $X_{\mbox{\scriptsize gingiva}}$ is the rest including the gingiva in $X$. According to the universal notation system \cite{nelson2014wheeler}, $t_j$ is a number between 1 and 32 that is assigned to an individual tooth to identify it uniquely. A detailed explanation is provided in Appendix \ref{app:sec1}. Additionally, we divide the gingiva $X_{\mbox{\scriptsize gingiva}}$ into \begin{equation} \label{eq:gingiva} X_{\mbox{\scriptsize gingiva}} = X_{g_1} \cup \cdots \cup X_{g_J}, \end{equation} where \begin{equation} X_{g_j} = \left\{\mathbf{x} \in X_{\mbox{\scriptsize gingiva}} : \underset{\mathbf{x}' \in X_{\mbox{\tiny teeth}}}{\mbox{argmin}}\| \mathbf{x}-\mathbf{x}'\| \in X_{t_j}\right\} . \end{equation} Therefore, a point in $X_{\mbox{\scriptsize gingiva}}$ belongs to a separate gingival region $X_{g_j}$ according to the nearest tooth $X_{t_j}$. \subsection{Individual Tooth Segmentation and Identification in CBCT} \label{subsec:CBCT} TSIM-CBCT is based on the deep learning-based individual tooth segmentation and identification method developed by Jang \textit{et al.} \cite{jang2021fully}. As shown in Fig. \ref{fig:segid}b, we obtain the teeth point cloud $Y$ that consists of individual tooth point clouds, denoted by \begin{equation} Y = \underbrace{Y_{t_1} \cup \cdots \cup Y_{t_J}}_{Y_{\mbox{\scriptsize teeth}}} \cup Y_{\mbox{\scriptsize rest}}, \end{equation} where each $Y_{t_j}$ represents the $t_j$-tooth for $j=1,\cdots,J$ and $Y_{\mbox{\scriptsize rest}}$ refers to a point cloud of unexposed teeth (\textit{e.g.}, impacted wisdom teeth) if present. Because the impacted teeth do not appear in IOS images, they are separated into $Y_{\mbox{\scriptsize rest}}$. Each tooth point cloud $Y_{t_j}$ is obtained from a 3D binary image of the $t_j$-tooth determined by the individual tooth segmentation and identification method \cite{jang2021fully}. The points in $Y_{t_j}$ lie on isosurfaces (approximating the boundary of the segmented tooth image) that are generated by the marching cubes algorithm \cite{lewiner2003efficient}. Because the unit of points in $Y_{t_j}$ is associated with the image voxels, the points are scaled to millimeters by the image spacing and slice thickness. \subsection{Global-to-Local Tooth Registration of IOS and CBCT} This subsection describes the registration method to find the optimal transformation $\mathcal{T}^*$ such that the transformed point cloud $\mathcal{T}^*(X) = \{\mathcal{T}^*(\mathbf{x}) : \mathbf{x} \in X \}$ best aligns with the target $Y$ in terms of the partially overlapping tooth surfaces. The registration problem consists of the following two steps: \begin{enumerate} \item Construct a set of correspondences $Corr= \{(\mathbf{x},\mathbf{y})\in X \times Y\}$ between the source $X$ and the target $Y$.
\item Find the optimal rigid transformation with the following mean square error minimization to best match the pairs in the correspondences \begin{equation} \mathcal{T}^* = \underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2, \end{equation} where $SE(3)$ is the set of rigid transformations that are modeled with a $4\times4$ matrix determined by three angles and a translation vector. \end{enumerate} Here, we adopt two registration methods: FPFH \cite{rusu2009fast} for global initial alignment, and an improved ICP using individual tooth segmentation for local refinement. \subsubsection{Global initial alignment of the IOS and CBCT teeth} We compute the two sets of FPFH vectors \cite{rusu2009fast}; $\mbox{FPFH}(X_{\mbox{\scriptsize teeth}})=\{\mbox{FPFH}(\mathbf{x}):\mathbf{x} \in X_{\mbox{\scriptsize teeth}}\}$ and $\mbox{FPFH}(Y_{\mbox{\scriptsize teeth}})=\{\mbox{FPFH}(\mathbf{y}):\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}\}$. $\mbox{FPFH}(\mathbf{x})$ represents not only the geometric features of the normal vector and the curvature at $\mathbf{x} \in X_{\mbox{\scriptsize teeth}}$, but also relevant information from its neighboring points in $X_{\mbox{\scriptsize teeth}}$. The details of FPFH are provided in Appendix \ref{app:sec2}. $\mbox{FPFH}(X_{\mbox{\scriptsize teeth}})$ and $\mbox{FPFH}(Y_{\mbox{\scriptsize teeth}})$ are used to find correspondences between $X_{\mbox{\scriptsize teeth}}$ and $Y_{\mbox{\scriptsize teeth}}$. For each $\mathbf{x} \in X_{\mbox{\scriptsize teeth}}$, we select $\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}$, denoted by $\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})$, whose FPFH vector is most similar to $\mbox{FPFH}(\mathbf{x})$: \begin{equation} \mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})=\underset{\mathbf{y} \in Y_{\mbox{\tiny teeth}}}{\text{argmin}}~{\|\mbox{FPFH}(\mathbf{x})-\mbox{FPFH}(\mathbf{y})\|}. \end{equation} Similarly, we compute $\mbox{match}^{\mbox{\tiny FPFH}}_{X_{\mbox{\tiny teeth}}}(\mathbf{y})$ for all $\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}$. Then, we obtain the correspondence set \begin{equation} Corr = Corr_{X_{\mbox{\tiny teeth}}} \cap Corr_{Y_{\mbox{\tiny teeth}}}, \end{equation} where \begin{align} &Corr_{X_{\mbox{\tiny teeth}}} = \left\{\left(\mathbf{x},\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})\right):\mathbf{x} \in X_{\mbox{\scriptsize teeth}} \right\},\\ &Corr_{Y_{\mbox{\tiny teeth}}} = \left\{\left(\mbox{match}^{\mbox{\tiny FPFH}}_{X_{\mbox{\tiny teeth}}}(\mathbf{y}),\mathbf{y} \right):\mathbf{y} \in Y_{\mbox{\scriptsize teeth}} \right\}. \end{align} The set $Corr$ contains the pairs $(\mathbf{x},\mathbf{y}) \in X_{\mbox{\scriptsize teeth}} \times Y_{\mbox{\scriptsize teeth}}$ whose FPFH vectors $\mbox{FPFH}(\mathbf{x})$ and $\mbox{FPFH}(\mathbf{y})$ are mutually the most similar to each other. However, such simple feature information alone cannot provide a proper point matching between $X_{\mbox{\scriptsize teeth}}$ and $Y_{\mbox{\scriptsize teeth}}$, because there are too many points with similar geometric features in the point clouds.
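A minimal sketch of this mutual matching step is given below in Python. It is purely illustrative and is not part of the actual implementation: it assumes that the teeth point clouds and their FPFH vectors have already been computed and stored as NumPy arrays, and the helper name \texttt{mutual\_fpfh\_correspondences} is hypothetical.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def mutual_fpfh_correspondences(fpfh_x, fpfh_y):
    # fpfh_x: (n_x, 33) FPFH vectors of the IOS teeth points X_teeth
    # fpfh_y: (n_y, 33) FPFH vectors of the CBCT teeth points Y_teeth
    # nearest FPFH neighbor in Y_teeth of every point of X_teeth
    nn_xy = cKDTree(fpfh_y).query(fpfh_x, k=1)[1]
    # nearest FPFH neighbor in X_teeth of every point of Y_teeth
    nn_yx = cKDTree(fpfh_x).query(fpfh_y, k=1)[1]
    # keep a pair (x_i, y_j) only when both matchings agree,
    # i.e., the pair lies in Corr_X as well as in Corr_Y
    return [(i, j) for i, j in enumerate(nn_xy) if nn_yx[j] == i]
\end{verbatim}
The returned index pairs play the role of the set $Corr$ above; the distance-ratio test described next further removes geometrically inconsistent pairs.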
To filter out inaccurate pairs from the set ${Corr}$, we randomly sample three pairs $(\mathbf{x}_1,\mathbf{y}_1)$, $(\mathbf{x}_2,\mathbf{y}_2)$, $(\mathbf{x}_3,\mathbf{y}_3)\in {Corr}$ and select them if the following conditions \cite{zhou2016fast} are met, and drop them otherwise: \begin{equation} \label{eq:filter} \tau < \frac{\|\mathbf{x}_i - \mathbf{x}_j\|}{\|\mathbf{y}_i - \mathbf{y}_j\|} < \frac{1}{\tau},~~\text{for}~1\leq i<j \leq 3, \end{equation} where $\tau$ is a number close to 1. We denote this filtered subset as ${Corr}^{(0)}$. Then, the initial transformation is determined by \begin{equation} \mathcal{T}^{(0)}=\underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(0)}} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2. \end{equation} \subsubsection{Local refinement of the roughly aligned teeth} We denote $X_{\mbox{\scriptsize teeth}}$ transformed by the previously obtained $\mathcal{T}^{(0)}$ as $X_{\mbox{\scriptsize teeth}}^{(0)} = X_{t_1}^{(0)} \cup \cdots \cup X_{t_J}^{(0)}$, where $X_{t_j}^{(0)} = \mathcal{T}^{(0)}(X_{t_j})$ for $j=1,\cdots,J$. $X_{\mbox{\scriptsize teeth}}^{(0)}$ and $Y_{\mbox{\scriptsize teeth}}$ are then roughly aligned, but fine-tuning is needed to achieve accurate registration. A fine rigid transformation is obtained through an iterative process, which gradually improves the correspondence finding. We propose an improved ICP (T-ICP) method with point matching based on individual teeth. For $k \geq 1$, we denote $X_{\mbox{\scriptsize teeth}}^{(k)} = \mathcal{T}^{(k)}(X_{\mbox{\scriptsize teeth}}^{(k-1)})$. Here, the $k$-th rigid transformation $\mathcal{T}^{(k)}$ is determined by \begin{equation} \mathcal{T}^{(k)} = \underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(k)}} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2. \end{equation} The correspondence set $Corr^{(k)}$ for $k$ is given by \begin{equation} Corr^{(k)} = \left\{ \left(\mathbf{x}, \mbox{match}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x}) \right) : \mathbf{x} \in X_{\mbox{\scriptsize teeth}}^{(k-1)} \right\} \cap P^{(k)}, \end{equation} where \begin{align} & \mbox{match}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})=\underset{\mathbf{y} \in Y_{\mbox{\tiny teeth}}}{\mbox{argmin}} \|\mathbf{x}-\mathbf{y}\|, \\ & P^{(k)} = \bigcup_{j=1}^J \left\{(\mathbf{x},\mathbf{y}) \in X_{t_j}^{(k-1)} \times Y_{t_j} \right\}. \end{align} Using the set $P^{(k)}$ prevents undesired correspondences between two teeth with different codes. Note that this is the vanilla ICP when $P^{(k)}$ is not used. The final rigid transformation $\mathcal{T}^*$ is obtained by the following composition of transformations: $\mathcal{T}^*=\mathcal{T}^{(K)} \circ \cdots \circ \mathcal{T}^{(0)}$, where $K$ is the number of iterations until the stopping criterion is satisfied for a given $\varepsilon>0$: \begin{equation} \sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(K)}} \| \mathcal{T}^{(K)} \circ \cdots \circ \mathcal{T}^{(0)}(\mathbf{x}) - \mathbf{y} \|<\varepsilon. \end{equation} \subsection{Stitching Error Correction in IOS} Next, we edit the IOS models with stitching errors by referring to the CBCT images. We denote $X_{t_j}^*=\mathcal{T}^*(X_{t_j})$ and $X_{g_j}^*=\mathcal{T}^*(X_{g_j})$ for $j=1,\cdots,J$. Each tooth $X^*_{t_j}$ is transformed by a corrective rigid transformation $\mathcal{T}_j^{**}$, which is obtained by applying the vanilla ICP to the sets $X^*_{t_j-1} \cup X^*_{t_j} \cup X^*_{t_j+1}$ and $Y_{t_j-1} \cup Y_{t_j} \cup Y_{t_j+1}$ as the source and target, respectively.
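Both a T-ICP iteration and the corrective transformations $\mathcal{T}_{j}^{**}$ repeatedly solve the same two sub-problems: closest-point matching restricted to points carrying the same tooth code, and a least-squares rigid fit of the matched pairs. The following Python sketch illustrates one such iteration with the standard SVD-based (Kabsch) solution; the dictionary-of-arrays layout and the function names are illustrative assumptions rather than the actual implementation.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def fit_rigid(src, dst):
    # least-squares rigid motion (R, t) such that dst_i ~ R @ src_i + t
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def ticp_iteration(X_by_code, Y_by_code):
    # X_by_code, Y_by_code: {tooth code: (n, 3) point array}; assumes the
    # same codes are present in both dictionaries
    src, dst = [], []
    for code, Xt in X_by_code.items():
        Yt = Y_by_code[code]
        idx = cKDTree(Yt).query(Xt, k=1)[1]   # match only within the same tooth
        src.append(Xt)
        dst.append(Yt[idx])
    return fit_rigid(np.vstack(src), np.vstack(dst))
\end{verbatim}
In this sketch, one call of \texttt{ticp\_iteration} over all tooth codes corresponds to computing $\mathcal{T}^{(k)}$, whereas a per-tooth correction $\mathcal{T}_{j}^{**}$ amounts to calling the same routine on the codes $t_j-1$, $t_j$, and $t_j+1$ only.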
In the unions above, $X^*_{t_j-1}$ (or $X^*_{t_j+1}$) is taken to be an empty set if $t_j-1$ (or $t_j+1$) is not equal to $t_{j'}$ for any $j'=1,\cdots,J$. Using the individual corrective transformations, IOS stitching errors are corrected separately by $X_{t_j}^{**}=\mathcal{T}_{j}^{**}(X_{t_j}^{*})$ for $j=1,\cdots,J$. In this procedure, we use one tooth and the two adjacent teeth on both sides for reliable correction. This takes advantage of the fact that digital scanning over a short span is accurate. It now remains to fix the gingiva area, whose boundary is shared with the teeth. To fit the boundaries between the gingiva and the individually transformed teeth, the gingival surface is divided according to the areas in contact with the individual teeth by Eq. \eqref{eq:gingiva}. Therefore, the rectified gingiva is obtained by $X_{g_j}^{**} = \mathcal{T}_{j}^{**}(X_{g_j}^{*})$ for $j=1,\cdots,J$. \section{Experiments and Results} Experiments were carried out using CBCT images in DICOM format and IOS models in STL format. Each CBCT image is produced by a dental CBCT machine, DENTRI-X (HDXWILL), which uses a tube voltage of 90 kVp and a tube current of 10 mA. The size of the images obtained by the machine is $800\times800\times400$. The pixel spacing and slice thickness are both $0.2$ mm. Each IOS model is scanned by one of two intraoral scanners: i500 (Medit) and TRIOS 3 (3shape). Each IOS model covers either the maxilla or the mandible and has approximately 200,000 vertices and 120,000 triangular faces. The datasets were provided by HDXWILL. Additionally, we used maxillary and mandibular digital dental models to train TSIM-IOS. These datasets were collected by the Yonsei University College of Dentistry. Personal information in all datasets was de-identified for patient privacy and confidentiality. In Sections \ref{subsec:IOS} and \ref{subsec:CBCT}, the proposed deep convolutional network models were trained with labeled datasets for individual tooth segmentation and identification. For TSIM-IOS, 71 maxillary and mandibular dental models were used for training and 35 models for testing. Similarly, in TSIM-CBCT, 49 3D CBCT images were used for training and 23 images for testing. \begin{figure*}[h] \centering \subfloat[]{\includegraphics[width=.1975\textwidth]{distmap1.pdf}} \subfloat[]{\includegraphics[width=.1975\textwidth]{distmap2.pdf}} \subfloat[]{\includegraphics[width=.1975\textwidth]{distmap3.pdf}} \subfloat[]{\includegraphics[width=.1975\textwidth]{distmap4.pdf}} \subfloat[]{\includegraphics[width=.1975\textwidth]{distmap5.pdf}} \caption{Qualitative comparison results of registration methods. (a) MR, (b) CPD, (c) FPFH, (d) FPFH followed by ICP, and (e) the proposed method. The colors in the teeth represent distances between the IOS and CBCT tooth surfaces.} \label{fig:quantitative_reg_result} \end{figure*} \begin{figure}[h] \centering \includegraphics[width=.4\textwidth]{comparison_fpfh.pdf} \caption{Correspondence pairs of FPFH-based methods. The figure on the left shows poor matching from the FPFH method without TSIM. On the other hand, the figure on the right shows reasonable correspondences between the teeth obtained with TSIM.} \label{fig:comparison_fpfh} \end{figure} \subsection{Evaluation and Results of the Proposed Registration Method} We used 22 pairs of IOS models and CBCT images to evaluate the performance of the proposed registration method. Each pair was obtained from the same patient.
To measure the registration accuracy, we used a landmark distance between tooth landmarks pre-marked on the IOS and CBCT data: \begin{equation}\label{eq:land} E_{land}(\hat{X},\hat{Y};\mathcal{T}) = \frac{1}{N}\sum_{i=1}^N \| \mathcal{T}(\hat{\mathbf{x}}_i)-\hat{\mathbf{y}}_i\|, \end{equation} where $\mathcal{T}$ is a rigid transformation, and $\hat{X}=\{\hat{\mathbf{x}}_1,\cdots,\hat{\mathbf{x}}_N\}$ and $\hat{Y}=\{\hat{\mathbf{y}}_1,\cdots,\hat{\mathbf{y}}_N\}$ are the landmark sets of the pair of IOS and CBCT data, respectively. These landmarks were selected as points with discernible features such as cusps. In addition, we computed a surface distance from the IOS tooth surfaces to the CBCT tooth surfaces: \begin{equation}\label{eq:surf} E_{surf}(\bar{X},\bar{Y};\mathcal{T}) = \sup_{\bar{\mathbf{x}} \in \bar{X}} \inf_{\bar{\mathbf{y}} \in \bar{Y}} \| \mathcal{T}(\bar{\mathbf{x}})-\bar{\mathbf{y}} \|, \end{equation} where $\bar{X}$ and $\bar{Y}$ are the ground-truth tooth segmentations of the IOS and CBCT data, respectively. The metric $E_{surf}$ evaluates how far the IOS crown surface $\bar{X}$ is from the CBCT-derived tooth surface $\bar{Y}$. To verify the effectiveness of the proposed method, we compared it with manual clicking registration with ICP (MR), coherent point drift (CPD) \cite{myronenko2010point}, FPFH, and FPFH followed by ICP. These methods are implemented using the raw IOS model and the skull model, which is obtained by applying threshold segmentation and the marching cubes algorithm to the CBCT images. Table \ref{tbl:eval_reg} provides the quantitative evaluations of the methods, and Fig. \ref{fig:quantitative_reg_result} displays the qualitative results by visualizing distance maps between the ground-truth tooth surfaces of CBCT and IOS, which are aligned by the rigid transformations obtained from the employed methods. Also, we performed an ablation study to present the advantage of TSIM-IOS and -CBCT, as reported in Table \ref{tbl:eval_reg}. \begin{table}[] \footnotesize \centering \caption{Quantitative Comparison Results of Registration Methods. \label{tbl:eval_reg}}\vskip 0.0in \begin{tabular}{ccccccc} \hline & {\bf Method} &{\bf Landmark (mm)} & {\bf Surface (mm)} \\ \cline{1-4} \multirow{4}{*}{\parbox{.95cm}{w/o TSIM}} & {MR} & {$1.47 \pm 2.40$} & {$3.11 \pm 3.68$}\\ & {CPD} & {$12.77 \pm 6.12$} & {$17.57 \pm 6.26$}\\ & {FPFH} & {$0.46 \pm 0.34$} & {$0.91 \pm 0.54$}\\ & {FPFH + ICP} & {$0.28 \pm 0.11$} & {$0.55 \pm 0.10$}\\ \cline{1-4} \multirow{4}{*}{\parbox{.95cm}{w/ TSIM}} & {MR} & {$0.67 \pm 1.66$} & {$1.70 \pm 3.41$}\\ & {CPD} & {$3.68 \pm 2.62$} & {$5.01 \pm 3.55$}\\ & {FPFH} & {$0.40 \pm 0.19$} & {$0.71 \pm 0.16$}\\ & {FPFH + ICP} & {$0.22 \pm 0.10$} & {$0.48 \pm 0.09$}\\ & {\bf Proposed method} & {$\bf 0.22 \pm 0.09$} & {$\bf 0.47 \pm 0.08$}\\ \hline \end{tabular} \end{table} \begin{table*} \footnotesize \centering \caption{Results of Stitching Error Correction according to Registration Methods.
\label{tbl:eval_cor}}\vskip 0.0in \begin{tabular}{cccccccc} \hline & {\bf Method} & {\bf Landmark (mm)} & {\bf Difference (mm)} & {\bf Surface (mm)} & {\bf Difference (mm)}\\ \cline{1-6} \multirow{4}{*}{\parbox{.8cm}{w/ TSIM}} & {MR} & {$0.53 \pm 1.69$} & {$-0.14 \pm 0.03$} & {$1.49 \pm 3.50$} & {$-0.21 \pm 0.09$}\\ & {CPD} & {$3.62 \pm 2.71$} & {$-0.06 \pm 0.09$} & {$4.94 \pm 3.69$} & {$-0.07 \pm 0.14 $}\\ & {FPFH} & {$0.14 \pm 0.09$} & {$-0.26 \pm -0.10$} & {$0.40 \pm 0.15$} & {$ -0.31 \pm -0.01 $}\\ & {FPFH + ICP} & {$0.12 \pm 0.07$} & {$-0.10 \pm -0.03$} & {$0.32 \pm 0.12$} & {$-0.16 \pm 0.03 $}\\ & {\bf Proposed method} & {$\bf 0.11\pm 0.07$} & {$\bf -0.10 \pm -0.02$} & {$\bf 0.30 \pm 0.11$} & {$\bf -0.17 \pm 0.03$}\\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \subfloat{\includegraphics[width=.21\textwidth]{result1.pdf}}~~~ \subfloat{\includegraphics[width=.21\textwidth]{result3.pdf}}~~~ \subfloat{\includegraphics[width=.21\textwidth]{result4.pdf}}~~~ \subfloat{\includegraphics[width=.21\textwidth]{result2.pdf}} \caption{Qualitative results before and after correction of four selected evaluation data. The yellow and red lines represent contours of the IOS models with the proposed registration and correction methods, respectively. The contours are obtained from the IOS models cut along the corresponding CT slices. The two contours almost overlap, but differences appear at the ends of the arches. } \label{fig:correction_results} \end{figure*} When the source and target point clouds only partially overlap, MR and CPD were less accurate than FPFH, suggesting that the feature-based method is more suitable than user-interaction-based and probabilistic methods. Above all, these methods suffer from unnecessary points, because the non-overlapping areas between the IOS and skull models (\textit{i.e.}, the alveolar bones in CBCT and the soft tissues in IOS) occupy most of the entire area. Under such conditions, FPFH may produce inaccurate correspondence pairs due to the non-overlapping points that are not properly filtered out by Eq. \eqref{eq:filter}, as shown in Fig. \ref{fig:comparison_fpfh}. Therefore, the use of TSIM-IOS and -CBCT is beneficial, as it eliminates the areas that may adversely affect accurate registration. In the ablation study, the methods with TSIM showed improved performance compared to those without TSIM. Still, MR and CPD have limitations in achieving automation and in improving accuracy due to the roots of the CBCT teeth, respectively. To precisely match the models roughly aligned by FPFH, we developed T-ICP, which is an improved ICP method that uses individual tooth segmentation. Adopting T-ICP instead of ICP led to increased accuracy. The advantage of T-ICP is that it avoids point correspondences between adjacent teeth with different codes. This constraint prevents unwanted correspondences by performing point matching only between the same CBCT and IOS teeth. \subsection{Correction of the IOS Stitching Errors} This subsection presents the results before and after the correction of distortions in IOS, which occur in the stitching process of locally scanned images. Table \ref{tbl:eval_cor} reports the correction results for the registration methods with TSIM that were used in the subsection above. All post-correction accuracies improved compared with the pre-correction values. However, these correction results depend on the performance of the registration methods.
Each tooth of the IOS aligned with CBCT in the previous registration step is used as an initial guess to determine a corrective transformation. The locations of the IOS teeth should be as close as possible to the CBCT teeth, as the ICP may become stuck in local minima. Fig. \ref{fig:correction_results} presents the results of the proposed registration and correction methods. Due to accumulated stitching errors, the scanned arches tend to be narrower or wider than the actual arches. Thus, the registration results show that the full-arch IOS models slightly deviate at the ends of the arches. In contrast, the corrected IOS models fit the edges of the teeth in the CBCT images. \section{Discussion and Conclusion} In this paper, we developed a fully automatic registration and correction technique that integrates two different imaging modalities (\textit{i.e.}, IOS and CBCT images) in one scene. The proposed method is intended not only to compensate CBCT-derived tooth surfaces with the high-resolution surfaces of IOS, but also to correct cumulative IOS stitching errors across the entire dental arch by referring to CBCT. The most important contribution of the proposed method is its registration accuracy at the level of clinical application, even with severe metal artifacts in CBCT. The accuracy is achieved by the use of TSIM-IOS and -CBCT, which allow the minimization of the non-congruent points in CBCT and IOS data. The tooth-focused approach addresses the drawbacks of existing methods by achieving improved accuracy and full automation. Moreover, this approach helps to correct full-arch digital impressions with distortion caused by stitching errors. The fusion of the CBCT images and IOS models provides high-resolution crown surfaces even in the presence of serious metal-related artifacts in the CBCT images. Metal artifact reduction (MAR) in dental CBCT is known to be the most difficult and important issue. By avoiding the challenging problem of MAR with the help of IOS, the merged image may be used for occlusal analysis. The proposed multimodal data integration system can provide a jaw-tooth-gingiva composite model, which is a basic tool in the digital dentistry workflow. Thus, it may be used to produce a surgical wafer for orthognathic surgical planning and an orthodontic mini-screw guide to reduce failure by minimizing root contact. Furthermore, because the jaw-tooth-gingiva model is componentized with jaw bones, individual teeth, and soft tissues (gingiva and palate), it is useful in terms of versatility and practicality in various dental treatment tasks (\textit{e.g.}, dental implant placement, orthodontic simulation and evaluation). The proposed method can eliminate the hassle of traditional dental prosthetic treatments, which are labor intensive, costly, require at least two individual visits, and require a temporary prosthesis to be worn until the final crown is in place. Moreover, if the final crown made in the dental laboratory does not fit properly at the second visit, the patient and dentist will have to repeat the previous operation, and the laboratory may have to redesign the restoration prosthesis. Note that the proposed integration of dental CBCT and IOS data can provide an alternative to traditional impressions, thereby reducing the time-consuming laboratory procedure of manually editing individual teeth using a computer-aided interface.
\section*{Acknowledgment} This research was supported by a grant from the Korea Health Technology R\&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health \& Welfare, Republic of Korea (grant number: HI20C0127). We would like to express our deepest gratitude to HDXWILL, which shared the dataset.
\section{Introduction} Gas-particle two-phase flow is very common in natural phenomena, e.g., sand storms and volcano eruptions, and in many engineering applications, e.g., the petroleum, chemical, and energy industries. Numerical simulation is a powerful tool to study gas-particle two-phase flow, and many numerical methods have been developed to accurately and efficiently capture the complex physics of gas-particle flow \cite{Gasparticle-review-multiscale-tsuji2007multi, Gasparticle-review-van2008numerical, Gasparticle-review-zhongwenqi-zhong2016cfd, Gasparticle-review-Ge2017discrete, Gasparticle-review-WangJunwu2020continuum}. Generally, two approaches, the Eulerian-Eulerian (EE) approach and the Eulerian-Lagrangian (EL) approach, are widely employed, and this classification is based on the treatment of the particle phase. In the EE approach, the particle phase is assumed to be a continuum medium, and hydrodynamic equations are employed for the evolution of the particle flow \cite{Gasparticle-Abgrall-saurel1999multiphase, Gasparticle-TFM-compressible-houim2016multiphase}. The EE approach is also called the two fluid model (TFM). One representative EE approach is the kinetic theory of granular flow (KTGF), which is based on the similarity between the modeling of solid particles and that of gas molecules \cite{Gasparticle-KTGF-ding1990bubbling, Gasparticle-book-luhuilin2021computational}. In the EL approach, all individual solid particles, or particle parcels composed of a group of solid particles with the same properties, are tracked according to Newton's law of motion in the simulation \cite{ Gasparticle-DEM-review-guo-curtis-2015discrete}. Some typical EL approaches are the discrete element method (DEM) \cite{Gasparticle-DEM-cundall1979discrete, Gasparticle-DEM-tsuji1993discrete, Gasparticle-DEM-review-guo-curtis-2015discrete}, the coarse-grained particle method (CGPM) \cite{Gasparticle-coarse-grained-DEM-sakai2009large, Gasparticle-coarse-grained-EMMS-DPM-gewei-lu2016computer}, the multiphase particle-in-cell (MP-PIC) method \cite{Gasparticle-PIC-andrews1996multiphase, Gasparticle-PIC-rourke-2009model, Gasparticle-PIC-rourke-2012inclusion}, etc. In terms of flow physics, the choice of EE or EL depends on the local Knudsen ($Kn$) number of the particle flow. Similar to gas, the $Kn$ number of the disperse phase can be defined as the ratio of the mean free path (MFP) of solid particles to the characteristic length scale \cite{Gasparticle-momentmethod-Fox2013computational}. When the $Kn$ number is very small with sufficient inter-particle collisions, the solid particle phase can be assumed to be a continuum medium, and the EE approach can be appropriately used for the gas-particle system. On the contrary, when the $Kn$ number is large, individual particle transport becomes important and the solid phase stays in a non-equilibrium state. Therefore, the EL approach is the preferred choice. The disadvantage of the EL approach is the high computational cost of tracking the trajectories of all individual particles or parcels, especially in dense solid-particle flow \cite{Gasparticle-review-van2008numerical}. Theoretically, the EL approach can be used when the $Kn$ number is small, as long as the computational cost is affordable.
For the EE approach, it will be difficult to give an accurate prediction when the $Kn$ number of the particle phase is large, because the EE approach cannot capture the non-equilibrium physics of solid particles, such as the particle trajectory crossing (PTC) phenomenon \cite{Gasparticle-review-balachandar2010turbulent, Gasparticle-momentmethod-Fox2013computational}. Based on the features of the EE and EL approaches, many studies focus on hybrid methods, coupling the Eulerian and Lagrangian approaches together for the solid particle phase, to maintain both accuracy and computational efficiency \cite{Gasparticle-hybrid-EL-pialat2007hybrid, Gasparticle-hybrid-dynamic-multiscale-method-chen-wangjunwu-2017dynamic, Gasparticle-hybrid-KTGF-DEM-zhang-luhuilin-2019modified, Gasparticle-hybrid-EL-panchal2021hybrid}. In the hybrid method, it is a challenge to define an accurate and reliable criterion for the smooth transition between the Eulerian and Lagrangian approaches for the disperse phase. In addition, some other methods have been proposed and used for gas-particle flow, such as direct numerical simulation (DNS) \cite{Gasparticle-DNS-luokun-li2016direct, Gasparticle-DNS-gewei-liu2017meso}, the unified gas kinetic scheme (UGKS) \cite{UGKS-xu2010unified, UGKS-gas-particle-liu2019unified}, the unified gas kinetic particle method (UGKP) \cite{KP-gasparticle-wangzhao-wang2020unified}, the discrete unified gas kinetic scheme (DUGKS) \cite{Gasparticle-DUGKS-immersed-boundary-guozhaoli-tao2018combined}, the method of moments (MOM) \cite{Gasparticle-MOM-Fox-desjardins2008quadrature, Gasparticle-momentmethod-Fox2013computational}, direct simulation Monte Carlo (DSMC) \cite{Gasparticle-DSMC-bird1976molecular}, the material point method (MPM) \cite{Gasparticle-MPM-baumgarten2019general}, smoothed particle hydrodynamics (SPH) \cite{Gasparticle-SPH-GeWei-deng2013two}, the hybrid coarse-grain DEM and resolved DEM \cite{Gasparticle-hybrid-DEM-coarse-grained-DEM-queteschiner2018coupling}, the hybrid finite-volume-particle method \cite{Gasparticle-dusty-hybrid-finite-volume-particle-chertock2017hybrid}, etc. In recent years, the unified gas-kinetic scheme (UGKS) has been developed for rarefied and continuum flow simulation \cite{UGKS-xu2010unified, UGKS-book-xu2014direct}. Based on the direct modeling on the cell's Knudsen number, i.e., $Kn_c = \tau/\Delta t$ with particle collision time $\tau$ over numerical time step $\Delta t$, UGKS recovers multiscale transport through a smooth connection between the $(1-e^{-1/Kn_c})$ weighted equilibrium flow evolution and the remaining $e^{-1/Kn_c}$ weighted particle free transport, and the NS solution is automatically obtained at small $Kn_c$. After the success of the UGKS for gas flow, the method has been further extended to other multiscale transport problems, such as radiative heat transfer, neutron transport, plasma, particulate flow, etc. \cite{UGKS-radiative-sun2015asymptotic, UGKS-neutron-tan2020,UGKS-plasma-liu2017unified, UGKS-gas-particle-liu2019unified}. A particle-based UGKS, named the unified gas-kinetic particle (UGKP) method, was developed subsequently, using stochastic particles to follow the evolution of the gas distribution function \cite{WP-first-liu2020unified, WP-second-zhu-unstructured-mesh-zhu2019unified}. In UGKP, the sampled particles can be divided into two categories within each time step: collisionless (free transport) particles and collisional particles. The collisional particles will be eliminated in the evolution and get re-sampled from the equilibrium state at the beginning of the next time step.
As a result, only the collisionless particles are fully tracked over the whole time step in UGKP. Furthermore, it is realized that a proportion $e^{-1/Kn_c}$ of the particles re-sampled from the equilibrium state at the beginning of the next time step in UGKP will collide and be eliminated again within that time step. Actually, the contribution of these re-sampled collisional particles to the flux function in the finite volume UGKP can be evaluated analytically. As a result, the collisional particles do not need to be re-sampled at all, and can be followed analytically through a wave representation in the upgraded unified gas-kinetic wave-particle (UGKWP) method \cite{WP-first-liu2020unified, WP-second-zhu-unstructured-mesh-zhu2019unified, WP-3D-chen2020three, WP-sample-xu2021modeling}. In UGKWP, wave and particle are coupled together in the evolution, and basically only the free transport particles are tracked to capture the non-equilibrium flow physics. Therefore, UGKWP becomes a hydrodynamic flow solver in the continuum flow regime due to the absence of particles and becomes a particle method in the highly rarefied regime. UGKWP presents an optimized approach to capture multiscale transport efficiently using the combination of wave and particle. In the continuum flow regime, UGKWP automatically gets back to the gas-kinetic scheme (GKS), which is a kinetic theory-based Navier-Stokes solver \cite{GKS-2001, GKS-lecture, CompactGKS-ji2020-unstructured, CompactGKS-zhao2019-8th-order, GKS-turbulence-implicitHGKS-Cao2019}. Besides gas flow, UGKWP has also been used in the study of radiative transfer, plasma, and two-phase flow \cite{UGKWP-Li2020,UGKS-Liu2021-AIA,WP-six-gas-particle-yang2021unified}. The wave and particle decomposition in UGKWP makes it suitable for the simulation of both the dense (wave) and dilute (particle) solid-particle phase. For the particulate two-phase flow, the gas phase is solved by the GKS and the solid-particle phase by the UGKWP, and the final scheme is called GKS-UGKWP for convenience. For dilute monodisperse particulate flow, a previous GKS-UGKWP has been developed \cite{WP-six-gas-particle-yang2021unified}. In the UGKWP for the solid-particle phase, the number of sampled particles depends on the local cell's Knudsen number of the particle phase. When $Kn_c$ is extremely small for a dense particle distribution, no particle will be sampled in UGKWP and UGKWP reduces to a hydrodynamic flow solver. As a result, the GKS-UGKWP automatically becomes an EE approach. When $Kn_c$ is extremely large, only the particle evolution in UGKWP will be tracked and the corresponding GKS-UGKWP becomes an EL approach. For an intermediate $Kn_c$ number, both EE and EL formulations are coupled in each cell according to $Kn_c$ in the evolution of the particulate flow. In this paper, a more realistic model will be implemented in GKS-UGKWP for the two-phase flow simulation. Based on the solid volume fraction $\epsilon_{s}$, the particulate flow is usually divided into dilute flow with $\epsilon_{s} \le \epsilon_{s}^{*}$ and dense flow with $\epsilon_{s} > \epsilon_{s}^{*}$, and one of the choices of $\epsilon_{s}^{*}$ is 0.001 \cite{Gasparticle-review-van2006multiscale}. However, the solid volume fraction is not necessarily a reliable indicator of the importance of particle-particle collisions, while the Knudsen number is a suitable indicator \cite{Gasparticle-momentmethod-Fox2013computational}.
Generally, inter-particle collisions are more frequent (though not necessarily so) in dense flow than in dilute flow due to the large number of solid particles. Therefore, the particle-particle collision usually plays a significant role in the evolution of the dense solid phase, and it cannot be neglected in a numerical simulation aiming to accurately recover the real flow physics. The influence of inter-particle collisions is considered and modeled differently in different numerical methods. For example, in MP-PIC, an inter-particle stress term models the effect of particle-particle collisions, but it can only simulate particulate flow with a solid concentration $\epsilon_{s}<0.05$, which cannot be very high \cite{Gasparticle-PIC-andrews1996multiphase}. With the modification of the collision term, the improved MP-PIC can be used for dense particle flow with high concentration \cite{Gasparticle-PIC-rourke-2009model, Gasparticle-PIC-rourke-2012inclusion}. In DEM, both the soft-sphere model and the hard-sphere model can be used to calculate the influence of inter-particle collisions \cite{Gasparticle-DEM-cundall1979discrete, Gasparticle-DEM-hard-sphere-kuipers-hoomans1996discrete, Gasparticle-review-multiscale-tsuji2007multi}. In UGKWP, the collision effect is explicitly included in the collision term of the kinetic equation for modeling the evolution process from a local non-equilibrium to an equilibrium state \cite{Gasparticle-KTGF-lun1984kinetic, WP-first-liu2020unified}. For the numerical simulation of dense solid-particle flow, one challenge is the existence of the non-conservative ``nozzle'' term in the momentum equation and the corresponding $pDV$ work term in the energy equation for the gas flow, which are similar to the $pDV$ term in the quasi-one-dimensional gas nozzle flow equations \cite{Gasparticle-TFM-compressible-houim2016multiphase}. If these terms are not solved correctly, unphysical fluctuations of pressure and flow field will be generated, especially in the flow zone with a steep interface of solid-phase concentration \cite{Gasparticle-Abgrall-saurel1999multiphase, Gasparticle-TFM-compressible-houim2016multiphase}. When the solid phase approaches the packing limit, the effect of enduring particle-particle contact and friction, modeled by the solid frictional pressure term, has to be considered \cite{Gasparticle-KTGF-pressure-friction-johnson1987frictional, Gasparticle-pressure-friction-srivastava2003analysis, Gasparticle-pressure-friction-schneiderbauer2012comprehensive}. Also, the introduction of the frictional pressure can prevent the over-packing of solid particles, because its value increases dramatically when the solid volume fraction approaches its maximum limiting value \cite{Gasparticle-KTGF-pressure-friction-johnson1987frictional, Gasparticle-TFM-compressible-houim2016multiphase}. Particulate flow with high concentration is very common in practical engineering problems, such as fluidized beds, pneumatic conveying, etc \cite{Gasparticle-book-fan1999principles, Gasparticle-book-luhuilin2021computational}. Therefore, in this paper, the previously developed GKS-UGKWP for dilute flow is extended to dense gas-particle flow, so that GKS-UGKWP can be used for gas-particle two-phase flow with a wide range of solid volume fraction, from very dilute flow to a dense solid-particle phase. This paper is organized as follows. Section 2 introduces the governing equations for the particle phase and the UGKWP method. Section 3 presents the governing equations for the gas phase and the GKS method. Section 4 presents the numerical experiments.
The last section is the conclusion.

\section{UGKWP for solid-particle phase}
\subsection{Governing equation for particle phase}
The evolution of the particle phase is governed by the following kinetic equation,
\begin{gather}\label{particle phase kinetic equ}
\frac{\partial f_{s}}{\partial t} + \nabla_x \cdot \left(\textbf{u}f_{s}\right) + \nabla_u \cdot \left(\textbf{a}f_{s}\right) = \frac{g_{s}-f_{s}}{\tau_{s}},
\end{gather}
where $\textbf{u}$ is the particle velocity, $\textbf{a}$ is the particle acceleration caused by the external force, $\nabla_x$ is the divergence operator with respect to space, $\nabla_u$ is the divergence operator with respect to velocity, $\tau_s$ is the relaxation time for the particle phase, $f_{s}$ is the distribution function of the particle phase, and $g_{s}$ is the associated equilibrium distribution, which can be written as,
\begin{gather*}
g_{s}=\epsilon_s\rho_s\left(\frac{\lambda_s}{\pi}\right)^{\frac{3}{2}}e^{-\lambda_s \left[(\textbf{u}-\textbf{U}_s)^2\right]},
\end{gather*}
where $\epsilon_s$ is the volume fraction of the particle phase, $\rho_s$ is the material density of the particle phase, $\lambda_s$ is related to the granular temperature $T_s$ by $\lambda_s = \frac{m_s}{2k_BT_s}$, $m_s=\rho_s \frac{4}{3}\pi\left(\frac{d_s}{2}\right)^3$ is the mass of one particle, $d_s$ is the diameter of the solid particle, and $\textbf{U}_s$ is the macroscopic velocity of the particle phase. The sum of kinetic and thermal energy of colliding particles may not be conserved due to the inelastic collisions between particles. Therefore, the collision term in Eq.\eqref{particle phase kinetic equ} should satisfy the following compatibility condition \cite{UGKS-gas-particle-liu2019unified},
\begin{equation}\label{particle phase compatibility condition}
\frac{1}{\tau_s} \int g_s \bm{\psi} \text{d}\Xi= \frac{1}{\tau_s} \int f_s \bm{\psi}' \text{d}\Xi,
\end{equation}
where $\bm{\psi}=\left(1,\textbf{u},\displaystyle \frac{1}{2}\textbf{u}^2\right)^T$ and $\bm{\psi}'=\left(1,\textbf{u},\displaystyle \frac{1}{2}\textbf{u}^2+\frac{r^2-1}{2}\left(\textbf{u}-\textbf{U}_s\right)^2\right)^T$. The lost energy due to inelastic collisions in 3D can be written as,
\begin{gather*}
Q_{loss} = \frac{\left(1-r^2\right)3p_s}{2},
\end{gather*}
where $r\in\left[0,1\right]$ is the restitution coefficient, determining the fraction of energy lost in an inelastic collision. While $r=1$ means no energy loss (elastic collision), $r=0$ refers to the total loss of the internal energy of the particle phase $\epsilon_s\rho_se_s =\frac{3}{2}p_s$ with $p_s=\frac{\epsilon_s\rho_s}{2\lambda_s}$. The particle acceleration $\textbf{a}$ is determined by the external force. In this paper, the drag force $\textbf{D}$, the buoyancy force $\textbf{F}_b$, and gravity $\textbf{G}$ are considered. $\textbf{D}$ and $\textbf{F}_b$ are inter-phase forces, standing for the forces applied on the solid particles by the gas flow. The general form of the drag force can be written as,
\begin{gather}\label{drag force model}
\textbf{D} = \frac{m_s}{\tau_{st}}\left(\textbf{U}_g-\textbf{u}\right),
\end{gather}
where $\textbf{U}_g$ is the macroscopic velocity of the gas phase, and $\tau_{st}$ is the particle internal response time. Many studies have been conducted on the drag force model to give an accurate prediction of the drag under different solid concentrations.
In this paper, the drag force model proposed by Gidaspow is employed to determine $\tau_{st}$ \cite{Gasparticle-book-gidaspow1994multiphase},
\begin{equation}\label{taust equation}
\tau_{st}= \left\{\begin{aligned}
&\frac{4}{3}\frac{\rho_s d_s}{\rho_g|\textbf{U}_g-\textbf{u}|C_d} \epsilon_{g}^{2.65}, & \epsilon_{g}>0.8, \\
&\frac{1}{150\frac{\epsilon_{s} \mu_g}{\epsilon_{g} \rho_{s} d_s^2} + 1.75\frac{\rho_g |\textbf{U}_g - \textbf{u}|}{\rho_{s} d_s}}, & \epsilon_{g} \le 0.8,
\end{aligned}\right.
\end{equation}
and it can be used for both dilute and dense flows. $C_d$ is the drag coefficient, which is obtained by,
\begin{equation}
C_d = \left\{\begin{aligned}
&\frac{24}{Re_s}\left(1+0.15 Re_s^{0.687}\right), & & Re_s \le 1000, \\
&0.44, & & Re_s > 1000,
\end{aligned} \right.
\end{equation}
where $d_s$ is the diameter of the solid particle, and $\mu_g$ is the dynamic viscosity of the gas phase. $Re_s = |\textbf{U}_g-\textbf{u}| d_s/\nu_g$ is the particle Reynolds number, and $\nu_g=\mu_g/\rho_g$ is the kinematic viscosity of the gas phase. Another inter-phase force considered is the buoyancy force, which can be modeled as,
\begin{gather}\label{buoyancy force model}
\textbf{F}_b = -\frac{m_s}{\rho_{s}} \nabla_x p_g,
\end{gather}
where $p_g$ is the pressure of the gas phase. Then, the acceleration term can be obtained,
\begin{gather*}\label{particle phase acceleration term}
\textbf{a}=\frac{\textbf{D} + \textbf{F}_b}{m_s} + \textbf{G}.
\end{gather*}
When the collisions between solid particles are elastic with $r=1$, in the continuum flow regime the hydrodynamic equations become the Euler equations, which can be obtained by the Chapman-Enskog asymptotic analysis,
\begin{align}\label{particle phase Euler equ}
&\frac{\partial \left(\epsilon_s\rho_s\right)}{\partial t} + \nabla_x \cdot \left(\epsilon_s\rho_s \textbf{U}_s\right) = 0,\nonumber \\
&\frac{\partial \left(\epsilon_s\rho_s \textbf{U}_s\right)}{\partial t} + \nabla_x \cdot \left(\epsilon_s\rho_s \textbf{U}_s \textbf{U}_s + p_s \mathbb{I} \right) = \frac{\epsilon_{s}\rho_{s}\left(\textbf{U}_g - \textbf{U}_s\right)}{\tau_{st}} - \epsilon_{s} \nabla_x p_g + \epsilon_{s}\rho_{s} \textbf{G} , \\
&\frac{\partial \left(\epsilon_s\rho_s E_s\right)}{\partial t} + \nabla_x \cdot \left(\left(\epsilon_s\rho_s E_s + p_s\right) \textbf{U}_s \right) = \frac{\epsilon_{s}\rho_{s}\textbf{U}_s \cdot \left(\textbf{U}_g - \textbf{U}_s\right)}{\tau_{st}} - \frac{3p_s}{\tau_{st}} - \epsilon_{s} \textbf{U}_s \cdot \nabla_x p_g + \epsilon_{s}\rho_{s} \textbf{U}_s \cdot \textbf{G}.\nonumber
\end{align}
Note that the heat conduction between the particle and gas phases is neglected in this paper. In summary, the evolution of the particle phase is governed by Eq.\eqref{particle phase kinetic equ}, and the hydrodynamic equations Eq.\eqref{particle phase Euler equ} are only the asymptotic solution of the solid-particle phase in the continuum flow limit.

\subsection{UGKWP method}
In this subsection, the UGKWP for the evolution of the particle phase is introduced.
Generally, the kinetic equation of the particle phase Eq.\eqref{particle phase kinetic equ} is split as,
\begin{align}
\label{particle phase kinetic equ without acce}
\mathcal{L}_{s1} &:~~ \frac{\partial f_{s}}{\partial t} + \nabla_x \cdot \left(\textbf{u}f_{s}\right) = \frac{g_{s}-f_{s}}{\tau_{s}}, \\
\label{particle phase kenetic equ only acce}
\mathcal{L}_{s2} &:~~ \frac{\partial f_{s}}{\partial t} + \nabla_u \cdot \left(\textbf{a}f_{s}\right) = 0,
\end{align}
and an operator-splitting approach is used to solve Eq.\eqref{particle phase kinetic equ}. Firstly, we focus on the $\mathcal{L}_{s1}$ part, the particle phase kinetic equation without the external force,
\begin{gather*}
\frac{\partial f_{s}}{\partial t} + \nabla_x \cdot \left(\textbf{u}f_{s}\right) = \frac{g_{s}-f_{s}}{\tau_{s}}.
\end{gather*}
For brevity, the subscript $s$ standing for the solid particle phase will be omitted in this subsection. The integration solution of the kinetic equation can be written as,
\begin{equation}\label{particle phase integration solution}
f(\textbf{x},t,\textbf{u})=\frac{1}{\tau}\int_0^t g(\textbf{x}',t',\textbf{u} )e^{-(t-t')/\tau}\text{d}t'\\
+e^{-t/\tau}f_0(\textbf{x}-\textbf{u}t, \textbf{u}),
\end{equation}
where $\textbf{x}'=\textbf{x}+\textbf{u}(t'-t)$ is the trajectory of particles, $f_0$ is the initial distribution function at time $t=0$, and $g$ is the corresponding equilibrium state. In UGKWP, both the macroscopic conservative variables and the microscopic distribution function need to be updated. Generally, in the finite volume framework, the cell-averaged macroscopic variables $\textbf{W}_i$ of cell $i$ can be updated by the conservation law,
\begin{gather}
\textbf{W}_i^{n+1} = \textbf{W}_i^n - \frac{1}{\Omega_i} \sum_{S_{ij}\in \partial \Omega_i}\textbf{F}_{ij}S_{ij} + \Delta t \textbf{S}_{i},
\end{gather}
where $\textbf{W}_i=\left(\rho_i, \rho_i \textbf{U}_i, \rho_i E_i\right)$ is the cell-averaged macroscopic variables,
\begin{gather*}
\textbf{W}_i = \frac{1}{\Omega_{i}}\int_{\Omega_{i}} \textbf{W}\left(\textbf{x}\right) \text{d}\Omega,
\end{gather*}
$\Omega_i$ is the volume of cell $i$, $\partial\Omega_i$ denotes the set of cell interfaces of cell $i$, $S_{ij}$ is the area of the $j$-th interface of cell $i$, and $\textbf{F}_{ij}$ denotes the time-integrated macroscopic fluxes across the interface $S_{ij}$, which can be written as
\begin{align}\label{particle phase Flux equation}
\textbf{F}_{ij}=\int_{0}^{\Delta t} \int \textbf{u}\cdot\textbf{n}_{ij} f_{ij}(\textbf{x},t,\textbf{u}) \bm{\psi} \text{d}\textbf{u}\text{d}t,
\end{align}
where $\textbf{n}_{ij}$ denotes the normal vector of the interface $S_{ij}$, $f_{ij}\left(t\right)$ is the time-dependent distribution function on the interface $S_{ij}$, and $\bm{\psi}=(1,\textbf{u},\displaystyle \frac{1}{2}\textbf{u}^2)^T$. $\textbf{S}_{i}$ is the source term due to inelastic collisions inside each control volume; note that the internal energy of the solid particles has not been taken into account in the above equation.
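For concreteness, the conservative update described above can be sketched as follows. The Python fragment below uses illustrative array names only; the interface fluxes $\textbf{F}_{ij}$ are assumed to be already integrated over the time step as in Eq.~\eqref{particle phase Flux equation}, and the source term only contains the inelastic-collision energy loss $Q_{loss}$ defined before.
\begin{verbatim}
import numpy as np

def update_particle_cell_averages(W, F, S, vol, dt, p_s, tau_s, r):
    # W  : (n_cells, 5) cell-averaged [rho, rho*Ux, rho*Uy, rho*Uz, rho*E]
    # F  : (n_cells, n_faces, 5) time-integrated outward fluxes per interface
    # S  : (n_cells, n_faces) interface areas; vol : (n_cells,) cell volumes
    # p_s, tau_s : (n_cells,) granular pressure and particle collision time
    W_new = W.copy()
    for i in range(W.shape[0]):
        # surface integral of the interface fluxes
        W_new[i] -= (F[i] * S[i][:, None]).sum(axis=0) / vol[i]
        # source term from inelastic collisions: only the energy is changed
        Q_loss = (1.0 - r**2) * 3.0 * p_s[i] / 2.0
        W_new[i, 4] -= dt * Q_loss / tau_s[i]
    return W_new
\end{verbatim}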
Substituting the time-dependent distribution function Eq.\eqref{particle phase integration solution} into Eq.\eqref{particle phase Flux equation}, the fluxes can be obtained, \begin{align*} \textbf{F}_{ij} &=\int_{0}^{\Delta t} \int \textbf{u}\cdot\textbf{n}_{ij} f_{ij}(\textbf{x},t,\textbf{u}) \bm{\psi} \text{d}\textbf{u}\text{d}t\\ &=\int_{0}^{\Delta t} \int\textbf{u}\cdot\textbf{n}_{ij} \left[ \frac{1}{\tau}\int_0^t g(\textbf{x}',t',\textbf{u})e^{-(t-t')/\tau}\text{d}t' \right] \bm{\psi} \text{d}\textbf{u}\text{d}t\\ &+\int_{0}^{\Delta t} \int\textbf{u}\cdot\textbf{n}_{ij} \left[ e^{-t/\tau}f_0(\textbf{x}-\textbf{u}t,\textbf{u})\right] \bm{\psi} \text{d}\textbf{u}\text{d}t\\ &\overset{def}{=}\textbf{F}^{eq}_{ij} + \textbf{F}^{fr}_{ij}. \end{align*} The procedure of obtaining the local equilibrium state $g_0$ at the cell interface as well as the construction of $g\left(t\right)$ is the same as that in GKS. For a second-order accuracy, the equilibrium state $g$ around the cell interface is written as, \begin{gather*} g\left(\textbf{x}',t',\textbf{u}\right)=g_0\left(\textbf{x},\textbf{u}\right) \left(1 + \overline{\textbf{a}} \cdot \textbf{u}\left(t'-t\right) + \bar{A}t'\right), \end{gather*} where $\overline{\textbf{a}}=\left[\overline{a_1}, \overline{a_2}, \overline{a_3}\right]^T$, $\overline{a_i}=\frac{\partial g}{\partial x_i}/g$, $i=1,2,3$, $\overline{A}=\frac{\partial g}{\partial t}/g$, and $g_0$ is the local equilibrium on the interface. Specifically, the coefficients of spatial derivatives $\overline{a_i}$ can be obtained from the corresponding derivatives of the macroscopic variables, \begin{equation*} \left\langle \overline{a_i}\right\rangle=\partial \textbf{W}_0/\partial x_i, \end{equation*} where $i=1,2,3$, and $\left\langle...\right\rangle$ means the moments of the Maxwellian distribution functions, \begin{align*} \left\langle...\right\rangle=\int \bm{\psi}\left(...\right)g\text{d}\textbf{u}. \end{align*} The coefficients of temporal derivative $\overline{A}$ can be determined by the compatibility condition, \begin{equation*} \left\langle \overline{\textbf{a}} \cdot \textbf{u}+\overline{A} \right\rangle = \left[\begin{array}{c} 0\\ \textbf{0}\\ -\frac{Q_{loss}}{\tau_s} \end{array}\right]. \end{equation*} where $Q_{loss}=\frac{\left(1-r^2\right)3p_s}{2}$, caused by the particle-particle inelastic collision. Now, all the coefficients in the equilibrium state $g\left(\textbf{x}',t',\textbf{u}\right)$ have been determined, and its integration becomes, \begin{gather} f^{eq}(\textbf{x},t,\textbf{u}) \overset{def}{=} \frac{1}{\tau}\int_0^t g(\textbf{x}',t',\textbf{u})e^{-(t-t')/\tau}\text{d}t' \nonumber\\ = c_1 g_0\left(\textbf{x},\textbf{u}\right) + c_2 \overline{\textbf{a}} \cdot \textbf{u} g_0\left(\textbf{x},\textbf{u}\right) + c_3 A g_0\left(\textbf{x},\textbf{u}\right), \end{gather} with coefficients, \begin{align*} c_1 &= 1-e^{-t/\tau}, \\ c_2 &= \left(t+\tau\right)e^{-t/\tau}-\tau, \\ c_3 &= t-\tau+\tau e^{-t/\tau}, \end{align*} and thereby the integrated flux over a time step for equilibrium state can be obtained, \begin{gather*} \textbf{F}^{eq}_{ij} =\int_{0}^{\Delta t} \int \textbf{u}\cdot\textbf{n}_{ij} f_{ij}^{eq}(\textbf{x},t,\textbf{u})\bm{\psi}\text{d}\textbf{u}\text{d}t. \end{gather*} Besides, the flux contribution from the particle free transport $f_0$ is calculated by tracking the particles sampled from $f_0$. 
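The time-integration coefficients above depend only on the local collision time $\tau$ and the time $t$, and can be evaluated by a small helper function, sketched here in Python (the function name is illustrative):
\begin{verbatim}
import math

def equilibrium_flux_coefficients(t, tau):
    # coefficients c1, c2, c3 of the time-dependent equilibrium part f^eq
    e = math.exp(-t / tau)
    c1 = 1.0 - e
    c2 = (t + tau) * e - tau
    c3 = t - tau + tau * e
    return c1, c2, c3
\end{verbatim}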
Therefore, the updating of the cell-averaged macroscopic variables can be written as,
\begin{gather}\label{particle phase equ_updateW_ugkp}
\textbf{W}_i^{n+1} = \textbf{W}_i^n - \frac{1}{\Omega_i} \sum_{S_{ij}\in \partial \Omega_i}\textbf{F}^{eq}_{ij}S_{ij} + \frac{\textbf{w}_{i}^{fr}}{\Omega_{i}} + \Delta t \textbf{S}_{i},
\end{gather}
where $\textbf{w}^{fr}_i$ is the net free streaming flow of cell $i$, standing for the flux contribution of the free streaming particles, and the term $\textbf{S}_{i} = \left[0,\textbf{0},-\frac{Q_{loss}}{\tau_s}\right]^T$ is the source term due to the inelastic collisions of the solid particle phase. The net free streaming flow $\textbf{w}^{fr}_i$ is determined in the following. The evolution of particles should also satisfy the integral solution of the kinetic equation, which can be written as,
\begin{equation}
f(\textbf{x},t,\textbf{u}) =\left(1-e^{-t/\tau}\right)g^{+}(\textbf{x},t,\textbf{u}) +e^{-t/\tau}f_0(\textbf{x}-\textbf{u}t,\textbf{u}),
\end{equation}
where $g^{+}$ is named the hydrodynamic distribution function with an analytical formulation. The particles of the initial distribution function $f_0$ have a probability $e^{-t/\tau}$ to transport freely and a probability $1-e^{-t/\tau}$ to collide with other particles. The post-collision particles satisfy the distribution $g^+\left(\textbf{x},\textbf{u},t\right)$. The free transport time before the first collision with other particles is denoted as $t_c$. The cumulative distribution function of $t_c$ is,
\begin{gather}\label{particle phase wp cumulative distribution}
F\left(t_c < t\right) = 1 - e^{-t/ \tau},
\end{gather}
and therefore $t_c$ can be sampled as $t_c=-\tau \text{ln}\left(\eta\right)$, where $\eta$ is a random number generated from a uniform distribution $U\left(0,1\right)$. Then, the free streaming time $t_f$ for particle $k$ is determined by,
\begin{gather}
t_f = \min\left[-\tau\text{ln}\left(\eta\right), \Delta t\right],
\end{gather}
where $\Delta t$ is the time step. Therefore, within one time step, all particles can be divided into two groups, the collisionless particles and the collisional particles, which are distinguished by the relation between the time step $\Delta t$ and the free streaming time $t_f$. Specifically, if $t_f=\Delta t$ for one particle, it is a collisionless particle, and the trajectory of this particle is fully tracked over the whole time step. On the contrary, if $t_f<\Delta t$ for one particle, it is a collisional particle, and its trajectory will be tracked only until $t_f$. The collisional particle is eliminated at $t_f$ in the simulation and the associated mass, momentum and energy carried by this particle are merged into the macroscopic quantities of the relevant cell by counting its contribution through the flux function. More specifically, the particle trajectory in the free streaming process within time $t_f$ is tracked by,
\begin{gather}
\textbf{x} = \textbf{x}^n + \textbf{u}^n t_f .
\end{gather}
The term $\textbf{w}_{i}^{fr}$ can be calculated by counting the particles passing through the interfaces of cell $i$,
\begin{gather}
\textbf{w}_{i}^{fr} = \sum_{k\in P\left(\partial \Omega_{i}^{+}\right)} \bm{\phi}_k - \sum_{k\in P\left(\partial \Omega_{i}^{-}\right)} \bm{\phi}_k,
\end{gather}
where $P\left(\partial \Omega_{i}^{+}\right)$ is the set of particles moving into cell $i$ during one time step, $P\left(\partial \Omega_{i}^{-}\right)$ is the set of particles moving out of cell $i$ during one time step, $k$ is the particle index in one specific set, and $\bm{\phi}_k=\left[m_{k}, m_{k}\textbf{u}_k, \frac{1}{2}m_{k}(\textbf{u}^2_k)\right]^T$ is the mass, momentum and energy carried by particle $k$. Therefore, $\textbf{w}_{i}^{fr}/\Omega_{i}$ gives the net conservative quantities caused by the free streaming of the tracked particles. Now, all the terms in Eq.\eqref{particle phase equ_updateW_ugkp} have been determined and the macroscopic variables $\textbf{W}_i$ can be updated. The trajectories of all particles have been tracked during the time interval $\left(0, t_f\right)$. The collisionless particles with $t_f=\Delta t$ still survive at the end of the time step, while the collisional particles with $t_f<\Delta t$ are deleted after their first collision and are supposed to go to the equilibrium state in that cell. Therefore, the macroscopic variables of the collisional particles in cell $i$ at the end of each time step can be directly obtained from the conservation law,
\begin{gather}
\textbf{W}^h_i = \textbf{W}^{n+1}_i - \textbf{W}^p_i,
\end{gather}
where $\textbf{W}^{n+1}_i$ is the updated conservative variables in Eq.\eqref{particle phase equ_updateW_ugkp} and $\textbf{W}^p_i$ are the mass, momentum, and energy of the remaining collisionless particles in the cell at the end of the time step. The macroscopic variables $\textbf{W}^h_i$ account for all eliminated collisional particles, which are assumed to reach the equilibrium state, and these particles can be re-sampled from $\textbf{W}^h_i$ based on the corresponding Maxwellian distribution at the beginning of the next time step. Now the updates of both the macroscopic variables and the microscopic particles have been presented. The above method is the so-called unified gas-kinetic particle (UGKP) method. The above UGKP can be further developed into the UGKWP method. In the UGKP method, all particles are divided into collisionless and collisional particles in each time step. The collisional particles are deleted after the first collision and re-sampled from $\textbf{W}^h_i$ at the beginning of the next time step. However, only the collisionless part of the re-sampled particles can survive in the next time step, and all collisional ones will be deleted again. Actually, the transport fluxes from these collisional particles can be evaluated analytically without using particles. According to the cumulative distribution Eq.\eqref{particle phase wp cumulative distribution}, the proportion of the collisionless particles is $e^{-\Delta t/\tau}$, and therefore in UGKWP only the collisionless particles from the hydrodynamic variables $\textbf{W}^{h}_i$ in cell $i$ will be re-sampled, with the total mass, momentum, and energy,
\begin{gather}
\textbf{W}^{hp}_i = e^{-\Delta t/\tau} \textbf{W}^{h}_i.
\end{gather}
Then, the free transport time of all the re-sampled particles will be $t_f=\Delta t$ in UGKWP.
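As an illustration of the particle sampling and re-sampling steps described above, a minimal Python sketch is given below; the particle container and the Maxwellian sampler are hypothetical and only indicate the workflow.
\begin{verbatim}
import math, random

def sample_free_streaming_time(tau, dt):
    # t_c = -tau*ln(eta); t_f = min(t_c, dt), and t_f = dt marks a
    # collisionless particle while t_f < dt marks a collisional one
    eta = random.random()
    return min(-tau * math.log(eta), dt)

def resample_wave_particles(W_h, tau, dt, sample_from_maxwellian):
    # In UGKWP only the collisionless fraction exp(-dt/tau) of the
    # hydrodynamic variables W_h is re-sampled as particles; all of the
    # re-sampled particles then carry t_f = dt in the next time step.
    W_hp = math.exp(-dt / tau) * W_h
    return sample_from_maxwellian(W_hp, t_f=dt)
\end{verbatim}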
The fluxes $\textbf{F}^{fr,wave}$ from these un-sampled collisional particle of $ (1- e^{-\Delta t/\tau} )\textbf{W}^{h}_i$ can be evaluated analytically \cite{WP-first-liu2020unified, WP-second-zhu-unstructured-mesh-zhu2019unified}. Now, same as UGKP, the net flux $\textbf{w}_{i}^{fr,p}$ by the free streaming of the existing particles in UGKWP can be calculated by \begin{gather} \textbf{w}_{i}^{fr,p} = \sum_{k\in P\left(\partial \Omega_{i}^{+}\right)} \bm{\phi}_k - \sum_{k\in P\left(\partial \Omega_{i}^{-}\right)} \bm{\phi}_k. \end{gather} Then, the macroscopic flow variables in UGKWP are updated by \begin{gather}\label{particle phase wp final update W} \textbf{W}_i^{n+1} = \textbf{W}_i^n - \frac{1}{\Omega_i} \sum_{S_{ij}\in \partial \Omega_i}\textbf{F}^{eq}_{ij}S_{ij} - \frac{1}{\Omega_i} \sum_{S_{ij}\in \partial \Omega_i}\textbf{F}^{fr,wave}_{ij}S_{ij} + \frac{\textbf{w}_{i}^{fr,p}}{\Omega_{i}} + \Delta t \textbf{S}_{i}. \end{gather} The second part $\mathcal{L}_{s2}$ in Eq.\eqref{particle phase kenetic equ only acce} accounts for the external acceleration, \begin{gather*} \frac{\partial f_{s}}{\partial t} + \nabla_u \cdot \left(\textbf{a}f_{s}\right) = 0, \end{gather*} where the velocity-dependent acceleration term caused by inter-phase forces and solid particle's gravity has the following form, \begin{gather*} \textbf{a} = \frac{\textbf{U}_g - \textbf{u}}{\tau_{st}} - \frac{1}{\rho_{s}} \nabla_x p_g + \textbf{G}. \end{gather*} Taking moment $\bm{\psi}$ to Eq.\eqref{particle phase kenetic equ only acce}, \begin{gather*} \int \bm{\psi} \left( \frac{\partial f_{s}}{\partial t} + \textbf{a} \cdot \nabla_u f_{s} + f_{s}\nabla_u \cdot \textbf{a} \right) \text{d}\textbf{u} = 0, \end{gather*} and in the Euler regime with $f_s = g_s + \mathcal{O}\left(\tau_{s}\right)$, we can obtain, \begin{gather*} \frac{\partial \textbf{W}_s}{\partial t} + \textbf{Q}_s= 0, \end{gather*} where \begin{gather*} \textbf{W}_s=\left[\begin{array}{c} \epsilon_s\rho_s\\ \epsilon_s\rho_s \textbf{U}_s\\ \epsilon_s\rho_s E_s \end{array} \right], ~~ \textbf{Q}_s=\left[\begin{array}{c} 0 \\ \frac{\epsilon_s\rho_s\left(\textbf{U}_s-\textbf{U}_g\right)}{\tau_{st}} +\epsilon_s \nabla_x p_g - \epsilon_{s}\rho_{s} \textbf{G} \\ \frac{\epsilon_s\rho_{s}\textbf{U}_s \cdot \left(\textbf{U}_s-\textbf{U}_g\right)}{\tau_{st}} +3\frac{p_s}{\tau_{st}} + \epsilon_s\textbf{U}_s \cdot \nabla_x p_g - \epsilon_{s}\rho_{s} \textbf{U}_s \cdot \textbf{G} \end{array}\right]. \end{gather*} When the first-order forward Euler method is employed for time marching, the cell-averaged macroscopic variable can be updated by, \begin{gather}\label{update macroscopic variable of acceleration wave} \textbf{W}^{n+1}_s = \textbf{W}_s - \textbf{Q}_s \Delta t, \end{gather} and the modifications on velocity and location of the remaining free transport particles can be written as, \begin{align} \textbf{u}^{n+1} &= \textbf{u} + \textbf{a}t_f,\\ \textbf{x}^{n+1} &= \textbf{x} + \frac{\textbf{a}}{2} t_f^2.\label{displacement by acceleartion term} \end{align} Now the update of the particle phase in one time step has been finished. In the following, specific variables determination for the solid-particle phase will be presented. \subsection{Particle phase Knudsen number} The particle phase regime is determined by its Knudsen number $Kn$, defined by the ratio of collision time of solid particles $\tau_{s}$ to the characteristic time of macroscopic flow $t_{ref}$, \begin{gather}\label{particle phase Kn_s} Kn = \frac{\tau_s}{t_{ref}}. 
\end{gather}
Specifically, $\tau_s$ is the time interval between collisions of solid particles, also called the particle collision time, and $t_{ref}$ is the characteristic time, defined as the ratio of the flow characteristic length to the flow characteristic velocity, $t_{ref}=L_{ref}/U_{ref}$. According to the previous studies \cite{Gasparticle-MOM-Fox-passalacqua2010fully, Gasparticle-momentmethod-Fox2013computational}, in this paper $\tau_s$ is taken as,
\begin{gather}\label{particle phase tau_s}
\tau_s = \frac{\sqrt{\pi}d_s}{12\epsilon_sg_0}\sqrt{2\lambda_s},
\end{gather}
where $d_s$ is the diameter of the solid particle and $\epsilon_s$ is the volume fraction of the solid phase. $g_0$ is the radial distribution function with the following form,
\begin{gather}
g_0 = \frac{2-c}{2\left(1-c\right)^3},
\end{gather}
where $c=\epsilon_s/\epsilon_{s,max}$ is the ratio of the volume fraction $\epsilon_{s}$ to its allowed maximum value $\epsilon_{s,max}$. Generally, for dilute particulate flow, $\tau_{s}$ is usually much larger than $t_{ref}$, leading to a large $Kn$, and the particle free transport plays a more important role in the evolution. However, for dense particulate flow, the collisions between solid particles occur with high frequency, which results in a small $\tau_{s}$ and thereby a small $Kn$, and the inter-particle collision plays the key role in the evolution.

\subsection{Frictional pressure}
When the solid phase is in high concentration, the frictional pressure $p_{fric}$ has to be considered. $p_{fric}$ accounts for the enduring inter-particle contacts and frictions, which play important roles near the packing condition. Some expressions for $p_{fric}$ have been proposed in previous studies \cite{Gasparticle-KTGF-pressure-friction-johnson1987frictional, Gasparticle-pressure-friction-srivastava2003analysis, Gasparticle-pressure-friction-schneiderbauer2012comprehensive}. In this paper, the correlation proposed by Johnson and Jackson is employed \cite{Gasparticle-KTGF-pressure-friction-johnson1987frictional}, which can be written as,
\begin{align}
p_{fric} = \left\{\begin{aligned}
&~~~~~~~~ 0 & , & \epsilon_{s} \le \epsilon_{s,crit}, \\
&0.1 \epsilon_{s} \frac{\left(\epsilon_{s} - \epsilon_{s,crit}\right)^2}{\left(\epsilon_{s,max} - \epsilon_{s}\right)^5}& , & \epsilon_{s} > \epsilon_{s,crit},
\end{aligned} \right.
\end{align}
where $\epsilon_{s,crit}$ is the critical volume fraction of the particle flow, and it takes the value $0.5$ in this paper unless otherwise specified. Therefore, the momentum and energy equations in Eq.\eqref{particle phase Euler equ} are rewritten as,
\begin{gather}\label{particle phase momentum equ equ with p_fr}
\frac{\partial \left(\epsilon_s\rho_s \textbf{U}_s\right)}{\partial t} + \nabla_x \cdot \left(\epsilon_s\rho_s \textbf{U}_s \textbf{U}_s + p_s \mathbb{I} + p_{fric} \mathbb{I}\right) = \frac{\epsilon_{s}\rho_{s}\left(\textbf{U}_g - \textbf{U}_s\right)}{\tau_{st}} - \epsilon_{s} \nabla_x p_g + \epsilon_{s}\rho_{s} \textbf{G}.
\end{gather}
\begin{gather}\label{particle phase energy equ equ with p_fr}
\frac{\partial \left(\epsilon_s\rho_s E_s\right)}{\partial t} + \nabla_x \cdot \left(\left(\epsilon_s\rho_s E_s + p_s + p_{fric}\right) \textbf{U}_s \right) = \frac{\epsilon_{s}\rho_{s}\textbf{U}_s \cdot \left(\textbf{U}_g - \textbf{U}_s\right)}{\tau_{st}} - \frac{3p_s}{\tau_{st}} - \epsilon_{s} \textbf{U}_s \cdot \nabla_x p_g \nonumber\\
~~~~~~~~~~~~~~~~~~~~~~~~~~ + \epsilon_{s}\rho_{s} \textbf{U}_s \cdot \textbf{G}.
\end{gather}
The terms relevant to the frictional pressure, $\nabla_x \cdot \left(p_{fric} \mathbb{I}\right)$ and $\nabla_x \cdot \left(p_{fric} \textbf{U}_s\right)$, are solved as source terms in this paper.

\subsection{Flux limiting model near the packing condition}
The introduction of the frictional pressure $p_{fric}$ can prevent the over-packing of solid particles, since it increases dramatically when the particle phase approaches its limiting packing state \cite{Gasparticle-KTGF-pressure-friction-johnson1987frictional, Gasparticle-TFM-compressible-houim2016multiphase}. Besides, a flux limiting model is proposed in this paper to effectively prevent the solid volume fraction $\epsilon_{s}$ from exceeding its maximum value $\epsilon_{s,max}$. Taking a one-dimensional case as an example, in UGKWP the numerical flux at the interface $i+1/2$ between cell $i$ and cell $i+1$ can be generally written as,
\begin{equation}\label{particle phase flux left rigth form original}
\textbf{F}_{i+1/2} = \int_{0}^{\Delta t} \int_{u>0} u f_{i+1/2}(\textbf{x},t) \bm{\psi} \text{d}u\text{d}t + \int_{0}^{\Delta t} \int_{u<0} u f_{i+1/2}(\textbf{x},t) \bm{\psi} \text{d}u\text{d}t\\
\overset{def}{=} \textbf{F}_{i+1/2}^{l} + \textbf{F}_{i+1/2}^{r},
\end{equation}
which is modified as,
\begin{equation}\label{particle phase flux left right form flux limiting}
\textbf{F}_{i+1/2} = \textbf{C}\left[\alpha\left(\epsilon_{s,i+1}\right)\right] \cdot \textbf{F}_{i+1/2}^{l} + \textbf{C}\left[\alpha\left(\epsilon_{s,i}\right)\right] \cdot \textbf{F}_{i+1/2}^{r},
\end{equation}
with
\begin{equation*}
\textbf{C}\left[\alpha\right] =
\begin{bmatrix}
1-\alpha & 0 & 0 \\
0 & 1+\alpha & 0 \\
0 & 0 & 1-\alpha
\end{bmatrix},
\end{equation*}
where $\alpha$ is the limiting factor, which depends on the cell-averaged solid volume fraction $\epsilon_{s}$ as,
\begin{equation}\label{packing limit factor}
\alpha\left(\epsilon_{s}\right) = \left\{\begin{aligned}
&~~~~~~~~ 0 & , & \epsilon_{s} \le k\epsilon_{s,max}, \\
&\left(\frac{\epsilon_{s} - k\epsilon_{s,max}}{\epsilon_{s,max} - k \epsilon_{s,max}}\right)^2& , & \epsilon_{s} > k\epsilon_{s,max}.
\end{aligned} \right.
\end{equation}
Here $k$ is a threshold for the flux limiting model, and it takes the value $0.95$ in this paper unless otherwise specified. As shown in Eq.\eqref{packing limit factor}, when $\epsilon_{s}$ is smaller than $k\epsilon_{s,max}$, the limiting factor $\alpha$ is $0$ and there is no limiting; when $\epsilon_{s}$ is larger than $k\epsilon_{s,max}$, $\alpha$ increases and the limiting model takes effect. Particularly, when $\epsilon_{s}$ approaches the packing limit $\epsilon_{s,max}$, $\alpha$ takes its maximum value $1$. As a result, solid particles cannot flow into the ``saturated'' cell, and the solid volume fraction $\epsilon_{s}$ will not increase anymore. In addition, Eq.\eqref{particle phase flux left right form flux limiting} indicates that, when this limiting model is activated, only the ``inflow'' across the interface is affected, while the ``outflow'' is not limited, which is consistent with the physical picture.
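As an illustration, the frictional pressure and the limiting factor defined above can be evaluated as in the following Python sketch, where the default values of $\epsilon_{s,crit}$ and $k$ follow the choices used in this paper and $\epsilon_{s,max}$ is case dependent:
\begin{verbatim}
def frictional_pressure(eps_s, eps_max, eps_crit=0.5):
    # Johnson-Jackson type correlation used in this paper
    if eps_s <= eps_crit:
        return 0.0
    return 0.1 * eps_s * (eps_s - eps_crit)**2 / (eps_max - eps_s)**5

def limiting_factor(eps_s, eps_max, k=0.95):
    # limiting factor alpha(eps_s); the interface flux is then scaled by
    # C[alpha] = diag(1-alpha, 1+alpha, 1-alpha) in the flux limiting model
    if eps_s <= k * eps_max:
        return 0.0
    return ((eps_s - k * eps_max) / (eps_max - k * eps_max))**2
\end{verbatim}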
\section{GKS for gas phase} \subsection{Governing equation for gas phase} The gas phase is regarded as continuum flow and the governing equations are the Navier-Stokes equations with source terms reflecting the inter-phase interaction \cite{Gasparticle-book-gidaspow1994multiphase, Gasparticle-book-ishii2010thermo}, \begin{align}\label{gas phase macroscopic equ} &\frac{\partial \left(\widetilde{\rho_g}\right)}{\partial t} + \nabla_x \cdot \left(\widetilde{\rho_g} \textbf{U}_g\right)= 0,\nonumber \\ &\frac{\partial \left(\widetilde{\rho_g} \textbf{U}_g\right)}{\partial t} + \nabla_x \cdot \left(\widetilde{\rho_g} \textbf{U}_g \textbf{U}_g + \widetilde{p_g}\mathbb{I}\right) - \epsilon_{g} \nabla_x \cdot \left(\mu_g \bm{\sigma}\right) = p_g \nabla_x \epsilon_{g} -\frac{\epsilon_{s}\rho_{s}\left(\textbf{U}_g - \textbf{U}_s\right)}{\tau_{st}} + \widetilde{\rho_g} \textbf{G} , \\ &\frac{\partial \left(\widetilde{\rho_g} E_g\right)}{\partial t} + \nabla_x \cdot \left(\left(\widetilde{\rho_g} E_g + \widetilde{p_g}\right) \textbf{U}_g \right) - \epsilon_{g} \nabla_x \cdot \left(\mu_g \bm{\sigma}\cdot\textbf{U}_g - \kappa \nabla_x T_g \right) = - p_{g} \frac{\partial \epsilon_{g}}{\partial t} \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -\frac{\epsilon_{s}\rho_{s}\textbf{U}_s \cdot \left(\textbf{U}_g - \textbf{U}_s\right)}{\tau_{st}} + \frac{3p_s}{\tau_{st}} + \widetilde{\rho_g} \textbf{U}_g \cdot \textbf{G}, \nonumber \end{align} where $\widetilde{\rho_g}=\epsilon_{g}\rho_g$ is the apparent density of gas phase, $p_g=\rho_gRT_g$ is the pressure of gas phase and $\widetilde{p_g}=\widetilde{\rho_g}RT_g$, the strain rate tensor $\bm{\sigma}$ is \begin{gather*} \bm{\sigma} = \nabla_x\textbf{U}_g + \left(\nabla_x\textbf{U}_g\right)^T - \frac{2}{3} \nabla_x \cdot \textbf{U}_g \mathbb{I}, \end{gather*} and \begin{gather*} \mu_g = \tau_{g} p_g, ~~~~ \kappa = \frac{5}{2} R \tau_{g} p_g. \end{gather*} In particular, at the right hand side in Eq.\eqref{gas phase macroscopic equ}, the term $p_{g} \nabla_x \epsilon_{g}$ is called ``nozzle" term, and the associated work term $- p_{g} \frac{\partial \epsilon_{g}}{\partial t}$ is called $pDV$ work term, since it is similar to the $pDV$ term in the quasi-one-dimensional gas nozzle flow equations \cite{Gasparticle-TFM-compressible-houim2016multiphase}. Unphysical pressure fluctuations might occurs if the ``nozzle" term and $pDV$ term are not solved correctly. 
According to \cite{Toro2013book}, Eq.\eqref{gas phase macroscopic equ} can be written as the following form, \begin{align}\label{gas phase macroscopic equ final} &\frac{\partial \left(\rho_g\right)}{\partial t} + \nabla_x \cdot \left(\rho_g \textbf{U}_g\right)= C_{\epsilon_g}\rho_g,\nonumber \\ &\frac{\partial \left(\rho_g \textbf{U}_g\right)}{\partial t} + \nabla_x \cdot \left(\rho_g \textbf{U}_g \textbf{U}_g + p_g\mathbb{I} - \mu_g \bm{\sigma}\right) = C_{\epsilon_g} \rho_g \textbf{U}_g -\frac{\epsilon_{s}\rho_{s}\left(\textbf{U}_g - \textbf{U}_s\right)}{\epsilon_g \tau_{st}} + \rho_g\textbf{G} , \\ &\frac{\partial \left(\rho_g E_g\right)}{\partial t} + \nabla_x \cdot \left(\left(\rho_g E_g + p_g\right) \textbf{U}_g - \mu_g \bm{\sigma}\cdot\textbf{U}_g + \kappa \nabla_x T_g \right) = C_{\epsilon_g} \left(\rho_g E_g + p_g\right) \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -\frac{\epsilon_{s}\rho_{s}\textbf{U}_s \cdot \left(\textbf{U}_g - \textbf{U}_s\right)}{\epsilon_g \tau_{st}} + \frac{3p_s}{\epsilon_g \tau_{st}} + \rho_g \textbf{U}_g \cdot \textbf{G}, \nonumber \end{align} where, $C_{\epsilon_g} = -\frac{1}{\epsilon_{g}}\frac{\text{d}\epsilon_{g}}{\text{d}t}$ with $\frac{\text{d}\epsilon_{g}}{\text{d}t}=\frac{\partial \epsilon_{g}}{\partial t}+\textbf{U}_g \cdot \nabla\epsilon_{g}$, and how to solve $C_{\epsilon_{g}}$ in this paper will be introduced later. \subsection{GKS for gas evolution} This subsection introduces the evolution of gas phase in gas-particle two-phase system. The gas flow is governed by the Navier-Stokes equations with the inter-phase interaction, and the corresponding GKS is a limiting scheme of UGKWP in the continuum flow regime. In general, the evolution of gas phase Eq.\eqref{gas phase macroscopic equ final} can be split into two parts, \begin{align} \mathcal{L}_{g1}&:~~ \left\{ \begin{array}{lr} \frac{\partial \left(\rho_g\right)}{\partial t} + \nabla_x \cdot \left(\rho_g \textbf{U}_g\right)= 0, & \vspace{1ex}\\ \frac{\partial \left(\rho_g \textbf{U}_g\right)}{\partial t} + \nabla_x \cdot \left(\rho_g \textbf{U}_g \textbf{U}_g + p_g\mathbb{I} - \mu_g \bm{\sigma}\right) = 0, & \vspace{1ex}\\ \frac{\partial \left(\rho_g E_g\right)}{\partial t} + \nabla_x \cdot \left(\left(\rho_g E_g + p_g\right) \textbf{U}_g - \mu_g \bm{\sigma}\cdot\textbf{U}_g + \kappa \nabla_x T_g \right) = 0, & \end{array} \right. \\ \nonumber\\ \mathcal{L}_{g2}&:~~ \left\{ \begin{array}{lr} \frac{\partial \left(\rho_g\right)}{\partial t} = C_{\epsilon_g}\rho_g, & \vspace{1ex}\\ \frac{\partial \left(\rho_g \textbf{U}_g\right)}{\partial t} = C_{\epsilon_g} \rho_g \textbf{U}_g -\frac{\epsilon_{s}\rho_{s}\left(\textbf{U}_g - \textbf{U}_s\right)}{\epsilon_g \tau_{st}} + \rho_g\textbf{G}, & \vspace{1ex}\\ \frac{\partial \left(\rho_g E_g\right)}{\partial t} = C_{\epsilon_g} \left(\rho_g E_g + p_g\right) -\frac{\epsilon_{s}\rho_{s}\textbf{U}_s \cdot \left(\textbf{U}_g - \textbf{U}_s\right)}{\epsilon_g \tau_{st}} + \frac{3p_s}{\epsilon_g \tau_{st}} + \rho_g \textbf{U}_g \cdot \textbf{G}. & \end{array} \right. \end{align} The GKS is constructed to solve $\mathcal{L}_{g1}$ and $\mathcal{L}_{g2}$ separately. 
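Within one time step, the two parts are solved sequentially: $\mathcal{L}_{g1}$ by the GKS fluxes described below and $\mathcal{L}_{g2}$ by a cell-local source update. A minimal Python sketch of the $\mathcal{L}_{g2}$ update with the first-order forward Euler method is given here; the variable names are illustrative and all quantities are cell-averaged values.
\begin{verbatim}
import numpy as np

def gas_interaction_source_update(rho_g, U_g, E_g, p_g, C_eps_g, eps_g,
                                  eps_s, rho_s, U_s, p_s, tau_st, G, dt):
    # L_g2 part: nozzle/pDV terms via C_eps_g, drag, granular-temperature
    # exchange and gravity, treated as cell-local sources (forward Euler)
    drag = eps_s * rho_s * (U_g - U_s) / (eps_g * tau_st)
    d_rho = C_eps_g * rho_g
    d_mom = C_eps_g * rho_g * U_g - drag + rho_g * G
    d_E = (C_eps_g * (rho_g * E_g + p_g) - np.dot(U_s, drag)
           + 3.0 * p_s / (eps_g * tau_st) + rho_g * np.dot(U_g, G))
    return (rho_g + dt * d_rho,
            rho_g * U_g + dt * d_mom,
            rho_g * E_g + dt * d_E)
\end{verbatim}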
Firstly, the kinetic equation without acceleration term for gas phase $\mathcal{L}_{g1}$ is, \begin{equation}\label{gas phase kinetic equ without acce} \frac{\partial f_{g}}{\partial t} + \nabla_x \cdot \left(\textbf{u}f_{g}\right) = \frac{g_{g}-f_{g}}{\tau_{g}}, \end{equation} where $\textbf{u}$ is the velocity, $\tau_g$ is the relaxation time for gas phase, $f_{g}$ is the distribution function of gas phase, and $g_{g}$ is the corresponding equilibrium state (Maxwellian distribution). The local equilibrium state $g_{g}$ can be written as, \begin{gather*} g_{g}=\rho_g\left(\frac{\lambda_g}{\pi}\right)^{\frac{K+3}{2}}e^{-\lambda_g\left[(\textbf{u}-\textbf{U}_g)^2+\bm{\xi}^2\right]}, \end{gather*} where $\rho_g$ is the density of gas phase. $\lambda_g$ is determined by gas temperature through $\lambda_g = \frac{m_g}{2k_BT_g}$, where $m_g$ is the molecular mass, $\textbf{U}_g$ is the macroscopic velocity of gas phase. $K$ is the internal degree of freedom with $K=(5-3\gamma)/(\gamma-1)$ for three-dimensional diatomic gas, where $\gamma=1.4$ is the specific heat ratio. The collision term satisfies the compatibility condition \begin{equation}\label{gas phase compatibility condition} \int \frac{g_g-f_g}{\tau_g} \bm{\psi} \text{d}\Xi=0, \end{equation} where $\bm{\psi}=\left(1,\textbf{u},\displaystyle \frac{1}{2}(\textbf{u}^2+\bm{\xi}^2)\right)^T$, the internal variables $\bm{\xi}^2=\xi_1^2+...+\xi_K^2$, and $\text{d}\Xi=\text{d}\textbf{u}\text{d}\bm{\xi}$. For Eq.\eqref{gas phase kinetic equ without acce}, the integral solution of $f$ at the cell interface can be written as, \begin{equation}\label{gas phase equ_integral1} f(\textbf{x},t,\textbf{u},\bm{\xi})=\frac{1}{\tau}\int_0^t g(\textbf{x}',t',\textbf{u},\bm{\xi})e^{-(t-t')/\tau}\text{d}t'\\ +e^{-t/\tau}f_0(\textbf{x}-\textbf{u}t,\textbf{u},\bm{\xi}), \end{equation} where $\textbf{x}'=\textbf{x}+\textbf{u}(t'-t)$ is the trajectory of particles, $f_0$ is the initial gas distribution function at time $t=0$, and $g$ is the corresponding equilibrium state. The initial NS gas distribution function $f_0$ in Eq.\eqref{gas phase equ_integral1} can be constructed as \begin{equation}\label{gas phase equ_f0} f_0=f_0^l(\textbf{x},\textbf{u})H(x)+f_0^r(\textbf{x},\textbf{u})(1-H(x)), \end{equation} where $H(x)$ is the Heaviside function, $f_0^l$ and $f_0^r$ are the initial gas distribution functions on the left and right side of one cell interface. More specifically, the initial gas distribution function $f_0^k$, $k=l,r$, is constructed as \begin{equation*} f_0^k=g^k\left(1+\textbf{a}^k \cdot \textbf{x}-\tau(\textbf{a}^k \cdot \textbf{u}+A^k)\right), \end{equation*} where $g^l$ and $g^r$ are the Maxwellian distribution functions on the left and right hand sides of a cell interface, and they can be determined by the corresponding conservative variables $\textbf{W}^l$ and $\textbf{W}^r$. The coefficients $\textbf{a}^l=\left[a^l_1, a^l_2, a^l_3\right]^T$, $\textbf{a}^r=\left[a^r_1, a^r_2, a^r_3\right]^T$, are related to the spatial derivatives in normal and tangential directions, which can be obtained from the corresponding derivatives of the initial macroscopic variables, \begin{equation*} \left\langle a^l_i\right\rangle=\partial \textbf{W}^l/\partial x_i, \left\langle a^r_i\right\rangle=\partial \textbf{W}^r/\partial x_i, \end{equation*} where $i=1,2,3$, and $\left\langle...\right\rangle$ means the moments of the Maxwellian distribution functions, \begin{align*} \left\langle...\right\rangle=\int \bm{\psi}\left(...\right)g\text{d}\Xi. 
\end{align*} Based on the Chapman-Enskog expansion, the non-equilibrium part of the distribution function satisfies, \begin{equation*} \left\langle \textbf{a}^l \cdot\textbf{u}+A^l\right\rangle = 0,~ \left\langle \textbf{a}^r \cdot\textbf{u}+A^r\right\rangle = 0, \end{equation*} and therefore the coefficients $A^l$ and $A^r$ can be fully determined. The equilibrium state $g$ around the cell interface is modeled as, \begin{equation}\label{gas phase equ_g} g=g_0\left(1+\overline{\textbf{a}}\cdot\textbf{x}+\bar{A}t\right), \end{equation} where $\overline{\textbf{a}}=\left[\overline{a}_1, \overline{a}_2, \overline{a}_3\right]^T$, $g_0$ is the local equilibrium of the cell interface. More specifically, $g$ can be determined by the compatibility condition, \begin{align*} \int\bm{\psi} g_{0}\text{d}\Xi=\textbf{W}_0 &=\int_{u>0}\bm{\psi} g^{l}\text{d}\Xi+\int_{u<0}\bm{\psi} g^{r}\text{d}\Xi, \nonumber \\ \int\bm{\psi} \overline{a_i} g_{0}\text{d}\Xi=\partial \textbf{W}_0/\partial x_i &=\int_{u>0}\bm{\psi} a^l_i g^{l}\text{d}\Xi+\int_{u<0}\bm{\psi} a^r_i g^{r}\text{d}\Xi, \end{align*} $i=1,2,3$, and \begin{equation*} \left\langle \overline{\textbf{a}} \cdot \textbf{u}+\bar{A}\right\rangle = 0. \end{equation*} After determining all parameters in the initial gas distribution function $f_0$ and the equilibrium state $g$, substituting Eq.\eqref{gas phase equ_f0} and Eq.\eqref{gas phase equ_g} into Eq.\eqref{gas phase equ_integral1}, the time-dependent distribution function $f(\textbf{x}, t, \textbf{u},\bm{\xi})$ at a cell interface can be expressed as, \begin{align}\label{gas phase equ_finalf} f(\textbf{x}, t, \textbf{u},\bm{\xi}) &=c_1 g_0+ c_2 \overline{\textbf{a}}\cdot\textbf{u}g_0 +c_3 {\bar{A}} g_0\nonumber\\ &+\left[c_4 g^r +c_5 \textbf{a}^r\cdot\textbf{u} g^r + c_6 A^r g^r\right] (1-H(u)) \\ &+\left[c_4 g^l +c_5 \textbf{a}^l\cdot\textbf{u} g^l + c_6 A^l g^l\right] H(u) \nonumber. \end{align} with coefficients, \begin{align*} c_1 &= 1-e^{-t/\tau}, \\ c_2 &= \left(t+\tau\right)e^{-t/\tau}-\tau, \\ c_3 &= t-\tau+\tau e^{-t/\tau}, \\ c_4 &= e^{-t/\tau}, \\ c_5 &= -\left(t+\tau\right)e^{-t/\tau}, \\ c_6 &= -\tau e^{-t/\tau}. \end{align*} Then the integrated flux over a time step can be obtained, \begin{align} \textbf{F}_{ij} =\int_{0}^{\Delta t} \int\textbf{u}\cdot\textbf{n}_{ij} f_{ij}(\textbf{x},t,\textbf{u},\bm{\xi})\bm{\psi}\text{d}\Xi\text{d}t, \end{align} where $\textbf{n}_{ij}$ is the normal vector of the cell interface. Then, the cell-averaged conservative variables of cell $i$ can be updated as follows, \begin{gather} \textbf{W}_i^{n+1} = \textbf{W}_i^n - \frac{1}{\Omega_i} \sum_{S_{ij}\in \partial \Omega_i}\textbf{F}_{ij}S_{ij}, \end{gather} where $\Omega_i$ is the volume of cell $i$, $\partial\Omega_i$ denotes the set of interface of cell $i$, $S_{ij}$ is the area of $j$-th interface of cell $i$, $\textbf{F}_{ij}$ denotes the projected macroscopic fluxes in the normal direction, and $\textbf{W}_{g}=\left[\rho_g,\rho_g \textbf{U}_g, \rho_g E_g\right]^T$ are the cell-averaged conservative flow variables for gas phase. The second part, $\mathcal{L}_{g2}$, is from the inter-phase interaction. 
The macroscopic variables of the gas phase updated by the $\mathcal{L}_{g2}$ part in 3D can be calculated as
\begin{gather}
\textbf{W}^{n+1}_g = \textbf{W}_g + \textbf{Q}\Delta t,
\end{gather}
where
\begin{gather*}
\textbf{W}_g=\left[\begin{array}{c}
\rho_g\\
\rho_g \textbf{U}_g\\
\rho_g E_g
\end{array} \right], ~~
\textbf{Q}=\left[\begin{array}{c}
C_{\epsilon_g}\rho_g \\
C_{\epsilon_g} \rho_g \textbf{U}_g -\frac{\epsilon_{s}\rho_{s}\left(\textbf{U}_g - \textbf{U}_s\right)}{\epsilon_g \tau_{st}} + \rho_g\textbf{G}\\
C_{\epsilon_g} \left(\rho_g E_g + p_g\right) -\frac{\epsilon_{s}\rho_{s}\textbf{U}_s \cdot \left(\textbf{U}_g - \textbf{U}_s\right)}{\epsilon_g \tau_{st}} + \frac{3p_s}{\epsilon_g \tau_{st}} +\rho_g \textbf{U}_g \cdot \textbf{G}
\end{array}\right],
\end{gather*}
with $C_{\epsilon_g} = -\frac{1}{\epsilon_{g}}\frac{\text{d}\epsilon_{g}}{\text{d}t}$ and $\frac{\text{d}\epsilon_{g}}{\text{d}t}=\frac{\partial \epsilon_{g}}{\partial t}+\textbf{U}_g \cdot \nabla\epsilon_{g}$. In this paper, $\frac{\partial \epsilon_{g}}{\partial t}$ is evaluated as,
\begin{equation}
\frac{\partial \epsilon_{g}}{\partial t} = \frac{\epsilon_{g}^{n+1} - \epsilon_{g}^n}{\Delta t}.
\end{equation}
Here $\nabla\epsilon_{g}$ is the cell-averaged volume fraction gradient of the gas phase in the cell. For example, $\frac{\partial \epsilon_{g}}{\partial x}$ is calculated by,
\begin{equation}
\frac{\partial \epsilon_{g,i}}{\partial x} = \frac{\epsilon_{g,i+\frac{1}{2}} - \epsilon_{g,i-\frac{1}{2}}}{\Delta x},
\end{equation}
where $\epsilon_{g,i-\frac{1}{2}}$ and $\epsilon_{g,i+\frac{1}{2}}$ are the volume fractions of the gas phase at the left and right interfaces of cell $i$, which can be obtained from the reconstructed $\epsilon_{s}$ according to $\epsilon_{s} + \epsilon_{g} = 1$. Now the update of the gas phase in one time step has been finished. Finally, the algorithm of the GKS-UGKWP method for gas-particle two-phase flow is summarized in Figure \ref{Flow chart}.
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[height=10.5cm]{figure/flow-chart.png}}
\caption{The flow chart of GKS-UGKWP method.}
\label{Flow chart}
\end{figure}

\section{Numerical experiments}
\subsection{Interaction of a shock wave with dense particle layer}
The interaction of a shock wave with a dense particle layer will generate complicated particle behavior \cite{Gasparticle-shock-particle-layer-kosinski2005dust, Gasparticle-shock-particle-layer-step-shimura2018two}, which brings challenges to a numerical scheme. The problem in \cite{Gasparticle-shock-particle-layer-kosinski2005dust} is tested by GKS-UGKWP in this section. Figure \ref{Sketch of the interaction of shock and particle layer problem} presents the initial configuration of the test case. The computational domain is a channel with size $L\times H=0.1m\times0.005m$, which is covered by a $250\times20$ uniform rectangular mesh. The initial height of the dense particle layer in the channel is $0.001m$, and its volume fraction is 0.5. The layer is composed of solid particles with density $1000kg/m^3$ and diameter $90\mu m$. Initially, the gas in the channel is at the standard atmospheric condition. Next to the particle layer, there is a high-pressure gas region of $4$ bar, which will generate a shock wave after the diaphragm is removed at the beginning of the calculation.
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[height=3.5cm]{figure/Shock-particle-layer/Shock-particle-layer.png}}
\caption{Sketch of initial condition.}
\label{Sketch of the interaction of shock and particle layer problem}
\end{figure}
The post-shock snapshots of the solid-particle phase volume fraction are shown in Figure \ref{Shock particle layer eps}. After the shock wave passes, more and more particles in the dense layer are lifted upward, and therefore a ``particle stream'' is formed at the leading section of the layer. The lifted particles are accelerated by the gas flow and move backward, and then the solid particles are dispersed in the channel. These particle behaviors have also been observed in previous studies \cite{Gasparticle-shock-particle-layer-kosinski2005dust, Gasparticle-shock-particle-layer-step-shimura2018two}. Since more and more particles are lifted upward and dispersed in the channel, the leading edge of the dense particle layer gradually moves backward. The change of the leading-edge position with time is shown in Figure \ref{Shock particle layer leading edge}, which agrees well with the previous study by the Eulerian-Lagrangian approach \cite{Gasparticle-shock-particle-layer-kosinski2005dust}.
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[height=1.0cm]{figure/Shock-particle-layer/legend-epss.png}}
\quad
\subfigure{
\includegraphics[height=2.0cm]{figure/Shock-particle-layer/epss-new-t0d3.eps}}
\quad
\subfigure{
\includegraphics[height=2.0cm]{figure/Shock-particle-layer/epss-new-t0d6.eps}}
\quad
\subfigure{
\includegraphics[height=2.0cm]{figure/Shock-particle-layer/epss-new-t1d0.eps}}
\quad
\subfigure{
\includegraphics[height=2.0cm]{figure/Shock-particle-layer/epss-new-t1d4.eps}}
\caption{Particle phase volume fraction at $t=0.3ms$, $t=0.6ms$, $t=1.0ms$ and $t=1.4ms$.}
\label{Shock particle layer eps}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[height=5.5cm]{figure/Shock-particle-layer/leading-edge.eps}}
\caption{The leading-edge position of the dense particle layer at different times.}
\label{Shock particle layer leading edge}
\end{figure}
Figure \ref{Shock particle layer wave and particle in 1.0ms} shows the wave and particle decompositions from UGKWP at $t=1.0ms$. For the dense particle layer region, e.g., the zone near the bottom wall, inter-particle collisions play the key role in the evolution due to the high solid concentration. In UGKWP, no particle is sampled there and only the wave is used for the evolution of the particle flow, i.e., the scheme automatically becomes a fluid approach in that region. However, for the dilute particle region in the upper part of the channel, non-equilibrium particle transport appears, and particles are sampled and tracked in UGKWP. Therefore, the UGKWP can adapt to different flow physics consistently. In addition, the percentage of sampled particles in UGKWP is fully determined by the local flow condition, and is not artificially pre-defined. The above results indicate that UGKWP unifies the approaches for the equilibrium and non-equilibrium transport seamlessly, and provides an efficient method for multiscale flow simulation.
\begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=1.0cm]{figure/Shock-particle-layer/legend-epss.png}} \quad \subfigure{ \includegraphics[height=2.0cm]{figure/Shock-particle-layer/epss-new-t1d0-wave.eps}} \quad \subfigure{ \includegraphics[height=2.0cm]{figure/Shock-particle-layer/epss-new-t1d0-part.eps}} \caption{UGKWP computation of solid-particle phase by wave (up) and particle (down) decompositions at $t=1.0ms$.} \label{Shock particle layer wave and particle in 1.0ms} \end{figure} \subsection{Horizontal pneumatic conveying} Pneumatic conveying is a widely-used technique for the transportation of bulk solid particles by gas flow in the pipe or channel. The advantage of pneumatic conveying includes design flexibility, working safety, and low maintenance cost, etc \cite{Gasparticle-book-fan1999principles}. Under different conditions, the solid phase will show different flow patterns. Here, a horizontal pneumatic conveying problem will be tested by GKS-UGKWP to check its ability to recover the typical flow patterns. The flow conditions, including inlet gas velocity $U_{g,in}$, inlet solid mass flow rate $G_{s,in}$ and gas pressure gradient $\Delta p/L$, obtained from the experiment \cite{Gasparticle-pneumatic-converying-rao2001electrical}, are employed in the simulation. The solid particles used in the experiment have the following physical properties: density $1683kg/m^3$ and diameter $3.01mm$. The computational domain is a two-dimensional horizontal channel with size $4m\times0.04m$, covered by $800\times8$ uniform rectangular mesh. Three typical cases, disperse flow pattern, settle flow pattern, and slug flow pattern, are tested, and the corresponding experimental measurement data are listed in Table \ref{pneumatic conveying problem three cases table}. Initially, the gas with inlet velocity $U_{g,in}$ flows into the channel from the left boundary; the solid particles are carried by gas flow and uniformly fed into the channel with mass flow rate $G_{s,in}$ through the left boundary; at the right boundary solid particles are free to leave. The atmospheric pressure boundary is employed for gas phase at the right boundary, while higher gas pressure is imposed at the left boundary according to the pressure gradient $\Delta p/L$ given in Table \ref{pneumatic conveying problem three cases table}. \begin{table}[h] \caption{Simulation conditions from experimental measurement \cite{Gasparticle-pneumatic-converying-rao2001electrical}.} \vspace{2pt} \small \centering \setlength{\tabcolsep}{3.4mm}{ \begin{tabular}{ccccc}\toprule[1pt] & $U_{g,in}$ $\left(m/s\right)$ & $G_{s,in}$ $\left(kg/m^2\cdot s\right)$ & $\Delta p/L$ $\left(Pa/m\right)$ & \textit{Flow pattern} \\ \hline Case 1 & 28.6 & 71.4 & 271.4 & disperse flow\\ Case 2 & 15.6 & 17.2 & 454.0 & settle flow\\ Case 3 & 10.4 & 21.1 & 855.6 & slug flow \\ \bottomrule[1pt] \end{tabular}} \label{pneumatic conveying problem three cases table} \end{table} For Case 1, the snapshot of solid phase volume fraction $\epsilon_{s}$ in the region $0.5m\sim3.5m$ at $t=6.0s$ is shown in Figure \ref{pneumatic conveying problem disperse flow}, and the enlarged snapshots at different times are presented in Figure \ref{pneumatic conveying problem disperse flow enlarge}. 
The typical disperse flow pattern is observed: the solid phase is dilute (the solid volume fraction $\epsilon_{s}$ in most of the channel is lower than 0.01); solid particles move downstream carried by the gas flow, and the particle concentration is relatively higher at the channel bottom than in the upper zone due to gravity. For Case 2, the snapshot of the solid volume fraction $\epsilon_{s}$ in the channel at $t=6.0s$ and the enlarged snapshots in the local region $2.4m\sim3.0m$ are shown in Figure \ref{pneumatic conveying problem settle flow} and Figure \ref{pneumatic conveying problem settle flow enlarge}, respectively. In Case 2, a settled layer of solid particles with $\epsilon_{s}$ around 0.3 is formed along the channel bottom, while a dilute solids flow is observed above this settled layer. This is the typical structure of the settle flow pattern, also called the stratified flow pattern. Finally, the snapshot of the solid volume fraction $\epsilon_{s}$ in the channel at $6.0s$ for Case 3 and the local enlarged snapshots at $5.0s, 5.5s,$ and $6.0s$ are presented in Figure \ref{pneumatic conveying problem slug flow} and Figure \ref{pneumatic conveying problem slug flow enlarge}. Compared with the flow conditions of Case 2, Case 3 has a lower inlet gas velocity and a greater inlet solid particle flux, and therefore the solid concentration is generally higher in the channel. In particular, in some zones the solid phase is in dense flow over the whole cross-section of the channel, which is the typical phenomenon of the slug flow pattern. In summary, the flow structures and features predicted by GKS-UGKWP are consistent with the experimental observations for the three typical flow patterns, validating the feasibility and reliability of GKS-UGKWP for this kind of problem. \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=0.7cm]{figure/pneumatic-converying/disperse-flow-t6d0-legend}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/disperse-flow-t6d0.eps}} \caption{The snapshot of solid phase volume fraction $\epsilon_{s}$ of Case 1, disperse flow pattern, at $t=6.0s$.} \label{pneumatic conveying problem disperse flow} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=0.7cm]{figure/pneumatic-converying/disperse-flow-t6d0-legend}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/disperse-flow-t5d0-enlarge.eps}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/disperse-flow-t5d5-enlarge.eps}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/disperse-flow-t6d0-enlarge.eps}} \caption{The enlarged snapshots of solid phase volume fraction $\epsilon_{s}$ in the local region $2.4m\sim3.0m$ of Case 1 at different times: (a) $t=5.0s$, (b) $t=5.5s$, (c) $t=6.0s$.} \label{pneumatic conveying problem disperse flow enlarge} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=0.7cm]{figure/pneumatic-converying/settle-flow-t6d0-legend}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/settle-flow-t6d0.eps}} \caption{The snapshot of solid phase volume fraction $\epsilon_{s}$ of Case 2, settle flow pattern, at $t=6.0s$.
The whole computation domain $4m\times0.04m$ is shown.} \label{pneumatic conveying problem settle flow} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=0.7cm]{figure/pneumatic-converying/settle-flow-t6d0-legend}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/settle-flow-t5d0-enlarge.eps}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/settle-flow-t5d5-enlarge.eps}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/settle-flow-t6d0-enlarge.eps}} \caption{The enlarged snapshots of solid phase volume fraction $\epsilon_{s}$ in the local region $2.4m\sim3.0m$ of Case 2 at different times: (a) $t=5.0s$, (b) $t=5.5s$, (c) $t=6.0s$.} \label{pneumatic conveying problem settle flow enlarge} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=0.7cm]{figure/pneumatic-converying/slug-flow-t6d0-legend}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/slug-flow-t6d0.eps}} \caption{The snapshot of solid phase volume fraction $\epsilon_{s}$ of Case 3, slug flow pattern, at $t=6.0s$. The whole computation domain $4m\times0.04m$ is shown.} \label{pneumatic conveying problem slug flow} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=0.7cm]{figure/pneumatic-converying/slug-flow-t6d0-legend}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/slug-flow-t5d0-enlarge.eps}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/slug-flow-t5d5-enlarge.eps}} \quad \subfigure{ \includegraphics[height=1.8cm]{figure/pneumatic-converying/slug-flow-t6d0-enlarge.eps}} \caption{The enlarged snapshots of solid phase volume fraction $\epsilon_{s}$ in the local region $3.0m\sim3.6m$ of Case 3 at different times: (a) $t=5.0s$, (b) $t=5.5s$, (c) $t=6.0s$.} \label{pneumatic conveying problem slug flow enlarge} \end{figure} \subsection{Bubble formation in fluidized bed} Fluidized beds are widely used in the chemical industry to enhance chemical reactions, solids separation, heat transfer, etc. In this problem, the initial stage of bubble formation in a fluidized bed is simulated; a detailed description of the corresponding experiment can be found in \cite{Gasparticle-fluidized-single-bubble-nieuwland1996bubble}. The sketch of this problem is shown in Figure \ref{Sketch of bubble formation}. The computational domain $W\times H$ is $0.57m\times1.0m$, and a $76\times120$ uniform rectangular mesh is used. An orifice of width $0.02m$ is located at the bottom center. The height of the particle bed $H_p$ is $0.5m$, and above the particle bed is a freeboard region that allows for the expansion of the bed. The bed consists of solid particles with density $3060kg/m^3$ and diameter $285\mu m$. The initial solid volume fraction $\epsilon_s$ is set to 0.5, which is smaller than $\epsilon_{s,max}$, taken as 0.6 in this case. This reflects the condition that the particle bed has reached the minimum fluidization state before the upward gas flow is blown into it. Initially, a jet with $U_{jet}=10.0m/s$ blows into the particle bed through the orifice, while gas with the minimum fluidization velocity $U_{min}=0.08m/s$ flows into the bed over the rest of the bottom boundary outside the central orifice.
For the gas phase, the top boundary is set as a pressure outlet, and at the bottom boundary a higher pressure is imposed with $\Delta p = 7500Pa$, approximated to balance gravity by $\Delta p = \epsilon_{s}\left(\rho_s-\rho_g\right)GH_p$ as given in \cite{Gasparticle-fluidized-single-bubble-nieuwland1996bubble}. For the left and right walls, the non-slip and slip boundary conditions are employed for the gas phase and the solid particle phase, respectively. \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=5.5cm]{figure/FB_Falah/Bubble.png}} \caption{Sketch of bubble formation in fluidized bed.} \label{Sketch of bubble formation} \end{figure} The contours of the apparent density of the solid particle phase at different times are shown in Figure \ref{bubble formation apparent density}. The results show the typical process of bubble formation: initially, a small bubble forms due to the high-velocity jet from the orifice; it grows during the evolution and finally detaches from the bottom boundary. During this process, the bubble shape remains close to an ellipse. The above process obtained by GKS-UGKWP agrees well with the phenomenon observed in the experiment \cite{Gasparticle-fluidized-single-bubble-nieuwland1996bubble}. To further quantitatively compare the bubble formation process with the experiment, the equivalent bubble diameter is calculated, defined as $D_e=\sqrt{4S/\pi}$, where, following \cite{Gasparticle-fluidized-single-bubble-nieuwland1996bubble}, $S$ is the bubble area obtained from the numerical simulation, defined as the area where $\epsilon_{s} < 0.15$. The equivalent bubble diameter obtained by GKS-UGKWP is presented in Figure \ref{bubble formation ed compare}, and it agrees well with the experimental measurements \cite{Gasparticle-fluidized-single-bubble-nieuwland1996bubble}, showing the accuracy and reliability of GKS-UGKWP. In addition, the sampled particles in UGKWP at different times are shown in Figure \ref{bubble formation part in wp}. The original high-concentration solid particle bed is represented by the wave component and is not shown here. The sampled particles only appear in the non-equilibrium region, such as at the interface between the dense and dilute solid phases. As the gas bubble grows, more particles are sampled in UGKWP to capture the larger non-equilibrium zone associated with the penetration of solid particles into the bubble region.
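For reference, the two quantities used above can be reproduced directly from the simulation output. The following sketch (in Python; the variable names are ours and purely illustrative, and the volume fraction field is assumed to be stored as a 2D array on the uniform rectangular mesh) computes the equivalent bubble diameter $D_e=\sqrt{4S/\pi}$ from the $\epsilon_s<0.15$ threshold and checks the hydrostatic estimate $\Delta p = \epsilon_{s}\left(\rho_s-\rho_g\right)GH_p$ used at the bottom boundary:
\begin{verbatim}
import numpy as np

def equivalent_bubble_diameter(eps_s, dx, dy, threshold=0.15):
    # D_e = sqrt(4*S/pi), with S the area where eps_s < threshold.
    # In practice the mask should be restricted to the bubble region.
    S = np.count_nonzero(eps_s < threshold) * dx * dy
    return np.sqrt(4.0 * S / np.pi)

# Hydrostatic estimate of the bottom over-pressure for this case
eps_s0, rho_s, rho_g, g, H_p = 0.5, 3060.0, 1.2, 9.81, 0.5
print(eps_s0 * (rho_s - rho_g) * g * H_p)   # ~7.5e3 Pa, consistent with 7500 Pa

# Illustrative usage on the 76 x 120 mesh of the 0.57 m x 1.0 m domain
eps_s = np.full((120, 76), 0.5)             # placeholder; read from solver output
eps_s[:20, 30:46] = 0.05                    # fictitious bubble region
print(equivalent_bubble_diameter(eps_s, 0.57 / 76, 1.0 / 120))
\end{verbatim}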
\begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=1.1cm]{figure/FB_Falah/rhos-legend}} \quad \subfigure{ \includegraphics[height=5.0cm]{figure/FB_Falah/rhos-t0d05.eps}} \quad \subfigure{ \includegraphics[height=5.0cm]{figure/FB_Falah/rhos-t0d10.eps}} \quad \subfigure{ \includegraphics[height=5.0cm]{figure/FB_Falah/rhos-t0d15.eps}} \quad \subfigure{ \includegraphics[height=5.0cm]{figure/FB_Falah/rhos-t0d18.eps}} \caption{Apparent density of the solid particle phase during the bubble formation process: from left to right, snapshots at times $0.05s$, $0.10s$, $0.15s$, and $0.18s$.} \label{bubble formation apparent density} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=5.5cm]{figure/FB_Falah/ed-compare.eps}} \caption{Comparison of the equivalent diameter $D_{e}$ obtained by GKS-UGKWP with the experimental measurements.} \label{bubble formation ed compare} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=1.1cm]{figure/FB_Falah/part-legend}} \quad \subfigure{ \includegraphics[height=4.0cm]{figure/FB_Falah/part-t0d05.eps}} \quad \subfigure{ \includegraphics[height=4.0cm]{figure/FB_Falah/part-t0d10.eps}} \quad \subfigure{ \includegraphics[height=4.0cm]{figure/FB_Falah/part-t0d15.eps}} \quad \subfigure{ \includegraphics[height=4.0cm]{figure/FB_Falah/part-t0d18.eps}} \caption{Sampled particles for the solid particle phase during the bubble formation process: from left to right, results at times $0.05s$, $0.10s$, $0.15s$, and $0.18s$. The color shows the mass fraction of the particle representation in UGKWP. The wave representation of the solid particle phase in the dense particle zone is not shown here.} \label{bubble formation part in wp} \end{figure} \subsection{Particle clustering in fluidized bed} Particle clustering is a typical hydrodynamic phenomenon in a circulating fluidized bed (CFB), and it has a significant influence on the evolution of gas-particle flow. In this section, GKS-UGKWP is used to calculate the CFB problem in \cite{Gasparticle-fluidized-circulating-helland2005numerical} and capture the particle clustering phenomenon. Figure \ref{Sketch of the vertical rise} presents the schematic diagram of the vertical riser in this problem. The computational domain $W\times H$ is $5cm\times50cm$, covered by a $25\times250$ uniform rectangular mesh. Initially, the solid particles are distributed uniformly in the riser with a solid phase volume fraction of 0.03 and zero velocity; the gas phase is at standard atmospheric conditions, $\rho_g=1.2kg/m^3$, $p=1bar$, and zero velocity. The density and diameter of the solid particles in the riser are $2400kg/m^3$ and $133\mu m$, respectively. Air flows into the riser through the bottom boundary with vertical velocity $V_g=1.0m/s$ and a higher pressure approximated by $\Delta p = \epsilon_{s}\left(\rho_{s}-\rho_g\right)GH$. The solid particles are free to leave through the top boundary, and the escaped particles are fed back into the riser through the bottom boundary so that the total mass of solid particles inside the riser remains constant throughout the simulation. For the left and right walls, the slip and non-slip boundary conditions are employed for the solid phase and the gas phase, respectively.
\begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=5.0cm]{figure/FB_riser/FB_Helland.png}} \caption{Sketch of the vertical riser.} \label{Sketch of the vertical rise} \end{figure} The instantaneous snapshots of the distribution of the solid volume fraction $\epsilon_s$ at different times are shown in Figure \ref{vertical riser epsilons different times}. The results indicate that the typical heterogeneous structures in a circulating fluidized bed are captured clearly: axially, the flow is dilute in the upper zone and dense in the bottom zone; solid particles aggregate into clusters in the riser; and, generally, solid particles and clusters are carried upward in the core zone by the high-speed gas flow while dropping down mainly in the near-wall zone. All the above typical features are consistent with previous observations in both numerical and experimental studies \cite{Gasparticle-fluidized-circulating-helland2005numerical}. To further quantitatively analyze the results, the time-averaged profiles are shown in Figure \ref{vertical riser results compare} and compared with the previous numerical results obtained by the Eulerian-Lagrangian approach \cite{Gasparticle-fluidized-circulating-helland2005numerical}. The profile of the time-averaged $\epsilon_{s}$ at different riser heights is shown in Figure \ref{vertical riser results compare}(a). The particle phase has a lower concentration of $0.01$ in the upper zone and a higher concentration of $0.1$ in the zone near the bottom boundary. Figure \ref{vertical riser results compare}(b) presents the transversal profile of the vertical velocity of the particle flow $v_s$, which shows a parabolic shape, indicating that solid particles move upward in the center region and downward in the near-wall zone. Overall, the prediction given by GKS-UGKWP agrees well with the previous Eulerian-Lagrangian study. \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=10.0cm]{figure/FB_riser/eps-t3d0.eps}} \quad \subfigure{ \includegraphics[height=10.0cm]{figure/FB_riser/eps-t4d0.eps}} \quad \subfigure{ \includegraphics[height=10.0cm]{figure/FB_riser/eps-t5d0.eps}} \quad \subfigure{ \includegraphics[height=10.0cm]{figure/FB_riser/eps-t6d0.eps}} \quad \subfigure{ \includegraphics[height=10.0cm]{figure/FB_riser/legend}} \caption{The instantaneous snapshots of the distribution of solid phase volume fraction $\epsilon_s$ at different times: (a) $t=3.0s$, (b) $t=4.0s$, (c) $t=5.0s$, and (d) $t=6.0s$.} \label{vertical riser epsilons different times} \end{figure} \begin{figure}[htbp] \centering \subfigure{ \includegraphics[height=5.5cm]{figure/FB_riser/eps-compare.eps}} \quad \subfigure{ \includegraphics[height=5.5cm]{figure/FB_riser/vs-compare.eps}} \caption{Comparison with the numerical results by the Eulerian-Lagrangian approach \cite{Gasparticle-fluidized-circulating-helland2005numerical}. Left: time-averaged solid phase volume fraction $\epsilon_s$ at different heights. Right: transversal profile of the time-averaged solid phase velocity $v_s$ in the upper part of the riser.} \label{vertical riser results compare} \end{figure} \section{Conclusion} In this paper, the GKS-UGKWP method is developed to study gas-particle two-phase flows with both dense and dilute solid particle concentrations. A drag force model valid for both dilute and dense particle flows is employed. A pressure model for inter-particle contacts and friction is introduced, which works for flows with high solid particle concentration.
In addition, a flux limiting model is proposed to prevent the over-packing of the solid particle phase. The non-conservative terms in the gas phase, accounting for the nozzle effect in the momentum equation and the $pDV$ work term in the energy equation, are added in the current scheme. For particulate flow at high concentration, inter-particle collisions play a significant role in the evolution. Inter-particle collisions are included in the collision term of the kinetic equation for the particle phase, which drives it toward the local equilibrium state. The current method can be used for particulate flow with a wide range of solid concentrations, from very dilute to dense flows. UGKWP is a multiscale method that captures the multiscale transport of particulate flow efficiently through its coupled wave-particle formulation in the evolution process. At a small cell $Kn$ number, in regions of high particle concentration, the intensive inter-particle collisions drive the particle distribution to near equilibrium, and it is represented by the wave component in UGKWP without sampled particles. As a result, the EE two-fluid approach, i.e., the coupled hydrodynamic equations for two-phase flow, is recovered by UGKWP. At a large $Kn$ number, for dilute particle concentrations, the insufficient inter-particle collisions keep the particle phase in non-equilibrium, and its evolution is fully determined by particle transport. The EL approach for two-phase flow is thus obtained automatically by UGKWP in the dilute region. At an intermediate $Kn$ number, both wave and particle components play a role in the evolution, and the number of sampled particles is determined by the local degree of flow non-equilibrium, which ensures a smooth and consistent transition between flow regimes. The proposed GKS-UGKWP for the gas-particle system is tested on a series of two-phase problems. The interaction of a shock wave with a solid particle layer in a channel is simulated, and the numerical results agree well with a previous EL study. In the horizontal pneumatic conveying problem, the typical flow patterns observed in the experiment for both low and high solid concentrations are well captured by GKS-UGKWP. Bubble formation through a particle bed is also well captured, and the predicted bubble shape and size agree well with the experimental measurements. In the circulating fluidized bed case, the particle clustering phenomenon and the corresponding heterogeneous structures are likewise well captured by GKS-UGKWP. These results validate the accuracy and reliability of GKS-UGKWP for the simulation of gas-particle two-phase flow. \section*{Acknowledgements} The current research is supported by the National Numerical Windtunnel project, the National Science Foundation of China (11772281, 91852114, 12172316), the Hong Kong Research Grants Council (16208021), and the Department of Science and Technology of Guangdong Province (Grant No.2020B1212030001). \bibliographystyle{plain}%
\section{Introduction} Learning a shape space is the task of finding a latent representation of a collection of input training shapes that generalizes well to unseen, test shapes. This is often done within an autoencoder framework, namely an \emph{encoder} $\Phi:X\rightarrow Z$, mapping an input shape in $X$ (in some 3D representation) to the latent space $Z$, and a \emph{decoder} $\Psi:Z\rightarrow Y$, mapping latent representations in $Z$ back to shapes $Y$ (possibly in a different 3D representation than $X$). Many shape collections exhibit \emph{symmetries}, that is, transformations that do not change the essence of the shape. For example, applying a Euclidean motion (rotation, reflection, and/or translation) to a rigid object such as a piece of furniture will produce an equivalent version of the object. Similarly, the same articulated body, such as an animal or a human, can assume different poses in space. A natural way to incorporate symmetries in shape space learning is to require the mapping to the latent space, i.e., the encoder, and the mapping from the latent space, i.e., the decoder, to be equivariant to the relevant symmetries. That is, applying the symmetry to an input shape and then encoding it would result in the same symmetry applied to the latent code of the original shape. Similarly, reconstructing a shape from a transformed latent code will result in a transformed shape. The main benefit of imposing equivariance in shape space learning is achieving a very useful inductive bias: if the model has learned a single shape, it can already generalize perfectly to all its symmetric versions! Unfortunately, even in the presumably simpler setting of a global Euclidean motion, building an equivariant neural network that is both expressive and efficient remains a challenge. The only architectures that were known to be universal for Euclidean motion equivariant functions are Tensor Field Networks \cite{thomas2018tensor,dym2020universality} and group averaging \cite{yarotsky2021universal,chen2021equivariant}, both of which are computationally and memory intensive. Other architectures, {e.g.}, Vector Neurons \cite{deng2021vector}, are computationally efficient but are not known to be universal. In this paper, we present a novel framework for building equivariant encoders and decoders for shape space learning that are flexible, efficient and maximally expressive ({i.e.}, universal). In particular, we introduce two contributions: (i) we adapt the recent Frame Averaging (FA) framework \cite{puny2021frame} to shape space learning, showing how to efficiently build powerful shape autoencoders. The method is general, easily adapted to different architectures and tasks, and its training only uses standard autoencoder reconstruction losses without requiring the introduction of new losses. (ii) We construct what we believe is the first autoencoder architecture that is fully equivariant to piecewise Euclidean transformations of the shape's parts, {e.g.}, an articulated human body. We have tested our framework on two types of shape space learning tasks: learning implicit representations of shapes from real-life input point clouds extracted from sequences of images \cite{reizenstein2021common}, and learning mesh deformations of human (body and hand) and animal shape spaces \cite{dfaust:CVPR:2017,zuffi20173d,mahmood2019amass,akhter2015pose}.
In both tasks, our method produced state-of-the-art results when compared to relevant baselines, often by a large margin over the runner-up, justifying the efficacy of the inductive bias injected by frame averaging and equivariance. \section{Related work} \paragraph{Euclidean equivariant point networks.} Original point cloud networks, such as PointNet \cite{qi2017pointnet,qi2017pointnet++}, PCNN \cite{atzmon2018point}, PointCNN \cite{li2018pointcnn}, Spider-CNN \cite{xu2018spidercnn}, and DGCNN \cite{wang2019dynamic} are permutation equivariant but not Euclidean equivariant. Therefore, these architectures often struggle to generalize over translated and/or rotated inputs. Realizing that Euclidean equivariance is a useful inductive bias, much attention has been given to developing Euclidean equivariant point cloud networks. Euclidean \emph{invariance} can be achieved by defining the network layers in terms of distances or angles between points ~\cite{deng2018ppf,zhang2019rotation} or angles and distances measured from the input point cloud's normals \cite{gojcic2019perfect}. Other works encode local neighborhoods using some local or global coordinate system to achieve invariance to rotations and translations. \cite{xiao2020endowing,yu2020deep,deng2018ppf} use PCA to define rotation invariance. Equivariance is a desirable property for autoencoders. Some works use representation theory of the rotation group ({e.g.}, spherical harmonics) to build rotation equivariant networks \cite{worrall2017harmonic,liu2018deep,weiler20183d}. Tensor Field Networks (TFN) \cite{thomas2018tensor,fuchs2020se,romero2020group} achieve equivariance to both translation and rotation. However, TFN architectures are tailored to rotations and require high order features for universality \cite{dym2020universality}. Recently,~\cite{deng2021vector} proposed a rotation equivariant network encoding features using the first two irreducible representations of the rotation group (tensor features) and constructed linear equivariant layers between features as well as equivariant non-linearities. This architecture is not proven to be universal. Another way to achieve Euclidean equivariance is group averaging or convolutions \cite{yarotsky2021universal}. \cite{esteves2018learning,cohen2018spherical} use spherical convolution to achieve rotation or Euclidean equivariance. \cite{chen2021equivariant} suggests averaging over the 6D Euclidean group. Recently, \cite{puny2021frame} suggested Frame Averaging (FA) as a general-purpose methodology for building equivariant architectures that are maximally expressive and often offer much more efficient computation than group representation or averaging techniques. \vspace{-5pt} \paragraph{Implicit shape space learning.} Learning neural implicit representations from input point clouds is done by regressing the signed distance function to the surface \cite{park2019deepsdf} or occupancy probabilities \cite{chen2019learning,mescheder2019occupancy}. The input point cloud is usually encoded in the latent space using a PointNet-like encoder \cite{qi2017pointnet,zaheer2017deep} or an autodecoder \cite{park2019deepsdf}. \cite{atzmon2020sal,atzmon2020sald} regress the unsigned distance to the input point clouds, avoiding the need for implicit function supervision during training. Normal data and gradient losses can be used to improve training and the fidelity of learned implicits \cite{gropp2020implicit,sitzmann2020implicit,atzmon2020sald,lipman2021phase}.
Higher spatial resolution was achieved by using spatially varying latent codes \cite{peng2020convolutional,chibane2020implicit}. The above works did not incorporate Euclidean equivariance. As far as we are aware, \cite{deng2021vector} are the first to incorporate Euclidean equivariance in the implicit shape space learning framework. Implicit representations are generalized to deformable and articulated shapes by composing the implicit representation with some backward parametric deformation such as Linear Blend Skinning (LBS) \cite{jeruzalski2020nilbs,Saito:CVPR:2021,mihajlovic2021leap}, displacement and/or rotation fields \cite{park2020deformable,pumarola2021d} and flows \cite{niemeyer2019occupancy,atzmon2021augmenting}. NASA \cite{deng2020nasa} suggests combining a collection of deformable components, each represented by an individual occupancy network sampled after reversing the Euclidean transformation of that component. SNARF \cite{chen2021snarf} applies an approximate inverse of the LBS operator followed by an occupancy query. Both NASA and SNARF work on a single shape and do not learn a latent representation of pose. \vspace{-10pt} \paragraph{Mesh shape space learning.} Mesh shape spaces are often represented as coordinates assigned to a fixed template mesh, and GNNs are used to learn their coordinates and latent representations \cite{litany2018deformable,verma2018feastnet,jiang2020disentangled,huang2021arapreg}. \cite{kostrikov2018surface} adapt GNNs to surfaces, advocating the Dirac operator to transfer information from nodes to faces and vice versa. \cite{litany2018deformable,jiang2020disentangled} use Variational AutoEncoders (VAEs) to improve generalization. The most recent and closely related work to ours in this domain is \cite{huang2021arapreg}, which suggests incorporating an As-Rigid-As-Possible (ARAP) \cite{sorkine2007rigid,wand2007reconstruction} deformation loss to encourage Euclidean motions of local parts of the shape. \section{Method} \subsection{Preliminaries: Group action} In this work, we consider vector spaces for representing shape and feature spaces. In the following, we define these vector spaces in general terms and specify how the different symmetry groups act on them. We use capital letters to represent vector spaces, {e.g.}, $V,W,X,Y,Z$. We use two types of vector spaces: i) $\mathbb R^{a+b\times 3}$, where $a,b\in\mathbb N_{\geq 0}$ are the invariant and equivariant dimensions, respectively, and ii) $C^1(\mathbb R^3)$, the space of continuously differentiable scalar volumetric functions. The symmetry considered in this paper is the group of Euclidean motions in $\mathbb R^3$, denoted $G=E(3)=O(3)\ltimes \mathbb R^3$, where $O(3)$ is the orthogonal matrix group in $\mathbb R^{3\times 3}$. We represent elements in this group as pairs $g=({\bm{R}},{\bm{t}})$, where ${\bm{R}}\in O(3)$ and ${\bm{t}}\in\mathbb R^3$; by default, vectors are column vectors. The \emph{action} of $G$ on a vector space $V$, denoted $\rho_V$, is defined as follows. First, for ${\bm{V}}=({\bm{u}},{\bm{U}})\in V = \mathbb R^{a+b\times 3}$, consisting of an invariant part ${\bm{u}} \in \mathbb R^a$ and an equivariant part ${\bm{U}} \in \mathbb R^{b \times 3}$, we define the action simply by applying the transformation to the equivariant part: \begin{equation}\label{e:rho_1} \rho_V(g){\bm{V}} = ({\bm{u}}, {\bm{U}}{\bm{R}}^T + \mathbf{1} {\bm{t}}^T) \end{equation} where $g=({\bm{R}},{\bm{t}})\in E(3)$ and $\mathbf{1}\in \mathbb R^{b}$ is the vector of all ones.
Second, for $f\in V = C^1(\mathbb R^3)$ we define the action using a change of variables: \begin{equation}\label{e:rho_2} (\rho_V(g)f)({\bm{x}}) = f({\bm{R}}^T({\bm{x}}-{\bm{t}})) \end{equation} for all ${\bm{x}}\in\mathbb R^3$ and $g=({\bm{R}},{\bm{t}})\in G$. \subsection{Shape spaces and equivariance} We consider an input shape space $X$, a latent space $Z$, and an output shape space $Y$, representing shapes in $\mathbb R^3$. All three spaces $X,Z,Y$ are vector spaces as described above, each endowed with an action (using either \eqref{e:rho_1} or \eqref{e:rho_2}) of the Euclidean group $G=E(3)$, denoted $\rho_X,\rho_Z,\rho_Y$, respectively. Our goal is to learn an encoder $\Phi:X\rightarrow Z$ and a decoder $\Psi:Z\rightarrow Y$ that are \emph{equivariant}. Namely, given an $E(3)$-transformed input $\rho_X(g){\bm{X}}$, we would like its latent code to satisfy \begin{equation} \Phi(\rho_X(g){\bm{X}})=\rho_Z(g)\Phi({\bm{X}}), \end{equation} and its reconstruction to satisfy \begin{equation} \Psi(\rho_Z(g){\bm{Z}})=\rho_Y(g) \Psi({\bm{Z}}). \end{equation} Such $X,Z,Y$ are called \emph{steerable} spaces \cite{cohen2016steerable}. The following commutative diagram summarizes the interplay between the encoder, decoder, and the actions of the transformation group: \begin{equation*} \begin{tikzcd} X \arrow{r}{\Phi} \arrow[swap]{d}{\rho_X(g)} & Z \arrow{r}{\Psi} \arrow[swap]{d}{\rho_Z(g)} & \arrow[swap]{d}{\rho_Y(g)} Y \\%
X \arrow{r}{\Phi}& Z \arrow{r}{\Psi} & Y \end{tikzcd} \end{equation*} \subsection{Frame averaging}\label{ss:frame_averaging} We will use Frame Averaging (FA) \cite{puny2021frame} to build $\Phi,\Psi$. FA allows one to build equivariant networks that are both computationally efficient and maximally expressive. A \emph{frame} is a map ${\mathcal{F}}:V\rightarrow 2^G\setminus \emptyset$. That is, for each element ${\bm{V}}\in V$ it provides a non-empty subset of the group $G=E(3)$, ${\mathcal{F}}({\bm{V}})\subset G$. The frame ${\mathcal{F}}$ is called \emph{equivariant} if it satisfies \begin{equation}\label{e:frame_equi} {\mathcal{F}}(\rho_V(g){\bm{V}})=g{\mathcal{F}}({\bm{V}}) \end{equation} for all $g\in G$, ${\bm{V}}\in V$, where for a set $A\subset G$ we define (as usual) $gA=\set{ga\, \vert\, a\in A}$, and the equality in \eqref{e:frame_equi} should be understood in the sense of sets. Then, as shown in \cite{puny2021frame}, an arbitrary map $\phi:V\rightarrow W$ can be made equivariant by averaging over an equivariant frame: \begin{equation}\label{e:fa} \ip{\phi}_{\mathcal{F}}({\bm{V}}) = \frac{1}{|{\mathcal{F}}({\bm{V}})|}\sum_{g\in {\mathcal{F}}({\bm{V}})} \rho_W(g) \phi \parr{ \rho_V(g)^{-1} {\bm{V}}}. \end{equation} The operator $\ip{\cdot}_{\mathcal{F}}$ is called Frame Averaging (FA). An alternative to FA is full group averaging \cite{yarotsky2021universal,chen2021equivariant}, which amounts to replacing the sum over ${\mathcal{F}}({\bm{V}})$ in \eqref{e:fa} with an integral over $G$. Full group averaging also provides equivariance and universality. The crucial benefit of FA, however, is that it only requires averaging over a small number of group elements without sacrificing expressive power. In contrast, averaging over the entire group $E(3)$ requires approximating a 6D integral (with an unbounded translation part). Therefore, it can only be approximated and is memory- and computation-intensive \cite{chen2021equivariant}.
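To make the above constructions concrete, the following sketch (in Python/NumPy; the function names are ours and purely illustrative, and the backbone \texttt{phi} stands for an arbitrary map acting on $({\bm{u}},{\bm{U}})$ features of the type in \eqref{e:rho_1}) implements the action $\rho_V$, the weighted-PCA frames described in the next paragraph, and the frame-averaging operator of \eqref{e:fa}:
\begin{verbatim}
import numpy as np

def act(g, u, U):
    # Action of g = (R, t) in E(3) on V = (u, U): the invariant part u is
    # untouched; the equivariant part U (b x 3) maps to U R^T + 1 t^T.
    R, t = g
    return u, U @ R.T + t[None, :]

def pca_frame(V, w=None):
    # Equivariant frame F(V) of a point set V (d x 3) by weighted PCA
    # (see the `Frame construction' paragraph below): the 2^3 sign choices.
    w = np.ones(len(V)) if w is None else w
    t = (w @ V) / w.sum()                      # weighted centroid
    C = (V - t).T @ np.diag(w) @ (V - t)       # 3 x 3 weighted covariance
    _, r = np.linalg.eigh(C)                   # eigenvectors as columns
    return [(r * np.array([i, j, k]), t)
            for i in (1., -1.) for j in (1., -1.) for k in (1., -1.)]

def frame_average(phi, u, U, frames):
    # <phi>_F : average rho_W(g) phi(rho_V(g)^{-1} V) over the frame.
    # W is assumed to be of the same (invariant, equivariant) type as V.
    acc_u, acc_U = 0.0, 0.0
    for R, t in frames:
        g_inv = (R.T, -R.T @ t)                # inverse Euclidean motion
        zu, ZU = phi(*act(g_inv, u, U))        # backbone in the canonical pose
        _, ZU = act((R, t), zu, ZU)            # map the equivariant output back
        acc_u, acc_U = acc_u + zu, acc_U + ZU
    return acc_u / len(frames), acc_U / len(frames)
\end{verbatim}
For $V=\mathbb R^{d\times 3}$ ({e.g.}, a point cloud or mesh vertices) the invariant part is empty and \texttt{u} can simply be an empty array; the eight frame elements make the averaged map exactly equivariant while only multiplying the backbone cost by a small constant.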
\paragraph{Frame construction.} All the frames we use in this paper are of the form ${\mathcal{F}}:V\rightarrow 2^G\setminus \emptyset$, for $V=\mathbb R^{d\times 3}$, $G=E(3)$, with the action defined as in \eqref{e:rho_1}. In some cases we further assume a non-negative weight vector ${\bm{w}}=(w_1,\ldots,w_d)\in\mathbb R^d_{{\scriptscriptstyle \geq 0}}$. Given ${\bm{V}}\in V=\mathbb R^{d\times 3}$ we define ${\mathcal{F}}({\bm{V}})\subset E(3)$ using weighted PCA, as follows. First, \begin{equation}\label{e:t} {\bm{t}} = \frac{1}{\mathbf{1}^T{\bm{w}}}{\bm{V}}^T {\bm{w}} \end{equation} is the weighted centroid. The covariance matrix ${\bm{C}}\in\mathbb R^{3\times 3}$ is $${\bm{C}} = ({\bm{V}}-\mathbf{1}{\bm{t}}^T)^T\mathrm{diag}({\bm{w}})({\bm{V}}-\mathbf{1}{\bm{t}}^T),$$ where $\mathrm{diag}({\bm{w}})\in\mathbb R^{d\times d}$ is a diagonal matrix with ${\bm{w}}$ along its main diagonal. In the generic case (which we assume in this paper) no eigenvalue of ${\bm{C}}$ is repeated, {i.e.}, $\lambda_1<\lambda_2<\lambda_3$ (for justification see {e.g.}, \cite{breiding2018geometry}). Let ${\bm{r}}_1,{\bm{r}}_2,{\bm{r}}_3$ be the corresponding eigenvectors. The frame is defined by ${\mathcal{F}}({\bm{V}})=\set{({\bm{R}},{\bm{t}})\ \vert \ {\bm{R}}=\brac{\pm{\bm{r}}_1,\pm{\bm{r}}_2,\pm{\bm{r}}_3}}$, which contains $2^3=8$ elements. Intuitively, ${\bm{V}}$ is a point cloud in $\mathbb R^3$ and its frame, ${\mathcal{F}}({\bm{V}})$, contains all Euclidean motions that take the origin to the weighted centroid of ${\bm{V}}$ and the axes to the weighted principal directions. The proof of the following proposition is in the supplementary. \begin{proposition}\label{prop:frame_equi} The frame ${\mathcal{F}}$ is equivariant. \end{proposition} \subsection{Shape space instances} \paragraph{Global Euclidean: Mesh $\rightarrow$ mesh. } In this case, we would like to learn mesh encoders and decoders that are equivariant to a global Euclidean motion. We consider the shape spaces $X=Y=\mathbb R^{n\times 3}$ that represent all possible coordinate assignments to the vertices of some fixed $n$-vertex template mesh. The latent space is defined as $Z=\mathbb R^{m+d\times 3}$, consisting of vectors of the form ${\bm{Z}}=({\bm{u}},{\bm{U}})\in Z$, where the ${\bm{u}}\in\mathbb R^m$ part contains invariant features and the ${\bm{U}}\in\mathbb R^{d\times 3}$ part contains equivariant features. The group actions $\rho_X,\rho_Z,\rho_Y$ are as defined in \eqref{e:rho_1}. We define our encoder $\Phi$ and decoder $\Psi$ by FA (\eqref{e:fa}), {i.e.}, $ \Phi=\ip{\phi}_{{\mathcal{F}}}$, and $\Psi=\ip{\psi}_{{\mathcal{F}}}$, where the frames are defined as in Section \ref{ss:frame_averaging} with constant weights ${\bm{w}}=\mathbf{1}$, and $\phi:X\rightarrow Z$ and $\psi:Z\rightarrow Y$ are standard GNNs adapted to meshes (implementation details are provided in Section \ref{s:implementation}). \paragraph{Global Euclidean: Point-cloud $\rightarrow$ implicit.} Here we adopt the setting of \cite{deng2021vector}, where $X=\mathbb R^{n\times 3}$ represents all possible $n$-point clouds in $\mathbb R^3$, and $Y=C^1(\mathbb R^3)$ contains implicit representations of shapes in $\mathbb R^3$. That is, for $f\in Y$ we consider its zero preimage, \begin{equation}\label{e:level_set} f^{-1}(0) = \set{{\bm{x}}\in\mathbb R^3 \ \vert \ f({\bm{x}})=0} \end{equation} as our shape representation in $\mathbb R^3$. If $0$ is a regular value of $f$ then the Implicit Function Theorem implies that $f^{-1}(0)$ is a surface in $\mathbb R^3$.
A regular value $r\in \mathbb R$ of $f$ means that at every preimage ${\bm{x}}\in f^{-1}(r)$, the gradient does not vanish, $\nabla f({\bm{x}})\ne 0$. % The latent space is again $Z=\mathbb R^{m+d\times 3}$, consisting of vectors of the form ${\bm{Z}}=({\bm{u}},{\bm{U}})\in Z$. The actions $\rho_X,\rho_Z$ are defined as in \eqref{e:rho_1}, while the action $\rho_Y$ is defined as in \eqref{e:rho_2}. The motivation behind the definition of $\rho_Y$ is that $\rho_Y(g) f$ would transform the shape represented by $f$, that is $f^{-1}(0)$, by $g$: \begin{align*} (\rho_Y(g)f)^{-1}(0) &= \set{{\bm{x}} \ \vert \ f({\bm{R}}^T({\bm{x}}-{\bm{t}}))=0} \\ &= \set{{\bm{R}}{\bm{x}}+{\bm{t}} \ \vert \ f({\bm{x}})=0} \\ &= {\bm{R}} f^{-1}(0) + {\bm{t}} \end{align*} The encoder is defined as $\Phi=\ip{\phi}_{\mathcal{F}}$, where the frames are computed as described in Section \ref{ss:frame_averaging} with constant weights ${\bm{w}}=\mathbf{1}$, and $\phi:X\rightarrow Z$ is a point cloud network (implementation details are provided in Section \ref{s:implementation}). Since the decoder needs to output an element in $Y$, which is a space of functions, we define the decoder by \begin{equation} \Psi({\bm{Z}}) = \hat{\Psi}({\bm{Z}},\cdot), \end{equation} where $\hat{\Psi}: Z \times \mathbb R^3 =\mathbb R^{m+3\times(d+1)}\rightarrow \mathbb R$. % Following \cite{deng2021vector}, to make the decoder $\Psi$ equivariant as a map $Z\rightarrow Y$ it is enough to ask that $\hat{\Psi}$ is equivariant under appropriate actions. Namely, the action in \eqref{e:rho_1} applied to $V=\mathbb R^{m+3\times (d+1)}$, and $W=\mathbb R$, where the latter is just the trivial action providing invariance, {i.e.}, $\rho_\mathbb R(g)\equiv 1$. \begin{proposition} \label{lem:Psi_hat_Psi} $\Psi$ is equivariant iff \ $\hat{\Psi}$ is equivariant. \end{proposition} % The decoder is therefore defined as $\hat{\Psi}=\ip{\psi}_{\mathcal{F}}$, where $\psi:Z\times \mathbb R^3 \rightarrow \mathbb R$ is an MLP (implementation details are provided in Section \ref{s:implementation}), and the frame is defined as in Section \ref{ss:frame_averaging} with constant weights ${\bm{w}}=\mathbf{1}$. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{temp.png} \caption{Piecewise Euclidean: Mesh $\rightarrow$ mesh. The same $\phi$ backbone is used for the equivariant encoding of each part. Similarly, the same $\psi$ backbone is used for the equivariant decoding of each part's latent code. Lastly, the final prediction is a weighted sum of each part's equivariant output mesh.} \label{fig:parts} \vspace{-7.5pt} \end{figure} \paragraph{Piecewise Euclidean: Mesh $\rightarrow$ mesh.} In this scenario we generalize our framework to be equivariant to \emph{different} Euclidean motions applied to different object parts (see Figure \ref{fig:parts}). We consider (as before) the shape spaces $X=Y=\mathbb R^{n\times 3}$ that represent all possible coordinate assignments to vertices of some fixed $n$-vertex template mesh. % The $k$ parts are defined using a partitioning weight matrix ({e.g.}, as used in linear blend skinning) ${\bm{W}}\in \mathbb R^{n\times k}$, where ${\bm{W}}_{i,j}\in [0,1]$ indicates the probability that the $i$-th vertex belongs to the $j$-th part, and $\sum_{j=1}^k {\bm{W}}_{i,j}=1$. % The latent space has the form $Z=Z_1\times \cdots \times Z_k$, where $Z_j \in \mathbb R^{m+d\times 3}$. Note that $k=1$ represents the case of a global Euclidean motion, as discussed above. 
The actions $\rho_X,\rho_Y, \rho_{Z_j}$, $j\in [k]=\set{1,\ldots,k}$, are defined as in \eqref{e:rho_1}. Lastly, we define the encoder and decoder by \begin{align}\label{e:local_enc_and_dec} \Phi({\bm{X}}) &= \parr{ \ip{\phi}_{{\mathcal{F}}_j}({\bm{X}}_j) \ \vert \ j\in[k]} \\ \label{e:piecewise_Psi} \Psi({\bm{Z}}) &= \sum_{j=1}^k {\bm{w}}_j \odot \ip{\psi({\bm{Z}}_j)}_{{\mathcal{F}}} \end{align} where $\phi:X\rightarrow Z$, $\psi:Z\rightarrow Y$ are Graph Neural Networks (GNNs) as above; ${\bm{X}}_j\in X$ is the geometry of each part, where all other vertices are mapped to the part's centroid, {i.e.}, $${\bm{X}}_j=(\mathbf{1}-{\bm{w}}_j)\frac{{\bm{w}}^T{\bm{X}}}{{\bm{w}}^T\mathbf{1}}+{\bm{w}}_j\odot{\bm{X}},$$ ${\bm{w}}_j = {\bm{W}}_{:,j}$ is the $j$-th column of the matrix ${\bm{W}}$, and each part's frame ${\mathcal{F}}_j$ is defined as in Section \ref{ss:frame_averaging} with weights ${\bm{w}}_j$. The part's latent code is ${\bm{Z}}_j = \ip{\phi}_{{\mathcal{F}}_j}({\bm{X}}_j) \in Z_j$. For a vector ${\bm{a}}\in\mathbb R^n$ and a matrix ${\bm{B}}\in\mathbb R^{n\times 3}$ we define the multiplication ${\bm{a}}\odot {\bm{B}}$ by $({\bm{a}}\odot {\bm{B}})_{i,j}={\bm{a}}_i {\bm{B}}_{i,j}$. If using hard weights, {i.e.}, ${\bm{W}}\in\set{0,1}^{n\times k}$, this construction guarantees \emph{part-equivariance}. That is, if the $j$-th part of an input shape ${\bm{X}}\in X$ is transformed by $g_j\in G$, $j\in [k]$, {i.e.}, \begin{equation*} {\bm{X}}' = \sum_{j=1}^k {\bm{w}}_j\odot (\rho_X(g_j){\bm{X}}) \end{equation*} then the corresponding latent codes, ${\bm{Z}}_j$, will be transformed by $\rho_{Z_j}(g_j)$, namely \begin{equation*} {\bm{Z}}'_j = \rho_{Z_j}(g_j){\bm{Z}}_j \end{equation*} and the decoded mesh will also transform accordingly, \begin{equation*} {\bm{Y}}' = \sum_{j=1}^k{\bm{w}}_j\odot (\rho_Y(g_j){\bm{Y}}). \end{equation*} \begin{theorem}\label{thm:part_equi} The encoder and decoder in equations \ref{e:local_enc_and_dec} and \ref{e:piecewise_Psi} are part-equivariant. \end{theorem} In practice we work with a smoothed weighting matrix allowing values in $[0,1]$, {i.e.}, ${\bm{W}}\in [0,1]^{n\times k}$, losing some of this exact part equivariance for a better treatment of transition areas between parts. \section{Implementation details} \label{s:implementation} In this section we provide the main implementation details; further details can be found in the supplementary. \paragraph{Mesh $\rightarrow$ mesh.} The backbone architectures for $\phi,\psi$ are $6$-layer GNNs, exactly as used in \cite{huang2021arapreg}; the specific dimensions of the layers and hidden features for each experiment are detailed in the supplementary appendix. We denote the learnable parameters of both networks by $\theta$. The training loss is the standard autoencoder reconstruction loss of the form \begin{equation}\label{e:loss_rec} {\mathcal{L}}_{\text{rec}}(\theta) = \frac{1}{N}\sum_{i=1}^N \norm{ \Psi(\Phi({\bm{X}}^{(i)})) - {\bm{X}}^{(i)} }_F \end{equation} where $\norm{\cdot}_F$ is the Frobenius norm and $\set{{\bm{X}}^{(i)}}_{i=1}^N\subset\mathbb R^{n\times 3}$ is a batch drawn from the shape space's training set. \paragraph{Point cloud $\rightarrow$ implicit.} The backbone encoder architecture $\phi$ is exactly as in \cite{mescheder2019occupancy}, constructed of a PointNet \cite{qi2017pointnet} with $4$ layers. The decoder is an MLP as in \cite{atzmon2020sald} with $8$ layers of $512$ features each.
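For concreteness, and returning briefly to the piecewise-Euclidean construction of \eqref{e:local_enc_and_dec} and \eqref{e:piecewise_Psi}, a rough sketch of how the per-part encoding and decoding wrap such backbones is given below (Python/NumPy, reusing the \texttt{pca\_frame} and \texttt{frame\_average} sketches above; \texttt{phi} and \texttt{psi} stand for the GNN backbones and are assumed to return features of the same invariant/equivariant form, with the decoder's equivariant output being the $n\times 3$ vertex coordinates; the choice of decoder frame below is an illustrative assumption):
\begin{verbatim}
import numpy as np

def encode_piecewise(X, W, phi):
    # X: n x 3 vertices, W: n x k (soft) part weights. Returns the k codes Z_j.
    codes = []
    for j in range(W.shape[1]):
        w = W[:, j]
        c = (w @ X) / w.sum()                          # part centroid
        Xj = (1.0 - w)[:, None] * c + w[:, None] * X   # X_j: rest collapsed to c
        codes.append(frame_average(phi, np.zeros(0), Xj, pca_frame(Xj, w)))
    return codes

def decode_piecewise(codes, W, psi):
    # Weighted sum of the per-part decoded meshes.
    Y = 0.0
    for j, (zu, ZU) in enumerate(codes):
        _, Yj = frame_average(psi, zu, ZU, pca_frame(ZU))
        Y = Y + W[:, j:j + 1] * Yj
    return Y
\end{verbatim}
With hard weights, transforming the $j$-th part of the input by $g_j$ transforms only \texttt{codes[j]} and the corresponding summand of the output, in line with Theorem \ref{thm:part_equi}. We now return to the point cloud $\rightarrow$ implicit implementation and its training.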
We trained a VAE where the latent space is $Z=\mathbb R^{d + m +d\times 3}$ containing codes of the form $({\bm{\mu}},{\bm{\eta}})$, where ${\bm{\mu}}\in\mathbb R^{m+d\times 3}$ is the latent mean, and ${\bm{\eta}}\in\mathbb R^{d}$ is the invariant latent log-standard-deviation. For training the VAE we use a combination of two losses \begin{equation} {\mathcal{L}}(\theta) = {\mathcal{L}}_{\text{sald}}(\theta) + 0.001 {\mathcal{L}}_{\text{vae}}(\theta), \end{equation} where ${\mathcal{L}}_{\text{sald}}$ is the SALD loss \cite{atzmon2020sald}, \begin{equation} {\mathcal{L}}_{\text{sald}}(\theta) = \frac{1}{N}\sum_{i=1}^N \int_\Omega \tau(\Psi({\mathcal{N}}({\bm{\mu}}^{(i)},{\bm{\eta}}^{(i)})), h)({\bm{x}})d{\bm{x}} \end{equation} where $({\bm{\mu}}^{(i)},{\bm{\eta}}^{(i)})=\Phi({\bm{X}}^{(i)})$, ${\mathcal{N}}({\bm{a}},{\bm{b}})$ is an axis aligned Gaussian i.i.d.~sample with mean ${\bm{a}}$ and standard deviation $\exp(\diag({\bm{b}}))$. $h(\cdot)$ is the unsigned distance function to ${\bm{X}}^{(i)}$, and $\tau(f,g)({\bm{x}}) = \abs{\abs{f({\bm{x}})} - g({\bm{x}})} + \min\set{ \norm{\nabla f({\bm{x}})-\nabla g({\bm{x}})}_2 , \norm{\nabla f({\bm{x}})+\nabla g({\bm{x}})}_2}$. The domain of the above integral, $\Omega\subset \mathbb R^3$, is set according to the scene's bounding box. In practice, the integral is approximated using Monte-Carlo sampling. Note that this reconstruction loss is unsupervised (namely, using only the input raw point cloud). The VAE loss is defined also as in \cite{atzmon2020sald} by \begin{equation} {\mathcal{L}}_{\text{vae}}(\theta) = \sum_{i=1}^N \norm{{\bm{\mu}}^{(i)}}_1 + \norm{{\bm{\eta}}^{(i)} + \mathbf{1}}_1, \end{equation} where $\norm{\cdot}_1$ denotes the $1$-norm. \section {Experiments}\label{sec:exp} We have tested our FA shape space learning framework under two kinds of symmetries $G$: global Euclidean transformations and piecewise Euclidean transformations. \subsection{Global Euclidean} In this case, we tested our method both in the mesh $\rightarrow$ mesh and the point-cloud $\rightarrow$ implicit settings. \begin{table}[t] \centering \scriptsize \setlength\tabcolsep{8pt} \begin{tabular}{c} \begin{adjustbox}{max width=\textwidth} \aboverulesep=0ex \belowrulesep=0ex \renewcommand{\arraystretch}{1.1} \begin{tabular}{l|c|c|c} Method & $I$ & $z$ & $SO(3)$ \\ \hline AE & 5.16 & 9.96 & 15.41 \\ AE-Aug & 5.22 & 5.86 & 5.12 \\ Ours & \textbf{4.39} & \textbf{4.35} & \textbf{4.66} \end{tabular} \end{adjustbox} \end{tabular} \caption{ Global Euclidean mesh$\rightarrow$mesh shape space experiment; MSE error (lower is better) in three test versions of the DFAUST \cite{dfaust:CVPR:2017} dataset, see text for details. } \label{tab:mesh_global_rigid} \end{table} \vspace{-10pt} \paragraph{Mesh $\rightarrow$ mesh.} In this experiment we consider the DFaust dataset \cite{dfaust:CVPR:2017} of human meshes parameterized with SMPL \cite{SMPL:2015}. The dataset consists of 41,461 human shapes where a random split is used to generate a training set of 37,197 models, and a test set of 4,264 models. We used the same generated data and split as in \cite{huang2021arapreg}. We generated two additional test sets of randomly oriented models: randomly rotated models about the up axis (uniformly), denoted by $z$, and randomly rotated models (uniformly), denoted by $SO3$. We denote the original, aligned test set by $I$. 
We compare our global Euclidean mesh$\rightarrow$mesh autoencoder with the following baselines: a vanilla graph autoencoder, denoted AE; and the same AE trained with random rotation augmentations, denoted AE-Aug. Note that the architecture used for AE and AE-Aug is the same backbone architecture used for our FA model. Table \ref{tab:mesh_global_rigid} reports the average per-vertex Euclidean distance (MSE) on the various test sets: $I$, $z$ and $SO3$. Note that FA compares favorably to the baselines in all tests. \begin{table}[h] \centering \scriptsize \setlength\tabcolsep{4.0pt} \begin{tabular}{c} \begin{adjustbox}{max width=\textwidth} \aboverulesep=0ex \belowrulesep=0ex \renewcommand{\arraystretch}{1.0} \begin{tabular}[t]{l|cc|cc|cc|cc} \multicolumn{1}{l}{} & \multicolumn{2}{c}{teddy bear} & \multicolumn{2}{c}{bottle} & \multicolumn{2}{c}{suitcase} & \multicolumn{2}{c}{banana} \\ Method & $\dist_{\text{C}}^{\rightarrow}$ & $\dist_{\text{C}}$ & $\dist_{\text{C}}^{\rightarrow}$ & $\dist_{\text{C}}$ & $\dist_{\text{C}}^{\rightarrow}$ & $\dist_{\text{C}}$ & $\dist_{\text{C}}^{\rightarrow}$ & $\dist_{\text{C}}$ \\ \hline VAE & 5.11 & 2.611 & 0.419 & 0.225 & 0.619 & 0.341 & 0.309 & 0.177 \\ VN \cite{deng2021vector} & 0.047 & \textbf{0.421} & 0.638 & 0.334 & 0.348 & 0.218 & 0.157 & 0.087 \\ Ours & \textbf{0.046} & 0.451 & \textbf{0.226} & \textbf{0.129} & \textbf{0.079} & \textbf{0.086} & \textbf{0.118} & \textbf{0.074} \end{tabular} \end{adjustbox} \end{tabular} \caption{Global Euclidean point cloud $\rightarrow$ implicit shape space experiment; CommonObject3D \cite{reizenstein2021common} dataset.} \label{tab:common3D} \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{all_imp_2.png} \caption{Global Euclidean point cloud $\rightarrow$ implicit, qualitative test results; CommonObject3D \cite{reizenstein2021common} dataset.} \label{fig:common3D} \vspace{-15pt} \end{figure} \vspace{-10pt} \paragraph{Point cloud $\rightarrow$ implicit.} In this experiment we consider the CommonObject3D dataset \cite{reizenstein2021common}, which contains 19k objects from 50 different classes. We have used only the objects' point clouds extracted from videos using COLMAP \cite{schoenberger2016sfm}. The point clouds are very noisy and partial (see {e.g.}, Figure \ref{fig:common3D}, where input point clouds are shown in red), providing a challenging ``real-life'' dataset. Note that we have not used any other supervision for the implicit learning. We have used 4 object categories: teddy bear ($747$ point clouds), bottle (296 point clouds), suitcase (480 point clouds), and banana (197 point clouds). We have divided each category into train and test sets randomly using a 70\%-30\% split. We compare to the following baselines: a Variational Autoencoder, denoted VAE; and the Vector Neurons \cite{deng2021vector} version of this VAE, denoted VN. We used the official VN implementation. For our method we used an FA version of the same baseline VAE architecture. Table \ref{tab:common3D} reports two error metrics on the test set: $\dist_{\text{C}}^{\rightarrow}$, the one-sided Chamfer distance from the input point cloud to the generated shape, and $\dist_{\text{C}}$, the symmetric Chamfer distance (see the supplementary for exact definitions). Note that our method improves the symmetric Chamfer metric in almost all cases. Figure \ref{fig:common3D} shows some typical reconstructed implicits (after contouring) in each category, along with the input test point cloud (in red).
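For completeness, a standard form of these two metrics, stated here only as an illustration (the exact definitions used for the numbers in Table \ref{tab:common3D} are in the supplementary), is sketched below in Python/NumPy; for a reconstructed implicit, the generated shape is first sampled, e.g., from the contoured mesh:
\begin{verbatim}
import numpy as np

def one_sided_chamfer(P, Q):
    # Mean distance from each point of P (n x 3) to its nearest neighbor in Q (m x 3).
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def chamfer(P, Q):
    # One common symmetric convention: average of the two one-sided terms.
    return 0.5 * (one_sided_chamfer(P, Q) + one_sided_chamfer(Q, P))
\end{verbatim}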
Qualitatively, our framework provides a more faithful reconstruction, even in this challenging noisy scenario without supervision. Note that for the ``teddy bear'' class we provide a visually improved reconstruction that is not apparent in the quantitative score due to the high noise and outliers in the input point clouds. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{local.jpg} \caption{Piecewise Euclidean mesh $\rightarrow$ mesh, qualitative results; DFaust \cite{dfaust:CVPR:2017} dataset. Colors mark different splits: green is the random (easy) split; orange is the unseen random pose split; and red is the unseen pose split, see text for details. Our method demonstrates consistently high-quality results across splits of different difficulty levels.\vspace{10pt} } \label{fig:piecewise} \vspace{-12pt} \end{figure*} \subsection{Piecewise Euclidean} \vspace{-2pt} \paragraph{Mesh $\rightarrow$ mesh.} In this experiment we consider three different datasets: DFaust \cite{dfaust:CVPR:2017}, SMAL \cite{zuffi20173d} and MANO \cite{romero2020group}. For the DFaust dataset we used train-test splits of increasing difficulty: a random train-test split taken from \cite{huang2021arapreg}, as described above; unseen random pose - removing a random (different) pose sequence from each human and using it as test; and unseen pose - removing the \emph{same} pose sequence from all humans and using it as test. The SMAL dataset contains a four-legged animal in different poses. We use the data generated in \cite{huang2021arapreg} of 400 shapes, randomly split into $300$ train and $100$ test shapes. The MANO dataset contains 3D models of realistic human hands in different poses. Using the MANO SMPL model we generated 150 shapes, randomly split into $100$ train and $50$ test shapes. We compared our method to the following baselines: a vanilla autoencoder, denoted AE, and ARAPReg \cite{huang2021arapreg}, which reported state-of-the-art results on this dataset. Note that both the AE and our method use the same backbone architecture. ARAPReg reports the autodecoder variant to be superior in their experiments, and therefore we compare to that version. Note that all compared methods have the same (backbone) decoder architecture. Figure \ref{fig:piecewise} shows typical reconstruction results on the test sets: green marks the random (easy) split; orange marks the random unseen pose split; and red marks the global unseen pose split. Note that our method produces very high-fidelity approximations of the ground truth, visually reducing the artifacts, noise and inaccuracies seen in the baselines (zoom in to see details). Lastly, we note that we also use the part-partition skinning weight matrix (defined on the single rest-pose model) as extra supervision not used by ARAPReg. \begin{table}[h] \centering \scriptsize \setlength\tabcolsep{3.8pt} \begin{tabular}{c} \begin{adjustbox}{max width=\textwidth} \aboverulesep=0ex \belowrulesep=0ex \renewcommand{\arraystretch}{1.1} \begin{tabular}{l|c|c|c|c|c} Method & random & unseen random pose & unseen pose & SMAL & MANO \\ \hline AE & 5.45 & 7.99 & 6.27 & 9.11 & 1.34 \\ ARAPReg & 4.52 & 7.77 & 3.38 & 6.68 & 1.15 \\ Ours & \textbf{1.68} & \textbf{1.89} & \textbf{1.90} & \textbf{2.44} & \textbf{0.86} \end{tabular} \end{adjustbox} \end{tabular} \caption{ Piecewise Euclidean mesh $\rightarrow$ mesh experiment; MSE error (lower is better); DFaust \cite{dfaust:CVPR:2017}, SMAL \cite{zuffi20173d} and MANO \cite{romero2020group} datasets.
\vspace{-10pt}} \label{tab:normal} \end{table} \begin{table*}[t] \centering \scriptsize \setlength\tabcolsep{6pt} \begin{tabular}{c} \begin{adjustbox}{max width=\textwidth} \aboverulesep=0ex \belowrulesep=0ex \renewcommand{\arraystretch}{1.0} \begin{tabular}[t]{c|ccc|ccc|ccc|ccc} \multicolumn{1}{c}{} & \multicolumn{6}{c}{Within Distribution} & \multicolumn{6}{|c}{Out of Distribution} \\ \cmidrule{2-13} \multicolumn{1}{c}{} & \multicolumn{3}{c}{IoU bbox} & \multicolumn{3}{c}{IoU surface} & \multicolumn{3}{c}{IoU bbox} & \multicolumn{3}{c}{IoU surface} \\ & NASA & SNARF & Ours & NASA & SNARF & Ours & NASA & SNARF & Ours & NASA & SNARF & Ours \\ \midrule 50002 & 96.56\% & 97.50\% & \textbf{98.67\%} & 84.02\% & 89.57\% & \textbf{93.28\%} & 87.71\% & 94.51\% & \textbf{96.76\%}& 60.25\% & 79.75\% & \textbf{85.06\%} \\ 50004 & 96.31\% & 97.84\% & \textbf{98.64\%} & 85.45\% & 91.16\% & \textbf{94.57\%} & 86.01\% & 95.61\% & \textbf{96.19\%} & 62.53\% & 83.34\% & \textbf{85.84\%} \\ 50007 & 96.72\% & 97.96\% & \textbf{98.62\%} & 86.28\% & 91.02\% & \textbf{94.11\%} & 80.22\% & 93.99\% & \textbf{95.31\%} & 51.82\% & 77.08\% & \textbf{81.91\%} \\ 50009 & 94.96\% & 96.68\% & \textbf{97.75\%} & 84.52\% & 88.19\% & \textbf{92.84\%} & 78.15\% & 91.22\% & \textbf{94.75\%} & 55.86\% & 75.84\% & \textbf{84.60\%} \\ 50020 & 95.75\% & 96.27\% & \textbf{97.61\%} & 87.57\% & 88.81\% & \textbf{92.60\%} & 83.06\% & 93.57\% & \textbf{95.17\%} & 62.01\% & 81.37\% & \textbf{85.66\%} \\ 50021 & 95.92\% & 96.86\% & \textbf{98.55\%} & 87.01\% & 90.16\% & \textbf{95.38\%} & 81.80\% & 93.76\% & \textbf{96.35\%} & 65.49\% & 81.49\% & \textbf{88.86\%} \\ 50022 & 97.94\% & 97.96\% & \textbf{98.39\%} & 91.91\% & 92.06\% & \textbf{93.68\%} & 87.54\% & 94.67\% & \textbf{96.12\%} & 70.23\% & 83.37\% & \textbf{85.80\%} \\ 50025 & 95.50\% & 97.54\% & \textbf{98.48\%} & 86.19\% & 91.25\% & \textbf{94.74\%} & 83.14\% & 94.48\% & \textbf{95.99\%} & 60.88\% & 82.48\% & \textbf{86.58\%} \\ 50026 & 96.65\% & 97.64\% & \textbf{98.61\%} & 87.72\% & 91.09\% & \textbf{94.64\%} & 84.58\% & 94.13\% & \textbf{96.45\%} & 59.78\% & 80.01\% & \textbf{87.10\%} \\ 50027 & 95.53\% & 96.80\% & \textbf{97.95\%} & 86.13\% & 89.47\% & \textbf{93.46\%} & 83.97\% & 93.76\% & \textbf{95.61\%} & 61.82\% & 81.81\% & \textbf{86.60\%} \end{tabular} \end{adjustbox} \end{tabular} \vspace{-3pt} \caption{Piecewise Euclidean mesh $\rightarrow$ mesh, comparison to implicit articulation methods. DFaust \cite{dfaust:CVPR:2017} and PosePrior \cite{akhter2015pose} datasets. \vspace{10pt}} \label{tab:all_humans_splits} \vspace{-18pt} \end{table*} \vspace{-8pt} \paragraph{Interpolation in shape space.} In this experiment we show qualitative results for interpolating two latent codes ${\bm{Z}}^{(j)}=({\bm{q}}^{(j)},{\bm{Q}}^{(j)})\in Z$, $j=0,1$, computed with our encoder for two input shapes ${\bm{X}}^{(j)}$, $j=0,1$. We use the encoder and decoder learned in the "unseen pose" split described above. Since $Z$ is an equivariant feature space, if ${\bm{X}}^{(1)}$ is an Euclidean transformed version of ${\bm{X}}^{(0)}$, {i.e.}, ${\bm{X}}^{(1)} = \rho_X(g){\bm{X}}^{(0)}$, then equivariance would mean that ${\bm{Z}}^{(1)}=\rho_Z(g){\bm{Z}}^{(0)}$. Therefore interpolation in this case should be done by finding the optimal rotation and translation between the equivariant parts of ${\bm{Z}}^{(0)}$ and ${\bm{Z}}^{(1)}$ and continuously rotating and translating ${\bm{Z}}^{(0)}$ into ${\bm{Z}}^{(1)}$. 
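A minimal sketch of this interpolation is given below (Python, with \texttt{scipy} used only for the rotation interpolation; it assumes the code is stored as an invariant vector ${\bm{q}}$ and an equivariant $d\times 3$ matrix ${\bm{Q}}$, and is illustrative only; the full derivation is provided in the supplementary). The rigid alignment between the equivariant parts is recovered in closed form, and the part of the difference it cannot explain is blended linearly:
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def procrustes(Q0, Q1):
    # Best R, t with Q0 @ R.T + t ~= Q1 (rows are 3D vectors).
    c0, c1 = Q0.mean(0), Q1.mean(0)
    U, _, Vt = np.linalg.svd((Q0 - c0).T @ (Q1 - c1))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c1 - c0 @ R.T

def interpolate_codes(z0, z1, s):
    # z = (q, Q); s in [0, 1]. Geodesic blend of the aligning rotation,
    # linear blend of the translation, residual, and invariant part.
    (q0, Q0), (q1, Q1) = z0, z1
    R, t = procrustes(Q0, Q1)
    Rs = Rotation.from_rotvec(s * Rotation.from_matrix(R).as_rotvec()).as_matrix()
    residual = Q1 - (Q0 @ R.T + t)
    return (1 - s) * q0 + s * q1, Q0 @ Rs.T + s * t + s * residual
\end{verbatim}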
This can be done using the closed form solution to the rotational Procrustes problem (see {e.g.}, \cite{zhu20073d,schonemann1966generalized}). For two general codes ${\bm{Z}}^{(j)}$ we use this procedure while linearly adding the residual difference after cancelling the optimal rotation and translation between the codes. In the supplementary we provide the full derivation of this interpolation, denoted ${\bm{Z}}^{t}$, $t\in[0,1]$. Figure \ref{fig:interp} shows the result of decoding the interpolated latent codes ${\bm{Z}}^t$, $t\in[0,1]$, with the learned decoder. Note that both shape and pose gracefully and naturally change along the interpolation path. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{no_pose_int.png} \caption{Interpolation in equivariant latent space between two test examples from the "unseen pose" split (leftmost and rightmost columns). \vspace{0pt}} \label{fig:interp} \vspace{-15pt} \end{figure*} \paragraph{Comparison with implicit pose-conditioned methods.} Lastly, we have trained our piecewise Euclidean mesh $\rightarrow$ mesh framework on the DFaust subset of AMASS \cite{mahmood2019amass}. Following the protocol defined in \cite{chen2021snarf}, we trained our model on each of the $10$ subjects and tested it on the "within distribution" SMPL \cite{SMPL:2015} generated data from SNARF \cite{chen2021snarf}, as well as on their "out of distribution" test from the PosePrior \cite{akhter2015pose} dataset. We use both SNARF \cite{chen2021snarf} and NASA \cite{deng2020nasa} as baselines. Table \ref{tab:all_humans_splits} reports quantitative error metrics: the Intersection over Union (IoU) computed using samples within the bounding box (bbox) and near the surface; see the supplementary for more details. Figure \ref{fig:geiger} shows a comparison with SNARF on test reconstructions from the "out of distribution" set. We note that our setting is somewhat easier than that faced by SNARF and NASA (we perform mesh $\rightarrow$ mesh learning using a fixed mesh connectivity and skinning weights; the skinning weights are used by NASA and learned by SNARF). Nevertheless, we do not assume or impose anything on the latent space besides Euclidean equivariance, nor use an input pose explicitly, and we train only with a simple reconstruction loss (see \eqref{e:loss_rec}). Under this disclaimer we note that we improve the reconstruction error both qualitatively and quantitatively in comparison to the baselines. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{geiger.png} \caption{Comparison with SNARF on the "out of distribution" test set.} \label{fig:geiger} \vspace{-25pt} \end{figure} \vspace{-6pt} \section {Limitations and future work} We have introduced a generic methodology for incorporating symmetries by construction into encoders and/or decoders in the context of shape space learning. Using Frame Averaging we showed how to construct expressive yet efficient equivariant autoencoders. We instantiated our framework for the cases of global and piecewise Euclidean motions, as well as for mesh $\rightarrow$ mesh and point cloud $\rightarrow$ implicit scenarios. We have achieved state-of-the-art quantitative and qualitative results in all experiments. Our method has several limitations: first, in the mesh $\rightarrow$ mesh case we use fixed connectivity and skinning weights. Generalizing the piecewise Euclidean case to implicit representations, dealing with large-scale scenes with multiple objects, or learning the skinning weights would be interesting future work. 
Trying to use linear blend skinning to define a group action of $E(3)^k$ could also be interesting. Finally, using this framework to explore other symmetry types, combinations of geometry representations (including images, point clouds, implicits, and meshes), and different architectures could lead to exciting new methods to learn and use shape space in computer vision. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} DL-based malware detectors have recently gained attention in the field of cybersecurity due to their ability to identify unseen malware variants without manual feature engineering and expensive dynamic analysis of the behavior of malware instances in a sandbox \cite{raff2018malware}. However, DL-based malware detectors have been shown to be vulnerable to small perturbations in their input, known as adversarial examples \cite{demetrio2019explaining}. The automated generation of such inputs is known as Adversarial Malware example Generation (AMG), which aims to generate functionality-preserving malware variants that mislead these malware detectors. Emulating AMG attacks against malware detectors can help strengthen their malware detection performance \cite{goodfellow2018making}. AMG methods can generally be classified into white-box and black-box settings \cite{qiu2019review}. Many AMG methods fall under the white-box classification, where the attackers know the model parameters and architecture of the targeted malware detector. Contrarily, black-box methods assume no attacker knowledge of the model parameters and architecture of the targeted malware detector. Since practical AMG scenarios align more with the black-box setting, black-box methods have drawn more attention. The most widely-used black-box AMG techniques rely on emulating append attacks, an additive approach that injects bytes at non-executable locations in the malware binary. The popularity of append attacks is due to the fact that they are less likely to affect the malware functionality \cite{suciu2019exploring}. To generate adversarial malware variants, these methods require detector feedback, often a Boolean value indicating whether the variant has evaded the malware detector or not. Specifically, current append-based AMG methods require a considerable amount of detector feedback to operate effectively \cite{kolosnjaji2018adversarial} \cite{suciu2019exploring}. Given that real-life malware detectors enforce a query limit, these AMG methods are rendered ineffective due to their query inefficiency. The evasion of detectors via only one query, known as \textit{single-shot evasion}, has been well studied in image applications \cite{ilyas2018black}. However, it is understudied in the AMG context. Because the goal of AMG research is to emulate real attacks and improve the performance of malware detectors, there is a vital need for single-shot AMG methods that can emulate realistic adversarial attacks. We expect that well-designed DL-based AMG methods can achieve such performance by automatically extracting salient features from the malware sample \cite{awad2018modeling}. Recently, DL-based language models have been shown to effectively extract salient features from sequential data \cite{belletti2019quantifying}. To this end, by viewing a malware executable as a byte sequence, DL-based language models can be utilized to generate evasive malware content \cite{awad2018modeling}. Leveraging DL-based language models' ability to effectively extract salient features, we seek to increase the likelihood of single-shot AMG evasion. Nevertheless, traditional language models are inefficient at byte sequence generation due to long-range dependencies present in lengthy byte sequences \cite{raff2018malware}. To address this, recent Natural Language Processing (NLP) research has shown DL-based causal language models (CLM) to be a viable solution \cite{hosseini2020simple}. 
Specifically, a recent causal deep language model, the Generative Pre-trained Transformer (GPT), has yielded state-of-the-art performance in processing long sequences and high-quality text generation \cite{hosseini2020simple}. Inspired by GPT’s success, we propose a novel AMG framework using a GPT-based language model learned from raw malware content to conduct AMG under a query-efficient threat model that increases the chance of single-shot evasion. The rest of this paper is organized as follows. First, we review AMG, CLMs, and GPT. Subsequently, we detail the components of our proposed framework. Lastly, we compare the performance of our proposed method with other state-of-the-art AMG methods and highlight promising future directions. \section{Literature Review} Three areas of research are examined. First, we review extant AMG studies as the overarching area for our study. Second, we examine CLMs as an effective class of language models that can facilitate learning patterns in long byte sequences from malware executables. Third, we review GPT as a state-of-the-art causal language model. \subsection{Adversarial Malware Generation (AMG)} AMG aims to perturb malware samples and generate variants that evade malware detectors. Among the prevailing AMG methods, append attacks are the most practical due to their high chance of preserving the functionality of the original malware executable \cite{suciu2019exploring}. We summarize selected significant append-based prior work based on their data source, attack method used, and presence of a query limit in Table \ref{lit_overview}. Three major observations are made from Table \ref{lit_overview}. First, the majority of studies use VirusTotal, an open-source online malware database, as a source of their malware samples \cite{demetrio2021functionality}\cite{castro2019armed}\cite{castro2019poster}\cite{chen2019adversarial}\cite{rosenberg2019defense}\cite{suciu2019exploring}\cite{anderson2018learning}. Second, regarding selected attack methods, a few notable attack methods include the simple append attack \cite{suciu2019exploring}, attacking using randomly generated perturbations \cite{castro2019poster}, and attacking using specific perturbations that lower a malware detector's score \cite{chen2019adversarial}. More advanced methods incorporate machine learning techniques (Genetic Programming \cite{demetrio2021functionality} \cite{dey2019evadepdf}, Gradient Descent \cite{castro2019armed}, and Dynamic Programming \cite{park2019generation}) and implement advanced DL-based techniques (Generative Adversarial Networks \cite{rosenberg2019defense}, Deep Reinforcement Learning \cite{anderson2018learning}, and Generative Recurrent Neural Networks \cite{ebrahimi2020binary}\cite{hu2018black}). Third, and most importantly, while a sizable amount of AMG research either does not limit the number of queries to the malware detector or allows conducting multiple queries, few studies (Suciu et al. \cite{suciu2019exploring}) operate in a single-shot AMG evasion setting. However, the proposed method in \cite{suciu2019exploring} does not use potentially more effective machine learning approaches. Furthermore, more advanced attack methods, such as DL-based ones, require multiple queries per malware file to evade. This is due to the fact that, if not properly designed, DL-based methods require multiple interactions with the detector to receive feedback and learn to generate evasive samples through back-propagation. Thus, they are less likely to perform single-shot AMG evasion. 
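To make the append primitive discussed above concrete, the following minimal Python sketch (our own illustration under simplifying assumptions, not the exact procedure of any cited work) appends a payload after the end of an executable; such trailing overlay bytes are typically ignored at load time, which is why append attacks tend to preserve functionality:
\begin{verbatim}
# Minimal append-attack primitive (illustrative sketch only).
# Bytes written after the end of an executable become overlay data that the
# loader normally ignores, which is why append attacks tend to preserve the
# malware's functionality.
import os


def append_payload(malware_path, variant_path, payload):
    """Write a variant: the original binary followed by the payload bytes."""
    with open(malware_path, "rb") as f:
        original = f.read()
    with open(variant_path, "wb") as f:
        f.write(original + payload)


if __name__ == "__main__":
    # Append 1 KB of random bytes as a stand-in for generated content.
    append_payload("sample.exe", "sample_variant.exe", os.urandom(1024))
\end{verbatim}
In the methods surveyed above, the appended payload ranges from random or benign bytes to sequences produced by learned generative models.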
Overall, we observe that most AMG studies either do not limit the number of queries to the detector or require multiple queries per malware sample to evade the malware detector. This highlights the inefficiency of their attack methods at extracting salient features and generating evasive samples. \subsection{Causal Language Models (CLMs)} DL-based language models have been shown to effectively extract salient features from sequential data \cite{belletti2019quantifying}. Recently, DL-based language models have been successfully utilized in malware analysis \cite{ebrahimi2020binary}\cite{hu2018black}. Recent AMG research demonstrated the viability of treating malware binaries as a language and generating byte sequences, allowing for automatic perturbation generation in the AMG context \cite{ebrahimi2020binary}. However, due to long-range dependencies in the byte sequences from malware executables, traditional language models become ineffective at learning the patterns present in malware binaries \cite{raff2018malware}. Current NLP research has shown CLMs to be a promising alternative that can learn patterns in long sequential data. CLMs are characterized by using outputs from previous time steps as inputs in future time steps as they generate bytes. This allows CLMs to reference past information when generating current sequences. Compared to other state-of-the-art language models like BERT \cite{devlin2018bert} and XLNet \cite{yang2019xlnet}, CLMs tend to be less computationally intensive. This allows them to process larger input sequences, making them more suitable for long-range language generation in an AMG context. \subsection{Generative Pre-trained Transformer} While CLMs have proved promising in processing sequential data, recent studies have further improved their performance. Specifically, GPT, a recent CLM, has yielded breakthrough performances on NLP tasks \cite{raff2018malware}. GPT is composed of 12 interconnected decoder blocks, each consisting of a self-attention layer and a feed-forward neural network, as shown in Figure \ref{GPT_Figure}. \begin{figure}[h] \centering \vspace{-3mm} \includegraphics[width=0.25\textwidth]{GPT_figure.png} \vspace{-3mm} \caption{GPT Architecture Utilized in Byte Generation} \vspace{-3mm} \label{GPT_Figure} \end{figure} The first layer in each decoder block, known as the self-attention layer, generates a representation that determines each token's importance in relation to the current token \cite{vaswani2017attention}. From the self-attention layer, a feed-forward neural network acts as a gateway to pass the obtained representation to the next decoder block \cite{radford2019language}. The same structure holds for each of the 12 decoder blocks, where the last generated sequence is the final output of the $12^{th}$ decoder block. GPT’s causal nature and its built-in self-attention mechanism allow it to better learn longer-range patterns when compared to traditional language models \cite{chen2020generative}. \section{Research Gaps and Questions} Based on our literature review, two research gaps are identified. First, within the AMG domain, most methods cannot evade malware detectors in a single-shot fashion, causing them to be query inefficient. Second, regarding the methodology, while GPT has shown promising performance in NLP tasks, it is unclear how it could be applied in malware analysis and specifically in the AMG context. 
To address the identified gaps, the following research question is posed: \begin{itemize} \item{How can an adversarial causal language model be developed to evade malware detectors with a minimal number of queries, so as to maximize the chance of single-shot evasion?} \end{itemize} Motivated by this question, we propose MalGPT, a novel framework to automatically construct adversaries for evading malware detectors in one query utilizing the causal language model GPT2 (a publicly available implementation of GPT). \section{Research Design} Following previous AMG studies, we first introduce the threat model under which our proposed MalGPT operates \cite{biggio2013security}. Then, we examine the architecture of MalGPT and its training process. Finally, we introduce our testbed and the targeted malware detector used in MalGPT's training and evaluation. \subsection{Threat Model} A threat model is a systematic representation of cyber attacks \cite{carlini2019evaluating}\cite{biggio2013security}. Since the goal of our study is evading DL-based detectors with only one query and without accessing the internal model parameters of the malware detector, our threat model focuses on a single-shot, black-box setting. Accordingly, the three major components of our threat model are: \begin{itemize} \item \textbf{Adversary’s Goal:} Evade DL-based malware detectors in a single shot. That is, evasive adversarial malware variants that are generated after one interaction with the detector do not count towards the model's success. \item \textbf{Adversary’s Knowledge:} \begin{itemize} \item The structure and parameters of the malware detector model are unknown to the attacker. \item The attacker does not have access to the confidence score produced by the malware detector (fully black-box attack). \end{itemize} \item \textbf{Adversary's Capability:} The adversary applies functionality-preserving append modifications on malware binaries. \begin{itemize} \item Consistent with past AMG studies \cite{ebrahimi2020binary} \cite{kolosnjaji2018adversarial}, the size of the modifications must stay under 10 KB to maintain the stealth of the generated malware variant. \end{itemize} \end{itemize} \begin{figure*}[t] \centering \vspace{-3mm} \includegraphics[width=0.85\textwidth]{MalGPT_Architecture.png} \vspace{-3mm} \caption{Abstract View of MalGPT Malware Evasion Framework} \vspace{-3mm} \label{Framework_Architecture} \end{figure*} \subsection{MalGPT Architecture}\label{ModelFrameworkSection} To realize this threat model, MalGPT employs a GPT2 language model that is trained to generate benign-looking byte sequences. The trained language model is utilized to generate evasive and functionality-preserving variants of existing known malware executables, as depicted in Figure \ref{Framework_Architecture}. This process is detailed in 5 steps: \begin{itemize} \item \textit{Step 1:} The binary content of a malware sample is fed into the trained GPT2 model. \item \textit{Step 2:} The model generates a file-specific byte sequence. \item \textit{Step 3:} The generated sequence is added to the original malware sample, resulting in a new malware variant. \item \textit{Step 4:} The new malware variant is examined for functionality using the VirusTotal API. \item \textit{Step 5:} After confirming its functionality, the generated variant attempts to evade a malware detector in a single query. \end{itemize} \subsection{MalGPT Model Training}\label{ModelTrainingSection} In order to generate benign-looking sequences, MalGPT's GPT2 model is trained on a set of benign files. 
Figure \ref{Model_Training} provides an illustration of this process, with the final trained model being incorporated into Figure \ref{Framework_Architecture} as the Binary-Trained MalGPT Generation. \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{MalGPT_Training_Illustration.png} \caption{Illustration of MalGPT's Model Training Process} \label{Model_Training} \end{figure} The model is trained in a two-step process. In Step 1, benign files are first converted to a single hex string delimited by sets of four characters (e.g., `AA04 FF44 ...') and fed into the GPT2 model. In Step 2, the model automatically extracts salient features from the hex string and learns how to generate benign-looking byte sequences. After repeating Step 2 for 1,000 training iterations, the process results in a trained model that takes malware byte sequences as input to generate benign-looking, file-specific perturbations. \subsection{Testbed and Targeted Malware Detector} As stated in prior sections, MalGPT requires both benign and malicious files to be trained and to evade malware detectors. To this end, following the approach in \cite{raff2018malware}, we collected 13,554 benign Microsoft Windows system files for MalGPT to learn from. Additionally, we obtained 6,307 malicious samples from VirusTotal in eight major malware categories. Table \ref{testbed_table} summarizes the distribution of these categories along with their description and examples. \begin{table}[!ht] \centering \begin{center} \vspace{-4mm} \caption{Malware Samples in our Testbed} \vspace{-2mm} \begin{tabular}{ |m{1.2cm}<{\centering} |m{3.0cm}<{\centering} |m{1.5cm}<{\centering} |m{0.6cm}<{\centering} |} \hline \textbf{Malware Category} &\textbf{Description} &\textbf{Examples} &\textbf{\# of Files}\\ \hline \textbf{Adware} & Shows unwanted ads and forces internet traffic to sites & eldorado, razy, gator & 1,947\\ \hline \textbf{Backdoor} & Negates normal authentications to access the host & lunam, rahack, symmi & 678\\ \hline \textbf{Botnet} & A network of bots connected through the internet & virut, salicode, sality & 526\\ \hline \textbf{Dropper} & Secretly installs other malware on the host & dinwod, gepys, doboc & 904\\ \hline \textbf{\makecell{Ransom-\\ware}} & Encrypts data and files, restricting access and usage until decrypted by malware authors & vtflooder, msil, bitman & 900\\ \hline \textbf{Rootkit} & Grants admin privileges to the malware author & onjar, dqqd, shipup & 53\\ \hline \textbf{Spyware} & Allows malware authors to steal personal information covertly & mikey, qqpass, scar & 640\\ \hline \textbf{Virus} & Corrupts files on the host system & nimda, shodi, hematite & 659\\ \hline \textbf{Total} & - & - & \textbf{6,307}\\ \hline \end{tabular} \label{testbed_table} \vspace{-4mm} \end{center} \end{table} \section{Evaluation} \subsection{Experiment Design} Based on consultation with two malware analysis experts, as well as its popularity in the malware analysis community, we selected MalConv as one of the most highly reputable DL-based malware detectors \cite{raff2018malware}. To evaluate performance, consistent with \cite{fleshman2018non}, we adopted the evasion rate, the most common performance metric in AMG research. The evasion rate is defined via the following equation \cite{ebrahimi2020binary}: \begin{equation} Evasion\ Rate = \frac{|E\cap F|}{N} \end{equation} where $E$ and $F$ denote the sets of evasive and functional modified malware samples generated from the AMG method, respectively. 
$N$ represents the total number of malware samples given as input to the AMG method. To evaluate MalGPT's performance, we conducted several benchmark experiments under the constraints of our threat model (i.e., single-shot evasion, black-box setting, and 10 KB maximum append size). Table \ref{benchmark_table} presents the description for each selected benchmark method. \begin{table}[!ht] \centering \vspace{-4mm} \begin{center} \caption{Overview of Benchmark Experiment Methods} \vspace{-2mm} \begin{tabular}{ |m{1.5cm}<{\centering} |m{3cm}<{\centering} |m{3cm}<{\centering}|} \hline \textbf{Method} & \textbf{Description} & \textbf{Reference(s)}\\ \hline \textbf{Random Append} & Randomly appends bytes to malware sample. & Suciu et al., 2019 \cite{suciu2019exploring}; Castro, Schmitt et al., 2019 \cite{castro2019armed}\\ \hline \textbf{Benign Append} & Appends sections of bytes from benign files to malware sample. & Castro, Biggio et al., 2019 \cite{castro2019poster}\\ \hline \textbf{Enhanced Benign Append} & Appends bytes that lower the confidence score the most. & Chen B. et al., 2019 \cite{chen2019adversarial}\\ \hline \textbf{MalRNN} & Appends a byte sequence generated by an RNN model trained on benign files & Ebrahimi et al., 2020 \cite{ebrahimi2020binary}\\ \hline \end{tabular} \label{benchmark_table} \vspace{-4mm} \end{center} \end{table} Each AMG method was performed on the eight individual malware categories in the testbed as well as the entire testbed (i.e., all 6,037 malware executables) to gauge its efficacy at single-shot AMG evasion based on evasion rate. \subsection{Experiment Results} Table \ref{ex_result} shows MalGPT’s performance compared to state-of-the-art AMG benchmarks against MalConv under the constraints of the defined threat model. The row denoted by `Total' corresponds to the performance on the entire testbed. \begin{table}[!ht] \centering \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{-10mm} \begin{center} \vspace{-4mm} \caption{Experiment Results} \vspace{-2mm} \begin{tabular}{ |m{1cm}<{\centering} |m{1cm}<{\centering} |m{1cm}<{\centering} |m{1.25cm}<{\centering} |m{1cm}<{\centering} |m{1cm}<{\centering}|} \hline \textbf{Category} & \textbf{Random Append} & \textbf{Benign Append} & \textbf{Enhanced Benign Append} & \textbf{MalRNN} & \textbf{MalGPT}\\ \hline \textbf{Adware} & 2\% & 0.87\% & 15.51\% & 4.16\% & \textbf{25.89\%*}\\ \hline \textbf{Backdoor} & 2.06\% & 0.74\% & 21.98\% & 0.44\% & \textbf{18.86\%}\\ \hline \textbf{Botnet} & 2.47\% & 1.14\% & 21.86\% & 6.08\% & \textbf{25.86\%*}\\ \hline \textbf{Dropper} & 3.32\% & 2.32\% & 16.48\% & 4.2\% & \textbf{27.43\%*}\\ \hline \textbf{\makecell{Ransom-\\ware}} & 3.78\% & 0.11\% & 14.44\% & 0.89\% & \textbf{20.33\%*}\\ \hline \textbf{Rootkit} & 1.89\% & 3.77\% & 3.77\% & 5.66\% & \textbf{24.53\%*}\\ \hline \textbf{Spyware} & 2.5\% & 1.88\% & 11.25\% & 4.38\% & \textbf{22.97\%*}\\ \hline \textbf{Virus} & 4.4\% & 2.43\% & 12.29\% & 10.17\% & \textbf{28.38\%*}\\ \hline \textbf{Total} & 2.79\% & 1.27\% & 15.86\% & 4.12\% & \textbf{24.51\%*}\\ \hline \end{tabular} \vspace{1mm} \centering \textbf{Note:} P-Values are significant at 0.05. \label{ex_result} \vspace{-5mm} \end{center} \end{table} The asterisks in Table \ref{ex_result} denote the statistical significance obtained from paired $t$-test at P-value equal to or less than 0.05 between the results of MalGPT and the second-best performing benchmark method in each respective category. 
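For concreteness, the single-shot evaluation protocol and the evasion-rate metric defined above can be sketched in Python as follows; this is our own schematic illustration, and helper names such as \texttt{generate\_perturbation}, \texttt{is\_functional}, and \texttt{detector\_flags\_malware} are hypothetical placeholders rather than parts of the actual implementation:
\begin{verbatim}
# Schematic single-shot evaluation loop (illustrative sketch only): each
# variant is built from one generation pass, and the detector is queried
# exactly once per malware sample.
def evasion_rate(samples, generate_perturbation, is_functional,
                 detector_flags_malware):
    evasive_and_functional = 0                  # counts |E intersect F|
    for sample in samples:                      # raw bytes of one malware file
        variant = sample + generate_perturbation(sample)  # append-only change
        if is_functional(variant) and not detector_flags_malware(variant):
            evasive_and_functional += 1
    return evasive_and_functional / len(samples)  # |E intersect F| / N
\end{verbatim}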
The results show approximately a 20\% performance improvement over the recently proposed state-of-the-art malware language model, MalRNN (4.12\% vs. 24.51\%). Moreover, MalGPT shows an approximately 7\% performance improvement at single-shot evasion over the second-best method, Enhanced Benign Append. Overall, Table \ref{ex_result} shows that MalGPT attains the best performance on the entire dataset (24.51\%) and outperforms other benchmark methods in almost all categories (except backdoor). The significantly higher performance of MalGPT compared to the benchmark methods suggests that, as expected, the high-quality representations obtained by the GPT2 component in our model effectively increase the chance of single-shot evasion. In addition to the comparison with other AMG benchmark methods, it is useful to compare MalGPT's performance across all eight malware categories. Figure \ref{ExpResults} depicts the evasion rate of MalGPT for each malware category along with the evasion rate across the entire 6,307 malware executables (denoted by `Total'). \begin{figure}[!h] \centering \vspace{-3mm} \includegraphics[width=0.5\textwidth]{ExpResults.png} \vspace{-3mm} \caption{MalGPT's Evasion Rate for each Malware Category and entire dataset} \vspace{-3mm} \label{ExpResults} \end{figure} From Figure \ref{ExpResults}, we make a few observations. While MalGPT attains an evasion rate of 24.51\% across all 6,307 malware samples in our testbed, several categories noticeably deviate from this trend. On the upper end, both Dropper and Virus have high evasion rates of 27.43\% and 28.38\%, respectively. Conversely, both Backdoor and Ransomware have lower evasion rates of 18.86\% and 20.33\%, respectively. These results suggest that Dropper and Virus are more sensitive to AMG append attacks while Backdoor and Ransomware may be less sensitive to such attacks. One possible explanation is the malware file size. Droppers are usually small files that download other malicious files through a link after gaining access to a host machine. Likewise, Viruses are often small scripts that seek to corrupt a host machine. As such, both categories feature smaller file sizes, allowing the 10 KB perturbation generated by MalGPT to have a larger relative effect and thus a higher evasion chance. Conversely, both Backdoor and Ransomware often need large, complex programs (e.g., encryption procedures) to achieve their malicious goals. As such, MalGPT's 10 KB perturbation could be less effective in larger files, thus making evasion more difficult. This aligns with the intuition that DL-based malware detectors are more likely to be evaded with proportionally larger AMG perturbations with respect to the original file size. Overall, the experiment results suggest that our proposed approach of incorporating GPT into an AMG framework significantly improves the chance of single-shot evasion. Additionally, our results show the inability of current AMG methods to operate effectively under a single-shot threat model. This highlights their excessive reliance on querying a malware detector multiple times, which renders them ineffective in practice when realistic restrictions are applied to the number of allowed queries. \section{Conclusion and Future Directions} AMG research has gained popularity as a way to better understand and combat malware attacks. 
However, current AMG methods are rendered ineffective in real-world settings due to their reliance on multiple malware detector queries and the frequent implementation of query limits on malware detectors in practice. Leveraging GPT, we propose a novel framework for evading DL-based malware detectors that operationalizes a single-shot black-box evasion threat model. The proposed MalGPT framework utilizes GPT's ability to extract salient features from long-range dependencies in byte sequences extracted from malware executable content and to generate benign-looking byte sequences for single-shot AMG evasion. Our MalGPT was evaluated on eight major malware categories. MalGPT significantly outperformed all benchmark methods, demonstrating its ability to operate effectively in a single-shot setting where other AMG methods cannot. Our proposed research could be further extended by incorporating other views of sequential malware data, such as malware source code (in addition to raw binary content). Multi-view deep learning methods that can utilize information from both the executable's raw content and its source code are anticipated to yield better evasion performance. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec1} The quark model is a successful theory, with physicists employing it to explain the inner structures of mesons and baryons and to predict tetraquarks and pentaquarks. Over the past decade, the exploration of multiquark states has made significant progress both theoretically and experimentally, with several exotic hadronic states being experimentally observed ~\cite{Belle:2003nnu,LHCb:2016axx,BESIII:2016adj,LHCb:2018oeg,LHCb:2015yax,LHCb:2019kea,LHCb:2021chn,LHCb:2020jpq}. In 2015, the LHCb Collaboration observed pentaquark states in the $J/\psi p$ invariant mass spectrum of the $\Lambda^{0}_b\to{}J/\psi{}K^{-}p{}$ decays. The two hidden\textendash{}charm pentaquark candidates are $P_{c}(4380)$ and $P_{c}(4450)$, whose preferred $J^P$ assignments, $(\frac{3}{2}^{-},\frac{5}{2}^{+})$, have opposite parities~\cite{LHCb:2015yax}. In 2019, the $P_{c}(4450)$ pentaquark structure was confirmed, and the observations revealed that it comprises two peaks, $P_{c}(4440)$ and $P_{c}(4457)$, with a statistical significance of 5.4$\sigma$~\cite{LHCb:2019kea}. Meanwhile, the LHCb Collaboration reported the observation of a new pentaquark state, $P_{c}(4312)$, with a statistical significance of 7.3$\sigma$. In 2021, the LHCb Collaboration found evidence for a new structure $P_c(4337)$ in $B_{s}^{0}\to J/\psi p\bar{p}$ decays, with a final significance of 3.1$\sigma$~\cite{LHCb:2021chn}. The mass and width of $P_c(4337)$ are $4337\ ^{+7}_{-4}\ ^{+2}_{-2}$ MeV and $29\ ^{+26}_{-12}\ ^{+14}_{-14}$ MeV, respectively, while the parity and angular momentum of $P_{c}(4337)$ were predicted to be $J^{P}\ =\ {\frac{1}{2}}^{+}$ \cite{Shen:2017ayv}. After the discovery of the $P_c$ states, theorists have shown great interest in explaining the pentaquark's nature. For instance, in ~\cite{Weng:2019ynv}, the authors systematically studied the mass spectrum of the $P_c$ states utilizing the chromomagnetic model, while in \cite{Wang:2016dzu}, the magnetic moments of the $P_c$ states were calculated in different color-flavor structures. In addition, the parity and angular momentum of the $P_c$ states were predicted by employing the quark delocalization color screening model ~\cite{Huang:2019jlf}. The $P_{c}(4312)$ can be identified as the hidden\textendash{}charm molecular state $\Sigma_{c}\bar{D}$ with $J^{P}=\frac{1}{2}^{-}$, and $P_{c}(4440)$ and $P_{c}(4457)$ as the hidden\textendash{}charm molecular states $\Sigma_{c}\bar{D}^{*}$ with $J^{P}=\frac{1}{2}^{-}$ and $\frac{3}{2}^{-}$. As more exotic hadrons are observed, theorists have tried to explain their mass spectrum with the one\textendash{}boson\textendash{}exchange model~\cite{Chen:2020kco}, the QCD sum rules~\cite{Wang:2020eep, Chen:2020uif, Wang:2019got,Wang:2021itn,Ozdem:2021ugy,Azizi:2021utt}, and the effective field theory~\cite{Peng:2020hql, Lu:2021irg, Chen:2021htr, Wang:2020dko, Wang:2019nvm}. In general, the pentaquark's inner structure has been described in terms of the molecular model~\cite{Hu:2021nvs, He:2019ify, Chen:2019asm, Wu:2019rog, Chen:2019bip, PavonValderrama:2019nbk, Fernandez-Ramirez:2019koa, Xiao:2019aya, Shimizu:2019ptd, Liu:2019tjn,Chen:2021tip,Wang:2021itn,Lu:2021irg,Xiao:2021rgp,Chen:2020uif}, the diquark\textendash{}diquark\textendash{}antiquark model~\cite{Lebed:2015tna, Ali:2020vee, Wang:2019got, Anisovich:2017aqa, Shi:2021wyt}, and the diquark\textendash{}triquark model~\cite{Karliner:2003sy, Zhu:2015bba,Giron:2021sla}. 
The LHCb Collaboration in 2020 observed the hidden\textendash{}charm strange pentaquark $P_{cs}(4459)$ in the $J/\psi{}\Lambda$ mass spectrum through an amplitude analysis of the $\Xi^{-}_b\to{}J/\psi{}\Lambda{}K^{-}$ decay ~\cite{LHCb:2020jpq}. The mass and width are $4458.8\pm 2.9^{+4.7}_{-1.1}$ MeV and $ 17.3\pm 6.5^{+8.0}_{-5.7}$ MeV, respectively, while an in-depth study of $P_{cs}(4459)$ suggested that the structure consists of two resonances, with masses 4454.9 $\pm$ 2.7 \mbox{MeV} and 4467.8 $\pm$ 3.7 \mbox{MeV} and widths 7.5 $\pm$ 9.7 \mbox{MeV} and 5.2 $\pm$ 5.3 \mbox{MeV}, respectively. However, the parity and angular momentum of the $P_{cs}(4459)$ have not been determined experimentally. Predictions of $J^{P}=\frac{1}{2}^{-}$ and $\frac{3}{2}^{-}$ have been given based on the QCD sum rules~\cite{Chen:2020uif}, the chiral quark model~\cite{Hu:2021nvs}, and the strong decay behaviors of the $P_{cs}(4459)$~\cite{Chen:2021tip}. The pentaquark's magnetic moment encodes helpful details about the charge and magnetization distributions inside the hadron, which assists in analyzing its geometric configuration. In Ref.~\cite{Li:2021ryu}, the author studies the magnetic moments and transition magnetic moments of the hidden-charm pentaquark states with the coupled channel effects and the D wave contributions. Such studies are important because magnetic moments help us understand the pentaquark's inner structure. In this work, we calculate the magnetic moments of $P_{cs}$ based on the above three models. The remainder of this paper is organized as follows. Sec.\ref{sec2} discusses the color factor and color configuration, while Sec.\ref{sec3} introduces the wave function of $P_{cs}$. Sec.\ref{sec4} calculates the magnetic moments of $P_{cs}$ in the molecular model, the diquark\textendash{}diquark\textendash{}antiquark model, and the diquark\textendash{}triquark model, and finally, Sec.\ref{sec5} summarizes this work. \section{Color factor and color configuration} \label{sec2} The quark level involves chromomagnetic interactions. Therefore, we exploit the color factor $f$ to indicate whether the color force is attractive or repulsive. Regarding the quark-quark color interaction, the color factor $f$ is: \begin{eqnarray} f(ik \rightarrow jl) = \frac{1}{4}\sum_{a=1}^{8}\lambda_{ji}^{a}\lambda_{lk}^{a}. \end{eqnarray} where $\lambda^a$ denotes the Gell-Mann matrices and the quark colors are labelled by $i, j, k$, and $l$. The corresponding potential is: \begin{eqnarray} V_{qq}(r) \approx +f\frac{\alpha_s}{r}. \end{eqnarray} Considering the quark-antiquark color interaction, the color factor $\widetilde{f}$ is: \begin{eqnarray} \widetilde{f}(ik \rightarrow jl) = -\frac{1}{4}\sum_{a=1}^{8}\lambda_{ji}^{a}\lambda_{lk}^{a}. \end{eqnarray} The corresponding potential is: \begin{eqnarray} V_{q\bar{q}}(r) \approx +\widetilde{f}\frac{\alpha_s}{r}. \end{eqnarray} In Table \ref{tab:magCov} we list the color factors of the multiplets in the SU(3) color representations. \begin{table}[htbp] \centering \caption{Color factor values for the color representation. 
} \label{tab:magCov} \begin{tabular}{ c|c} \toprule[1pt] \hline \ $3_{c}\ \otimes\ \bar{3}_{c}$\ & \ \ \ $1_{c}\ \oplus\ {8}_{c}$ \\ \hline \ color factor\ & -$\frac{4}{3}$\ \ \ \ \ \ $\frac{1}{6}$\ \\ \hline \ $3_{c}\ \otimes\ 3_{c}$\ & \ \ \ $6_{c}\ \oplus\ \bar{3}_{c}$ \\ \hline \ color factor\ & \ \ \ $\frac{1}{3}$\ \ \ \ \ $-\frac{2}{3}$\ \\ \hline \ $3_{c}\ \otimes\ {3}_{c}\otimes\ {3}_{c}$\ & \ \ \ $1_{c}\ \oplus\ {8}_{1c}\ \oplus\ {8}_{2c}\ \oplus\ {10}_{c}$ \\ \hline \ color factor\ & $-2$\ \ \ \ \ \ $-\frac{1}{2}$\ \ \ \ \ \ $-\frac{1}{2}$\ \ \ \ \ \ $1$\ \\ \hline \ $3_{c}\ \otimes\ {3}_{c}\otimes\ \bar{3}_{c}$\ & \ \ \ $3_{1c}\ \oplus\ 3_{2c}\ \oplus\ \bar{6}_{c}\ \oplus\ {15}_{c}$ \\ \hline \ color factor\ & $-\frac{4}{3}$\ \ \ \ \ \ $-\frac{4}{3}$\ \ \ \ \ \ $-\frac{1}{3}$\ \ \ \ \ \ $2$\ \\ \hline \bottomrule[1pt] \end{tabular} \end{table} Color confinement implies that the physical hadrons are color singlets. Under this restriction, we divide the pentaquark states into the following three categories: \begin{enumerate} \item Molecular model: Each cluster of the molecular model forms a quasibound subsystem. In other words, the clusters of the molecular model tend to be color singlets. We observe that $f_{1_{c}} < f_{8_{c}}$ in the color representation of the quark and antiquark, hence it is easier to form a singlet state than octet states. Similarly, $f_{1_{c}} < f_{8_{1c}}/f_{8_{2c}}< f_{10_{c}}$ in the three-quark color representation, and thus it is easier to form a singlet state than other states. Therefore, from the molecular model we have two configurations, $(c\bar{c})(q_{1}q_{2}q_{3})$ and $(\bar{c}q_{1})(cq_{2}q_{3})$, where $q$ denotes the $u,d,s$ quark. \item Diquark\textendash{}diquark\textendash{}antiquark model: Comparing the color factors of $6_{c}$ and $\bar{3}_{c}$, the diquark prefers to form $\bar{3}_{c}$. Similarly, $\bar{3}_{c}\ \otimes\ \bar{3}_{c}$ prefers to form $3_{c}$. Hence, we have $\bar{3}_{c}(\mathcal{D})\ \otimes\ \bar{3}_{c}(\mathcal{D})\ \otimes\ \bar{3}_{c}(\mathcal{A})$ to form a color singlet, where $\mathcal{D}$ and $\mathcal{A}$ represent the diquark and antiquark, respectively. Thus, the pentaquark configuration represented by the diquark\textendash{}diquark\textendash{}antiquark model is $(cq_{1})(q_{2}q_{3})(\bar{c})$. \item Diquark\textendash{}triquark model: The triquark involves two quarks and an antiquark, which distinguishes it from the clusters of the molecular model. In this case, $f_{3_{1c}} / f_{3_{2c}} < f_{\bar{6}_{c}} < f_{15_{c}}$ holds in the color representation of the triquark, and we have $3_{c}(\mathcal{T})\ \otimes \ \bar{3}_{c}(\mathcal{D})$ to form a color singlet, where $\mathcal{T}$ represents a triquark. Thus the pentaquark configurations represented by the diquark\textendash{}triquark model are $(c\bar{c}q_{1})(q_{2}q_{3})$ and $(cq_{1})(\bar{c}q_{2}q_{3})$. \end{enumerate} The separation of $c$ and $\bar{c}$ into distinct confinement volumes provides a natural suppression mechanism for the pentaquark widths\cite{LHCb:2019kea}. Thus we do not consider $(\bar{c}c) (q_{1}q_{2}q_{3})$ and $(\bar{c}cq_{1})(q_{2}q_{3})$. \section{Wave function of hidden-charm strange pentaquark states} \label{sec3} In this work, we study the pentaquark states in the $SU(3)_{f}$ frame. The overall wavefunction for a bound multiquark state, while accounting for all degrees of freedom, can be written as: \begin{equation} \psi_{wavefunction} = \phi_{flavor}\chi_{spin}\varepsilon_{color}\eta_{space}. 
\nonumber \end{equation} Due to the Fermi statistics, the overall wavefunction above is required to be antisymmetric. The molecular model of the pentaquark is made up of mesons and baryons. They have to be color singlet because of color confinement. The relation between the spin and flavor is $\phi_{flavor}\chi_{spin}$ = symmetric since the color wavefunction is antisymmetric and the spatial wavefunction is symmetric in the ground state. We study the $P_{cs}$ state in a $SU(3)_f$ frame. There are two configurations for $q_{2}q_{3}$, where $q_{2}q_{3}$ forms the $\bar{3}_f$ and $6_f$ flavor representation with the total spin S = 0 and 1, respectively. When $q_{2}q_{3}$ forms the $6_f$, it is combined with the $q_1$ to form the flavor representation $6_f\ \otimes\ {3}_{f}$ = $10_f\ \oplus\ 8_{1f}$. While, when $q_{2}q_{3}$ forms the $\bar{3}_f$, it is then combined with the $q_1$ to form the flavor representation $\bar{3}_f\ \otimes\ {3}_{f}$ = $8_{2f}\ \oplus\ 1_f$. After inserting $[c\bar{c}]$ and the Clebsch-Gordan coefficients, we apply the same method to the $(cq_{1})(q_{2}q_{3})(\bar{c})$ and $(cq_{1})(\bar{c}q_{2}q_{3})$ configurations, and we obtain the flavor wave function of $P_{cs}$ in $8_{1f}$ and $8_{2f}$. The results are reported in Table \ref{tab:wfun}. \begin{table}[htbp] \centering \caption{The flavor wave function of hidden\textendash{}charm strange pentaquark state in different models. } \label{tab:wfun} \resizebox{0.79\columnwidth}{!}{ \begin{tabular}{c|c|c|c} \toprule[1pt] \hline model & multiplet & $(I,I_{3})$ & wave function \\ \hline \multirow{4}{*}{Molecular model} & \multirow{2}{*}{$8_{1f}$} & $(1,0)$ & $\frac{1}{\sqrt6}[({\bar c}d)(c\{us\})+({\bar c}u)(c\{ds\})]-\sqrt{\frac{2}{3}}({\bar c}s)(c\{ud\})$ \\ \cline{3-4} & & $(0,0)$ & $\frac{1}{\sqrt2}[({\bar{c}}u)(c\{ds\})-({\bar{c}}d)(c\{us\})]$ \\ \cline{2-4} & \multirow{2}{*}{$8_{2f}$} & $(1,0)$ & $\frac{1}{\sqrt2}\{ ({\bar c}d)(c[us])+({\bar c}u)(c[ds]) \}$ \\ \cline{3-4} & & $(0,0)$ & $\frac{1}{\sqrt6}\{({\bar c}d)(c[us])-({\bar c}u)(c[ds])-2(\bar c s)(c[ud])\}$ \\ \hline \multirow{4}{*}{Diquark-diquark-antiquark model} & \multirow{2}{*}{$8_{1f}$} & $(1,0)$ & $\frac{1}{\sqrt6}[({c}d)\{us\}{\bar c}+({c}u)\{ds\}{\bar c}]-\sqrt{\frac{2}{3}}({c}s)\{ud\}{\bar c}$ \\ \cline{3-4} & & $(0,0)$ & $\frac{1}{\sqrt2}[({c}u)\{ds\}{\bar c}-({c}d)\{us\}{\bar c}]$ \\ \cline{2-4} & \multirow{2}{*}{$8_{2f}$} & $(1,0)$ & $\frac{1}{\sqrt2}\{ (cd)[us]{\bar c}+(cu)[ds]{\bar c} \}$ \\ \cline{3-4} & & $(0,0)$ & $\frac{1}{\sqrt6} \{ (cd)[us]{\bar c}-(cu)[ds]{\bar c}-2(cs)[ud]{\bar c} \}$ \\ \hline \multirow{4}{*}{Diquark-triquark model} &\multirow{2}{*}{$8_{1f}$} & $(1,0)$ & $\frac{1}{\sqrt6}[({c}d)(\bar c\{us\})+({c}u)(\bar c\{ds\})]-\sqrt{\frac{2}{3}}({c}s)(\bar c\{ud\})$ \\ \cline{3-4} & & $(0,0)$ & $\frac{1}{\sqrt2}[({c}u)(\bar c\{ds\})-({c}d)(\bar c\{us\})]$ \\ \cline{2-4} & \multirow{2}{*}{$8_{2f}$} & $(1,0)$ & $\frac{1}{\sqrt2}\{ ({c}d)(\bar c[us])+({c}u)(\bar c[ds]) \}$ \\ \cline{3-4} & & $(0,0)$ & $\frac{1}{\sqrt6}\{({c}d)(\bar c[us])-({ c}u)(\bar c[ds])-2( c s)(\bar c[ud])\}$ \\ \hline \bottomrule[1pt] \end{tabular}} \end{table} \section{Magnetic moments of hidden$\textendash{}$charm strange pentaquark} \label{sec4} \subsection {Magnetic moments of the molecular model with the configuration$(\bar{c}q_1)(cq_2q_3)$} Since quarks are fundamental Dirac fermions, the operators of the total magnetic moments and the z-component are: \begin{eqnarray} \hat{\mu} = \ Q\frac{e}{m}\hat{S}, \ \ \ \ \ \ \ \hat{\mu_{z}} = \ 
Q\frac{e}{m}\hat{S_{z}}. \end{eqnarray} As mentioned above, we do not consider orbital excitations inside the bound clusters, so the orbital excitation lies between the meson and the baryon. The total magnetic moment can be written as: \begin{eqnarray} \hat{\mu} = \ \hat{\mu}_{\mathcal{B}}+\hat{\mu}_{\mathcal{M}}+\hat{\mu}_{l}. \end{eqnarray} where the subscripts $\mathcal{B}$ and $\mathcal{M}$ represent the baryon and meson, respectively, and $l$ is the orbital excitation between the meson and baryon. The specific forms of the magnetic moments can be written as: \begin{eqnarray} \hat{\mu}_{\mathcal{B}} &=& \sum_{i=1}^{3} \mu_{i}g_{i}\hat{S}_{i},\\ \hat{\mu}_{\mathcal{M}} &=& \sum_{i=1}^{2} \mu_{i}g_{i}\hat{S}_{i},\\ \hat{\mu}_{l} = \mu_{l}\hat{l} &=& \frac{M_{\mathcal{M}}\mu_{\mathcal{B}}+M_{\mathcal{B}}\mu_{\mathcal{M}}}{M_{\mathcal{M}}+M_{\mathcal{B}}}\hat{l}. \end{eqnarray} where $g_{i}$ is the Lande factor and $M_{\mathcal{M}}$ and $M_{\mathcal{B}}$ are the meson and baryon masses, respectively. The specific magnetic moment formula of the $(\bar{c}q_1)(cq_2q_3)$ pentaquark in the molecular model is: \begin{eqnarray} \mu &=& \langle\ \psi\ |\ \hat{\mu}_{\mathcal{B}}+\hat{\mu}_{\mathcal{M}}+\hat{\mu}_{l}\ |\ \psi\ \rangle\nonumber\\ &=& \sum_{SS_z,ll_z}\ \langle\ SS_z,ll_z|JJ_z\ \rangle^{2} \left \{ \mu_{l} l_z + \sum_{\widetilde{S}_\mathcal{B},\widetilde{S}_\mathcal{M}}\ \langle\ S_\mathcal{B} \widetilde{S}_{\mathcal{B}},S_\mathcal{M} \widetilde{S}_{\mathcal{M}}|SS_z\ \rangle^{2} \Bigg [ \widetilde{S}_{\mathcal{M}}\bigg(\mu_{\bar{c}} + \mu_{q_1}\bigg )\nonumber\right.\\ &+&\left. \sum_{\widetilde{S}_{c}}\ \langle\ S_c \widetilde{S}_{c},S_{r} \widetilde{S}_{\mathcal{B}}-\widetilde{S}_{c}|S_\mathcal{B} \widetilde{S}_{\mathcal{B}}\rangle^{2}\bigg(g\mu_{c}\widetilde{S}_{c}+(\widetilde{S}_{\mathcal{B}}-\widetilde{S}_{c})(\mu_{q_{2}}+\mu_{q_{3}})\bigg ) \Bigg ]\right \}. \end{eqnarray} where $\psi$ represents the wave function in Table~\ref{tab:wfun}. $S_\mathcal{M}$, $S_\mathcal{B}$, and $S_r$ are the spins of the meson, the baryon, and the diquark inside the baryon, respectively, and $\widetilde{S}$ denotes the third spin component. For example, the recently observed $P_{cs}(4459)$ state is supposed to be the $\bar{D}^{*}\Xi_{c}$ molecular state in the $8_{2f}$ representation with $(I,I_{3}) = (0,0)$. Its flavor wave function is: \begin{equation} | P_{cs}\rangle = \frac{1}{\sqrt{6}}\{({\bar c}d)(c[us])-({\bar c}u)(c[ds])-2(\bar c s)(c[ud])\}. \end{equation} Take $J^{P}={\frac{1}{2}}^{-}$ (${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$) as an example. Here $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}$ correspond to the angular momentum and parity of the baryon, the meson, and the orbital excitation, respectively. 
\begin{align} \mu & = \langle\ P_{cs}\ |\ \hat{\mu}_{\mathcal{B}}+\hat{\mu}_{\mathcal{M}}+\hat{\mu}_{l}\ |\ P_{cs}\ \rangle\nonumber\\ & = \langle \frac{1}{2}\frac{1}{2},1 0 |\frac{1}{2}\frac{1}{2}\rangle^{2}\Bigg [ \langle \frac{1}{2}\frac{1}{2},0 0 |\frac{1}{2}\frac{1}{2}\rangle^{2} \Bigg (\frac{1}{6}*\frac{1}{2}g\mu_{c}+\frac{1}{6}*\frac{1}{2}g\mu_{c}+\frac{4}{6}*\frac{1}{2}g\mu_{c}\Bigg )\Bigg ] +\nonumber\\ & \ \ \ \ \ \langle \frac{1}{2}-\frac{1}{2},1 1 |\frac{1}{2}\frac{1}{2}\rangle^{2}\Bigg [ \Bigg (\frac{1}{6}*(\frac{1}{2}g\mu_{\bar{c}}+\frac{1}{2}g\mu_{d})+\frac{1}{6}*(\frac{1}{2}g\mu_{\bar{c}}+\frac{1}{2}g\mu_{u})+\frac{4}{6}*(\frac{1}{2}g\mu_{\bar{c}}+\frac{1}{2}g\mu_{s})\Bigg ) +\nonumber\\ & \ \ \ \ \ \langle \frac{1}{2}-\frac{1}{2},0 0 |\frac{1}{2}-\frac{1}{2}\rangle^{2} \Bigg (\frac{1}{6}*-\frac{1}{2}g\mu_{c}+\frac{1}{6}*-\frac{1}{2}g\mu_{c}+\frac{4}{6}*-\frac{1}{2}g\mu_{c}\Bigg )\Bigg ] \nonumber\\ & = \frac{1}{9} (\mu_{u}+\mu_{d}+4\mu_{s}+6\mu_{\bar{c}}-3\mu_{c} ). \end{align}\\ In this work, we use the following constituent quark masses\cite{Wang:2018gpl}, \begin{eqnarray} m_u \ =\ m_d \ =\ 0.336\ \mbox{GeV}, \ m_s \ =\ 0.540\ \mbox{GeV},\ m_c \ =\ 1.660\ \mbox{GeV}. \nonumber \end{eqnarray} The numerical results with isospin $(I,I_3) = (1,0)$ and $(I,I_3) = (0,0)$ are reported in Table \ref{pcm} and \ref{lpb}, respectively. \begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the molecular model with the wave function $\frac{1}{\sqrt6}[({\bar c}d)(c\{us\})+({\bar c}u)(c\{ds\})]-\sqrt{\frac{2}{3}}({\bar c}s)(c\{ud\})$ in $8_{1f}$ and $\frac{1}{\sqrt2}\{ ({\bar c}d)(c[us])+({\bar c}u)(c[ds]) \}$ in $8_{2f}$ with isospin $(I,I_3) = (1,0)$. They are in $8_{1f}$ representation from $6_f \otimes 3_f = 10_f \oplus 8_{1f}$ and $8_{2f}$ representation from $\bar{3}_f \otimes 3_f = 1_f \oplus8_{2f}$, respectively. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}$ are corresponding to the angular momentum and parity of baryon, meson and orbital, respectively. 
The unit is the magnetic moments of the proton.} \label{pcm} \begin{center} \resizebox{0.90\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c}{$8_{1f}$: $\frac{1}{\sqrt6}[({\bar c}d)(c\{us\})+({\bar c}u)(c\{ds\})]-\sqrt{\frac{2}{3}}({\bar c}s)(c\{ud\})$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$ ($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}^{-}$($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes0^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes0^{+}$ \\ \hline $(0,1,0)$ &0.263 &-0.493 &0.735&-0.345 &0.959 &0.460 &0.352 \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &-0.145&0.125 &-0.289&0.564 &-0.172 &0.278& \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^{6}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ &${\frac{3}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.177 &-0.551 &0.669&0.666 &-0.276 &0.311 &0.335 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $ (Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes1^{-}$ & ${\frac{3}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{5}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes1^{-}$ \\ \hline $(0,1,0)$ &-0.403 &0.865 &0.394 &0.292 &0.285 \\ \hline \bottomrule[1pt] \multicolumn{8}{c}{$8_{2f}$: $\frac{1}{\sqrt2}\{ ({\bar c}d)(c[us])+({\bar c}u)(c[ds]) \}$} \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & 
$[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.377&-0.067&0.465&-0.167&-0.007&0.273 \\ \midrule[1pt] &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$ (${J^P={\frac{5}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes0^{-}]_{\frac{3}{2}}\otimes1^{-}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes1^{-}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.315&-0.110&0.324&0.422\\ \bottomrule[1pt] \end{tabular}} \end{center} \end{table*} \begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the molecular model with the wave function $\frac{1}{\sqrt2}[({\bar{c}}u)(c\{ds\})-({\bar{c}}d)(c\{us\})]$ in $8_{1f}$ and $\frac{1}{\sqrt6} \{ ({\bar c}d)(c[us])-({\bar c}u)(c[ds])-2(\bar c s)(c[ud]) \}$ in $8_{2f}$ with isospin $(I,I_3) = (0,0)$. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}$ are corresponding to the angular momentum and parity of baryon, meson and orbital, respectively. The unit is the magnetic moments of the proton.}\label{lpb} \begin{center} \resizebox{0.90\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c}{$8_{1f}$: $\frac{1}{\sqrt2}[({\bar{c}}u)(c\{ds\})-({\bar{c}}d)(c\{us\})]$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}^{-}$($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes0^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes0^{+}$ \\ \hline $(0,0,0)$ &-0.201&0.126 &0.117&-0.113 &0.263 &0.228 &0.352 \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & \\ \hline $(0,0,0)$ &0.021 &-0.076 &-0.076&-0.046 &0.171 &0.145 & \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^{6}P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0,0,0)$ &-0.270 &0.075 &0.061&-0.103 &0.163 &0.145 &0.329 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ 
(${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $ (Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes1^{-}$ & ${\frac{3}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{+}\otimes1^{-}]_{\frac{5}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{+}\otimes1^{-}\otimes1^{-}$ \\ \hline $(0,0,0)$ &-0.164 &0.189 &0.172 &0.295 &0.296 \\ \hline \bottomrule[1pt] \multicolumn{8}{c}{$8_{2f}$: $\frac{1}{\sqrt6} \{ ({\bar c}d)(c[us])-({\bar c}u)(c[ds])-2(\bar c s)(c[ud]) \}$} \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes1^{-}]_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,0,0)$ &0.377&-0.531&-0.231&-0.161&0.152&-0.116 \\ \hline &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$(${J^P={\frac{5}{2}}^{+}}$}\\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{+}\otimes0^{-}\otimes1^{-}$ & $[{\frac{1}{2}}^{+}\otimes0^{-}]_{\frac{3}{2}}\otimes1^{-}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes1^{-}$ & ${\frac{1}{2}}^{+}\otimes1^{-}\otimes1^{-}$ \\ \hline $(0,0,0)$ &0.324&-0.568&-0.184&-0.268 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table*} \subsection {Magnetic moments of the diquark-diquark-antiquark model with the $(cq_1)(q_2q_3)\bar{c}$ configuration} In the diquark-diquark-antiquark model, there are two P-wave excitation modes inside the three-body bound state, the $\rho$ and the $\lambda$ excitation. The $\rho$ mode P-wave orbital excitation lies between the diquark $(cq_1)$ and diquark $(q_2q_3)$. The $\lambda$ mode P-wave orbital excitation lies between the $\bar{c}$ and the center of mass system of the $(cq_1)$ and $(q_2q_3)$. The total magnetic moments formula of the diquark-diquark-antiquark model can be written as: \begin{eqnarray} \hat{\mu} = \ \hat{\mu}_{H}+\hat{\mu}_{L}+\hat{\mu}_{\bar{c}}+\hat{\mu}_{l}. \end{eqnarray} where the subscripts $H$ and $L$ represent a heavy diquark $(cq_1)$ and light diquark $(q_2q_3)$, respectively, and $l$ is the orbital excitation. In the diquark-diquark-antiquark model, the specific magnetic moments formula of the pentaquark $(cq_1)(q_2q_3)\bar{c}$ is: \begin{eqnarray} \mu &=& \langle\ \psi \ |\ \hat{\mu}_{H}+\hat{\mu}_{L}+\hat{\mu}_{\bar{c}}+\hat{\mu}_{l}\ |\ \psi \ \rangle\nonumber\\ &=&\sum_{S_z,l_z}\ \langle\ SS_z,ll_z|JJ_z\ \rangle^{2} \left \{ \mu_{l} l_z + \sum_{\widetilde{S}_{\bar{c}}}\ \langle\ S_{\bar{c}} \widetilde{S}_{\bar{c}},S_{\mathcal{G}} \widetilde{S}_{\mathcal{G}}|SS_z\ \rangle^{2} \Bigg [g\widetilde{S}_{\bar{c}}\mu_{\bar{c}}\nonumber\right.\\ &+&\left.\sum_{\widetilde{S}_{H},\widetilde{S}_{L}}\ \langle\ S_{H} \widetilde{S}_{H},S_{L} \widetilde{S}_{L}|S_{\mathcal{G}} \widetilde{S}_{\mathcal{G}}\rangle^{2}\bigg(\widetilde{S}_{H}(\mu_{c}+\mu_{q_1})+\widetilde{S}_{L}(\mu_{q_2}+\mu_{q_3})\bigg ) \Bigg ]\right \}. 
\end{eqnarray} where $S_{\mathcal{G}}$ represents the spin of $(cq_1)(q_2q_3)$. The diquarks' masses are \cite{Ebert:2010af}: \begin{eqnarray} [u,d]&=& 710\mbox{MeV}, \ \ \ \ \{u,d\} =909\mbox{MeV},\ \ \ \ \ [u,s]=948\mbox{MeV},\ \ \ \ \{u,s\} =1069\mbox{MeV},\nonumber\\ \nonumber [c,q]&=& 1973\mbox{MeV},\ \ \ \{c,q\} =2036\mbox{MeV},\ \ \ \ [c,s]=2091\mbox{MeV},\ \ \ \{c,s\} =2158\mbox{MeV}. \end{eqnarray} The numerical results for the states with the $\rho$ excitation mode with isospin $(I,I_3) = (1,0)$ and $(I,I_3) = (0,0)$ are presented in Table \ref{caqs} and \ref{qag}, respectively. The numerical results for the states with the $\lambda$ excitation mode with isospin $(I,I_3) = (1,0)$ and $(I,I_3) = (0,0)$ are presented in Table \ref{abc} and \ref{def}, respectively. \begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the diquark-diquark-antiquark model with the wave function $\frac{1}{\sqrt6}[({c}d)\{us\}{\bar c}+({c}u)\{ds\}{\bar c}]-\sqrt{\frac{2}{3}}({c}s)\{ud\}{\bar c}$ in $8_{1f}$ and $\frac{1}{\sqrt2}\{ (cd)[us]{\bar c}+(cu)[ds]{\bar c} \}$ in $8_{2f}$ with isospin $(I,I_3) = (1,0)$. They are in $8_{1f}$ representation from $6_f \otimes 3_f = 10_f \oplus 8_{1f}$ and $8_{2f}$ representation from $\bar{3}_f \otimes 3_f = 1_f \oplus8_{2f}$, respectively. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}\otimes J_{4}^{P_{4}}$ are corresponding to the angular momentum and parity of $(cq_1)$, $(q_2q_3)$, $\bar{c}$ and orbital, respectively.The $\rho$ mode P-wave orbital excitation lies between the diquark $(cq_1)$ and diquark $(q_2q_3)$. The unit is the magnetic moments of the proton.} \label{caqs} \begin{center} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c} {$8_{1f}$: $\frac{1}{\sqrt6}[({c}d)\{us\}{\bar c}+({c}u)\{ds\}{\bar c}]-\sqrt{\frac{2}{3}}({c}s)\{ud\}{\bar c}$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}$ ($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & $0^{+}\otimes1^{+} \otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{0} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(0^{+}\otimes1^{+})\otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{2} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})\otimes {\frac{1}{2}}^{-}\otimes0^{+}$ \\ \hline $(0,1,0)$ &0.514 &-0.377 &0.368 &0.206 &-0.013 &0.881 &0.352 \\ \hline &\multicolumn{1}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &-0.035&0.046& 0.260& 0.012&-0.074& 0.422 \\ \hline 
&\multicolumn{1}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^6 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.719&0.233&-0.175&0.570&0.005&0.727&0.174 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^6P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ &$1^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.410&0.190&1.083&0.369&0.554 \\ \hline \bottomrule[1pt] \multicolumn{8}{c}{$8_{2f}$: $\frac{1}{\sqrt2}\{ (cd)[us]{\bar c}+(cu)[ds]{\bar c} \}$ } \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$ ($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & $0^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes{1^{-}}$ \\ \hline $(0,1,0)$ &-0.377&0.687&0.465&0.137&-0.224&0.256 \\ \hline &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$(${J^P={\frac{5}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} )_{\frac{1}{2}}\otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}} \otimes{1^{-}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ \\ \hline $(0,1,0)$ &-0.360&0.695&0.344 &0.473 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table*} \begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the diquark-diquark-antiquark model with the wave function $\frac{1}{\sqrt2}[({c}u)\{ds\}{\bar c}-({c}d)\{us\}{\bar c}]$ in $8_{1f}$ and $\frac{1}{\sqrt6} \{ (cd)[us]{\bar c}-(cu)[ds]{\bar c}-2(cs)[ud]{\bar c} \}$ in $8_{2f}$ with isospin 
$(I,I_3) = (0,0)$. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}\otimes J_{4}^{P_{4}}$ are corresponding to the angular momentum and parity of $(cq_1)$, $(q_2q_3)$, $\bar{c}$ and orbital, respectively. The $\rho$ mode P-wave orbital excitation lies between the diquark $(cq_1)$ and diquark $(q_2q_3)$. } \label{qag} \begin{center} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c} {$8_{1f}$: $\frac{1}{\sqrt2}[({c}u)\{ds\}{\bar c}-({c}d)\{us\}{\bar c}]$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}$ ($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & $0^{+}\otimes1^{+} \otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{0} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(0^{+}\otimes1^{+})\otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{2} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})\otimes {\frac{1}{2}}^{-}\otimes0^{+}$ \\ \hline $(0,0,0)$ &0.050 &-0.377 &0.368 &-0.490 &-0.013 &0.881 &0.352 \\ \hline &\multicolumn{1}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,0,0)$ &0.013 &-0.287&0.150&-0.098&-0.019& 0.478 \\ \hline &\multicolumn{1}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^6 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0, 0,0)$ &0.094&-0.342&-0.340&0.405&0.197&0.661&0.273 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^6P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})\otimes1^{-}$ 
&$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ &$1^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-}\otimes1^{-}$ \\ \hline $(0,0,0)$ &-0.446&0.024&0.918&0.322&0.388 \\ \hline \bottomrule[1pt] \multicolumn{8}{c}{ $8_{2f}$: $\frac{1}{\sqrt6} \{ (cd)[us]{\bar c}-(cu)[ds]{\bar c}-2(cs)[ud]{\bar c} \}$ } \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$ ($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & $0^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes{1^{-}}$ \\ \hline $(0,0,0)$ &-0.377&0.223&-0.231&0.292&0.091&-0.211 \\ \hline &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$(${J^P={\frac{5}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} )_{\frac{1}{2}}\otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}} \otimes{1^{-}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ \\ \hline $(0,0,0)$ &-0.126&0.470&-0.070 &0.016 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table*} \begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the diquark-diquark-antiquark model with the wave function $\frac{1}{\sqrt6}[({c}d)\{us\}{\bar c}+({c}u)\{ds\}{\bar c}]-\sqrt{\frac{2}{3}}({c}s)\{ud\}{\bar c}$ in $8_{1f}$ and $\frac{1}{\sqrt2}\{ (cd)[us]{\bar c}+(cu)[ds]{\bar c} \}$ in $8_{2f}$ with isospin $(I,I_3) = (1,0)$. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}\otimes J_{4}^{P_{4}}$ are corresponding to the angular momentum and parity of $(cq_1)$, $(q_2q_3)$, $\bar{c}$ and orbital, respectively. 
The $\lambda$ mode P-wave orbital excitation lies between the $\bar{c}$ and the center of mass system of the $(cq_1)$ and $(q_2q_3)$.The unit is the magnetic moments of the proton.} \label{abc} \begin{center} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c} {$8_{1f}$: $\frac{1}{\sqrt6}[({c}d)\{us\}{\bar c}+({c}u)\{ds\}{\bar c}]-\sqrt{\frac{2}{3}}({c}s)\{ud\}{\bar c}$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}$ ($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & $0^{+}\otimes1^{+} \otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{0} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(0^{+}\otimes1^{+})\otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{2} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})\otimes {\frac{1}{2}}^{-}\otimes0^{+}$ \\ \hline $(0,1,0)$ &0.514 &-0.377 &0.368 &0.206 &-0.013 &0.881 &0.352 \\ \hline &\multicolumn{1}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.217 &-0.080&0.507&0.259&-0.198& 0.299 \\ \hline &\multicolumn{1}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^6 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &1.096&0.384&0.196&0.941&0.220&0.875&-0.048 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^6P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ 
&$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ &$1^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.788&0.560&1.454&0.475&0.924 \\ \hline \bottomrule[1pt] \multicolumn{8}{c}{$8_{2f}$: $\frac{1}{\sqrt2}\{ (cd)[us]{\bar c}+(cu)[ds]{\bar c} \}$ } \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$ ($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & $0^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes{1^{-}}$ \\ \hline $(0,1,0)$ &-0.377&0.687&0.465&0.525&0.164&0.062 \\ \hline &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$(${J^P={\frac{5}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} )_{\frac{1}{2}}\otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}} \otimes{1^{-}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ \\ \hline $(0,1,0)$ &0.223&1.277&0.577 &1.055 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table*} \begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the diquark-diquark-antiquark model with the wave function $\frac{1}{\sqrt2}[({c}u)\{ds\}{\bar c}-({c}d)\{us\}{\bar c}]$ in $8_{1f}$ and $\frac{1}{\sqrt6} \{ (cd)[us]{\bar c}-(cu)[ds]{\bar c}-2(cs)[ud]{\bar c} \}$ in $8_{2f}$ with isospin $(I,I_3) = (0,0)$. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}\otimes J_{4}^{P_{4}}$ are corresponding to the angular momentum and parity of $(cq_1)$, $(q_2q_3)$, $\bar{c}$ and orbital, respectively. 
The $\lambda$ mode P-wave orbital excitation lies between the $\bar{c}$ and the center of mass system of the $(cq_1)$ and $(q_2q_3)$.} \label{def} \begin{center} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c} {$8_{1f}$: $\frac{1}{\sqrt2}[({c}u)\{ds\}{\bar c}-({c}d)\{us\}{\bar c}]$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}$ ($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & $0^{+}\otimes1^{+} \otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{0} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(0^{+}\otimes1^{+})\otimes{\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{1} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})_{2} \otimes {\frac{1}{2}}^{-}\otimes0^{+}$ &$(1^{+}\otimes1^{+})\otimes {\frac{1}{2}}^{-}\otimes0^{+}$ \\ \hline $(0,0,0)$ &0.050 &-0.377 &0.368 &-0.490 &-0.013 &0.881 &0.352 \\ \hline &\multicolumn{1}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{2}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{2}{c|}{$^{4}P_{\frac{1}{2}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,0,0)$ &0.334 &-0.448&0.469&0.221&-0.179& 0.318 \\ \hline &\multicolumn{1}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^2 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{2}{c|}{${^4 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^6 P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{0}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0, 0,0)$ &0.575&-0.150&0.139&0.884&0.072&0.853&-0.014 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^6P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ &$(0^{+}\otimes1^{+}\otimes {\frac{1}{2}}^{-})\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{1}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes1^{-}$ &$((1^{+}\otimes1^{+})_{2}\otimes{\frac{1}{2}}^{-})_{\frac{5}{2}}\otimes1^{-}$ 
&$1^{+}\otimes1^{+}\otimes{\frac{1}{2}}^{-}\otimes1^{-}$ \\ \hline $(0,0,0)$ &0.035&0.503&1.397&0.459&0.867 \\ \hline \bottomrule[1pt] \multicolumn{8}{c}{ $8_{2f}$: $\frac{1}{\sqrt6} \{ (cd)[us]{\bar c}-(cu)[ds]{\bar c}-2(cs)[ud]{\bar c} \}$ } \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$ ($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{0^{+}}$ & $0^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{1}{2}} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}}\otimes{1^{-}}$ \\ \hline $(0,0,0)$ &-0.377&0.223&-0.231&0.616&0.410&-0.370 \\ \hline &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$(${J^P={\frac{5}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${0}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} )_{\frac{1}{2}}\otimes{1^{-}}$ & $({1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-})_{\frac{3}{2}} \otimes{1^{-}}$ & ${1}^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-} \otimes{1^{-}}$ \\ \hline $(0,0,0)$ &0.359&0.949&0.121 &0.495 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table*} \subsection{Magnetic moments of the diquark-triquark model with the configuration$(cq_1)(\bar{c}q_2q_3)$} Considering the diquark-triquark model, the total magnetic moments formula is: \begin{eqnarray} \hat{\mu} = \ \hat{\mu}_{\mathcal{D}}+\hat{\mu}_{\mathcal{T}}+\hat{\mu}_{l}. \end{eqnarray} where the $l$ is the orbital excitation between the diquark and triquark. The magnetic moments formula of the pentaquark, $(cq_1)(\bar{c}q_2q_3)$ in the diquark-triquark model is \begin{eqnarray} \mu &=& \langle\ \psi\ |\ \hat{\mu}_{\mathcal{D}}+\hat{\mu}_{\mathcal{T}}+\hat{\mu}_{l}\ |\ \psi\ \rangle\nonumber\\ &=& \sum_{S_z,l_z}\ \langle\ SS_z,ll_z|JJ_z\ \rangle^{2} \left \{ \mu_{l} l_z + \sum_{\widetilde{S}_{\mathcal{D}},\widetilde{S}_{\mathcal{T}}}\ \langle\ S_\mathcal{D} \widetilde{S}_{\mathcal{D}},S_\mathcal{T} \widetilde{S}_{\mathcal{T}}|SS_z\ \rangle^{2} \Bigg [ \widetilde{S}_{\mathcal{D}}\bigg(\mu_{c} + \mu_{q_1}\bigg )\nonumber\right.\\ &+&\left. \sum_{\widetilde{S}_{\bar{c}}}\ \langle\ S_{\bar{c}} \widetilde{S}_{\bar{c}},S_{r} \widetilde{S}_{\mathcal{T}}-\widetilde{S}_{\bar{c}}|S_{\mathcal{T}} \widetilde{S}_{\mathcal{T}}\rangle^{2}\bigg(g\mu_{\bar{c}}\widetilde{S}_{\bar{c}}+(\widetilde{S}_{\mathcal{T}}-\widetilde{S}_{\bar{c}})(\mu_{q_2}+\mu_{q_3})\bigg ) \Bigg ]\right \}. \end{eqnarray} where $S_{\mathcal{D}}$, $S_{\mathcal{T}}$ and $S_{r}$ represent the diquark, triquark and the light diquark spin inside the triquark, respectively. The triquark's masses roughly use the sum of the mass of the corresponding diquark and the antiquark. The numerical results with isospin $(I,I_3) = (1,0)$ and $(I,I_3) = (0,0)$ are reported in Table \ref{ooo} and \ref{lll}, respectively. 
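To illustrate how this formula is used in practice, the following minimal \texttt{SymPy} sketch (shown for illustration only; it is not the code used to generate the tables) evaluates the simplest case, the S-wave $8_{2f}$ state with spin coupling ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ and $I(J^P)=0({\frac{1}{2}}^{-})$. For this state the orbital term is absent and the scalar $[q_2q_3]$ diquark inside the triquark carries no spin, while the flavor weights follow from the $8_{2f}$ wave function; the sketch reproduces the analytic combination $\frac{1}{9}(6\mu_{c}-3\mu_{\bar{c}}+\mu_{u}+\mu_{d}+4\mu_{s})$ quoted in Table \ref{gao} below.
\begin{verbatim}
# Illustration only: S-wave diquark-triquark state (triquark 1/2^-, diquark 1^+),
# flavor 8_{2f}, I(J^P) = 0(1/2^-); the quark magnetic moments are kept symbolic.
from sympy import Rational, symbols, simplify
from sympy.physics.quantum.cg import CG

mu_u, mu_d, mu_s, mu_c, mu_cbar = symbols('mu_u mu_d mu_s mu_c mu_cbar')

S_D, S_T, J = 1, Rational(1, 2), Rational(1, 2)   # diquark, triquark and total spin

# flavor weights of the 8_{2f} wave function: (cd)(cbar[us]) and (cu)(cbar[ds]) enter
# with probability 1/6 each, (cs)(cbar[ud]) with 4/6; only q1 in (c q1) carries spin
mu_q1 = Rational(1, 6)*mu_d + Rational(1, 6)*mu_u + Rational(4, 6)*mu_s

mu = 0
for m_D in (-1, 0, 1):                        # diquark spin projection
    m_T = J - m_D                             # triquark projection fixed by J_z = J
    if abs(m_T) > S_T:
        continue
    w = CG(S_D, m_D, S_T, m_T, J, J).doit()**2    # squared Clebsch-Gordan coefficient
    # the scalar [q2 q3] diquark carries no spin, so the triquark part reduces to
    # the antiquark contribution g*mu_cbar*m_T with g = 2
    mu += w * (m_D*(mu_c + mu_q1) + 2*m_T*mu_cbar)

target = Rational(1, 9)*(6*mu_c - 3*mu_cbar + mu_u + mu_d + 4*mu_s)
print(simplify(mu - target))                  # prints 0
\end{verbatim}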
\begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the diquark-triquark model with the wave function $\frac{1}{\sqrt6}[({c}d)(\bar c\{us\})+({c}u)(\bar c\{ds\})]-\sqrt{\frac{2}{3}}({c}s)(\bar c\{ud\})$ in $8_{1f}$ and $\frac{1}{\sqrt2}\{ ({c}d)(\bar c[us])+({c}u)(\bar c[ds]) \}$ in $8_{2f}$ with isospin $(I,I_3) = (1,0)$. They are in $8_{1f}$ representation from $6_f \otimes 3_f = 10_f \oplus 8_{1f}$ and $8_{2f}$ representation from $\bar{3}_f \otimes 3_f = 1_f \oplus8_{2f}$, respectively. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}$ are corresponding to the angular momentum and parity of triquark, diquark and orbital, respectively. The unit is the magnetic moments of the proton.} \label{ooo} \begin{center} \resizebox{0.90\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c}{$8_{1f}$: $\frac{1}{\sqrt6}[({c}d)(\bar c\{us\})+({c}u)(\bar c\{ds\})]-\sqrt{\frac{2}{3}}({c}s)(\bar c\{ud\})$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$ ($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}^{-}$($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes0^{+}$ \\ \hline $(0,1,0)$ &0.522 &-0.078 &0.051&0.666&0.178&0.188&0.352 \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & \\ \hline $(0,1,0)$ &-0.137&0.058&0.015&0.080&0.354&0.088 \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^{6}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.577&-0.030&0.098&0.152&0.508&0.157&0.242 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $ (Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{5}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes1^{-}$ \\ \hline $(0,1,0)$ &0.714&0.233&0.236&0.299&0.370 \\ \hline \bottomrule[1pt] 
\multicolumn{8}{c}{$8_{2f}$: $\frac{1}{\sqrt2}\{ ({c}d)(\bar c[us])+({c}u)(\bar c[ds]) \}$} \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,1,0)$ &-0.377&0.687&0.465&0.199&-0.184&0.235 \\ \midrule[1pt] &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$ (${J^P={\frac{5}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes1^{-}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes1^{-}$ \\ \hline $(0,1,0)$ &-0.307&0.803&0.381&0.558 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table*} \begin{table*}[htbp] \caption{The magnetic moments of the pentaquark states in the diquark-triquark model with the wave function $\frac{1}{\sqrt2}[({c}u)(\bar c\{ds\})-({c}d)(\bar c\{us\})]$ in $8_{1f}$ and $\frac{1}{\sqrt6}\{({c}d)(\bar c[us])-({ c}u)(\bar c[ds])-2( c s)(\bar c[ud])\}$ in $8_{2f}$ with isospin $(I,I_3) = (0,0)$. The third line $J_{1}^{P_{1}}\otimes J_{2}^{P_{2}}\otimes J_{3}^{P_{3}}$ are corresponding to the angular momentum and parity of triquark, diquark and orbital, respectively. 
The unit is the magnetic moments of the proton.}\label{lll} \begin{center} \resizebox{0.90\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[1pt] \hline \multicolumn{8}{c}{$8_{1f}$: $\frac{1}{\sqrt2}[({c}u)(\bar c\{ds\})-({c}d)(\bar c\{us\})]$} \\ \hline &\multicolumn{3}{c|}{$^{2}S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{3}{c|}{${^{4}S_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{1}{c}{$^{6}S_{\frac{5}{2}}^{-}$($J^P={\frac{5}{2}}^{-}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes0^{+}$ \\ \hline $(0,0,0)$ &0.033&0.574&-0.601&0.910&-0.555&-0.056&0.352 \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{1}{2}}}$ (${J^P={\frac{1}{2}}^{+}}$)} & \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & \\ \hline $(0,0,0)$ &0.062&-0.126&0.265&0.473&-0.345&-0.064 \\ \hline &\multicolumn{3}{c|}{${^{2}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{3}{c|}{${^{4}P_{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{+}}$)} &\multicolumn{1}{c}{${^{6}P_{\frac{3}{2}}}$ (${J^P={\frac{3}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{5}{2}}\otimes1^{-}$ \\ \hline $(0,0,0)$ &0.143&0.671&-0.503&0.707&-0.363&-0.002&0.212 \\ \hline &\multicolumn{3}{c|}{${^{4}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{5}{2}}}$ (${J^P={\frac{5}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{6}P_{\frac{7}{2}}}$ (${J^P={\frac{7}{2}}^{+}}$)} \\ \cline{2-8} $ (Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & $[{\frac{3}{2}}^{-}\otimes1^{+}]_{\frac{5}{2}}\otimes1^{-}$ & ${\frac{3}{2}}^{-}\otimes1^{+}\otimes1^{-}$ \\ \hline $(0,0,0)$ &1.008&-0.445&0.041&0.313&0.420 \\ \hline \bottomrule[1pt] \multicolumn{8}{c}{$8_{2f}$: $\frac{1}{\sqrt6}\{({c}d)(\bar c[us])-({ c}u)(\bar c[ds])-2( c s)(\bar c[ud])\}$} \\ \hline &\multicolumn{2}{c|}{$^2S_{\frac{1}{2}}$($J^P={\frac{1}{2}}^{-}$)} &\multicolumn{1}{c|}{$^{4}S{{\frac{3}{2}}}$(${J^P={\frac{3}{2}}^{-}}$)} &\multicolumn{2}{c|}{${^{2}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{1}{2}}}$(${J^P={\frac{1}{2}}^{+}}$)} \\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{1}{2}}\otimes1^{-}$ & 
$[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ \\ \hline $(0,0,0)$ &-0.377&0.223&-0.231&0.164&-0.035&-0.165 \\ \hline &\multicolumn{2}{c|}{$^{2}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{$^{4}P_{\frac{3}{2}}$ ($J^P={\frac{3}{2}}^{+}$)} &\multicolumn{1}{c|}{${^{4}P_{\frac{5}{2}}^{+}}$(${J^P={\frac{5}{2}}^{+}}$)}\\ \cline{2-8} $(Y, I, I_3)$ & ${\frac{1}{2}}^{-}\otimes0^{+}\otimes1^{-}$ & $[{\frac{1}{2}}^{-}\otimes1^{+}]_{\frac{3}{2}}\otimes1^{-}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes1^{-}$ & ${\frac{1}{2}}^{-}\otimes1^{+}\otimes1^{-}$ \\ \hline $(0,0,0)$ &-0.359&0.293&-0.165&-0.196 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table*}
We have compared the magnetic moment of $P_{cs}(4459)$ in the three configurations, as shown in Table \ref{gao} below. The magnetic moment expressions and numerical results illustrate that the molecular model is distinguishable from the other two models for $0({\frac{1}{2}}^{-})$, but indistinguishable for $0({\frac{3}{2}}^{-})$. As far as the diquark-diquark-antiquark and diquark-triquark models are concerned, they are completely indistinguishable for both $0({\frac{1}{2}}^{-})$ and $0({\frac{3}{2}}^{-})$. In addition, the magnetic moment of $P_{cs}(4459)$ has been studied in other papers. In Ref.~\cite{Li:2021ryu}, the numerical value in the molecular picture was obtained as $\mu_{P_{cs}} = -0.062\mu_{N}$ with 0(${\frac{1}{2}}^{-}$) and $\mu_{P_{cs}} = 0.465\mu_{N}$ with 0(${\frac{3}{2}}^{-}$). In Ref.~\cite{Ozdem:2021ugay}, the magnetic dipole moments of $P_{cs}(4459)$ in the molecular and diquark-diquark-antiquark pictures are extracted as $\mu_{P_{cs}} = 1.75\mu_{N}$ and $\mu_{P_{cs}}=0.34\mu_{N}$, respectively. These numerical results differ from our results, $\mu_{P_{cs}} = -0.531\mu_{N}$ with 0(${\frac{1}{2}}^{-}$) and $\mu_{P_{cs}} = -0.231\mu_{N}$ with 0(${\frac{3}{2}}^{-}$) in the molecular model and $\mu_{P_{cs}}=0.223\mu_{N}$ in the diquark-diquark-antiquark model, mainly because of the different choices of wave functions and quark masses.
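To make the dependence on the quark masses explicit, the short numerical sketch below evaluates the analytic combinations of Table \ref{gao} with commonly used constituent quark masses, $m_u=m_d=0.336$ GeV, $m_s=0.540$ GeV and $m_c=1.660$ GeV. These input values are quoted here only for illustration and are not necessarily identical to the ones adopted in this work, although with them the sketch yields numbers close to those listed in Table \ref{gao}.
\begin{verbatim}
# Illustration only: constituent-quark magnetic moments in units of the nuclear
# magneton, mu_q = (e_q/e) * m_p/m_q, for assumed constituent masses in GeV.
m_p = 0.938
masses  = {'u': 0.336, 'd': 0.336, 's': 0.540, 'c': 1.660}
charges = {'u': 2/3, 'd': -1/3, 's': -1/3, 'c': 2/3}

mu = {q: charges[q] * m_p / masses[q] for q in masses}
mu['cbar'] = -mu['c']

# analytic combinations for the three assignments of P_cs(4459) (cf. Table 'gao')
mol_half  = (6*mu['cbar'] - 3*mu['c'] + mu['u'] + mu['d'] + 4*mu['s']) / 9
diq_half  = (6*mu['c'] - 3*mu['cbar'] + mu['u'] + mu['d'] + 4*mu['s']) / 9
all_three = (6*mu['c'] + 6*mu['cbar'] + mu['u'] + mu['d'] + 4*mu['s']) / 6

print(f"molecular model,   0(1/2-): {mol_half:+.3f}")
print(f"diquark pictures,  0(1/2-): {diq_half:+.3f}")
print(f"all three models,  0(3/2-): {all_three:+.3f}")
\end{verbatim}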
We compare our results with those of other works in Table \ref{g11}.
\begin{table} \centering \caption{The magnetic moments of the $P_{cs}(4459)$ in the molecular model, the diquark-diquark-antiquark model and the diquark-triquark model in the $8_{2f}$ representation with isospin $(I,I_3) = (0,0)$.}\label{gao} \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \toprule[1pt] \hline $P_{cs}(4459)$ & Multiplet & Spin-orbit coupling& $I(J^P)$ & Magnetic moment & Numerical results \\ \hline \multirow{2}{*}{Molecular model} & \multirow{2}{*}{$8_{2f}$} & \multirow{2}{*}{${\frac{1}{2}}^{+}\otimes1^{-}\otimes0^{+}$} & $0({\frac{1}{2}}^{-})$ & $\frac{1}{9}(6\mu_{\bar{c}}-3\mu_{c}+\mu_{u}+\mu_{d}+4\mu_{s})$ & -0.531 \\ \cline{4-6} & & & $0({\frac{3}{2}}^{-})$ & $\frac{1}{6}(6\mu_{c}+6\mu_{\bar{c}}+\mu_{u}+\mu_{d}+4\mu_{s})$ & -0.231 \\ \hline \multirow{2}{*}{diquark-diquark-antiquark model} & \multirow{2}{*}{$8_{2f}$} & \multirow{2}{*}{$1^{+}\otimes0^{+}\otimes{\frac{1}{2}}^{-}\otimes0^{+}$} & $0({\frac{1}{2}}^{-})$ & $\frac{1}{9}(6\mu_{c}-3\mu_{\bar{c}}+\mu_{u}+\mu_{d}+4\mu_{s})$ & 0.223 \\ \cline{4-6} & & & $0({\frac{3}{2}}^{-})$ & $\frac{1}{6}(6\mu_{c}+6\mu_{\bar{c}}+\mu_{u}+\mu_{d}+4\mu_{s})$ & -0.231 \\ \hline \multirow{2}{*}{diquark-triquark model} & \multirow{2}{*}{$8_{2f}$} & \multirow{2}{*}{${\frac{1}{2}}^{-}\otimes1^{+}\otimes0^{+}$} & $0({\frac{1}{2}}^{-})$ & $\frac{1}{9}(6\mu_{c}-3\mu_{\bar{c}}+\mu_{u}+\mu_{d}+4\mu_{s})$ & 0.223 \\ \cline{4-6} & & & $0({\frac{3}{2}}^{-})$ & $\frac{1}{6}(6\mu_{c}+6\mu_{\bar{c}}+\mu_{u}+\mu_{d}+4\mu_{s})$ & -0.231 \\ \hline \bottomrule[1pt] \end{tabular}} \end{center} \end{table}
\begin{table}[htbp] \centering \caption{Our results and other theoretical results for the magnetic moment of $P_{cs}(4459)$. The unit is the magnetic moments of the proton. A, B, and C correspond to the molecular model, the diquark-diquark-antiquark model, and the diquark-triquark model, respectively.}\label{g11} \begin{center} \resizebox{0.6\columnwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} \hline Cases & \multicolumn{2}{c|}{A} &\multicolumn{2}{c|}{B} &\multicolumn{2}{c}{C} \\ \hline $J^P$&${\frac{1}{2}}^{-}$&${\frac{3}{2}}^{-}$&${\frac{1}{2}}^{-}$&${\frac{3}{2}}^{-}$&${\frac{1}{2}}^{-}$&${\frac{3}{2}}^{-}$ \\ \hline Our results & -0.531&-0.231 & 0.223 &-0.231&0.223&-0.231 \\ \hline Ref.\cite{Li:2021ryu} & -0.062&0.465 & - &-&-&- \\ \hline Ref.\cite{Ozdem:2021ugay} & 1.75&- & 0.34 &-&-&- \\ \hline \end{tabular}} \end{center} \end{table}
\section{SUMMARY} \label{sec5} Inspired by the recently observed $P_{cs}(4459)$, we systematically calculate the magnetic moments of $P_{cs}$ with $J^{P}={\frac{1}{2}}^{\pm}, {\frac{3}{2}}^{\pm}, {\frac{5}{2}}^{\pm}$, and ${\frac{7}{2}}^{+}$ in three models: molecular, diquark-diquark-antiquark, and diquark-triquark. Comparing the numerical results of the above three models, we observe that the magnetic moments of the states with the same quantum numbers are different. Indeed, even within the same model, the magnetic moments with different configurations are different. Next, we compare the magnetic moment of $P_{cs}(4459)$, which has been predicted to be an S-wave state with $I(J^P)=0({\frac{1}{2}}^{-})$ or $I(J^P)=0({\frac{3}{2}}^{-})$, in the three configurations. The result shows that the molecular model is different from the other two models in $I(J^P)=0({\frac{1}{2}}^{-})$.
These findings highlight that the magnetic moments will be helpful for determining the internal structure of the $P_{cs}$ states as the experimental data keep accumulating, since the magnetic moments encode information about the underlying charge distributions. \section*{Acknowledgments} This project is supported by the National Natural Science Foundation of China under Grants No. 11905171 and No. 12047502. This work is also supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No. 2022JQ-025).
\section{Introduction} The physics of the strong interaction is accurately described by a gauge theory based on $SU(3)$, namely quantum chromodynamics (QCD). While QCD has been very successful, some open questions remain. An important example of such an as-of-yet unanswered question is the following: \textit{How do hadronic properties emerge from the properties of the constituent partons?} For example, it is still not completely understood how the spin of the proton arises from the angular momenta of the quarks and gluons inside the proton. Hence we need to look inside the proton, which of course we can do using scattering experiments. The description of such experiments in QCD is somewhat simplified because of factorization, which tells us that physical cross sections can be written as a convolution of a hard-scale function and a soft-scale one. The hard-scale function represents the short-distance part of the process, which involves the partonic degrees of freedom inside the hadron. Typically it can be written as some partonic cross section, calculable perturbatively in the strong coupling $\alpha_s$. The soft-scale function represents the long-distance part of the process and is non-perturbative. This means that it has to be fitted directly from experiment or calculated using non-perturbative techniques, e.g. using lattice QCD. Important examples of such non-perturbative functions are the parton distribution functions (PDFs) and the generalized parton distributions (GPDs). PDFs are accessible in forward-kinematic processes like inclusive deep-inelastic scattering (DIS), $e p \rightarrow e X$, and describe the distribution of the longitudinal momentum and polarization of partons inside hadrons. From the experimental side, they can be accessed by analyzing the data from the HERA collider~\cite{Abramowicz:2015mha,Accardi:2016ndt} and the planned Electron Ion Collider (EIC)~\cite{Boer:2011fh,AbdulKhalek:2021gbh} in the future. GPDs can be considered to be the counterparts of the standard PDFs in processes with off-forward kinematics, like e.g. exclusive deeply-virtual Compton scattering (DVCS, $e p \rightarrow e p \gamma$). They describe the transverse distributions of the partons in the hadronic target. Combining this with the longitudinal information then gives rise to a full three-dimensional description of hadronic structure. Furthermore, GPDs also allow for the determination of the partonic angular momentum contributions to the total hadronic spin~\cite{Diehl:2003ny}. The study of GPDs is one of the main goals of the future EIC~\cite{Boer:2011fh,AbdulKhalek:2021gbh}. The properties of PDFs and GPDs can also be studied using lattice QCD, see e.g.~\cite{Gockeler:2004wp, Gockeler:2010yr} and for recent progress~\cite{Braun:2015axa,Braun:2016wnx,Bali:2018zgl,Bali:2019dqc,Harris:2019bih,Alexandrou:2020sml}. While the distribution functions themselves are non-perturbative, their scale-dependence can be calculated perturbatively in the strong coupling $\alpha_s$. The origin of this lies in the analysis of DIS and DVCS using the Wilsonian operator product expansion (OPE)~\cite{Wilson:1969zs, Zimmermann:1972tv}, which gives a direct relation between PDFs and GPDs and the matrix elements of composite local gauge invariant operators. The consequence of this is that the scale-dependence of the parton distributions is directly related to the scale-dependence of the operators, determined by their anomalous dimensions. 
Hence it is important to understand the renormalization properties of the relevant operators, both in forward and off-forward kinematics. In this work we focus on the leading-twist flavor-non-singlet quark operators. In forward kinematics, the anomalous dimensions of the twist-2 non-singlet quark operators, written as functions of the Mellin moment $N$ representing their Lorentz spin, are known completely up to the three-loop level~\cite{Gross:1973ju, Floratos:1977au, Moch:2004pa,Blumlein:2021enk}. In certain limits, partial information is also available at four and five loops~\cite{Velizhanin:2011es,Velizhanin:2014fua,Ruijl:2016pkm,Moch:2017uml,Herzog:2018kwj}. In off-forward kinematics, the evolution kernel for the non-singlet operators is known completely up to three loops~\cite{Braun:2017cih}. The calculation exploited conformal symmetry~\cite{Braun:2003rp} of the QCD Lagrangian near the Wilson-Fisher fixed point. This technique was already introduced in the nineties by M\"uller and Belitsky~\cite{Mueller:1993hg,Belitsky:1998gc} for the calculation of two-loop radiative corrections. In addition to the usual variable $N$, the moment space anomalous dimensions in off-forward kinematics also depend on the number $k$ of total derivatives acting on the operator. Besides the computations using the conformal approach, there are also some fixed-moment calculations of the operator matrix elements (OMEs) up to the three-loop level. These calculations were done in the modified minimal subtraction ($\overline{\mbox{MS}}$) scheme as well as in alternative ones, like the regularization invariant (RI) scheme~\cite{Gracey:2009da,Kniehl:2020nhw}. An advantage of such schemes is that they are suitable for direct application to available lattice QCD results. These fixed-moment calculations use a different basis for total derivative operators from the one in the conformal approach, making a direct comparison between the fixed-moment results of~\cite{Gracey:2009da,Kniehl:2020nhw} and~\cite{Braun:2017cih,Mueller:1993hg,Belitsky:1998gc} impossible. Instead, additional computational steps are required. In the present article, we will review the renormalization of flavor-non-singlet quark operators including total derivatives, paying particular attention to possible choices for their bases. This way we can connect different results which, as of yet, appeared unrelated in the literature. Assuming a large number of quark flavors $n_f$, we calculate the relevant OMEs up to four-loop order for a non-zero momentum transfer through the operator vertex. Furthermore, we derive consistency relations for the corresponding operator anomalous dimensions, which allow us to check and extend previous calculations for the leading-$n_f$ terms of the off-forward anomalous dimensions up to five loops. These proceedings are organized as follows. Next, we define the operators and their matrix elements and study their renormalization. In Section \ref{sec:bases}, we then discuss two bases for total derivative operators used in the literature, and summarize the knowledge of the mixing matrices in these bases. The next section introduces a consistency relation which the anomalous dimensions have to obey, leading to a novel algorithm for deriving the full mixing matrix. Results of the application of this algorithm are discussed in Section \ref{sec:results}, and we finish with some concluding remarks in Section \ref{sec:conclusion}. 
\label{sec:renorm} \section{Operator renormalization} \subsection{Operators and their matrix elements} The operators appearing in the OPE analysis of DIS and DVCS are the spin-$N$ local non-singlet quark operators \begin{equation} \label{eq:OpDef} \mathcal{O}^{NS}_{\mu_1 \dots \mu_{N}} \,=\, \mathcal{S}\, \overline{\psi}\lambda^{\alpha}\gamma_{\mu_1} D_{\mu_2} \dots D_{\mu_{N}}\psi\, , \end{equation} with $\psi$ the quark field, $D_{\mu} = \partial_{\mu} - i g_s A_{\mu}$ the standard QCD covariant derivative, and $\lambda^{\alpha}$ the generators of the flavor group $SU(n_f)$. As we are interested in the leading-twist contributions, we symmetrize the Lorentz indices and take the traceless part, indicated by ${\mathcal S}$. {This projects the twist-two contribution, see e.g.~\cite{Blumlein:1999sc}.} The scale-dependence of these operators is determined by their anomalous dimensions, which can be calculated perturbatively in the strong coupling. Schematically \begin{equation} \frac{d \mathcal{O}}{d \ln \mu^2} = -\gamma \mathcal{O}, \: \: \gamma \equiv a_s \gamma^{(0)} +a_s^2 \gamma^{(1)} + ... \end{equation} with $\mu$ the renormalization scale and $a_s = \alpha_s/(4\pi)$. We gain access to the anomalous dimensions by considering spin-averaged matrix elements of the operators, \begin{equation} \label{eq:generalOME} \langle \psi(p_1) | \mathcal{O}_{\mu_1 \dots \mu_N}^{NS}(p_3) | \overline{\psi}(p_2)\rangle \,, \end{equation} with quarks and anti-quarks of momenta $p_1$ and $p_2$ as external fields, see Fig.\ref{figGreenFun}. We assume all momenta to be incoming, $\sum_{i=1}^3 p_i = 0$. \begin{figure}[htb] \centerline{% \includegraphics[width=0.5\textwidth]{OMEqq-generic.pdf}} \caption{The Green's function $\langle \psi(p_1) | O_{\mu_1\dots \mu_N}^{NS}(p_3) | \overline{\psi}(p_2)\rangle$ with momentum $p_3=-p_1-p_2$ flowing through the operator vertex. For simplicity, we set $p_2 \equiv 0$.} \label{figGreenFun} \end{figure} The tracelessness and symmetry of the Lorentz indices is most easily achieved by contracting the OMEs with a tensor of light-like $\Delta$, \begin{equation} \label{eq:theDeltas} \Delta^{\mu_1}\dots \Delta^{\mu_N} \, , \end{equation} and $\Delta^2=0$. As we are interested in the renormalization of the non-singlet operators including total derivatives, we have to choose $p_3\neq0$. However, for simplicity but without loss of generality, one can nullify one of the external momenta, i.e. we can set $p_2=0$. The calculated OMEs are then of the form \begin{equation} \label{eq:OMEs} \Delta^{\mu_1}\dots \Delta^{\mu_N}\, \langle \psi(p_1) | \mathcal{O}_{\mu_1\dots \mu_N}^{NS}(-p_1) | \overline{\psi}(0) \rangle \equiv \langle \psi(p_1) | \mathcal{O}_{N}(-p_1) | \overline{\psi}(0) \rangle \, . \end{equation} Note that this reduces the initial three-point function to a two-point one. The computation of these OMEs is done entirely automatically using the {\sc Form}~\cite{Vermaseren:2000nd,Kuipers:2012rf} program {\sc Forcer}~\cite{Ruijl:2017cxj}, resulting in fixed moments of the OMEs in Eq.~(\ref{eq:OMEs}). \subsection{Operator renormalization and anomalous dimensions} The actual renormalization will be done using the $\overline{\mbox{MS}}$-scheme \cite{tHooft:1973mfk, Bardeen:1978yd}, in which the evolution of the strong coupling is governed by \begin{equation} \frac{da_s}{d\ln \mu^2} = \beta(a_s) =-a_s(\epsilon+\beta_0 a_s +\beta_1 a_s^2 +\beta_2 a_s^3 + \dots). \end{equation} Here $\beta(a_s)$ is the standard QCD beta-function with $\beta_0 = (11/3) C_A - (2/3) n_f$. 
$C_A$ is the quadratic Casimir of the adjoint representation of the color group $SU(n_c)$, $C_A = n_c$. In forward kinematics the operators renormalize multiplicatively as
\begin{equation} \label{eq:forRen} \mathcal{O}_{N+1} = Z_{N,N}[\mathcal{O}_{N+1}]. \end{equation}
The corresponding anomalous dimensions are related to the QCD splitting functions by a Mellin transform
\begin{equation} \gamma_{NS}(N) \equiv \gamma_{N-1,N-1} = -\int_{0}^{1}\text{d}x \: x^{N-1}{P_{NS}(x)} \end{equation}
which determine, through a convolution $\otimes$ defined as
\begin{equation} [P_{NS}\otimes f_{NS}](x) = \int_x^1 \frac{dy}{y} P_{NS}(y)f_{NS}\Big(\frac{x}{y}\Big), \end{equation}
the scale-dependence of the standard PDFs
\begin{equation} \frac{\text{d} f_{NS}(x,\mu^2)}{\text{d} \ln{\mu^2}} = [P_{NS} \otimes f_{NS}](x). \end{equation}
This is the well-known DGLAP evolution equation \cite{Gribov:1972ri, Altarelli:1977zs, Dokshitzer:1977sg}. For off-forward kinematics, the operator renormalization becomes more complicated because of mixing with total derivative operators. This means that now the renormalization takes the form of a matrix equation
\begin{equation} \begin{pmatrix} \mathcal{O}_{N+1} \\ \partial \mathcal{O}_{N}\\ \vdots \\ \partial^N \mathcal{O}_{1} \end{pmatrix} \,=\, \begin{pmatrix} Z_{N,N} & Z_{N,N-1} & ... & Z_{N,0} \\ 0 & Z_{N-1,N-1} & ... & Z_{N-1,0} \\ \vdots & \vdots & ... & \vdots \\ 0 & 0 & ... & Z_{0,0} \end{pmatrix} \begin{pmatrix} [\mathcal{O}_{N+1}] \\ [\partial \mathcal{O}_{N}] \\ \vdots \\ [\partial^N \mathcal{O}_{1}] \end{pmatrix}. \end{equation}
The off-forward anomalous dimensions, which determine the GPD scale-dependence \cite{Diehl:2003ny}, are related to the $Z$-factors by
\begin{equation} \gamma_{N,k}^{\mathcal{D}} \,=\, -\,\bigg( \,\frac{d }{d\ln\mu^2 }\; Z_{N,j} \bigg)\, Z_{j,k}^{\,-1} \end{equation}
and can be expanded in a power series in the strong coupling
\begin{equation} \gamma_{N,k}^{\mathcal{D}} \,=\, a_s\gamma_{N,k}^{\mathcal{D},(0)} + a_s^2\gamma_{N,k}^{\mathcal{D},(1)} + a_s^3\gamma_{N,k}^{\mathcal{D},(2)} + a_s^4\gamma_{N,k}^{\mathcal{D},(3)} + a_s^5\gamma_{N,k}^{\mathcal{D},(4)} + \dots \, . \end{equation}
\section{Two possible bases for total derivative operators} \label{sec:bases} To study the renormalization of the quark operators in off-forward kinematics, we now have to choose a basis for the total derivative operators. In this section, we discuss two possibilities which have appeared in the literature. \subsection{The Gegenbauer basis} One approach is to expand the local operators in terms of Gegenbauer polynomials~\cite{Braun:2017cih}
\begin{equation} \mathcal{O}_{N,k}^{\mathcal{G}} = (\Delta \cdot \partial)^k \overline{\psi}(x) \slashed \Delta C_N^{3/2}\Bigg(\frac{\stackrel{\leftarrow}{D} \cdot \Delta-\Delta \cdot \stackrel{\rightarrow}{D}}{\stackrel{\leftarrow}{\partial} \cdot \Delta+\Delta \cdot \stackrel{\rightarrow}{\partial}}\Bigg)\psi(x) \end{equation}
where \cite{olver10}
\begin{equation} C_N^{\nu}(z) = \frac{\Gamma(\nu+1/2)}{N!\,\Gamma(2\nu)}\, \sum\limits_{l=0}^{N}\, (-1)^l\binom{N}{l}\frac{(N+l+2)!}{(l+1)!}\, \Big(\frac{1}{2}-\frac{z}{2}\Big)^l. \end{equation}
Here $k \geq N$ is the total number of derivatives and we use the superscript $\mathcal{G}$ to denote the operators in the Gegenbauer basis.
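As a quick cross-check of this explicit representation for $\nu=3/2$, the only case needed here, the finite sum can be compared with a computer-algebra implementation of the Gegenbauer polynomials. The following \texttt{SymPy} sketch is purely illustrative:
\begin{verbatim}
# Illustration: compare the explicit nu = 3/2 sum with SymPy's Gegenbauer polynomials.
from sympy import symbols, Rational, binomial, factorial, gegenbauer, simplify

z = symbols('z')

def C32_explicit(N):
    """1/(2 N!) * sum_l (-1)^l binom(N,l) (N+l+2)!/(l+1)! ((1-z)/2)^l"""
    pref = Rational(1, 2) / factorial(N)   # Gamma(nu+1/2)/(N! Gamma(2 nu)) at nu = 3/2
    return pref * sum((-1)**l * binomial(N, l) * factorial(N + l + 2)
                      / factorial(l + 1) * ((1 - z)/2)**l for l in range(N + 1))

for N in range(6):
    assert simplify(C32_explicit(N) - gegenbauer(N, Rational(3, 2), z)) == 0
print("explicit sum agrees with C_N^{3/2}(z) for N = 0..5")
\end{verbatim}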
Using properties of the Gamma function we can rewrite the operators as a particular double sum of left- and right-derivative operators \begin{align} \mathcal{O}_{N,k}^{\mathcal{G}} =& \: \frac{1}{2N!}\sum_{l=0}^{N}(-1)^l\binom{N}{l}\frac{(N+l+2)!}{(l+1)!} \nonumber \\& \times \sum_{j=0}^{k-l}\binom{k-l}{j}\overline{\psi}(x) \slashed \Delta (\stackrel{\leftarrow}{D} \cdot \Delta)^{k-l-j}(\Delta \cdot \stackrel{\rightarrow}{D})^{l+j}\psi(x). \end{align} The Gegenbauer basis is a natural choice when there is conformal symmetry, e.g. near the Wilson-Fisher critical point of QCD, where $\beta_{\text{QCD}} =~0$~\cite{Efremov:1979qk, Belitsky:1998gc, Braun:2017cih}. The anomalous dimension matrix in this basis is triangular, i.e. its elements $\gamma_{N,j}^{\mathcal{G}} = 0$ if $j>N$, and its diagonal elements correspond to the standard forward anomalous dimensions $\gamma_{N,N}$~\cite{Moch:2017uml}, cf. Eq.~(\ref{eq:forRen}). Note that we can drop the superscript ${\mathcal{G}}$ for $\gamma_{N,N}$ as they do not depend on the basis choice for operators with additional total derivatives. Currently the Gegenbauer mixing matrix is known completely to three loops~\cite{Braun:2017cih}. \subsection{The total derivative basis} Another approach is to identify the operators by counting powers of derivatives \begin{equation} \mathcal{O}_{{ p},{ q},{r}}^{\mathcal{D}} = (\Delta \cdot \partial)^{{ p}} \Big\{(\Delta \cdot D)^{{ q}}\overline{\psi}\, \slashed \Delta (\Delta \cdot D)^{{r}}\psi\Big\}, \end{equation} see e.g.~\cite{Gracey:2011zn,Gracey:2011zg} {and \cite{Geyer:1982fk,Blumlein:1999sc}}. The superscript $\mathcal{D}$ indicates that the operators are written in the total derivative basis. If we now impose the chiral limit, i.e. work with massless quarks, the partial derivatives act as \begin{equation} \label{eq:partialAct} \mathcal{O}_{p,q,r}^{\mathcal{D}} \,=\, \mathcal{O}_{p-1,q+1,r}^{\mathcal{D}} + \mathcal{O}_{p-1,q,r+1}^{\mathcal{D}} \, . \end{equation} Another consequence of the chiral limit is that left- and right-derivative operators renormalize with the same renormalization constants \begin{eqnarray} \label{eq:renormPattern1} \mathcal{O}_{p,0,r}^{\mathcal{D}} &=& \sum\limits_{j=0}^{r}\, Z_{r,r-j}\, [\mathcal{O}_{p+j,0,r-j}^{\mathcal{D}}] \, , \\ \label{eq:renormPattern2} \mathcal{O}_{p,q,0}^{\mathcal{D}} &=& \sum\limits_{j=0}^{q}\, Z_{q,q-j}\, [\mathcal{O}_{p+j,q-j,0}^{\mathcal{D}}] \, \end{eqnarray} and hence have the same anomalous dimensions. The total derivative basis is useful for connecting continuum quantities to lattice ones in non-perturbative studies, see e.g.~\cite{Gockeler:2004wp, Gracey:2009da}. The mixing matrix is also triangular in this basis ($\gamma_{N,k}^{\mathcal{D}} = 0$ if $k > N$) and, as was the case for the Gegenbauer basis, the diagonal elements are just the forward anomalous dimensions $\gamma_{N,N}$~\cite{Moch:2017uml}. We can again drop the superscript ${\mathcal{D}}$ due to basis independence. In this basis, the anomalous dimensions for low-$N$ operators are known up to the three-loop level; see {\cite{Gracey:2009da}} for analytical results and {\cite{Kniehl:2020nhw}} for a numerical extension of these. 
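The repeated application of Eq.~(\ref{eq:partialAct}) is easily automated. The following \texttt{SymPy} sketch (again purely illustrative) expands $\mathcal{O}_{N,0,0}^{\mathcal{D}}$ into operators without total derivatives and confirms the expected Leibniz-rule pattern, $\mathcal{O}_{N,0,0}^{\mathcal{D}} = \sum_{j=0}^{N}\binom{N}{j}\mathcal{O}_{0,j,N-j}^{\mathcal{D}}$, for the first few values of $N$:
\begin{verbatim}
# Illustration: expand O^D_{p,q,r} with Eq. (eq:partialAct) until no total
# derivatives are left; O(p, q, r) stands for the operator O^D_{p,q,r}.
from sympy import Function, binomial, simplify

O = Function('O')

def expand_total(p, q, r):
    """Apply O_{p,q,r} = O_{p-1,q+1,r} + O_{p-1,q,r+1} recursively until p = 0."""
    if p == 0:
        return O(0, q, r)
    return expand_total(p - 1, q + 1, r) + expand_total(p - 1, q, r + 1)

for N in range(1, 6):
    leibniz = sum(binomial(N, j) * O(0, j, N - j) for j in range(N + 1))
    assert simplify(expand_total(N, 0, 0) - leibniz) == 0
print("O^D_{N,0,0} = sum_j binom(N,j) O^D_{0,j,N-j} holds for N = 1..5")
\end{verbatim}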
It is also possible to transform the anomalous dimensions in the $\mathcal{D}/\mathcal{G}$ basis to those in the $\mathcal{G}/\mathcal{D}$ one using \begin{equation} \label{eq:basisTrans} \sum\limits_{j=0}^{N}\, (-1)^j\frac{(j+2)!}{j!}\, \gamma_{N,j}^{\mathcal{G}} \,=\, \frac{1}{N!}\, \sum\limits_{j=0}^{N}\, (-1)^j\binom{N}{j}\frac{(N+j+2)!}{(j+1)!}\, \sum\limits_{l=0}^{j}\, \gamma_{j,l}^{\mathcal{D}}. \end{equation} Note that this is not a 1-to-1 relation between the anomalous dimensions in both bases; the best we can do is relate specific sums to each other. \section{Constraints on the anomalous dimensions in the total derivative basis} \label{sec:constraints} Focusing now on the total derivative basis, it turns out that the elements of the mixing matrices are not all independent. Instead they are subject to particular constraints, which define useful relations between them in the chiral limit. Starting from Eq.~(\ref{eq:partialAct}) we can derive the following relation between the bare operators by acting $N$ times with a partial derivative on $\mathcal{O}_{N,0,0}^{\mathcal{D}}$ \begin{equation} \label{bareREL} \mathcal{O}_{0,N,0}^{\mathcal{D}}-(-1)^N \sum_{j=0}^{N}(-1)^j\binom{N}{j}\mathcal{O}_{j,0,N-j}^{\mathcal{D}} \,=\, 0 \, . \end{equation} Now using the renormalization equations (\ref{eq:renormPattern1}), (\ref{eq:renormPattern2}) and performing some simple algebra, we find a relation between the renormalization factors, and hence between the anomalous dimensions \begin{align} \label{mainConj} \gamma_{N,k}^{\mathcal{D}} &\,=\, \binom{N}{k}\sum_{j=0}^{N-k}(-1)^j \binom{N-k}{j}\gamma_{j+k,j+k} \nonumber\\&+ \sum_{j=k}^N (-1)^k \binom{j}{k} \sum_{l=j+1}^N (-1)^l \binom{N}{l} \gamma_{l,j}^{\mathcal{D}} \, . \end{align} As the relation holds a priori at the level of the renormalization constants, the corresponding relation between the anomalous dimensions is valid to all orders in $a_s$. Putting now $k=0$ in Eq.~(\ref{mainConj}) yields \begin{equation} \label{mainK0} \gamma_{N,0}^{\mathcal{D}} = (-)^N\Bigg[\sum_{i=0}^{N}\gamma_{N,i}^{\mathcal{D}}-\sum_{j=1}^{N-1}(-)^j\binom{N}{j}\gamma_{j,0}^{\mathcal{D}}\Bigg]. \end{equation} This relation allows us to recursively build up the last column of the mixing matrix, provided we can determine the first sum between brackets. We now briefly explain that this is in fact possible. From the renormalization structure of the operators, Eq.~(\ref{eq:renormPattern1}), it is clear that the bare matrix element of $\mathcal{O}_{N+1}$ is related to the sum of renormalization factors $\sum_{i=0}^{N}Z_{N,i}$, and hence its $1/\epsilon$-pole will be related to $\sum_{i=0}^{N}\gamma_{N,i}$. Collecting this information in the quantity $\mathcal{B}(N+1)$ we can then write \begin{equation} \label{eq:BtoSum} \mathcal{B}(N+1) \,=\, \sum_{j=0}^{N} \gamma_{N,j}^{\mathcal{D}}. \, \end{equation} Substituting into Eq.~(\ref{mainK0}) leads to \begin{equation} \label{g0-from-B} \gamma_{N,0}^{\mathcal{D}} \,=\, \sum_{i=0}^{N}(-1)^i\binom{N}{i}\mathcal{B}(i+1). \end{equation} This implies that the last column of the mixing matrix can be directly related to a fixed-moment Feynman diagram calculation of matrix elements of operators without total derivatives. Going back to the general-$k$ relation, Eq.~(\ref{mainConj}), we emphasize that the only assumption made in its derivation was the use of the chiral limit, which imposes constraints on the renormalization structure of the operators. 
Hence, it can be used as an order-independent consistency check, which any expression for $\gamma_{N,k}^{\mathcal{D}}$ has to obey. Alternatively, Eq.~(\ref{mainConj}) allows for the construction of the full mixing matrix starting from the forward anomalous dimensions $\gamma_{N,N}$ and the last column $\gamma_{N,0}^{\mathcal{D}}$ at any order of perturbation theory. So, with partial information being available even to five-loop order, one can produce an ansatz for the off-diagonal elements of the mixing matrix and use Eq.~(\ref{mainConj}) to test its self-consistency. The last column will then serve as a boundary condition. This leads to a 4-step algorithm for constructing the mixing matrix: \textbf{1.} Starting from the bare OMEs in Eq.~(\ref{eq:OMEs}), one determines the all-$N$ expression for the last column entries $\gamma_{N,0}^{\mathcal{D}}$ of the mixing matrix, cf. Eq.~(\ref{g0-from-B}). \textbf{2.} Next, one calculates a sum of the forward anomalous dimensions \begin{equation} \label{eq:DiaSum} \binom{N}{k}\, \sum_{j=0}^{N-k}\, (-1)^j \binom{N-k}{j}\, \gamma_{j+k,j+k} \, . \end{equation} The structure of the result can then be used to construct an ansatz for the off-diagonal elements. \textbf{3.} Using the chosen ansatz, one calculates the double sum \begin{equation} \label{eq:DoubleSum} \sum_{j=k}^N\, (-1)^k \binom{j}{k}\, \sum_{l=j+1}^N\, (-1)^l \binom{N}{l}\, \gamma_{l,j}^{\mathcal{D}} \end{equation} and collects everything into Eq.~(\ref{mainConj}). This results in a system of equations in the unknown coefficients of the ansatz, subject to the boundary condition that the expression for $\gamma_{N,k}^{\mathcal{D}}$ has to agree with the previously found expression for $\gamma_{N,0}^{\mathcal{D}}$ from step 1. \textbf{4.} If one finds a unique solution for the system of equations, one has successfully determined the final expression for the off-diagonal elements of the mixing matrix. If such a solution is not found, some terms will remain in Eq.~(\ref{mainConj}). The structure of these remnant terms can be used to adapt the ansatz, leading one back to step 3. During the course of this algorithm, some non-trivial sums appear, cf. Eq.~(\ref{eq:DiaSum}) and Eq.~(\ref{eq:DoubleSum}). These can be evaluated using algorithms of symbolic summation, which are nicely implemented in the {\sc Mathematica} package {\sc Sigma}~\cite{Schneider2004} by Carsten Schneider. \section{Results} \label{sec:results} Using the consistency relation and the algorithm introduced in the previous section, we have calculated the off-diagonal elements of the anomalous dimension matrix up to five-loop order in the leading-$n_f$ approximation. 
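Before quoting explicit results, we remark that the consistency relation Eq.~(\ref{mainConj}) is also straightforward to implement numerically, for instance to cross-check a candidate mixing matrix order by order. A minimal sketch (assuming the matrix is stored as nested lists \texttt{gamma[N][k]}, with the forward anomalous dimensions on the diagonal) reads:
\begin{verbatim}
from math import comb

def rhs(gamma, N, k):
    """Right-hand side of the consistency relation for gamma[N][k]."""
    diag = comb(N, k) * sum((-1)**j * comb(N - k, j) * gamma[j + k][j + k]
                            for j in range(N - k + 1))
    off = sum((-1)**k * comb(j, k)
              * sum((-1)**l * comb(N, l) * gamma[l][j]
                    for l in range(j + 1, N + 1))
              for j in range(k, N + 1))
    return diag + off

def is_consistent(gamma, tol=1.0e-10):
    return all(abs(gamma[N][k] - rhs(gamma, N, k)) < tol
               for N in range(len(gamma)) for k in range(N + 1))
\end{verbatim}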
As illustration, we quote the five-loop result {\footnotesize \begin{eqnarray} \gamma^{\mathcal{D}, (4)}_{N,k} &=& \frac{16}{81}n_f^4 C_F\Bigg\{\frac{1}{12}\Big(S_{1}(N)-S_{1}(k)\Big)^4\Big(\frac{1}{N+2}-\frac{1}{N-k}\Big)\nonumber\\&&+\frac{1}{3}\Big(S_{1}(N)-S_{1}(k)\Big)^3\Big(\frac{5}{3}\frac{1}{N-k}+\frac{2}{N+1}-\frac{11}{3}\frac{1}{N+2}+\frac{1}{(N+2)^2}\Big)\nonumber\\&&+\frac{1}{2}\Big(S_{1}(N)-S_{1}(k)\Big)^2\Big(S_{2}(N)-S_{2}(k)\Big)\Big(\frac{1}{N+2}-\frac{1}{N-k}\Big)\nonumber\\&&+\Big(S_{1}(N)-S_{1}(k)\Big)^2\Big(\frac{1}{3}\frac{1}{N-k}-\frac{13}{3}\frac{1}{N+1}+\frac{2}{(N+1)^2}+\frac{4}{N+2}\nonumber\\&&-\frac{11}{3}\frac{1}{(N+2)^2}+\frac{1}{(N+2)^3}\Big)\nonumber\\&&+\Big(S_{1}(N)-S_{1}(k)\Big)\Big(S_{2}(N)-S_{2}(k)\Big)\Big(\frac{5}{3}\frac{1}{N-k}+\frac{2}{N+1}-\frac{11}{3}\frac{1}{N+2}+\frac{1}{(N+2)^2}\Big)\nonumber\\&&+\frac{2}{3}\Big(S_{1}(N)-S_{1}(k)\Big)\Big(S_{3}(N)-S_{3}(k)\Big)\Big(\frac{1}{N+2}-\frac{1}{N-k}\Big)\nonumber\\&&+\Big(S_{1}(N)-S_{1}(k)\Big)\Big(\frac{2}{3}\frac{1}{N-k}+\frac{2}{N+1}-\frac{26}{3}\frac{1}{(N+1)^2}+\frac{4}{(N+1)^3}-\frac{8}{3}\frac{1}{N+2}\nonumber\\&&+\frac{8}{(N+2)^2}-\frac{22}{3}\frac{1}{(N+2)^3}+\frac{2}{(N+2)^4}\Big)+\frac{1}{4}\Big(S_{2}(N)-S_{2}(k)\Big)^2\Big(\frac{1}{N+2}-\frac{1}{N-k}\Big)\nonumber\\&&+\Big(S_{2}(N)-S_{2}(k)\Big)\Big(\frac{1}{3}\frac{1}{N-k}-\frac{13}{3}\frac{1}{N+1}+\frac{2}{(N+1)^2}+\frac{4}{N+2}\nonumber\\&&-\frac{11}{3}\frac{1}{(N+2)^2}+\frac{1}{(N+2)^3}\Big)+\frac{2}{3}\Big(S_{3}(N)-S_{3}(k)\Big)\Big(\frac{5}{3}\frac{1}{N-k}+\frac{2}{N+1}\nonumber\\&&-\frac{11}{3}\frac{1}{N+2}+\frac{1}{(N+2)^2}\Big)+\frac{1}{2}\Big(S_{4}(N)-S_{4}(k)\Big)\Big(\frac{1}{N+2}-\frac{1}{N-k}\Big)+\frac{2}{3}\frac{1}{N-k}\nonumber\\&&-\frac{2}{3}\frac{1}{N+1}+\frac{2}{(N+1)^2}-\frac{26}{3}\frac{1}{(N+1)^3}+\frac{4}{(N+1)^4}-\frac{8}{3}\frac{1}{(N+2)^2}+\frac{8}{(N+2)^3}\nonumber\\&&-\frac{22}{3}\frac{1}{(N+2)^4}+\frac{2}{(N+2)^5} \Bigg\} \, . \end{eqnarray} } Here $C_F=(n_c^2-1)/(2n_c)$. For more details and for the lower-loop expressions we refer the reader to our main paper \cite{Moch:2021cdq}. Furthermore, by transforming our results to the Gegenbauer basis using Eq.~(\ref{eq:basisTrans}), we have an independent check of the results in~\cite{Braun:2017cih}. Finally, our algorithm can also be used to extend the results in the Gegenbauer basis to the four-loop level, in the leading-$n_f$ approximation, the expression for which can be found in~\cite{Moch:2021cdq}. \section{Conclusion and outlook} \label{sec:conclusion} We have studied the renormalization of non-singlet quark operators including total derivative operators, which appear in the OPE analysis of DIS and DVCS. In doing so, we have derived a novel method for calculating the off-diagonal elements of the anomalous dimension matrix, based on the renormalization structure of the operators in the chiral limit. On the one hand, this provides an independent check of previous calculations in different operator bases. On the other hand, we also derive new results, e.g. the five-loop anomalous dimensions in the leading-$n_f$ limit. In our main paper~\cite{Moch:2021cdq}, we also show that the method can be used beyond the leading-$n_f$ limit. Results here include the anomalous dimensions of low-$N$ operators to five loops in full QCD. By performing a scheme transformation to the RI-scheme, these will also become useful in studies of the hadron structure using lattice QCD. The presented algorithms, i.e. 
consistency relations in combination with a direct Feynman diagram computation of the relevant OMEs, allow for an automation of the calculations using various computer algebra programs, such as {\sc Forcer} for the calculation of massless two-point functions up to four loops and symbolic summation using {\sc Sigma}. Finally, it should be straightforward to adapt the method to the calculation of mixing matrices for different operators in QCD and to different models altogether. These aspects are left for future studies. \subsection*{Acknowledgements} This work has been supported by Deutsche Forschungsgemeinschaft (DFG) through the Research Unit FOR 2926, ``Next Generation pQCD for Hadron Structure: Preparing for the EIC'', project number 40824754 and DFG grant $\text{MO~1801/4-1}$. \bigskip \bibliographystyle{JHEP}
\section{Introduction} The Cabibbo-Kobayashi-Maskawa (CKM) matrix~\cite{Kobayashi:1973fv} gives the strength of the cross-generational weak couplings between the up and down type quarks, and is currently the only known source of charge-parity violation in the Standard Model (SM), which is required for understanding the observed matter--anti-matter asymmetry in the universe~\cite{Sakharov:1967dj}. Measurements of the CKM matrix aim to overconstrain the matrix, testing the unitarity assumption. Non-unitarity would indicate the existence of an additional, as yet unknown, coupling from Beyond the Standard Model (BSM) physics. The CKM matrix element \ensuremath{V_{ts}} determines the relative strength of the $t$ quark's weak decay to the $s$ quark compared to other down-type quarks. The magnitude of \ensuremath{|\vts|}\ determined through fits based on the unitarity of the CKM matrix is $39.78^{+0.82}_{-0.60} \times 10^{-3}$, and the indirect measurement through box diagram oscillations and rare decays involving loops is $38.8 \pm 1.1 \times 10^{-3}$~\cite{PDG}. However, a recent reanalysis of Tevatron and 8~TeV LHC data has shown that after relaxing the unitarity constraints, \ensuremath{|\vts|}\ can be as large as 0.1~\cite{Clerbaux:2018vup}. Additionally, a recent CMS analysis of 13~TeV data with the single top channel has given the constraint $\ensuremath{|\vts|} + |V_{td}| < 0.057$ at the 95\%~CL under the SM assumption of CKM unitarity, and $\ensuremath{|\vts|} + |V_{td}|=0.06 \pm 0.06$ after relaxing the unitarity constraints and allowing BSM contributions to the top width~\cite{Sirunyan:2020xoq}. Further measurements are therefore required to constrain \ensuremath{|\vts|}\ in the most general scenario, and in particular, the decay of $t \to sW$ has not yet been observed. A direct measurement of $\ensuremath{|\vts|}^2 = \frac{\mathcal{B} (t \to sW)}{\mathcal{B}(t \to qW)}$ at the LHC using the properties of strange hadrons to tag $s$ jets was proposed in~\cite{Ali_2010}. That proposal made a generator level analysis to argue that the measurement would be feasible at the LHC, showing that under the assumption of perfect non-\ttbar background rejection and perfect top and hadron reconstruction, 10~fb$^{-1}$~is sufficient to observe the $t \to sW$ decay. Now that the LHC has collected more than an order of magnitude more luminosity than the proposal considered, we extend that study. In particular, we perform a full reconstruction analysis using the \textsc{Delphes}\ fast detector simulation package, to make a more realistic estimate of the data luminosity required to observe the decay, and analyze the difficulties that would arise in such a measurement. With our more realistic simulation setup, we investigate the prospects of measuring \ensuremath{|\vts|}\ in several scenarios, including the future High Luminosity LHC (HL-LHC)~\cite{BejarAlonso:2020kmn}. \section{Simulation and Event Selection} We used \textsc{MG5\_aMC@NLO}\ 2.4.2 to generate \ttbar events in the dilepton decay channel with up to 2 additional jets at next to leading order~\cite{Alwall_2014}. We use the next-to-next-to-leading order top pair production cross-section $\sigma(pp \to t\bar{t})= 831.76$~pb for a collision energy of 13~TeV, which was calculated using the {\sc Top}++ program~\cite{Czakon:2011xx}. We generated about 7 million signal \ttbar events where one of the $t$ quarks is forced to decay to a $s$ quark and about 7 million background \ttbar events are generated where both $t$ quarks decay to $b$ quarks. 
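For orientation (a back-of-the-envelope estimate of ours, assuming CKM unitarity so that $\mathcal{B}(t \to sW) \simeq \ensuremath{|\vts|}^2$), the unitarity-based value quoted above corresponds to $\mathcal{B}(t \to sW) \approx (39.78 \times 10^{-3})^2 \approx 1.6 \times 10^{-3}$, i.e. roughly one in 600 top quark decays, which motivates forcing the $t \to sW$ decay in the signal generation rather than sampling it from the default branching fractions.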
Drell-Yan events with 2 additional partons are the dominant non-\ttbar backgrounds for dilepton \ttbar events. We generated 20 million Drell-Yan plus two parton events for each of the 4 jet flavour categories: $bb$, $cc$, $ss$ and $qq~(q = u,d,g)$ and use the leading order cross section reported by \textsc{MG5\_aMC@NLO}\ for the categories: $bb = 42.9$~pb, $cc = 4.31$~pb, $ss = 4.37$~pb and $qq = 23.8$~pb. In addition to the non-\ttbar backgrounds, we generated 10 million \ttbar events with one and with two additional $s$ quarks, with leading order cross sections of 9.41~pb and 1.57~pb, respectively.

\textsc{Pythia8}~8.212 was used to simulate parton showering and hadronization~\cite{Sjostrand_2015,Sjostrand_2006} with the FxFx merging scheme~\cite{Frederix:2012ps}. We modified the \ensuremath{K^0_S}\ decay in \textsc{Pythia8}\ to allow the \ensuremath{K^0_S}\ to decay inside a fiducial volume of a cylinder centered at the proton collision point with a radius of 860~mm and a length of 4400~mm. This is equivalent to the region of the CMS tracking detector where the pions from the decay may still pass through three silicon detectors, and would allow us to reconstruct \ensuremath{K^0_S}\ using reconstructed tracks when it decays to a charged pion pair.

We used \textsc{Delphes}~3.4.2 to simulate the response of a CMS-like detector with particle flow (PF) outputs~\cite{de_Favereau_2014}. For jet clustering, we use the anti-$k_{t}$ algorithm with jet radius $R = 0.4$ using FastJet 3.3.2~\cite{Cacciari_2012}. We used the default CMS card included in \textsc{Delphes}\ but updated it to match the CMS setup used in Run 2. The jet radius was decreased from 0.5 to 0.4. The $\Delta R$ cone used to calculate lepton isolation was reduced from 0.5 to 0.3 for electrons and 0.5 to 0.4 for muons. The track transverse momentum resolution formula was updated using the function given in~\cite{CERN-PH-EP-2014-070}. The b-tagging efficiency was updated to closely match the Run 2 response of CMS~\cite{CMS-PAS-BTV-15-001} and a $b$-tagging module based on track counting was added. Smearing of the track impact parameter in the transverse plane was also added, using the associated module and parameters provided with \textsc{Delphes}, to emulate a more realistic \ensuremath{K^0_S}\ reconstruction and to replicate the performance of the CMS tracker~\cite{CMS-TRK-11-001}.

In this study, we use dilepton events in order to remove the additional jet activity from the $W$ decay, which additionally suppresses the background contribution from other processes, especially the multi-jet QCD background. The \ttbar event selection criteria are based on the CMS measurement of the top pair cross-section with the dilepton channel~\cite{CMS-TOP-17-014}. First, we select events with two isolated leptons with $p_{T} >$ 25(20)~GeV for the leading (sub-leading) lepton and pseudorapidity $|\eta| <$ 2.4, and require the invariant mass of the lepton pair to be greater than 20~GeV. Then, we veto $Z$ boson production by excluding events in the dilepton invariant mass range of $|M_{Z} - M_{ll}| <$ 15~GeV, where $M_{Z} = 91.1876$ GeV~\cite{PDG}. We require events to have missing energy $\cancel{E}_{T} >$ 40 GeV and at least two reconstructed jets with $p_{T} >$ 30 GeV and $|\eta| <$ 2.4. We define primary jets as reconstructed jets which are matched to a generator level quark $q$ from the $t \to qW$ decay by finding the highest $p_T$ jet within $\Delta R < 0.4$ of the quark.
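The jet--quark matching that defines the primary jets is a standard $\Delta R$ association. A minimal sketch of this step is given below; it is purely illustrative, and the assumed input format (jets and quarks as records with $p_T$, $\eta$ and $\phi$ fields) is our own convention rather than part of the analysis framework.
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Delta R = sqrt(Delta eta^2 + Delta phi^2), with Delta phi wrapped to [-pi, pi]
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def match_primary_jet(quark, jets, r_max=0.4):
    """Highest-pT reconstructed jet within Delta R < r_max of the generator-level quark."""
    close = [j for j in jets
             if delta_r(quark["eta"], quark["phi"], j["eta"], j["phi"]) < r_max]
    return max(close, key=lambda j: j["pt"]) if close else None
\end{verbatim}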
97\% of top pair production events have one primary jet, while 74\% of events have two primary jets, and therefore fully match the dilepton decay topology after reconstruction. These primary jets will be used to train two Boosted Decision Trees (BDT). The Toolkit for Multivariate Data Analysis in ROOT (TMVA) is used to train the BDT using the adaptive boosting algorithm~\cite{Hoecker_2007}. The first BDT is trained to select the two primary jets out of all the reconstructed jets in the events. The second BDT is trained to discriminate between $s$-quark initiated jets from other jets. Once the first BDT selects the two primary jets, the second BDT is applied on these primary jets to look for $t \to sW$ decay. This process is described in further detail below. Both BDTs are trained using the signal $t\bar{t} \to sWbW$ and background $t\bar{t} \to bWbW$ samples. Additional jets in \ttbar events create ambiguities in the selection of the primary jets, so we employ the first BDT to improve the efficiency of the primary jet selection. We constructed a BDT model with the inputs of two jet and two lepton four vectors, the missing transverse momentum, and the $\Delta R$ between the two jets. Each jet pair in the event is evaluated by the BDT. We use the signal \ttbar sample to train this BDT, defining the signal to be the jet pair made from the two primary jets and the background is when one or more of the jets are not the primary jet. The output of the first BDT is shown in Figure~\ref{fig:BDT1A}. For each event, the jet pair with the highest BDT output is selected as the \ttbar primary jet candidates. Figure~\ref{fig:BDT1C} shows the top jet pair selection output for the signal, \ttbar, and Drell-Yan backgrounds. \begin{figure}[tb!] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{plots/JetDiscrimination/20211111/TOPBDT_MVA_BDT_S.pdf} \caption{BDT output of primary jet selection} \label{fig:BDT1A} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{plots/Result/Background_study/20211111/Redraw_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_TOP_BDT.pdf} \caption{Highest BDT output of primary jet selection as density} \label{fig:BDT1B} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{plots/Result/Background_study/20211111/Stack_Redraw_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_TOP_BDT.pdf} \caption{Highest BDT output of primary jet selection as expected number of event on the Run2 integrated luminosity} \label{fig:BDT1C} \end{subfigure} \caption{\ref{fig:BDT1A}) BDT output distribution of signal (Blue) and background (Red) on the BDT for the primary jet pair selection. \ref{fig:BDT1B}, \ref{fig:BDT1C}) BDT output distribution of jet pair selection for signal and background events normalized (\ref{fig:BDT1B}) and scaled to an integrated luminosity of 137.6 fb$^{-1}$ ~(\ref{fig:BDT1C})} \label{fig:top BDT three graphs} \end{figure} Next, in order to distinguish $s$ jets from the background, predominantly $b$ jets from the dominant $t\bar{t} \to bWbW$, we reconstruct \ensuremath{K^0_S}\ candidates inside the primary jet candidates. In $s$ jets, \ensuremath{K^0_S}\ can be produced directly from the initiating $s$ quark, whereas in $b$ jets they will be produced after a cascade of decays of the $b$ hadron or from the quarks produced in the parton shower. 
This means that \ensuremath{K^0_S}\ should be harder (relative to the jet energy) and more collimated in the case of $s$ quark initiated jets. We reconstruct \ensuremath{K^0_S}\ using its decay into oppositely charged pion pairs, and due to the long lifetime of the \ensuremath{K^0_S}, we require the tracks to come from a displaced vertex within the tracker volume. Using the charged hadron objects from the \textsc{Delphes}\ particle flow reconstruction, we consider all oppositely charged hadron pairs. Since general purpose detectors like CMS do not distinguish between pions from other charged hadrons, we assume all charged hadrons to be pions. We require the charged hadron pair to have $p_{T} >$ 0.95~GeV and $|\eta| <$ 2.4 and the significance of the transverse impact parameter of each track to be greater than two, to ensure the tracks are not from the primary vertex. Then we select reconstructed \ensuremath{K^0_S}\ candidates with an invariant mass of $|M_{\ensuremath{K^0_S}} - M_{\pi\pi}| < 0.1$~GeV, where $M_{\ensuremath{K^0_S}} = 497.611$~MeV~\cite{PDG}. We check that the reconstructed \ensuremath{K^0_S}\ candidate is from a primary jet candidate by requiring the angle between the candidate momentum and the jet axis to satisfy $\Delta{}R < 0.4$. If there are more than one reconstructed \ensuremath{K^0_S}\ candidates, we select the \ensuremath{K^0_S}\ with the highest $p_{T}$. \begin{figure}[tb!] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{plots/JetDiscrimination/20211111/JETBDT_MVA_BDTD_S.pdf} \caption{BDT output of $s$ jet tagging} \label{fig:BDT2A} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{plots/Result/Background_study/20211111/Redraw_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_JET_BDT.pdf} \caption{BDT output of $s$ jet tagging on signal and background events as density} \label{fig:BDT2B} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{plots/Result/Background_study/20211111/Stack_Redraw_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_JET_BDT.pdf} \caption{BDT output of $s$ jet tagging on signal and background events of the Run2 integrated luminosity} \label{fig:BDT2C} \end{subfigure} \caption{\ref{fig:BDT2A}) BDT output of $s$ jet tagging on signal (Blue) and background (Red). \ref{fig:BDT2B}, \ref{fig:BDT2C}) Highest BDT output of $s$ jet tagging on the primary jets normalized (\ref{fig:BDT2B}) and scaled to an integrated luminosity of 137.6 fb$^{-1}$ (\ref{fig:BDT2C})} \label{fig:jet BDT three graphs} \end{figure} After matching the \ensuremath{K^0_S}\ candidates to the primary jet candidates, we use both hadron and jet information to discriminate $s$ jets from all other jets. First, from the jet information, we use jet's $p_{T}$, mass, its minor and major axes and their substructure-related quantities such as charged jet multiplicity, charged daughter's $p_{T}$ fraction in a jet, leptonic constituent's $p_{T}$ fraction in a jet, and $p_{T}D$, also called the jet energy sharing, defined as $\frac{\sqrt{\sum{p_{T,i}^2}}}{\sum{p_{T,i}}}$. To simulate b-tagging, we used a simple track counting method, which tags a jet as a $b$ if more than two tracks are found with a high impact parameter. 
From the \ensuremath{K^0_S}\ kinematics, we use the hadron's $p_{T}$ fraction relative to the jet $x = p_T^{had} / p_T^{jet}$, the $\Delta$R between the jet axis and the hadron momentum, the Distance of Closest Approach (DCA) between the two tracks, the cosine of the 2D pointing angle between momentum vector and position vector of hadron and decay length calculated by assuming the \ensuremath{K^0_S}\ vertex is the midpoint of DCA between the two charged tracks. In addition, the following information from both the charged pion daughters are included: the $p_{T}$ fraction compared to the \ensuremath{K^0_S}, the significance of the transverse impact parameter, and of the longitudinal impact parameter. For training the second BDT, the signal is defined as jets matched to an s-quark from the $t \to sW$ decay with a matching \ensuremath{K^0_S}\ whose momentum fraction, x $>$ 0.15 and the background is all other jets with \ensuremath{K^0_S}\ with x $>$ 0.15. The result from the s-jet discriminating BDT training is shown in Figure~\ref{fig:BDT2A} and Figure~\ref{fig:BDT2C} shows the s-jet discriminating BDT for the primary jets on the signal and background processes. For the final $s$ jet selection, we first reject primary jets that are $b$ tagged in each event. If both the primary jets are $b$ tagged, the event is vetoed from the analysis. Of the remaining primary jets in each event, we select the jet with the highest $s$ tagging BDT output. Figure~\ref{fig:BDT2C} shows the final $s$ tagging BDT output for signal and background events. \section{Results} The expected number of signal \ttbar and background events is given by \begin{equation} \label{eq:Nexp:1} N_{sig} = \sigma(t\bar{t}) \times \mathcal{L} \times \mathcal{B}(t\bar{t} \to ql^{+}\bar{\nu}\bar{q}l^{-}\nu) \times 2|V_{ts}|^{2}|V_{tb}|^{2} \times \epsilon_{sig} \end{equation} \begin{equation} \label{eq:Nexp:2} N_{bkg} = \sigma(t\bar{t}) \times \mathcal{L} \times \mathcal{B}(t\bar{t} \to ql^{+}\bar{\nu}\bar{q}l^{-}\nu) \times |V_{tb}|^{4} \times \epsilon_{bkg} + N_{DY} \end{equation} where $\sigma(\ttbar)$ is the cross-section, $\mathcal{L}$ is a integrated luminosity, $\mathcal{B}(\ttbar \to ql^{+}\bar{\nu}\bar{q}l^{-}\nu)$ is the branching ratio of dileptonic decay mode, $\epsilon_{sig}$ ($\epsilon_{bkg}$) is the selection efficiency after the selections described in the previous section for the signal $sWbW$ (background $bWbW$) \ttbar sample, and $N_{DY}$ is the expected number of Drell-Yan background events. \begin{figure}[th!] \centering \includegraphics[width=.5\textwidth,trim=0 0 0 0,clip]{plots/Result/RooStats/20211111/mu1_twoside/BDT_All_Lumi_137_61fb_pvalue_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_mu1_CL0.95.pdf} \caption{The gaussian fit of the BDT output distribution. The background distribution is shown in blue and the signal in red. The background includes Drell-Yan events with 2 jets as well as $t\bar{t} \to bWbW$ and the signal is $t\bar{t} \to sWbW$ } \label{fig:gaussianfit} \end{figure} \begin{figure*}[th!] \includegraphics[width=.5\textwidth,trim=0 0 0 0,clip]{plots/Result/RooStats/20211111/LocalP0.pdf} \includegraphics[width=.5\textwidth,trim=0 0 0 0,clip]{plots/Result/RooStats/20211111/BrazilianBand.pdf} \caption{Local p$_{0}$ value (Left) and 95\% Confidence Level (CL) upper limit on signal strength $\mu$ (Right) for each integrated luminosity. In the left plot, p$_{0}$ is p value for background-only PDF and the red dashed line is on $\mu = 1$. 
Black dots in the right plot are $\mu$ corresponding to CL of 95\% at 137 (Run2), 300 (expected for Run3), 600, 1200, 2000 and 3000 (expected for HL-HLC) fb$^{-1}$. } \label{fig:Limit} \end{figure*} \begin{figure*}[th!] \includegraphics[width=.5\textwidth,trim=0 0 0 0,clip]{plots/Result/RooStats/20211111/mu1_twoside/All_Lumi_137_61fb_pvalue_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_mu1_CL0.95.pdf} \includegraphics[width=.5\textwidth,trim=0 0 0 0,clip]{plots/Result/RooStats/20211111/mu1_twoside/All_Lumi_300_0fb_pvalue_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_mu1_CL0.95.pdf} \includegraphics[width=.5\textwidth,trim=0 0 0 0,clip]{plots/Result/RooStats/20211111/mu1_twoside/All_Lumi_1200_0fb_pvalue_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_mu1_CL0.95.pdf} \includegraphics[width=.5\textwidth,trim=0 0 0 0,clip]{plots/Result/RooStats/20211111/mu1_twoside/All_Lumi_3000_0fb_pvalue_Background_BDTD__step4_nJetNo_nBJetNo_topRecoNo_def_mu1_CL0.95.pdf} \caption{$p$-value at the Run2 luminosity (top left), the expected luminosity from Run3 (top right), 1200 fb$^{-1}$ (bottom left) and 3000~fb$^{-1}$\ the design luminosity of HL-LHC (bottom right). The dotted line is the median of the expected CL$_{s+b}$, the green (yellow) band is $\pm 1\sigma$ ($\pm 2\sigma$) range.} \label{fig:CLsb} \end{figure*} To study the feasibility of a direct measurement of $\ensuremath{|\vts|}$, we perform an analysis to find the expected significance to reject the hypothesis $H_0$ of no $t \to sW$ decays, the expected upper limit on $\ensuremath{|\vts|}$ and also an expected confidence interval $\ensuremath{|\vts|}$. We use \textsc{RooFit}~\cite{Verkerke_2003} and \textsc{RooStats}~\cite{Moneta_2010} to perform the statistical analysis using the \textsc{RooStats} asymptotic calculator based on the asymptotic properties of likelihood function~\cite{Cowan_2011}. We fit the $s$ jet tagging BDT distributions from Figure~\ref{fig:BDT2C} to obtain approximations of the probability density functions (PDFs) for further study. The PDFs for the signal and background are separately modeled by a sum of four Gaussian distributions and the fitted PDFs are shown in Figure~\ref{fig:gaussianfit}. The BDT distribution in this figure has a different definition of signal and background from the BDT distribution of jet discriminator. The BDT distribution shown in the previous section defines the signal (background) as jets matched to generator level s (b) quarks, while the distributions here define the signal as the jet with the highest BDT output in an event from $t\bar{t} \to sWbW$ signal events and the background as the jet with the highest output from $t\bar{t} \to bWbW$ and Drell-Yan events. The total model is thus \begin{equation} \label{eq:PDF} P_{s+b} = \mu \times N_{sig} \times P_{sig} + \nu \times N_{bkg} \times P_{bkg} \end{equation} where $N_{sig}$ is the number of $\ttbar \to sWbW$ events expected by the SM, corrected for the selection efficiency and $\mu$ is the signal strength relative to the SM $\frac{|V_{ts}^{obs}|^2}{|V_{ts}^{CKM}|^2}$, where $|V_{ts}^{CKM}| = 39.78 \times 10^{-3}$~\cite{PDG}, and $\nu$ is a nuisance parameter to control the background level relative to the expected background. From the model PDF, we generate an Asimov dataset which we use as the observed dataset for the following studies. Figure~\ref{fig:Limit} and Figure~\ref{fig:CLsb} show the results of one-sided and two-sided scanning using the gaussian fitting for several integrated luminosities. 
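The normalizations $N_{sig}$ and $N_{bkg}$ entering this model follow directly from Eq.~(\ref{eq:Nexp:1}) and Eq.~(\ref{eq:Nexp:2}). A minimal sketch of this bookkeeping is given below; it is illustrative only, and the efficiencies, the dileptonic branching fraction and the Drell-Yan yield appearing as arguments are placeholders rather than the values used in this analysis.
\begin{verbatim}
def expected_yields(lumi_fb, br_ll, eps_sig, eps_bkg, n_dy,
                    xsec_tt_pb=831.76, v_ts=39.78e-3, v_tb=1.0):
    """Expected signal (t->sW) and background (t->bW plus Drell-Yan) yields."""
    n_dilep = xsec_tt_pb * (lumi_fb * 1.0e3) * br_ll   # 1 fb^-1 = 1000 pb^-1
    n_sig = n_dilep * 2.0 * v_ts**2 * v_tb**2 * eps_sig
    n_bkg = n_dilep * v_tb**4 * eps_bkg + n_dy
    return n_sig, n_bkg
\end{verbatim}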
The left plot in Figure~\ref{fig:Limit} shows a median expected local p$_{0}$ under assumption of $H_{0}$ ($|V_{ts}|^2=0$) versus the true signal strength $\mu$. For $\mu = 1$, corresponding to the current SM expectation, the significance of rejecting $H_0$ is expected to be more than 5$\sigma$ when the integrated luminosity is 2000 fb$^{-1}$ and greater than 6$\sigma$ for 3000~fb$^{-1}$, HL-LHC designed integrated luminosity. The right plot in Figure~\ref{fig:Limit} shows the median expected upper limit on $\mu$ under the assumption of $H_0$ at the 95\% confidence level (CL) for each luminosity as the dashed line and the $\pm 1(2) \sigma$ range as the green (yellow) band. Figure~\ref{fig:CLsb} shows the expected two-sided p-value distribution for the signal strength $\mu$ under the assumption of $\mu=1$. Table~\ref{tab:upperlimit} summarizes several results shown above: the expected significance to exclude $\ensuremath{|\vts|}=0$, the median 95 \% CL upper limit if $\mu=0$ and the 95\% CL for each luminosity. Under the assumption of $\mu=1$, the expected significance from the hypothesis test calculation shows a 1.36$\sigma$ significance to reject the background-only hypothesis for the integrated luminosity of the Run2. The value becomes 2.00$\sigma$ when the integrated luminosity is 300~fb$^{-1}$, as is expected to be collected during Run 3 of the LHC, and 6.34$\sigma$ for the full HL-LHC integrated luminosity of 3000~fb$^{-1}$. Conversely, if the decay is suppressed, and $\mu=0$, then Run 3 of the LHC will be able to exclude $\mu=1$ at the 95\% CL level. \begin{table*}[tb] \centering \begin{tabular}{|c|c|c|c|} \hline Integrated luminosity (fb$^{-1}$) & \vtop{\hbox{\strut Expected significance ($\sigma$)} \hbox{\strut for $\ensuremath{|\vts|}$ = 0 exclusion}} & \vtop{\hbox{\strut Expected 95\% CL}\hbox{\strut median upper limit ($\mu$)}} & \vtop{\hbox{\strut Expected 95\% CL$_{s+b}$} \hbox{\strut median interval ($\mu$)}} \\ \hline 137.6 & 1.36 & $<$ 1.22 & [0.000, 2.30] \\ 300 & 2.00 & $<$ 0.822 & [0.0210, 1.98] \\ 600 & 2.83 & $<$ 0.582 & [0.307, 1.70] \\ 1200 & 4.01 & $<$ 0.411 & [0.509, 1.49] \\ 2000 & 5.17 & $<$ 0.319 & [0.619, 1.38] \\ 3000 & 6.34 & $<$ 0.262 & [0.689, 1.31] \\ \hline \end{tabular} \caption{\label{tab:upperlimit} The expected significance to exclude $|V_{ts}|$ = 0, the expected 95\% CL median upper limit under the assumption of $|V_{ts}|=0$ and the expected 95\% CL$_{s+b}$ median confidence interval on $\mu = \frac{|V_{ts}^{obs}|^2}{|V_{ts}^{CKM}|^2}$ for several integrated luminosities. } \end{table*} \section{Conclusion} We have studied the feasibility of a direct measurement of $|V_{ts}|$ from the dileptonic \ttbar production process, using hadronization by \textsc{Pythia8}\ and \textsc{Delphes}\ detector simulation to produce a more realistic expectation of the results of the LHC experiments. With an integrated luminosity of 3000~fb$^{-1}$, which is expected to be achieved at the HL-LHC period, $\ensuremath{|\vts|}$ = 0 can be excluded above the 6$\sigma$ significance level. \section{Acknowledgments} This article was supported by the computing resources of the GDSC at the Korea Institute of Science and Technology Information. W.J. is supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) (2018R1C1B6005826). J.L. is supported by the NRF grant funded by the MSIT (2019R1C1C1009200). I.W. is supported by the Brain Pool Program through the NRF funded by the MSIT (2017H1D3A1A01052807). I.P. 
is supported by the Basic Science Research Program through the NRF funded by the Ministry of Education (2018R1A6A1A06024977).
\section{Introduction} Semantic segmentation is a fundamental and challenging problem of computer vision, whose goal is to assign a semantic category to each pixel of the image. It is critical for various tasks such as autonomous driving, image editing, and robot sensing. Recently, with the rapid development of deep learning, fully convolutional~\cite{long2015fully} based methods have been proposed to address the above task. These methods have achieved significant performance on various benchmarks~\cite{ADE20K, Cordts2016Cityscapes, everingham2015pascal, mottaghi2014role}. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{sp3.PNG} \end{center} \setlength{\belowcaptionskip}{0.2cm} \setlength{\abovecaptionskip}{0cm} \caption{\textbf{(a):} An image from the ADE20K dataset. \textbf{(b):} Superpixels, which aggregate similar pixels and group them into blocks of different sizes and shapes according to low-level features such as the color, texture, and pixel location of the image. \textbf{(c):} Segmentation output from DeepLabV3+. \textbf{(d):} Refined segmentation by our method.} \label{fig:sp} \end{figure} However, many researchers have also noticed that the segmented boundaries are often of low quality. Therefore, many efforts have been devoted to improving the performance of boundary segmentation. DenseCRF~\cite{krahenbuhl2012efficient}, as a classic method, uses the color and position relationships of the original pixels of the image to refine segmentation results with poor boundaries~\cite{chen2014semantic,chen2017deeplab,zheng2015conditional}. However, DenseCRF usually serves as a post-processing module, which means it is not closely integrated with the CNN structure and has a weak effect on optimizing the feature representation of edge points. Afterward, some works~\cite{wang2018non,huang2019ccnet} leverage the attention mechanism on high-level features to construct more reliable context information between the edge points and the internal points of the object. Some other works~\cite{bertasius2016semantic, takikawa2019gated, yuan2020segfix} exploit as much boundary information as possible via deep neural networks. For example, SegFix~\cite{yuan2020segfix} encodes the relative distance information of the boundary pixels with a boundary map and a direction map, correcting the wrongly classified boundary pixels via internal points with high confidence. However, the accuracy of the weights or relationships obtained by these methods closely depends on the purity of the high-level features, which is exactly what edge feature points lack: boundary features usually contain information from multiple objects due to the large receptive field of CNNs. Therefore, the performance improvement brought by these methods is limited. Superpixels are an over-segmentation of an image that is formed by grouping image pixels based on basic characteristics of the image, such as color, texture, and pixel position. They provide a perceptually meaningful tessellation of image content, thereby reducing the number of image primitives. Due to their representational and computational efficiency, superpixels have turned into an accepted low/mid-level image representation. Many previous works~\cite{shu2013improving, yan2015object, zhu2014saliency} have exploited this advantage and achieved good results. On the other hand, the generated pixel blocks usually contain sharp edges, as shown in Fig.~\ref{fig:sp} (b), which is also worth exploring for downstream tasks like semantic segmentation.
Thus, in this paper, we propose a simple but effective superpixel guided message passing method to correct wrongly segmented boundaries with the help of its sharp boundary. Further, inspired by the multiscale image pyramid, we design a multiscale superpixel information passing module (MSP), enabling multiple sharper edges and farther spatial dependence by a cascade of different scales superpixel blocks. These multiscale superpixel blocks are utilized to replace the high-level features to guide message passing within feature map. Simultaneously, the sharp boundaries of the blocks also restrict the message passing scope, making neighboring boundary features acquire messages from different block sides. Finally, extensive evaluations of our multiscale superpixel algorithm on ADE20K, Cityscapes, PASCAL VOC, and PASCAL Context datasets are conducted to demonstrate its effectiveness and generalizability. A pair of segmentation visualization contrast can be found in Fig.~\ref{fig:sp} (c) and (d). The main contributions of this work are summarized as follows: $\bullet$ We propose a simple but useful algorithm to refine semantic segmentation boundaries by superpixel because of its sharp edges and local consistency in a large area. $\bullet$ A multiscale superpixel module is designed to obtain sharper edges and farther spatial dependence. $\bullet$ Our method has obtained general improvement in semantic segmentation on three outstanding networks and four widely used scene parsing datasets. \section{Related Works} \textbf{Semantic Segmentation.} Driven by powerful deep neural networks \cite{krizhevsky2012imagenet, simonyan2014very, szegedy2015going}, pixel-level prediction tasks like scene parsing and semantic segmentation achieve great progress inspired by replacing the fully-connected layer in classification with the convolution layer \cite{long2015fully}. To enlarge the receptive field of neural networks, several model variants are proposed. For example, GCN~\cite{peng2017large} adopts decoupling of large kernel convolution to gain a large receptive field for the feature map and capture long-range information. DeeplabV3 \cite{chen2017rethinking} extends ASPP with image-level feature to further capture global contexts. DeeplabV3+ \cite{chen2018encoder} adds a decoder upon DeeplabV3 to refine the segmentation results, especially along object boundaries. The success of self-attention mechanism in natural language processing attracts lots of researchers' attention. DANet\cite{danet} applies both spatial and channel attention to gather information around the feature maps, which costs even more computation and memory than the Nonlocal\cite{nonlocal} method. EMANet\cite{li19} integrates Expectation-Maximization (EM) algorithm to CNN to estimate attention map and reconstruct feature map while saving computing resources. \textbf{Superpixel.} Superpixel is pixels with similar characteristics that are grouped together to form a larger block. Since its introduction in 2003~\cite{ren2003learning}, there have been many pretty excellent algorithms\cite{achanta2012slic,weikersdorfer2013depth,van2012seeds} and mature evaluation metrics such as Undersegmentation Error. Moreover, publicly available superpixel algorithms have turned into standard tools in low-level vision. \cite{stutz2018superpixels} conducts fair analysis and evaluation of $28$ superpixel algorithms on $5$ datasets. Recently, in \cite{jampani2018superpixel}, neural network is applied to the generation of superpixel and great results are achieved. 
Superpixels have been applied in target detection \cite{shu2013improving,yan2015object}, semantic segmentation \cite{gould2008multi,sharma2014recursive,gadde2016superpixel}, saliency estimation \cite{he2015supercnn,perazzi2012saliency,yang2013saliency,zhu2014saliency}. \cite{yan2015object} converts object detection problem into superpixel labeling problem and conducts an energy function considering appearance, spatial context and numbers of labels. \cite{gadde2016superpixel} uses superpixels to change how information is stored in the higher level of a CNN. In \cite{he2015supercnn}, superpixels are taken as input and contextual information is recovered among superpixels, which enables large context to be involved in the analysis efficiently. {\bf Refinement for Segmentation.} Previous work~\cite{zheng2015conditional,lin2017refinenet,chen2017deeplab} improved their segmentation results by DenseCRF~\cite{krahenbuhl2012efficient}. However, the low confidence score of the unary potential in boundary leads to a weak improvement of the object boundary segmentation, even with the help of pairwise potential. Recent works~\cite{acuna2019devil, chen2019learning} extended the conventional level-set scheme to deep network for regularizing the boundaries of predicted segmentation map. Other studies~\cite{bertasius2016semantic, ding2019boundary, ke2018adaptive, takikawa2019gated, yuan2020segfix} also exploit the boundary information to improve the segmentation. These works aim at correct classification of edge pixels, by utilizing high-level features to directly or indirectly guide message passing for reliable boundary feature representations. For example, SegFix~\cite{yuan2020segfix} encodes the relative distance information of the boundary pixels with a boundary map and a direction map to correct the wrong boundary pixels. In this paper, we propose to refine segmentation via using superpixel blocks to guide the passing of information between features. Further, we design a plug-and-play multiscale superpixel module named MSP for sharper edges and longer dependency. Our method has been embedded in three famous semantic segmentation networks and evaluated on four challenging datasets. General gains brought by our method show its great potential. \section{Approach} In this section, we present our simple but effective method named multiscale superpixel module (MSP). Before that, we firstly give a detailed description of our single scale superpixel guided message passing model (SSP). Afterward, we introduce the multiscale superpixel model. Finally, we give an example of a combination with DeeplabV3+~\cite{chen2018encoder} that is considered as our baseline in most experiments and analyze the advantages of our proposed method. \subsection{Superpixel Guided Message Passing} We use the superpixel segmentation algorithm to divide an image into hundreds of pixel blocks. These blocks define their respective scope of message passing. The $i$-th pixel block is denoted as $p_{i}$. In an image, all generated pixel blocks belong to the set $P$. $K$ is the total number of elements in the set $P$. The entire information passing process consists of two steps: 1) computing averaged feature inside a superpixel; 2) adding it back to each pixel feature. Here, superpixel plays a role of a mask. More specifically, for the $i$-th superpixel block $p_{i}$, our approach averages the features inside the $p_{i}$ superpixel and adds the mean value $\bar{x_{i}}$ back to each feature vector covered by superpixel $p_{i}$. 
In order to obtain the mean feature map $\bar{X}$ of the entire feature map $X$, it is necessary to enumerate all pixel blocks from $1$ to $K$. Finally, $\bar{X}$ is weighted to the original $X$ to realize the message passing. The whole algorithm can also be formulated by the following formula: \begin{equation} \bar{X}=\sum_{i=1}^{K} {S(P_{i},X)\over{N(P_{i})}} \,, \end{equation} \begin{equation} {S(P_{i},X)=\sum P_i\times X} \,, \end{equation} and \begin{equation} {X^{*}=X+{\bf \alpha}\bar{X}} \,. \end{equation} $P_{i}$ is a binary map with the same size as $X$, and the value in the $p_{i}$ pixel block area is set to $1$ (otherwise $0$). The function $S(, )$ sums feature value in $X$ along spacial dimension where the location is provided by $P_i$. The function $N(\cdot)$ calculates the area of the $pi$ pixel block. {\bf $\alpha$} is the weighting coefficient that is set to $0.1$ by default. Thus, superpixel blocks, as low-level features, successfully guide the fusion of high-level information. This method is not only simple but also effective for the improvement of the boundary because the superpixel blocks with sharp boundaries allow adjacent edges from two or more objects to receive the information from respective inside pixel blocks. This greatly boosts the discrimination of boundary features, especially for objects with obvious differences in characteristics. In order to utilize the information of more features in a pixel block and avoid the excessive influence of a single feature vector, we average the features inside a pixel block and add it back to the original features with appropriate weight. In this way, each vector on the original feature map acquires the mean feature information within the pixel block where the vector is located. Note that this method introduces no convolution structure but still guarantees the backpropagation. The entire message passing process is presented in Fig. \ref{fig:message passing}. In the experimental part, this method has achieved good results beyond the baseline, which proves its effectiveness. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{fuse_long.png} \end{center} \caption{Superpixel Guided Message Passing. A superpixel block is first mapped to a corresponding area in the feature map, and then the averaged feature in that area is regarded as a passing message and added back to the area.} \label{fig:message passing} \end{figure} \subsection{Multiscale Superpixel Module} We consider that the single scale superpixel module (SSP) may lack diversity. And multiscale manner used to be a general solution that provides more details and broader context information. Therefore, we refer to the multiscale pyramid model and accordingly construct a multiscale superpixel model (MSP) in the process of information fusion. However, the key point where the model we design changes is not the size of the image but the number $\lambda$ of superpixel blocks in an image that is divided. It is obvious that the shape of superpixel is irregular, which cannot be standardized and uniformly described. For the convenience of description, we define {\bf $\lambda$} as the scale of superpixels. It is worth noticing that the larger the $\lambda$ is, the greater the number of superpixel blocks in an image is, and the smaller the area of each pixel block is. 
In other words, a pixel block in an image may have a certain overlap with several pixel blocks on the other superpixel scales of that image, or it may be a part of a pixel block on another smaller scale, as shown in Fig.~\ref{fig:multiscale}. As a result, the multiscale model makes a wider range of cross-fusion of features available. In terms of the specific implementation, the multiscale model is formed by cascading single scale models. This keeps the information passing at every single scale independent of the others, thereby avoiding the problem of message confusion between multiple scales in a parallel fashion. In practice, the cascade fashion is easy to implement. Moreover, when cascading, the superpixel scales are ordered from small to large, such as 100, 200, and 300. In other words, we first conduct message passing within large superpixel areas, followed by smaller ones. Such a sequence guarantees that the information maintains longer-range dependence, as small-scale superpixels often overlap with large-scale ones. In order to present our multiscale model more clearly and intuitively, we use a simple formula to illustrate it. Firstly, we define the single scale model as the function $F(X,\lambda,I)$, and $X^{*}$ is its output. So we can obtain the following formula: \begin{equation} {X^{*}=F(X,\lambda,I)} \,, \end{equation} where $X$ is the original input feature, $\lambda$ (superpixel scale) is the number of superpixel blocks in a picture, and $I$ is the raw RGB image that is utilized to generate the superpixels. Then we present the formulas for a multiscale model based on the cascade of single scale models: \begin{equation} {X^{*}_{0}=F(X,\lambda_{0},I)}\,, \end{equation} \begin{equation} {X^{*}_{1}=F(X^{*}_{0},\lambda_{1},I)}\,, \end{equation} \begin{equation} \label{fo} {X^{*}_{n-1}=F(X^{*}_{n-2},\lambda_{n-1},I)}\,. \end{equation} \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{multiscale_2.png} \end{center} \caption{Illustration of multiscale superpixel overlap. Due to the different sizes of pixel blocks at different scales, some areas of pixel blocks from different scales may overlap to some extent.} \label{fig:multiscale} \end{figure} As is indicated in the above formulas, the output of the previous stage is used as the input for the next stage. Finally, it is necessary to clarify the meaning of the parameters in formula~(\ref{fo}) as well as an important rule. $n$ is the number of cascaded single scale models. $\lambda_{i}$ is the number of superpixel blocks at the $i$-th scale. As we defined before, the scale here is not the size but the number of superpixel blocks. Note that $\lambda_{i}>\lambda_{i-1}$. The performance of the multiscale superpixel model exceeds that of a single scale in the experiments on ADE20K below, which proves the effectiveness of the multiscale model and further demonstrates the superiority of superpixels in guiding the message passing of high-level features. \subsection{Network Architecture} In this paper, we use DeeplabV3+ \cite{chen2018encoder} as the baseline of our experiments. The superpixel message passing is carried out after the depthwise separable convolution layers of DeeplabV3+, which is before the classifier. The overall structure is shown in Fig. \ref{fig:short}, where the superpixel model we design is plug-and-play and can be easily embedded in the network for end-to-end training and testing.
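To make the message passing described above concrete, the single scale module and its cascaded multiscale extension can be sketched in a few lines of PyTorch. The snippet is an illustrative re-implementation rather than our training code; it assumes a single image and that the superpixel label map has already been downsampled to the spatial resolution of the feature map.
\begin{verbatim}
import torch

def ssp(x, labels, alpha=0.1):
    """Single scale superpixel message passing.
    x: (C, H, W) feature map; labels: (H, W) integer superpixel ids in [0, K)."""
    C, H, W = x.shape
    flat = x.reshape(C, -1)                              # (C, H*W)
    idx = labels.reshape(-1)                             # (H*W,)
    K = int(idx.max()) + 1
    sums = torch.zeros(C, K, device=x.device, dtype=x.dtype)
    sums.index_add_(1, idx, flat)                        # per-block feature sums
    counts = torch.zeros(K, device=x.device, dtype=x.dtype)
    counts.index_add_(0, idx, torch.ones_like(idx, dtype=x.dtype))
    means = sums / counts.clamp(min=1)                   # per-block mean features
    return x + alpha * means[:, idx].reshape(C, H, W)    # X* = X + alpha * mean(X)

def msp(x, label_maps, alpha=0.1):
    """Multiscale module: cascade SSP over label maps ordered by increasing lambda."""
    for labels in label_maps:    # e.g. maps for lambda = 200, 300, 400
        x = ssp(x, labels, alpha)
    return x
\end{verbatim}
Since the module only averages and re-broadcasts existing features, it introduces no additional learnable parameters.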
Besides, as is described above, our method is simple to implement but the performances in the next section show its effectiveness and generalizability thanks to the sharp edge of superpixel blocks. Finally, since no additional convolution structures are introduced, the parameters of the network model are not increased. \section{Experiment} \begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{Picture1.png} \end{center} \caption{An overview of our network based on DeeplabV3+.} \label{fig:short} \end{figure*} To evaluate the proposed method, we conduct extensive experiments on three outstanding neural networks and four widely used sense parsing datasets. In this section, we first introduce implementation details followed by the comparisons with our baselines. Then we perform ablation studies to verify the superiority of the proposed method on the ADE20K dataset. Besides, we compare our method with DenseCRF and SegFix both on mIoU and F-score. Finally, we report our results on the Cityscapes dataset, PASCAL VOC dataset, and PASCAL Context dataset. \subsection{Implementation Details} All our experiments are based on MMSegmentation~\cite{mmseg2020}. We use ResNet~\cite{7780459} (pretrained on ImageNet~\cite{russakovsky2015imagenet}) as our backbone. The output stride of the backbone is set to 8. Following prior works~\cite{chen2017rethinking,zhao2017pyramid}, we employ a poly learning rate strategy where the initial learning rate is multiplied by ${(1 -iter/total iter)}^{0.9}$ after each iteration, which is set to $80000$ as the maximum number of iterations in all experiments. The initial learning rate is set to be 0.01 for all datasets. Momentum and weight decay coefficients are set to 0.9 and 0.0005, respectively. For data augmentation, we apply the common scale (0.5 to 2.0), cropping and flipping of the image to augment the training data. The synchronized batch normalization is adopted in all experiments, together with the multi-grid~\cite{chen2017rethinking}. Input size for ADE20K dataset is set to $512\times512$, while input size for Cityscapes dataset is set to $769\times769$. For PASCAL VOC and PASCAL Context, the input size is set to $512\times512$ and $480\times480$ respectively. The batch size on ADE20K, PASCAL VOC, and PASCAL Context is set to 16 and Cityscapes is set to 8 due to the limited calculation resource. We train 40K iterations on PASCAL VOC and 80K iterations on ADE20K, Cityscapes, PASCAL Context. \subsection{Comparisons with Baselines on ADE20K} In order to prove the effectiveness of the proposed method, we compare with DeeplabV3+~\cite{chen2018encoder} on the validation set of ADE20K. We report the mIoU of each network structure on different backbones in Tab.~\ref{Tab 1}. It is shown that the network structures equipped with our superpixel-based method have achieved excellent performances compared with the original ones. More specifically, our single scale method based on DeeplabV3+ with backbone ResNet101 achieves 45.47\% in mIoU, and outperforms the original one by 0.56\%. The multiscale solution with ResNet-50 and ResNet-101 achieves 43.93\% and 45.81\% respectively in mIoU, and outperforms the single scale solution by 0.61\% and 0.34\% respectively. A multiscale manner can compensate for the omission of information caused by a single scale, and this fashion can simultaneously capture longer and more effective dependency as well. Some visualization results compared with baseline are shown in Fig.~\ref{fig:result}. 
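The only extra input required by our module is the superpixel label map of the raw image at each scale $\lambda$. As a purely illustrative sketch (not the exact pipeline used for the numbers reported here), such maps could be generated with the SLIC implementation in scikit-image, one of the algorithms compared in the ablation below; the parameter settings and the simple strided downsampling to the feature resolution are assumptions of this sketch.
\begin{verbatim}
import numpy as np
from skimage.segmentation import slic

def superpixel_labels(rgb_image, lam, feat_h, feat_w):
    """Label map with roughly `lam` superpixels, resampled to the feature-map size."""
    labels = slic(rgb_image, n_segments=lam, compactness=10, start_label=0)
    ys = np.arange(feat_h) * labels.shape[0] // feat_h
    xs = np.arange(feat_w) * labels.shape[1] // feat_w
    return labels[np.ix_(ys, xs)]
\end{verbatim}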
\begin{table}[t] \begin{center} \begin{tabular}{c|c|c c|c} \hline Method & Backbone & SSP & MSP & mIoU(\%) \\ \hline\hline DeeplabV3+ & ResNet50 & & & 42.91\\ DeeplabV3+ & ResNet50 & \checkmark & &43.32\\ DeeplabV3+ & ResNet50 & & \checkmark &{\bf 43.93}\\ \hline DeeplabV3+ & ResNet101 & & & 44.91\\ DeeplabV3+ & ResNet101 & \checkmark & &45.47\\ DeeplabV3+ & ResNet101 & & \checkmark &{\bf 45.81}\\ \hline \end{tabular} \end{center} \setlength{\abovecaptionskip}{0.2cm} \caption{{\bf SSP:} Single scale superpixel Module. {\bf MSP:} Multiscale superpixel Module. The multiscale method works better than the single scale approach, and shows great potential. We believe that the reason is that the multiscale method contains more useful information and larger spatial dependencies. In the single scale model, $\lambda$ is set to 200, while in the multiscale model, $\lambda$ is set to 200, 300, and 400 respectively.} \label{Tab 2} \end{table} \subsection{Ablation Studies on ADE20K} The ADE20K dataset is one of the most challenging benchmarks, which contains 150 classes and a variety of scenes with 1,038 image-level labels. We follow the official protocol to split the whole dataset. Like most previous works, we use the mean of Intersection over Union (mIoU) for evaluation. Single scale images are adopted as input for testing by default if not specified. For ablation experiments, we adopt ResNet-50 and ResNet-101 as our backbones. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\linewidth]{result_4.PNG} \end{center} \setlength{\abovecaptionskip}{0cm} \caption{Qualitative comparisons between our method and baseline on ADE20K validation set. It can be seen that our method can better segment objects with consistent textures or colors, such as the pillars and sofas in the first row. Moreover, it can also smooth the edges better, such as the sofa and coffee table in the second row. These all result from the local similarity of the low-level features of superpixels and the sharp edges between pixel blocks.} \label{fig:result} \end{figure*} \subsubsection{Different Superpixel Algorithms} To explore how different algorithms affect the segmentation performance, we have conducted an ablation study on three different kinds of algorithms. They are density-based Quick Shift (QS)~\cite{vedaldi2008quick}, clustering-based SLIC~\cite{achanta2012slic}, and CNN-based SSN~\cite{jampani2018superpixel}. We adopt single scale superpixel guided message passing method and set $\lambda$ to 200 for all the experiments. DeeplabV3+ with backbone ResNet-50 is served as our baseline. The result is reported in Tab.~\ref{Tab al}. Though SSN achieves the best performance, it needs extra training time. On the other hand, SLIC, as an unsupervised clustering-based superpixel method, also performs well. Thus, we use SLIC for a fast implementation. \setlength{\tabcolsep}{0.5cm}{\begin{table}[t] \begin{center} \begin{tabular}{c c c|c} \hline QS & SLIC & SSN & mIoU(\%) \\ \hline\hline & & & 42.91\\ \hline \checkmark & & & 43.19\\ & \checkmark & & 43.32\\ & & \checkmark & 43.47\\ \hline \end{tabular} \end{center} \setlength{\abovecaptionskip}{0.2cm} \caption{Comparisons with different superpixel algorithms.} \label{Tab al} \end{table} \subsubsection{Different Numbers of Superpixel and Scales} In order to explore the internal relationship between segmentation performance and superpixel property, we have conducted abundant experiments on different numbers of scales and combinations of different superpixel numbers. 
\subsubsection{Different Numbers of Superpixel and Scales} To explore the internal relationship between segmentation performance and superpixel properties, we conduct extensive experiments on different numbers of scales and combinations of superpixel numbers. As the two ablation studies are closely related, we present them together for better comparison. The results are reported in Tab.~\ref{Tab scale}. As indicated in the table, the multiscale setting with $\lambda$ set to 200, 300, and 400 performs best, striking a good balance between long-range dependency and superpixel quality. \setlength{\tabcolsep}{0.25cm}{\begin{table}[h] \begin{center} \begin{tabular}{c|c c c c c|c} \hline Scale & 100 & 200& 300 & 400 & 500 &mIoU(\%) \\ \hline\hline 0 & & & & && 42.91\\ \hline \multirow{5}{*}{1} & \checkmark & & & &&42.84 \\ & & \checkmark & & &&43.32\\ & & & \checkmark & &&43.39\\ & & & & \checkmark &&43.52\\ & & & & &\checkmark&42.59\\ \hline \multirow{3}{*}{2} & \checkmark &\checkmark & & &&43.12 \\ & & \checkmark & \checkmark & &&43.48\\ & & & \checkmark & \checkmark&&43.74 \\ \hline \multirow{2}{*}{3} & \checkmark & \checkmark & \checkmark& &&43.36 \\ & & \checkmark &\checkmark & \checkmark && {\bf 43.93}\\ \hline 4 & \checkmark &\checkmark & \checkmark& \checkmark &&43.56\\ \hline \end{tabular} \end{center} \setlength{\belowcaptionskip}{-0.1cm} \setlength{\abovecaptionskip}{0.2cm} \caption{Comparisons on the ADE20K val set with DeeplabV3+ (ResNet-50) as the baseline. Since the performance drop reaches {\bf 0.32\%} when $\lambda$ is set to $500$, we no longer take $\lambda=500$ into account for the multiscale setting.} \label{Tab scale} \end{table}} \setlength{\tabcolsep}{0.2cm}{\begin{table}[t] \begin{center} \begin{tabular}{c c c|c c} \hline DenseCRF & SegFix & MSP & mIoU(\%) & F-score \\ \hline\hline & & & 44.91& 20.23\\ \checkmark & & & 45.43 & 21.86 \\ & \checkmark & & 45.62 & 22.19\\ & & \checkmark &{\bf 46.61} & 22.34\\ \hline \checkmark & & \checkmark& 46.62 & 23.26\\ & \checkmark & \checkmark&{\bf 47.05}& 24.01\\ \checkmark & \checkmark & \checkmark & 47.00 & 24.65\\ \hline \end{tabular} \end{center} \setlength{\abovecaptionskip}{0.2cm} \caption{Comparisons with DenseCRF and SegFix on the ADE20K validation set. We take DeeplabV3+ (ResNet-101) as the baseline. The post-processing methods DenseCRF and SegFix bring slightly lower gains than our multiscale superpixel solution. When the multiscale superpixel method is combined with SegFix, the mIoU on the ADE20K validation set reaches {\bf 47.05\%}.} \label{Tab 3} \end{table}} \subsection{Comparisons with DenseCRF and SegFix} During message passing, the sharp edges of the superpixel blocks ensure that boundary features on different sides of an edge receive information only from their own superpixel blocks. In other words, superpixels greatly enhance the separability of the boundary thanks to their sharp edges. In this section, we compare our method with some edge optimization algorithms, mainly DenseCRF~\cite{krahenbuhl2012efficient} and SegFix~\cite{yuan2020segfix}. We keep the same DenseCRF setting as Deeplab~\cite{chen2014semantic} and fine-tune its parameters for better performance. As for SegFix, we follow the training strategy in~\cite{yuan2020segfix} and train SegFix for 80,000 iterations on the ADE20K training set. Then, for fair comparison, the two post-processing algorithms are applied to refine the predictions of DeeplabV3+ on the ADE20K validation set, in which our method is embedded. In addition, we also combine the two algorithms with our method to further improve the segmentation on the ADE20K validation set. The results are reported in Tab.~\ref{Tab 3}.
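For completeness, the DenseCRF post-processing used for comparison can be sketched with the \texttt{pydensecrf} package as follows. This is a minimal sketch: the pairwise kernel parameters shown are common defaults rather than the exact values we fine-tuned:
\begin{verbatim}
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def densecrf_refine(image, probs, iters=5):
    """Refine softmax probabilities `probs` (C, H, W) for a uint8 RGB `image`."""
    C, H, W = probs.shape
    d = dcrf.DenseCRF2D(W, H, C)
    d.setUnaryEnergy(unary_from_softmax(probs))      # -log(p) unary terms
    d.addPairwiseGaussian(sxy=3, compat=3)           # smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13, compat=10,
                           rgbim=np.ascontiguousarray(image))  # appearance kernel
    Q = d.inference(iters)
    return np.argmax(np.array(Q).reshape(C, H, W), axis=0)
\end{verbatim}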
As indicated in Tab.~\ref{Tab 3}, our solution reaches the best performance on both mIoU and F-score among all the algorithms. It can be seen that DenseCRF brings only a limited gain in mIoU and has little effect on improving the object boundaries. Moreover, our experiments show that our method is also complementary to SegFix. \subsection{Generalizability on Other Methods} To verify the effectiveness and generalizability of our proposed MSP, we conduct extensive experiments based on two different baselines, namely PSPNet~\cite{zhao2017pyramid} and DeeplabV3~\cite{chen2017rethinking}, on the validation set of ADE20K. The results are reported in Tab.~\ref{Tab 1} and Tab.~\ref{Tab deep}. As indicated, our method brings a relatively large gain to both networks. Specifically, our method based on PSPNet with backbone ResNet-101 achieves 44.41\% in mIoU, outperforming the original one by 0.84\%. Our method based on DeeplabV3 with backbone ResNet-101 achieves 44.89\% in mIoU, outperforming the original one by 0.81\%. \setlength{\tabcolsep}{0.4cm}{\begin{table} \begin{center} \begin{tabular}{c|c|c|c} \hline Method & Backbone & MSP & mIoU(\%) \\ \hline\hline PSPNet & ResNet-50 & & 41.23\\ PSPNet & ResNet-50 & \checkmark &{\bf 41.99}\\ \hline PSPNet & ResNet-101 & & 43.57\\ PSPNet & ResNet-101 & \checkmark &{\bf 44.41}\\ \hline \end{tabular} \end{center} \setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.1cm} \caption{Comparisons with PSPNet on the ADE20K dataset.} \label{Tab 1} \end{table}} \setlength{\tabcolsep}{0.36cm}{\begin{table} \begin{center} \begin{tabular}{c|c|c|c} \hline Method & Backbone & MSP & mIoU(\%) \\ \hline\hline DeeplabV3 & ResNet-50 & & 42.42\\ DeeplabV3 & ResNet-50 & \checkmark &{\bf 43.16}\\ \hline DeeplabV3 & ResNet-101 & & 44.08\\ DeeplabV3 & ResNet-101 & \checkmark &{\bf 44.89}\\ \hline \end{tabular} \end{center} \setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.1cm} \caption{Comparisons with DeeplabV3 on the ADE20K dataset.} \label{Tab deep} \end{table}} \subsection{Generalizability on Other Datasets} To further verify the effectiveness and generalizability of our proposed MSP, we conduct extensive experiments on three other datasets, namely Cityscapes, PASCAL VOC, and PASCAL Context. \subsubsection{Cityscapes} Cityscapes is another popular dataset for scene parsing, which contains 19 classes in total. It consists of 5K high-quality pixel-annotated images collected from 50 cities in different seasons, all with a resolution of $1024\times2048$ pixels. The training set contains 2975 finely annotated images, the validation set contains 500 images, and the test set contains 1525 images. \setlength{\tabcolsep}{0.35cm}{\begin{table}[h] \begin{center} \begin{tabular}{c|c|c|c} \hline Method & Backbone & MSP & mIoU(\%) \\ \hline\hline DeeplabV3+ & ResNet-50 & & 79.24\\ DeeplabV3+ & ResNet-50 & \checkmark & {\bf 79.79} \\ \hline DeeplabV3+ & ResNet-101 & &79.93 \\ DeeplabV3+ & ResNet-101 &\checkmark &{\bf 80.49}\\ \hline \end{tabular} \end{center} \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{-0.3cm} \caption{Comparisons on the Cityscapes validation set with baseline DeeplabV3+. All experiments in the table adopt single-scale images as network input. Since the raw images are $1024\times2048$, we adjust $\lambda$ to 100 and 200 in MSP mode to maintain the quality of the superpixel blocks.
{\bf MSP:} multiscale superpixel module.} \label{Tab 5} \end{table}} For the sake of a fair comparison, we adopt ResNet-50 and ResNet-101 as our backbones. Taking DeeplabV3+ as the baseline, we use the 2975 finely annotated images of the Cityscapes training set for training, and the 500 validation images and 1525 test images for evaluation with single-scale input. The results can be found in Tab.~\ref{Tab 5} and Tab.~\ref{Tab 7}, respectively. The proposed approach clearly outperforms DeeplabV3+ on both the validation and test sets. More specifically, we achieve {\bf 80.49\%} and {\bf 80.14\%} in mIoU with ResNet-101 as the backbone on the validation and test sets, outperforming the baselines by {\bf 0.56\%} and {\bf 0.92\%}, which further demonstrates the effectiveness of our method. \setlength{\tabcolsep}{0.35cm}{\begin{table}[h] \begin{center} \begin{tabular}{c|c|c|c} \hline Method & Backbone & MSP & mIoU(\%) \\ \hline\hline DeeplabV3+ & ResNet-50 & & 78.23\\ DeeplabV3+ & ResNet-50 & \checkmark & {\bf 78.98} \\ \hline DeeplabV3+ & ResNet-101 & &79.22 \\ DeeplabV3+ & ResNet-101 &\checkmark &{\bf 80.14}\\ \hline \end{tabular} \end{center} \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{-0.3cm} \caption{Comparisons on the Cityscapes test set with baseline DeeplabV3+. We keep the same setting as on the Cityscapes validation set.} \label{Tab 7} \end{table}} \setlength{\tabcolsep}{0.35cm}{\begin{table}[h] \begin{center} \begin{tabular}{c|c|c|c} \hline Method & Backbone & MSP & mIoU(\%) \\ \hline\hline DeeplabV3+ & ResNet-50 & & 76.81\\ DeeplabV3+ & ResNet-50 & \checkmark & {\bf 77.64} \\ \hline DeeplabV3+ & ResNet-101 & &78.62 \\ DeeplabV3+ & ResNet-101 &\checkmark &{\bf 79.49}\\ \hline \end{tabular} \end{center} \setlength{\abovecaptionskip}{0.2cm} \caption{Comparisons on the PASCAL VOC dataset with baseline DeeplabV3+. We set $\lambda$ to 100 and 200 in MSP mode. Single-scale images are adopted as network input in all experiments.} \label{Tab 8} \end{table}} \subsubsection{PASCAL VOC} Experiments on the PASCAL VOC dataset are conducted based on DeeplabV3+ with ResNet-50 and ResNet-101 as the backbones. Quantitative results on PASCAL VOC are shown in Tab.~\ref{Tab 8}. Our method outperforms the baseline remarkably, bringing gains of {\bf 0.83\%} (ResNet-50) and {\bf 0.87\%} (ResNet-101) in mIoU, which suggests that our method is generally beneficial. \subsubsection{PASCAL Context} We also conduct experiments on the PASCAL Context dataset. Following PASCAL VOC, we set $\lambda$ to 100 and 200 in MSP mode, and adopt single-scale images as network input for evaluation. Comparisons with the baseline are shown in Tab.~\ref{Tab all}. As indicated in the table, our approach achieves {\bf 48.11\%} in mIoU and outperforms DeeplabV3+ by {\bf 0.84\%} with ResNet-101 as the backbone.
\setlength{\tabcolsep}{0.35cm}{\begin{table}[h] \begin{center} \begin{tabular}{c|c|c|c} \hline Method & Backbone & MSP & mIoU(\%) \\ \hline\hline DeeplabV3+ & ResNet-101 & &47.27 \\ DeeplabV3+ & ResNet-101 &\checkmark &{\bf 48.11}\\ \hline \end{tabular} \end{center} \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{-0.3cm} \caption{Comparisons on the PASCAL Context dataset.} \label{Tab all} \end{table}} \section{Conclusion} In this paper, we propose a simple yet effective message passing method that exploits the sharp boundaries of superpixels for semantic segmentation, bringing consistent gains over our baselines on the ADE20K, Cityscapes, PASCAL VOC, and PASCAL Context datasets. Nevertheless, this is only a first step towards using superpixels appropriately. We believe that mature superpixel algorithms can benefit not only semantic segmentation but also various other computer vision tasks, such as instance segmentation, object detection, and saliency estimation. We will explore these directions in follow-up research. {\small
\section{Introduction} \begin{figure}[t] \begin{subfigure}[t]{\linewidth} \centering \includegraphics[width=0.6\linewidth]{figures/sample_467.png} \caption{Image with human and object detections.} \label{fig:teaser-sample} \end{subfigure} \begin{subfigure}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{figures/tokens.pdf} \caption{Unary and pairwise tokens with predicted scores (\emph{riding a motorcycle}).} \label{fig:teaser-tokens} \end{subfigure} \caption{Our Unary--Pairwise Transformer encodes human and object instances individually and in pairs, allowing it to reason about the data in complementary ways. In this example, our network correctly identifies the interactive pairs for the action \textit{riding a motorcycle}, while suppressing the visually-similar non-interactive pairs and those with different associated actions. } \label{fig:teaser} \vspace{-10pt} \end{figure} Human--object interaction (HOI) detectors localise interactive human--object pairs in an image and classify the actions. They can be categorised as one- or two-stage, mirroring the grouping of object detectors. Exemplified by Faster R-CNN~\cite{fasterrcnn}, two-stage object detectors typically include a region proposal network, which explicitly encodes potential regions of interest in the form of bounding boxes. These bounding boxes can then be classified and further refined via regression in a downstream network. In contrast, one-stage detectors, such as RetinaNet~\cite{retinanet}, retain the abstract feature representations of objects throughout the network, and decode them into bounding boxes and classification scores at the end of the pipeline. In addition to the same categorisation convention, HOI detectors need to localise two bounding boxes per instance instead of one. Early works~\cite{hicodet,gpnn,no-frills,tin} employ a pre-trained object detector to obtain a set of human and object boxes, which are paired up exhaustively and processed by a downstream network for interaction classification. This methodology coincides with that of two-stage detectors and quickly became the mainstream approach due to the accessibility of high-quality pre-trained object detectors. The first instance of one-stage HOI detectors was introduced by Liao \etal.~\cite{ppdm}. They characterised human--object pairs as interaction points, represented as the midpoint of the human and object box centres. Recently, due to the great success in using learnable queries in transformer decoders for localisation~\cite{detr}, the development of one-stage HOI detectors has been greatly advanced. However, HOI detectors that adapt the DETR model rely heavily on the transformer, which is notoriously difficult to train~\cite{train-xfmer}, to produce discriminative features. In particular, when initialised with DETR's pre-trained weights, the decoder attends to regions of high objectness by default. The heavy-weight decoder stack then has to be adapted to attend to regions of high interactiveness. Consequently, training such one-stage detectors often consumes large amounts of memory and time as shown in \cref{fig:convg-time}. In contrast, two-stage HOI detectors do not repurpose the backbone network, but maintain it as an object detector. Since the first half of the pipeline already functions as intended at the beginning of training, the second half can be trained quickly for the specific task of HOI detection. 
Furthermore, since the object detector can be decoupled from the downstream interaction head during training, its weights can be frozen, and a lighter-weight network can be used for interaction detection, saving a substantial amount of memory and computational resources. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/conv_time.png} \caption{Mean average precision as a function of the number of epochs (left) and training time (right) to convergence. The backbone networks for all methods have been initialised with the same weights and trained on 8 GeForce GTX TITAN X GPUs.} \label{fig:convg-time} \end{figure} \begin{table}[t]\small \vspace{-4pt} \caption{The performance discrepancy between existing state-of-the-art one-stage and two-stage HOI detectors is largely attributable to the choice of backbone network. We report the mean average precision ($\times 100$) on the HICO-DET~\cite{hicodet} test set.} \label{tab:one-vs-two} \setlength{\tabcolsep}{6pt} \vspace{-4pt} \begin{tabularx}{\linewidth}{l l l C} \toprule \textbf{Method} & \textbf{Type} & \textbf{Detector Backbone} & \textbf{mAP} \\ \midrule SCG~\cite{scg} & two-stage & Faster R-CNN R-50-FPN & 24.88 \\ SCG~\cite{scg} & two-stage & DETR R-50 & 28.79 \\ SCG~\cite{scg} & two-stage & DETR R-101 & \textbf{29.26} \\ \midrule QPIC~\cite{qpic} & one-stage & DETR R-50 & 29.07 \\ QPIC~\cite{qpic} & one-stage & DETR R-101 & \textbf{29.90} \\ \midrule Ours & two-stage & DETR R-50 & 31.66 \\ Ours & two-stage & DETR R-101 & \textbf{32.31} \\ \bottomrule \end{tabularx} \vspace{-6pt} \end{table} Despite these advantages, the performance of two-stage detectors has lagged behind their one-stage counterparts. However, most of these two-stage models used Faster R-CNN~\cite{fasterrcnn} rather than more recent object detectors. We found that simply replacing Faster R-CNN with the DETR model in an existing two-stage detector (SCG)~\cite{scg} resulted in a significant improvement, putting it on par with a state-of-the-art one-stage detector (QPIC), as shown in \cref{tab:one-vs-two}. We attribute this performance gain to the representation power of transformers and bipartite matching loss~\cite{detr}. The latter is particularly important because it resolves the misalignment between the training procedure and evaluation protocol. The evaluation protocol dictates that, amongst all detections associated with the same ground truth, the highest scoring one is the true positive while the others are false positives. Without bipartite matching, all such detections will be labelled as positives. The detector then has to resort to heuristics such as non-maximum suppression to mitigate the issue, resulting in procedural misalignment. We propose a two-stage model that refines the output features from DETR with additional transformer layers for HOI classification. As shown in \cref{fig:teaser}, we encode the instance information in two ways: a unary representation where individual human and object instances are encoded separately, and a pairwise representation where human--object pairs are encoded jointly. These representations provide orthogonal information, and we observe different behaviours in their associated layers. The unary encoder layer preferentially increases the predicted interaction scores for positive examples, while the pairwise encoder layer suppresses the negative examples. 
As a result, this complementary behaviour widens the gap between scores of positive and negative examples, particularly benefiting ranking metrics such as mean average precision (mAP). Our primary contribution is a novel and efficient two-stage HOI detector with unary and pairwise encodings. Our secondary contribution is demonstrating how pairwise box positional encodings---critical for HOI detection---can be incorporated into a transformer architecture, enabling it to jointly reason about unary appearance and pairwise spatial information. We further provide a detailed analysis on the behaviour of the two encoder layers, showing that they have complementary properties. Our proposed model not only outperforms state-of-the-art methods, but also consumes much less time and memory to train. The latter allows us to employ more memory-intensive backbone networks, further improving the performance. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figures/upt.pdf} \caption{Flowchart for our unary--pairwise transformer. An input image is processed by a backbone CNN to produce image features, which are partitioned into patches of equal size and augmented with sinusoidal positional encodings. These tokens are fed into the DETR~\cite{detr} transformer encoder--decoder stack, generating new features for a fixed number of learnable object queries. These are decoded by an MLP as object classification scores and bounding boxes, and are also passed to the interaction head as unary tokens. The interaction head also receives pairwise positional encodings computed from the predicted bounding box coordinates. A modified transformer encoder layer then refines the unary tokens using the pairwise positional encodings. The output tokens are paired up and fused with the same positional encodings to produce pairwise tokens, which are processed by a standard transformer encoder layer before an MLP decodes the final features as action classification scores. } \label{fig:diagram} \end{figure*} \section{Related work} Transformer networks~\cite{xfmer}, initially developed for machine translation, have recently become ubiquitous in computer vision due to their representation power, flexibility, and global receptive field via the attention mechanism. The image transformer ViT~\cite{vit} represented an image as a set of spatial patches, each of which was encoded as a token through simple linear transformations. This approach for tokenising images rapidly gained traction and inspired many subsequent works~\cite{swint}. Another key innovation of transformers is the use of learnable queries in the decoder, which are initialised randomly and updated through alternating self-attention and cross-attention with encoder tokens. Carion \etal~\cite{detr} use these as object queries in place of conventional region proposals for their object detector. Together with a bipartite matching loss, this design gave rise to a new class of one-stage detection models that formulate the detection task as a set prediction problem. It has since inspired numerous works in HOI detection~\cite{qpic, hoitrans, hotr, asnet}. To adapt the DETR model to HOI detection, Tamura \etal~\cite{qpic} and Zou \etal~\cite{hoitrans} add additional heads to the transformer in order to localise both the human and object, as well as predict the action. As for bipartite matching, additional cost terms are added for action prediction. 
On the other hand, Kim \etal~\cite{hotr} and Chen \etal~\cite{asnet} propose an interaction decoder to be used alongside the DETR instance decoder. It is specifically responsible for predicting the action while also matching the interactive human--object pairs. These aforementioned one-stage detectors have achieved tremendous success in pushing the state-of-the-art performance. However, they all require significant resources to train the models. In contrast, this work focuses on exploiting novel ideas to produce equally discriminative features while preserving the memory efficiency and low training time of two-stage detectors. Two-stage HOI detectors have also undergone significant development recently. Li \etal~\cite{idn} studied the integration and decomposition of HOIs in an analogy to the superposition of waves in harmonic analysis. Hou \etal explored few-shot learning by fabricating object representations in feature space~\cite{fcl} and learning to transfer object affordance~\cite{atl}. Finally, Zhang \etal~\cite{scg} proposed to fuse features of different modalities within a graphical model to produce more discriminative features. We make use of this modality fusion in our transformer model and show that it leads to significant improvements. \section{Unary--pairwise transformers} To leverage the success of transformer-based detectors, we use DETR~\cite{detr} as our backbone object detector and focus on designing an effective and efficient interaction head for HOI detection, as shown in \cref{fig:diagram}. The interaction head consists of two types of transformer encoder layers, with the first layer modified to accommodate additional pairwise input. The first layer operates on unary tokens, \ie, individual human and object instances, while the second layer operates on pairwise tokens, \ie, human--object pairs. Based on our analysis and experimental observations in \cref{sec:macro} and \cref{sec:micro}, self-attention in the unary layer preferentially increases the interaction scores for positive HOI pairs, whereas self-attention in the pairwise layer decreases the scores for negative pairs. As such, we refer to these layers as \textit{cooperative} and \textit{competitive} layers respectively. \subsection{Cooperative layer} \label{sec:coop} A standard transformer encoder layer takes as input a set of tokens and performs self-attention. Positional encodings are usually indispensable to compensate for the lack of order in the token set. Typically, sinusoidal functions of the position~\cite{xfmer} or learnable embeddings~\cite{detr} are used for this purpose. It is possible to extend sinusoidal encodings to bounding box coordinates, however, our unary tokens already contain positional information, since they were decoded into bounding boxes. Instead, we take this as an opportunity to inject pairwise spatial information into the transformer, something that has been shown to be helpful for the task of HOI detection~\cite{scg}. Specifically, we compute the unary and pairwise spatial features used by Zhang \etal~\cite{scg} from the bounding boxes, including the unary box centre, width and height, and pairwise intersection-over-union, relative area, and direction, and pass this through an MLP to obtain the pairwise positional encodings. We defer the full details to~\cref{app:pe}. We also found that the usual additive approach did not perform as well for our positional encodings. 
So we slightly modified the attention operation in the transformer encoder layer to allow directly injecting the pairwise positional encodings into the computation of values and attention weights. \begin{figure}[t] \centering \includegraphics[width=0.89\linewidth]{figures/modified_layer.pdf} \caption{Architecture of the modified transformer encoder layer (left) and its attention module (right). FFN stands for feedforward network~\cite{xfmer}. ``Pairwise concat.'' refers to the operation of pairing up all tokens and concatenating the features. ``Duplicate'' refers to the operation of repeating the features along a new dimension.} \label{fig:modified-layer} \end{figure} More formally, given the detections returned by DETR, we first apply non-maximum suppression and thresholding. This leaves a smaller set $\{d_i\}_{i=1}^{n}$, where a detection $d_i=(\bb_i, s_i, c_i, \bx_i)$ consists of the box coordinates $\bb_i \in \reals^4$, the confidence score $s_i \in [0, 1]$, the object class $c_i \in \cK$ for a set of object categories $\cK$, and the object query or feature $\bx_i \in \reals^{m}$. We compute the pairwise box positional encodings $\{\by_{i, j} \in \reals^m\}_{i, j=1}^{n}$ as outlined above. We denote the collection of unary tokens by $X \in \reals^{n \times m}$ and the pairwise positional encodings by $Y \in \reals^{n \times n \times m}$. The complete structure of the modified transformer encoder layer is shown in \cref{fig:modified-layer}. For brevity of exposition, let us assume that the number of heads $h$ is 1, and define \begin{align} \dot{X} \in \reals^{n \times n \times m},\: \dot{X}_i & \triangleq X \in \reals^{n \times m}, \\ \ddot{X} \in \reals^{n \times n \times 2m},\: \ddot{\bx}_{i,j} & \triangleq \bx_{i} \oplus \bx_{j} \in \reals^{2m}, \end{align} where $\oplus$ denotes vector concatenation. That is, the tensors $\dot{X}$ and $\ddot{X}$ are the results of duplication and pairwise concatenation. The equivalent values and attention weights can then be computed as \begin{align} V &= \dot{X} \otimes Y, \\ W &= \text{softmax}( (\ddot{X} \oplus Y) \bw + b ), \end{align} where $\otimes$ denotes elementwise product and $\bw \in \reals^{3m}$ and $b \in \reals$ are the parameters of the linear layer. The output of the attention layer is then computed as $W \otimes V$. Additional details can be found in~\cref{app:me}. \subsection{Competitive layer} To compute the set of pairwise tokens, we form all pairs of distinct unary tokens and remove those where the first token is not human, as object--object pairs are beyond the scope of HOI detection. We denote the resulting set as $\{p_k = (\bx_i, \bx_j, \by_{i, j}) \mid i \neq j, c_i = ``\text{human}"\}$. We then compute the pairwise tokens from the unary tokens and positional encodings via multi-branch fusion (MBF)~\cite{scg} as \begin{equation} \bz_k = \text{MBF}(\bx_i \oplus \bx_j, \by_{i, j}). \end{equation} Specifically, the MBF module fuses two modalities in multiple homogeneous branches and return a unified feature representation. For completeness, full details are provided in~\cref{app:mbf}. Last, the set of pairwise tokens are fed into an additional transformer encoder layer, allowing the network to compare the HOI candidates, before an MLP predicts each HOI pair's action classification logits $\widetilde{\bs}$. \subsection{Training and inference} To make full use of the pre-trained object detector, we incorporate the object confidence scores into the final scores of each human--object pair. 
Denoting the action logits of the $k^{\text{th}}$ pair $p_k$ as $\widetilde{\bs}_k$, the final scores are computed as \begin{equation} \bs_k=(s_i)^\lambda \cdot (s_j)^\lambda \cdot \sigma(\widetilde{\bs}_k), \label{eq:scores} \end{equation} where $\lambda > 1$ is a constant used during inference to suppress overconfident objects~\cite{scg} and $\sigma$ is the sigmoid function. We use focal loss\footnote{Final scores in \cref{eq:scores} are normalised to the interval $[0, 1]$. In training, we instead recover the scale prior to normalisation and use the corresponding loss with logits for numerical stability. See more details in~\cref{app:loss}.}~\cite{retinanet} for action classification to counter the imbalance between positive and negative examples. Following previous practice~\cite{no-frills,scg}, we only compute the loss on valid action classes for each object type, specified by the dataset. During inference, scores for invalid combinations of actions and objects (\eg, \textit{eating a car}) are zeroed out. \section{Experiments} \begin{table*}[t]\small \centering \caption{Comparison of HOI detection performance (mAP$\times100$) on the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} test sets. The highest result in each section is highlighted in bold.} \label{tab:results} \begin{tabularx}{\linewidth}{@{\extracolsep{\fill}} l l cccccccc} \toprule & & \multicolumn{6}{c}{\textbf{HICO-DET}} & \multicolumn{2}{c}{\textbf{V-COCO}} \\ [4pt] & & \multicolumn{3}{c}{Default Setting} & \multicolumn{3}{c}{Known Objects Setting} & & \\ \cline{3-5}\cline{6-8}\cline{9-10} \\ [-8pt] \textbf{Method} & \textbf{Backbone} & Full & Rare & Non-rare & Full & Rare & Non-rare & AP$_{role}^{S1}$ & AP$_{role}^{S2}$ \\ \midrule HO-RCNN~\cite{hicodet} & CaffeNet & 7.81 & 5.37 & 8.54 & 10.41 & 8.94 & 10.85 & - & - \\ InteractNet~\cite{interactnet} & ResNet-50-FPN & 9.94 & 7.16 & 10.77 & - & - & - & 40.0 & - \\ GPNN~\cite{gpnn} & ResNet-101 & 13.11 & 9.34 & 14.23 & - & - & - & 44.0 & - \\ TIN~\cite{tin} & ResNet-50 & 17.03 & 13.42 & 18.11 & 19.17 & 15.51 & 20.26 & 47.8 & 54.2 \\ Gupta \etal~\cite{no-frills} & ResNet-152 & 17.18 & 12.17 & 18.68 & - & - & - & - & - \\ VSGNet~\cite{vsgnet} & ResNet-152 & 19.80 & 16.05 & 20.91 & - & - & - & 51.8 & 57.0 \\ DJ-RN~\cite{djrn} & ResNet-50 & 21.34 & 18.53 & 22.18 & 23.69 & 20.64 & 24.60 & - & - \\ PPDM~\cite{ppdm} & Hourglass-104 & 21.94 & 13.97 & 24.32 & 24.81 & 17.09 & 27.12 & - & - \\ VCL~\cite{vcl} & ResNet-50 & 23.63 & 17.21 & 25.55 & 25.98 & 19.12 & 28.03 & 48.3 & - \\ ATL~\cite{atl} & ResNet-50 & 23.81 & 17.43 & 27.42 & 27.38 & 22.09 & 28.96 & - & - \\ DRG~\cite{drg} & ResNet-50-FPN & 24.53 & 19.47 & 26.04 & 27.98 & 23.11 & 29.43 & 51.0 & - \\ IDN~\cite{idn} & ResNet-50 & 24.58 & 20.33 & 25.86 & 27.89 & 23.64 & 29.16 & 53.3 & 60.3 \\ HOTR~\cite{hotr} & ResNet-50 & 25.10 & 17.34 & 27.42 & - & - & - & 55.2 & \textbf{64.4} \\ FCL~\cite{fcl} & ResNet-50 & 25.27 & 20.57 & 26.67 & 27.71 & 22.34 & 28.93 & 52.4 & - \\ HOI-Trans~\cite{hoitrans} & ResNet-101 & 26.61 & 19.15 & 28.84 & 29.13 & 20.98 & 31.57 & 52.9 & - \\ AS-Net~\cite{asnet} & ResNet-50 & 28.87 & 24.25 & 30.25 & 31.74 & 27.07 & 33.14 & 53.9 & - \\ SCG~\cite{scg} & ResNet-50-FPN & 29.26 & \textbf{24.61} & 30.65 & \textbf{32.87} & \textbf{27.89} & \textbf{34.35} & 54.2 & 60.9 \\ QPIC~\cite{qpic} & ResNet-101 & \textbf{29.90} & 23.92 & \textbf{31.69} & 32.38 & 26.06 & 34.27 & \textbf{58.8} & 61.0 \\ \midrule Ours (UPT) & ResNet-50 & 31.66 & 25.94 & 33.36 & 35.05 & 29.27 & 36.77 & 59.0 & 64.5 \\ Ours (UPT) & ResNet-101 & 
32.31 & 28.55 & 33.44 & 35.65 & \textbf{31.60} & 36.86 & 60.7 & 66.2 \\ Ours (UPT) & ResNet-101-DC5 & \textbf{32.62} & \textbf{28.62} & \textbf{33.81} & \textbf{36.08} & 31.41 & \textbf{37.47} & \textbf{61.3} & \textbf{67.1} \\ \bottomrule \end{tabularx} \end{table*} In this section, we first demonstrate that the proposed unary--pairwise transformer achieves state-of-the-art performance on both the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} datasets, outperforming the next best method by a significant margin. We then provide a thorough analysis on the effects of the cooperative and competitive layers. In particular, we show that the cooperative layer increases the scores of positive examples while the competitive layer suppresses those of the negative examples. We then visualise the attention weights for specific images, and show how these behaviours are achieved by the attention mechanism. At inference time, our method with ResNet50~\cite{resnet} runs at 24 FPS on a single GeForce RTX 3090 device. \paragraph{Datasets:} HICO-DET~\cite{hicodet} is a large-scale HOI detection dataset with $37\,633$ training images, $9\,546$ test images, $80$ object types, $117$ actions, and $600$ interaction types. The dataset has $117\,871$ human--object pairs with annotated bounding boxes in the training set and $33\,405$ in the test set. V-COCO~\cite{vcoco} is much smaller in scale, with $2\,533$ training images, $2\,867$ validation images, $4\,946$ test images, and only $24$ different actions. \subsection{Implementation details} We fine-tune the DETR model on the HICO-DET and V-COCO datasets prior to training and then freeze its weights. For HICO-DET, we use the publicly accessible DETR models pre-trained on MS COCO~\cite{coco}. However, for V-COCO, as its test set is contained in the COCO val2017 subset, we first pre-train DETR models from scratch on MS COCO, excluding those images in the V-COCO test set. For the interaction head, we filter out detections with scores lower than $0.2$, and sample at least $3$ and up to $15$ humans and objects each, prioritising high scoring ones. For the hidden dimension of the transformer, we use $m=256$, the same as DETR. Additionally, we set $\lambda$ to $1$ during training and $2.8$ during inference~\cite{scg}. For the hyperparameters used in the focal loss, we use the same values as SCG~\cite{scg}. We apply a few data augmentation techniques used in other detectors~\cite{detr,qpic}. Inputs images are scaled such that the shortest side is at least $480$ and at most $800$ pixels. The longest side is limited at $1333$ pixels. Additionally, each image is cropped with a probability of $0.5$ to a random rectangle with each side being at least $384$ pixels and at most $600$ pixels before being scaled. We also apply colour jittering, where the brightness, contrast and saturation values are adjusted by a random factor between $0.6$ to $1.4$. We use AdamW~\cite{adamw} as the optimiser with an initial learning rate of $10^{-4}$. All models are trained for $20$ epochs with a learning rate reduction at the $10^{\text{th}}$ epoch by a factor of $10$. Training is conducted on $8$ GeForce GTX TITAN X devices, with a batch size of $2$ per GPU---an effective batch size of $16$. \subsection{Comparison with state-of-the-art methods} \begin{table*}[t]\small \caption{Comparing the effect of the cooperative (coop.) and competitive (comp.) layers on the interaction scores. 
We report the change in the interaction scores as the layer in the $\Delta$ Architecture column is added to the reference network, for positives, easy negatives and hard negatives, with the number of examples in parentheses. As indicated by the bold numbers, the cooperative layer significantly increases the scores of positive examples while the competitive layer suppresses hard negative examples. Together, these layers widen the gap between scores of positive and negative examples, improving the detection mAP.} \label{tab:delta} \setlength{\tabcolsep}{3pt} \begin{tabularx}{\linewidth}{@{\extracolsep{\fill}} l l c c c c c c} \toprule & & \multicolumn{2}{c}{$\Delta$ \textbf{Positives} ($25\,391$)} & \multicolumn{2}{c}{$\Delta$ \textbf{Easy Negatives} ($3\,903\,416$)} & \multicolumn{2}{c}{$\Delta$ \textbf{Hard Negatives} ($510\,991$)}\\ [4pt] \cline{3-4} \cline{5-6} \cline{7-8} \\ [-8pt] \textbf{Reference} & $\Delta$ \textbf{Architecture} & Mean & Median & Mean & Median & Mean & Median \\ \midrule Ours w/o coop. layer & + coop. layer & \textbf{+0.1487} & +0.1078 & +0.0001 & +0.0000 & +0.0071 & +0.0000 \\ Ours w/o comp. layer & + comp. layer & -0.0463 & -0.0310 & -0.0096 & -0.0024 & \textbf{-0.1080} & -0.0922 \\ Ours w/o both layers & + both layers & \textbf{+0.0799} & +0.0390 & -0.0076 & -0.0018 & \textbf{-0.0814} & -0.0748 \\ \bottomrule \end{tabularx} \end{table*} \begin{figure*}[t] \begin{subfigure}[t]{0.33\linewidth} \centering \includegraphics[width=\linewidth]{figures/add_coop.png} \caption{\cref{tab:delta} first row} \label{fig:scatter-left} \end{subfigure} \begin{subfigure}[t]{0.33\linewidth} \centering \includegraphics[width=\linewidth]{figures/add_comp.png} \caption{\cref{tab:delta} second row} \label{fig:scatter-mid} \end{subfigure} \begin{subfigure}[t]{0.33\linewidth} \centering \includegraphics[width=\linewidth]{figures/add_both.png} \caption{\cref{tab:delta} third row} \label{fig:scatter-right} \end{subfigure} \caption{Change in the interaction score (delta) with respect to the reference score. \subref{fig:scatter-left} The distribution of score deltas when adding the cooperative layer (first row of \cref{tab:delta}). \subref{fig:scatter-mid} Adding the competitive layer to the model (second row). \subref{fig:scatter-right} Adding both layers (last row). For visualisation purposes, only $20\%$ of the negatives are sampled and displayed. } \label{fig:scatter} \end{figure*} The performance of our model is compared to existing methods on the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} datasets in \cref{tab:results}. There are two different settings for evaluation on HICO-DET. \textit{Default Setting}: A detected human--object pair is considered matched with a ground truth pair, if the minimum intersection over union (IoU) between the human boxes and object boxes exceeds $0.5$. Amongst all matched pairs, the one with the highest score is considered the true positive while others are false positives. Pairs without a matched ground truth are also considered false positives. \textit{Known Objects Setting}: Besides the aforementioned criteria, this setting assumes the set of object types in ground truth pairs are known. Therefore, detected pairs with an object type outside the set are removed automatically, thus reducing the difficulty of the problem. For V-COCO, the average precision (AP) is computed under two scenarios, differentiated by the superscripts $S1$ and $S2$. This is to account for missing objects due to occlusion. 
For scenario $1$, empty object boxes should be predicted in case of occlusion for a detected pair to be considered a match with the corresponding ground truth, while for scenario $2$, object boxes are always assumed to be matched in such cases. We report our model's performance for three different backbone networks. Notably, our model with the lightest-weight backbone already outperforms the next best method by a significant margin in almost every category. This gap is further widened with more powerful backbone networks. In particular, since the backbone CNN and object detection transformer are detached from the computational graph, our model has a small memory footprint. This allows us to use a higher-resolution feature map by removing the stride in the $5^{\text{th}}$ convolutional block (C5) of ResNet~\cite{resnet}, which has been shown to improve detection performance on small objects~\cite{detr}. We denote this as dilated C5 (DC5). \subsection{Macroscopic effects of the interaction head} \label{sec:macro} In this section, we compare the effects of the unary (cooperative) and pairwise (competitive) layers on the HICO-DET test set, with ResNet50~\cite{resnet} as the CNN backbone. Since the parameters in the object detector are kept frozen for our model, the set of detections processed by the downstream network remains the same, regardless of any architectural changes in the interaction head. This allows us to compare how different variants of our model perform on the same human--object pairs. To this end, we collected the predicted interaction scores for all human--object pairs over the test set and compare how adding certain layers influence them. In \cref{tab:delta}, we show some statistics on the change of scores upon an architectural modification. In particular, note that the vast majority of collected pairs are easy negatives with scores close to zero. For analysis, we divide the negative examples into easy and hard, where we define an easy negative as one with a score lower than $0.05$ as predicted by the ``Ours w/o both layers'' model, which accounts for $90\%$ of the negative examples. In addition, we also show the distribution of the change in score with respect to the reference score as scatter plots in \cref{fig:scatter}. The points are naturally bounded by the half-spaces $0 \leq x+y \leq 1$. Notably, adding the cooperative layer results in a significant average increase ($+0.15$) in the scores of positive examples, with little effect on the negative examples. This can be seen in \cref{fig:scatter-left} as well, where the score changes for almost all positive examples are larger than zero. In contrast, adding the competitive layer leads to a significant average decrease ($-0.11$) in the scores of hard negative examples, albeit with a small decrease in the score of positive examples as well. This minor decrease is compensated by the cooperative layer as shown in the last row of \cref{tab:delta}. Furthermore, looking at \cref{fig:scatter-mid}, we can see a dense mass near the line $y=-x$, which indicates that many negative examples have had their scores suppressed to zero. \begin{table}[t]\small \caption{Effect of the cooperative and competitive layers on the HICO-DET test set under the default settings.} \label{tab:ablation} \setlength{\tabcolsep}{3pt} \begin{tabularx}{\linewidth}{l C C C} \toprule \textbf{Model} & \textbf{Full} & \textbf{Rare} & \textbf{Non-rare} \\ \midrule Ours w/o both layers & 29.22 & 23.09 & 31.05 \\ Ours w/o comp. 
layer & 30.78 & 24.92 & 32.53 \\ Ours w/o coop. layer & 30.68 & 24.69 & 32.47 \\ Ours w/o pairwise pos. enc. & 29.98 & 23.72 & 31.64 \\ \midrule Ours ($1 \times$ coop., $1 \times$ comp.) & 31.33 & 26.02 & 32.91 \\ Ours ($1 \times$ coop., $2 \times$ comp.) & 31.62 & \textbf{26.18} & 33.24 \\ Ours ($2 \times$ coop., $1 \times$ comp.) & \textbf{31.66} & 25.94 & \textbf{33.36} \\ \bottomrule \end{tabularx} \end{table} \paragraph{Ablation study:} In \cref{tab:ablation}, we ablate the effect of different design decisions on performance. Adding the cooperative and competitive layers individually improves the performance by around $1.5$~mAP, while adding both layers jointly improves by over $2$~mAP. We also demonstrate the significance of the pairwise position encodings by removing them from the modified encoder and the multi-branch fusion module. This results in a 1.3~mAP decrease. Finally, we observe a slight improvement (0.3~mAP) when adding an additional cooperative or competitive layer, but no further improvements with more layers. As the competitive layer is more costly, we use two cooperative layers. \subsection{Microscopic effects of the interaction head} \label{sec:micro} \begin{figure}[t] \centering \includegraphics[height=0.36\linewidth]{figures/image.png} \hspace{3pt} \includegraphics[height=0.36\linewidth]{figures/unary_attn.png} \caption{Detected human and object instances (left) and the unary attention map for these instances (right).} \label{fig:unary_attn} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/pairwise_attn.png} \caption{Pairwise attention map for the human and object instances in \cref{fig:unary_attn}.} \label{fig:pairwise_attn} \end{figure} \begin{figure*}[t] \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/stand_on_snowboard_6544.png} \caption{\textit{standing on a snowboard}} \label{fig:standing-on-snowboard} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/holding_umbrella_7243.png} \caption{\textit{holding an umbrella}} \label{fig:holding-umbrella} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/carrying_suitcase_357.png} \caption{\textit{carrying a suitcase}} \label{fig:carrying-suitcase} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/sitting_at_dining_table_8701.png} \caption{\textit{sitting at a dining table}} \label{fig:sitting-at-dinning-table} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/sitting_on_bench_934.png} \caption{\textit{sitting on a bench}} \label{fig:sitting-on-bench} \end{subfigure} \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/flying_airplane_573.png} \caption{\textit{flying an airplane}} \label{fig:flying-airplane} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/holding_surfboard_1681.png} \caption{\textit{holding a surfboard}} \label{fig:holding-surfboard} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/wielding_baseball_bat_1860.png} \caption{\textit{wielding a baseball bat}} \label{fig:wielding-baseball-bat} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering 
\includegraphics[height=0.68\linewidth]{figures/riding_bike_998.png} \caption{\textit{riding a bike}} \label{fig:riding-bike} \end{subfigure} \hfill% \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.68\linewidth]{figures/holding_wine_glasses_2661.png} \caption{\textit{holding a wineglass}} \label{fig:holding-wineglass} \end{subfigure} \caption{Qualitative results of detected HOIs. Interactive human--object pairs are connected by red lines, with the interaction scores overlaid above the human box. Pairs with scores lower than $0.2$ are filtered out.} \label{fig:qualitative} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.67\linewidth]{figures/driving_truck_8578.png} \caption{\textit{driving a truck}} \label{fig:driving-truck} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\linewidth} \includegraphics[height=0.67\linewidth]{figures/buying_bananas_2502.png} \caption{\textit{buying bananas}} \label{fig:buying-bananas} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.67\linewidth]{figures/repairing_laptops_607.png} \caption{\textit{repairing a laptop}} \label{fig:repairing-laptop} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.67\linewidth]{figures/washing_bicycle_4213.png} \caption{\textit{washing a bicycle}} \label{fig:washing-bike} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\linewidth} \centering \includegraphics[height=0.67\linewidth]{figures/cutting_tie_9522.png} \caption{\textit{cutting a tie}} \label{fig:cutting-tie} \end{subfigure} \caption{Failure cases often occur when there is ambiguity in the interaction~\subref{fig:driving-truck},~\subref{fig:buying-bananas},~\subref{fig:repairing-laptop} or a lack of training data~\subref{fig:repairing-laptop},~\subref{fig:washing-bike},~\subref{fig:cutting-tie}.} \label{fig:failure} \end{figure*} In this section, we focus on a specific image and visualise the effect of attention in our cooperative and competitive layers. In \cref{fig:unary_attn}, we display a detection-annotated image and its associated attention map from the unary (cooperative) layer. The human--object pairs $(1, 4)$, $(2, 5)$ and $(3, 6)$ are engaged in the interaction \textit{riding a horse}. Excluding attention weights along the diagonal, we see that the corresponding human and horse instances attend to each other. We hypothesise that attention between pairs of unary tokens (e.g., $1$ and $4$) helps increase the interaction scores for the corresponding pairs. To validate this hypothesis, we manually set the attention logits between the three positive pairs to minus infinity, thus zeroing out the corresponding attention weights. The effect of this was an average decrease of $0.06$ ($8\%$) in the interaction scores for the three pairs, supporting the hypothesis. In \cref{fig:pairwise_attn}, we visualise the attention map of the pairwise (competitive) layer. Notably, all human--object pairs attend to the interactive pairs $(1, 4)$, $(2, 5)$ and $(3, 6)$ in decreasing order, except for the interactive pairs themselves. We hypothesise that attention is acting here to have the dominant pairs suppress the other pairs. To investigate, we manually set the weights such that the three interactive pairs all attend to $(1, 4)$ as well, with a weight of $1$. This resulted in a decrease of their interaction scores by $0.08$ ($11\%$). 
We then instead zeroed out the attention weights between the rest of the pairs and ($1, 4$), which resulted in a small increase in the scores of negative pairs. These results together suggest that attention in the competitive layer is acting as a soft version of non-maximum suppression, where pairs less likely to foster interactions attend to, and are suppressed by, the most dominant pairs. See~\cref{app:qual} for more examples. \subsection{Qualitative results and limitations} \vspace{-8pt} In \cref{fig:qualitative}, we present several qualitative examples of successful HOI detections, where our model accurately localises the human and object instances and assigns high scores to the interactive pairs. For example, in \cref{fig:holding-umbrella}, our model correctly identifies the subject of an interaction (the lady in red) despite her proximity to a non-interactive human (the lady in black). We also observe in \cref{fig:standing-on-snowboard} that our model becomes less confident when there is overlap and occlusion. This stems from the use of object detection scores in our model. Confusion in the object detector often translates to confusion in action classification. We also show five representative failure cases for our model, illustrating its limitations. In \cref{fig:driving-truck}, due to the indefinite position of drivers in the training set (and real life), the model struggled to identify the driver. For \cref{fig:washing-bike}, the model failed to recognise the interaction due to a lack of training data ($1$ training example), even though the action is well-defined. Overall, ambiguity in the actions and insufficient data are the biggest challenges for our model. Another limitation, specific to our model, is that the computation and memory requirements of our pairwise layer scale quadratically with the number of unary tokens. For scenes involving many interactive humans and objects, this becomes quite costly. Moreover, since the datasets we used are limited, we may expect poorer performance on data in the wild, where image resolution, lighting condition, etc. may be less controlled. \section{Conclusion} In this paper, we have proposed a two-stage detector of human--object interactions using a novel transformer architecture that exploits both unary and pairwise representations of the human and object instances. Our model not only outperforms the current state-of-the-art---a one-stage detector---but also consumes much less time and memory to train. Through extensive analysis, we demonstrate that attention between unary tokens acts to increase the scores of positive examples, while attention between pairwise tokens acts like non-maximum suppression, reducing the scores of negative examples. We show that these two effects are complementary, and together boost performance significantly. \vspace{-10pt} \paragraph{Potential negative societal impact:} Transformer models are large and computationally-expensive, and so have a significant negative environmental impact. To mitigate this, we use pre-trained models and a two-stage architecture, since fine-tuning an existing model requires less resources, as does training a single stage with the other stage fixed. There is also the potential for HOI detection models to be misused, such as for unauthorised surveillance, which disproportionately affects minority and marginalised communities. \vspace{-10pt} \paragraph{Acknowledgments:} We are grateful for support from Continental AG (D.C.). 
We would also like to thank Jia-Bin Huang and Yuliang Zou for their help with the reproduction of some experiment results. \clearpage {\small \bibliographystyle{ieee_fullname}
\section{Introduction} With the emergence of convolutional neural networks (CNNs)~\cite{1998Gradient}, numerous CNN-based salient object detection (SOD) methods~\cite{2018Detect, 7780449, Wu2019CascadedPD, Zhang2017AmuletAM} have been proposed and have continually set new performance records. However, these CNN-based SOD methods heavily rely on large amounts of hand-labeled data with pixel-level annotations, which are labor-intensive and time-consuming~\cite{Zhang2019MemoryorientedDF}. Due to the high cost of pixel-level annotation, some promising works have explored other low-cost alternatives, including scribbles~\cite{Zhang2020WeaklySupervisedSO, Yu2020StructureConsistentWS} and image-level category labels~\cite{Zeng2019MultiSourceWS, Wang2017LearningTD, Li2018WeaklySS}. Among them, category-label-based methods only require category labels for training, and an overwhelming number of labels indicating the existence of object categories are already available (\eg, ImageNet~\cite{5206848}). Thus, in this paper, we focus on image-level category label based salient object detection (WSOD\footnote{For convenience, WSOD in this paper denotes methods based on image-level category labels.}). \begin{figure} \vspace{-1mm} \includegraphics[width=1.00\linewidth]{figure/introduction.pdf} \vspace{-4mm} \caption{ Different pseudo labels synthesized by different refinement algorithms on the class activation map (CAM), in which $Y_1$ and $Y_2$ represent pseudo labels from the pixel-wise~\cite{Araslanov2020SingleStageSS} and superpixel-wise~\cite{Wang2017LearningTD} refinement algorithms, respectively. } \label{introduction} \vspace{-2mm} \end{figure} Previous works on WSOD proposed various techniques such as global smooth pooling~\cite{Wang2017LearningTD}, multi-source supervision~\cite{Zeng2019MultiSourceWS}, and alternate optimization~\cite{Li2018WeaklySS} to pursue a single ``high-quality'' pseudo label for training their saliency networks. Though these works have achieved good performance, the generated single ``high-quality'' pseudo label usually exhibits prejudiced characteristics that depend on the adopted refinement algorithm, for example, incompleteness ($3^{rd}$ column in Figure {\color{red}\ref{introduction}}) and redundant noise ($4^{th}$ column in Figure {\color{red}\ref{introduction}}). Instead of pursuing a single ``high-quality'' pseudo label, we propose to utilize multiple pseudo labels to establish a more robust framework and avoid the negative impact of a single prejudiced label. To begin with, we adopt two different refinement algorithms, a pixel-wise one~\cite{Araslanov2020SingleStageSS} and a superpixel-wise one~\cite{Wang2017LearningTD}, to synthesize two different pseudo labels. Both algorithms utilize the abundant appearance information in RGB images to refine class activation maps (CAMs)~\cite{Zhou2016LearningDF}. The pixel-wise algorithm treats each individual pixel as a unit, takes its class activation score as a clue, and then infers the scores of its neighboring pixels, while the superpixel-wise algorithm takes superpixels as its operation units. As a result, the synthesized pseudo labels $Y_1$ (from the pixel-wise algorithm) and $Y_2$ (from the superpixel-wise algorithm) describe different characteristics. As shown in Figure {\color{red}\ref{introduction}}, $Y_1$ provides better detailed information but is usually trapped in incompleteness, while $Y_2$ can cover more complete objects but introduces additional noise.
These observations drive us to explore how to extract and integrate more comprehensive and robust saliency cues from multiple pseudo labels. The core insight of this work is to fully excavate the comprehensive saliency cues in multiple pseudo labels and avoid the prejudice of a single label. Specifically, for multiple pseudo labels, we 1) extract abundant and accurate saliency cues from multiple noisy labels, and 2) integrate these cues and propagate them to the saliency network. Concretely, our contributions are as follows: \begin{itemize} \hyphenpenalty=1000 \tolerance=10 \vspace{-2mm} \item We introduce a new framework to utilize multiple pseudo labels for WSOD, which employs the more comprehensive and robust saliency cues in multiple labels to avoid the negative impacts of a single label. \vspace{-2mm} \item We design a multi-filter directive network (denoted as MFNet), in which multiple directive filters and a multi-guidance loss are proposed to extract and integrate multiple saliency cues from multiple pseudo labels, respectively. \vspace{-2mm} \item Extensive experiments on five benchmark datasets over four metrics demonstrate the superiority of our method as well as of the multiple pseudo labels. \vspace{-2mm} \item We also extend the proposed framework to the existing method MSW~\cite{Zeng2019MultiSourceWS} and prove its effectiveness by achieving a 9.1\% improvement on the $F_{\beta}^{\omega}$ metric on the ECSSD dataset. \end{itemize} \vspace{-2mm} \section{Related Work} \subsection{Salient Object Detection} Early research on salient object detection (SOD) mainly leveraged handcrafted features to segment the most salient objects, such as the boundary prior~\cite{2013Saliency}, the center prior~\cite{Jiang2013SubmodularSR}, and others~\cite{Zhu2014SaliencyOF, 2014Salient}. Recently, CNN-based approaches have yielded a qualitative leap in performance due to the powerful ability of CNNs to extract informative features. Various effective architectures~\cite{7780449, Zhang2017AmuletAM, Wang2017ASR, Luo2017NonlocalDF, Liu2018PiCANetLP} have been proposed to enhance the performance of saliency networks. Among them, Liu \etal~\cite{7780449} propose a deep hierarchical saliency network, which can simultaneously learn powerful feature representations, informative saliency cues, and their optimal combination mechanisms from a global view. With the development of attention mechanisms, some promising works~\cite{Wu2019CascadedPD, Piao2019DepthInducedMR, Zhang2020SelectSA} introduce various attention modules to improve saliency networks; among them, Wu \etal~\cite{Wu2019CascadedPD} introduce a cascaded partial decoder framework that utilizes relatively precise attention maps to refine high-level features and improve performance. In recent years, boundary information has attracted much attention, and many works~\cite{Liu2019ASP, Su2019SelectivityOI, Zhao2019EGNetEG} explore the boundaries of salient objects to produce more detailed predictions. In~\cite{Su2019SelectivityOI}, Su \etal propose an effective Cross Refinement Unit (CRU), which bidirectionally passes messages between the two tasks of salient object detection and edge detection. Although these methods have achieved promising improvements, large amounts of pixel-level annotations, which are prohibitively expensive, are needed to train their models.
\subsection{Weakly Supervised Salient Object Detection} To achieve a trade-off between labeling efficiency and model performance, weakly supervised salient object detection using low-cost labels has been presented. Wang \etal~\cite{Wang2017LearningTD} first propose to perform salient object detection with image-level category labels and design a foreground inference network (FIN) to infer saliency maps. A global smooth pooling (GSP) is proposed to generate more integrated CAMs from image-level labels, and a new CRF algorithm which provides more accurate refinement is also proposed, giving rise to more effective network training. In~\cite{Li2018WeaklySS}, Li \etal design a generic alternate optimization framework to progressively refine and update the initial saliency seeds from a traditional SOD method MB+~\cite{Zhang2015MinimumBS}; a conditional random field based graphical model is also introduced to cleanse the noisy pseudo labels. \begin{figure*}[!t] \vspace{-1mm} \includegraphics[width=1.00\linewidth]{figure/framework2.pdf} \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{0.2cm} \vspace{-4mm} \caption{ Overall framework of our proposed method. The class activation maps (CAMs)~\cite{Zhou2016LearningDF} are inferred by a trained image classification network, and multiple pseudo labels are synthesized based on it. The proposed MFNet includes two directive filters and a normal encoder-decoder saliency network. The architecture of the saliency decoder and directive filter is illustrated on the right, in which the three inputs of the saliency decoder represent the features from the $3^{rd}$, $4^{th}$ and $5^{th}$ convolution block of the shared encoder. } \label{framework} \vspace{-3mm} \end{figure*} Different from the previous works, Zeng \etal~\cite{Zeng2019MultiSourceWS} propose that the saliency cues in category labels can be supplemented by caption labels, and design a multi-source weak supervision framework to integrate the information from various supervisions. Besides, an attention transfer loss is proposed to transmit supervision signals between networks, and an attention coherence loss is presented to encourage the networks to detect the generally salient regions. Owing to the abundant saliency information in multi-source weak supervisions, a promising improvement is achieved in~\cite{Zeng2019MultiSourceWS}. However, the multi-source framework only integrates the abundant information to generate a single pseudo label, so that the multi-source information cannot be explicitly propagated to the saliency network. \textbf{In conclusion}, the previous works above aim to pursue a single "high-quality" pseudo label and then develop saliency networks on it. Different from the aforementioned works, we hold that the saliency cues in image-level category labels can be excavated in different ways to synthesize multiple pseudo labels. A saliency network developed on these multiple labels can be more robust and avoid the prejudiced effects of single labels. \vspace{-3mm} \section{The Proposed Method} To excavate the comprehensive saliency cues in multiple pseudo labels, we propose a multiple pseudo label framework. As is illustrated in Figure {\color{red}\ref{framework}}, the proposed framework can be divided into two parts: \textbf{1)} Synthesizing multiple pixel-level pseudo labels on training images given an existing image-level classification dataset. \textbf{2)} Developing the proposed multi-filter directive network (\textbf{MFNet}) with the generated multiple labels.
In this section, we will briefly introduce the first part and then give a detailed description of the second one. \subsection{Synthesizing Multiple Pseudo Labels} Based on an image classification network, class activation maps (CAMs)~\cite{Zhou2016LearningDF} build a bridge from image-level category labels to pixel-level pseudo labels, and play a vital role in weakly supervised segmentation tasks. Similar to~\cite{Zeng2019MultiSourceWS, Wang2017LearningTD}, we adopt the ImageNet dataset~\cite{5206848} as the training set in this part for the sake of fairness. For an image classification network, we replace the fully connected layers with a global average pooling (GAP)~\cite{Lin2014NetworkIN} layer and add an extra convolution layer. The GAP layer encourages the classification network to identify the more complete extent of the object. The classification scores $S$ are computed by: \vspace{-4mm} \begin{equation} \begin{split} S = conv(GAP({F_5})), \end{split} \end{equation} \vspace{-5mm} \noindent where ${conv(\cdot)}$ represents the new-added convolution layer, and ${F_5}$ represents the features from the last convolution block of the classification network. The classification loss ${L_c}$ in this training stage is as follows: \vspace{-4mm} \begin{equation} \begin{split} {L_c}(S,{Y_c}) = & - {1 \over C}*\sum\limits_{i = 1}^C \Big[ {y_{ci}}*\log ({{(1 + \exp ( - {s_i}))}^{ - 1}}) \\ & + (1 - {y_{ci}})*\log ({{\exp ( - {s_i})} \over {1 + \exp ( - {s_i})}}) \Big], \end{split} \end{equation} \vspace{-4mm} \noindent where $C$ indicates the total number of categories, and $y_{ci}$ and ${s_i}$ represent the elements of the category label $Y_c$ and the computed classification scores $S$, respectively. After the training stage of the classification network is completed, we fix the learned parameters and perform inference on the RGB images of the DUTS-Train dataset~\cite{Wang2017LearningTD} to generate class activation maps (CAMs) $M$ as follows: \vspace{-4mm} \begin{equation} \begin{split} M = \sum\limits_{i = 1}^C {\;norm(relu{{(conv({F_5})}_i}))} *{s_i}, \end{split} \end{equation} \vspace{-3mm} \noindent where ${conv(\cdot)}$ is the aforementioned new-added convolutional layer, $relu(\cdot)$ indicates the relu activation function, and $norm(\cdot)$ represents the normalization function that normalizes the elements in CAMs to [0, 1]. As is mentioned above, we adopt both pixel-wise and superpixel-wise algorithms on CAMs for refinement. The pixel-wise refinement~\cite{Araslanov2020SingleStageSS} takes the class activation score of each individual pixel in the CAMs as a seed, and infers the scores of its neighbor pixels using the RGB appearance information. On the other hand, the superpixel-wise refinement first clusters pixels in an RGB image into superpixels using the clustering algorithm SLIC~\cite{Achanta2012SLICSC} and then performs a similar refinement on superpixels. As in the previous works~\cite{Zeng2019MultiSourceWS, Wang2017LearningTD, Li2018WeaklySS}, we also adopt CRF~\cite{Krhenbhl2011EfficientI} for a further refinement, which is widely accepted in weakly supervised methods. \subsection{The Multi-filter Directive Network } As is mentioned above, pseudo labels synthesized from different refinements describe different characteristics, and a saliency network developed on a single label inevitably suffers from its prejudiced characteristics. Therefore, we aim to explore how to effectively leverage the abundant and comprehensive saliency cues in multiple pseudo labels.
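For completeness, the pseudo label synthesis described in the previous subsection can be summarized by the following minimal PyTorch-style sketch. The backbone, its channel dimension, the number of categories and the score normalization are illustrative assumptions rather than our exact configuration, and the superpixel function is only a simplified stand-in for the refinement algorithm of~\cite{Wang2017LearningTD}.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from skimage.segmentation import slic

class CAMHead(nn.Module):
    # Classification head: GAP followed by an extra convolution layer,
    # as in S = conv(GAP(F_5)).  Channel/class numbers are illustrative.
    def __init__(self, backbone, channels=1664, num_classes=200):
        super().__init__()
        self.backbone = backbone                 # returns F_5 (B x C x H x W)
        self.conv = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, x):
        f5 = self.backbone(x)
        scores = self.conv(F.adaptive_avg_pool2d(f5, 1)).flatten(1)   # S
        return scores, f5

    def cams(self, x):
        scores, f5 = self.forward(x)
        act = F.relu(self.conv(f5))                                   # relu(conv(F_5))
        act = act / (act.amax(dim=(2, 3), keepdim=True) + 1e-8)       # norm(.) to [0, 1]
        weights = torch.sigmoid(scores)[..., None, None]              # s_i (sigmoid-normalized here)
        return (act * weights).sum(dim=1)                             # CAM M

def superpixel_refine(cam, rgb, n_segments=300):
    # Simplified superpixel-wise refinement: average the CAM inside each SLIC superpixel.
    segments = slic(rgb, n_segments=n_segments, start_label=0)
    refined = np.zeros_like(cam)
    for s in np.unique(segments):
        mask = segments == s
        refined[mask] = cam[mask].mean()
    return refined
\end{verbatim}

In practice, the refined maps are additionally processed by the CRF step mentioned above before being used as pseudo labels.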
A straightforward method to utilize multiple cues is designing a dual decoder architecture as shown in (b) in Figure {\color{red}\ref{ablation_set}}, which introduces two decoders to learn saliency cues from two different pseudo labels respectively. Meanwhile, a mutual guidance loss is adopted to integrate multiple saliency cues. We take the averaged prediction of the dual decoders as the final saliency prediction in this case. However, in this straightforward method, noisy information existing in the prejudiced pseudo labels may propagate to the saliency network directly, and lead to negative impacts. To solve the above problems, we propose a multi-filter directive network (MFNet) to effectively integrate the filtered cues from multiple pseudo labels. To be specific, we first design a directive filter (DF) to extract and filter the more accurate saliency cues from pseudo labels. The architecture of the proposed directive filters is illustrated in Figure {\color{red}\ref{framework}}. It takes the features from the shared encoder as input, and extracts the saliency cues from pseudo labels through several convolution layers. As is pointed out in~\cite{Fan2020EmployingMF, Araslanov2020SingleStageSS, Veksler2020RegularizedLF, Fan2020LearningIO}, convolutional neural networks possess good robustness to noisy labels. Therefore, the inaccurate saliency cues in pseudo labels can be gradually corrected by the convolution layers in the DF. As shown in Figure {\color{red}\ref{feature}}, the extra noise and incomplete defects in pseudo labels are progressively corrected, and more concrete saliency cues are extracted through the convolutions. \begin{figure}[!t] \vspace{-0mm} \includegraphics[width=1.00\linewidth]{figure/feature.pdf} \vspace{-4mm} \caption{ Visualization of the directive filter $F1$. (a) and (b) represent the feature maps from the $2^{nd}$ and $4^{th}$ convolution layers of the directive filter, and (c) indicates the predictions $P_1$ of $F1$.} \label{feature} \vspace{-3mm} \end{figure} To effectively utilize and integrate the comprehensive saliency cues from multiple pseudo labels, we design the proposed MFNet as is illustrated in Figure {\color{red}\ref{framework}}. Firstly, we introduce two directive filters $F1$ and $F2$ to filter and extract accurate saliency cues from pseudo labels $Y_1$ and $Y_2$ respectively. To attach equal importance to different pseudo labels, we use the same settings for both directive filters. The corresponding training losses $L_1$ and $L_2$ for $F1$ and $F2$ are computed as: \vspace{-5mm} \begin{equation} \begin{split} {L_k}({P_k},{Y_k}) = & - \sum\limits_{i} \big[ {y_{ki}}*\log {p_{ki}} + (1 - {y_{ki}})* \\ & \log (1 - {p_{ki}}) \big],\quad k = 1,2, \end{split} \end{equation} \vspace{-3mm} \noindent where $p_{ki}$ and $y_{ki}$ represent the elements of the directive filter prediction $P_k$ and its pseudo label $Y_k$. Secondly, we simultaneously propagate these filtered accurate cues to the saliency decoder through a multi-guidance loss $L_{mg}$, which can be described as follows: \vspace{-3mm} \begin{equation} \begin{split} {L_{\rm{mg}}}({P_s},{Y_s}) = & - \sum\limits_i \big[ {y_i}*\log {p_{si}} + (1 - {y_i})* \\ & \log (1 - {p_{si}}) \big], \end{split} \end{equation} \vspace{-3mm} \noindent where $p_{si}$ denotes the elements of the saliency decoder prediction $P_s$. $Y_s$ is the average prediction of the directive filters after the aforementioned pixel-wise refinement~\cite{Araslanov2020SingleStageSS}, and $y_i$ denotes its elements.
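To make the directive filters and the losses above concrete, a minimal PyTorch-style sketch is given below. The number of convolution layers, the channel widths, and the mean-reduced form of the binary cross-entropy are assumptions for illustration, not the exact settings of MFNet.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectiveFilter(nn.Module):
    # A small stack of convolutions on the shared encoder features,
    # ending in a one-channel saliency prediction P_k.
    def __init__(self, in_channels=1664, mid_channels=64, num_layers=4):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(num_layers):
            layers += [nn.Conv2d(c, mid_channels, 3, padding=1), nn.ReLU(inplace=True)]
            c = mid_channels
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv2d(mid_channels, 1, kernel_size=1)

    def forward(self, feat):
        return torch.sigmoid(self.head(self.body(feat)))

def bce(pred, target):
    # Binary cross-entropy used for both L_k and L_mg (written as a mean here).
    return F.binary_cross_entropy(pred, target)

# One schematic training step; names are illustrative.
# feat = shared_encoder(image)
# p1, p2 = df1(feat), df2(feat)               # directive filter predictions
# loss_df = bce(p1, y1) + bce(p2, y2)         # L_1 + L_2 against pseudo labels Y_1, Y_2
# y_s = pixelwise_refine((p1 + p2) / 2)       # averaged DF prediction, then pixel-wise refinement
# loss_mg = bce(saliency_decoder(feat), y_s)  # multi-guidance loss L_mg
\end{verbatim}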
In addition, we adopt a self-supervision strategy between two directive filters, which aims to encourage two filters to extract similar saliency cues from different pseudo labels. The insight is that the common saliency cues learned from different pseudo labels describe more accurate and authentic saliency information. The loss $L_{ss}$ of this self-supervision term is defined as follows: \vspace{-2mm} \begin{equation} \begin{split} {L_{ss}}({P_1},{P_2}) = - {\sum\limits_i {({p_{1i}} - {p_{2i}})} ^2}. \end{split} \end{equation} \vspace{-4mm} The final loss function $L$ for training the proposed MFNet is given by the combination of the above loss functions: \vspace{-6mm} \begin{equation} \begin{split} L = {L_1} + {L_2} + {L_{\rm{mg}}} + \delta {L_{ss}}, \end{split} \end{equation} \vspace{-4mm} \noindent where $\delta$ is a hyper-parameter which controls the weight of the self-supervision term. \begin{table*}[!t] \centering \small \renewcommand\arraystretch{1.3} \setlength{\tabcolsep}{0.25mm} \vspace{-1mm} \begin{threeparttable} \caption{ Quantitative comparisons of E-measure ($E_s$), S-measure ($S_{\alpha}$), F-measure ($F_{\beta}$) and MAE ($M$) metrics over five benchmark datasets. The supervision type (\textbf{Sup.}) I indicates using category annotations only, and I\&C represents developing WSOD on both category and caption annotations simultaneously. - means unavailable results. The best results are marked in \textbf{boldface}.} \label{quantitative} \begin{tabular}{ccp{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}} \toprule \multicolumn{1}{c}{\multirow{2.5}{*}{Methods }}& \multicolumn{1}{c}{\multirow{2.5}{*}{ Sup.}}& \multicolumn{4}{c}{ECSSD}& \multicolumn{4}{c}{DUTS-Test}& \multicolumn{4}{c}{HKU-IS}& \multicolumn{4}{c}{DUT-OMRON}& \multicolumn{4}{c}{PASCAL-S}\cr \cmidrule(lr){3-6} \cmidrule(lr){7-10}\cmidrule(lr){11-14}\cmidrule(lr){15-18}\cmidrule(lr){19-22} &{}&$S_{\alpha}$ &$E_s$ &$F_{\beta}$ &$M$ &$S_{\alpha}$ &$E_s$ &$F_{\beta}$ &$M$ &$S_{\alpha}$ &$E_s$ &$F_{\beta}$ &$M$ &$S_{\alpha}$ &$E_s$ &$F_{\beta}$ &$M$ &$S_{\alpha}$ &$E_s$ &$F_{\beta}$ &$M$\cr \midrule \multicolumn{1}{c}{\multirow{1}{*}{WSS~\cite{Wang2017LearningTD}}} &I &.811 &.869 &.823 &.104 &.748 &.795 &.654 &.100 &.822 &.896 &.821 &.079 &.725 &.768 &.603 &.109 &.744 &.791 &.715 &.139 \cr \multicolumn{1}{c}{\multirow{1}{*}{ASMO~\cite{Li2018WeaklySS}}} &I &.802 &.853 &.797 &.110 &.697 &.772 &.614 &.116 &- &- &- &- &.752 &.776 &.622 &.101 &.717 &.772 &.693 &.149 \cr \multicolumn{1}{c}{\multirow{1}{*}{MSW}~\cite{Zeng2019MultiSourceWS}} & I\&C &.827 &.884 &.840 &.096 &.759 &.814 &.684 &.091 &.818 &.895 &.814 &.084 &\bf{.756} &.763 &.609 &.109 &.768 &.790 &.713 &.133 \cr \midrule \multicolumn{1}{c}{\multirow{1}{*}{\textbf{MFNet}}} &I &\bf{.834} &\bf{.885} &\bf{.854} &\bf{.084} &\bf{.775} &\bf{.839} &\bf{.710} &\bf{.076} &\bf{.846} &\bf{.921} &\bf{.851} &\bf{.059} &.742 &\bf{.803} &\bf{.646} &\bf{.087} &\bf{.770} &\bf{.817} &\bf{.751} &\bf{.115} \cr \bottomrule \end{tabular} \end{threeparttable} \vspace{-2mm} \end{table*} \begin{figure*}[!t] \vspace{-2mm} \includegraphics[width=1.00\linewidth]{figure/comparison.pdf} \vspace{-4mm} \caption{ Visual comparisons 
of our method with existing image-level annotation based WSOD methods in some challenging scenes.} \label{qualitative} \vspace{-4mm} \end{figure*} The architecture of the saliency network is illustrated in Figure {\color{red}\ref{framework}}. We adopt a simple encoder-decoder framework, which usually serves as a baseline network in SOD. It takes three features from the $3^{rd}$, $4^{th}$ and $5^{th}$ convolution blocks of the encoder as input, and performs multi-scale bottom-up aggregation~\cite{Zhang2017AmuletAM}. The prediction $P_s$ of the saliency decoder is our final prediction. During testing, we only retain the saliency network and discard the multiple directive filters for acceleration. \section{Experiments} \subsection{Implementation Details} We implement our method with the PyTorch toolbox on an RTX 2080Ti GPU. The shared encoder in our method is designed based on DenseNet-169~\cite{Huang2017DenselyCC}, which is the same as in the latest work MSW~\cite{Zeng2019MultiSourceWS}. During the training phase of the classification network, we adopt the Adam optimization algorithm~\cite{2014Adam} and set the learning rate and the maximum iteration to 1e-4 and 20000, respectively. In the inference phase, we generate CAMs using a multi-inference strategy following the settings of~\cite{Ahn2018LearningPS}. To be specific, the input images are flipped and then resized to four scales. The final maps are computed as the average of the corresponding eight CAMs. For the saliency network, we only take the RGB images of the DUTS-Train dataset~\cite{Wang2017LearningTD} and the generated pseudo labels for training. In this stage, we also adopt the Adam optimization algorithm and set the learning rate and the maximum iteration to 3e-6 and 26000, respectively. All the training images are resized to $256\times 256$ and the parameters of the new-added layers are initialized by the Xavier algorithm~\cite{pmlr-v9-glorot10a}. The source code will be released upon publication. \subsection{Datasets and Evaluation Metrics} Following the previous works~\cite{Wang2017LearningTD, Zeng2019MultiSourceWS}, we adopt ImageNet~\cite{5206848} and the DUTS-Train dataset~\cite{Wang2017LearningTD} as our training sets for the classification network and the proposed MFNet respectively for the sake of fairness. We test our method on five widely-adopted datasets: ECSSD~\cite{Yan2013HierarchicalSD}, DUTS-Test~\cite{Wang2017LearningTD}, HKU-IS~\cite{Li2015VisualSB}, DUT-OMRON~\cite{2013Saliency} and PASCAL-S~\cite{Li2014TheSO}. \textbf{ECSSD} contains 1000 images of different sizes with obvious salient objects. \textbf{DUTS-Test} includes 5019 samples of various challenging scenes. \textbf{HKU-IS} consists of 4447 images with many multiple-object scenes. \textbf{DUT-OMRON} contains 5168 images with complex structures and contours. \textbf{PASCAL-S} includes 850 samples that are annotated by 8 subjects in eye-tracking tests. \begin{figure*} \vspace{-3mm} \includegraphics[width=1.00\linewidth]{figure/ablation_setting.pdf} \vspace{-4mm} \caption{ The frameworks of different settings in ablation studies. (a) indicates the single pseudo label cases (1) to (2) and (5) to (7), (b) refers to the dual-decoder framework in case (8), and (c) indicates the single directive filter (SDF) cases (3) and (4). (d) is our proposed MFNet using multiple directive filters (MDF), which corresponds to case (9).
} \label{ablation_set} \vspace{-1mm} \end{figure*} For a comprehensive comparison, we adopt four well-accepted metrics, including S-measure~\cite{Fan2017StructureMeasureAN}, E-measure~\cite{ijcai2018-97}, F-measure~\cite{5206596} as well as Mean Absolute Error (MAE), to evaluate our method. Specifically, S-measure focuses on evaluating the structural information of saliency maps and evaluates region-aware and object-aware structural similarity between saliency maps and ground truths. E-measure attaches more importance to the unification of global and local information. Besides, F-measure is a harmonic mean of average precision and average recall, and MAE evaluates the average difference between saliency maps and ground truths. \subsection{Comparison with State-of-the-arts} We compare our approach denoted as \textbf{MFNet} with the existing image-level category label based WSOD methods: WSS~\cite{Wang2017LearningTD}, ASMO~\cite{Li2018WeaklySS} and MSW~\cite{Zeng2019MultiSourceWS}. The quantitative and qualitative results are illustrated in Table {\color{red}\ref{quantitative}} and Figure {\color{red}\ref{qualitative}}. For a fair comparison, we obtain the saliency maps of these methods from the authors and conduct the same evaluation on all the methods. \begin{spacing}{1.3} \end{spacing} \noindent \textbf{Quantitative evaluation.} The quantitative results on five datasets are shown in Table {\color{red}\ref{quantitative}}. It can be seen that our method outperforms all the previous works on almost all metrics except for the S-measure on the DUT-OMRON dataset. It is worth noting that the F-measure of our method is significantly better than the second best results on PASCAL-S (0.751 against 0.713), HKU-IS (0.851 against 0.814) and DUTS-Test (0.710 against 0.684). The improvements on the MAE metric further prove the superiority of our method. In particular, a 29.7\% improvement on the HKU-IS dataset and a 20.2\% improvement on the DUT-OMRON dataset are achieved. \textbf{Moreover}, from a deeper perspective, the previous work ASMO~\cite{Li2018WeaklySS} achieves better performance on the challenging DUT-OMRON dataset while WSS~\cite{Wang2017LearningTD} and MSW~\cite{Zeng2019MultiSourceWS} show more superiority on the other datasets. This is because the former uses a traditional SOD method MB+~\cite{Zhang2015MinimumBS} to perform refinement and generate pseudo labels, while the latter leverages the aforementioned superpixel-wise refinement. It demonstrates that the prejudiced single pseudo labels from different refinement algorithms do lead to different generalization abilities of WSOD methods. Based on these observations, we argue that exploring multiple pseudo labels is necessary and the results in Table {\color{red}\ref{quantitative}} also prove its effectiveness. \begin{spacing}{1.3} \end{spacing} \noindent \textbf{Qualitative evaluation.} Figure {\color{red}\ref{qualitative}} shows the qualitative comparisons of our MFNet with existing WSOD methods in some challenging scenes. It can be seen that our method could segment more accurate and integrated objects than the other methods. For example, in some similar foreground and background scenes in the $1^{st}$, $3^{rd}$ and $4^{th}$ rows on the left in Figure {\color{red}\ref{qualitative}}, our method discriminates the salient objects more accurately from the similar background. When the background becomes complex and noisy, such as in the $2^{nd}$ and $3^{rd}$ rows on the right, our method could also perform better than the others.
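As a reference for how the simpler metrics used above are computed, the following is a short sketch of MAE and the F-measure. The adaptive threshold (twice the mean saliency value) and $\beta^2 = 0.3$ follow common SOD practice and are stated here as assumptions rather than the exact evaluation code we used.

\begin{verbatim}
import numpy as np

def mae(pred, gt):
    # Mean absolute error between a saliency map and the ground truth, both in [0, 1].
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def f_measure(pred, gt, beta2=0.3):
    # F-measure with an adaptive threshold of twice the mean saliency value.
    thr = min(2.0 * pred.mean(), 1.0)
    binary = pred >= thr
    gt = gt > 0.5
    tp = np.logical_and(binary, gt).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    return (1.0 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
\end{verbatim}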
\begin{table*}[!t] \renewcommand\arraystretch{1.1} \small \centering \setlength{\tabcolsep}{0.95mm} \vspace{0mm} \begin{threeparttable} \caption{ Quantitative results of ablation studies, \textbf{Type} means the number of used pseudo labels and \textbf{Pseudo label} indicates different pseudo labels Y$_1$ and Y$_2$. \textbf{DF} represents our proposed directive filter (DF). In \textbf{Case}: (1) and (2) indicate the case which trains the saliency networks with $Y_1$ and $Y_2$ respectively. Based on (1) and (2), case (3) and (4) adopt the proposed DF. Cases (5) to (7) first integrate multiple labels through average (Avg($\cdot$)), intersection ($\cap$) and union ($\cup$) respectively, and then train the saliency networks on these integrated labels. Case (8) adopts a straightforward dual-decoder framework and case (9) is our final MFNet.} \label{ablation} \begin{tabular}{ccp{1.0cm}<{\centering}p{1.7cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}} \toprule \multicolumn{1}{c}{\multirow{2}{*}{\normalsize Type }}& \multicolumn{1}{c}{\multirow{2}{*}{\normalsize Case }}& \multicolumn{1}{c}{\multirow{2}{*}{\normalsize \,DF }}& \multicolumn{1}{c}{\multirow{1.5}{*}{\normalsize Pseudo}}& \multicolumn{2}{c}{ECSSD}& \multicolumn{2}{c}{DUTS-Test}& \multicolumn{2}{c}{HKU-IS} & \multicolumn{2}{c}{DUT-OMRON}& \multicolumn{2}{c}{PASCAL-S}\cr \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \cmidrule(lr){13-14} &{ } &{ } &{\normalsize label} &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ \cr \midrule \multicolumn{1}{c}{\multirow{4}{*}{Single}} & (1) & & Y$_1$ & 0.818 & 0.113 & 0.607 & 0.099 & 0.824 & 0.080 & 0.607 & 0.099 & 0.724 & 0.134 \cr & (2) & & Y$_2$ & 0.824 & 0.090 & 0.639 & 0.090 & 0.801 & 0.067 & 0.576 & 0.108 & 0.717 & 0.122 \cr & (3) & \checkmark & Y$_1$ & 0.835 & 0.095 & 0.698 & 0.082 & 0.840 & 0.066 & 0.641 & 0.089 & 0.734 & 0.125 \cr & (4) & \checkmark & Y$_2$ & 0.847 & 0.085 & 0.684 & 0.084 & 0.836 & 0.062 & 0.602 & 0.103 & 0.743 & \bf0.115 \cr \midrule \multicolumn{1}{c}{\multirow{5}{*}{Multiple}} & (5) & & \small{Avg(Y$_1$, Y$_2$)} & 0.826 & 0.087 & 0.638 & 0.088 & 0.800 & 0.066 & 0.576 & 0.106 & 0.716 & 0.120 \cr & (6) & & Y$_1$ $\cap$ Y$_2$ & 0.831 & 0.086 & 0.649 & 0.085 & 0.810 & 0.064 & 0.595 & 0.098 & 0.723 & 0.118 \cr & (7) & & Y$_1$ $\cup$ Y$_2$ & 0.823 & 0.091 & 0.637 & 0.093 & 0.800 & 0.070 & 0.637 & 0.093 & 0.714 & 0.124 \cr & (8) & & Y$_1$ \& Y$_2$ & 0.843 & 0.087 & 0.670 & 0.083 & 0.831 & 0.064 & 0.607 & 0.093 & 0.735 & 0.118 \cr & (9) & \checkmark & Y$_1$ \& Y$_2$ & \bf0.854 & \bf0.084 & \bf0.710 & \bf0.076 & \bf0.851 & \bf0.059 & \bf0.646 & \bf0.087 & \bf0.751 & \bf0.115 \cr \bottomrule \end{tabular} \end{threeparttable} \vspace{-3mm} \end{table*} \subsection{Ablation Studies} We design various cases in ablation studies to prove the superiority of our method comprehensively. For a clearer description, the different frameworks of each case in Table {\color{red}\ref{ablation}} are shown in Figure {\color{red}\ref{ablation_set}} \begin{spacing}{1.3} \end{spacing} \noindent \textbf{Effectiveness of Directive Filter. 
} We propose a directive filter (DF) to extract and filter more accurate saliency cues from noisy pseudo labels. It can be applied to both the single pseudo label setting (SDF) and the multiple pseudo label setting (MDF) according to the number of pseudo labels. \textbf{On the one hand}, SDF can encourage promising improvements on all datasets as shown in cases (1) to (4) in Table {\color{red}\ref{ablation}}, especially on the two challenging datasets DUTS-Test and DUT-OMRON. It indicates that when pseudo labels tend to be more inaccurate and noisy in challenging scenes, normal saliency networks inevitably learn more negative information from their direct supervision. In these scenes, the proposed SDF can filter and extract accurate saliency cues and thus encourages a more powerful saliency decoder. \textbf{On the other hand}, MDF can effectively integrate the multiple saliency cues in various pseudo labels. To prove its superiority, we design four different cases to fuse multiple saliency cues, including three simple ways: average (Avg($\cdot$)), intersection ($\cap$) and union ($\cup$), as well as the aforementioned straightforward way: the dual decoder. The results in cases (5) to (7) show that these three simple ways are not adequate to leverage the multiple information. The better performance of case (8) indicates that a more proper approach to leveraging multiple labels can achieve a promising improvement. Case (9) is our final MFNet with MDF, shown in (d) in Figure {\color{red}\ref{ablation_set}}. It can be seen that MDF contributes to outperforming all the other multiple settings by a large margin, especially on the two challenging datasets DUTS-Test and DUT-OMRON. These observations support 1) the effectiveness of our proposed DF in extracting accurate saliency cues and 2) the superiority of the proposed MDF in integrating multiple saliency cues. \textbf{Moreover}, as is illustrated in Table {\color{red}\ref{guidance_main}}, the saliency decoder achieves an obvious improvement compared to its directive filters (DFs). It proves that the filtered saliency cues from the DFs are accurate enough to encourage better results with the proposed multi-guidance loss. \begin{table}[!t] \renewcommand\arraystretch{1.2} \small \centering \setlength{\tabcolsep}{1.02mm} \vspace{1mm} \begin{threeparttable} \caption{ Comparisons on the results of the saliency decoder and its two directive filters.
Supervised by more accurate saliency cues from directive filters, the final saliency decoder achieves promising improvements.} \label{guidance_main} \begin{tabular}{ccp{1cm}<{\centering}p{0.9cm}<{\centering}p{0.9cm}<{\centering}p{0.9cm}<{\centering}p{0.9cm}<{\centering}p{0.9cm}<{\centering}p{0.9cm}<{\centering}p{0.9cm}<{\centering}} \toprule \multicolumn{1}{c}{ }& \multicolumn{1}{c}{\multirow{2}{*}{Results}}& \multicolumn{2}{c}{ECSSD}& \multicolumn{2}{c}{DUTS-Test}& \multicolumn{2}{c}{HKU-IS} \cr \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} & {} &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ \cr \midrule & ${P_1}$ & 0.842 & 0.089 & 0.689 & 0.079 & 0.836 & 0.063 \cr & ${P_2}$ & 0.844 & 0.088 & 0.686 & 0.080 & 0.836 & 0.063 \cr & Final ${P_s}$ & \bf0.854 & \bf0.084 & \bf0.710 & \bf0.076 & \bf0.851 & \bf0.059 \cr \bottomrule \end{tabular} \end{threeparttable} \vspace{-5mm} \end{table} \begin{figure} \vspace{1mm} \includegraphics[width=1.00\linewidth]{figure/ablation_visual.pdf} \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{0.2cm} \vspace{-4mm} \caption{ Visual analysis of the effectiveness of multiple pseudo labels. Case(3), (4) and (9) represent the results of cases (3), (4) and (9), respectively. It can be seen that multiple labels encourage more generalized and accurate results compared to a single label. } \label{ablation_visual} \vspace{-5mm} \end{figure} \begin{spacing}{1.3} \end{spacing} \noindent \textbf{Effectiveness of multiple pseudo labels. } We introduce a multiple-pseudo-label WSOD framework, which aims to integrate multiple saliency cues to avoid the bias of each single pseudo label. \textbf{First of all}, as is mentioned above, cases (5) to (7) in Table {\color{red}\ref{ablation}} are the aforementioned simple ways to integrate multiple cues. Cases (5) and (7) lead to similar performance and do not obtain obvious improvements over the single pseudo label cases (1) and (2), while case (6) achieves good improvements especially on the MAE metric. These results indicate that the average and union operations introduce more redundant noise from both pseudo labels and lead to inferior performance. The reason why case (6) achieves better performance than cases (5) and (7) is that the intersection operation on two pseudo labels can help to generate high-confidence labels. \textbf{Moreover}, by adopting the dual-decoder framework in case (8), a remarkable improvement is achieved over the single pseudo label cases (1) and (2), which proves the superiority of multiple pseudo labels and inspires us for a further exploration. \textbf{Last but not least}, case (9) is our proposed MFNet; compared to cases (3) and (4), a promising improvement is achieved on all metrics, which further proves the superiority of multiple cues. Figure {\color{red}\ref{ablation_visual}} provides the visual results of the multiple DF and single DF settings. It proves that the more comprehensive saliency cues in multiple pseudo labels help to avoid the negative impacts from a single label and encourage more robust results. \begin{table}[!t] \renewcommand\arraystretch{1.2} \small \centering \setlength{\tabcolsep}{0.82mm} \vspace{-1mm} \begin{threeparttable} \caption{ The experiments on the effect of self-supervision and the setting of its hyper-parameter $\delta$.
The best and second-best results are marked in \textbf{boldface} and \underline{underline}, respectively.} \label{self-supervision} \begin{tabular}{cp{1.0cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}} \toprule \multicolumn{1}{c}{ }& \multicolumn{1}{c}{\multirow{2}{*}{\normalsize$\delta$} }& \multicolumn{2}{c}{ ECSSD}& \multicolumn{2}{c}{ DUTS-Test}& \multicolumn{2}{c}{ HKU-IS} \cr \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} & { } &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$M$$\downarrow$ \cr \midrule & {-2 } & {0.844} & \bf0.081 & 0.679 & 0.083 & 0.837 & \bf0.058 \cr & {0 } & \underline{0.851} & \underline{0.084} & 0.702 & \underline{0.077} & 0.848 & \bf0.058 \cr & {$\rightarrow$2$\leftarrow$} & \bf0.854 & \underline{0.084} & \bf0.710 & \bf0.076 & \bf0.851 & \underline{0.059} \cr & {4 } & {0.848} & 0.089 & \underline{0.706} & 0.078 & \underline{0.850} & 0.061 \cr \bottomrule \end{tabular} \end{threeparttable} \vspace{-3mm} \end{table} \subsection{Hyper-parameter Settings} \begin{spacing}{1.2} \end{spacing} We adopt a self-supervision strategy between multiple directive filters, aiming to force them to learn more authentic saliency cues from various pseudo labels. For a comprehensive comparison, we set the hyper-parameter $\delta$ from -2 to 4 in Table {\color{red}\ref{self-supervision}} to discuss the effectiveness of the self-supervision strategy as well as the hyper-parameter $\delta$. To be specific, when the $\delta$ is set to -2, the directive filters are encouraged to learn different saliency cues from various pseudo labels. Setting $\delta$ to 0 means that we do not adopt the self-supervision strategy, and the last two rows in Table {\color{red}\ref{self-supervision}} indicate different hyper-parameters for the self-supervision strategy. It can be seen that encouraging multiple directive filters to learn similar cues does perform better than the other settings and the best performance is achieved when $\delta$ is set to 2. \subsection{Application} To further demonstrate the effectiveness of our proposed framework, we extend it to the latest WSOD methods MSW~\cite{Zeng2019MultiSourceWS}. To be specific, for the coarse maps generated from the multi-source weak supervisions, we also perform two different refinements as we do to synthesize different pseudo labels, and then adopt the proposed multiple-pseudo-label framework to extract and integrate the multiple saliency cues. The architecture of the saliency decoders follows the original setting in MSW for the fair comparison. Here, we add weighted F-measure $F_{\beta}^{\omega}$~\cite{Margolin2014HowTE} for a more comprehensive comparison. The results in Table {\color{red}\ref{application_tab}} illustrate that remarkable improvements are achieved especially on the $F_{\beta}^{\omega}$ and MAE metrics. It indicates that the proposed multiple pseudo label framework does adequately integrate saliency cues from multiple labels and help existing method to achieve better performance. The visual analysis in Figure {\color{red}\ref{application_fig}} also supports this observation, in which our framework helps MSW to predict more accurate and complete saliency maps even in challenging scenes. 
\textbf{Moreover}, it is worth noting that the proposed framework can not only be extended to other single pseudo label methods, but also flexible enough to integrate more other pseudo labels by just adding more directive filters when more pseudo labels can be obtained. \begin{table}[!t] \renewcommand\arraystretch{1.2} \small \centering \setlength{\tabcolsep}{0.45mm} \vspace{-1mm} \begin{threeparttable} \caption{ The experiments of applying our multiple-pseudo-label framework on the latest work MSW~\cite{Zeng2019MultiSourceWS}. } \label{application_tab} \begin{tabular}{ccp{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}} \toprule \multicolumn{1}{c}{ }& \multicolumn{1}{c}{\multirow{2}{*}{Settings }}& \multicolumn{3}{c}{ ECSSD}& \multicolumn{3}{c}{ HKU-IS} \cr \cmidrule(lr){3-5} \cmidrule(lr){6-8} & { } &$F_{\beta}$$\uparrow$ &$F_{\beta}^{\omega}$$\uparrow$ &$M$$\downarrow$ &$F_{\beta}$$\uparrow$ &$F_{\beta}^{\omega}$$\uparrow$ &$M$$\downarrow$ \cr \midrule & {MSW~\cite{Zeng2019MultiSourceWS} } & 0.840 & 0.716 & 0.096 & 0.814 & 0.685 & 0.084 \cr & {$+$ Ours } & \bf+0.016 & \bf+0.065 & \bf-0.019 & \bf+0.006 & \bf+0.058 & \bf-0.015 \cr \bottomrule \end{tabular} \end{threeparttable} \vspace{-1mm} \end{table} \begin{figure}[!t] \vspace{-1mm} \includegraphics[width=1.00\linewidth]{figure/application.pdf} \vspace{-4mm} \caption{ Visual analysis of applying our framework on the latest previous work MSW~\cite{Zeng2019MultiSourceWS}.} \label{application_fig} \vspace{-4mm} \end{figure} \section{Conclusion} In this paper, we propose to utilize multiple pseudo labels to avoid the negative impacts from the prejudiced single label. To this end, we introduce a new framework to explore more comprehensive and accurate saliency cues from multiple labels. To be specific, we design a multi-filter directive network (MFNet) which consists of an encoder-decoder saliency network as well as multiple directive filters. We first use multiple directive filters to extract and filter more accurate saliency cues from multiple labels, and then propagate these filtered cues to the saliency decoder simultaneously. We also adopt a self-supervision strategy to encourage similar guidance of different directive filters, and implicitly integrate multiple saliency cues with a multi-guidance loss. Comparisons with previous methods prove the superiority of the proposed method, and ablation studies also support the effectiveness of each component. \begin{spacing}{2.0} \end{spacing} \noindent \textbf{Acknowledgements.} This work was supported by the Science and Technology Innovation Foundation of Dalian (\#2019J12GX034), the National Natural Science Foundation of China (\#61976035), and the Fundamental Research Funds for the Central Universities (\#DUT20JC42). {\small \bibliographystyle{ieee_fullname}
\section*{Supplementary Materials} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{figure}{0} \renewcommand{\theequation}{S\arabic{equation}} \setcounter{equation}{0} \noindent \textbf{PDF file includes:} Supplementary Text Figs. S1 to S2 Table S1 References (1 to 7) \clearpage \paragraph*{Frequency conversion: monochromatic plane wave limit} Throughout the following derivation we will employ the cgs system of units. The Maxwell equations in the frequency domain used to determine the propagation of the nonlinearly generated electromagnetic waves in the LiF crystal while applying intense driving radiation fields are: \begin{linenomath*} \begin{align} \nabla\, \mathsf{x}\, \mathbf{E}_\omega\!\left(\mathbf{x}\right)& = i \frac{\omega}{c} \mathbf{B}_\omega\!\left(\mathbf{x}\right)& \nabla \mathbf{B}_\omega\!\left(\mathbf{x}\right) = 0 \label{a}\\ \nabla\, \mathsf{x}\, \mathbf{B}_\omega\!\left(\mathbf{x}\right)& = -i \frac{\omega}{c} \mathbf{D}_\omega\!\left(\mathbf{x}\right)& \nabla \mathbf{D}_\omega\!\left(\mathbf{x}\right) = 0 \label{b}. \end{align} \end{linenomath*} $\omega$ represents the frequency of the generated electromagnetic wave, $\mathbf{E}_\omega\!\left(\mathbf{x}\right)$ its electric field, $\mathbf{B}_\omega\!\left(\mathbf{x}\right)$ the corresponding magnetic induction and $\mathbf{D}_\omega\!\left(\mathbf{x}\right)$ the dielectric displacement. $c$ is the speed of light. The equations above assume the magnetic permeability of the nonlinear medium to be equal to $1$. The dielectric displacement is linked to the electric field strength via: \begin{linenomath*} \begin{equation} \mathbf{D}_\omega\!\left(\mathbf{x}\right) = \epsilon\!\left(\omega\right) \mathbf{E}_\omega\!\left(\mathbf{x}\right) + 4\pi \mathbf{P}_\omega\!\left(\mathbf{x}\right) \label{dielectric} \end{equation} \end{linenomath*} with $\epsilon\!\left(\omega\right)$ the linear dielectric constant of the medium at the frequency $\omega$ of the generated electromagnetic wave and $\mathbf{P}_\omega\!\left(\mathbf{x}\right)$ the nonlinear contribution to the polarization of the medium at that frequency. The polaritation wave, in our case, is driven by the propagation of the FEL and NIR laser fields in the LiF crystal. We assume $\epsilon\!\left(\omega\right)$ to be a scalar quantity since the LiF crystal is of cubic symmetry. Absorption of LiF at the frequency of the generated wave renders $\epsilon\!\left(\omega\right)$ a complex quantity with its imaginary part being positive. On the vacuum side from where the FEL and NIR laser beams enter the crystal the same Maxwell equations above apply with $\epsilon\!\left(\omega\right) = 1$ and $\mathbf{P}_\omega\!\left(\mathbf{x}\right) = 0$. The Maxwell equations together with relation \ref{dielectric} for the dielectric displacement can be combined into an inhomogeneous wave equation for the electric field $\mathbf{E}_\omega\!\left(\mathbf{x}\right)$: \begin{linenomath*} \begin{equation} \Delta \mathbf{E}_\omega + \epsilon\!\left(\omega\right) \left(\frac{\omega}{c}\right)^2 \mathbf{E}_\omega = -4\pi \left[ \left(\frac{\omega}{c}\right)^2 \mathbf{P} - \frac{1}{\epsilon\!\left(\omega\right)} \mathbf{k}\left(\mathbf{k} \mathbf{P} \right) \right] \exp{\left(i \mathbf{kx}\right)}, \label{waveeq} \end{equation} \end{linenomath*} with solutions subject to the additional restriction: \begin{linenomath*} \begin{equation} \nabla \mathbf{E}_\omega = - \frac{4\pi i}{\epsilon\!\left(\omega\right)} \mathbf{k} \mathbf{P}\exp{\left(i \mathbf{kx}\right)}. 
\label{divergence} \end{equation} \end{linenomath*} Relations \ref{waveeq} and \ref{divergence} assume $\mathbf{P}_\omega\!\left(\mathbf{x}\right)$ to be a plane wave $ \mathbf{P}_\omega\!\left(\mathbf{x}\right) = \mathbf{P} \exp \left(i\mathbf{kx}\right)$, implying the driving FEL and NIR laser fields are plane waves. The wave vector $\mathbf{k}$ of the polarization wave is fixed by the specific nonlinear conversion process and by the wave vectors of the driving laser fields in the medium. We further assume the polarization wave to propagate in the $(x,z)$-plane of a suitably chosen coordinate system with $\mathbf{k} = \left(k_x, 0, k_z\right)$ determined by the common plane of incidence of the FEL and NIR radiation on the LiF crystal. $\mathbf{P}$ represents the constant amplitude of the polarization wave. We do not account for any potential nonlinear polarization of the medium bound to the vacuum-material interface. The Maxwell equations \ref{a} and \ref{b} imply specific boundary conditions at the vacuum-medium interface (the $x,y$-plane). Namely, the magnetic induction has to be continuous across the boundary. Also the $x$- and $y$-components of the electric field (i.e. the components parallel to the interface) and the $z$-component of the dielectric displacement (i.e. the component perpendicular to the interface) have to be continuous across the interface. In the medium the generated electromagnetic wave propagates with the polarization wave $\mathbf{P}_\omega\!\left(\mathbf{x}\right)$ into the medium, i.e. towards $z = -\infty$. The generated wave on the vacuum side propagates towards $z = +\infty$. There is no wave on the vacuum side propagating towards the interface at $z = 0$ at the frequency $\omega$. Since the wave vector of the polarization wave $\mathbf{k} = \left(k_x, 0, k_z\right)$ has zero $y$-component so the wave vectors of the generated waves in vacuum and in the medium have a zero $y$-component. The electric field of the generated wave on the vacuum side can thus be represented by $\mathbf{E}_{r}\!\left(\mathbf{x}\right) = \mathbf{A}_r \exp\left(i \mathbf{k}_r\mathbf{x}\right)$ with $\mathbf{k}_r = \left(k_{rx}, 0, k_{rz}\right)$, transverse electric field amplitude $\mathbf{A}_r$ ($\mathbf{k}_r \mathbf{A}_r = 0$) and $k_{rz} > 0$. The dispersion relation $\mathbf{k}_r^2 = \left(\omega / c\right)^2$ ties the $x$- and $z$-components of the wave vector together. The inhomogeneous wave equation \ref{waveeq} in the medium for this geometry is solved with the Ansatz \begin{linenomath*} \begin{equation} \mathbf{E}_M\!\left(\mathbf{x}\right) = \tilde{\mathbf{E}}\left(z\right) \exp{(i \tilde{k}_x x)}. \end{equation} \end{linenomath*} It results in an inhomogeneous ordinary second order differential equation for the amplitude $\tilde{\mathbf{E}}\left(z\right)$ \begin{linenomath*} \begin{equation} \frac{\mathrm{d}^2 \tilde{\mathbf{E}}\left(z\right)}{\mathrm{d}z^2} + \tilde{k}_z^2 \tilde{\mathbf{E}}\left(z\right) = \frac{4\pi}{\epsilon\!\left(\omega\right)} \left[ \mathbf{k}\left(\mathbf{k} \mathbf{P} \right) - \epsilon\!\left(\omega\right) \left( \frac{\omega}{c}\right)^2 \mathbf{P} \right] \exp{\left(i k_z z\right)}. 
\label{ordinaryweq} \end{equation} \end{linenomath*} $\tilde{k}_z$ used in Eqn.~\ref{ordinaryweq} is set via the dispersion relation $\tilde{k}_z^2 = \epsilon\!\left(\omega\right) \left(\omega / c\right)^2 - \tilde{k}_x^2$ with the real and imaginary parts of $\tilde{k}_z$ chosen to be negative in order to make sure the generated electric field in the nonlinear medium propagates with the polarization wave towards $z = - \infty$ and force absorption in the medium. With these restrictions the general solution of Eq.~\ref{ordinaryweq} reads \begin{linenomath*} \begin{equation} \tilde{\mathbf{E}}\left( z \right) = \tilde{\mathbf{A}} \exp{(i \tilde{k}_z z)} + \frac{\mathbf{H}}{\tilde{k}_z^2 - k_z^2} \exp{\left(i k_z z\right)}, \label{mediumresult} \end{equation} \end{linenomath*} with \begin{linenomath*} \begin{equation} \mathbf{H} = \frac{4\pi}{\epsilon\!\left(\omega\right)} \left[ \mathbf{k}\left(\mathbf{k} \mathbf{P} \right) - \epsilon\!\left(\omega\right) \left( \frac{\omega}{c}\right)^2 \mathbf{P} \right]. \end{equation} \end{linenomath*} The amplitude vectors $\mathbf{A}_r$ of the electric field on the vacuum side (Eq.~\ref{reffield} of the main text) and $\tilde{\mathbf{A}}$ are still free constants. They are fixed by the boundary conditions at the vacuum-medium interface, by the transversality of the electric field in vacuum and by Eq.~\ref{divergence} in the medium. The boundary conditions can only be satisfied provided the still free wave vector components $\tilde{k}_x$ and $k_{rx}$ satisfy the condition $\tilde{k}_x = k_{rx} = k_x$, i.e. they have to be equal to the $x$-component of the wave vector of the nonlinear polarization wave. Via the dispersion relations on the vacuum side and in the medium then also the $z$-components $\tilde{k}_z$ and $k_{rz}$ are fixed subject to the constraints $k_{rz} > 0$ and the real and imaginary parts of $\tilde{k}_z$ have to be negative. Involving these constraints results in the explicit form for the electric field amplitude $\mathbf{A}_r$ on the vacuum side given in Eq.~\ref{refamp} of the main text and in \begin{linenomath*} \begin{align} \tilde{\mathbf{A}} =& \frac{4\pi}{\epsilon\!\left(\omega\right) \left[ \tilde{k}_z - \epsilon\!\left(\omega\right) k_{rz} \right]} \left\{ P_x + \left( k_z - \epsilon\!\left(\omega\right) k_{rz} \right) \frac{[\mathbf{k}\, \mathsf{x}\, \mathbf{P}]_y}{\tilde{k}_z^2 - k_z^2} \right\} \begin{bmatrix} \tilde{k}_z \\ 0 \\ -k_x \end{bmatrix} + \nonumber \\ &4\pi \left(\frac{\omega}{c}\right)^2 \frac{k_z - k_{rz}}{\tilde{k}_z - k_{rz}}\: \frac{P_y}{\tilde{k}_z^2 - k_z^2} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \end{align} \end{linenomath*} The result Eq.~\ref{mediumresult} for the dependence of the electric field in the medium on the $z$-coordinate may be rewritten in the form \begin{linenomath*} \begin{equation} \tilde{\mathbf{E}}\left( z \right) = \left[\tilde{\mathbf{A}} + \frac{\mathbf{H}}{\tilde{k}_z^2 - k_z^2} + \mathbf{H} \frac{\exp{\![i (k_z - \tilde{k}_z) z]} - 1}{\tilde{k}_z^2 - k_z^2} \right] \exp{(i \tilde{k}_z z)} \label{phasematching} \end{equation} \end{linenomath*} to emphasize the role of phase matching in the nonlinear process provided the wave vectors $\mathbf{k}$ of the nonlinear polarization wave in the medium and $\mathbf{\tilde{k}}$ of the generated wave become equal (meaning $\tilde{k}_z = k_z$ in Eq.~\ref{phasematching}). 
In the limit of phase matching the $z$ independent term in square brackets in Eq.~\ref{phasematching}, namely $\tilde{\mathbf{A}} + \mathbf{H} / (\tilde{k}_z^2 - k_z^2 )$, does not become singular. This can be seen when explicitly evaluating it using the expressions for $\mathbf{H}$ and $\tilde{\mathbf{A}}$ given above. Phase matching means the $z$ dependent term in square brackets in Eq.~\ref{phasematching}, i.e. the amplitude of the electric field of the generated wave in the medium grows in proportion to the propagation length $z$ in the medium. However, one has to keep in mind that absorption in the medium will counteract the buildup of the electric field. Absorption enters via the imaginary part of $\tilde{k}_z$ in $\exp{(i \tilde{k}_z z)}$ in the expression for the electric field in Eq.~\ref{phasematching}. After a certain propagation length absorption will always win over the linear buildup in $z$ of the wave's amplitude. \paragraph*{The directions of emission of the sum- and difference-frequency beams on the vacuum side} In the plane wave approximation used here the wave vector components of the XUV and NIR beams impinging on the LiF crystal in the surface plane of the crystal determine the corresponding component of the wave vector of the polarization wave in the crystal. We represent these wave vectors by $\mathbf{k}^{v\text{X}} = (k^{v\text{X}}_x, 0, k^{v\text{X}}_z)$ and $\mathbf{k}^{v\text{I}} = (k^{v\text{I}}_x, 0, k^{v\text{I}}_z)$ for the XUV and NIR beams, respectively, using the coordinate system defined in Fig.~\ref{geometry}. Boundary conditions for the electromagnetic fields imply that the $x$-components of the wave vectors of the XUV and NIR beams (in plane components), which drive the nonlinear processes, do not change when passing into the LiF crystal. Then the $x$-components of the wave vectors of the polarization waves in the medium are $k_x = k^{v\text{X}}_x \pm 2 k^{v\text{I}}_x$, respectively. According to the previous section $k_x$, in turn, equals the $x$-component of the wave vectors of the generated electromagnetic fields on the vacuum side ($k_{rx} = k_x$). Based on the dispersion relation for the sum- and difference-frequency fields on the vacuum side they thus propagate along the wave vector $\mathbf{k}_r = \left(k_x, 0, \sqrt{(\omega / c)^2 - k_x^2}\right)$. Nonlinear reflection thus occurs at an angle $\theta = \arcsin (c k_x / \omega)$ relative to the LiF surface normal. \paragraph*{The nonlinear polarization amplitude $\mathbf{P}$} For the particular nonlinear processes of four-wave mixing relevant to our experiment, namely sum- and difference-frequency mixing, the amplitude $\mathbf{P}$ of the nonlinear polarization of the medium is linked to the specific third order nonlinear susceptibility tensors $\chi^{(3)}\left(-\omega_\text{X} - 2\omega_\text{I}; \omega_\text{X}, \omega_\text{I}, \omega_\text{I}\right)$ and $\chi^{(3)}\left(-\omega_\text{X} + 2\omega_\text{I}; \omega_\text{X}, -\omega_\text{I}, -\omega_\text{I}\right)$, respectively. Based on the cubic symmetry of the LiF crystal (space group $m\overline{3}m$) these tensors have 27 elements which are different from zero with only four of them being independent \cite{Butcher1990}. Skipping the dependence on the frequencies for convenience, one complete set of independent, non-zero elements is $\chi^{(3)}_{x,x,x,x}$, $\chi^{(3)}_{x,x,y,y}$, $\chi^{(3)}_{x,y,x,y}$ and $\chi^{(3)}_{x,y,y,x}$ in a Cartesian frame of reference with axes coinciding with the crystal axes \cite{Butcher1990}. 
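Referring back to the emission-direction relation $\theta = \arcsin (c k_x / \omega)$ derived above, the short numerical sketch below evaluates the in-plane wave-vector component $k_x = k^{v\text{X}}_x \pm 2 k^{v\text{I}}_x$ and the resulting reflection angle of the sum- or difference-frequency wave. The photon energies and angles of incidence in the usage comment are illustrative values, not the exact experimental geometry.

\begin{verbatim}
import numpy as np

HBAR_C_EV_NM = 197.327  # hbar*c in eV nm, so |k| = E / (hbar*c) in 1/nm

def emission_angle_deg(e_xuv, e_nir, theta_xuv_deg, theta_nir_deg, sign=+1):
    # sign = +1: sum frequency (omega_X + 2 omega_I); sign = -1: difference frequency.
    # Angles of incidence are measured from the LiF surface normal.
    k_xuv = e_xuv / HBAR_C_EV_NM
    k_nir = e_nir / HBAR_C_EV_NM
    kx = (k_xuv * np.sin(np.radians(theta_xuv_deg))
          + sign * 2.0 * k_nir * np.sin(np.radians(theta_nir_deg)))
    omega_over_c = (e_xuv + sign * 2.0 * e_nir) / HBAR_C_EV_NM
    return np.degrees(np.arcsin(kx / omega_over_c))

# e.g. emission_angle_deg(61.0, 1.55, 45.0, 40.0, sign=+1)  (illustrative numbers only)
\end{verbatim}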
According to the experimental situation, the LiF crystal's $z$-axis is chosen to coincide with the $z$-axis of the plane of incidence of the FEL and NIR laser beams, whereas the crystal's $x$-axis enclosed an angle $\varphi$ with the $x$-axis of the plane of incidence (see Fig. \ref{geometry} of the main text). The components of the electric field amplitudes of the FEL and NIR plane waves in the nonlinear medium may be written $\mathbf{E}_\text{X} = \left(E_{\text{X},x}, 0, E_{\text{X},z}\right)$ and $\mathbf{E}_\text{I} = \left(E_{\text{I},x}, 0, E_{\text{I},z}\right)$. This representation uses as the frame of reference the $x$- and $z$-axes of the plane of incidence together with the corresponding orthogonal $y$-axis. The $y$-components of the amplitudes are both set to zero, assuming the waves are polarized in the plane of incidence, just as the setting in the experiment. With these assumptions the components of the induced nonlinear polarization $(P_x, P_y, P_z)$ in the same reference frame can be written as \begin{linenomath*} \begin{align} P_x =& \chi^{(3)}_{x,x,x,x}E_{\text{X},x}E_{\text{I},x}^2 \left(\sin^4\varphi + \cos^4\varphi\right) + \chi^{(3)}_{x,x,y,y}E_{\text{X},x}E_{\text{I},z}^2 + \left(\chi^{(3)}_{x,y,x,y} + \chi^{(3)}_{x,y,y,x}\right)E_{\text{X},z}E_{\text{I},x}E_{\text{I},z} \nonumber\\ &+ \frac{1}{2}\left(\chi^{(3)}_{x,x,y,y} + \chi^{(3)}_{x,y,x,y} + \chi^{(3)}_{x,y,y,x}\right)E_{\text{X},x}E_{\text{I},x}^2 \sin^22\varphi\\ P_y =& \frac{1}{4}\left(\chi^{(3)}_{x,x,y,y} + \chi^{(3)}_{x,y,x,y} + \chi^{(3)}_{x,y,y,x} - \chi^{(3)}_{x,x,x,x}\right)E_{\text{X},x}E_{\text{I},x}^2 \sin 4\varphi \\ P_z =& \chi^{(3)}_{x,x,x,x}E_{\text{X},z}E_{\text{I},z}^2 + \chi^{(3)}_{x,x,y,y}E_{\text{X},z}E_{\text{I},x}^2 + \left(\chi^{(3)}_{x,y,x,y} + \chi^{(3)}_{x,x,y,y}\right)E_{\text{X},x}E_{\text{I},x}E_{\text{I},z} \end{align} \end{linenomath*} This relation supposes the nonlinear process of sum-frequency mixing. For difference-frequency mixing one has to use the complex conjugate NIR electric field strength components in the equations for the amplitude components of the nonlinear polarization above. In our model for $\chi^{(3)}\left(-\omega_\text{X} - 2\omega_\text{I}; \omega_\text{X}, \omega_\text{I}, \omega_\text{I}\right)$ and $\chi^{(3)}\left(-\omega_\text{X} + 2\omega_\text{I}; \omega_\text{X}, -\omega_\text{I}, -\omega_\text{I}\right)$ we simplify the LiF crystal symmetry by assuming the medium to be invariant under the full rotation group. This introduces an additional constraint for the independent elements of the nonlinear susceptibility tensor above. Only three of the four elements remain independent \cite{Butcher1990} \begin{linenomath*} \begin{equation} \chi^{(3)}_{x,x,y,y} + \chi^{(3)}_{x,y,x,y} + \chi^{(3)}_{x,y,y,x} - \chi^{(3)}_{x,x,x,x} = 0 \ . \end{equation} \end{linenomath*} As one may already expect, this relation eliminates any dependence of the induced nonlinear polarization $\mathbf{P}$ above on the angle $\varphi$. \paragraph*{The model for the linear and 3\textsuperscript{rd} order susceptibilities} Computing the amplitude of the electric field (Eq.~\ref{refamp}, main text) of the reflected sum- and difference-frequency waves requires the knowledge of the linear dielectric constant $\epsilon(\omega)$ of LiF in the relevant photon energy range. We constructed $\epsilon(\omega)$ using the measured linear reflection off LiF in Fig.~\ref{processes}(B). 
As a model a set of seven discrete, homogeneously broadened resonances was chosen to simulate the structures found in the reflection curve by suitably choosing their positions, widths and oscillator strengths. Since the measurement did not determine the reflection coefficient but only represents the intensity of the reflected light the absolute scale for the dielectric constant had to be set using a reported LiF absorption coefficient at a certain photon energy. We utilized the measured absorption coefficient at 70\,eV photon energy reported in \cite{Milgram1962}. The relation \begin{linenomath*} \begin{equation} \alpha \left(\omega\right) = \sum_{j} \frac{f_j}{\omega_j^2 - \omega^2 - i \gamma_j \omega} \label{alpha} \end{equation} \end{linenomath*} for the microscopic reaction of LiF to an applied electric field in the frequency range of interest is employed to determine the dielectric constant. It is based on molecular polarizability (see \cite{Jackson1998}). The adjustable parameters $\omega_j$, $\gamma_j$ and $f_j > 0$ are chosen so as to simulate the measured LiF reflection in Fig. \ref{processes}B. The Clausius-Massotti equation \begin{linenomath*} \begin{equation} \alpha\left(\omega\right) = 3 \frac{\epsilon\left(\omega\right) - 1}{\epsilon\left(\omega\right) + 2} \end{equation} \end{linenomath*} links the microscopic reaction to the dielectric constant $\epsilon$ \cite{Jackson1998}. The parameters $\omega_j$, $\gamma_j$ and $f_j > 0$ which result in a reasonable fit of the experimental linear reflection off LiF (see Fig. \ref{processes}(B), the blue line) are gathered in table \ref{table}. The real and imaginary parts of the dielectric constant $\epsilon$ corresponding to this choice of parameters are shown in Fig. \ref{epsilon-calc}. The main, $p$-type LiF core exciton resonance is responsible for the maximum of the imaginary part of $\epsilon(\omega)$ at 62\,eV while its low energy shoulder represents the suspected $s$-type core exciton. \begin{table} \renewcommand\thetable{S1} \centering \begin{tabular}{l|c|c|c|c|c|c|c} $j$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline $\omega_j$ / eV & 49.82 & 60.97 & 62.07 & 63.12 & 65.0 & 67.55 & 69.92 \\ $\gamma_j$ / eV & 50.0 & 1.1 & 0.79 & 1.3 & 2.4 & 4.3 & 1.45 \\ $f_j$ / $\mathrm{eV^2}$ & 42.0 & 0.59 & 1.54 & 0.17 & 0.92 & 2.77& 0.75 \\ \end{tabular} \caption{The parameters entering Eq.~\ref{alpha} for the linear microscopic reaction of LiF to an applied electromagnetic wave in the photon energy range between $\approx 58\, \mathrm{eV}$ and $\approx 72\, \mathrm{eV}$ which is relevant to the experiment. The dielectric constant based on this choice of the parameters allows to reasonably reproduce the experimental LiF reflectivity as shown in Fig.~\ref{processes}(B) of the main text.} \label{table} \end{table} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{figureS1.pdf} \caption{The calculated linear dielectric constant $\epsilon(\omega)$ in the photon energy range between $58\, \mathrm{eV}$ and $72\, \mathrm{eV}$. The blue line shows the real and the orange one the imaginary part of $\epsilon$.} \label{epsilon-calc} \end{figure} The calculation of the third order nonlinear susceptibility tensor is based on the expression for its components given in reference \cite{Butcher1990} (page 93). 
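As a brief numerical illustration of the linear model above, the sketch below evaluates $\alpha(\omega)$ from Eq.~\ref{alpha} with the parameters of table~\ref{table} and inverts the Clausius-Mossotti relation for $\epsilon(\omega)$. It reproduces the qualitative behaviour shown in Fig.~\ref{epsilon-calc} and is meant only as a check of the formulas, not as the analysis code used for the figures.

\begin{verbatim}
import numpy as np

# Resonance parameters from table S1: positions and widths in eV, strengths in eV^2.
omega_j = np.array([49.82, 60.97, 62.07, 63.12, 65.0, 67.55, 69.92])
gamma_j = np.array([50.0, 1.1, 0.79, 1.3, 2.4, 4.3, 1.45])
f_j     = np.array([42.0, 0.59, 1.54, 0.17, 0.92, 2.77, 0.75])

def alpha(omega):
    # Microscopic response: sum of damped oscillators (omega in eV).
    return np.sum(f_j / (omega_j**2 - omega**2 - 1j * gamma_j * omega))

def epsilon(omega):
    # Invert the Clausius-Mossotti relation alpha = 3 (eps - 1) / (eps + 2).
    a = alpha(omega)
    return (3.0 + 2.0 * a) / (3.0 - a)

# Near the main p-type core-exciton resonance, e.g. epsilon(62.0), the imaginary
# part of the dielectric constant is strongly enhanced.
\end{verbatim}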
The calculation of the third order nonlinear susceptibility tensor is based on the expression for its components given in reference \cite{Butcher1990} (page 93). Since a detectable, resonance-like enhancement of sum- and difference-frequency mixing was observed in the experiment when the LiF $p$-type exciton was involved, we only take this resonance into account, together with the suspected $s$-type excitonic resonance, in determining the dependence of $\chi^{(3)}$ on the driving FEL photon energy. In the expression for $\chi^{(3)}$ we therefore only employ the resonance positions $\omega_j$ and widths $\gamma_j$ in table \ref{table} with $j = 2, 3$, which correspond to these resonances. For the nonlinear susceptibility we neglect a dipole coupling of the $s$-type exciton to the ground electronic state. However, we take into account a potential, dipole-allowed coupling of this $s$-type exciton to the main $p$-type exciton. We use this simplification since we expect the main influence of the suspected $s$-type exciton on sum- and difference-frequency generation to be exerted through its coupling to the $p$-type exciton, which is driven by the applied NIR laser field. The NIR photon energy of $1.55\, \mathrm{eV}$ does not greatly exceed the energy separation of these two excitons ($\omega_3 - \omega_2 = 1.1\, \mathrm{eV}$ according to table \ref{table}). With this simplification, only two dipole matrix elements are relevant in the expression for $\chi^{(3)}$: one for the dipole coupling of the $p$-type exciton to the ground state and one for the coupling of the $s$- and $p$-type excitons. For lack of detailed information on the electronic states involved we use one-electron atomic dipole matrix elements and represent the ground state by a state with angular momentum $l = 0$ ($s$-state) and the excited states by states with $l = 0$ and $l = 1$ and magnetic quantum numbers $m_l = 0, \pm 1$. This approach reduces the number of free parameters in the expression for $\chi^{(3)}$ to two radial dipole matrix elements and modifies the symmetry of the LiF crystal from cubic to full rotational symmetry. Based on this simplified model we calculate the independent components of $\chi^{(3)}$, which in turn provide the amplitude of the third order nonlinear polarization $\mathbf{P}$ needed to determine the sum- and difference-frequency yields for comparison with the experimental results.
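To illustrate how a resonance-like enhancement arises in this simplified picture, the sketch below evaluates a generic sum-over-states denominator structure for the three-photon-resonant sum-frequency pathway in which the FEL photon couples the ground state to the $p$-type exciton, the first NIR photon couples the $p$- to the $s$-type exciton, and the second NIR photon leads back to the $p$-type exciton. This particular pathway assignment, the use of the half-widths $\gamma_j/2$ as damping constants and the unit dipole matrix elements are assumptions made for illustration only; the sketch does not reproduce the full expression of \cite{Butcher1990} employed in the actual calculation.
\begin{verbatim}
import numpy as np

# s- and p-type exciton parameters from table S1 (j = 2, 3), in eV.
w_s, g_s = 60.97, 1.1
w_p, g_p = 62.07, 0.79
w_I = 1.55                 # NIR photon energy (eV)
mu_gp = mu_ps = 1.0        # placeholder radial dipole matrix elements

def chi3_sfg(w_X):
    """Schematic resonant denominators for the illustrative sum-frequency
    pathway ground state -> p -> s -> p; gamma/2 is used as damping."""
    d1 = w_p - w_X - 0.5j * g_p            # after the FEL photon
    d2 = w_s - w_X - w_I - 0.5j * g_s      # after the first NIR photon
    d3 = w_p - w_X - 2 * w_I - 0.5j * g_p  # three-photon resonance
    return mu_gp**2 * mu_ps**2 / (d1 * d2 * d3)

# The difference-frequency case follows by replacing w_I with -w_I.
fel = np.linspace(58.0, 60.0, 401)
signal = np.abs([chi3_sfg(w) for w in fel])**2
print("strongest enhancement near", fel[np.argmax(signal)], "eV")
\end{verbatim}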
\paragraph*{Calibration of the MUSIX spectrometer}
For every setting of the spectrometer grating a calibration measurement was performed by varying the undulator gap of the FEL. This generated a discrete series of different FEL photon energies reflected off the LiF sample via the grating onto the CCD camera, bypassing the installed beam block. To prevent saturation of the CCD, two 295\,nm thick zirconium attenuator foils were placed in the FEL beam path. This allowed the MUSIX spectrometer to be calibrated against the wavelength measurement implemented in the beamline \cite{Braune2018}. In addition, the overall consistency of the various estimated parameters used in the MUSIX spectrometer transmission calculations described below was verified by checking the agreement between the number of photons measured by an x-ray gas-monitor detector (XGMD) \cite{Sorokin2019} in the FEL beamline and by the MUSIX spectrometer's CCD at different wavelengths. As no linear reflectivity spectrum of the sample was acquired with the MUSIX spectrometer, a flat sample reflectivity of 0.05\,\% was used in this consistency check.
\paragraph*{Estimation of the total number of photons generated in the FWM processes}
The CCD camera counts read out were converted to an estimated number of incident photons, assuming that 50\,\% of these photons (a conservative estimate of the CCD's quantum efficiency) were converted into electron-hole pairs in the silicon chip with a bandgap of 3.1\,eV. Each electron created one digital count in the analog-to-digital converter of the CCD, according to the manufacturer information for the readout frequency employed in the experimental runs. On the way from the LiF sample to the CCD the photons were reflected off the MUSIX spectrometer grating and passed through an aluminum filter foil. For the grating a 15\,\% diffraction efficiency in first order is assumed. The aluminum filter transmission (thickness 200\,nm) is retrieved from tabulated data \cite{Henke,Henke1993}. We also accounted for an estimated 12.5\,nm aluminum-oxide layer on each side of the filter. In a similar way, the total number of FEL photons impinging on the LiF sample per pulse was derived from pulse energy measurements using an XGMD upstream \cite{Sorokin2019}. Following the XGMD the FEL beam passed a silicon filter, three beamline mirrors, beam-width-limiting apertures and the incoupling mirror for the NIR laser beam, all of which reduced the FEL pulse energy before it reached the sample. The number of photons per pulse was determined from the total pulse energy measured by the XGMD, assuming that 7\,\% of the measured pulse energy was due to FEL harmonics, which were absorbed by the downstream 411\,nm thick silicon filter. Its transmission for the fundamental FEL beam was calculated from tabulated transmission data, assuming a 12.5\,nm thick silicon-oxide layer on each side \cite{Henke,Henke1993}. Likewise, based on tabulated data \cite{Henke,Henke1993}, the reflectivities of the three beamline mirrors (2$^\circ$ grazing angle of incidence, coated with nickel, gold and platinum, respectively) were taken into account. Wavefront-sensor measurements during beamline alignment further suggested 25\,\% transmission through the beamline apertures and the incoupling mirror for the NIR laser beam due to clipping. Comparing the photon flux on the CCD to that measured by the XGMD in the calibration measurements described above lends some credence to the transmission estimates made here. However, a significant uncertainty of the scaling factors involved remains. Therefore, we refer to the total number of FEL photons arriving on the LiF sample and to the number of photons generated in the FWM processes, shown in Figs.~\ref{fig2} and \ref{fig3} of the main text, as estimates only.
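The conversion of CCD counts and XGMD pulse energies into photon numbers described above amounts to a chain of multiplicative factors. The Python sketch below illustrates this bookkeeping as we read it; the transmissions and reflectivities not quoted explicitly in the text are placeholders standing in for the tabulated data of \cite{Henke,Henke1993}, and the example inputs are arbitrary.
\begin{verbatim}
# Bookkeeping sketch for the photon-number estimates described above.
# Quantities marked "placeholder" are assumptions of this sketch only.

E_PHOTON_EV = 59.25   # FEL photon energy used in this example (eV)
EV_PER_PAIR = 3.1     # energy per electron-hole pair assumed in the text (eV)
QE_CCD      = 0.50    # conservative CCD quantum efficiency (text)
GRATING_EFF = 0.15    # first-order grating diffraction efficiency (text)
T_AL_FILTER = 0.30    # 200 nm Al foil incl. oxide layers -- placeholder

def photons_at_ccd(counts):
    """Photons arriving at the CCD, estimated from ADC counts: each
    absorbed photon yields E_PHOTON_EV / EV_PER_PAIR electrons, each
    electron one count, and a fraction QE_CCD of photons is absorbed."""
    return counts / (QE_CCD * E_PHOTON_EV / EV_PER_PAIR)

def photons_generated_at_sample(counts):
    """Back-propagate from the CCD through grating and Al filter."""
    return photons_at_ccd(counts) / (GRATING_EFF * T_AL_FILTER)

def fel_photons_on_sample(pulse_energy_joule):
    """FEL photons per pulse on the sample, from the XGMD pulse energy."""
    HARMONIC_FRACTION = 0.07   # removed by the Si filter (text)
    T_SI_FILTER = 0.60         # 411 nm Si filter -- placeholder
    R_MIRRORS   = 0.80**3      # three grazing-incidence mirrors -- placeholder
    T_APERTURES = 0.25         # clipping losses (wavefront-sensor estimate)
    n = pulse_energy_joule * (1.0 - HARMONIC_FRACTION) / (E_PHOTON_EV * 1.602e-19)
    return n * T_SI_FILTER * R_MIRRORS * T_APERTURES

print(photons_generated_at_sample(1.0e4))   # e.g. 1e4 counts on the CCD
print(fel_photons_on_sample(20.0e-6))       # e.g. 20 uJ measured by the XGMD
\end{verbatim}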
\paragraph*{The experimental dependence of the frequency conversion on the NIR laser pulse energy}
To further support the nature of the observed nonlinear processes, we determined the dependence of the frequency-conversion yield on the pulse energy of the NIR laser pulses. In the regime of low conversion, as was the case in the experiment, this dependence is expected to be quadratic. This is indeed what the experiment indicates, as Fig.~\ref{powerdependence} shows. The measurement was performed with the FEL photon energy set to 59.25\,eV; the nonlinear process involved was sum-frequency mixing, three-photon resonant with the LiF exciton resonance ($\omega_\text{exc} = \omega_\text{X} + 2 \omega_\text{I}$). The data points are well described by a quadratic dependence of the sum-frequency photon yield on the NIR laser pulse energy (blue line in Fig.~\ref{powerdependence}). A residual amount of stray-light photons, which could not be eliminated by background subtraction, is responsible for the non-zero number of photons detected as the NIR pulse energy approaches zero.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{figureS2.pdf}
\caption{Dependence of the number of sum-frequency photons generated on the NIR laser pulse energy arriving on the LiF crystal. The FEL photon energy was set to 59.25\,eV, i.e., three-photon resonant with the LiF exciton resonance ($\omega_\text{exc} = \omega_\text{X} + 2 \omega_\text{I}$). The blue line represents a fit to the data points using a quadratic polynomial in the NIR pulse energy.}
\label{powerdependence}
\end{figure}
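A fit of the kind shown in Fig.~\ref{powerdependence} can be performed along the following lines. The data points used here are hypothetical placeholders rather than the measured values; the constant term of the polynomial absorbs the residual stray-light contribution discussed above.
\begin{verbatim}
import numpy as np

# Hypothetical placeholder data standing in for the points of Fig. S2:
# NIR pulse energy on the sample (arb. units) vs. detected SFG photons.
pulse_energy = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
photons      = np.array([6.0, 10.0, 22.0, 41.0, 70.0, 105.0])

# Quadratic polynomial fit as in Fig. S2; for a two-NIR-photon process in
# the low-conversion regime the quadratic term dominates, while the
# constant term accounts for the stray-light offset.
coeffs = np.polyfit(pulse_energy, photons, deg=2)
fit = np.poly1d(coeffs)
print("coefficients (quadratic, linear, constant):", coeffs)
print("fitted yield at a pulse energy of 200:", fit(200.0))
\end{verbatim}
\clearpage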